
Can developers fine-tune GPT 5.4 with custom data?

As of March 2026, developers can generally fine-tune OpenAI’s GPT models with custom data. GPT-5.4 is a recent addition to the GPT-5 family, however, so fine-tuning availability for this specific iteration may depend on its rollout schedule. OpenAI’s platform lets developers take a pre-trained base model and adapt it to particular tasks, or to respond in ways that align more closely with specific needs, through fine-tuning. The process involves providing a custom dataset that exemplifies the desired inputs and outputs, effectively specializing the model for a unique use case.

Fine-tuning involves several steps, beginning with collecting and preparing a dataset of examples. This dataset is typically formatted as JSONL (JSON Lines) and uploaded to the OpenAI platform. Developers then start a fine-tuning job, which trains the chosen base model on the custom data. Once training completes, the fine-tuned model can be evaluated and deployed within applications. This iterative loop lets developers refine the model’s performance by adjusting the dataset based on evaluation feedback. Common techniques include Supervised Fine-Tuning (SFT), which trains on labeled input/output pairs; Direct Preference Optimization (DPO), which aligns the model with human preferences; and Reinforcement Fine-Tuning (RFT), which optimizes complex behaviors with reward signals.
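The dataset-preparation step above can be sketched in Python. The chat-style `messages` format is what OpenAI documents for fine-tuning chat models; the example records and file name here are illustrative assumptions, not real training data.

```python
import json

# Illustrative training examples in OpenAI's chat fine-tuning format:
# one JSON object per line, each with a "messages" list.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Acme Corp."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Open Settings > Security and choose 'Reset password'."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant for Acme Corp."},
            {"role": "user", "content": "How long do refunds take?"},
            {"role": "assistant", "content": "Refunds are processed within 5 business days."},
        ]
    },
]

def to_jsonl(records):
    """Serialize records as JSONL: one compact JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

with open("train.jsonl", "w") as f:
    f.write(to_jsonl(examples))

# The resulting file would then be uploaded with the OpenAI SDK
# (client.files.create(..., purpose="fine-tune")) and a training run
# started with client.fine_tuning.jobs.create(...).
```

Each line must be an independent, valid JSON object; a trailing comma or a multi-line object will cause the upload validation to fail.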

While fine-tuning is a core offering for many OpenAI models, including various GPT-4 and earlier GPT-3.5 versions, newly released GPT-5 models such as gpt-5-o, gpt-5-turbo, gpt-5-mini, and gpt-5-nano were not immediately available for fine-tuning through the Azure OpenAI Service or the public OpenAI API as of late 2025. This was generally considered a temporary limitation, with fine-tuning support expected to expand as these models matured within the API ecosystem. Since GPT-5.4 only surfaced around March 2026, developers should check the latest OpenAI API documentation for current information on its fine-tuning availability. Once available, fine-tuning offers a powerful way to apply the advanced capabilities of a model like GPT-5.4 to specialized applications. It can also be combined with a vector database such as Milvus to enhance retrieval-augmented generation, storing and retrieving custom, domain-specific information that the fine-tuned model then processes as context.
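The retrieval-augmented flow mentioned above can be sketched end to end. To keep the snippet self-contained and runnable, a toy word-overlap retriever stands in for a Milvus embedding search; the corpus, helper names, and scoring are illustrative assumptions.

```python
def retrieve(query, corpus, k=2):
    """Toy retriever: rank chunks by word overlap with the query.
    A real system would run an embedding similarity search in Milvus."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda c: len(q & set(c.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, chunks):
    """Assemble retrieved chunks into a grounded prompt for the model."""
    context = "\n".join(f"- {c}" for c in chunks)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

# Illustrative domain-specific corpus.
corpus = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
    "Passwords must contain at least 12 characters.",
]

query = "How long do refunds take?"
prompt = build_prompt(query, retrieve(query, corpus))
# `prompt` would then be sent to the (fine-tuned) model via the chat API.
```

The fine-tuned model supplies the domain-appropriate tone and task behavior, while the retrieval step supplies fresh, specific facts the model was never trained on.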
