
Building Fine-Tuned Models in ChatGPT

Author: Steven Devoe Posted In: Data

At this point, most people have seen ChatGPT’s impressive capabilities, but there’s a lot more underlying the chat-bot-like interface than most people realize. By building fine-tuned models on top of ChatGPT’s large language model, you can unlock further potential in specific tasks or domains.

In this blog post, we will explore when it is useful to build a fine-tuned model and what the process involves.

What is Fine Tuning?

Fine tuning is the process of retraining an existing pre-trained model on a specific task or domain. The pre-trained model has already learned general patterns in language and can be fine-tuned to adapt to specific patterns for a new task or domain based upon your training data. Fine-tuning can help improve the performance of the model on the specific task, and it can also reduce the amount of data needed to train a model from scratch.

When would a fine-tuned model make sense?

A fine-tuned model typically makes sense when you want to leverage the extensive training of ChatGPT’s models while expanding their knowledge of a specific topic. It can still have the same downsides as generic ChatGPT models, such as stating wrong information and needing to be retrained as information changes, so it is not a great choice for applications that require 100% factual certainty or that rely on frequently changing information.



Building a Fine-Tuned Model in ChatGPT

To build a fine-tuned model in ChatGPT, follow these steps:

  1. Define the task and select the pre-trained model to use as a base. ChatGPT offers several pre-trained models with different token limits and capabilities. Choose the one that best fits your use case.
  2. Prepare the data for training. Collect and clean the data, divide it into training and validation sets, then convert it into a format that can be fed into the model.
  3. Train the model on the fine-tuning task. Using OpenAI’s API, feed the training data into the existing model.

  4. Check the results and use the model. Measure the model’s performance on the specific task and adjust the model as necessary.

Now let’s dive into each of those steps with an example. We’ll fine-tune the model on a topic ChatGPT was not trained on: a fictitious character named Jonny Whizbang. That way, we know ChatGPT’s knowledge of the subject comes solely from our fine-tuning and not its pre-existing knowledge.






Before we begin, we will need an OpenAI API Key. Log in to your OpenAI account and create a secret key here. Make sure to save the key somewhere safe and add it to your environment variables.

Additionally, make sure you have the OpenAI library available in your environment. We’re using Python, so a simple pip install works here.
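For reference, a minimal setup might look like the following. The key value is a placeholder (use your own secret key), and we pin the legacy 0.x SDK whose interface this walkthrough assumes:

```shell
# Install the OpenAI Python SDK (this walkthrough assumes the legacy 0.x interface)
pip install "openai<1.0"

# Expose the secret key to the SDK (placeholder value; substitute your own)
export OPENAI_API_KEY="YOUR_SECRET_KEY"
```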

Define the task and select the pre-trained model

For our example, we’ll be creating a model that can answer questions about our fictional character. There are currently four base GPT-3 models (davinci, curie, babbage, and ada) available for fine tuning, each with its own strengths, weaknesses, and costs. For ours, we will use curie. You can find more information here to decide what is best for your use case.

Prepare the data for training

As with all models, the data we put in can make a huge difference. The data needs to be in a prompt-completion format to train the model.

Our example is unique because the data about our fictional character must appear in the completion (answer) of questions we create and pose in the data.


{"prompt": "When was Jonny Whizbang born?", "completion": "Jonny Whizbang was born on February 22, 1957."}

For our example, we create a CSV with the questions and answers you would expect to find in a basic biography, such as birthdate, what the individual is known for, etc.


From there, we are ready to format the training data to fine tune the model.
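The conversion step can be sketched in a few lines of Python. The file names and column headers ("question", "answer") below are our own assumptions for illustration; the trailing separator on the prompt and the leading space plus stop sequence on the completion follow OpenAI’s published guidance for the legacy fine-tuning format:

```python
import csv
import json

def csv_to_jsonl(csv_path: str, jsonl_path: str) -> int:
    """Convert rows of (question, answer) into prompt-completion JSONL records."""
    count = 0
    with open(csv_path, newline="", encoding="utf-8") as src, \
         open(jsonl_path, "w", encoding="utf-8") as dst:
        for row in csv.DictReader(src):
            record = {
                # A fixed separator marks where the prompt ends...
                "prompt": row["question"].strip() + " ->",
                # ...and the completion starts with a space and ends
                # with a stop sequence, per the legacy format guidance.
                "completion": " " + row["answer"].strip() + "\n",
            }
            dst.write(json.dumps(record) + "\n")
            count += 1
    return count
```

OpenAI also ships a CLI helper that validates and reformats training files, which is worth running over the output as a sanity check.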

Train the model on the fine-tuning task

Once the data is prepared, fine-tuning the model is straightforward. One command with a few parameters does everything you need, although the job can take some time to complete.

Once the tuning job starts, there are a few commands you can use to check the status.

To use the model, you need the name of the fine-tuned model produced by the training job. The list command returns it below:
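As a sketch of what this looks like in code (assuming the legacy 0.x Python SDK and the hypothetical training file bio.jsonl from the previous step), uploading the data, starting the job, and checking its status might look like:

```python
import os

# Base model chosen earlier; kept in one place so it is easy to swap.
params = {"model": "curie"}

# The API calls only run when a key is configured in the environment.
if os.environ.get("OPENAI_API_KEY"):
    import openai  # legacy 0.x SDK

    # Upload the training file, then start the fine-tune job.
    upload = openai.File.create(file=open("bio.jsonl", "rb"), purpose="fine-tune")
    job = openai.FineTune.create(training_file=upload["id"], **params)

    # Poll the job; the fine-tuned model name appears once it succeeds.
    status = openai.FineTune.retrieve(id=job["id"])
    print(status["status"])  # e.g. "pending", "running", "succeeded"

    # List all fine-tunes to recover the model name later.
    for ft in openai.FineTune.list()["data"]:
        print(ft["id"], ft.get("fine_tuned_model"))
```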


Check the results and use the model

Once you have your model, you can start asking it questions and reviewing the responses. Let’s see if it knows the birthday of our fictional character.
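Querying the fine-tuned model is a single completion call with the legacy 0.x SDK. The model name below is a placeholder; the real value is the fine_tuned_model name returned by your training job:

```python
import os

# Placeholder model name: substitute the fine_tuned_model value
# returned by your own training job.
FINE_TUNED_MODEL = "curie:ft-your-org-2023-01-01-00-00-00"

# Use the same separator the training prompts ended with.
prompt = "When was Jonny Whizbang born? ->"

if os.environ.get("OPENAI_API_KEY"):
    import openai  # legacy 0.x SDK

    response = openai.Completion.create(
        model=FINE_TUNED_MODEL,
        prompt=prompt,
        max_tokens=50,
        temperature=0,  # low temperature for factual recall
        stop=["\n"],    # the stop sequence used in the training data
    )
    print(response["choices"][0]["text"].strip())
```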

Oops… It looks like that was incorrect. Let’s try something more specific.

Cool! It looks like it answered correctly.

Further Exploration

Practically speaking, it makes more sense to enhance this data a little bit more before training. We would like our model to answer these questions even when they aren’t worded exactly as they are in the training data. To do this, we could programmatically ask ChatGPT for other ways to phrase the questions in our data set and pair each rewording with the same response. This augmented data set is then fed into the training data to make the model more robust.
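The augmentation step can be sketched as a small, pluggable function. The names here are our own: paraphrase_fn stands in for whatever generates rewordings (in practice it might call the ChatGPT API and ask for alternate phrasings of each question):

```python
from typing import Callable, Dict, List

def augment(records: List[Dict[str, str]],
            paraphrase_fn: Callable[[str], List[str]]) -> List[Dict[str, str]]:
    """Pair each paraphrased prompt with the original completion."""
    out = []
    for rec in records:
        out.append(rec)  # keep the original wording
        for alt in paraphrase_fn(rec["prompt"]):
            out.append({"prompt": alt, "completion": rec["completion"]})
    return out
```

Keeping the paraphraser as a parameter makes the expansion logic easy to test with a stub before spending API calls on real rewordings.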

Additionally, you could experiment with the different parameters involved in training the model, such as temperature, which controls how ‘creative’ the model is. Beyond that, there are other model types that can be fine tuned, including classification models. In an interesting application on OpenAI’s website, they use a classification model to decide whether to respond to an input or determine that a question is “off limits.” If the model decides to respond, they then leverage a question-and-answer model like the one we built here.




Fine-tuning is a powerful technique for adapting pre-trained models like ChatGPT to specific tasks and domains, though fine-tuned models still carry the same risk of “hallucinating” and giving wrong answers. By following the steps outlined in this blog post, you can build and use fine-tuned models in ChatGPT to generate high-quality text that is specific to your needs. Whether you are building a chatbot, writing a book, or analyzing text data, ChatGPT’s fine-tuning capabilities can help you achieve your goals with greater accuracy and efficiency.