Fine-tuning ChatGPT is super easy. The most annoying part is transforming the data into the required format: a JSONL file where each row is a small JSON object with three messages: a system prompt explaining what the AI assistant should do, a sample user input (the problem you want solved), and the expected assistant output (the answer you would like to get). Each row should look like this:
{"messages": [{"role": "system", "content": "Prompt to explain the task to the assistant"}, {"role": "user", "content": "Sample user input (query, text, message, etc)"}, {"role": "assistant", "content": "Expected output corresponding to the input"}]}
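If your examples are already in Python, generating the file only takes a few lines. Here is a minimal sketch, assuming your data is a list of (input, output) pairs called examples and a single system prompt (both names are just illustrative):

import json

# Illustrative data: replace with your own system prompt and (input, output) pairs
SYSTEM_PROMPT = "Prompt to explain the task to the assistant"
examples = [
    ("Sample user input (query, text, message, etc)", "Expected output corresponding to the input"),
]

with open("tmp_recipe_finetune_training.jsonl", "w") as f:
    for user_input, expected_output in examples:
        row = {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_input},
                {"role": "assistant", "content": expected_output},
            ]
        }
        f.write(json.dumps(row) + "\n")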
Now, we need to upload these files to OpenAI:
import os

import openai
from dotenv import load_dotenv

# Load the API key from a .env file
load_dotenv()
openai.api_key = os.environ["OPENAI_API_KEY"]

training_file_name = "tmp_recipe_finetune_training.jsonl"
validation_file_name = "tmp_recipe_finetune_validation.jsonl"

# Upload both files with the "fine-tune" purpose and keep their IDs
training_response = openai.File.create(
    file=open(training_file_name, "rb"), purpose="fine-tune"
)
training_file_id = training_response["id"]

validation_response = openai.File.create(
    file=open(validation_file_name, "rb"), purpose="fine-tune"
)
validation_file_id = validation_response["id"]
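Before launching the job, it can be worth waiting until OpenAI reports the files as processed. A rough sketch, assuming "processed" as the terminal status (the 5-second polling interval is arbitrary):

import time

# Optionally wait for OpenAI to finish processing the uploads before starting the job
for file_id in (training_file_id, validation_file_id):
    while openai.File.retrieve(file_id)["status"] != "processed":
        time.sleep(5)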
Once the files have been uploaded, you can launch the job:
response = openai.FineTuningJob.create(
    training_file=training_file_id,
    validation_file=validation_file_id,
    model="gpt-3.5-turbo",
    suffix="recipe-ner",
)

job_id = response["id"]
print("Job ID:", response["id"])
print("Model name:", response["model"])
print("Status:", response["status"])
Fine-tuning on just 10 examples took around 20 minutes, which is not great; presumably this will improve over time, at least for corporate clients.
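While you wait, you can poll the job from Python to see when it finishes. A small sketch (the 60-second interval is arbitrary):

import time

# Poll the job until it reaches a terminal state
while True:
    job = openai.FineTuningJob.retrieve(job_id)
    print("Status:", job["status"])
    if job["status"] in ("succeeded", "failed", "cancelled"):
        break
    time.sleep(60)

# The fine-tuned model's name is only filled in once the job succeeds
print("Fine-tuned model:", job["fine_tuned_model"])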
Once the job is completed, the easiest way to test the model is to go straight to the Playground: Playground – OpenAI API.
To query it programmatically, you need the fine-tuned model's name, which you can get from openai.FineTuningJob.list() (or by retrieving the finished job). Once you have the name, you can use the ChatCompletion class to query it like any other model:
# Retrieve the completed job to get the fine-tuned model's name
job = openai.FineTuningJob.retrieve(job_id)
fine_tuned_model = job["fine_tuned_model"]

# test_msg follows the same format as the training rows, minus the assistant message
test_msg = [
    {"role": "system", "content": "Prompt to explain the task to the assistant"},
    {"role": "user", "content": "Sample user input (query, text, message, etc)"},
]

response = openai.ChatCompletion.create(
    model=fine_tuned_model,
    messages=test_msg,
    temperature=0,
    max_tokens=500,
)
print(response["choices"][0]["message"]["content"])
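Setting temperature=0 keeps the output as deterministic as possible, which makes it easier to compare the fine-tuned model's answers against the base gpt-3.5-turbo on the same test messages.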