OpenAI now lets users bring their own data to fine-tune GPT-3.5 Turbo, the lighter-weight version of GPT-3.5, making it easier to improve the text-generating model’s reliability and teach it specific behaviors.
OpenAI claims that fine-tuned versions of GPT-3.5 can match, or even outperform, the base capabilities of GPT-4, the company’s flagship model, on certain narrow tasks.
In a blog post published today, the company wrote, “Since the launch of GPT-3.5 Turbo, developers and businesses have asked for the ability to customize the model to create unique and distinct experiences for their users. This update gives developers the ability to customize models so they perform better for their use cases, and to run these custom models at scale.”
With fine-tuning, businesses using GPT-3.5 Turbo through OpenAI’s API can make the model follow instructions more reliably. For instance, they can have it always respond in a given language, or format its responses, such as code snippets, more consistently. Fine-tuning can also be used to adjust the “tone” of the model’s output so it better fits a brand or voice.
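To make that concrete, the sketch below writes a handful of training examples in the JSONL chat format OpenAI documents for fine-tuning. The brand name, file name and example dialogue are invented for illustration; a real training set would need many more examples covering the desired behavior.

```python
import json

# Hypothetical training examples that teach a consistent brand tone and a
# fixed response language. Each line of the JSONL file is one short chat
# transcript in the format expected by OpenAI's fine-tuning endpoint.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are Acme's support assistant. Always answer in formal German."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Gerne helfe ich Ihnen weiter. Öffnen Sie bitte die Kontoeinstellungen ..."},
        ]
    },
    # ... more examples demonstrating the tone, language and formatting you want
]

with open("training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```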
Fine-tuning also lets OpenAI’s customers shorten their text prompts, which speeds up API calls and cuts costs. According to the blog post, “Early testers have reduced prompt size by up to 90% by fine-tuning instructions into the model itself.”
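In practice, the savings come from dropping boilerplate instructions from each request once the model has learned them. The comparison below is a hypothetical sketch using OpenAI’s Python SDK; the fine-tuned model ID is a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = {"role": "user", "content": "How do I reset my password?"}

# Before fine-tuning: the behavioral instructions ride along with every request.
base_reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": (
            "You are Acme's support assistant. Always answer in formal German, "
            "keep replies under 100 words and format any code as fenced blocks."
        )},
        question,
    ],
)

# After fine-tuning: the same behavior is baked into the model, so the
# per-request prompt (and its input-token cost) shrinks considerably.
tuned_reply = client.chat.completions.create(
    model="ft:gpt-3.5-turbo:acme::abc123",  # placeholder fine-tuned model ID
    messages=[question],
)
```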
For now, fine-tuning requires preparing the data, uploading the necessary files and creating a fine-tuning job through OpenAI’s API. All fine-tuning data is passed through a “moderation” API and a GPT-4-powered moderation system to determine whether it complies with OpenAI’s safety standards. The company plans to launch a fine-tuning UI in the future, complete with a dashboard for checking the status of ongoing fine-tuning jobs.
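Roughly speaking, those steps map onto OpenAI’s Python SDK as sketched below; exact method names vary between SDK versions, and the training file carries over from the earlier sketch.

```python
from openai import OpenAI

client = OpenAI()

# 1. Upload the prepared JSONL training file.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Create a fine-tuning job targeting GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)

# 3. Check on the job; once it succeeds, the returned model ID can be used
#    with the regular chat completions endpoint.
job = client.fine_tuning.jobs.retrieve(job.id)
print(job.status, job.fine_tuned_model)
```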
The costs associated with fine-tuning are as follows:
- Training: $0.008 per 1,000 tokens
- Input usage: $0.012 per 1,000 tokens
- Output usage: $0.016 per 1,000 tokens
According to OpenAI, a GPT-3.5 Turbo fine-tuning job with a training file of 100,000 tokens (roughly 75,000 words) that is trained for three epochs would cost about $2.40.
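That figure works out as a simple per-token calculation; note that the training rate applies once per epoch, i.e. per pass over the training data.

```python
# Back-of-the-envelope training cost for the example above.
TRAINING_RATE = 0.008          # USD per 1,000 training tokens

training_tokens = 100_000      # roughly 75,000 words
epochs = 3                     # passes over the training data in OpenAI's example

cost = training_tokens / 1_000 * TRAINING_RATE * epochs
print(f"${cost:.2f}")          # $2.40
```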
In other news, OpenAI has released two updated GPT-3 base models (babbage-002 and davinci-002), both of which can be fine-tuned and which add support for pagination and greater extensibility. As previously announced, the original GPT-3 base models will be retired on January 4, 2024.
GPT-3.5 does not have GPT-4’s ability to understand images, but OpenAI says fine-tuning support for GPT-4 will arrive later this fall; precise details remain undisclosed.