A day after OpenAI announced built-in support for fine-tuning its GPT-3.5 Turbo large language model (LLM), allowing enterprises to train the model on their proprietary data and run it at scale, the company named Scale AI as its “preferred partner” for GPT-3.5 fine-tuning.
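For context, OpenAI’s fine-tuning flow consists of uploading a JSONL file of chat-formatted examples and then creating a fine-tuning job against the gpt-3.5-turbo base model. The snippet below is a minimal sketch using the OpenAI Python SDK; the file name and example data are illustrative placeholders, not details from the article.

```python
# Minimal sketch of creating a GPT-3.5 Turbo fine-tuning job with the OpenAI Python SDK.
# The training file name and its contents are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Upload chat-formatted training examples: a JSONL file with one
#    {"messages": [{"role": ..., "content": ...}, ...]} object per line.
training_file = client.files.create(
    file=open("train.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start the fine-tuning job against the GPT-3.5 Turbo base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)

# 3. The job runs asynchronously; when it completes, it produces a custom model ID
#    (e.g. "ft:gpt-3.5-turbo:my-org::abc123") usable with the chat completions API.
```

Each line of the training file uses the same system/user/assistant message format as the chat completions API, so existing prompts and curated outputs can be reused as training examples.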
“We are excited to partner with OpenAI to supercharge model performance — helping every enterprise utilize AI most effectively for their unique needs,” said Alexandr Wang, founder and CEO of Scale AI, in a press release. “Prompting alone — atop even the best LLMs like GPT-3.5 — is not enough model customization to produce the most accurate, efficient results. As with software, an incredible amount of value comes from fine-grained optimizations, and fine tuning is critical for that.”
In a press release, Scale said the partnership “brings together OpenAI’s advanced base model GPT-3.5 with Scale’s fine-tuning expertise and industry-leading Data Engine to help every company create custom state-of-the-art models for their specific business needs.” Scale’s Data Engine accelerates the development of models by generating prompts and ranking model outputs.
Scale said it already performs fine-tuning for many commercial and open-source models. “As OpenAI’s preferred fine-tuning partner for GPT-3.5, we are excited to leverage their powerful APIs to help even more enterprises build the most powerful custom LLMs that increase efficiency while reducing costs,” the press release said.
Brad Lightcap, COO of OpenAI, said that “Scale extends our ability to bring the power of fine-tuning to more companies, building on their enterprise AI experience to help businesses better apply OpenAI models for their unique needs.”
Scale AI, a buzzy unicorn that shot to fame as a data-labeling service and reached a $7 billion valuation in April 2021, has had an up-and-down year: it laid off 20% of its 700-person staff in January, but bounced back with the launch of a full-stack generative AI platform in May.
Now, it is touting a fine-tuning case study with fintech company Brex, which has been using LLMs to generate high-quality expense memos that help ease the burden of compliance requirements for employees.
According to the Scale press release, the Brex team had previously been using GPT-4 for memo generation, but wanted to explore if they could improve cost and latency, while maintaining quality, by using a fine-tuned GPT-3.5 model.
“By using the GPT-3.5 fine-tuning API on Brex data annotated with Scale’s Data Engine, we saw that the fine-tuned GPT-3.5 model outperformed the stock GPT-3.5 turbo model 66% of the time,” the press release said.
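The evaluation Scale describes is essentially a head-to-head comparison of the stock and fine-tuned models on the same prompts. As a rough illustration (the fine-tuned model ID and the prompt below are hypothetical, not Brex’s actual setup), such a comparison might look like this:

```python
# Illustrative head-to-head comparison of stock vs. fine-tuned GPT-3.5 Turbo.
# The fine-tuned model ID and the example prompt are placeholders, not from the article.
from openai import OpenAI

client = OpenAI()

PROMPT = [
    {"role": "system", "content": "Write a concise expense memo for the transaction below."},
    {"role": "user", "content": "Vendor: Acme Cloud, amount: $1,240.00, category: infrastructure."},
]

def generate(model: str) -> str:
    """Generate a memo with the given model and return the text."""
    response = client.chat.completions.create(model=model, messages=PROMPT)
    return response.choices[0].message.content

stock_memo = generate("gpt-3.5-turbo")
tuned_memo = generate("ft:gpt-3.5-turbo:my-org::abc123")  # hypothetical fine-tuned model ID

# Human raters (or an automated judge) would then score which memo is better for each prompt;
# Scale reports the fine-tuned model winning 66% of such comparisons in Brex's case.
print("stock:", stock_memo)
print("tuned:", tuned_memo)
```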
Henrique Dubugras, CEO at Brex, said that “our ongoing partnership with OpenAI and Scale AI positions us at the cutting edge by employing state-of-the-art techniques to enhance employee compliance and help finance teams close the books faster. In particular, fine-tuning GPT-3.5 has been a game changer for us, enabling us to deliver high-quality AI experiences, comparable to GPT-4, with much lower cost and lower latency. This unlocks a whole new set of capabilities for us that were previously not viable.”