OpenAI is likely to make a fine-tuning UI available in the coming months, according to Logan Kilpatrick. The UI is expected to give users a seamless experience, letting them view their existing fine-tunes and create new ones from the same interface.
Furthermore, OpenAI has increased the concurrent training limit from 1 to 3, enabling users to fine-tune multiple models simultaneously. Currently, developers can customize only part of a fine-tuned model's name, via a suffix. In the future, however, developers may be able to customize the entire model name, Kilpatrick indicated.
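In practice, the suffix is the one component of a fine-tuned model's name that the developer chooses today. The sketch below illustrates this, assuming the `openai` Python client's fine-tuning endpoint and OpenAI's documented `ft:` naming pattern; the file ID, organization, and job-ID values are illustrative placeholders, not real identifiers:

```python
# Sketch: creating a fine-tuning job with a custom suffix.
# The developer-chosen suffix is folded into the resulting model
# name; every other name component is assigned by OpenAI.
#
# from openai import OpenAI
# client = OpenAI()
# job = client.fine_tuning.jobs.create(
#     training_file="file-abc123",   # placeholder training-file ID
#     model="gpt-3.5-turbo",
#     suffix="customer-support",     # the customizable part of the name
# )

def fine_tuned_model_name(base: str, org: str, suffix: str, job_id: str) -> str:
    """Compose a fine-tuned model name in OpenAI's ft: naming pattern."""
    return f"ft:{base}:{org}:{suffix}:{job_id}"

name = fine_tuned_model_name("gpt-3.5-turbo", "my-org", "customer-support", "7qTk3")
print(name)  # ft:gpt-3.5-turbo:my-org:customer-support:7qTk3
```

If full name customization arrives as Kilpatrick suggests, the developer would presumably control more than just the suffix slot in this pattern.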
Many developers around the world are hoping that OpenAI will make fine-tuning more broadly available at its inaugural developers' conference, 'OpenAI DevDay', taking place on November 6, 2023, in San Francisco. There has been a lot of anticipation about what the company will announce, though CEO Sam Altman has said there will be no announcement about GPT-5.
Recently, OpenAI also quietly unveiled "gpt-3.5-turbo-instruct," a new instruction-following language model designed to execute specific instructions efficiently, complementing the chat-focused GPT-3.5 Turbo. The new model will replace the existing Instruct models and certain text-based models. It matches the cost and performance of the other GPT-3.5 models, with a 4K context window and training data up to September 2021.
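Because gpt-3.5-turbo-instruct is an instruction model rather than a chat model, it is called through the classic Completions endpoint with a plain prompt string instead of a list of chat messages. A minimal sketch, assuming the `openai` Python client (the network call is commented out since it requires an API key; the helper merely assembles the request parameters):

```python
# Sketch: using gpt-3.5-turbo-instruct via the Completions endpoint.
# Unlike the chat models, it takes a single prompt string rather
# than a structured message list.

def build_completion_request(prompt: str, max_tokens: int = 256) -> dict:
    """Assemble request parameters for the Completions endpoint."""
    return {
        "model": "gpt-3.5-turbo-instruct",
        "prompt": prompt,          # plain instruction string, not chat messages
        "max_tokens": max_tokens,
    }

params = build_completion_request("Summarize the following text: ...")
# To actually send the request (requires an API key):
# from openai import OpenAI
# response = OpenAI().completions.create(**params)
# print(response.choices[0].text)
print(params["model"])  # gpt-3.5-turbo-instruct
```

For developers migrating off the older Instruct models this replaces, the change is largely a model-name swap in an otherwise identical request.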
The post OpenAI to Make Fine-Tuning UI Available Soon appeared first on Analytics India Magazine.