OpenAI CEO Sam Altman believes the era of giant large language models (LLMs) may be coming to an end. In a recent interview, Altman argued that parameter count alone is a misleading measure of model quality, comparing the fixation on size to the gigahertz race in computer chips in the 1990s and 2000s. What matters, in his view, is rapidly increasing capability, not parameter count.
Altman’s comments come as OpenAI has just released its latest large language model, GPT-4, which has generated significant buzz in the tech community. Even so, Altman is careful not to equate scale with quality: he sees the trend toward ever-larger models ending and expects future improvements to come in other ways.
Altman’s focus on capability and usefulness is at the core of OpenAI’s mission. The company has spent years building the most capable and safest models it can, and Altman credits its success to the team’s willingness to sweat every detail over a long period.
However, OpenAI has faced criticism from some quarters, including a recent open letter calling on AI labs to pause training of systems more powerful than GPT-4 for six months. While Altman defended OpenAI’s approach, he agreed with parts of the letter. The company remains committed to producing useful and safe models, a goal Altman believes can be achieved without focusing on model size alone.
As the tech industry continues to grapple with the implications of LLMs, Altman’s comments offer a thoughtful and measured perspective on where the technology goes next and the role it will play in shaping the world around us.