AI21 Labs’ new AI model can handle more context than most

Increasingly, the AI industry is moving toward generative AI models with longer contexts. But models with large context windows tend to be compute-intensive. Or Dagan, product lead at AI startup AI21 Labs, asserts that this doesn’t have to be the case — and his company is releasing a generative model to prove it.

Contexts, or context windows, refer to input data (e.g. text) that a model considers before generating output (more text). Models with small context windows tend to forget the content of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.
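
To make that concrete, here is a toy Python sketch (whitespace splitting stands in for a real tokenizer, and this mirrors no vendor’s actual API): whatever falls outside the window is simply never seen by the model.

```python
# Toy illustration of a context window: the model only "sees" the most
# recent tokens, so anything older is silently dropped.

def fit_to_context(tokens: list[str], context_window: int) -> list[str]:
    """Keep only the most recent tokens that fit in the window."""
    return tokens[-context_window:]

history = ("User: my name is Ada. " + "Assistant: noted. " * 20).split()
print(len(history))                     # 45 "tokens" of chat history
print(fit_to_context(history, 8))       # a small window has forgotten the name
print(fit_to_context(history, 64)[:5])  # a large window still contains it
```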

AI21 Labs’ Jamba, a new text-generating and -analyzing model, can perform many of the same tasks that models like OpenAI’s ChatGPT and Google’s Gemini can. Trained on a mix of public and proprietary data, Jamba can write text in English, French, Spanish and Portuguese.

Jamba can handle up to 140,000 tokens while running on a single GPU with at least 80GB of memory (like a high-end Nvidia A100). That translates to around 105,000 words, or 210 pages — a decent-sized novel.
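
Those word and page figures follow from common rules of thumb, roughly 0.75 English words per token and about 500 words per printed page, as a quick back-of-the-envelope check confirms:

```python
# Back-of-the-envelope check of the figures above.
tokens = 140_000
words = tokens * 0.75   # ~0.75 English words per token -> 105,000 words
pages = words / 500     # ~500 words per printed page   -> 210 pages
print(f"{words:,.0f} words, {pages:.0f} pages")
```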

Meta’s Llama 2, by comparison, has a 4,096-token context window (small by today’s standards) but requires only a GPU with around 12GB of memory to run. (Context windows are typically measured in tokens, which are chunks of raw text and other data.)

On its face, Jamba is unremarkable. Loads of freely available, downloadable generative AI models exist, from Databricks’ recently released DBRX to the aforementioned Llama 2.

But what makes Jamba unique is what’s under the hood. It uses a combination of two model architectures: transformers and state space models (SSMs).

Transformers are the architecture of choice for complex reasoning tasks, powering models like OpenAI’s GPT-4 and Google’s Gemini. They have several unique characteristics, but by far their defining feature is the “attention mechanism”: for every piece of input data (e.g., a token), a transformer weighs the relevance of every other piece of the input and draws on those weighted relationships to generate the output.
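
For readers who want to see that mechanism spelled out, below is a minimal NumPy sketch of scaled dot-product attention, simplified to a single head with no learned projections; it illustrates the general technique, not any particular model’s implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized exponent
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Scaled dot-product attention: every position attends to every other."""
    scores = q @ k.T / np.sqrt(q.shape[-1])  # pairwise relevance, shape (n, n)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ v                       # weighted mix of the inputs

n, d = 6, 16                     # 6 tokens, 16-dimensional embeddings
x = np.random.randn(n, d)
print(attention(x, x, x).shape)  # self-attention over the sequence -> (6, 16)
```

Note the n-by-n score matrix: compute and memory grow quadratically with sequence length, which is exactly why long contexts are expensive for pure transformer models.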

SSMs, on the other hand, combine several qualities of older types of AI models, such as recurrent neural networks and convolutional neural networks, to create a more computationally efficient architecture capable of handling long sequences of data.
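
The contrast shows up clearly in code. In a linear state space recurrence (a bare-bones toy below, far simpler than Mamba’s selective variant), everything seen so far is folded into a fixed-size state, so cost grows linearly rather than quadratically with sequence length.

```python
import numpy as np

def ssm_scan(A, B, C, inputs):
    """Linear SSM recurrence: h_t = A h_{t-1} + B x_t, with output y_t = C h_t.
    The hidden state h stays the same size no matter how long the input is."""
    h = np.zeros(A.shape[0])
    outputs = []
    for x in inputs:          # one update per element: O(n) in sequence length
        h = A @ h + B @ x
        outputs.append(C @ h)
    return np.stack(outputs)

state, dim, n = 8, 4, 100
A = np.eye(state) * 0.9               # decaying memory of earlier inputs
B = np.random.randn(state, dim) * 0.1
C = np.random.randn(dim, state) * 0.1
print(ssm_scan(A, B, C, np.random.randn(n, dim)).shape)  # (100, 4)
```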

Now, SSMs have their limitations. But some of the early incarnations, including an open source model called Mamba from Princeton and Carnegie Mellon researchers, can handle larger inputs than their transformer-based equivalents while outperforming them on language generation tasks.

Jamba in fact uses Mamba as part of its core model, and Dagan claims it delivers three times the throughput on long contexts compared with transformer-based models of comparable size.

“While there are a few initial academic examples of SSM models, this is the first commercial-grade, production-scale model,” Dagan said in an interview with TechCrunch. “This architecture, in addition to being innovative and interesting for further research by the community, opens up great efficiency and throughput possibilities.”

Now, while Jamba has been released under the Apache 2.0 license, an open source license with relatively few usage restrictions, Dagan stresses that it’s a research release not intended to be used commercially. The model doesn’t have safeguards to prevent it from generating toxic text or mitigations to address potential bias; a fine-tuned, ostensibly “safer” version will be made available in the coming weeks.
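
For readers who want to experiment with the research release, here is a hedged sketch of loading the weights with the Hugging Face transformers library. The model identifier and flags are assumptions based on how such checkpoints are typically published; AI21’s model card is the authoritative reference.

```python
# Hedged sketch: loading Jamba's open weights with Hugging Face transformers.
# The model id "ai21labs/Jamba-v0.1" and the flags below are assumptions;
# check AI21's model card for the exact, supported invocation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai21labs/Jamba-v0.1"  # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision helps fit an 80GB GPU
    device_map="auto",
    trust_remote_code=True,       # hybrid architectures often ship custom code
)

prompt = "In the beginning"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```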

But Dagan asserts that Jamba demonstrates the promise of the SSM architecture even at this early stage.

“The added value of this model, both because of its size and its innovative architecture, is that it can be easily fitted onto a single GPU,” he said. “We believe performance will further improve as Mamba gets additional tweaks.”


