Too many models | TechCrunch

How many AI models is too many? It depends on how you look at it… but 10 a week is probably a bit much. That’s how many we had in the last few days, or close to it, and it’s increasingly hard to say whether and how these models compare to one another — if it was ever possible to begin with. So what’s the point?

We’re at a weird time in the evolution of AI, though of course it’s been pretty weird the whole time. We’re seeing a proliferation of models large and small, from niche developers to large, well-funded ones.

Let’s just run down the list from this week, shall we? I’ve tried to condense as far as possible what sets each model apart.

  • Llama 3: Meta’s latest “open” flagship large language model. (The term “open” is disputed right now, but this project is widely used by the community regardless.)
  • Mixtral 8x22B: A “mixture of experts” model, on the large side, from a French outfit that has shied away from the openness it once embraced.
  • Stable Diffusion 3 Turbo: An upgraded SD3 to go with the new API from the open-ish Stability AI. Borrowing “turbo” from OpenAI’s model nomenclature is a little weird, but OK.
  • Adobe Acrobat AI Assistant: “Talk to your documents” from the 800-lb document gorilla. Pretty sure this is mostly a wrapper for ChatGPT, though.
  • Reka Core: From a small team formerly employed by Big AI, a multimodal model baked from scratch that is at least nominally competitive with the big dogs.
  • Idefics2: A more open multimodal model, built on top of recent, smaller Mistral and Google models.
  • OLMo-1.7-7B: A larger version of AI2’s LLM, among the most open out there, and a stepping stone to a future 70B-scale model.
  • Pile-T5: A version of the ol’ reliable T5 model fine-tuned on the Pile, a dataset with a heavy helping of code. The same T5 you know and love, but better at coding.
  • Cohere Compass: An “embedding model” (if you don’t know already, don’t worry about it — though there’s a quick sketch just after this list) focused on incorporating multiple data types to cover more use cases.
  • Imagine Flash: Meta’s newest image generation model, relying on a new distillation method to accelerate diffusion without overly compromising quality.
  • Limitless: “A personalized AI powered by what you’ve seen, said, or heard. It’s a web app, Mac app, Windows app, and a wearable.” 😬
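
Since “embedding model” gets hand-waved in the Compass entry, here is the promised sketch. It uses the open sentence-transformers library and a small open model as stand-ins (to be clear, this is not Cohere’s API): an embedding model turns each piece of text into a fixed-length vector, and semantically similar texts land close together in that vector space.

```python
# Minimal sketch of what an embedding model does. Uses the open
# sentence-transformers library as a stand-in; this is NOT Cohere's API.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small open embedding model

texts = [
    "How do I reset my password?",
    "Steps to recover a forgotten login",
    "Best pizza places in Naples",
]
vectors = model.encode(texts)  # shape (3, 384): one vector per text

def cosine(a, b):
    # Cosine similarity: close to 1.0 means "pointing the same way."
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors[0], vectors[1]))  # high: both are about account recovery
print(cosine(vectors[0], vectors[2]))  # low: unrelated topics
```

Models like Compass aim to extend that same trick beyond plain text, so that, say, a document and a related table or image can land near each other too.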

That’s 11, because one was announced while I was writing this. And let’s be clear, this is not all of the models released or previewed this week! It’s just the ones we saw and discussed. If we were to relax the conditions for inclusion a bit, there would be dozens: some fine-tuned versions of existing models, some combos like Idefics2, some experimental or niche ones, and so on. Not to mention this week’s new tools for building (torchtune) and battling against (Glaze 2.0) generative AI!

What are we to make of this never-ending avalanche? Next week might not bring the ten or twenty releases we saw in this one, but it will surely bring at least five or six of the tier noted above. We can’t “review” them all. So how can we help you, our readers, understand and keep up with all these things?

Well… the truth is you don’t need to keep up, and neither does nearly anyone else. There has been a shift in the AI space: some models, like ChatGPT and Gemini, have evolved into entire web platforms spanning multiple use cases and access points. Other large language models, like Llama or OLMo, technically share a basic architecture, but they don’t fill the same role. They are intended to live in the background as a service or component, not in the foreground as a name brand.

There’s been a deliberate confusion of these two things, because the developers of models want to borrow a little of the fanfare we tend to associate with major AI platform releases like your GPT-4V or Gemini Ultra. Everyone wants you to think that their release is an important one. And while it’s probably important to somebody, that somebody is almost certainly not you.

Think of it in terms of another broad, diverse category, like cars. When they were first invented, you just bought “a car.” Then a little later, you could choose between a big car, a small car, and a tractor. Nowadays there are hundreds of cars released every year, but you probably don’t need to be aware of even one in ten of them — because nine out of ten are not a car you need, or really even a car as you understand the term. We’re moving from the big/small/tractor era of AI towards the proliferation era, and even AI specialists can’t keep up with and test all the models coming out.

The other side of this story is that we were already in this stage long before ChatGPT and the other big models came out. Far fewer people were reading about this 7 or 8 years ago, but we covered it nevertheless because it was clearly a technology waiting for its breakout moment — which came in due time. There were papers, models, and research constantly coming out, and conferences like SIGGRAPH and NeurIPS were filled with machine learning engineers comparing notes and building on one another’s work. Here’s a visual understanding story I wrote in 2011!

That activity is still underway every day. But because AI has become big business — arguably the biggest in tech right now — these developments have been lent a bit of extra weight, since people are curious whether one of these might be the big leap over ChatGPT that ChatGPT was over its predecessors.

The simple truth is that none of these models is going to be that kind of big step, since OpenAI’s advance was built on a fundamental change to machine learning architecture (the transformer) that every other company has now adopted, and which has not been superseded. Incremental improvements like a point or two better on a synthetic benchmark, or marginally more convincing language or imagery, are all we have to look forward to for the present.
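
For the technically curious, the core operation of that architecture, scaled dot-product attention, fits in a few lines. Here is a bare-bones NumPy sketch for illustration only; real models add learned projections, multiple attention heads, masking, and enormous scale.

```python
# Bare-bones sketch of scaled dot-product attention, the heart of the
# transformer architecture mentioned above. Illustration only.
import numpy as np

def attention(Q, K, V):
    """Q, K, V: (seq_len, d) arrays of query, key, and value vectors."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how much each token "cares about" the others
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # each output is a weighted mix of the value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 16))  # 5 tokens, 16-dimensional embeddings
print(attention(x, x, x).shape)  # (5, 16): self-attention keeps the shape
```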

Does that mean none of these models matter? Certainly they do. You don’t get from version 2.0 to 3.0 without 2.1, 2.2, 2.2.1, and so on — and that is what researchers and engineers are diligently working on. And sometimes those advances are meaningful, address serious shortcomings, or expose unexpected vulnerabilities. We try to cover the interesting ones, but that’s just a fraction of the full number. We’re actually working on a piece now collecting all the models we think the ML-curious should be aware of, and it’s on the order of a dozen.

Don’t worry: when a big one comes along, you’ll know, and not just because TechCrunch is covering it. It’s going to be as obvious to you as it is to us.