Meta unveils its newest custom AI chip as it races to catch up

Meta, hell-bent on catching up to rivals in the generative AI space, is spending billions on its own AI efforts. A portion of those billions is going toward recruiting AI researchers. But an even larger chunk is being spent developing hardware, specifically chips to run and train Meta’s AI models.

Meta unveiled the newest fruit of its chip development efforts today, conspicuously a day after Intel announced its latest AI accelerator hardware. Called the “next-gen” Meta Training and Inference Accelerator (MTIA), the successor to last year’s MTIA v1, the chip runs models for tasks including ranking and recommending display ads on Meta’s properties (e.g. Facebook).

Compared to MTIA v1, which was built on a 7nm process, the next-gen MTIA is 5nm. (In chip manufacturing, the “process” node is shorthand for how small and densely a chip’s components can be fabricated; smaller nodes generally allow faster, more power-efficient chips.) The next-gen MTIA is a physically larger design, packed with more processing cores than its predecessor. And while it consumes more power (90W versus 25W), it also boasts more internal memory (128MB versus 64MB) and runs at a higher average clock speed (1.35GHz, up from 800MHz).

Meta says the next-gen MTIA is currently live in 16 of its data center regions and delivering up to 3x better overall performance compared to MTIA v1. If that “3x” claim sounds a bit vague, you’re not wrong; we thought so too. But Meta would only volunteer that the figure came from testing the performance of “four key models” across both chips.
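To illustrate why an aggregate figure like this is hard to interpret, here is a minimal, purely hypothetical sketch. The model names and per-model speedups below are invented for the example (Meta has not published per-model numbers); it simply shows how an “up to 3x” headline can coexist with a much smaller typical gain across “four key models.”

```python
# Hypothetical illustration only: invented per-model speedups, not Meta's data.
from math import prod

# Assumed next-gen MTIA vs. MTIA v1 speedups for four unnamed workloads.
speedups = {
    "ranking_model_a": 3.0,
    "ranking_model_b": 2.1,
    "recommendation_model_c": 1.6,
    "recommendation_model_d": 2.3,
}

best_case = max(speedups.values())                          # the "up to 3x" figure
geo_mean = prod(speedups.values()) ** (1 / len(speedups))   # a fairer aggregate

print(f"Best case: {best_case:.1f}x")
print(f"Geometric mean across four models: {geo_mean:.2f}x")
```

The point is simply that “up to” and “overall” can describe quite different numbers; without per-model figures, the claim is hard to evaluate.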

“Because we control the whole stack, we can achieve greater efficiency compared to commercially available GPUs,” Meta writes in a blog post shared with TechCrunch.

Meta’s hardware showcase — which comes a mere 24 hours after a press briefing on the company’s various ongoing generative AI initiatives — is unusual for several reasons.

One, Meta reveals in the blog post that it’s not using the next-gen MTIA for generative AI training workloads at the moment, although the company claims it has “several programs underway” exploring this. Two, Meta admits that the next-gen MTIA won’t replace GPUs for running or training models — but instead complement them.

Reading between the lines, Meta is moving slowly — perhaps more slowly than it’d like.

Meta’s AI teams are almost certainly under pressure to cut costs. The company is set to spend an estimated $18 billion by the end of 2024 on GPUs for training and running generative AI models, and, with training costs for cutting-edge generative models running into the tens of millions of dollars, in-house hardware presents an attractive alternative.

And while Meta’s hardware drags, rivals are pulling ahead, much to the consternation of Meta’s leadership, I’d suspect.

Google this week made its fifth-generation custom chip for training AI models, TPU v5p, generally available to Google Cloud customers, and revealed its first dedicated chip for running models, Axion. Amazon has several custom AI chip families under its belt. And Microsoft last year jumped into the fray with the Azure Maia AI Accelerator and the Azure Cobalt 100 CPU.

In the blog post, Meta says it took fewer than nine months to “go from first silicon to production models” of the next-gen MTIA, which, to be fair, is shorter than the typical window between Google TPU generations. But Meta has a lot of catching up to do if it hopes to achieve a measure of independence from third-party GPUs and match its stiff competition.

