Meta’s new AI chip runs faster than before


Meta promises the next generation of its custom AI chips will be more powerful and able to train its ranking models much faster. 

The Meta Training and Inference Accelerator (MTIA) is designed to work best with Meta’s ranking and recommendation models. The chips can help make training more efficient and inference (running a trained model to produce results) faster. 

The company said in a blog post that MTIA is a big piece of its long-term plan to build infrastructure around how it uses AI in its services. It wants to design its chips to work with its current technology infrastructure and future advancements in GPUs. 

“Meeting our ambitions for our custom silicon means investing not only in compute silicon but also in memory bandwidth, networking, and capacity as well as other next-generation hardware systems,” Meta said in its post. 

Meta announced MTIA v1 in May 2023, focusing on providing these chips to data centers. The next-generation MTIA chip will likely also target data centers. MTIA v1 was not expected to be released until 2025, but Meta said both MTIA chips are now in production. 

Right now, MTIA mainly trains ranking and recommendation algorithms, but Meta said the goal is to eventually expand the chip’s capabilities to begin training generative AI like its Llama language models. 

Meta said the new MTIA chip “is fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity.” The chip has 256MB of on-chip memory running at 1.3GHz, up from v1’s 128MB at 800MHz. Early test results from Meta showed the new chip performing three times better than the first-generation version across the four models the company evaluated. 

The competition to buy powerful chips underscores the need for custom silicon to run AI models. Demand for chips has grown so much that Nvidia, which currently dominates the AI chip market, is valued at $2 trillion.



Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.


