
Meta announced four successive generations of its custom Meta Training and Inference Accelerator (MTIA) chips on March 11: the MTIA 300, 400, 450, and 500, all scheduled for deployment over the next two years. Meta described the chips as progressively optimized for AI inference workloads, on the premise that HBM bandwidth is the binding constraint on inference.
Coming two weeks after Meta disclosed a long-term AI infrastructure partnership with AMD, the announcement puts Meta alongside Google and AWS among hyperscalers designing their own AI accelerators.
