Microsoft CEO Satya Nadella said the company will continue purchasing AI chips from Nvidia and AMD, signalling that hyperscalers still depend heavily on external silicon even as they develop in-house processors for AI workloads.
Microsoft has no plans to step away from third-party AI silicon, even as it invests heavily in designing its own chips.
Speaking on the company's latest earnings call, Nadella said Microsoft will continue buying AI chips from Nvidia and AMD, underscoring how demand for AI compute is outstripping even the largest cloud providers' internal capacity.
The comments come as Microsoft rapidly expands its AI infrastructure to support Azure, OpenAI workloads, and a growing portfolio of enterprise and consumer AI products, from Copilot to custom large language models deployed across its cloud services.
Why Microsoft Still Needs Nvidia and AMD

While Microsoft has introduced its own custom AI chips, including the Maia accelerator for data centres, Nadella made clear that in-house silicon is a complement rather than a replacement for external suppliers.
AI workloads are scaling at a pace that makes single-vendor strategies impractical. Training and inference for large models require vast amounts of compute, and Nvidia's GPUs remain the industry standard for many AI developers, while AMD has emerged as a credible alternative for specific data centre workloads.
Nadella said Microsoft is focused on ensuring customers can access the best performance-per-dollar across a range of AI use cases, which means continuing to deploy a mix of chips rather than forcing workloads onto proprietary hardware.
A Signal to the AI Chip Ecosystem

Microsoft’s position highlights a broader reality across hyperscalers: custom chips may improve margins and efficiency over time, but they are unlikely to displace Nvidia or AMD in the near term.
For Nvidia, Microsoft’s stance reinforces its central role in the AI supply chain, even as customers explore alternatives. For AMD, it signals continued opportunity to gain share as cloud providers look to diversify suppliers and manage costs.
The strategy also reflects risk management. Relying exclusively on in-house silicon could expose cloud platforms to delays in design, manufacturing constraints, or performance gaps as AI models evolve faster than chip development cycles.
What This Means for Azure and Enterprise Customers

For enterprises building AI systems on Microsoft Azure, the continued use of Nvidia and AMD chips means broader compatibility with existing AI frameworks and faster access to next-generation hardware.

It also suggests Microsoft will prioritise scale and reliability over tight vertical integration, ensuring that AI capacity constraints do not slow customer adoption. As competition intensifies among cloud providers, the ability to rapidly deploy proven hardware has become a strategic advantage.
The Bigger Picture

Microsoft’s commitment to buying Nvidia and AMD chips illustrates a defining tension in the AI era: hyperscalers want control and cost efficiency, but the speed of AI innovation still favours established silicon leaders.
As AI demand continues to surge, the cloud giants’ future is likely to be hybrid by design — combining proprietary chips with best-in-class external hardware to keep pace with a market that shows no signs of slowing.