In the era of generative AI (GenAI), industry and financial analysis are increasingly blending within the technology world. So much for the circular economy. On 24 February 2026, AMD and Meta announced a partnership to deploy AMD Helios racks, optimised for Meta’s workloads and running on AMD Instinct MI450 GPUs and sixth-generation EPYC CPUs. The shipments, starting in the second half of 2026, will span successive generations of silicon over several years and represent the equivalent of 6GW of power. The deal also includes AI model development and co-innovation agreements.

The companies are deepening their collaboration to align their GPU and CPU silicon, systems and software roadmaps; this will be the crucial part. The rack clusters run on ROCm software and were developed jointly by AMD and Meta through the Open Compute Project. As part of the agreement, AMD is also giving Meta warrants that could convert into a roughly 10% stake in AMD. Meta can only exercise the warrants if it buys all the agreed chips and AMD’s share price triples.

This is an example of what have come to be known as “chips-for-stock” partnerships, which can lead financial analysts to invoke potential stock dilution and other considerations outside the realm of industry analysis. For AMD, the deal is a massive validation of its AI computing roadmap; for the wider industry, it could reduce overreliance on a single supplier and accelerate innovation. It also gives Meta greater bargaining power, as it gains pricing leverage and lowers the risk of supply bottlenecks. In other words, Meta avoids being completely locked into Nvidia’s CUDA ecosystem.

GlobalData analyst Beatriz Valle comments: “Comparisons have been drawn with a similar agreement signed by AMD and OpenAI last October. However, this is different: the terms of the deal may be similar, but the scale is not. This agreement carries more weight in terms of long-term roadmap advancement. After all, for Meta, this is not just a chip purchase but a multi-generation commitment to the AMD roadmap. AMD benefits from large-scale deployment, which brings not only revenue scale but also ecosystem and software maturity.”

When a hyperscaler such as Meta commits billions to AMD, the implications reach far beyond an endorsement by OpenAI, and the message to the market is clear: the AI accelerator landscape is becoming multi-vendor, and software ecosystems could one day evolve beyond today’s CUDA dominance. For Meta, the deal means diversifying its compute supply and avoiding overreliance on Nvidia. Meta will need to meet massive demand for the inference workloads powering its AI-related services over the next few years, and its acquisition of Manus and diversification into proprietary LLMs indicate a shift in GenAI strategy towards accelerated monetisation.

It’s the software, stupid

This is not so much about hardware as about software: Nvidia’s success is built on CUDA and its associated software ecosystem. Chip designers and makers must not only develop and manufacture the silicon; they must also develop, test, support and update a software stack, which requires massive R&D investment and effort. One of the reasons behind Nvidia’s dominance in AI/ML is that its software stack is optimised for these workloads, and for the industry, anything that erodes that dominance is healthy because it drives innovation.

But competing on the software stack is a mammoth task for AMD, one that will demand sustained effort over many years. The company has a tradition of being the underdog: for decades it fought the Intel behemoth, which supplied most of the x86 chips powering servers and personal computers. Now it is measuring up against Nvidia, whose chips drove the first age of GenAI compute: training LLMs.

Virtually every company except Google, which has its own TPUs, uses Nvidia chips for training. However, we are now in the age of inference. Although Nvidia’s Blackwell and upcoming Rubin architectures retain an edge over AMD’s Instinct MI400 chips in training, Meta’s AI labs are now focusing on other areas of the business instead of frontier model development, and this is where AMD steps in. Meta is also reportedly developing its own ASIC architecture. This deal could serve as a blueprint for increasingly strategic agreements between chip producers and hyperscalers as the market evolves and inference accounts for a growing share of AI workloads.