OpenAI is preparing to initiate mass production of its own AI chips in 2026, marking a step towards meeting the increasing demand for computing power while decreasing its dependence on Nvidia, reported The Financial Times.

The new chip, developed in partnership with Broadcom, a US semiconductor company, is expected to be shipped in the coming year, several sources familiar with the collaboration told the publication.

Broadcom’s CEO, Hock Tan, recently said that a new, undisclosed customer has committed to $10bn in orders.

This move aligns OpenAI with other technology firms such as Google, Amazon, and Meta, which have also created their own specialised chips to handle AI workloads.

According to insiders, OpenAI intends to utilise the chips internally rather than offering them to external clients.

The partnership with Broadcom began in 2024, although a timeline for mass production of the chip had not previously been disclosed.

During a call with analysts, Tan revealed that Broadcom had secured a fourth customer for its custom AI chip division.

While Broadcom does not disclose the identities of its clients, sources confirmed that OpenAI is the new customer.

Both companies have chosen not to comment on the matter, the report said.

Tan noted that this agreement has enhanced Broadcom’s growth outlook by generating “immediate and fairly substantial demand,” with plans to begin shipping chips for this customer “pretty strongly” from next year.

Previously, Broadcom collaborated with Google on the development of Google’s custom “TPU” AI chips.

OpenAI’s CEO, Sam Altman, has consistently highlighted the need for increased computing power to accommodate the growing number of businesses and consumers utilising products such as ChatGPT, as well as for training and executing AI models.

In August 2025, Altman indicated that the company is prioritising computing resources “in light of the increased demand from [OpenAI’s latest model] GPT-5” and aims to double its computing fleet “over the next five months.”