
Meta, the parent company of Facebook, Instagram and WhatsApp, has begun testing its first in-house chip designed specifically for training AI systems, reported Reuters, citing two sources.
The move is part of the company’s efforts to reduce its reliance on external suppliers such as Nvidia, the sources told the news agency.
The test deployment of the new AI training chip is currently limited, with plans to scale production if the results are promising.
The initiative forms part of Meta’s broader strategy to rein in its infrastructure costs. Meta has forecast total expenses for 2025 of between $114bn and $119bn, with up to $65bn allocated for capital expenditure, primarily focused on expanding its AI infrastructure.
The report added that the new chip is a dedicated accelerator, designed exclusively to handle AI-specific tasks.
Taiwan-based chip manufacturer TSMC is working with Meta to produce the chip, the publication quoted a source as saying.
The test follows Meta’s first successful “tape-out” of the chip, a key milestone in chip development in which the completed design is sent to a chip factory for fabrication.
The tape-out process, which can cost tens of millions of dollars and take several months, is often followed by further testing and potential redesigns if issues arise.
Meta’s new AI training chip is part of its Meta Training and Inference Accelerator (MTIA) series, which has faced setbacks in the past, including a similar chip that was scrapped during an earlier development phase.
However, Meta’s MTIA programme has made progress, with the company recently deploying an MTIA chip for AI inference tasks such as powering the recommendation systems on Facebook and Instagram, the report said.
Looking ahead, Meta plans to utilise its own chips for training AI systems by 2026.
Initially, these chips will support recommendation systems, with the company eventually extending their use to generative AI (genAI) products, including its chatbot, Meta AI.
In February this year, Meta Platforms revealed that it was exploring the development of a new data centre campus to support its AI projects, with estimated costs exceeding $200bn.