Intel and Google have announced a multiyear collaboration aimed at further developing AI and cloud infrastructure.

Under the partnership, the two companies will work closely to expand the role of Intel Xeon processors and custom infrastructure processing units (IPUs) in building and scaling complex AI systems.

As organisations increasingly adopt AI-driven workloads, the need for more sophisticated and diverse infrastructure has grown, with CPUs remaining central to system orchestration, data management and overall performance.

Under the collaboration, Google plans to continue deploying multiple generations of Intel Xeon processors across its global infrastructure, including recent integrations in its C4 and N4 cloud instances, which support workloads such as AI training, inference and general-purpose computing.

Intel and Google will also continue their joint work on custom IPUs, programmable chips that offload storage, security and networking tasks from main processors, improving utilisation and operational efficiency.

By sharing infrastructure responsibilities, IPUs help data centres allocate computational resources more efficiently while maintaining predictable, consistent system performance.

Both companies have highlighted the importance of combining general-purpose processors with purpose-built accelerators as a means to address the increasing demands of AI workloads.

The partnership aims to strengthen Google’s cloud operations by boosting energy efficiency, improving performance, and reducing costs, providing a scalable way to manage next-generation AI services.

Intel CEO Lip-Bu Tan said: “AI is reshaping how infrastructure is built and scaled. Scaling AI requires more than accelerators – it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”

This extended partnership underlines the ongoing significance of CPUs and IPUs within hyperscale data centre designs, positioning them as essential elements in the delivery of advanced cloud and AI solutions.

The two firms have stated that their efforts will help lay the foundation for new AI cloud services, supporting a broad network of enterprises, developers and users.

Recently, Intel announced plans to participate in Elon Musk’s Terafab project, an AI chip manufacturing facility under development in Texas alongside SpaceX and Tesla.