AMD has introduced its Ryzen AI embedded processors, a new series of x86 processors designed to support AI-driven applications at the edge.

Within this line, the P100 and X100 Series processors cater to different needs in automotive digital cockpits, smart healthcare systems, and autonomous systems, including humanoid robotics.

The processors aim to provide high performance and efficient AI computing in compact ball grid array (BGA) packages, addressing the requirements of original equipment manufacturers (OEMs), tier-1 suppliers, and developers in automotive and industrial markets.

Built on the “Zen 5” core architecture, these processors deliver scalable x86 performance and deterministic control. They incorporate an RDNA 3.5 graphics processing unit (GPU) for real-time graphics processing and an XDNA 2 neural processing unit (NPU) for low-latency AI acceleration, all on a single chip.

The P100 Series offers 4-6 cores and focuses on in-vehicle experiences and industrial automation. The X100 Series provides higher CPU core counts and greater AI performance, measured in tera operations per second (TOPS), for more demanding tasks.

According to AMD, the P100 Series is specifically tailored for digital cockpit environments, optimising human-machine interfaces with real-time graphics capabilities in a compact form factor. It operates within a 15–54 watt range and supports temperatures from -40°C to +105°C, making it suitable for use in harsh environments with limited space.

The RDNA 3.5 GPU can drive up to four 4K displays or two 8K displays at 120 frames per second.

The AMD XDNA 2 NPU delivers up to 50 TOPS, up to three times the AI inference performance of preceding models. It supports advanced AI models such as vision transformers and compact large language models (LLMs) for improved voice and gesture recognition.

The integrated software stack provides a unified development environment across the CPU, GPU, and NPU.
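
AMD's announcement does not detail the stack's programming interfaces. Purely as an illustrative sketch, the snippet below assumes the existing Ryzen AI flow of ONNX Runtime with the Vitis AI execution provider to target the NPU; the model file, input shape, and output handling are hypothetical placeholders.

```python
# Illustrative sketch only: AMD has not detailed the P100/X100 toolchain in this
# announcement. This assumes the Ryzen AI flow of ONNX Runtime with the Vitis AI
# execution provider; the model file and input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

# Request the NPU-backed provider first; when it is available, supported
# subgraphs are offloaded to the NPU and remaining operators run on the CPU.
session = ort.InferenceSession(
    "gesture_recognition.onnx",  # hypothetical model
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
)

# Feed a dummy camera frame shaped to the model's declared input.
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: frame})
print("Top logits:", outputs[0][0][:5])
```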

The infrastructure leverages an open-source virtualisation framework that securely separates multiple operating system domains. This setup allows Yocto or Ubuntu to power human-machine interfaces (HMIs), FreeRTOS to handle real-time control, and Android or Windows to run richer applications, all operating concurrently on the same hardware.
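
AMD does not name the hypervisor or its tooling. As a minimal sketch of what defining one such guest domain could look like, the example below assumes a Xen-based host managed through libvirt-python; the domain name, image path, and resource figures are hypothetical, and the FreeRTOS and Android guests would be defined along the same lines.

```python
# Illustrative sketch only: AMD names neither the hypervisor nor its management
# tooling. This assumes a Xen host managed with libvirt-python; all names,
# paths, and resource figures are hypothetical placeholders.
import libvirt

# Minimal guest definition for the HMI domain (Ubuntu driving the cockpit
# displays); real-time and application guests would get their own definitions.
HMI_DOMAIN_XML = """
<domain type='xen'>
  <name>hmi-ubuntu</name>
  <memory unit='MiB'>4096</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/images/hmi-ubuntu.qcow2'/>
      <target dev='xvda' bus='xen'/>
    </disk>
  </devices>
</domain>
"""

conn = libvirt.open("xen:///system")   # connect to the Xen host
dom = conn.defineXML(HMI_DOMAIN_XML)   # register the guest definition
dom.create()                           # boot the HMI domain
conn.close()
```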

AMD said that this approach aims to reduce costs, streamline customisation efforts, and expedite time-to-market for automotive and industrial systems.

AMD Embedded senior vice president and general manager Salil Raje said: “As industries push for more immersive AI experiences and faster on-device intelligence, they need high performance without added system complexity.

“The Ryzen AI embedded portfolio brings leadership CPU, GPU and NPU capabilities together in a single device, enabling smarter, more responsive automotive, industrial, and autonomous systems.”

AMD Ryzen AI embedded P100 processors are currently sampling with early-access customers, with production shipments anticipated in the second quarter. Processors with 8-12 cores, aimed at industrial automation, are expected to begin sampling in the first quarter, while sampling of X100 Series processors with up to 16 cores is projected to start in the first half of this year.

In October 2025, AMD entered into a 6 gigawatt (GW) deal with OpenAI under which the ChatGPT developer will use multiple generations of AMD GPUs for its upcoming AI infrastructure. Under the terms of the deal, OpenAI has chosen AMD as its main strategic compute partner for large-scale rollouts.