LAS VEGAS — At CES 2026, AMD Chair and CEO Dr. Lisa Su unveiled an aggressive roadmap to conquer the next era of artificial intelligence, debuting massive data center infrastructure and consumer silicon designed to handle "yotta-scale" computing. In a keynote address defined by a strategy of open ecosystems and vertical integration, Dr. Su announced the Helios AI rack, the Instinct MI455 accelerator, and the Ryzen AI 400 series, positioning the company to power everything from hyperscale cloud clusters to autonomous lunar landers.
Key Points
- Infrastructure Scale-Up: AMD introduced the Helios rack, a liquid-cooled, 7,000-pound system powered by the new Instinct MI455 GPUs, designed to deliver 10x the performance of the previous generation.
- Consumer AI Push: The new Ryzen AI 400 series and Ryzen AI Max processors aim to bring local, agentic AI capabilities to laptops and workstations, challenging Apple and NVIDIA in the creator market.
- Compute Demand: Dr. Su projected that global compute capacity must grow 100x over the next five years to reach "yotta-scale" (10 yottaflops) as AI adoption expands to five billion users worldwide.
- Strategic Partnerships: AMD showcased collaborations with OpenAI, Blue Origin, and a new U.S. government initiative, the Genesis Mission, to solidify its role in critical infrastructure and scientific research.
The Shift to Yotta-Scale Computing
The central theme of Dr. Su’s address was the exponential escalation of compute requirements. While the industry has scaled rapidly since the debut of ChatGPT, AMD forecasts a trajectory that dwarfs current capabilities. Dr. Su stated that global infrastructure must scale from approximately one zettaflop in 2022 to more than 10 yottaflops (one followed by 25 zeros) within five years.
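Taking the keynote's figures at face value, the jump from roughly one zettaflop to more than 10 yottaflops in five years implies a striking compound growth rate. The sketch below is our own back-of-the-envelope arithmetic, not a calculation AMD presented:

```python
# Back-of-the-envelope check of the keynote's scaling figures.
# Stated in the talk: ~1 zettaflop (1e21 FLOPS) of global AI compute in
# 2022, growing past 10 yottaflops (1e25 FLOPS) within five years.
zettaflop = 1e21
target = 10 * 1e24  # 10 yottaflops

growth_factor = target / zettaflop            # total multiple needed
years = 5
annual_growth = growth_factor ** (1 / years)  # implied compound rate per year

print(f"total growth: {growth_factor:,.0f}x")                    # 10,000x
print(f"implied annual growth: ~{annual_growth:.1f}x per year")  # ~6.3x
```

In other words, sustaining that trajectory would require compute capacity to multiply by roughly six every year, which puts the scale of the infrastructure announcements in context.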
To meet this demand, AMD introduced Helios, a next-generation rack-scale platform developed in collaboration with Meta based on Open Compute Project (OCP) standards. The double-wide, liquid-cooled rack integrates networking, storage, and compute into a single turnkey solution.
The Instinct MI455 and Venice CPU
At the heart of the Helios rack lies the new Instinct MI455 accelerator. Built on 2nm and 3nm process technologies, the chip features 320 billion transistors—70% more than its predecessor, the MI355. It utilizes advanced 3D chiplet packaging and includes 432 GB of HBM4 memory to maximize throughput.
Dr. Su also revealed the engine driving these accelerators: the next-generation EPYC CPU, codenamed Venice. Featuring up to 256 Zen 6 cores, Venice is purpose-built for AI data centers, with doubled memory and GPU bandwidth to ensure the accelerators are fed data at full speed.
"We definitely don't have enough compute... Every time we want to release a new feature, we want to produce a new model, we want to bring this technology to the world, we have a big fight internally over compute because there are so many things we want to launch... that we simply cannot."
— Greg Brockman, President and Co-founder, OpenAI
Bringing AI to the Edge and PC
While the data center remains the powerhouse for training large models, AMD is aggressively targeting on-device inference, where trained models run locally rather than in the cloud. Dr. Su announced the Ryzen AI 400 series, creating a new tier of "AI PCs" capable of running complex agents locally without cloud latency or privacy concerns.
For developers and high-end creators, AMD introduced Ryzen AI Max. This system-on-chip (SoC) integrates 16 Zen 5 CPU cores and 40 RDNA 3.5 GPU compute units. Crucially, it employs a unified memory architecture supporting up to 128 GB shared between CPU and GPU. According to AMD, this allows the processor to run models with up to 200 billion parameters locally, a capability previously reserved for server-grade hardware.
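The 200-billion-parameter figure is plausible once quantization is factored in, since model weights dominate inference memory. The sketch below is our own estimate of the footprint, not AMD's published math; the bytes-per-weight values are assumptions about common quantization levels:

```python
# Rough memory-footprint estimate for running a 200B-parameter model in
# 128 GB of unified memory. Weights dominate inference memory, so
# bytes ~= parameters * bytes-per-weight (ignores KV cache and activations).
def model_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * (bits_per_weight / 8) / 1e9

for bits in (16, 8, 4):
    gb = model_weight_gb(200, bits)
    fits = "fits" if gb <= 128 else "exceeds"
    print(f"200B params @ {bits}-bit: {gb:.0f} GB ({fits} 128 GB unified memory)")
```

At 16-bit precision the weights alone need about 400 GB, but at 4-bit quantization they shrink to roughly 100 GB, which is how a 200B-parameter model could plausibly fit within the 128 GB unified memory pool.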
To further democratize AI development, AMD unveiled Ryzen AI Halo, a small-form-factor reference platform launching in Q2 2026. Designed as a "dev kit" for the AI era, it aims to give independent developers access to high-performance local inference hardware.
Software Ecosystem and Strategic Alliances
AMD continued to differentiate itself from competitor NVIDIA by emphasizing an open software ecosystem. Dr. Su highlighted the maturity of the ROCm software stack, noting day-zero support for major frameworks like PyTorch and a partnership with Luma AI to optimize video generation models.
Spatial Intelligence and Physical AI
The keynote emphasized "Physical AI"—machines that understand and navigate the real world. Dr. Fei-Fei Li, renowned computer scientist and CEO of World Labs, joined Dr. Su to demonstrate "spatial intelligence" models. These systems can generate fully navigable 3D worlds from static 2D images in minutes, running on AMD Instinct accelerators.
In the robotics sector, Generative Bionics unveiled "Gene One," a humanoid robot powered by AMD embedded systems. The robot features distributed tactile sensors to mimic human touch, aiming for deployment in safety-critical industrial environments.
Healthcare and Public Sector Initiatives
The presentation showcased the tangible impact of high-performance computing (HPC) on the bio-economy. Executives from Absci, Illumina, and AstraZeneca discussed how AMD technology is accelerating drug discovery and genomics.
Dr. Su also welcomed Michael Kratsios, who oversees the Genesis Mission, a U.S. federal initiative to converge AI, supercomputing, and quantum computing. As part of this "whole of government" approach to securing American AI leadership, AMD hardware will underpin new supercomputers, including the "Lux" AI factory for science and the "Discovery" system planned for 2028.
"This is a moment where AI is helping design a renewable energy source as powerful as the sun itself... We're entering this era of yotta-scale computing where the deployment of more powerful models everywhere will require a massive increase in the amount of compute in the world."
— Dr. Lisa Su, Chair and CEO, AMD
What’s Next: The Road to 2027
Looking beyond the immediate product launches, AMD provided a glimpse into its long-term silicon roadmap. Development is already underway for the MI500 series, based on the CDNA 6 architecture. Scheduled for launch in 2027, the MI500 is projected to deliver a 1,000x increase in AI performance compared to benchmarks from four years prior.
With the launch of Helios later this year and the immediate availability of Ryzen AI 400 mobile processors, AMD is executing a strategy to capture market share through open standards and broad portfolio availability, betting that the future of AI will rely as much on partnership and interoperability as it does on raw silicon performance.