
AMD unveils CPU, NPU and GPU strategy for AI data centers

Lisa Su, CEO of Advanced Micro Devices, spoke about her company’s chip architecture strategy for AI data centers and AI PCs at the Computex trade fair in Taiwan.

She outlined AMD's plans for its Central Processing Unit (CPU), Neural Processing Unit (NPU) and Graphics Processing Unit (GPU) architectures, which are designed to power end-to-end AI infrastructure from the data center to the PC.

AMD unveiled an expanded AMD Instinct accelerator roadmap, introduced an annual cadence of leading AI accelerators, and previewed the new AMD Instinct MI325X accelerator, scheduled to be available in Q4 2024.

AMD also previewed the 5th Gen AMD EPYC server processors, scheduled for launch in the second half of 2024. For laptops and desktops, AMD announced the AMD Ryzen AI 300 Series, the third generation of AMD AI-enabled mobile processors, as well as the AMD Ryzen 9000 Series processors.

In her keynote, Su demonstrated how partners are leveraging AMD's broad portfolio of training and inference engines to accelerate AI on PCs, in data centers, and at the edge.

AMD Instinct accelerator family

AMD Instinct MI325X accelerator.

At Computex 2024, Su unveiled a multi-year, expanded roadmap for AMD Instinct accelerators, which will deliver an annual cadence of leading AI performance and memory capacity with each generation.

The updated roadmap starts with the new AMD Instinct MI325X accelerator, which will be available in Q4 2024. After that, the AMD Instinct MI350 series, based on the new AMD CDNA 4 architecture, is expected to be available in 2025 and will offer up to 35x higher AI inference performance compared to the AMD Instinct MI300 series with the AMD CDNA 3 architecture.

The AMD Instinct MI400 series is expected to launch in 2026 and is based on the AMD CDNA “Next” architecture.

“The AMD Instinct MI300X accelerators continue to enjoy strong popularity among numerous partners and customers, including Microsoft Azure, Meta, Dell Technologies, HPE, Lenovo and others, which is a direct result of the exceptional performance and value proposition of the AMD Instinct MI300X accelerator,” said Brad McCredie, corporate vice president, Data Center Accelerated Compute at AMD, in a statement. “With our updated annual product cadence, we are relentless in our pace of innovation, delivering the leadership and performance that the AI industry and our customers expect to drive the next evolution of AI training and inference in the data center.”

Su also said that the AMD ROCm 6 open software stack is maturing, allowing AMD Instinct MI300X accelerators to boost the performance of some of the most popular LLMs. In addition, AMD is continuing its upstream work on popular AI frameworks such as PyTorch, TensorFlow and JAX.
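Because ROCm keeps PyTorch's familiar CUDA-style API, checking that an Instinct accelerator is visible takes only a few lines. The following is a minimal sketch, assuming a ROCm build of PyTorch is installed; the article does not prescribe any particular setup.

```python
# Minimal sketch: confirm a ROCm build of PyTorch can see an AMD Instinct GPU.
# Assumes PyTorch was installed from AMD's ROCm wheels; nothing here is from the article.
import torch

# On ROCm builds, torch.version.hip is set and the torch.cuda API is routed to HIP,
# so existing CUDA-style code runs unmodified on AMD GPUs.
print("HIP runtime:", torch.version.hip)             # None on CUDA-only builds
print("GPU visible:", torch.cuda.is_available())

if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))  # e.g. an Instinct MI300X
    x = torch.randn(4096, 4096, device="cuda")
    y = x @ x                                        # matmul executed on the AMD GPU
    print("Result shape:", tuple(y.shape))
```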

AMD has announced an updated annual cadence for the AMD Instinct accelerator roadmap to meet growing demand for more AI compute power. This will help AMD Instinct accelerators drive the development of next-generation AI models.

The updated AMD Instinct annual roadmap highlighted the new AMD Instinct MI325X accelerator, which offers 288GB of HBM3E memory and 6 terabytes per second of memory bandwidth, is OAM compatible with the AMD Instinct MI300 series, and will be generally available in Q4 2024.

The accelerator will feature industry-leading memory capacity and bandwidth, 2x and 1.3x higher than the competition, respectively. The first product in the AMD Instinct MI350 series, the AMD Instinct MI350X accelerator, is based on the AMD CDNA 4 architecture and is expected to be available in 2025. It will be OAM compatible with the MI300 series accelerators, be built on a 3nm process node, support the FP4 and FP6 AI data types, and feature up to 288GB of HBM3E memory.

The AMD CDNA “Next” architecture that will power the AMD Instinct MI400 series accelerators is expected to be available in 2026 and will offer the latest features and capabilities to help unlock additional performance and efficiency for AI inference and training at scale.

Finally, AMD highlighted that demand for AMD Instinct MI300X accelerators continues to grow as numerous partners and customers use the accelerators to run their demanding AI workloads. Accelerator users include Microsoft Azure, Dell, Supermicro, Lenovo and HPE.

Reinventing the PC to deliver intelligent, personal experiences

5th Gen AMD EPYC processors.

The 5th Gen AMD EPYC processors (codenamed “Turin”) announced at Computex will leverage the “Zen 5” core and continue the leading performance and efficiency of the AMD EPYC processor family. They are expected to be available in the second half of 2024.

Su was joined by executives from Microsoft, HP, Lenovo and Asus to unveil new PC experiences powered by AMD Ryzen AI 300 Series processors and AMD Ryzen 9000 Series desktop processors.

AMD has detailed its next-generation “Zen 5” CPU core, designed from the ground up for leading performance and power efficiency, from supercomputers and the cloud to PCs.

AMD also introduced the AMD XDNA 2 NPU core architecture, which delivers up to 50 TOPS of AI processing power. AMD XDNA 2 is the industry's only NPU to support the advanced Block FP16 data type, providing higher accuracy without sacrificing performance compared to the lower-precision data types used by competing NPUs. Together, “Zen 5”, AMD XDNA 2 and AMD RDNA 3.5 graphics enable next-generation AI experiences in laptops with AMD Ryzen AI 300 Series processors.
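The article does not describe a programming model for the XDNA 2 NPU, but AMD's Ryzen AI software exposes the NPU through ONNX Runtime. The sketch below assumes the Vitis AI execution provider is installed; the model and config file names are placeholders, not AMD-documented values.

```python
# Hedged sketch: running an ONNX model on a Ryzen AI NPU via ONNX Runtime.
# Assumes the Ryzen AI software stack with the Vitis AI execution provider is installed;
# "model.onnx" and "vaip_config.json" are placeholder file names, not from the article.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],  # NPU first, CPU fallback
    provider_options=[{"config_file": "vaip_config.json"}, {}],
)

# Feed a dummy input matching the model's expected shape (assumed here: 1x3x224x224).
name = session.get_inputs()[0].name
outputs = session.run(None, {name: np.random.rand(1, 3, 224, 224).astype(np.float32)})
print("Output shape:", outputs[0].shape)
```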

On stage at Computex, ecosystem partners showcased how they are working with AMD to enable new AI experiences for PCs. Microsoft highlighted its long-standing partnership with AMD and announced that AMD Ryzen AI 300 Series processors exceed Microsoft's requirements for Copilot+ PCs. HP introduced new Copilot+ PCs with AMD technology and demonstrated the Stable Diffusion XL Turbo image generator running locally on an HP laptop with a Ryzen AI 300 Series processor.
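HP's demo ran Stable Diffusion XL Turbo entirely on the laptop. The article does not say which software stack HP used; the sketch below shows one common way to run SDXL Turbo locally with the Hugging Face diffusers library, assuming a PyTorch build that can reach a local GPU.

```python
# Hedged sketch: generating an image locally with Stable Diffusion XL Turbo.
# The framework choice (Hugging Face diffusers + PyTorch) is an assumption, not HP's stack.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# SDXL Turbo is distilled for single-step generation with no classifier-free guidance.
image = pipe(
    prompt="a photo of a red bicycle leaning against a brick wall",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("sdxl_turbo_local.png")
```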

Lenovo unveiled upcoming consumer and business laptops with Ryzen AI 300 Series processors and showed how the company is using Ryzen AI to enable new Lenovo AI software. Asus showcased a broad portfolio of AI PCs for business users, consumers, content creators and gamers with Ryzen AI 300 Series processors.

AMD also introduced the AMD Ryzen 9000 series desktop processors based on the “Zen 5” architecture, which provide leading performance in gaming, productivity and content creation.

In addition, AMD announced the AMD Radeon PRO W7900 Dual Slot workstation graphics card, optimized for scalable AI performance on multi-GPU platforms. AMD also introduced the AMD ROCm 6.1 open software stack, designed to make AI development and deployment with AMD Radeon desktop GPUs more compatible, accessible, and scalable.
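For multi-GPU workstation setups like the dual-slot W7900, a ROCm-enabled PyTorch build enumerates each card through the usual device API. A minimal sketch under that assumption follows; a real training job would layer torch.distributed on top, and nothing below reflects AMD-published scaling figures.

```python
# Minimal sketch: enumerate multiple Radeon PRO GPUs under ROCm and run a job on each.
# Assumes a ROCm build of PyTorch; device counts and names depend on the machine.
import torch

num_gpus = torch.cuda.device_count()   # ROCm builds report AMD GPUs through torch.cuda
print(f"Visible AMD GPUs: {num_gpus}")

checksums = []
for i in range(num_gpus):
    with torch.cuda.device(i):
        a = torch.randn(2048, 2048, device="cuda")
        checksums.append((a @ a).sum().item())       # small matmul per GPU

for i, c in enumerate(checksums):
    print(f"GPU {i} ({torch.cuda.get_device_name(i)}): checksum {c:.2f}")
```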

Driving the next wave of edge AI innovation

The new AMD Ryzen AI 300.

AMD showed how its AI and adaptive computing technology is driving the next wave of AI innovation at the edge. Su said only AMD has all the IP needed for edge AI application acceleration.

The new AMD Versal AI Edge Series Gen 2 combines FPGA programmable logic for real-time pre-processing, next-generation AI Engines with XDNA technology for efficient AI inference, and embedded CPUs for post-processing to deliver the most powerful single-chip adaptive solution for edge AI. AMD Versal AI Edge Gen 2 devices are now available for early access and are currently in development with over 30 key partners.

AMD demonstrated how it enables AI at the edge across various industries, including:

  • Illumina uses advanced AMD technology to unlock the power of genome sequencing.
  • Subaru uses AMD Versal AI Edge Gen 2 devices to power its EyeSight ADAS platform in support of Subaru’s mission of zero road fatalities by 2030.
  • Canon uses the Versal AI Core series for its Free Viewpoint video system, revolutionizing the viewing experience for live sports broadcasts and webcasts.
  • Hitachi Energy's HVDC protection relays predict electrical surges using AMD's adaptive computing technology for real-time processing.
