
AMD Instinct GPU Roadmap Set for Annual Cadence as AMD Accelerates Data Centre AI Innovation and Leadership

At Computex 2024, AMD showcased the growing momentum of the AMD Instinct™ accelerator family during the opening keynote by Chair and CEO Dr Lisa Su. AMD unveiled a multiyear, expanded accelerator roadmap that will bring an annual cadence of leadership AI performance and memory capabilities at every generation.

The updated roadmap starts with the new MI325X accelerator, which will be available in Q4 2024. Following that, the MI350 series, powered by the new AMD CDNA™ 4 architecture, is expected to be available in 2025, bringing up to a 35x increase in AI inference performance compared to the MI300 series with AMD CDNA 3 architecture. Expected to arrive in 2026, the MI400 series is based on the AMD CDNA "Next" architecture.

“The AMD Instinct MI300X accelerators continue their strong adoption from numerous partners and customers including Microsoft Azure, Meta, Dell Technologies, HPE, Lenovo, and others, a direct result of the AMD Instinct MI300X accelerator’s exceptional performance and value proposition,” said Brad McCredie, Corporate Vice President, Data Centre Accelerated Compute, at AMD. “With our updated annual cadence of products, we are relentless in our pace of innovation, providing the leadership capabilities and performance the AI industry and our customers expect to drive the next evolution of data center AI training and inference.”

AMD AI Software Ecosystem Matures, Boosting AMD Instinct

The AMD ROCm™ 6 open software stack continues to mature, enabling AMD Instinct MI300X accelerators to drive impressive performance for some of the most popular LLMs. On a server using eight AMD Instinct MI300X accelerators and ROCm 6 running Meta Llama-3 70B, customers can get 1.3x better inference performance and token generation compared to the competition.

On a single AMD Instinct MI300X accelerator with ROCm 6, customers can get 1.2x better inference performance and token generation throughput than the competition on Mistral-7B.

AMD also highlighted that Hugging Face, the largest and most popular repository for AI models, is now testing 700,000 of its most popular models nightly to ensure they work out of the box on MI300X accelerators. In addition, AMD is continuing its upstream work on popular AI frameworks like PyTorch, TensorFlow, and JAX.
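One practical consequence of that upstream work is that ROCm builds of PyTorch expose AMD GPUs through the familiar `torch.cuda` API, with `torch.version.hip` set on HIP-enabled builds. As a minimal sketch (not an AMD-provided tool), a script could detect which backend a given PyTorch install provides:

```python
import importlib.util


def rocm_backend_status() -> str:
    """Report whether a ROCm-enabled PyTorch build is present.

    ROCm builds of PyTorch reuse the torch.cuda API for AMD GPUs
    and set torch.version.hip to the HIP version string; on CUDA
    or CPU-only builds that attribute is None.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch

    hip = getattr(torch.version, "hip", None)
    if hip is None:
        return "torch installed without ROCm/HIP support"
    return f"ROCm/HIP {hip}; GPUs visible: {torch.cuda.device_count()}"


print(rocm_backend_status())
```

The function name and the string messages here are illustrative choices, not part of any AMD or PyTorch API.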

AMD Previews New Accelerators and Reveals Annual Cadence Roadmap

During the keynote, AMD revealed an updated annual cadence for the accelerator roadmap to meet the growing demand for more AI compute. This will help ensure that AMD Instinct accelerators propel the development of next-generation frontier AI models.

The updated annual roadmap highlighted:

  • The new AMD Instinct MI325X accelerator, which will bring 288 GB of HBM3E memory and 6 terabytes per second of memory bandwidth, use the same industry standard Universal Baseboard server design used by the AMD Instinct MI300 series, and be generally available in Q4 2024. The accelerator will have industry-leading memory capacity and bandwidth, 2x and 1.3x better than the competition respectively, and 1.3x better compute performance than the competition.
  • The first product in the AMD Instinct MI350 Series, the AMD Instinct MI350X accelerator, is based on the AMD CDNA 4 architecture and is expected to be available in 2025. It will use the same industry standard Universal Baseboard server design as other MI300 Series accelerators and will be built using advanced 3nm process technology, support the FP4 and FP6 AI datatypes and have up to 288 GB of HBM3E memory.
  • AMD CDNA “Next” architecture, which will power the AMD Instinct MI400 Series accelerators, is expected to be available in 2026 providing the latest features and capabilities that will help unlock additional performance and efficiency for inference and large-scale AI training.
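As a rough back-of-envelope illustration of what the MI325X figures quoted above imply (using decimal units, and ignoring real-world overheads), the minimum time for a bandwidth-bound kernel to stream the entire memory once is simply capacity divided by bandwidth:

```python
# MI325X headline figures from the roadmap above.
MEMORY_GB = 288        # HBM3E capacity, GB
BANDWIDTH_TBPS = 6     # memory bandwidth, TB/s

# Ideal time to read the full memory once: capacity / bandwidth.
seconds = (MEMORY_GB / 1000) / BANDWIDTH_TBPS
print(f"Full-memory sweep: {seconds * 1000:.0f} ms")  # -> 48 ms
```

This kind of sweep time is one reason memory bandwidth matters for LLM inference, where each generated token typically requires reading most of the model weights.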

AMD Instinct

Finally, AMD highlighted that demand for AMD Instinct MI300X accelerators continues to grow, with numerous partners and customers using the accelerators to power their demanding AI workloads.

Read more AMD AI announcements at Computex here and watch a video replay of the keynote on the AMD YouTube page.
