Written by: Izzat Najmi, Journalist, AOPG.
In a striking reversal of fortunes, AMD stepped out of the shadow cast by NVIDIA’s recent announcements around its formidable H100-based AI servers. With a flurry of announcements of its own, AMD unveiled an ambitious AI Platform strategy, offering customers an extensive hardware portfolio spanning the cloud, the edge, and the endpoint. Backed by deep collaborations with industry-leading software providers, AMD aims to reshape the AI landscape by delivering scalable, pervasive AI solutions.
Dr Lisa Su, the dynamic Chair and CEO of AMD, left no room for doubt when she declared the company’s readiness for a grand entrance into AI. Her statement, “AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD,” underscores the company’s focus on deploying AMD AI platforms at scale. This push will be spearheaded by the highly anticipated launch of the Instinct MI300 accelerators later this year, complemented by a growing ecosystem of AI software optimised for AMD hardware.
The CEO of AMD, Dr Lisa Su, during the AMD Data Center and AI Premiere Event at Fairmont San Francisco, California.
AMD AI Platform: Illuminating the Path to Pervasive AI
- Introducing the World’s Most Advanced Accelerator for Generative AI: AMD lifted the veil on the remarkable Instinct MI300 Series accelerator family. The jewel in the lineup is the Instinct MI300X accelerator, touted as the world’s most advanced accelerator for generative AI. Built on the cutting-edge AMD CDNA™ 3 accelerator architecture and equipped with an impressive 192 GB of HBM3 memory, the MI300X is well suited to the compute- and memory-intensive demands of large language model training and inference for generative AI workloads. With that memory capacity, a single MI300X can hold a behemoth such as Falcon-40B, a 40-billion-parameter large language model, entirely within one accelerator.
The AMD Instinct MI300X
- To further elevate the AI experience, AMD introduced the AMD Instinct™ Platform, an industry-standard design that integrates eight MI300X accelerators for large-scale AI inference and training. The MI300X is already sampling to key customers, with wider availability expected in Q3. AMD also announced that the MI300A, billed as the world’s first APU accelerator for HPC and AI workloads, has entered its sampling phase.
The AMD Instinct MI300A
- A Ready, Open AI Software Platform: Alongside the hardware, AMD showcased its ROCm™ software ecosystem for data centre accelerators, underlining its commitment to an open AI software environment. The ecosystem is backed by robust collaborations with industry leaders, exemplified by the partnership between AMD and the PyTorch Foundation: with the release of ROCm 5.4.2, the ROCm software stack provides “day-zero” support for PyTorch 2.0 on all AMD Instinct accelerators. This integration gives developers a vast array of PyTorch-based AI models that work “out of the box” on AMD accelerators.
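To illustrate what “day-zero” PyTorch support means in practice, the short sketch below checks whether the installed PyTorch build targets ROCm and, if so, which accelerator it sees. This is a hypothetical example, not drawn from AMD’s announcement; it assumes a ROCm build of PyTorch, which exposes AMD GPUs through the standard `torch.cuda` API, so existing CUDA-style code typically runs unmodified.

```python
# Hypothetical sketch: detecting a ROCm build of PyTorch.
# Assumes PyTorch is installed; on ROCm builds, torch.version.hip is a
# version string (it is None on CUDA builds), and AMD GPUs appear
# through the familiar torch.cuda API.
import torch


def describe_backend() -> str:
    """Return a short description of the accelerator PyTorch sees."""
    if torch.version.hip is not None and torch.cuda.is_available():
        # ROCm build with a visible AMD GPU (e.g. an Instinct accelerator).
        return f"ROCm {torch.version.hip}: {torch.cuda.get_device_name(0)}"
    return "no ROCm-enabled GPU detected"


print(describe_backend())
```

On a machine with a ROCm build and an Instinct accelerator, this would report the HIP version and device name; everywhere else it falls back to the “not detected” message, so the same script is safe to run on any PyTorch installation.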
Optimising Hugging Face Models
Furthermore, Hugging Face, the premier open platform for AI builders, unveiled plans to optimise thousands of Hugging Face models specifically for AMD platforms, ranging from Instinct accelerators to Ryzen™ and EPYC™ processors, Radeon™ GPUs, and Versal™ and Alveo™ adaptive products.
With this resounding declaration of intent and an impressive portfolio of hardware and software collaborations, AMD firmly establishes its prowess in the AI domain, posing a formidable challenge to the incumbent players. As the battle for AI dominance intensifies, all eyes are on AMD as it charts an exhilarating course towards pervasive AI solutions that promise to reshape the future of computing.