Covering Disruptive Technology Powering Business in The Digital Age

Supermicro Accelerates the Era of AI and the Metaverse with Top-of-the-Line Servers


Supermicro, Inc., a Total IT Solution Provider for Artificial Intelligence and Machine Learning (AI/ML), Cloud, Storage and 5G/Edge, has announced that it has begun shipping its new, top-of-the-line GPU servers featuring the latest NVIDIA HGX H100 8-GPU system. Supermicro also incorporates the new NVIDIA L4 Tensor Core GPU into a wide range of application-optimised servers, from the edge to the data centre.

“Supermicro offers the most comprehensive portfolio of GPU systems in the industry, including servers in 8U, 6U, 5U, 4U, 2U and 1U form factors. These systems also include workstations and SuperBlade systems that support the full range of new NVIDIA H100 GPUs,” said Charles Liang, President and CEO at Supermicro. “With our new NVIDIA HGX H100 Delta-Next server, customers can expect 9x performance gains compared to the previous generation for AI training applications. Our GPU servers have innovative airflow designs that reduce fan speeds, lower noise levels and consume less power, resulting in a reduced total cost of ownership (TCO). In addition, we deliver complete rack-scale liquid-cooling options for customers looking to further future-proof their data centres.”

Supermicro’s most powerful new 8U GPU server is now shipping in volume. Optimised for AI, DL, ML and HPC workloads, this new Supermicro 8U server is powered by the NVIDIA HGX H100 8-GPU, delivering the highest GPU-to-GPU communication through NVIDIA NVLink® 4.0 technology and NVSwitch interconnects, together with NVIDIA Quantum-2 InfiniBand and Spectrum-4 Ethernet networking, to break through the barriers of AI at scale. In addition, Supermicro offers several performance-optimised GPU server configurations, including direct-connect, single-root and dual-root CPU-to-GPU topologies, as well as front- or rear-I/O models with AC and DC power in standard and OCP DC rack configurations. The Supermicro X13 SuperBlade® enclosure accommodates 20 NVIDIA H100 Tensor Core PCIe GPUs or 40 NVIDIA L40 GPUs in an 8U enclosure, while up to 10 NVIDIA H100 PCIe GPUs or 20 NVIDIA L4 Tensor Core GPUs can be used in a 6U enclosure. These new systems deliver the optimised acceleration ideal for running NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform.

Liquid cooling is also supported on many of these GPU servers. In addition, Supermicro is announcing a liquid-cooled AI development system (available in tower or rack-mounted configurations) containing two CPUs and four NVIDIA A100 Tensor Core GPUs. It is ideal for office and home office environments, and can also be deployed in departmental and corporate clusters.

Supermicro systems support the new NVIDIA L4 GPU, which delivers multi-fold gains in acceleration and energy efficiency over previous generations for AI inferencing, video streaming, virtual workstations and graphics applications in the enterprise, in the cloud and at the edge. With NVIDIA’s AI platform and full-stack approach, the L4 is optimised for inference at scale across a broad range of AI applications, including recommendations, voice-based AI avatar assistants, chatbots, visual search and contact centre automation, to deliver the best personalised experiences. As the most efficient NVIDIA accelerator for mainstream servers, the L4 offers up to 4x higher AI performance, greater energy efficiency and over 3x more video streaming capacity and efficiency, with support for AV1 encoding/decoding. The L4 GPU’s versatility for inference and visualisation, combined with its small, energy-efficient, single-slot, low-profile, 72W form factor, makes it ideal for global deployments, including at edge locations.

“Equipping Supermicro servers with the unmatched power of the new NVIDIA L4 Tensor Core GPU is enabling customers to accelerate their workloads efficiently and sustainably,” said Dave Salvator, Director of Accelerated Computing Products at NVIDIA. “Optimised for mainstream deployments, the NVIDIA L4 delivers a low-profile form factor operating in a 72W low-power envelope, taking AI performance and efficiency at the edge to new heights.”

Supermicro’s new PCIe accelerated solutions empower the creation of 3D worlds, digital twins, 3D simulation models and the industrial metaverse. In addition to supporting the previous generations of NVIDIA OVX™ systems, Supermicro offers an OVX 3.0 configuration featuring four NVIDIA L40 GPUs, two NVIDIA ConnectX®-7 SmartNICs, an NVIDIA BlueField®-3 DPU and the latest NVIDIA Omniverse Enterprise™ software.

To learn more about all of Supermicro’s advanced new GPU systems, please visit https://www.supermicro.com/en/accelerators/nvidia. See more about Supermicro at NVIDIA GTC 2023 by registering here: https://register.nvidia.com/events/widget/nvidia/gtcspring2023/sponsorcatalog/exhibitor/1564778120132001ghs2/?ncid=ref-spo-128510.


