Meta Llama 3 Gets Huge Boost from NVIDIA, Promises Optimised Performance, Reduced Costs

NVIDIA has announced optimisations across all its platforms to accelerate Meta Llama 3, the latest large language model (LLM) generation.

The open model combined with NVIDIA accelerated computing equips developers, researchers, and businesses to innovate responsibly across a wide variety of applications.

Meta engineers trained Meta Llama 3 on computer clusters packing 24,576 NVIDIA H100 Tensor Core GPUs, linked with RoCE and NVIDIA Quantum-2 InfiniBand networks.

To further advance the state of the art in generative AI, Meta recently described plans to scale its infrastructure to 350,000 H100 GPUs.

Putting Meta Llama 3 to Work

Versions of Meta Llama 3, accelerated on NVIDIA GPUs, are available today for use in the cloud, data centre, edge, and PC.

From a browser, developers can try Meta Llama 3 at ai.nvidia.com. It is packaged as an NVIDIA NIM microservice with a standard application programming interface that can be deployed anywhere.
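
NIM microservices expose an OpenAI-compatible chat completions API, so the model can be called with a few lines of Python. The snippet below is a minimal sketch rather than an official example: the base URL, the model identifier, and the NVIDIA_API_KEY environment variable are assumptions modelled on NVIDIA's API catalogue and may differ for your deployment.

import os
from openai import OpenAI

# Minimal sketch: query Llama 3 through an OpenAI-compatible NIM endpoint.
# The base URL and model identifier are assumptions; check the NVIDIA API
# catalogue entry for the exact values before running.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed variable name
)

response = client.chat.completions.create(
    model="meta/llama3-70b-instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "What is a NIM microservice?"}],
    max_tokens=128,
)
print(response.choices[0].message.content)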

Businesses can fine-tune Meta Llama 3 with their data using NVIDIA NeMo, an open-source framework for LLMs that’s part of the secure, supported NVIDIA AI Enterprise platform. Custom models can be optimised for inference with NVIDIA TensorRT-LLM and deployed with NVIDIA Triton Inference Server.
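
To make the last step concrete, a TensorRT-LLM model hosted by Triton can be queried over HTTP through the server's generate endpoint. The sketch below is illustrative only: the server address, the model name llama3_70b, and the text_input/text_output field names are assumptions that depend on how the Triton model repository is configured.

import requests

# Sketch: call Triton Inference Server's HTTP generate endpoint for a
# TensorRT-LLM model. The model name and field names are assumptions;
# they follow a common TensorRT-LLM backend ensemble layout.
url = "http://localhost:8000/v2/models/llama3_70b/generate"
payload = {
    "text_input": "What is retrieval-augmented generation?",
    "max_tokens": 64,
}
resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["text_output"])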

Taking Meta Llama 3 to Devices and PCs

Llama 3 also runs on NVIDIA Jetson Orin for robotics and edge computing devices, creating interactive agents like those in the Jetson AI Lab.

In addition, NVIDIA RTX and GeForce RTX GPUs for workstations and PCs speed inference on Llama 3. These systems give developers a target of more than 100 million NVIDIA-accelerated systems worldwide.

Get Optimal Performance with Meta Llama 3

Best practice in deploying an LLM for a chatbot involves balancing low latency, good reading speed, and optimal GPU utilisation to reduce costs.

Such a service needs to deliver tokens (the rough equivalent of words to an LLM) at about twice a typical user's reading speed, a delivery rate of about 10 tokens/second.

Applying these metrics, a single NVIDIA H200 Tensor Core GPU generated about 3,000 tokens/second—enough to serve about 300 simultaneous users—in an initial test using the version of Llama 3 with 70 billion parameters.

That means a single NVIDIA HGX server with eight H200 GPUs could deliver 24,000 tokens/second, further optimising costs by supporting more than 2,400 users at the same time.
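
The sizing arithmetic behind those figures is simple to reproduce. The short sketch below uses the throughput numbers quoted above together with the 10 tokens/second per-user delivery target; real capacity planning would also account for batching behaviour and latency targets.

# Back-of-the-envelope serving capacity from the figures quoted above.
TOKENS_PER_USER = 10  # target delivery rate per user (tokens/second)

def concurrent_users(throughput_tokens_per_s: float) -> int:
    """Users served if each consumes TOKENS_PER_USER tokens/second."""
    return int(throughput_tokens_per_s // TOKENS_PER_USER)

print(concurrent_users(3_000))      # one H200 GPU: 300 users
print(concurrent_users(8 * 3_000))  # HGX server, eight H200s: 2,400 users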

For edge devices, the version of Llama 3 with eight billion parameters generated up to 40 tokens/second on Jetson AGX Orin and 15 tokens/second on Jetson Orin Nano.

Advancing Community Models

An active open-source contributor, NVIDIA is committed to optimising community software that helps users address their toughest challenges. Open-source models also promote AI transparency and let users broadly share work on AI safety and resilience.

Learn more about NVIDIA's AI inference platform, including how NIM, TensorRT-LLM, and Triton use state-of-the-art techniques such as low-rank adaptation to accelerate the latest LLMs.
