
NVIDIA GH200 Grace Hopper™ Superchip

The NVIDIA GH200 Grace Hopper Superchip’s breakthrough design forms a high-bandwidth connection between the NVIDIA Grace™ CPU and Hopper™ GPU to enable the era of accelerated computing and generative AI.

LAMBDA RESERVED CLOUD

Lambda Reserved Cloud powered by NVIDIA GH200

Dedicated, bare metal, hosted clusters optimized for distributed training. A single GH200 Superchip provides 576 GB of coherent memory (480 GB of LPDDR5X CPU memory plus 96 GB of HBM3 GPU memory), delivering unmatched efficiency and price for that memory footprint. Reserve a cloud cluster with Lambda and be one of the first in the industry to train LLMs on the most versatile compute platform in the world, the NVIDIA GH200. Our GH200-powered cloud clusters start at $3.99/hr.

NVIDIA GH200

NVIDIA GH200 Grace Hopper Superchip

Up to 10X higher performance than the NVIDIA A100 for applications processing terabytes of data, helping scientists and researchers reach unprecedented solutions for the world's most complex problems.

Power and efficiency with the Grace CPU

The NVIDIA Grace CPU was designed for high single-threaded performance, high memory bandwidth, and outstanding data-movement capabilities, striking an optimal balance of performance and energy efficiency.
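To make the memory-bandwidth claim concrete, the sketch below is a minimal STREAM-style triad you could run on a Grace CPU node. The array size, use of OpenMP, and build flags are our own assumptions for illustration, not part of NVIDIA's or Lambda's tooling.

```cpp
// stream_triad.cu -- minimal STREAM-style triad sketch to gauge CPU memory bandwidth.
// Build (example): nvcc -O3 -Xcompiler -fopenmp stream_triad.cu -o stream_triad
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1ull << 27;                 // ~134M doubles per array (~1 GB each)
    std::vector<double> a(n, 1.0), b(n, 2.0), c(n, 0.0);
    const double scalar = 3.0;

    auto t0 = std::chrono::steady_clock::now();
    #pragma omp parallel for                     // spread the loop across the Grace cores
    for (size_t i = 0; i < n; ++i)
        c[i] = a[i] + scalar * b[i];             // triad: 2 reads + 1 write per element
    auto t1 = std::chrono::steady_clock::now();

    double secs = std::chrono::duration<double>(t1 - t0).count();
    double gbytes = 3.0 * n * sizeof(double) / 1e9;
    std::printf("Triad bandwidth: %.1f GB/s\n", gbytes / secs);
    return 0;
}
```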

Performance and speed with the NVIDIA GH200

The GH200 delivers up to 10X higher performance than the NVIDIA A100 for applications processing terabytes of data, helping scientists and researchers reach unprecedented solutions for the world's most complex problems.

The Power of Coherent Memory

Over NVLink-C2C, the NVIDIA GH200 delivers 900 GB/s of bandwidth between CPU and GPU, 7X the bandwidth typically found in PCIe-based accelerated systems. The connection provides unified cache coherence with a single memory address space that combines system memory and HBM GPU memory for simplified programmability.
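As a hedged illustration of what that single address space means in practice, the sketch below passes memory allocated with plain new[] on the Grace CPU straight to a GPU kernel. This relies on the GH200's hardware-coherent NVLink-C2C and a recent CUDA toolkit; on a conventional PCIe system you would use cudaMallocManaged or explicit copies instead. The file name and sizes are illustrative.

```cpp
// coherent_memory.cu -- sketch of the GH200 single address space.
// The kernel reads and writes memory allocated with ordinary new[] on the CPU;
// no cudaMemcpy or cudaMallocManaged is needed because NVLink-C2C keeps the
// CPU and GPU views of the allocation cache-coherent.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(double* data, double factor, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;            // touches CPU-allocated memory directly
}

int main() {
    const size_t n = 1ull << 20;
    double* data = new double[n];            // ordinary system (LPDDR5X) allocation
    for (size_t i = 0; i < n; ++i) data[i] = 1.0;

    scale<<<(n + 255) / 256, 256>>>(data, 2.0, n);
    cudaDeviceSynchronize();                 // GPU writes are now visible to the CPU

    std::printf("data[0] = %.1f (expected 2.0)\n", data[0]);
    delete[] data;
    return 0;
}
```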

NVIDIA GH200 TECH SPECS
CPU: NVIDIA Grace, 72 cores
GPU: NVIDIA H100
CPU memory: 480 GB LPDDR5X at 512 GB/s
GPU memory: 96 GB HBM3 at 4 TB/s
NVLink-C2C bandwidth: 900 GB/s
Availability: Lambda Reserved Cloud
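To cross-check the GPU half of these numbers on a node you've reserved, a short CUDA query like the sketch below (our own example, not Lambda tooling) prints what the driver reports:

```cpp
// query_device.cu -- print the Hopper GPU's properties to compare with the spec table.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, 0);       // device 0: the H100 on the Superchip

    std::printf("GPU name:   %s\n", prop.name);
    std::printf("GPU memory: %.0f GB\n", prop.totalGlobalMem / 1e9);
    std::printf("SM count:   %d\n", prop.multiProcessorCount);
    return 0;
}
```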
BENCHMARKS
GPU benchmarks

Lambda GPU comparisons

Lambda's GPU benchmarks for deep learning are run on more than a dozen different GPU types in multiple configurations. GPU performance is measured running models for computer vision (CV), natural language processing (NLP), text-to-speech (TTS), and more. Visit our benchmarks page to get started.

RESOURCES

Resources for deep learning

Explore Lambda's deep learning materials, including our blog, technical documentation, research, and more. We've curated a diverse set of resources for ML and AI professionals to help you on your journey.