
Lambda Cloud Clusters to support NVIDIA H200 Tensor Core GPUs

Lambda will be one of the first cloud providers in the world to offer customers access to NVIDIA H200 Tensor Core GPUs through Lambda Cloud Clusters. Lambda Cloud Clusters are dedicated GPU clusters designed for machine learning teams who need the highest-performance GPUs, networking, and storage for large-scale distributed training. Lambda Cloud Clusters feature the fastest and most powerful GPUs available — including NVIDIA H100 Tensor Core GPUs, NVIDIA GH200 Grace Hopper Superchips, and now NVIDIA H200 GPUs — and leverage non-blocking 400 Gb/s NVIDIA Quantum-2 InfiniBand networking. This means customers can benefit from high throughput, low latency, and support for NVIDIA GPUDirect RDMA across the entire cluster.
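For teams planning multi-node training on a cluster like this, the sketch below shows how a typical PyTorch job initializes the NCCL backend, which can use GPUDirect RDMA over an InfiniBand fabric when the drivers and network support it. This is a generic, minimal example and not Lambda-specific tooling; the model and layer sizes are placeholders.

```python
# Minimal multi-node distributed training setup, assuming a standard
# PyTorch + NCCL environment. Launch with torchrun, e.g.:
#   torchrun --nnodes=<N> --nproc-per-node=8 train.py
import os
import torch
import torch.distributed as dist

def init_distributed():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank

if __name__ == "__main__":
    local_rank = init_distributed()
    # Placeholder model; wrap it for data-parallel training across all GPUs.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    ddp_model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
    dist.destroy_process_group()
```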

Key benefits of Lambda Cloud Clusters powered by NVIDIA H200 GPUs

The NVIDIA H200 GPU supercharges generative AI and HPC with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200 GPU’s faster, larger memory fuels the acceleration of generative AI and LLMs while advancing scientific computing for HPC workloads. 

Nearly double the GPU memory of the prior-generation NVIDIA H100 GPU

The NVIDIA H200 GPU offers 141GB of HBM3e memory, nearly double the capacity of the prior-generation H100 GPU. Combined with NVIDIA NVLink technology, that added memory delivers the highest-performance model parallelism for today’s LLMs and generative AI. The larger capacity also lets bigger models be loaded into memory, or batch sizes be increased, for more efficient inference and training of massive LLMs.
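As a rough illustration of why that capacity matters, the sketch below estimates how much HBM a model’s weights and KV cache consume. The model sizes and shapes are hypothetical examples, not official sizing guidance.

```python
# Back-of-the-envelope memory sizing (illustrative assumptions only).
def weights_gb(params_billion, bytes_per_param=2):
    # FP16/BF16 weights: 2 bytes per parameter.
    return params_billion * 1e9 * bytes_per_param / 1e9

def kv_cache_gb(layers, hidden, seq_len, batch, bytes_per_val=2):
    # 2x for keys and values; assumes full multi-head attention (no GQA).
    return 2 * layers * hidden * seq_len * batch * bytes_per_val / 1e9

# Hypothetical 70B-parameter model at FP16:
print(weights_gb(70))                  # ~140 GB of weights, nearly filling one 141GB H200
print(kv_cache_gb(80, 8192, 4096, 1))  # ~10.7 GB of KV cache for one 4K-token sequence
```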

Unmatched memory bandwidth

The NVIDIA H200 GPU’s HBM3e delivers 4.8TB/s of memory bandwidth, 1.4X faster than the H100 GPU with HBM3. Increased memory bandwidth and capacity are critical for the growing datasets and model sizes of today’s leading LLMs.
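To see why bandwidth sets the ceiling for memory-bound work such as LLM token generation, here is a back-of-the-envelope estimate: each decoded token must stream the model’s weights from HBM, so peak bandwidth divided by weight bytes gives an upper bound on single-GPU tokens per second. The H100 bandwidth figure is the published HBM3 spec, and the model size is an illustrative assumption, not a benchmark result.

```python
# Roofline-style decode throughput ceiling (illustrative assumptions only).
H200_BW_TBS = 4.8    # HBM3e bandwidth cited in this post
H100_BW_TBS = 3.35   # published H100 SXM HBM3 bandwidth

def decode_tokens_per_sec_ceiling(bandwidth_tbs, params_billion, bytes_per_param=2):
    weight_bytes = params_billion * 1e9 * bytes_per_param
    return bandwidth_tbs * 1e12 / weight_bytes

# Hypothetical 13B-parameter model in FP16 on a single GPU:
print(decode_tokens_per_sec_ceiling(H200_BW_TBS, 13))  # ~185 tokens/s ceiling
print(decode_tokens_per_sec_ceiling(H100_BW_TBS, 13))  # ~129 tokens/s ceiling
```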

How to deploy NVIDIA H200 GPUs

This collaboration with NVIDIA enables Lambda to be one of the first to market with NVIDIA H200 GPUs. This means Lambda customers, from Fortune 500 companies to AI startups, continue to have access to the fastest and most effective cloud infrastructure to power their largest and most demanding AI training projects. Visit Lambda’s Cloud Cluster page to learn more and request NVIDIA H200 GPU allocation.