Lambda among first NVIDIA Cloud Partners to deploy NVIDIA Blackwell-based GPUs

Lambda will be one of the first NVIDIA Cloud Partners to deploy the latest NVIDIA accelerated computing solutions announced at the NVIDIA GTC global AI conference. The NVIDIA GB200 Grace Blackwell Superchip and NVIDIA B200 and B100 Tensor Core GPUs will be available through Lambda’s On-Demand & Reserved Cloud, and NVIDIA DGX SuperPODs based on the NVIDIA Blackwell architecture will be deployed in Lambda’s AI-Ready Data Centers. 

Machine learning teams choose Lambda for access to the latest NVIDIA GPUs across every compute modality, from cloud to on-prem to rent-to-own. Lambda’s On-Demand Cloud will feature instances with 1x or 8x NVIDIA B200 GPUs that can be spun up in under 5 minutes and billed by the hour.
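
For On-Demand Cloud, launching capacity is an API call. Below is a minimal sketch using Lambda’s public Cloud API; the `gpu_8x_b200` instance type name and the region are placeholders assumed to follow the naming of existing instance types, so check the live instance-types endpoint for the real identifiers once Blackwell instances ship.

```python
import requests

API_KEY = "YOUR_LAMBDA_API_KEY"  # generated in the Lambda Cloud dashboard
BASE_URL = "https://cloud.lambdalabs.com/api/v1"

# Launch a single 8x GPU instance. "gpu_8x_b200" is a placeholder name;
# query GET {BASE_URL}/instance-types for the actual identifier.
resp = requests.post(
    f"{BASE_URL}/instance-operations/launch",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "region_name": "us-east-1",          # placeholder region
        "instance_type_name": "gpu_8x_b200", # placeholder instance type
        "ssh_key_names": ["my-ssh-key"],
        "quantity": 1,
        "name": "blackwell-test",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["data"]["instance_ids"])
```

Swapping the instance type name in the same call covers the 1x configuration.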

Lambda’s Reserved Cloud will feature blocks of 64 to 2,040 NVIDIA B200 and GB200 NVL GPUs connected with NVIDIA InfiniBand, available on 1-3 year contracts with enterprise-grade security and SLAs.

Enterprises that want to own their infrastructure can purchase NVIDIA DGX SuperPODs featuring Blackwell GPUs, deployed in Lambda’s AI-Ready Data Centers and supported by Lambda’s ML engineers.

NVIDIA GB200 Grace Blackwell Superchip

The NVIDIA GB200 Grace Blackwell Superchip combines two Blackwell GPUs and one NVIDIA Grace CPU. This scales up to the GB200 NVL72, a 72-GPU NVIDIA NVLink-connected system in a liquid-cooled rack that acts as a single massive GPU, delivering 1.4 exaFLOPS of AI performance and 30TB of fast memory. GB200 delivers 30X faster real-time LLM inference and 4X faster training performance for large language models like GPT-MoE-1.8T compared to the NVIDIA Hopper architecture generation.
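
Those headline figures are internally consistent. As a back-of-envelope check, assuming NVIDIA’s rack-level FP4 number divides evenly across the 72 GPUs (roughly 20 petaFLOPS each):

```python
# Back-of-envelope check of the GB200 NVL72 headline compute figure.
# Assumes ~20 PFLOPS of FP4 throughput per Blackwell GPU in this
# configuration, i.e. the rack-level number divided across 72 GPUs.
gpus = 72
fp4_pflops_per_gpu = 20  # assumed per-GPU FP4 throughput
total_exaflops = gpus * fp4_pflops_per_gpu / 1000
print(f"{total_exaflops:.2f} exaFLOPS")  # -> 1.44, matching the ~1.4 EF claim
```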

For the highest AI performance, the GB200 Superchip supports the latest NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms with speeds of up to 800 Gb/s. The GB200 NVL72 also includes NVIDIA BlueField-3 DPUs to enable cloud networking, composable storage, zero-trust security, and GPU compute elasticity in Lambda’s cluster offerings.

NVIDIA B200 Tensor Core GPU

NVIDIA Blackwell B200 Tensor Core GPUs deliver up to 15X faster real-time inference and 3X faster training for trillion-parameter language models compared to the NVIDIA Hopper architecture generation.

NVIDIA B100 Tensor Core GPU

The B100 Tensor Core GPU is designed for the most demanding AI workloads, bringing real-time inference to trillion-parameter large language models.

NVIDIA Blackwell Architectural Innovations

The new Blackwell architecture introduces six revolutionary technologies that enable organizations to build and run real-time inference on trillion-parameter large language models. Along with being the basis of the world’s most powerful chip, packed with 208 billion transistors, Blackwell includes a second-generation Transformer Engine, fifth-generation NVLink interconnect, advanced confidential computing capabilities, a dedicated decompression engine, and a RAS engine that adds preventative maintenance, diagnostics, and reliability forecasting at the chip level.
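
The second-generation Transformer Engine pairs new low-precision formats (down to FP4) with fine-grained scaling, so small blocks of values share a scale factor and 4-bit codes track local dynamic range. Here is a toy NumPy sketch of that general block-scaling idea; NVIDIA’s FP4 is a floating-point format implemented in hardware, so this is an illustration of the concept, not the Transformer Engine itself:

```python
import numpy as np

def quantize_block_4bit(x, block=32):
    """Illustrative block-scaled 4-bit quantization: each block of
    `block` values shares one scale factor, so the 4-bit codes adapt
    to local dynamic range (the idea behind micro-tensor scaling)."""
    x = x.reshape(-1, block)
    scale = np.abs(x).max(axis=1, keepdims=True) / 7 + 1e-12  # codes in [-7, 7]
    q = np.clip(np.round(x / scale), -7, 7).astype(np.int8)
    return q, scale

x = np.random.randn(8, 32).astype(np.float32)
q, scale = quantize_block_4bit(x)
reconstructed = q * scale
print("mean abs error:", float(np.abs(reconstructed - x.reshape(-1, 32)).mean()))
```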

Accelerated Networking Platforms for AI

NVIDIA Quantum-X800 InfiniBand forms the foundation of the AI compute fabric, capable of scaling beyond 10,000 GPUs in a two-level fat-tree topology, 5X more than the previous NVIDIA Quantum-2 InfiniBand generation. NVIDIA Spectrum-4 Ethernet switches with NVIDIA BlueField-3 DPU platforms are used to scale across the data center, providing accelerated GPU access to data, secure cloud multi-tenancy, and efficient data center operations.
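
That scaling figure follows from standard fat-tree arithmetic: a non-blocking two-level fat tree built from radix-r switches connects r²/2 end points. A quick check, assuming the published port counts of 144 ports of 800 Gb/s on the Quantum-X800 switch versus 64 ports of 400 Gb/s on Quantum-2:

```python
# End points in a non-blocking two-level fat tree: each leaf switch uses
# half its ports for hosts and half for uplinks, so capacity = radix**2 / 2.
def two_level_fat_tree_endpoints(radix):
    return radix ** 2 // 2

quantum_x800 = two_level_fat_tree_endpoints(144)  # 144-port XDR switch
quantum_2 = two_level_fat_tree_endpoints(64)      # 64-port NDR switch
print(quantum_x800, quantum_2, round(quantum_x800 / quantum_2, 1))
# -> 10368 2048 5.1  (beyond 10,000 GPUs, ~5X the Quantum-2 generation)
```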

What’s next for NVIDIA Blackwell GPUs at Lambda

Our collaboration with NVIDIA enables Lambda to be one of the first to market with NVIDIA Blackwell GPUs, whether in our cloud or in your data center. Lambda customers, from AI startups to the Fortune 500, will continue to have access to the fastest and most effective infrastructure to power their largest and most demanding AI training projects. Reach out to our team to learn more.