Lambda Cloud Deploys On-Demand NVIDIA HGX H100 with 8x H100 SXM Instances
Lambda Cloud now offers on-demand HGX H100 systems with 8x NVIDIA H100 SXM Tensor Core GPU instances for only $2.59/hr/GPU. The newest addition to Lambda Cloud gives users additional flexibility when they need more compute power to build and fine-tune generative AI models.
Earlier this year, Lambda Cloud added 1x NVIDIA H100 PCIe Tensor Core GPU instances at just $1.99/hr/GPU. The newly deployed NVIDIA HGX H100 with 8x SXM GPU instances are ideal for more complex, larger-scale tasks, offering significantly more compute power, enhanced scalability, high-bandwidth GPU-to-GPU communication and shared memory access, and optimal performance density.
8x H100 SXM Specs
- VRAM per GPU: 80 GB
- vCPUs: 220
- RAM: 1.8 TB
- Storage: 24.3 TiB (26 TB) NVMe SSD
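Instances like the one specced above can be launched programmatically through the Lambda Cloud API. The sketch below is a minimal, hypothetical example: the endpoint path, the instance-type name (`gpu_8x_h100_sxm5`), the region, and the payload field names are assumptions and should be confirmed against Lambda's API documentation before use.

```python
# Hypothetical sketch: launching an 8x H100 SXM instance via the Lambda
# Cloud API. Endpoint path, instance-type name, and payload fields are
# assumptions here -- verify them against Lambda's official API docs.
import json
import urllib.request

API_URL = "https://cloud.lambdalabs.com/api/v1/instance-operations/launch"

def build_launch_payload(region: str, ssh_key: str,
                         instance_type: str = "gpu_8x_h100_sxm5") -> dict:
    """Assemble the JSON body for a launch request."""
    return {
        "region_name": region,
        "instance_type_name": instance_type,
        "ssh_key_names": [ssh_key],
    }

def launch_instance(api_key: str, payload: dict) -> bytes:
    """POST the launch request (requires a valid API key; not run here)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# Build (but do not send) a request body for illustration.
payload = build_launch_payload("us-east-1", "my-ssh-key")
print(json.dumps(payload))
```

In practice you would call `launch_instance()` with an API key generated from the Lambda Cloud dashboard; here only the payload is constructed so the snippet runs without credentials.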
Next-Level Large-Scale Training: NVIDIA H100 Lambda Cloud Clusters
Lambda Cloud Clusters are designed for 64 to 2,040+ NVIDIA H100 GPUs in a single non-blocking NVIDIA Quantum-2 400Gb/s InfiniBand network. Lambda’s dedicated HGX H100 clusters feature 80 GB NVIDIA H100 SXM5 GPUs at $1.89/hr/GPU. Engineered for distributed training, Lambda Cloud Clusters are ideal for training foundation models and LLMs. By reserving an NVIDIA H100 cluster in the cloud, Lambda customers get access to the same NVIDIA H100 GPUs, compute power, high-bandwidth networking, and parallel storage as on-prem hardware, without the capital infrastructure cost.
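The two GPU-hour rates quoted in this post ($2.59/hr/GPU on-demand vs. $1.89/hr/GPU for a reserved cluster) make the reserved-vs-on-demand tradeoff easy to quantify. The snippet below is a back-of-the-envelope sketch; the 64-GPU, 30-day example run is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope comparison of the two GPU-hour rates from the post:
# $2.59/hr/GPU on-demand vs. $1.89/hr/GPU on a reserved H100 cluster.
ON_DEMAND_RATE = 2.59   # USD per GPU-hour (on-demand 8x H100 SXM)
RESERVED_RATE = 1.89    # USD per GPU-hour (reserved HGX H100 cluster)

def run_cost(gpus: int, hours: float, rate: float) -> float:
    """Total USD for a training run on `gpus` GPUs for `hours` hours."""
    return round(gpus * hours * rate, 2)

# Illustrative example: a 64-GPU cluster (the smallest size listed)
# running continuously for 30 days.
gpus, hours = 64, 30 * 24
on_demand = run_cost(gpus, hours, ON_DEMAND_RATE)
reserved = run_cost(gpus, hours, RESERVED_RATE)
savings = round(on_demand - reserved, 2)
print(on_demand, reserved, savings)
```

At these rates the $0.70/GPU-hour difference compounds quickly at cluster scale, which is the economic case for reserving capacity for long training runs.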
Lambda is continuing to add more on-demand cloud and cloud cluster capacity, in addition to developing and launching features that will make Lambda the best cloud in the world for training AI.
In the meantime, you can also learn more about Lambda systems powered by NVIDIA H100 GPUs on Lambda's website.