
Lambda launches new Hyperplane Server with NVIDIA H100 GPUs and AMD EPYC 9004 series CPUs


Lambda has launched a new Hyperplane server with NVIDIA H100 GPUs and AMD EPYC 9004 series CPUs. The new AI server combines the fastest GPU on the market, NVIDIA H100, with AMD’s flagship EPYC 9004 data center CPU to deliver strong performance, scalability, and security for every workload. This lets Lambda customers train models and run inference faster, with better performance per dollar.

The new HGX H100 servers with AMD CPUs are shipping now, available in Lambda Cloud Clusters starting at $1.89 per H100 per hour, as well as in on-prem Hyperplane Servers and Clusters.

Hyperplane 8-H100 HGX Server Specs

  • Hyperplane 8-H100 HGX platform with AMD EPYC 9004
  • 8x NVIDIA H100 SXM Tensor Core GPUs
  • NVLink and NVSwitch fabric for distributed training (see the sketch after this list)
  • Dual AMD EPYC 9004 CPUs with up to 256 cores total
  • Up to 6TB of DDR5 system memory and 480TB of local NVMe storage
  • 12x PCIe Gen 5 expansion slots supporting up to 400 Gb/s networking
  • HGX system architecture with 1:1 NIC-to-GPU ratio for scalable GPUDirect RDMA
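The NVLink/NVSwitch fabric is what carries the gradient collectives when training across all eight H100s in a single Hyperplane node. As a rough, non-Lambda-specific illustration, here is a minimal PyTorch DistributedDataParallel sketch that would run on one 8-GPU server; the model, data, and hyperparameters are placeholders, and the script assumes it is launched with torchrun.

```python
# Minimal single-node data-parallel sketch (assumption: launched via
# `torchrun --nproc_per_node=8 train.py` on one 8-GPU Hyperplane node).
# NCCL performs the gradient all-reduce over the NVLink/NVSwitch fabric.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")     # NCCL backend for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Placeholder model and optimizer; a real workload would bring its own.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device=local_rank)  # synthetic batch
        loss = model(x).square().mean()
        loss.backward()                               # gradients all-reduced across the 8 GPUs
        opt.step()
        opt.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```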


NVIDIA HGX H100 servers with AMD CPUs in Lambda Cloud Clusters

Lambda Cloud Clusters are designed for 64 to 2,040+ NVIDIA H100 GPUs in a single non-blocking NVIDIA Quantum-2 InfiniBand network. Lambda’s dedicated HGX H100 clusters feature 80GB NVIDIA H100 SXM5 GPUs at $1.89/GPU/hour. Now, with the release of the new Lambda Hyperplane server, Lambda customers have the option to equip their systems with 4th generation AMD EPYC 9004 Series processors with up to 128 cores per CPU. The new server enables faster training times and better performance per dollar for generative AI models and other distributed training workloads.
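At cluster scale, the same kind of script spans many nodes, with NCCL moving collectives over InfiniBand and GPUDirect RDMA between servers and over NVLink/NVSwitch within each server. The sketch below is a hedged sanity check, not Lambda-provided tooling: the node count and rendezvous endpoint in the launch comment are placeholders, and exact settings depend on how a given cluster is configured.

```python
# Cluster-scale sanity check sketch (assumption: run on every node with torchrun, e.g.
# `torchrun --nnodes=<N> --nproc_per_node=8 --rdzv_backend=c10d \
#           --rdzv_endpoint=<head-node>:29500 check.py`).
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
torch.cuda.set_device(local_rank)

# Each rank contributes a 1; the all-reduced sum should equal the total GPU count.
t = torch.ones(1, device=local_rank)
dist.all_reduce(t)
if dist.get_rank() == 0:
    print(f"world size {dist.get_world_size()}, all_reduce sum {int(t.item())}")

dist.destroy_process_group()
```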