
The world's fastest server for AI research

Now available with NVIDIA H100 Tensor Core GPUs

Microsoft · Intuitive · Amazon · Anthem · Raytheon · Argonne · Sony · John Deere · IBM · Google · Caltech · Berkeley · Netflix

10,000+ research teams trust Lambda

NOW AVAILABLE

Lambda Reserved Cloud with NVIDIA H100 GPUs and AMD EPYC 9004 series CPUs

Lambda’s Hyperplane HGX server, with NVIDIA H100 GPUs and AMD EPYC 9004 series CPUs, is now available for order in Lambda Reserved Cloud, starting at $1.89 per H100 per hour! By combining the fastest GPU type on the market with the world’s best data center CPU, you can train and run inference faster with superior performance per dollar.
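
As a rough illustration of how that rate adds up (a back-of-the-envelope estimate only, not a quote; actual reserved pricing depends on term length and configuration):

    # Back-of-the-envelope cost for one 8x H100 node at the advertised rate.
    # Illustrative only; actual reserved pricing depends on term and configuration.
    GPUS_PER_NODE = 8
    RATE_PER_GPU_HOUR = 1.89  # USD per H100 per hour

    hourly = GPUS_PER_NODE * RATE_PER_GPU_HOUR   # 15.12 USD per node-hour
    monthly = hourly * 24 * 30                   # ~10,886 USD for a 30-day month
    print(f"Node cost: ${hourly:.2f}/hour, about ${monthly:,.0f}/month")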

Spec Highlights

Engineered for your workload

Tell us about your research and we'll design a machine that's perfectly tailored to your needs.

              HYPERPLANE A100                    HYPERPLANE H100
GPUs          4x or 8x A100 80GB SXM4            8x H100 80GB SXM5
CPUs          Up to 128 cores and 256 threads    Up to 192 cores and 384 threads
Memory        Up to 8 TB DDR4                    Up to 8 TB DDR5
Storage       Up to 184 TB of NVMe SSDs          Up to 492 TB of NVMe SSDs
Networking    Up to 200 Gb/s                     Up to 400 Gb/s
Echelon Clusters

Easily scale from server to cluster

As your team's compute needs grow, Lambda's in-house HPC engineers and AI researchers can help you integrate Hyperplane and Scalar servers into GPU clusters designed for deep learning.

  • Compute
    Scale to thousands of GPUs for distributed training or hyperparameter optimization (see the training sketch after this list).
  • Storage
    High-performance parallel file systems optimized for ML workloads.
  • Networking
    Compute and storage fabrics built for GPUDirect RDMA and GPUDirect Storage.
  • Software
    A fully integrated software stack for MLOps and cluster management.
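
For a sense of what training at this scale looks like in practice, here is a minimal multi-GPU training sketch using PyTorch's DistributedDataParallel, launched with torchrun. The tiny model and random data are placeholders, and nothing below is specific to Lambda's cluster software.

    # Minimal multi-GPU training loop with PyTorch DistributedDataParallel.
    # Launch with: torchrun --nproc_per_node=8 train.py
    # (add --nnodes and rendezvous flags for multi-node runs).
    # The model and data are placeholders for illustration only.
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")      # NCCL handles GPU-to-GPU collectives
        local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
        model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for step in range(100):
            x = torch.randn(32, 1024, device=local_rank)  # placeholder batch
            loss = model(x).square().mean()
            loss.backward()                               # gradients all-reduced across ranks
            optimizer.step()
            optimizer.zero_grad()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()
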
Lambda Premium Support

Service and support by technical experts who specialize in machine learning

Lambda Premium Support includes:

  • Up to a 5-year extended warranty with advanced parts replacement
  • Live technical support from Lambda's team of ML engineers
  • Support for the ML software included in Lambda Stack: PyTorch®, TensorFlow, CUDA, cuDNN, and NVIDIA drivers
Lambda Stack

Plug in. Start training.

Our servers include Lambda Stack, which manages frameworks like PyTorch® and TensorFlow. With Lambda Stack, you can stop worrying about broken GPU drivers and focus on your research.

  • Zero configuration required
    All your favorite frameworks come pre-installed (see the quick check after this list).
  • Easily upgrade PyTorch® and TensorFlow
    When a new version is released, just run a single upgrade command.
  • No more broken GPU drivers
    Drivers "just work" and stay compatible with popular frameworks.
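
Here is one quick way to confirm everything is ready once your server is plugged in (a generic PyTorch check, not a Lambda-specific tool):

    # Quick sanity check that the preinstalled GPU stack is working.
    # Plain PyTorch; nothing here is specific to Lambda Stack.
    import torch

    print("PyTorch version:", torch.__version__)
    print("CUDA available: ", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("CUDA build:     ", torch.version.cuda)
        for i in range(torch.cuda.device_count()):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
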
Lambda Colocation

Your servers. Our datacenter.

Lambda Colocation makes it easy to deploy and scale your machine learning infrastructure. We'll manage racking, networking, power, cooling, hardware failures, and physical security. Your servers will run in a Tier 3 data center with state-of-the-art cooling that's designed for GPUs. You'll get remote access to your servers, just like a public cloud.

Fast support
If hardware fails, our on-site data center engineers can quickly debug and replace parts.
Optimal performance
Our state-of-the-art cooling keeps your GPUs cool to maximize performance and longevity.
High availability
Our Tier 3 data center has redundant power and cooling to ensure your servers stay online.
No network setup
We handle all network configuration and provide you with remote access to your servers.
Tech Specs

Technical Specifications
