Lambda Hyperplane A100

A new standard for
deep learning hardware

See up to 40% faster training with the new 4x and 8x NVIDIA® A100 Tensor Core GPU servers from Lambda.

Lambda Hyperplane 4-A100

4 NVIDIA A100 Tensor Core GPUs with NVLink™ & Mellanox InfiniBand

System Specifications

  • GPU Details

    4x NVIDIA A100 SXM4-40GB + NVLink

  • Processor

    2x AMD EPYC™ Processors (up to 64 cores each)

  • System RAM

    512 GB

  • Storage

    Up to 60 TB NVMe

  • Network Interface

    Up to 4x Mellanox InfiniBand HDR 200 Gbps cards

Hyperplane 4-A100 pricing starting at

$65,000

Lambda Hyperplane 8-A100

8 NVIDIA A100 Tensor Core GPUs with NVLink, NVSwitch™ & Mellanox InfiniBand

System Specifications

  • GPU Details

    8x NVIDIA A100 SXM4-40GB + NVSwitch

  • Processor

    2x AMD EPYC™ or Intel Processors

  • System RAM

    1 TB

  • Storage

    Up to 96 TB NVMe

  • Network Interface

    Up to 9x Mellanox InfiniBand HDR 200 Gbps cards

Hyperplane 8-A100 pricing starting at

$140,000

Major deep learning frameworks and tools pre-installed

TensorFlow
Jupyter
PyTorch
Keras
Caffe
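On a delivered machine, the pre-installed stack can be confirmed from Python. The snippet below is a generic sketch, not Lambda-specific tooling; note that import names can differ from package names (PyTorch imports as `torch`).

```python
import importlib.util

def is_installed(module_name):
    """Return True if a module can be imported in this environment."""
    return importlib.util.find_spec(module_name) is not None

# Import names, not package names: PyTorch is "torch".
for name in ["tensorflow", "torch", "keras", "caffe", "jupyter"]:
    print(f"{name}: {'ok' if is_installed(name) else 'missing'}")
```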

Cluster-ready deep learning infrastructure

  • Multi-node distributed training

    The new Lambda Hyperplane 8-A100 supports up to 9x Mellanox ConnectX-6 VPI HDR InfiniBand cards, for up to 1.8 Tb/s of inter-node connectivity.

  • NVIDIA multi-instance GPU (MIG) support

    Each A100 GPU inside the Hyperplane can be seamlessly partitioned into up to 7 GPU instances, for up to 56 isolated GPU instances in a Hyperplane 8-A100.

  • Engineered for you

    Leverage Lambda support to plan your next server or cluster build and ensure it meets the needs of your specific deep learning workloads.
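The headline cluster figures above follow directly from the per-component numbers. A quick back-of-the-envelope check, using only the constants quoted on this page (not values queried from hardware):

```python
# Spec-sheet constants for the Hyperplane 8-A100.
IB_CARDS = 9             # max Mellanox ConnectX-6 VPI HDR cards
IB_GBPS_PER_CARD = 200   # HDR InfiniBand line rate, Gb/s per card
MIG_SLICES_PER_GPU = 7   # max MIG instances per A100
GPUS = 8                 # A100 GPUs per Hyperplane 8-A100

internode_tbps = IB_CARDS * IB_GBPS_PER_CARD / 1000  # aggregate inter-node bandwidth
mig_instances = GPUS * MIG_SLICES_PER_GPU            # max isolated GPU instances

print(f"Aggregate inter-node bandwidth: {internode_tbps} Tb/s")
print(f"Max MIG instances: {mig_instances}")
```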

GPU Specifications

Comparing the A100 & V100

                          NVIDIA A100 SXM4-40GB         NVIDIA V100 SXM3-32GB
FP32 CUDA Cores           6,912                         5,120
Clock Speed               1,410 MHz                     1,530 MHz
Theoretical FP32 TFLOPS   19.5                          15.7
VRAM                      40 GB HBM2e                   32 GB HBM2
Memory Bandwidth          1,555 GB/s                    900 GB/s
GPU Interconnect          12 NVLink links (600 GB/s)    6 NVLink links (300 GB/s)
Process Node              TSMC 7 nm                     TSMC 12 nm FFN
TDP                       400 W                         300 W
Power Efficiency          45.2 GFLOPS/W                 48.6 GFLOPS/W
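The theoretical FP32 figures in the table follow from core count and clock speed: each CUDA core retires one fused multiply-add (2 FLOPs) per clock. A short sketch of that arithmetic:

```python
def fp32_tflops(cuda_cores, clock_mhz):
    """Theoretical FP32 throughput, assuming one FMA (2 FLOPs)
    per CUDA core per clock."""
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

a100 = fp32_tflops(6912, 1410)  # ~19.5 TFLOPS
v100 = fp32_tflops(5120, 1530)  # ~15.7 TFLOPS
print(f"A100: {a100:.1f} TFLOPS, V100: {v100:.1f} TFLOPS")
```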