Lambda Inference

Inference Server for Deep Learning

12-GPU / 16-GPU / 20-GPU Options


Trusted by Top A.I. Research Groups

Apple, Microsoft, Amazon, Stanford, Berkeley, MIT, and the University of Toronto are customers.

Choose Your Inference Server

Not seeing what you want? Customize your own

Basic

12x Tesla T4 GPUs

2x Xeon Silver 4116 (12 Cores, 2.10 GHz)
12x NVIDIA T4 (16 GB)
256 GB RAM
Fully Customizable Storage
10 Gbps Ethernet
Lambda Support Contract - 1 year
Starting at
$ 46,809
Customize
Premium

16x Tesla T4 GPUs

2x Xeon Gold 6148 (20 Cores, 2.40 GHz)
16x NVIDIA T4 (16 GB)
512 GB RAM
Fully Customizable Storage
100 Gbps InfiniBand
Lambda Support Contract - 1 year
Starting at
$ 67,598
Customize
Max

20x Tesla T4 GPUs

2x Xeon Platinum 8168 (24 Cores, 2.70 GHz)
20x NVIDIA T4 (16 GB)
512 GB RAM
Fully Customizable Storage
100 Gbps InfiniBand
Lambda Support Contract - 3 years
Starting at
$ 86,286
Customize
Custom

Customize

Easy Upgrades with Lambda Stack

Easily upgrade TensorFlow, Keras, PyTorch, CUDA, and cuDNN with Lambda Stack. When new versions are released, simply run "apt-get upgrade." Learn More
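
A minimal sketch of that upgrade step, assuming Lambda Stack and its APT repository are already installed and configured on the machine:

    # Pull the latest package metadata, then upgrade the Lambda Stack
    # packages (TensorFlow, Keras, PyTorch, CUDA, cuDNN) in one step.
    sudo apt-get update
    sudo apt-get upgrade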