
Lambda On-Demand Cloud powered by NVIDIA H100 Tensor Core GPUs
NOW AVAILABLE
Lambda On-Demand Cloud now includes the new NVIDIA H100 Tensor Core GPUs (PCIe) and delivers unprecedented performance, scalability, and security for every workload. NVIDIA H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI and speed up large language models.
Upgrade your workloads and speed up your large-scale training with H100s on-demand in Lambda Cloud. Spin up an instance!
Host & share Generative AI apps
Lambda Demos makes it easy to host Gradio-powered Generative AI apps. Simply add your GitHub repo and host it on an A10 for $0.60/hr. Share publicly with the ML community or privately with individuals.

Instant access to cloud GPUs at the best prices
Save over 73% on your cloud bill
Get the latest NVIDIA GPUs for the best prices on the market.
Pay-by-the-second billing
Only pay when your instance is running.
Simple, transparent pricing
No hidden fees like data egress or ingress.
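As an illustration of how pay-by-the-second billing works out in practice, here is a minimal sketch. The 93-minute runtime is a hypothetical example; the $2.40/hr rate is the 1x H100 PCIe price from the on-demand table below.

```python
def instance_cost(hourly_rate_usd: float, seconds_running: int) -> float:
    """Cost of an instance billed per second at a given hourly rate."""
    return round(hourly_rate_usd * seconds_running / 3600, 6)

# Example: a 1x H100 PCIe instance ($2.40/hr) running for 93 minutes.
# With per-second billing you pay for exactly 5,580 seconds, not two full hours.
cost = instance_cost(2.40, 93 * 60)
print(f"${cost:.2f}")  # → $3.72
```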
Pre-configured for machine learning. Start training in seconds
One-click Jupyter access
Quickly connect to NVIDIA GPU instances directly from your browser.
Pre-installed with popular ML frameworks
Ubuntu, TensorFlow, PyTorch, CUDA, and cuDNN come ready to use with Lambda Stack.
Spin up a variety of GPU instance types, on-demand
NVIDIA GPUs
Access GPUs like NVIDIA A100, RTX A6000, Quadro RTX 6000, and Tesla V100 on-demand.
Multi-GPU instances
Launch instances with 1x, 2x, 4x, or 8x GPUs.
Automate your workflow
Programmatically spin up instances with Lambda Cloud API.
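A minimal sketch of launching an instance through the Lambda Cloud API. The endpoint path and request fields follow the API's v1 launch operation as commonly documented, but the instance type name (`gpu_1x_h100_pcie`), region (`us-east-1`), and SSH key name here are illustrative assumptions; check the current API reference before relying on them.

```python
import json
import os
import urllib.request

API_BASE = "https://cloud.lambdalabs.com/api/v1"  # Lambda Cloud API base URL


def launch_request(instance_type: str, region: str, ssh_key: str) -> dict:
    """Build the JSON body for an instance-launch call.

    Field names are assumptions based on the Lambda Cloud API's launch
    operation; verify against the current API docs.
    """
    return {
        "instance_type_name": instance_type,
        "region_name": region,
        "ssh_key_names": [ssh_key],
        "quantity": 1,
    }


def launch(api_key: str, body: dict) -> dict:
    """POST the launch request and return the decoded JSON response."""
    req = urllib.request.Request(
        f"{API_BASE}/instance-operations/launch",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    body = launch_request("gpu_1x_h100_pcie", "us-east-1", "my-key")
    if os.environ.get("LAMBDA_API_KEY"):  # only call out if a key is set
        print(launch(os.environ["LAMBDA_API_KEY"], body))
    else:
        print(json.dumps(body, indent=2))
```

The same pattern (Bearer token, JSON body) applies to listing and terminating instances, so a training pipeline can start capacity when a job is queued and tear it down when it finishes.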
On-demand GPU cloud pricing
Access high-power GPUs when you need them and only pay for what you use. There are no charges for egress.
GPUs | VRAM per GPU | vCPUs | RAM | Storage | Price |
---|---|---|---|---|---|
1x NVIDIA H100 PCIe | 80 GB | 26 | 200 GiB | 512 GiB | $2.40 / hr |
1x NVIDIA A100 | 40 GB | 30 | 200 GiB | 512 GiB | $1.10 / hr |
2x NVIDIA A100 | 40 GB | 60 | 400 GiB | 1 TiB | $2.20 / hr |
4x NVIDIA A100 | 40 GB | 120 | 800 GiB | 1 TiB | $4.40 / hr |
8x NVIDIA A100 | 40 GB | 124 | 1800 GiB | 6 TiB | $8.80 / hr |
8x NVIDIA A100 | 80 GB | 240 | 1800 GiB | 20 TiB | $12.00 / hr |
1x NVIDIA RTX A6000 | 48 GB | 14 | 100 GiB | 200 GiB | $0.80 / hr |
2x NVIDIA RTX A6000 | 48 GB | 28 | 200 GiB | 1 TiB | $1.60 / hr |
4x NVIDIA RTX A6000 | 48 GB | 56 | 400 GiB | 1 TiB | $3.20 / hr |
1x NVIDIA A10 | 24 GB | 30 | 200 GiB | 1.4 TiB | $0.60 / hr |
1x NVIDIA Quadro RTX 6000 | 24 GB | 14 | 46 GiB | 512 GiB | $0.50 / hr |
8x NVIDIA Tesla V100 | 16 GB | 92 | 448 GiB | 5.9 TiB | $4.40 / hr |
Reserved Cloud Cluster pricing
The best prices and value for NVIDIA H100 clusters in the industry
Type | Instance | GPU | GPU Memory per GPU | vCPUs | Storage | Network Bandwidth (Gbps) | Price | Term | # of GPUs |
---|---|---|---|---|---|---|---|---|---|
Reserved | 8x NVIDIA H100 | H100 SXM | 80 GB | 200 | 20 TB NVMe SSD local storage minimum | 3200 | $1.89/H100/hour | 3 years | 64 - 60,000 |
Sprint | 8x NVIDIA H100 | H100 SXM | 80 GB | 224 | 27 TB NVMe SSD local storage minimum | 3200 | $4.85/H100/hour | 3 months | 248 |
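To put the reserved rate in context, a back-of-the-envelope cost for the smallest reserved cluster (64 GPUs at $1.89/H100/hour, from the table above); the 730-hour month is an approximation, not a billing term:

```python
def cluster_cost(num_gpus: int, rate_per_gpu_hour: float, hours: float) -> float:
    """Total cost of a reserved cluster over a given number of hours."""
    return num_gpus * rate_per_gpu_hour * hours

hourly = cluster_cost(64, 1.89, 1)     # 64 GPUs * $1.89 = $120.96/hour
monthly = cluster_cost(64, 1.89, 730)  # ~730 hours in an average month
print(f"${hourly:.2f}/hour, ${monthly:,.2f}/month")
```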