
Lambda On-Demand Cloud powered by NVIDIA H100 GPUs
NOW AVAILABLE
On-demand HGX H100 systems with 8x NVIDIA H100 SXM GPUs are now available on Lambda Cloud for only $2.59/hr/GPU. With H100 SXM you get:
- Greater flexibility for users who need more compute power to build and fine-tune generative AI models
- Enhanced scalability
- High-bandwidth GPU-to-GPU communication
- Optimal performance density
High-speed filesystem for GPU instances
Create filesystems in Lambda On-Demand Cloud to persist files and data with your compute.
- Scalable performance: Adapts to growing storage needs without compromising speed.
- Cost-efficient: Only pay for the storage you use, optimizing budget allocation.*
- No limitations: No ingress or egress fees, and no hard limit on how much you can store.
*Texas region persistent storage will remain free until the end of 2023.
Storage | Rate |
---|---|
Shared filesystems | $0.20 / GB / month |
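For example, once a filesystem is attached to an instance, persisting training artifacts is just a matter of writing to its mount point. The sketch below assumes a filesystem named `training-data` mounted under the default user's home directory; verify the actual mount path on your instance (for example with `df -h`):

```python
import os
import torch  # pre-installed via Lambda Stack

# Assumed mount point for a filesystem created as "training-data";
# check the actual path on your instance before relying on it.
FS_ROOT = os.path.expanduser("~/training-data")
CKPT_DIR = os.path.join(FS_ROOT, "checkpoints")
os.makedirs(CKPT_DIR, exist_ok=True)

model = torch.nn.Linear(128, 10)

# Files written here persist independently of the instance's root disk.
torch.save(model.state_dict(), os.path.join(CKPT_DIR, "model.pt"))
```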
Host & share Generative AI apps
Lambda Demos makes it easy to host Gradio-powered Generative AI apps. Simply add your GitHub repo and host it on an NVIDIA A10 for $0.60/hr. Share publicly with the ML community or privately with individuals.
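A repo hosted with Demos exposes a Gradio app. As an illustrative sketch of such an app (not an official Demos template; the `generate` function here is a placeholder for your actual model call):

```python
import gradio as gr

def generate(prompt: str) -> str:
    # Stand-in for a real model call (e.g., a text-generation pipeline).
    return f"Echo: {prompt}"

demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Prompt"),
    outputs=gr.Textbox(label="Output"),
    title="My generative AI demo",
)

if __name__ == "__main__":
    demo.launch()
```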

Instant access to cloud GPUs at the best prices
Save over 73% on your cloud bill
Get the latest NVIDIA GPUs for the best prices on the market.
Pay-by-the-second billing
Only pay when your instance is running.
Simple, transparent pricing
No hidden fees like data egress or ingress.
Pre-configured for machine learning. Start training in seconds
One-click Jupyter access
Quickly connect to NVIDIA GPU instances directly from your browser.
Pre-installed with popular ML frameworks
Ubuntu, TensorFlow, PyTorch, CUDA, and cuDNN come ready to use with Lambda Stack.
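As a quick sanity check from a new instance's terminal or Jupyter notebook, you can confirm that the pre-installed frameworks see the GPUs (this assumes the standard Lambda Stack image with PyTorch and TensorFlow):

```python
import torch
import tensorflow as tf

# Both frameworks ship with Lambda Stack and should detect the attached GPUs.
print("PyTorch", torch.__version__, "- CUDA available:", torch.cuda.is_available())
print("GPUs visible to PyTorch:", torch.cuda.device_count())
print("TensorFlow", tf.__version__, "- GPUs:", tf.config.list_physical_devices("GPU"))
```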
Spin up a variety of GPU instance types, on-demand
NVIDIA GPUs
Access GPUs like NVIDIA H100, A100, RTX A6000, Quadro RTX 6000, and Tesla V100 on-demand.
Multi-GPU instances
Launch instances with 1x, 2x, 4x, or 8x GPUs.
Automate your workflow
Programmatically spin up instances with Lambda Cloud API.
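As a rough sketch of how that can look, the snippet below lists instance types and launches an instance over the REST API. The base URL, auth header, and field names reflect the public v1 API and should be checked against the current Lambda Cloud API docs; the region, SSH key, and instance-type names are placeholders:

```python
import os
import requests

# Assumptions to verify against the current API docs: v1 base URL and
# bearer-token auth with a key generated in the Lambda Cloud dashboard.
API_KEY = os.environ["LAMBDA_API_KEY"]
BASE = "https://cloud.lambdalabs.com/api/v1"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# See which instance types currently have capacity.
instance_types = requests.get(f"{BASE}/instance-types", headers=HEADERS).json()

# Launch a single 1x A10 instance (placeholder region and SSH key names).
payload = {
    "region_name": "us-west-1",
    "instance_type_name": "gpu_1x_a10",
    "ssh_key_names": ["my-ssh-key"],
    "quantity": 1,
}
launch = requests.post(f"{BASE}/instance-operations/launch", json=payload, headers=HEADERS)
print(launch.json())
```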
GPUs | VRAM per GPU | vCPUs | RAM | Storage | Price |
---|---|---|---|---|---|
1x NVIDIA H100 PCIe | 80 GB | 26 | 200 GiB | 1 TiB | $1.99 / hr |
8x NVIDIA H100 SXM | 80 GB | 220 | 1.8 TB | 24.3 TiB | $20.72 / hr |
1x NVIDIA A100 | 40 GB | 30 | 200 GiB | 512 GiB | $1.10 / hr |
2x NVIDIA A100 | 40 GB | 60 | 400 GiB | 1 TiB | $2.20 / hr |
4x NVIDIA A100 | 40 GB | 120 | 800 GiB | 1 TiB | $4.40 / hr |
8x NVIDIA A100 | 40 GB | 124 | 1800 GiB | 6 TiB | $8.80 / hr |
8x NVIDIA A100 | 80 GB | 240 | 1800 GiB | 20 TiB | $12.00 / hr |
1x NVIDIA RTX A6000 | 48 GB | 14 | 100 GiB | 200 GiB | $0.80 / hr |
2x NVIDIA RTX A6000 | 48 GB | 28 | 200 GiB | 1 TiB | $1.60 / hr |
4x NVIDIA RTX A6000 | 48 GB | 56 | 400 GiB | 1 TiB | $3.20 / hr |
1x NVIDIA A10 | 24 GB | 30 | 200 GiB | 1.4 TiB | $0.60 / hr |
1x NVIDIA Quadro RTX 6000 | 24 GB | 14 | 46 GiB | 512 GiB | $0.50 / hr |
8x NVIDIA Tesla V100 | 16 GB | 92 | 448 GiB | 5.9 TiB | $4.40 / hr |
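To put the pay-by-the-second billing above in concrete terms: a 1x NVIDIA A10 at $0.60/hr bills at $0.01 per minute, so a 15-minute experiment costs about $0.15 rather than a full hour's charge.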
Reserved Cloud Cluster pricing
The best prices and value for NVIDIA H100 clusters in the industry
Offering | Instance type | GPU | GPU Memory | vCPUs | Storage | Network Bandwidth (Gbps) | Price per Hour | Term | # of GPUs |
---|---|---|---|---|---|---|---|---|---|
Reserved | 8x NVIDIA H100 | H100 SXM | 80 GB | 200 | 20 TB NVMe SSD local storage minimum | 3200 | $1.89 / H100 / hour | 3 years | 64 - 60,000 |
Sprint | 8x NVIDIA H100 | H100 SXM | 80 GB | 224 | 27 TB NVMe SSD local storage minimum | 3200 | $4.85 / H100 / hour | 3 months | 248 |
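For a rough sense of scale, the minimum 64-GPU Reserved cluster at $1.89 per H100 per hour works out to about $121 per hour for the whole cluster, or roughly $1.06M per year of continuous use.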