The Lambda Deep Learning Blog



Recent posts

Considerations for Large-Scale NVIDIA H100 Cluster Deployments

How to build the GPU infrastructure needed to pretrain LLMs and generative AI models from scratch (e.g., GPT-4, LaMDA, LLaMA, BLOOM).

Published 07/13/2023 by David Hall

Lambda Cloud Deploys NVIDIA H100 Tensor Core GPUs

Lambda Cloud has deployed a fleet of NVIDIA H100 Tensor Core GPUs, making it one of the first to market with general-availability, on-demand H100 GPUs. The high-performance GPUs enable faster training times, better model accuracy, and increased productivity.

Published 05/10/2023 by Kathy Bui

Lambda Cloud Adding NVIDIA H100 Tensor Core GPUs in Early April

In early April, NVIDIA H100 Tensor Core GPUs, the fastest GPU type on the market, will be added to Lambda Cloud. NVIDIA H100 80GB PCIe Gen5 instances will go live first, with SXM instances to follow shortly after.

Published 03/21/2023 by Mitesh Agrawal

...
