How to build the GPU infrastructure needed to pretrain LLMs and generative AI models from scratch (e.g. GPT-4, LaMDA, LLaMA, BLOOM).
The Lambda Deep Learning Blog
Recent posts
Lambda Cloud has deployed a fleet of NVIDIA H100 Tensor Core GPUs, making it one of the first providers to offer general-availability, on-demand H100 GPUs. The high-performance GPUs enable faster training times, better model accuracy, and increased productivity.
Published 05/10/2023 by Kathy Bui
In early April, NVIDIA H100 Tensor Core GPUs, the fastest GPU type on the market, will be added to Lambda Cloud. NVIDIA H100 80GB PCIe Gen5 instances will go live first, with SXM to follow very shortly after.
Published 03/21/2023 by Mitesh Agrawal
...