Lambda Cloud has deployed a fleet of NVIDIA H100 Tensor Core GPUs, making it one of the first providers to market with general-availability, on-demand H100 GPUs. These high-performance GPUs enable faster training times, larger models, and increased productivity.
The Lambda Deep Learning Blog
In early April, NVIDIA H100 Tensor Core GPUs, the fastest GPUs on the market, will be added to Lambda Cloud. NVIDIA H100 80GB PCIe Gen5 instances will go live first, with SXM instances to follow shortly after.
Published 03/21/2023 by Mitesh Agrawal
Native support for FP8 data types is here with the release of the NVIDIA H100 Tensor Core GPU. These new FP8 types can speed up training and inference.
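FP8 comes in two variants, E4M3 (4 exponent bits, 3 mantissa bits, favoring precision) and E5M2 (5 exponent bits, 2 mantissa bits, favoring range). As a rough illustration of the trade-off, the sketch below computes the maximum finite value of each format from its bit layout; the helper function name is ours, not part of any NVIDIA API:

```python
def fp8_max_finite(exp_bits: int, man_bits: int, ieee_like: bool) -> float:
    """Largest finite value of an FP8 format, derived from its bit layout.

    ieee_like=True  -> E5M2 behavior: the top exponent code is reserved
                       for inf/NaN, as in IEEE 754.
    ieee_like=False -> E4M3 behavior: only the all-ones mantissa at the
                       top exponent encodes NaN, so one extra binade of
                       finite values is available.
    """
    bias = 2 ** (exp_bits - 1) - 1  # standard exponent bias
    if ieee_like:
        max_exp = (2 ** exp_bits - 2) - bias          # top code reserved
        max_mantissa = 2 - 2 ** -man_bits             # all mantissa bits set
    else:
        max_exp = (2 ** exp_bits - 1) - bias          # top code usable
        max_mantissa = 2 - 2 * 2 ** -man_bits         # all-ones mantissa is NaN
    return max_mantissa * 2.0 ** max_exp

# E4M3: wider precision, range up to 448
print(fp8_max_finite(exp_bits=4, man_bits=3, ieee_like=False))  # 448.0
# E5M2: wider range, up to 57344
print(fp8_max_finite(exp_bits=5, man_bits=2, ieee_like=True))   # 57344.0
```

In practice, training recipes use E4M3 for forward-pass activations and weights and E5M2 for gradients, which need the extra dynamic range.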
Published 12/07/2022 by Jeremy Hummel
We discuss the performance and scalability of H100 GPUs, and why you may want to upgrade your ML infrastructure for this upcoming major release from NVIDIA.
Published 10/05/2022 by Chuan Li