Lambda was selected as NVIDIA's 2024 AI Excellence Partner of the Year for providing NVIDIA-powered systems via our Cloud and on-prem offerings.
The Lambda Deep Learning Blog
Recent Posts
The NVIDIA GB200 Superchip and NVIDIA B200 and B100 GPUs will be available through Lambda’s On-Demand and Reserved Cloud offerings and NVIDIA DGX SuperPODs.
Published 03/18/2024 by Maxx Garrison
Lambda raised a $320M Series C at a $1.5B valuation to expand our GPU cloud and further our mission to build the #1 AI compute platform in the world.
Published 02/15/2024 by Stephen Balaban
Persistent storage is now available in all Lambda Cloud regions and for all on-demand instance types, including our NVIDIA H100 Tensor Core GPU instances.
Published 12/19/2023 by Kathy Bui
The Lambda Vector One is a single-GPU desktop PC built to tackle demanding AI/ML tasks, from fine-tuning Stable Diffusion to handling the complexities of Llama 2 7B.
Published 12/12/2023 by Samuel Park
Lambda will be one of the first cloud providers in the world to offer customers access to NVIDIA H200 Tensor Core GPUs through Lambda Cloud Clusters.
Published 11/13/2023 by Maxx Garrison
Lambda Cloud Clusters are now available with the NVIDIA GH200 Grace Hopper Superchip. A single GH200 has 576GB of coherent memory for unmatched efficiency.
Published 11/13/2023 by Maxx Garrison
Persistent storage for Lambda Cloud is expanding. Filesystems are now available for all regions except Utah, which is coming very soon.
Published 09/20/2023 by Kathy Bui
Lambda has launched a new Hyperplane server combining the fastest GPU on the market, NVIDIA H100, with the world’s best data center CPU, AMD EPYC 9004.
Published 09/07/2023 by Maxx Garrison
On-demand HGX H100 instances with 8x NVIDIA H100 SXM GPUs are now available on Lambda Cloud for only $2.59/hr/GPU.
Published 08/02/2023 by Kathy Bui
Lambda Cloud has deployed a fleet of NVIDIA H100 Tensor Core GPUs, making it one of the first to market with general-availability, on-demand H100 GPUs. The high-performance GPUs enable faster training times, better model accuracy, and increased productivity.
Published 05/10/2023 by Kathy Bui
For the third consecutive year, Lambda has been chosen as NVIDIA Partner Network (NPN) Solution Integration Partner of the Year.
Published 04/04/2023 by Jaimie Renner
Lambda secured a $44 million Series B to accelerate the growth of our AI cloud. Funds will be used to deploy new H100 GPU capacity with high-speed network interconnects and develop features that will make Lambda the best cloud in the world for training AI.
Published 03/21/2023 by Stephen Balaban