The Lambda Deep Learning Blog

Recent Posts

Lambda Raises $44M to Build the World’s Best Cloud for Training AI

Lambda secured a $44 million Series B to accelerate the growth of our AI cloud. Funds will be used to deploy new H100 GPU capacity with high-speed network interconnects and develop features that will make Lambda the best cloud in the world for training AI.

Published 03/21/2023 by Stephen Balaban

Lambda Cloud Adding NVIDIA H100 Tensor Core GPUs in Early April

In early April, NVIDIA H100 Tensor Core GPUs, the fastest GPU type on the market, will be added to Lambda Cloud. NVIDIA H100 80GB PCIe Gen5 instances will go live first, with SXM to follow very shortly after.

Published 03/21/2023 by Mitesh Agrawal

How To Use mpirun to Launch a LLaMA Inference Job Across Multiple Cloud Instances

Learn how to use mpirun to launch a LLaMA inference job across multiple cloud instances if you do not have a multi-GPU workstation or server.

Published 03/14/2023 by Chuan Li

Lambda Cloud accounts now support teams!

Lambda's GPU cloud has a new team feature that allows you to invite teammates to join your account for easy collaboration.

Published 01/13/2023 by Kathy Bui

NVIDIA Hopper: H100 and FP8 Support

Native support for FP8 data types is here with the release of the NVIDIA H100 Tensor Core GPU. These new FP8 types can speed up training and inference.

Published 12/07/2022 by Jeremy Hummel

Hugging Face x Lambda: Whisper Fine-Tuning Event

Lambda and Hugging Face are collaborating on a 2-week sprint to fine-tune OpenAI's Whisper model in as many languages as possible.

Published 12/01/2022 by Chuan Li

How To Fine-Tune Stable Diffusion: Naruto Character Edition

This blog walks through how to fine-tune Stable Diffusion to create a text-to-Naruto character model, emphasizing the importance of prompt engineering.

Published 11/02/2022 by Eole Cervenka

Voltron Data Case Study: Why ML teams are using Lambda Reserved Cloud Clusters

In this blog, we outline the benefits of our new Reserved Cloud Cluster and share an example of how Voltron Data is using it to work with large datasets.

Published 11/01/2022 by Lauren Watkins

NVIDIA GeForce RTX 4090 vs RTX 3090 Deep Learning Benchmark

RTX 4090 vs RTX 3090 benchmarks to assess deep learning training performance, including training throughput/$, throughput/watt, and multi-GPU scaling.

Published 10/31/2022 by Chuan Li

September 2022 Lambda GPU Cloud Release Notes

In this blog post, we go over the most recent updates we made to Lambda's on-demand GPU cloud in September 2022.

Published 10/11/2022 by Cody Brownstein

All You Need Is One GPU: Inference Benchmark for Stable Diffusion

Lambda presents stable diffusion benchmarks with different GPUs including A100, RTX 3090, RTX A6000, RTX 3080, and RTX 8000, as well as various CPUs.

Published 10/05/2022 by Eole Cervenka

NVIDIA H100 Tensor Core GPU - Deep Learning Performance Analysis

This article discusses the performance and scalability of H100 GPUs and the reasons to consider upgrading your ML infrastructure following NVIDIA's H100 release.

Published 10/05/2022 by Chuan Li

How to fine-tune Stable Diffusion: how we made the text-to-Pokemon model at Lambda

How to fine-tune Stable Diffusion on a Pokemon dataset to create a text-to-Pokemon image model, with a guide for training Stable Diffusion models on your own datasets.

Published 09/28/2022 by Justin Pinkney
