The Lambda Deep Learning Blog

Recent Posts

Lambda Cloud accounts now support teams!

Lambda's GPU cloud has a new teams feature that lets you invite your team to join your account for easier collaboration.

Published 01/13/2023 by Kathy Bui

NVIDIA Hopper: H100 and FP8 Support

Native support for FP8 data types is here with the release of the NVIDIA H100 Tensor Core GPU. These new FP8 types can speed up training and inference; a short usage sketch follows below.

Published 12/07/2022 by Jeremy Hummel
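As a hedged illustration of the announcement above (not necessarily the post's own code), the snippet below shows one way FP8 can be exercised through NVIDIA's Transformer Engine library, assuming transformer_engine is installed on an H100 machine.

    # Minimal FP8 sketch using NVIDIA Transformer Engine (assumed installed on an H100 box).
    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe

    # DelayedScaling maintains per-tensor FP8 scaling factors based on recent iterations.
    fp8_recipe = recipe.DelayedScaling()

    model = te.Linear(1024, 1024, bias=True).cuda()
    inp = torch.randn(16, 1024, device="cuda")

    # GEMMs inside this context run in FP8 on Hopper-class GPUs.
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        out = model(inp)

    out.sum().backward()  # gradients flow as usual; FP8 casting is handled internally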

Hugging Face x Lambda: Whisper Fine-Tuning Event

Lambda and Hugging Face are collaborating on a 2-week sprint to fine-tune OpenAI's Whisper model in as many languages as possible.

Published 12/01/2022 by Chuan Li

How To Fine Tune Stable Diffusion: Naruto Character Edition

This blog walks through how to fine-tune Stable Diffusion to create a text-to-naruto character model, emphasizing the importance of "prompt engineering". Try it out yourself, or use it to learn how to train your own Stable Diffusion variants.

Published 11/02/2022 by Eole Cervenka

Voltron Data Case Study: Why ML teams are using Lambda Reserved Cloud Clusters

In this blog, we outline the benefits of our new Reserved Cloud Clusters and show how Voltron Data is using them to work with large datasets.

Published 11/01/2022 by Lauren Watkins

NVIDIA GeForce RTX 4090 vs RTX 3090 Deep Learning Benchmark

In this blog post, we benchmark the RTX 4090 to assess its deep learning training performance and compare it against the RTX 3090, the flagship consumer GPU of the previous Ampere generation.

Published 10/31/2022 by Chuan Li

September 2022 Lambda GPU Cloud Release Notes

In this blog post, we go over the most recent updates made to the Lambda on-demand GPU cloud in September 2022.

Published 10/11/2022 by Cody Brownstein

All You Need Is One GPU: Inference Benchmark for Stable Diffusion

Lambda presents an inference benchmark of the Stable Diffusion model across different GPUs and CPUs.

Published 10/05/2022 by Eole Cervenka

NVIDIA H100 GPU - Deep Learning Performance Analysis

We discuss the performance and scalability of H100 GPUs and the reasons to upgrade your ML infrastructure ahead of this big upcoming release from NVIDIA.

Published 10/05/2022 by Chuan Li

How to fine tune stable diffusion: how we made the text-to-pokemon model at Lambda

Stable Diffusion is great at many things, but not great at everything, and getting results in a particular style or appearance often involves a lot of work and prompt engineering. If you have a particular type of image you'd like to generate, an alternative to spending a long time crafting an intricate text prompt is to fine-tune the image generation model itself, as sketched below.

Published 09/28/2022 by Justin Pinkney
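To make the payoff of fine-tuning concrete, here is a hedged sketch of generating images from a fine-tuned checkpoint via Hugging Face diffusers. The model id lambdalabs/sd-pokemon-diffusers is our assumption of the weights released alongside this work; any fine-tuned Stable Diffusion checkpoint can be substituted.

    # Sketch: with a fine-tuned model, a plain prompt already lands in the target style.
    # The model id below is an assumption; swap in your own fine-tuned checkpoint.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "lambdalabs/sd-pokemon-diffusers", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("Yoda", guidance_scale=7.5).images[0]  # no elaborate prompt engineering needed
    image.save("yoda-pokemon.png")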

Multi node PyTorch Distributed Training Guide For People In A Hurry

The goal of this tutorial is to summarize how to write and launch PyTorch distributed data parallel jobs across multiple nodes, with working examples using the torch.distributed.launch, torchrun, and mpirun APIs; a minimal torchrun sketch follows below.

Published 08/26/2022 by Chuan Li
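As a taste of what the tutorial covers, here is a minimal multi-node DistributedDataParallel sketch (our illustration, not the tutorial's exact code); the torchrun launch values and hostnames are placeholders.

    # Launch the same script on every node, e.g. with torchrun (values are placeholders):
    #   torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d \
    #            --rdzv_endpoint=<node0_ip>:29500 train.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group(backend="nccl")     # rank/world size come from torchrun env vars
    local_rank = int(os.environ["LOCAL_RANK"])  # GPU index on this node
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(128, 10).cuda(), device_ids=[local_rank])

    x = torch.randn(32, 128, device="cuda")
    model(x).sum().backward()                   # gradients are all-reduced across all ranks

    dist.destroy_process_group()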

Training YoloV5 face detector on Lambda Cloud

How Lambda Cloud can save a machine learning engineer time and money when training state-of-the-art YoloV5 object detection models.

Published 08/15/2022 by Cooper L

Careers at Lambda

Lambda is hiring! Join a fast-growing startup providing deep learning hardware, software, and cloud services to the world's leading companies.

Published 06/26/2022 by Stephen Balaban

...
