The Lambda Deep Learning Blog

Recent Posts

Hugging Face x Lambda: Whisper Fine-Tuning Event

Lambda and Hugging Face are collaborating on a two-week sprint to fine-tune OpenAI's Whisper model in as many languages as possible. A minimal sketch of the fine-tuning starting point follows below.

Published 12/01/2022 by Chuan Li
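
The event builds on the Whisper support in Hugging Face transformers. As a rough, non-authoritative sketch of the starting point, the checkpoint size ("openai/whisper-small") and target language (Hindi) below are illustrative choices, not prescribed by the event:

```python
# Minimal sketch: load a Whisper checkpoint and processor for fine-tuning with
# Hugging Face transformers. Checkpoint size and language are assumptions.
from transformers import WhisperForConditionalGeneration, WhisperProcessor

checkpoint = "openai/whisper-small"  # any Whisper size works; smaller fits in less VRAM
processor = WhisperProcessor.from_pretrained(checkpoint, language="Hindi", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(checkpoint)

# From here, a typical recipe prepares (audio, transcript) pairs with the processor
# and trains with transformers' Seq2SeqTrainer on a speech dataset of your choice.
```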

How To Fine Tune Stable Diffusion: Naruto Character Edition

This blog post walks through how to fine-tune Stable Diffusion to create a text-to-Naruto character model, emphasizing the importance of "prompt engineering". Try it out yourself, or use it to learn how to train your own Stable Diffusion variants.

Published 11/02/2022 by Eole Cervenka

How to fine tune stable diffusion: how we made the text-to-pokemon model at Lambda

Stable Diffusion is great at many things, but not great at everything, and getting results in a particular style or appearance often involves a lot of work and prompt engineering. If you have a particular type of image you'd like to generate, an alternative to spending a long time crafting an intricate text prompt is to fine-tune the image generation model itself (see the inference sketch after this entry).

Published 09/28/2022 by Justin Pinkney
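
For reference, here is a minimal inference sketch with the diffusers library, assuming the lambdalabs/sd-pokemon-diffusers checkpoint released alongside the post; the point is that a plain prompt replaces elaborate prompt engineering once the model has been fine-tuned:

```python
# Minimal sketch: run a fine-tuned Stable Diffusion variant with diffusers.
# The checkpoint name assumes the weights released with this post; swap in your own.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "lambdalabs/sd-pokemon-diffusers", torch_dtype=torch.float16
).to("cuda")

# A plain prompt suffices; the fine-tuned weights supply the style that would
# otherwise take lengthy prompt engineering to approximate.
image = pipe("Yoda").images[0]
image.save("yoda-pokemon.png")
```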

Lambda Cloud Storage is now in open beta: a high-speed filesystem for our GPU instances

After a period of closed beta, persistent storage for Lambda GPU Cloud is now available for all A6000 and V100 instances in an extended open beta period.

Published 04/19/2022 by Kathy Bui

Lambda raises $24.5M to build GPU cloud and deep learning hardware

Lambda secured $24.5M in financing, including a $15M Series A equity round and a $9.5M debt facility that will allow for the growth of Lambda GPU Cloud and the expansion of Lambda's on-prem AI infrastructure software products. Read more details in the post.

Published 07/16/2021 by Stephen Balaban

1, 2 & 4-GPU NVIDIA Quadro RTX 6000 Lambda GPU Cloud Instances

1, 2, or 4 NVIDIA® Quadro RTX™ 6000 GPUs on Lambda Cloud are a cost-effective way of scaling your machine learning infrastructure. With the new RTX 6000 instances you can expect a lower initial price of $1.25 / hr, 2x the performance per dollar vs. a p3.8xlarge, and up-to-date drivers & frameworks.

Published 10/29/2020 by Remy Guercio

Cutting the cost of deep learning — Lambda Cloud 8-GPU V100 instances

Priced at $12.00 / hr, our new instance provides over 2x more compute per dollar than comparable on-demand 8-GPU instances from other cloud providers.

Published 05/13/2020 by Remy Guercio

How to Transfer Data to Lambda Cloud GPU Instances

This guide will walk you through how to load data from various sources onto your Lambda Cloud GPU instance. If you're looking for how to get started and SSH into your instance for the first time, check out our Getting Started Guide.

Published 05/03/2020 by Remy Guercio

Getting Started Guide — Lambda Cloud GPU Instances

This guide will walk you through the process of launching a Lambda Cloud GPU instance and using SSH to log in. For this guide we'll assume that you're running either Mac OSX or Linux. If you're a Windows user we recommend using either...

Published 05/03/2020 by Remy Guercio

Perform GPU, CPU, and I/O stress testing on Linux

CPU, GPU, and I/O utilization monitoring using tmux, htop, iotop, and nvidia-smi. This stress test is running on a Lambda GPU Cloud (https://lambdalabs.com/service/gpu-cloud) 4x GPU instance. Oftentimes you'll want to put a system through its paces after it's been set up. To stress test... (a minimal GPU-polling sketch follows below)

Published 02/17/2019 by Stephen Balaban
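
The post uses tmux, htop, iotop, and nvidia-smi interactively; as a hedged alternative, a few lines of Python can log the same GPU numbers while a stress test runs (the polling interval and query fields below are just one reasonable choice):

```python
# Minimal sketch: poll GPU utilization, memory, and temperature via nvidia-smi's
# query interface while a stress test runs. Interval and fields are illustrative.
import subprocess
import time

QUERY = [
    "nvidia-smi",
    "--query-gpu=index,utilization.gpu,memory.used,temperature.gpu",
    "--format=csv,noheader,nounits",
]

while True:
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True).stdout
    for row in out.strip().splitlines():
        idx, util, mem, temp = (field.strip() for field in row.split(","))
        print(f"GPU {idx}: {util}% util, {mem} MiB used, {temp} C")
    time.sleep(2)
```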

On-prem GPU Training Infrastructure for Deep Learning - Slides

You'll learn how to provide your team with GPU training infrastructure at a variety of scales, from a single shared multi-GPU system to a cluster for distributed training.

Published 01/25/2019 by Stephen Balaban
