Lambda and Hugging Face are collaborating on a 2-week sprint to fine-tune OpenAI's Whisper model in as many languages as possible.
The Lambda Deep Learning Blog
Voltron Data Case Study: Why ML teams are using Lambda Reserved Cloud Clusters
November 01, 2022
How to fine tune stable diffusion: how we made the text-to-pokemon model at Lambda
September 28, 2022
Lambda Cloud Storage is now in open beta: a high speed filesystem for our GPU instances
April 18, 2022
Lambda Teams Up With Razer to Launch the World’s Most Powerful Laptop for Deep Learning
April 11, 2022
This blog walks through how to fine-tune Stable Diffusion to create a text-to-naruto character model, emphasizing the importance of "prompt engineering". Try it out yourself or use it to learn how to train your own Stable Diffusion variants.
Stable Diffusion is great at many things, but not great at everything, and getting results in a particular style or appearance often involves a lot of work and prompt engineering. If you have a particular type of image you'd like to generate, an alternative to spending a long time crafting an intricate text prompt is to fine-tune the image generation model itself.
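As a concrete starting point, a run like the following is one way to fine-tune Stable Diffusion on a custom image-caption dataset. This is a hedged sketch using the Hugging Face diffusers text-to-image example script (found under `examples/text_to_image/` in the diffusers repository), not necessarily the exact setup used for the posts above; the dataset name and hyperparameters are illustrative assumptions.

```shell
# Hedged sketch: fine-tune Stable Diffusion with the Hugging Face
# diffusers text-to-image example script. Hyperparameters below are
# illustrative, not the exact values used in the blog post.
pip install diffusers accelerate transformers datasets

accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --dataset_name="lambdalabs/pokemon-blip-captions" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-5 \
  --mixed_precision="fp16" \
  --output_dir="sd-finetuned"
```

A run like this needs a GPU with substantial memory; gradient accumulation is used here to keep the effective batch size up while fitting on a single card.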
After a period of closed beta, persistent storage for Lambda GPU Cloud is now available for all A6000 and V100 instances in an extended open beta period.
Lambda secured $24.5M in financing, including a $15M Series A equity round and a $9.5M debt facility that will allow for the growth of Lambda GPU Cloud and the expansion of Lambda's on-prem AI infrastructure software products. Read more details in the post.
1, 2, or 4 NVIDIA® Quadro RTX™ 6000 GPUs on Lambda Cloud are a cost-effective way of scaling your machine learning infrastructure. With the new RTX 6000 instances you can expect: a lower initial price of $1.25 / hr, 2x the performance per dollar vs a p3.8xlarge, and up-to-date drivers & frameworks.
Priced at $12.00 / hr, our new instance provides over 2x more compute per dollar than comparable on-demand 8 GPU instances from other cloud providers.
This guide will walk you through how to load data from various sources onto your Lambda Cloud GPU instance. If you're looking for how to get started and SSH into your instance for the first time, check out our Getting Started Guide.
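The usual tools for this are scp, rsync, and wget. A minimal sketch, assuming an instance reachable as ubuntu@<instance-ip> with a key at ~/.ssh/lambda_key (both placeholders are hypothetical; see the guide for your instance's actual address and username):

```shell
# Copy a local file up to the instance (run from your local machine).
scp -i ~/.ssh/lambda_key my_dataset.tar.gz ubuntu@<instance-ip>:~/

# rsync resumes interrupted transfers and skips unchanged files,
# which is handy for large datasets.
rsync -avP -e "ssh -i ~/.ssh/lambda_key" ./data/ ubuntu@<instance-ip>:~/data/

# Or, from an SSH session on the instance, pull directly from the web
# (the URL is a placeholder).
wget https://example.com/dataset.tar.gz
tar -xzf dataset.tar.gz
```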
This guide will walk you through the process of launching a Lambda Cloud GPU instance and using SSH to log in. For this guide we'll assume that you're running either macOS or Linux. If you're a Windows user we recommend using either...
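In outline, the login flow looks like the following. This is a hedged sketch: the key filename, the ubuntu username, and the address placeholder are assumptions based on typical Linux cloud instances, so check the guide itself for the exact details.

```shell
# Generate an SSH key pair locally (skip if you already have one).
ssh-keygen -t ed25519 -f ~/.ssh/lambda_key

# Add the public key (~/.ssh/lambda_key.pub) to your cloud account,
# then connect using the instance's public IP address.
ssh -i ~/.ssh/lambda_key ubuntu@<instance-ip>
```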
CPU, GPU, and I/O utilization monitoring using tmux, htop, iotop, and nvidia-smi. This stress test is running on a Lambda GPU Cloud [https://lambdalabs.com/service/gpu-cloud] 4x GPU instance. Oftentimes you'll want to put a system through its paces after it's been set up. To stress test...
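A minimal sketch of that kind of monitoring session, using the tools named above — the pane layout is just one convenient arrangement, and iotop typically needs root:

```shell
# Start a tmux session, then split it into panes
# (Ctrl-b % for vertical, Ctrl-b " for horizontal splits).
tmux new -s monitor

# Run one tool per pane:
htop                     # CPU, memory, and per-process utilization
sudo iotop               # disk I/O per process (usually needs root)
watch -n 1 nvidia-smi    # refresh GPU utilization every second
```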
You'll learn how to provide your team with GPU training infrastructure at a variety of scales, from a single shared multi-GPU system to a cluster for distributed training.