The Lambda Deep Learning Blog

Featured Posts

Lambda secured a $44 million Series B to accelerate the growth of our AI cloud. Funds will be used to deploy new H100 GPU capacity with high-speed network interconnects and develop features that will make Lambda the best cloud in the world for training AI.

Recent Posts
In early April, NVIDIA H100 Tensor Core GPUs, the fastest GPUs on the market, will be added to Lambda Cloud. NVIDIA H100 80GB PCIe Gen5 instances will go live first, with SXM instances to follow shortly after.
Published 03/21/2023 by Mitesh Agrawal
Learn how to use mpirun to launch a LLaMA inference job across multiple cloud instances if you do not have a multi-GPU workstation or server. Despite being more memory efficient than previous foundation language models, LLaMA still requires multiple GPUs to run inference.
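The shape of such a launch can be sketched as follows. This is a minimal illustration, not the post's exact commands: the IP addresses are placeholders, and the script name and flags (`inference.py`, `--ckpt_dir`, `--tokenizer_path`) are assumptions to be replaced with the actual entry point from the post.

```shell
# Hostfile listing the instances taking part in the job; the private IPs
# below are placeholders -- substitute your own instances' addresses.
cat > hostfile <<'EOF'
10.0.0.4 slots=1
10.0.0.5 slots=1
EOF

# Launch one rank per host. Guarded behind RUN_JOB=1 so the snippet is safe
# to paste before the instances exist; script name and flags are assumptions.
if [ "${RUN_JOB:-0}" = "1" ]; then
    mpirun --hostfile hostfile -np 2 \
        python inference.py --ckpt_dir ./llama-13b --tokenizer_path ./tokenizer.model
fi
```

Each instance must be able to reach the others over passwordless SSH for mpirun to start remote ranks.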
Published 03/14/2023 by Chuan Li
Lambda's GPU cloud has a new team feature that lets you invite teammates to join your account for easy collaboration and more.
Published 01/13/2023 by Kathy Bui
Lambda and Hugging Face are collaborating on a 2-week sprint to fine-tune OpenAI's Whisper model in as many languages as possible.
Published 12/01/2022 by Chuan Li
This blog walks through how to fine-tune Stable Diffusion to create a text-to-Naruto character model, emphasizing the importance of "prompt engineering". Try it out yourself, or use it to learn how to train your own Stable Diffusion variants.
Published 11/02/2022 by Eole Cervenka
It can require a lot of work and prompt engineering to get results in a particular style with Stable Diffusion. Fine-tuning the image generation model itself is a more efficient solution. This blog walks through how to fine-tune Stable Diffusion on a Pokemon dataset to create a text-to-Pokemon image model.
Published 09/28/2022 by Justin Pinkney
After a period of closed beta, persistent storage for Lambda GPU Cloud is now available for all A6000 and V100 instances in an extended open beta period.
Published 04/19/2022 by Kathy Bui
Lambda secured $24.5M in financing, including a $15M Series A equity round and a $9.5M debt facility that will allow for the growth of Lambda GPU Cloud and the expansion of Lambda's on-prem AI infrastructure software products. Read more details in the post.
Published 07/16/2021 by Stephen Balaban
1, 2, or 4 NVIDIA® Quadro RTX™ 6000 GPUs on Lambda Cloud are a cost-effective way to scale your machine learning infrastructure. With the new RTX 6000 instances you can expect a lower starting price of $1.25 / hr, 2x the performance per dollar of a p3.8xlarge, and up-to-date drivers & frameworks.
Published 10/29/2020 by Remy Guercio
Priced at $12.00 / hr, our new instance provides over 2x more compute per dollar than comparable on-demand 8-GPU instances from other cloud providers.
Published 05/13/2020 by Remy Guercio
This guide will walk you through how to load data from various sources onto your Lambda Cloud GPU instance. If you're looking for how to get started and SSH into your instance for the first time, check out our Getting Started Guide.
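A common way to push a local dataset to an instance is rsync over SSH. The sketch below is illustrative only: the IP address and directory names are placeholders, and the transfer itself is guarded so the snippet is safe to run before an instance exists.

```shell
# Placeholder values -- use the IP shown in your Lambda Cloud dashboard
# and your real data directory.
INSTANCE_IP="203.0.113.10"
LOCAL_DIR="./dataset"

mkdir -p "$LOCAL_DIR"   # stand-in for your actual data directory

# rsync resumes interrupted transfers and skips files already copied,
# which matters for large datasets; scp also works for one-off copies.
# Guarded behind RUN_TRANSFER=1 so this is safe to run as-is.
if [ "${RUN_TRANSFER:-0}" = "1" ]; then
    rsync -avz --progress "$LOCAL_DIR"/ "ubuntu@${INSTANCE_IP}:~/dataset/"
fi
```

For public datasets, it is often faster to run `wget` or `curl` from the instance itself, since cloud instances typically have much more download bandwidth than a home connection has upload bandwidth.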
Published 05/03/2020 by Remy Guercio
This guide will walk you through the process of launching a Lambda Cloud GPU instance and using SSH to log in. For this guide we'll assume that you're running either macOS or Linux. If you're a Windows user we recommend using either...
Published 05/03/2020 by Remy Guercio