Lambda Demos streamlines the process of hosting your own machine learning demos. Host a Gradio app using your existing repository URL in just a few clicks.
The Lambda Deep Learning Blog
Featured Posts
Recent Posts
Tired of waiting in a queue to try out Stable Diffusion or another ML app? Lambda GPU Cloud’s Demos feature makes it easy to host your own ML apps.
Published 05/18/2023 by Cody Brownstein
Lambda Cloud has deployed a fleet of NVIDIA H100 Tensor Core GPUs, making it one of the first providers to market with generally available, on-demand H100 GPUs. These high-performance GPUs enable faster training times, better model accuracy, and increased productivity.
Published 05/10/2023 by Kathy Bui
For the third consecutive year, Lambda has been chosen as NVIDIA Partner Network (NPN) Solution Integration Partner of the Year.
Published 04/04/2023 by Jaimie Renner
Lambda secured a $44 million Series B to accelerate the growth of our AI cloud. Funds will be used to deploy new H100 GPU capacity with high-speed network interconnects and develop features that will make Lambda the best cloud in the world for training AI.
Published 03/21/2023 by Stephen Balaban
In early April, NVIDIA H100 Tensor Core GPUs, the fastest GPUs on the market, will be added to Lambda Cloud. NVIDIA H100 80GB PCIe Gen5 instances will go live first, with SXM instances to follow shortly after.
Published 03/21/2023 by Mitesh Agrawal
Learn how to use mpirun to launch a LLaMA inference job across multiple cloud instances if you do not have a multi-GPU workstation or server. Although LLaMA is more memory efficient than previous foundation language models, it still requires multiple GPUs to run inference.
Published 03/14/2023 by Chuan Li
Lambda's GPU cloud has a new team feature that lets you invite your team members to join your account for easy collaboration.
Published 01/13/2023 by Kathy Bui
Native support for FP8 data types is here with the release of the NVIDIA H100 Tensor Core GPU. These new FP8 types can speed up training and inference.
Published 12/07/2022 by Jeremy Hummel
Lambda and Hugging Face are collaborating on a 2-week sprint to fine-tune OpenAI's Whisper model in as many languages as possible.
Published 12/01/2022 by Chuan Li
This blog walks through how to fine-tune Stable Diffusion to create a text-to-Naruto character model, emphasizing the importance of "prompt engineering". Try it out yourself, or use it to learn how to train your own Stable Diffusion variants.
Published 11/02/2022 by Eole Cervenka
In this blog, we outline the benefits of our new Reserved Cloud Cluster and show how Voltron Data is using it to work with large datasets.
Published 11/01/2022 by Lauren Watkins
In this blog post, we benchmark the RTX 4090 to assess its deep learning training performance and compare it against the RTX 3090, the flagship consumer GPU of the previous Ampere generation.
Published 10/31/2022 by Chuan Li