Instructions for getting TensorFlow and PyTorch running on NVIDIA's GeForce RTX 30 Series GPUs (Ampere), including RTX 3090, RTX 3080, and RTX 3070.
The Lambda Deep Learning Blog
This tutorial explains the basics of TensorFlow 2.0 with image classification as the example: 1) data pipeline with the Dataset API; 2) training, evaluating, saving, and restoring models with Keras; 3) multi-GPU training with a distribution strategy; 4) customized training with callbacks.
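The first two points above can be sketched in a few lines. This is a minimal illustration, not the tutorial's actual code: it uses synthetic 28x28 "images" instead of a real dataset, and the model and file name are placeholders.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 100 grayscale 28x28 "images", 10 classes.
x = np.random.rand(100, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=(100,))

# 1) Data pipeline with the tf.data Dataset API: shuffle and batch.
ds = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(100).batch(32)

# 2) Train, save, and restore a model with Keras.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(ds, epochs=1, verbose=0)

model.save("toy_model.keras")  # placeholder path
restored = tf.keras.models.load_model("toy_model.keras")
```

The restored model predicts exactly like the original, since both the architecture and the trained weights are serialized together.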
Distributed training scales up deep learning tasks so that bigger models can be trained on more extensive data. In this tutorial, we explain how to run distributed training across multiple nodes.
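One common way to do this in TensorFlow is `tf.distribute.MultiWorkerMirroredStrategy`. A minimal sketch follows; the `TF_CONFIG` hosts in the comment are placeholders, and with `TF_CONFIG` unset the strategy falls back to a single local worker, so the snippet still runs on one machine.

```python
import numpy as np
import tensorflow as tf

# On each node you would set TF_CONFIG to describe the cluster, e.g.:
# os.environ["TF_CONFIG"] = json.dumps({
#     "cluster": {"worker": ["host1:12345", "host2:12345"]},  # placeholder hosts
#     "task": {"type": "worker", "index": 0},  # this node's role
# })
# With TF_CONFIG unset, this behaves as a single-worker cluster.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

# Variables must be created under the strategy scope so they are
# mirrored (and gradients all-reduced) across workers.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")
history = model.fit(x, y, epochs=1, batch_size=16, verbose=0)
```

Each worker runs this same script; the strategy keeps model replicas in sync by all-reducing gradients after every step.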
This tutorial explains how early stopping is implemented in TensorFlow. The key is the tf.keras.callbacks.EarlyStopping callback. Early stopping is triggered by monitoring whether a chosen quantity (typically the validation loss) has stopped improving over a recent window of epochs.
This tutorial explains how to use checkpoints to save and restore TensorFlow models during training. The key is the tf.keras.callbacks.ModelCheckpoint callback for saving the model; set initial_epoch in the model.fit call to resume training from a previously saved checkpoint.
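A minimal sketch of that save-then-resume flow, with synthetic data; the checkpoint path and epoch counts are illustrative:

```python
import numpy as np
import tensorflow as tf

x = np.random.rand(100, 4).astype("float32")
y = np.random.rand(100, 1).astype("float32")

def build_model():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

ckpt_path = "ckpt.weights.h5"  # placeholder path
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    ckpt_path, save_weights_only=True)

# First run: train for 3 epochs, saving weights after each epoch.
model = build_model()
model.fit(x, y, epochs=3, callbacks=[checkpoint], verbose=0)

# Resume: rebuild the model, load the saved weights, and continue.
# initial_epoch=3 tells Keras that epochs 0-2 are already done, so
# this call trains epochs 3 and 4 only.
resumed = build_model()
resumed.load_weights(ckpt_path)
resumed_history = resumed.fit(x, y, initial_epoch=3, epochs=5,
                              callbacks=[checkpoint], verbose=0)
```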
This tutorial explains how to do transfer learning with TensorFlow 2. We will cover: handling a customized dataset, restoring a backbone with the Keras applications API, and restoring a backbone from disk.
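The backbone-from-the-applications-API step can be sketched as follows. In practice you would pass `weights="imagenet"` to get pretrained features; `weights=None` here only avoids a download in this illustration, and the input size and class count are placeholders.

```python
import tensorflow as tf

# Restore a backbone with the Keras applications API, without its
# ImageNet classification head (include_top=False).
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None)
backbone.trainable = False  # freeze the backbone: feature extraction only

# Attach a new head for the target task (5 classes, illustrative).
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Only the new head's weights are updated during training; unfreezing some backbone layers afterwards (fine-tuning) is a common second stage.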
RTX 2080 Ti vs. RTX 2080 vs. Titan RTX vs. Tesla V100 vs. Titan V vs. GTX 1080 Ti vs. Titan Xp: benchmarks for neural network training.
Update June 5th, 2020: OpenAI has announced a successor to GPT-2 in a newly published paper. Check out our GPT-3 model overview [https://lambdalabs.com/blog/demystifying-gpt-3/]. OpenAI recently published a blog post on their GPT-2 language model [https://blog.openai.com/better-language-models/]. This tutorial shows you how to run the GPT-2 model.
This tutorial is about building a character-based text generator using a simple two-layer LSTM. It walks you through the data preparation and the network architecture; a TensorFlow implementation is available at the end of the tutorial.
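The core idea can be sketched compactly. This is a toy version, not the tutorial's implementation: the corpus is a short hard-coded string, and the sequence length, embedding size, and LSTM widths are illustrative.

```python
import numpy as np
import tensorflow as tf

# Toy corpus; a real run would use a large text file.
text = "hello world. hello tensorflow. hello lambda. "
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Build (input sequence, next character) training pairs.
seq_len = 10
encoded = np.array([char_to_idx[c] for c in text])
xs = np.stack([encoded[i:i + seq_len]
               for i in range(len(encoded) - seq_len)])
ys = encoded[seq_len:]

# Two stacked LSTM layers over character embeddings, with a softmax
# over the vocabulary to predict the next character.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(len(chars), 16),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(xs, ys, epochs=2, verbose=0)

# Distribution over the next character for one seed sequence; sampling
# from it repeatedly generates text one character at a time.
probs = model.predict(xs[:1], verbose=0)
```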