Instructions for getting TensorFlow and PyTorch running on NVIDIA's GeForce RTX 30 Series GPUs (Ampere), including RTX 3090, RTX 3080, and RTX 3070.
Published 08/10/2021 by Michael Balaban
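As a quick sanity check for the setup described in that post, the following sketch (assuming TensorFlow 2.x and a CUDA-enabled PyTorch build are already installed) verifies that both frameworks can see the GPU:

```python
# Verify that TensorFlow and PyTorch can both see the Ampere GPU.
# Assumes TensorFlow 2.x and a CUDA-enabled PyTorch build are installed.
import tensorflow as tf
import torch

# TensorFlow: list the visible GPU devices.
print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))

# PyTorch: check CUDA availability and report the device name.
if torch.cuda.is_available():
    print("PyTorch CUDA device:", torch.cuda.get_device_name(0))
else:
    print("PyTorch cannot see a CUDA device")
```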
This tutorial explains the basics of TensorFlow 2.0 using image classification as the example: 1) a data pipeline with the Dataset API; 2) training, evaluating, saving, and restoring models with Keras; 3) multi-GPU training with a distribution strategy; 4) customized training with callbacks.
Published 10/01/2019 by Chuan Li
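For the tutorial above, here is a minimal, self-contained sketch of the first two pieces (a Dataset API pipeline feeding a Keras model) using MNIST for brevity; it is illustrative only, not the tutorial's own code:

```python
# Minimal TensorFlow 2 sketch: Dataset API pipeline + Keras train/evaluate/save.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# 1) Data pipeline with the Dataset API.
train_ds = (tf.data.Dataset.from_tensor_slices((x_train / 255.0, y_train))
            .shuffle(10_000)
            .batch(64))
test_ds = tf.data.Dataset.from_tensor_slices((x_test / 255.0, y_test)).batch(64)

# 2) Train, evaluate, and save a model with Keras.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=2, validation_data=test_ds)
model.save("mnist_model.h5")  # HDF5 format; restore later with tf.keras.models.load_model
```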
Distributed training lets deep learning tasks scale up, so larger models can be trained on more extensive data. In this tutorial, we explain how to run distributed training across multiple nodes.
Published 06/07/2019 by Chuan Li
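Purely as an illustration of one common multi-node approach in TensorFlow itself (tf.distribute.MultiWorkerMirroredStrategy, which may differ from the mechanism the tutorial uses), the core pattern is to build and compile the model inside the strategy scope, with each node configured via the TF_CONFIG environment variable:

```python
# Sketch of multi-worker distributed training with tf.distribute.
# Each node sets the TF_CONFIG environment variable (cluster spec plus its
# own task index) before launching this same script.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # The model and optimizer must be created inside the strategy scope.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Toy data; in practice each worker reads its shard of the real dataset.
x = np.random.rand(1024, 32).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, epochs=2, batch_size=64)
```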
This tutorial explains how early stopping is implemented in TensorFlow. The key is the tf.keras.callbacks.EarlyStopping callback. Early stopping is triggered by monitoring whether a certain quantity (for example, the validation loss) has improved over the most recent period of training.
Published 06/06/2019 by Chuan Li
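The core of that recipe is small; a minimal, self-contained sketch (toy data, not the tutorial's code) looks like this:

```python
# Early stopping in TensorFlow: stop when the monitored quantity
# ('val_loss' here) has not improved for `patience` consecutive epochs.
import numpy as np
import tensorflow as tf

# Toy regression data so the example runs end to end.
x = np.random.rand(512, 8).astype("float32")
y = np.random.rand(512, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",          # quantity to watch
    patience=5,                  # epochs with no improvement before stopping
    restore_best_weights=True,   # roll back to the best weights seen
)

model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop])
```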
This tutorial explains how to use checkpoints to save and restore TensorFlow models during training. The key is the tf.keras.callbacks.ModelCheckpoint callback to save the model; set initial_epoch in the model.fit call to resume training from a saved checkpoint.
Published 06/06/2019 by Chuan Li
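A minimal, self-contained sketch of that save-and-resume pattern (toy data, not the tutorial's code):

```python
# Save weights after every epoch with ModelCheckpoint, then resume training
# from a saved checkpoint using initial_epoch.
import numpy as np
import tensorflow as tf

x = np.random.rand(512, 8).astype("float32")
y = np.random.rand(512, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# One weights file per epoch, e.g. weights.05.weights.h5 after epoch 5.
checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="weights.{epoch:02d}.weights.h5", save_weights_only=True)
model.fit(x, y, epochs=5, callbacks=[checkpoint])

# Later: restore the last checkpoint and continue where training stopped.
model.load_weights("weights.05.weights.h5")
model.fit(x, y, initial_epoch=5, epochs=10, callbacks=[checkpoint])
```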
This tutorial explains how to do transfer learning with TensorFlow 2. We will cover: handling a customized dataset, restoring a backbone with Keras's applications API, and restoring a backbone from disk.
Published 06/05/2019 by Chuan Li
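For the applications-API part, a minimal sketch (ResNet50 chosen here only as an illustration; the tutorial may use a different backbone, and the class count is hypothetical) looks like this:

```python
# Transfer learning sketch: restore an ImageNet-pretrained backbone from the
# Keras applications API, freeze it, and attach a new classification head.
import tensorflow as tf

backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
backbone.trainable = False  # freeze the backbone for feature extraction

num_classes = 5  # hypothetical number of classes in the custom dataset
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of (image, label) pairs built from the
# custom dataset; omitted here.
# model.fit(train_ds, epochs=5)
```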
Benchmarks of neural network training performance on the RTX 2080 Ti, RTX 2080, Titan RTX, Tesla V100, Titan V, GTX 1080 Ti, and Titan Xp.
Published 03/04/2019 by Stephen Balaban
Update June 5th 2020: OpenAI has announced a successor to GPT-2 in a newly published paper. Check out our GPT-3 model overview [https://lambdalabs.com/blog/demystifying-gpt-3/]. OpenAI recently published a blog post on their GPT-2 language model [https://blog.openai.com/better-language-models/]. This tutorial shows you how to run the GPT-2 model yourself.
Published 02/16/2019 by Stephen Balaban
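If you just want to try GPT-2 quickly today, a minimal sketch using the Hugging Face transformers library is below; note this is not the approach in the original post, which runs the code released by OpenAI, and it assumes transformers is installed:

```python
# Quick GPT-2 text generation via the Hugging Face transformers library.
# NOT the original post's method (which uses OpenAI's released code).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
samples = generator("Deep learning on a single GPU",
                    max_new_tokens=40, num_return_sequences=1)
print(samples[0]["generated_text"])
```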
This tutorial is about building a character-based text generator using a simple two-layer LSTM. It walks you through the data preparation and the network architecture. A TensorFlow implementation is available at the end of the tutorial.
Published 02/08/2019 by Chuan Li
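A minimal sketch of that idea (a tiny toy corpus and a two-layer LSTM predicting the next character; not the tutorial's exact architecture or data) might look like this:

```python
# Character-level language model: a simple two-layer LSTM that predicts the
# next character from the previous `seq_len` characters.
import numpy as np
import tensorflow as tf

text = "hello world, hello lstm. " * 200          # toy corpus
chars = sorted(set(text))
char_to_id = {c: i for i, c in enumerate(chars)}
ids = np.array([char_to_id[c] for c in text])

seq_len = 40
x = np.stack([ids[i:i + seq_len] for i in range(len(ids) - seq_len)])
y = ids[seq_len:]  # next character for each input window

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 32),
    tf.keras.layers.LSTM(128, return_sequences=True),  # first LSTM layer
    tf.keras.layers.LSTM(128),                          # second LSTM layer
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(x, y, batch_size=64, epochs=2)
```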