Instructions for getting TensorFlow and PyTorch running on NVIDIA's GeForce RTX 30 Series GPUs (Ampere), including RTX 3090, RTX 3080, and RTX 3070.
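Before following the full setup, a quick sanity check can confirm whether each framework already sees the GPU. The sketch below is an illustration, not from the article itself; the version notes in the comments (CUDA 11.0+, TensorFlow >= 2.4, PyTorch >= 1.7.1 for Ampere) are approximate assumptions.

```python
# Hedged sanity check: Ampere GPUs (RTX 3090/3080/3070) require CUDA 11.0+,
# which roughly corresponds to TensorFlow >= 2.4 and PyTorch >= 1.7.1 builds.
# Each import is guarded so the script runs even if a framework is missing.
def check_frameworks():
    """Return {framework: True/False/None} — None means not installed."""
    report = {}
    try:
        import torch
        report["torch"] = torch.cuda.is_available()
    except ImportError:
        report["torch"] = None
    try:
        import tensorflow as tf
        report["tensorflow"] = len(tf.config.list_physical_devices("GPU")) > 0
    except ImportError:
        report["tensorflow"] = None
    return report

if __name__ == "__main__":
    print(check_frameworks())
```

If either framework reports `False` on an RTX 30 Series card, the installed build likely predates Ampere support and needs upgrading.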
The Lambda Deep Learning Blog
Recent Posts
Published 08/10/2021 by Michael Balaban
PyTorch & TensorFlow benchmarks of the Tesla A100 and V100 for convnets and language models - both 32-bit and mixed precision performance.
Published 01/28/2021 by Michael Balaban
PyTorch and TensorFlow training speeds on models like ResNet-50, SSD, and Tacotron 2. Compare performance of the RTX 3090, 3080, A100, V100, and A6000.
Published 01/04/2021 by Michael Balaban
It’s important to take available space, power, cooling, and relative performance into account when deciding which RTX 3000 series cards to include in your next deep learning workstation.
Published 09/14/2020 by Michael Balaban
A balanced perspective on OpenAI's GPT-3. We summarize how the A.I. research community is thinking about OpenAI's new language model.
Published 07/20/2020 by Michael Balaban
This blog summarizes our GPU benchmark for training State of the Art (SOTA) deep learning models. We measure each GPU's performance by batch capacity as well as...
Published 02/18/2020 by Michael Balaban
Titan V vs. RTX 2080 Ti vs. RTX 2080 vs. Titan RTX vs. Tesla V100 vs. GTX 1080 Ti vs. Titan Xp - TensorFlow benchmarks for neural net training.
Published 03/12/2019 by Michael Balaban
Titan RTX vs. 2080 Ti vs. 1080 Ti vs. Titan Xp vs. Titan V vs. Tesla V100. For this post, Lambda engineers benchmarked the Titan RTX's deep learning performance vs. other common GPUs. We measured the Titan RTX's single-GPU training performance on ResNet50, ResNet152, Inception3, Inception4, VGG16, AlexNet, and SSD.
Published 12/26/2018 by Michael Balaban