The Lambda Deep Learning Blog

Recent Posts

All You Need Is One GPU: Inference Benchmark for Stable Diffusion

Lambda presents Stable Diffusion inference benchmarks across different GPUs, including the A100, RTX 3090, RTX A6000, RTX 3080, and RTX 8000, as well as various CPUs.

Published 10/05/2022 by Eole Cervenka

Keeping an eye on your GPUs - GPU monitoring tools compared

A comparison of the best tools for monitoring GPU utilization and performance statistics; a minimal monitoring sketch follows this entry.

Published 03/29/2022 by Justin Pinkney
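
To illustrate the kind of per-GPU statistics these monitoring tools report, here is a minimal Python sketch using the pynvml bindings to NVIDIA's NVML library (assumed installed, e.g. via the nvidia-ml-py package). It simply polls utilization and memory usage for each visible GPU; it is a rough sketch of the idea, not the tooling compared in the post.

    # Minimal GPU monitoring sketch using the pynvml bindings to NVIDIA's NVML.
    # Assumes the nvidia-ml-py package is installed and an NVIDIA driver is present.
    import time
    import pynvml

    pynvml.nvmlInit()
    try:
        count = pynvml.nvmlDeviceGetCount()
        for _ in range(3):  # poll a few times; a real monitor would loop until interrupted
            for i in range(count):
                handle = pynvml.nvmlDeviceGetHandleByIndex(i)
                name = pynvml.nvmlDeviceGetName(handle)
                if isinstance(name, bytes):  # older pynvml versions return bytes
                    name = name.decode()
                util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu and .memory, in percent
                mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # .used and .total, in bytes
                print(f"GPU {i} ({name}): util {util.gpu}%, "
                      f"memory {mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB")
            time.sleep(1)
    finally:
        pynvml.nvmlShutdown()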

Best GPU for Deep Learning in 2022 (so far)

While waiting for NVIDIA's next-generation consumer and professional GPUs, here are the best GPUs for deep learning currently available as of March 2022.

Published 02/28/2022 by Chuan Li

NVIDIA A40 Deep Learning Benchmarks

GPU benchmarks on NVIDIA A40 GPUs with 48 GB of GDDR6 VRAM, including performance comparisons to the NVIDIA V100, RTX 8000, RTX 6000, and RTX 5000.

Published 11/30/2021 by Chuan Li

RTX A6000 Deep Learning Benchmarks

PyTorch and TensorFlow training speeds on models such as ResNet-50, SSD, and Tacotron 2, comparing the performance of the RTX 3090, RTX 3080, A100, V100, and A6000.

Published 01/04/2021 by Michael Balaban

Deep Learning Hardware Deep Dive – RTX 3090, RTX 3080, and RTX 3070

It’s important to take available space, power, cooling, and relative performance into account when deciding which RTX 3000-series cards to include in your next deep learning workstation.

Published 09/14/2020 by Michael Balaban

NVIDIA A100 GPU Benchmarks for Deep Learning

Benchmarks for ResNet-152, Inception v3, Inception v4, VGG-16, AlexNet, SSD300, and ResNet-50 using the NVIDIA A100 GPU and DGX A100 server.

Published 05/22/2020 by Stephen Balaban

Choosing the Best GPU for Deep Learning in 2020

GPU benchmarks for training state-of-the-art (SOTA) deep learning models. We measure each GPU's performance by batch capacity and other metrics.

Published 02/18/2020 by Michael Balaban
