The Lambda Deep Learning Blog

Recent Posts

Benchmarking ZeRO-Inference on the NVIDIA GH200 Grace Hopper Superchip

Benchmarks comparing inference performance of the NVIDIA GH200 Grace Hopper Superchip, enhanced by ZeRO-Inference, to NVIDIA H100 and A100 Tensor Core GPUs.

Published 12/20/2023 by Chuan Li

Unleashing the power of Transformers with NVIDIA Transformer Engine

Benchmarks of NVIDIA’s Transformer Engine, which boosts FP8 performance by an impressive 60% in GPT-3-style model tests on NVIDIA H100 Tensor Core GPUs.

Published 11/21/2023 by Chuan Li

DeepChat 3-Step Training At Scale: Lambda’s Instances of NVIDIA H100 SXM5 vs A100 SXM4

GPU benchmarks on Lambda’s offering of the NVIDIA H100 SXM5 vs the NVIDIA A100 SXM4 using DeepChat’s 3-step training example.

Published 10/12/2023 by Chuan Li

How FlashAttention-2 Accelerates LLMs on NVIDIA H100 and A100 GPUs

How to use FlashAttention-2 on Lambda Cloud, including H100 vs A100 benchmark results for training GPT-3-style models with the new attention algorithm.

Published 08/24/2023 by Chuan Li

NVIDIA GeForce RTX 4090 vs RTX 3090 Deep Learning Benchmark

RTX 4090 vs RTX 3090 benchmarks to assess deep learning training performance, including training throughput/$, throughput/watt, and multi-GPU scaling.

Published 10/31/2022 by Chuan Li

All You Need Is One GPU: Inference Benchmark for Stable Diffusion

Lambda presents Stable Diffusion benchmarks across different GPUs, including the A100, RTX 3090, RTX A6000, RTX 3080, and RTX 8000, as well as various CPUs.

Published 10/05/2022 by Eole Cervenka

NVIDIA H100 Tensor Core GPU - Deep Learning Performance Analysis

This article examines the performance and scalability of H100 GPUs and the reasons to upgrade your ML infrastructure following NVIDIA's H100 release.

Published 10/05/2022 by Chuan Li

NVIDIA A40 Deep Learning Benchmarks

GPU benchmarks on NVIDIA A40 GPUs with 48 GB of GDDR6 VRAM, including performance comparisons to the NVIDIA V100, RTX 8000, RTX 6000, and RTX 5000.

Published 11/30/2021 by Chuan Li

Tesla A100 Server Total Cost of Ownership Analysis

This post discusses the Total Cost of Ownership (TCO) for a variety of Lambda A100 servers and clusters. We calculate the TCO for individual Hyperplane-A100 servers, compare the cost with renting an AWS p4d.24xlarge instance, and walk through the cost of building and operating A100 clusters.

Published 09/22/2021 by Chuan Li

RTX A6000 vs RTX 3090 Deep Learning Benchmarks

PyTorch benchmarks of the RTX A6000 and RTX 3090 for convnets and language models - both 32-bit and mixed-precision performance.

Published 08/09/2021 by Chuan Li

A100 vs V100 Deep Learning Benchmarks

PyTorch & TensorFlow benchmarks of the Tesla A100 and V100 for convnets and language models - both 32-bit and mixed-precision performance.

Published 01/28/2021 by Michael Balaban

RTX A6000 Deep Learning Benchmarks

PyTorch and TensorFlow training speeds on models like ResNet-50, SSD, and Tacotron 2. Compare performance of the RTX 3090, 3080, A100, V100, and A6000.

Published 01/04/2021 by Michael Balaban

NVIDIA A100 GPU Benchmarks for Deep Learning

Benchmarks for ResNet-152, Inception v3, Inception v4, VGG-16, AlexNet, SSD300, and ResNet-50 using the NVIDIA A100 GPU and DGX A100 server.

Published 05/22/2020 by Stephen Balaban
