The Lambda Deep Learning Blog

Recent Posts

Tesla A100 Server Total Cost of Ownership Analysis

This post discusses the Total Cost of Ownership (TCO) for a variety of Lambda A100 servers and clusters. We first calculate the TCO for individual Hyperplane-A100 servers and compare it with the cost of renting an AWS p4d.24xlarge instance, which has a similar hardware and software setup. We then walk you through the cost of building and operating A100 clusters.

Published 09/22/2021 by Chuan Li
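The own-vs-rent comparison in a TCO analysis boils down to simple arithmetic: amortize the server's purchase price plus operating costs over its service life, and compare the result with the cloud's hourly rate. A minimal sketch of that calculation — all prices below are hypothetical placeholders, not Lambda's or AWS's actual figures:

```python
def tco_per_gpu_hour(purchase_price, annual_opex, lifetime_years, num_gpus,
                     utilization=1.0):
    """Amortized cost per GPU-hour for an owned server.

    All inputs here are hypothetical examples, not real pricing.
    """
    total_cost = purchase_price + annual_opex * lifetime_years
    total_gpu_hours = num_gpus * lifetime_years * 365 * 24 * utilization
    return total_cost / total_gpu_hours

# Hypothetical 8-GPU server: $150k up front, $10k/yr power + colocation, 3-year life.
owned = tco_per_gpu_hour(150_000, 10_000, 3, 8)
# Hypothetical on-demand rate: $32/hr for an 8-GPU cloud instance.
cloud = 32 / 8
print(f"owned: ${owned:.2f}/GPU-hr, cloud: ${cloud:.2f}/GPU-hr")
```

The `utilization` parameter matters most in practice: an owned server only beats on-demand pricing if it is kept busy, which is why TCO posts typically state their utilization assumption up front.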

Lambda Echelon – a turnkey GPU cluster for your ML team

Lambda Echelon is a GPU cluster designed for AI. It comes with the compute, storage, network, power, and support you need to tackle large-scale deep learning tasks. Echelon offers a turnkey solution to faster training, faster hyperparameter search, and faster inference.

Published 10/06/2020 by Stephen Balaban

Hyperplane-16 InfiniBand Cluster Total Cost of Ownership Analysis

In this post we'll walk through using our Total Cost of Ownership (TCO) calculator to examine the cost of a variety of Lambda Hyperplane-16 clusters, with the option to include 100 Gb/s EDR InfiniBand networking, storage servers, and complete rack-stack-label-cable service.

Published 04/07/2020 by Stephen Balaban

Tracking system resource (GPU, CPU, etc.) utilization during training with the Weights & Biases Dashboard

One of the questions we're asked most often at Lambda Labs is, "how do I track resource utilization for deep learning jobs?" Resource utilization tracking can help machine learning engineers improve both their software pipeline and model performance. I recently came across a tool for this called Weights & Biases.

Published 08/12/2019 by Chuan Li

A Gentle Introduction to Multi GPU and Multi Node Distributed Training

This presentation is a high-level overview of the different types of training regimes that you'll encounter as you move from single GPU to multi GPU to multi node distributed training. It briefly describes where the computation happens, how the gradients are communicated, and how the models are updated and communicated.

Published 05/31/2019 by Stephen Balaban
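In the data-parallel regime the presentation covers, each worker computes gradients on its own shard of the batch, the gradients are averaged across workers (an all-reduce), and every worker then applies the identical update. A minimal single-process sketch of that communication pattern, using a toy quadratic loss in place of a real model and a plain function in place of NCCL/MPI:

```python
import numpy as np

def local_gradient(w, x_shard, y_shard):
    # Gradient of the mean squared error 0.5*(w*x - y)^2 on this worker's shard.
    return np.mean((w * x_shard - y_shard) * x_shard)

def all_reduce_mean(grads):
    # Stand-in for an all-reduce: every worker ends up with the mean gradient.
    return sum(grads) / len(grads)

rng = np.random.default_rng(0)
x = rng.normal(size=64)
y = 3.0 * x                           # the true weight is 3.0
shards = np.split(np.arange(64), 4)   # 4 simulated workers, 16 samples each

w, lr = 0.0, 0.5
for _ in range(50):
    grads = [local_gradient(w, x[s], y[s]) for s in shards]
    w -= lr * all_reduce_mean(grads)  # identical update on every worker

print(round(w, 3))
```

Because the shards are equal-sized, the averaged gradient equals the full-batch gradient, so data-parallel training with synchronized updates is mathematically equivalent to training on one large batch.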

Titan V Deep Learning Benchmarks with TensorFlow

Titan V vs. RTX 2080 Ti vs. RTX 2080 vs. Titan RTX vs. Tesla V100 vs. GTX 1080 Ti vs. Titan Xp - TensorFlow benchmarks for neural net training.

Published 03/12/2019 by Michael Balaban

RTX 2080 Ti Deep Learning Benchmarks with TensorFlow

RTX 2080 Ti vs. RTX 2080 vs. Titan RTX vs. Tesla V100 vs. Titan V vs. GTX 1080 Ti vs. Titan Xp - TensorFlow benchmarks for neural net training.

Published 03/04/2019 by Stephen Balaban

Perform GPU, CPU, and I/O stress testing on Linux

CPU, GPU, and I/O utilization monitoring using tmux, htop, iotop, and nvidia-smi. This stress test is running on a Lambda GPU Cloud 4x GPU instance. Often you'll want to put a system through its paces after it's been set up.

Published 02/17/2019 by Stephen Balaban
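The post drives monitoring from a terminal with tmux, htop, iotop, and nvidia-smi; the same whole-system CPU figure htop reports can also be sampled programmatically. A minimal Linux-only sketch that reads `/proc/stat` twice and computes the busy fraction between samples (field layout assumed from the proc(5) man page):

```python
import time

def cpu_times():
    # First line of /proc/stat: "cpu user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        fields = [int(v) for v in f.readline().split()[1:]]
    idle = fields[3] + fields[4]  # idle + iowait are "not busy" time
    return idle, sum(fields)

def cpu_utilization(interval=0.5):
    # Busy fraction across all cores over the sampling interval.
    idle0, total0 = cpu_times()
    time.sleep(interval)
    idle1, total1 = cpu_times()
    busy = (total1 - total0) - (idle1 - idle0)
    return busy / max(total1 - total0, 1)

print(f"CPU utilization: {cpu_utilization():.0%}")
```

GPU utilization has no equivalent procfs file; for that, shelling out to `nvidia-smi` (as the post does interactively) remains the standard approach.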

Titan RTX Deep Learning Benchmarks

Titan RTX vs. 2080 Ti vs. 1080 Ti vs. Titan Xp vs. Titan V vs. Tesla V100. For this post, Lambda engineers benchmarked the Titan RTX's deep learning performance vs. other common GPUs. We measured the Titan RTX's single-GPU training performance on ResNet50, ResNet152, Inception3, Inception4, VGG16, AlexNet, and SSD.

Published 12/26/2018 by Michael Balaban

Crowd Sourced Deep Learning GPU Benchmarks from the Community

We open-sourced the benchmarking code we use at Lambda Labs so that anybody can reproduce the benchmarks we publish or run their own. We encourage people to email us with their results and will continue to publish those results here.

Published 10/12/2018 by Stephen Balaban
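A training benchmark harness like the one described above typically follows one core pattern: run a few warm-up iterations (to absorb JIT compilation and cache effects), then time the remaining iterations and report a robust statistic. That pattern can be sketched framework-free — `step_fn` below is a toy stand-in for a real training step, not Lambda's actual harness:

```python
import time
import statistics

def benchmark(step_fn, warmup=3, iters=10):
    """Time step_fn: untimed warm-up runs first, then timed runs.

    Returns the median seconds per iteration (robust to outliers).
    """
    for _ in range(warmup):
        step_fn()
    timings = []
    for _ in range(iters):
        t0 = time.perf_counter()
        step_fn()
        timings.append(time.perf_counter() - t0)
    return statistics.median(timings)

# Toy CPU workload standing in for a training step.
sec = benchmark(lambda: sum(i * i for i in range(100_000)))
print(f"{1.0 / sec:.1f} iterations/sec")
```

Reporting the median rather than the mean keeps a single slow iteration (e.g. a background process waking up) from skewing the published number.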

Deep Learning GPU Benchmarks - V100 vs 2080 Ti vs 1080 Ti vs Titan V

What's the best GPU for Deep Learning? The 2080 Ti. We benchmark the 2080 Ti vs the Titan V, V100, and 1080 Ti.

Published 10/08/2018 by Stephen Balaban
