The Lambda Deep Learning Blog


Recent Posts

Lambda launches Vector One, a new single-GPU desktop PC

The Lambda Vector One is a single-GPU desktop PC built to tackle demanding AI/ML tasks, from fine-tuning Stable Diffusion to running Llama 2 7B.

Published 12/12/2023 by Samuel Park

Tesla A100 Server Total Cost of Ownership Analysis

This post discusses the Total Cost of Ownership (TCO) for a variety of Lambda A100 servers and clusters. We calculate the TCO for individual Hyperplane-A100 servers, compare that cost with renting an AWS p4d.24xlarge instance, and walk through the cost of building and operating A100 clusters.

Published 09/22/2021 by Chuan Li
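
To make the own-versus-rent comparison concrete, here is a minimal sketch of the arithmetic behind a TCO calculation. Every dollar figure, power draw, and rate below is a placeholder for illustration, not a number from the post or from AWS pricing.

```python
# Illustrative TCO arithmetic only -- all inputs are placeholders, not
# Lambda's or AWS's published figures.
def tco_owned(hardware_cost, power_kw, price_per_kwh, colo_per_month, years):
    """Up-front hardware plus electricity and colocation over the ownership period."""
    hours = years * 365 * 24
    return hardware_cost + power_kw * hours * price_per_kwh + colo_per_month * 12 * years

def tco_rented(hourly_rate, utilization, years):
    """Cloud cost for the same period at a given average utilization (0.0-1.0)."""
    return hourly_rate * utilization * years * 365 * 24

# Hypothetical 3-year comparison with made-up inputs
print(f"owned:  ${tco_owned(150_000, 6.5, 0.12, 1_000, 3):,.0f}")
print(f"rented: ${tco_rented(32.0, 1.0, 3):,.0f}")
```

Changing the utilization argument is the quickest way to see why the break-even point depends so heavily on how busy the hardware actually is.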

Lambda Echelon – a turn-key GPU cluster for your ML team

Introducing the Lambda Echelon, a GPU cluster designed for AI. It comes with the compute, storage, network, power, and support you need to tackle large-scale deep learning tasks. Echelon offers a turn-key solution to faster training, faster hyperparameter search, and faster inference.

Published 10/06/2020 by Stephen Balaban

Hyperplane-16 InfiniBand Cluster Total Cost of Ownership Analysis

This post uses our Total Cost of Ownership (TCO) calculator to examine the cost of a variety of Lambda Hyperplane-16 clusters. Configurations can optionally include 100 Gb/s EDR InfiniBand networking, storage servers, and a complete rack-stack-label-cable service.

Published 04/07/2020 by Stephen Balaban

Tracking system resource (GPU, CPU, etc.) utilization during training with the Weights & Biases Dashboard

Resource utilization tracking can help machine learning engineers improve their software pipeline and model performance. This post discusses how to use Weights & Biases to inspect the efficiency of TensorFlow training jobs.

Published 08/12/2019 by Chuan Li
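
As a rough illustration of the workflow described above (the project name and training loop are made up, not taken from the post), Weights & Biases starts recording GPU, CPU, memory, and disk utilization in the background as soon as a run is initialized, alongside whatever metrics you log yourself:

```python
import wandb

# Hypothetical project name; system metrics (GPU/CPU utilization, memory, disk)
# are collected automatically in the background once the run starts.
wandb.init(project="resource-tracking-demo")

for step in range(100):
    loss = 1.0 / (step + 1)      # stand-in for a real TensorFlow training step
    wandb.log({"loss": loss})    # user metrics; resource charts appear with the run's system metrics

wandb.finish()
```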

A Gentle Introduction to Multi GPU and Multi Node Distributed Training

This presentation is a high-level overview of the different types of training regimes you'll encounter as you move from single GPU to multi GPU to multi node distributed training. It describes where the computation happens, how the gradients are communicated, and how the models are updated and communicated.

Published 05/31/2019 by Stephen Balaban
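
The core idea behind the data-parallel regimes covered in the talk can be shown in a few lines. The toy example below is not from the presentation: it simulates four workers that each compute a gradient on their own data shard, averages those gradients (the all-reduce step), and applies the identical update on every replica.

```python
# Conceptual sketch of synchronous data-parallel training on a toy linear model.
import numpy as np

def grad(w, x, y):
    # gradient of mean squared error for the model y ~ w * x
    return 2 * np.mean((w * x - y) * x)

rng = np.random.default_rng(0)
x = rng.normal(size=1024)
y = 3.0 * x + rng.normal(scale=0.1, size=1024)

workers = np.array_split(np.arange(x.size), 4)   # 4 simulated GPUs/nodes
w, lr = 0.0, 0.1
for step in range(100):
    local_grads = [grad(w, x[idx], y[idx]) for idx in workers]  # per-worker backward pass
    g = np.mean(local_grads)                                    # all-reduce: average the gradients
    w -= lr * g                                                 # identical update on every replica
print(round(w, 3))  # converges toward the true slope of 3.0
```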

Titan V Deep Learning Benchmarks with TensorFlow

Titan V vs. RTX 2080 Ti vs. RTX 2080 vs. Titan RTX vs. Tesla V100 vs. GTX 1080 Ti vs. Titan Xp - TensorFlow benchmarks for neural net training.

Published 03/12/2019 by Michael Balaban

RTX 2080 Ti Deep Learning Benchmarks with TensorFlow

RTX 2080 Ti vs. RTX 2080 vs. Titan RTX vs. Tesla V100 vs. Titan V vs. GTX 1080 Ti vs. Titan Xp - TensorFlow benchmarks for neural net training.

Published 03/04/2019 by Stephen Balaban

Perform GPU, CPU, and I/O stress testing on Linux

How to stress test a system for simultaneous GPU and CPU loads using two stress tools (stress and gpu_burn) and three monitoring tools (htop, iotop, and nvidia-smi).

Published 02/17/2019 by Stephen Balaban
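
The monitoring side of that workflow can also be scripted. The sketch below (sampling interval and duration are arbitrary choices) polls nvidia-smi while stress or gpu_burn runs in another terminal, so utilization and temperature get logged rather than just watched:

```python
# Poll nvidia-smi once a second while a stress tool runs elsewhere, so GPU
# throttling or cooling problems show up in the log over time.
import subprocess
import time

QUERY = [
    "nvidia-smi",
    "--query-gpu=index,utilization.gpu,temperature.gpu,memory.used",
    "--format=csv,noheader",
]

for _ in range(60):                      # sample for one minute
    out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
    print(time.strftime("%H:%M:%S"), out.stdout.strip())
    time.sleep(1)
```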

Titan RTX Deep Learning Benchmarks

Titan RTX vs. 2080 Ti vs. 1080 Ti vs. Titan Xp vs. Titan V vs. Tesla V100. For this post, Lambda engineers benchmarked the Titan RTX's deep learning performance vs. other common GPUs. We measured the Titan RTX's single-GPU training performance on ResNet50, ResNet152, Inception3, Inception4, VGG16, AlexNet, and SSD.

Published 12/26/2018 by Michael Balaban

Crowd-Sourced Deep Learning GPU Benchmarks from the Community

We open-sourced the benchmarking code we use at Lambda so that anyone can reproduce the benchmarks we publish or run their own.

Published 10/12/2018 by Stephen Balaban
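
For flavor, this is roughly what a generic single-GPU run of TensorFlow's standard tf_cnn_benchmarks script looks like; the flags and wrapper in Lambda's open-source harness may differ, so treat this as an assumed invocation rather than the published one:

```python
# Assumed invocation of the standard tf_cnn_benchmarks script from the
# tensorflow/benchmarks repository -- not necessarily Lambda's exact wrapper.
import subprocess

cmd = [
    "python", "tf_cnn_benchmarks.py",   # lives in scripts/tf_cnn_benchmarks/
    "--num_gpus=1",
    "--model=resnet50",
    "--batch_size=64",
]
subprocess.run(cmd, check=True)         # reports images/sec when it finishes
```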

Deep Learning GPU Benchmarks - V100 vs 2080 Ti vs 1080 Ti vs Titan V

What's the best GPU for deep learning? The 2080 Ti. We benchmark the 2080 Ti vs. the Titan V, V100, and 1080 Ti.

Published 10/08/2018 by Stephen Balaban

...
