The Lambda Deep Learning Blog

Recent Posts

Lambda launches new Hyperplane Server with NVIDIA H100 GPUs and AMD EPYC 9004 series CPUs

Lambda has launched a new Hyperplane server combining the fastest GPU on the market, NVIDIA H100, with the world’s best data center CPU, AMD EPYC 9004.

Published 09/07/2023 by Maxx Garrison

Tesla A100 Server Total Cost of Ownership Analysis

This post discusses the Total Cost of Ownership (TCO) for a variety of Lambda A100 servers and clusters. We calculate the TCO for individual Hyperplane-A100 servers, compare that cost with renting an AWS p4d.24xlarge instance, and walk through the cost of building and operating A100 clusters. A rough sketch of the own-versus-rent arithmetic follows this entry.

Published 09/22/2021 by Chuan Li
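
The full post works through Lambda's actual numbers; purely as an illustration of the kind of own-versus-rent arithmetic the summary describes, here is a minimal Python sketch that amortizes an assumed purchase price and operating cost over a service life and compares the result with an assumed cloud on-demand rate. Every figure below is a placeholder, not a number from the post or current AWS pricing.

```python
# Back-of-the-envelope TCO comparison: buying a GPU server vs. renting a
# comparable cloud instance by the hour. All prices are illustrative
# placeholders, not figures from the Lambda post or current AWS pricing.

HOURS_PER_YEAR = 24 * 365


def on_prem_cost_per_hour(purchase_price: float,
                          annual_opex: float,
                          years_of_service: float) -> float:
    """Amortize purchase price plus recurring costs (power, colocation,
    support) over the server's service life, assuming 24/7 use."""
    total = purchase_price + annual_opex * years_of_service
    return total / (years_of_service * HOURS_PER_YEAR)


def break_even_hours(purchase_price: float,
                     annual_opex: float,
                     cloud_rate_per_hour: float) -> float:
    """Hours of use after which owning becomes cheaper than renting,
    assuming the cloud rate exceeds the hourly operating cost."""
    opex_per_hour = annual_opex / HOURS_PER_YEAR
    return purchase_price / (cloud_rate_per_hour - opex_per_hour)


if __name__ == "__main__":
    # Hypothetical 8-GPU server: $150k up front, $20k/year to operate, 3-year life.
    purchase, opex, years = 150_000, 20_000, 3
    # Hypothetical on-demand rate for a comparable 8-GPU cloud instance.
    cloud_rate = 32.77

    print(f"owned (amortized):  ${on_prem_cost_per_hour(purchase, opex, years):.2f}/hr")
    print(f"rented (on-demand): ${cloud_rate:.2f}/hr")
    print(f"break-even after ~{break_even_hours(purchase, opex, cloud_rate):,.0f} hours of use")
```

The full post covers cluster-level costs and other factors that this sketch deliberately ignores.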

Lambda Echelon – a turn-key GPU cluster for your ML team

Introducing the Lambda Echelon, a GPU cluster designed for AI. It comes with the compute, storage, networking, power, and support you need to tackle large-scale deep learning tasks. Echelon is a turn-key solution for faster training, faster hyperparameter search, and faster inference.

Published 10/06/2020 by Stephen Balaban

Hyperplane-16 InfiniBand Cluster Total Cost of Ownership Analysis

This post uses our Total Cost of Ownership (TCO) calculator to examine the cost of a variety of Lambda Hyperplane-16 clusters. Configurations can optionally include 100 Gb/s EDR InfiniBand networking, storage servers, and a complete rack-stack-label-cable service.

Published 04/07/2020 by Stephen Balaban

Training Neural Networks in Record Time with the Hyperplane-16

Scaling out deep learning infrastructure becomes easier with 16 NVIDIA Tesla V100 GPUs and preinstalled frameworks like TensorFlow, Keras, and PyTorch.

Published 12/19/2019 by Chuan Li

V100 server on-prem vs AWS p3 instance cost comparison

A cost and speed comparison between the Lambda Hyperplane 8 V100 GPU server and AWS p3 GPU instances. Because the Hyperplane's hardware closely mirrors the NVIDIA DGX-1, the comparison applies to that system as well.

Published 02/11/2019 by Chuan Li

...
