A cost and speed comparison between the Lambda Hyperplane 8-GPU V100 server and AWS p3 GPU instances. Because the Hyperplane's hardware closely mirrors the NVIDIA DGX-1, the comparison applies to that system as well.