Unlock the potential of open-source LLMs by hosting your own LangChain + Falcon + Chroma application: upload a PDF and ask questions about its contents.
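The end-to-end flow is: load the PDF, split it into chunks, embed the chunks into a Chroma vector store, and let LangChain retrieve the relevant passages for Falcon to answer from. Below is a minimal sketch of that pipeline, assuming the `langchain`, `chromadb`, `transformers`, `sentence-transformers`, and `pypdf` packages are installed; the model name, chunk sizes, and file path are illustrative choices rather than the article's exact configuration.

```python
# Minimal sketch of a PDF Q&A pipeline with LangChain, Falcon, and Chroma.
# Model names, chunk sizes, and the PDF path below are illustrative assumptions.
from langchain.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import HuggingFacePipeline
from langchain.chains import RetrievalQA
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the uploaded PDF and split it into overlapping chunks.
docs = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed the chunks and index them in a local Chroma vector store.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
vectordb = Chroma.from_documents(chunks, embeddings)

# Wrap Falcon-7B-Instruct as a LangChain LLM via a transformers text-generation pipeline.
model_id = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", trust_remote_code=True
)
generate = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=256)
llm = HuggingFacePipeline(pipeline=generate)

# Retrieval-augmented Q&A: fetch the most relevant chunks, then ask Falcon to answer from them.
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectordb.as_retriever())
print(qa.run("What is this document about?"))
```

A web front end (for example, a simple upload form) only needs to call the loading and indexing steps once per PDF and then reuse the `qa` chain for each question.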