RunPod Blog

Large Language Model

A collection of 3 posts
How a Solo Dev Built an AI for Dads—No GPU, No Team, Just $5
Built on RunPod

A solo developer fine-tuned an emotional support AI for dads using Mistral 7B, QLoRA, and RunPod—with no GPU, no team, and under $5 in training costs.
09 May 2025 · 4 min read
How to Fine-Tune LLMs with Axolotl on RunPod
Fine-Tuning

Learn how to fine-tune large language models (LLMs) using Axolotl on RunPod. This step-by-step guide covers setup, configuration, and training with LoRA, 8-bit quantization, and DeepSpeed—all on scalable GPU infrastructure.
21 Apr 2025 · 3 min read
How Much VRAM Does Your LLM Need? A Guide to GPU Memory Requirements
GPU Power

Discover how to determine the right VRAM for your Large Language Model (LLM). Learn about GPU memory requirements, model parameters, and tools to optimize your AI deployments.
08 Jul 2024 · 5 min read
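As a quick preview of the kind of estimate the VRAM guide walks through, here is a minimal sketch of the common rule of thumb that weight memory is parameter count times bytes per parameter, plus overhead. The function name and the 20% overhead factor are illustrative assumptions, not the post's exact method:

```python
def estimate_vram_gb(params_billions: float,
                     bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    """Rough inference VRAM estimate in GB.

    bytes_per_param: 2 for FP16/BF16, 1 for INT8, 0.5 for 4-bit quantization.
    overhead: assumed ~20% extra for activations, KV cache, and framework buffers.
    """
    weight_gb = params_billions * bytes_per_param  # 1B params at 1 byte ~= 1 GB
    return weight_gb * overhead

# Example: a 7B model in FP16 -> roughly 7 * 2 * 1.2 ≈ 16.8 GB
print(round(estimate_vram_gb(7, 2), 1))
```

Actual requirements vary with sequence length, batch size, and serving framework, so treat this as a lower-bound starting point when picking a GPU.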
RunPod Blog © 2025