RunPod Blog

LLMs

A collection of 2 posts
Fine-Tuning

How to Fine-Tune LLMs with Axolotl on RunPod

Learn how to fine-tune large language models (LLMs) using Axolotl on RunPod. This step-by-step guide covers setup, configuration, and training with LoRA, 8-bit quantization, and DeepSpeed—all on scalable GPU infrastructure.
21 Apr 2025 3 min read
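The guide above covers LoRA, which trains small low-rank adapters instead of the full weight matrices. As a hypothetical back-of-the-envelope sketch (the figures below are illustrative, not from the post): a rank-`r` adapter on a `d_out × d_in` weight matrix trains only `r·(d_in + d_out)` parameters rather than `d_in·d_out`.

```python
# Hypothetical illustration of why LoRA keeps fine-tuning cheap:
# a rank-r adapter factors the weight update as B (d_out x r) @ A (r x d_in),
# so only the two small factors are trained while the base weights stay frozen.

def lora_trainable_params(d_in: int, d_out: int, r: int) -> int:
    """Parameters in the two low-rank factors A (r x d_in) and B (d_out x r)."""
    return r * d_in + d_out * r

# Example: one 4096x4096 attention projection with LoRA rank 8.
full = 4096 * 4096                                # 16,777,216 frozen weights
lora = lora_trainable_params(4096, 4096, r=8)     # 65,536 trainable weights
print(f"trainable fraction: {lora / full:.2%}")   # -> 0.39%
```

At rank 8, the adapter is well under 1% of the size of the matrix it modifies, which is why LoRA pairs naturally with 8-bit quantization on modest GPUs.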
GPU Power

How Much VRAM Does Your LLM Need? A Guide to GPU Memory Requirements

Discover how to determine the right VRAM for your Large Language Model (LLM). Learn about GPU memory requirements, model parameters, and tools to optimize your AI deployments.
08 Jul 2024 5 min read
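The core arithmetic behind VRAM sizing is simple: model weights need (parameter count) × (bytes per parameter), e.g. 2 bytes in fp16/bf16, plus headroom for the KV cache and activations. The sketch below is a rough rule-of-thumb estimate, not the post's exact method; the 20% overhead factor is an assumption.

```python
# Rough, hypothetical VRAM estimate for LLM inference:
# weights = params * bytes_per_param, scaled by an assumed ~20% overhead
# for the KV cache and activations.

def estimate_vram_gb(params_billion: float, bytes_per_param: float = 2.0,
                     overhead: float = 1.2) -> float:
    """Return an approximate inference VRAM requirement in GB."""
    weight_bytes = params_billion * 1e9 * bytes_per_param
    return weight_bytes * overhead / 1e9

# A 7B model in fp16: ~14 GB for weights alone, ~16.8 GB with overhead.
print(f"{estimate_vram_gb(7):.1f} GB")                      # fp16
print(f"{estimate_vram_gb(7, bytes_per_param=1):.1f} GB")   # int8 (~halved)
```

Dropping to 8-bit quantization roughly halves the weight footprint, which is why a 7B model that needs a 24 GB card in fp16 can fit on a 16 GB one in int8.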
RunPod Blog © 2025