AI Development: The Open-Source AI Renaissance: How Community Models Are Shaping the Future
The open-source AI movement is rewriting the rules of model development. From community-built tools to powerful fine-tunes, builders are moving faster than ever, with the infrastructure to match. Here’s how open-source AI took off, and where it’s headed next.
InstaHeadshots: Scaling AI-Generated Portraits with RunPod
InstaHeadshots is transforming professional photography by turning casual selfies into studio-quality headshots within minutes. Their AI-driven platform serves professionals who want polished images for LinkedIn, resumes, and social media profiles, without the need for a traditional photoshoot. The Challenge: Managing Surging Demand and Diverse Workloads. As InstaHeadshots experienced rapid growth, they…
The 'Minor Upgrade' That's Anything But: DeepSeek R1-0528 Deep Dive
Earlier this year, DeepSeek released a small, experimental reasoning model in the middle of the night that took the world by storm, shooting to the top of the App Store past closed-model rivals and overloading their API with unprecedented demand, to the point that they…
How Segmind Scaled GenAI Workloads 10x Without Scaling Costs
Segmind uses RunPod to dynamically scale GPU infrastructure across its Model API and PixelFlow engine, powering 10x growth with zero idle waste.
Run Your Own AI from Your iPhone Using RunPod
Smartphones have long given users access to built-in assistants such as the iPhone’s Siri. With the emergence of cloud-based open-source LLMs, you can now run a personalized AI from your iPhone with RunPod’s offerings. RunPod gives you the resources to run the various (and very large) open…
Connecting Cursor to LLM Pods on RunPod for AI Development
In this walkthrough, we'll show you how to set up and configure Cursor AI to connect to a large language model (LLM) running on RunPod. This setup gives you the power of high-performance GPUs for AI-assisted coding while keeping the familiar Cursor interface. Not only that,…
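In practice, the connection usually amounts to pointing an OpenAI-compatible client at the pod’s proxied port. A minimal sketch, assuming the pod runs an OpenAI-compatible server (e.g. vLLM) on port 8000; the pod ID, model name, and API key below are placeholders, and the proxy URL pattern should be checked against RunPod’s docs:

```python
import json
import urllib.request

def pod_base_url(pod_id: str, port: int = 8000) -> str:
    """RunPod proxies an exposed pod port at {pod_id}-{port}.proxy.runpod.net."""
    return f"https://{pod_id}-{port}.proxy.runpod.net/v1"

def chat_request(base_url: str, api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({
        "model": "served-model",  # whatever name the pod's server registered
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )

req = chat_request(pod_base_url("abc123xyz"), "RUNPOD_API_KEY", "Hello")
print(req.full_url)
```

The same base URL is what you would paste into Cursor’s custom OpenAI endpoint setting, so the editor talks to your pod instead of a hosted API.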
Built on RunPod: How Glam Labs Powers Viral AI Video Effects with RunPod
Glam Labs used RunPod Serverless to train and run viral AI video effects, cutting costs, accelerating development, and scaling content creation with ease.
GPU Computing: Why AI Needs GPUs: A No-Code Beginner’s Guide to Compute Power
Why AI models need GPUs, how to choose the right one, and what makes cloud GPUs ideal for no-code AI experimentation. A beginner’s guide to compute power.
Built on RunPod: Talking to AI, at Human Scale: How Scatterlab Powers 1,000+ RPS with RunPod
Learn how Scatterlab scaled to 1,000+ requests per second using RunPod to deliver real-time AI conversations at half the cost of hyperscalers.
Automated Image Captioning with Gemma 3 on RunPod Serverless
Creating high-quality training datasets for machine learning models often requires detailed image captions, but manually captioning hundreds or thousands of images is time-consuming and tedious. This tutorial demonstrates how to use Google's Gemma 3 multimodal models on RunPod Serverless to automatically generate detailed, consistent image captions. Once…
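A workflow like this typically batches images into serverless job payloads. A minimal sketch of building one such job, assuming a `{"input": {...}}` envelope (RunPod’s usual serverless job shape); the inner field names are illustrative, not an official Gemma 3 handler schema:

```python
import base64
import json

def build_caption_job(image_bytes: bytes,
                      prompt: str = "Describe this image in detail.") -> str:
    """Serialize one image-captioning job for a serverless endpoint.

    Images are base64-encoded so the payload is plain JSON; the endpoint's
    handler would decode the image and run it through the captioning model.
    """
    payload = {
        "input": {
            "prompt": prompt,
            "image_base64": base64.b64encode(image_bytes).decode("ascii"),
        }
    }
    return json.dumps(payload)

job = build_caption_job(b"\x89PNG fake image bytes")
print(json.loads(job)["input"]["prompt"])
```

Looping this over a directory of images and POSTing each job to the endpoint is what turns a manual captioning chore into a batch process.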
From API to Autonomy: Why More Builders Are Self-Hosting Their Models
Outgrowing the APIs? Learn when it’s time to switch from API access to running your own AI model. We’ll break down the tools, the stack, and why more builders are going open source.
Built on RunPod: How a Solo Dev Built an AI for Dads—No GPU, No Team, Just $5
A solo developer fine-tuned an emotional support AI for dads using Mistral 7B, QLoRA, and RunPod, with no GPU, no team, and under $5 in training costs.
Stable Diffusion: How Civitai Scaled to 800K Monthly LoRAs on RunPod
Discover how Civitai used RunPod to train over 868,000 LoRA models in one month, fueling a growing creator community and powering millions of AI generations.
From Pods to Serverless: When to Switch and Why It Matters
You’ve just finished fine-tuning your model in a pod, and now it’s time to deploy it. You’re staring at two buttons: Serverless or Pod. Which one is right for running inference? If you’ve been using Pods to train, test, or experiment on RunPod, Serverless might be…
RunPod Platform: RunPod Just Got Native in Your AI IDE
RunPod’s new MCP server brings first-class GPU access to any AI IDE, including Cursor, Claude Desktop, and Windsurf. Launch pods, deploy endpoints, and manage infrastructure directly from your editor using the Model Context Protocol.
Qwen3 Released: How Does It Stack Up?
The Qwen Team has released Qwen3, its latest generation of large language models, bringing notable advances to the open-source AI community. The suite ranges from lightweight 0.6B-parameter versions to massive 235B-parameter Mixture-of-Experts (MoE) architectures, all designed with a unique "thinking mode"…
GPU Clusters: Powering High-Performance AI Computing (When You Need It)
AI infrastructure isn't one-size-fits-all. Different stages of the AI development lifecycle call for different types of compute, and choosing the right tool for the job can make all the difference in performance, efficiency, and cost. At RunPod, we're building infrastructure that fits the way modern AI…
How Krnl Scaled to Millions of Users—and Cut Infra Costs by 65% With RunPod
When Krnl’s AI tools went viral, they outgrew AWS fast. Discover how switching to RunPod’s serverless 4090s helped them scale effortlessly, eliminate idle costs, and cut infrastructure spend by 65%.
Mixture of Experts (MoE): A Scalable Architecture for Efficient AI Training
Mixture of Experts (MoE) models scale efficiently by activating only a subset of parameters per input. Learn how MoE works, where it shines, and why RunPod is built to support MoE training and inference.
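The "subset of parameters per input" idea can be sketched in a few lines: a gating network scores every expert, but only the top-k actually run. A toy pure-Python illustration (real MoE layers do this with batched tensors and learned gates; the shapes and softmax-over-top-k routing here are the standard scheme):

```python
import math
import random

def moe_forward(x, gate_w, experts, k=2):
    """Toy Mixture-of-Experts layer: route input x to the top-k experts only.

    x: input vector; gate_w: one gating weight vector per expert;
    experts: one weight matrix per expert. Only k experts execute per
    input, which is where MoE's compute savings come from.
    """
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    scores = [dot(g, x) for g in gate_w]                  # gate score per expert
    top = sorted(range(len(experts)), key=scores.__getitem__)[-k:]
    exps = [math.exp(scores[i]) for i in top]
    weights = [e / sum(exps) for e in exps]               # softmax over top-k only
    out = [0.0] * len(x)
    for w, i in zip(weights, top):                        # run just k experts
        for r, row in enumerate(experts[i]):
            out[r] += w * dot(row, x)
    return out

random.seed(0)
d, n_experts = 4, 3
vec = lambda: [random.gauss(0, 1) for _ in range(d)]
x = vec()
gate_w = [vec() for _ in range(n_experts)]
experts = [[vec() for _ in range(d)] for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, k=2)
print(len(y))  # 4
```

With k=2 of 3 experts active, roughly a third of the expert compute is skipped per input; production models like the 235B Qwen3 MoE push this much further, activating only a small fraction of total parameters.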
Global Networking Expansion: Now Available in 14 Additional Data Centers
RunPod is excited to announce a major expansion of our Global Networking feature, which now supports 14 additional data centers. Following the successful launch in December 2024, we've seen strong adoption of this capability, which enables seamless cross-data-center communication between pods. This expansion significantly increases our global…
Fine-Tuning: How to Fine-Tune LLMs with Axolotl on RunPod
Learn how to fine-tune large language models (LLMs) using Axolotl on RunPod. This step-by-step guide covers setup, configuration, and training with LoRA, 8-bit quantization, and DeepSpeed, all on scalable GPU infrastructure.
RTX 5090 LLM Benchmarks for AI: Is It the Best GPU for ML?
AI workloads demand ever-increasing performance, especially for large language model (LLM) inference. Today, we're excited to showcase how the NVIDIA RTX 5090 is reshaping what's possible in AI compute, with performance that outpaces even specialized data center hardware. Benchmark Showdown: RTX…
LoRAs: The Complete Guide to Training Video LoRAs: From Concept to Creation
Learn how to train custom video LoRAs for models like Wan, Hunyuan Video, and LTX Video. This guide covers hyperparameters, dataset prep, and best practices to help you fine-tune high-quality, motion-aware video outputs.
The RTX 5090 Is Here: Serve 65,000+ Tokens per Second on RunPod
RunPod customers can now access the NVIDIA RTX 5090, the latest powerful GPU for real-time LLM inference. With impressive throughput and large memory capacity, the 5090 enables serving small and mid-sized AI models at scale. Whether you’re deploying high-concurrency chatbots, inference APIs, or multi-model backends, this next-gen GPU…
RunPod Platform: Cost-Effective Computing with Autoscaling on RunPod
Learn how RunPod helps you autoscale AI workloads for both training and inference. Explore Pods vs. Serverless, cost-saving strategies, and real-world examples of dynamic resource management for efficient, high-performance compute.