RunPod Weekly #14 - Pricing Changes, Community Changelog, Blogs

Welcome to another round of RunPod Weekly! This week, we are excited to share the following:

💸 Pricing Changes

RunPod pricing is dropping by up to 40% on Serverless and up to 18% on Secure Cloud.

Why We're Doing This

GPUs aren't cheap, nor is the infrastructure to run them at scale. But we believe that great ideas shouldn't be held back by budget constraints.

We've been fortunate to secure some serious funding recently. And instead of blowing it all on fancy office chairs or an in-house barista (tempting as that was), we chose to invest it in you:

  1. Infrastructure Optimization: We've streamlined our operations, allowing us to pass savings directly to you.
  2. Enhanced Support: We're bringing our support team closer to you, our customers. This means faster response times and more personalized assistance.
  3. Platform Improvements: We're continuously working on reducing cold start times, enhancing our API, and introducing new features to make your experience smoother.

By optimizing our pricing, we're not just cutting costs; we're reinvesting in the platform and community to provide you with a better overall experience.

If you want to dive deeper into the numbers, you can check out our complete pricing page for a full breakdown of all our GPU options, or read our blog post on these changes.

Reminder โ€” if you haven't yet, please take our user survey! Your responses will help us shape RunPod as we grow!

📋 Community Changelog

Since our last newsletter, we've made several changes to RunPod based on feedback from our community.

  1. We have added messaging and validation for the max secret size of 16 MB.
  2. When editing a template, you will be prompted before discarding any unsaved changes.
  3. The timestamp column in audit logs is now converted automatically to your local timezone.

We'd love to hear any feedback you have through this form.

โœ๏ธ Blogs

We're thrilled to share four new blog posts, packed with tons of valuable information.

Understanding VRAM and How Much Your LLM Needs

Ever wondered why your beefy GPU still chokes on that new language model? This article breaks down VRAM - the secret sauce that keeps LLMs running smoothly - and shows you how to determine exactly how much your model needs. Plus, it throws in a quick how-to for getting your own LLM up and running in just half a minute.
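
If you just want the back-of-the-envelope version (our own rule-of-thumb sketch here, not the article's exact method): model weights dominate VRAM, at roughly parameter count times bytes per parameter, plus some headroom for the KV cache and activations.

```python
def estimate_vram_gb(params_billions: float, bytes_per_param: float = 2.0,
                     overhead_factor: float = 1.2) -> float:
    """Rough VRAM estimate for LLM inference (rule of thumb, not exact).

    params_billions -- model size, e.g. 7 for a 7B model
    bytes_per_param -- 2.0 for fp16/bf16, 1.0 for 8-bit, 0.5 for 4-bit quantization
    overhead_factor -- ~20% headroom for KV cache, activations, CUDA buffers
    """
    weights_gb = params_billions * bytes_per_param  # 1B params at 1 byte/param ~ 1 GB
    return weights_gb * overhead_factor

print(f"{estimate_vram_gb(7):.1f} GB")                        # ~16.8 GB: 7B in fp16
print(f"{estimate_vram_gb(70, bytes_per_param=0.5):.1f} GB")  # ~42.0 GB: 70B at 4-bit
```

So a 7B model in fp16 sits comfortably on a 24 GB card. The article covers the finer points, including why long context windows eat more VRAM than this sketch suggests.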

RAG vs. Fine-Tuning: Which Method is Best for Large Language Models (LLMs)?

Wondering how to make those fancy AI language models work better for specific tasks? This article breaks down two cool tricks: RAG (like giving your AI an open-book test) and fine-tuning (turning your AI into a subject expert). It even dives into a new method called RAFT that combines the best of both worlds, helping you decide which approach fits your AI project best.
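
For the uninitiated, the RAG half of that comparison boils down to a three-step loop. Here's a minimal sketch of ours (not the article's code; `vector_store` and `llm` are placeholders for whatever embedding index and model client you use):

```python
def answer_with_rag(question: str, vector_store, llm, top_k: int = 3) -> str:
    """Retrieval-Augmented Generation in three steps (illustrative sketch)."""
    # 1. Retrieve: look up the passages most relevant to the question.
    passages = vector_store.search(question, top_k=top_k)

    # 2. Augment: hand the model the retrieved context, i.e. the "open book".
    context = "\n\n".join(passages)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

    # 3. Generate: the base model stays frozen; no retraining involved.
    return llm.generate(prompt)
```

Fine-tuning, by contrast, bakes knowledge into the model's weights themselves, and that trade-off is exactly what the article walks through.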

RunPod Slashes GPU Prices: Powering Your AI Applications for Less

RunPod just slashed its GPU prices! Whether you're cooking up the next ChatGPT or just playing around with some cool AI stuff, you can now do it without breaking the bank. We've cut costs on everything from beefy server GPUs to more modest options, so there's never been a better time to let your AI dreams run wild.

How to run vLLM with RunPod Serverless

This article shows you how to use vLLM, a speedy open-source tool, to deploy language models on RunPod's servers. It walks you through choosing between fancy closed-source AIs and DIY open-source options, then gives you a step-by-step guide to get your model up and running in minutes.
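
Once an endpoint is deployed, calling it is a single HTTP request. Here's a minimal sketch (the endpoint ID is a placeholder, and the exact input schema varies by vLLM worker version, so treat the payload shape as an assumption and defer to the guide):

```python
import os
import requests

ENDPOINT_ID = "your-endpoint-id"        # placeholder: shown on your endpoint's page
API_KEY = os.environ["RUNPOD_API_KEY"]  # your RunPod API key

resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",  # synchronous invocation
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "input": {
            # Input schema is an assumption; the vLLM worker docs define the exact shape.
            "prompt": "Explain what vLLM is in one sentence.",
            "sampling_params": {"max_tokens": 64, "temperature": 0.7},
        }
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json())
```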


That's all for this week's newsletter. We're constantly striving to improve our platform and services, and your feedback is invaluable in this journey. Come join our Discord server and share what you've been working on.

Thanks for being part of the RunPod community!

P.S. We're still hiring! Learn more here.