RunPod Weekly #15 - New Referral Program, Community Changelog, Blogs

Welcome to another round of RunPod Weekly! This week, we are excited to share the following:

🤝 New Referral Program

We've reworked our referral program to make it easier (and more lucrative) for anyone to get started.

These changes include higher reward rates, a new Serverless referral program, no minimum requirements to start earning, and template commissions.

Learn more in our new blog post introducing these changes, or generate your unique referral link to get started.

📋 Community Changelog

Since our last newsletter, we've made several changes to RunPod based on feedback from our community.

  1. Timestamps on the billing page are now converted to your local timezone.
  2. You can now copy and share the link of templates on the explore page.

We'd love to hear any feedback you have through this form.

✍️ Blogs

We're thrilled to share four new blog posts, packed with tons of valuable information.

Introducing RunPod’s New and Improved Referral Program

We've revamped our referral program, making it more accessible and rewarding. Until the end of 2024, all users can earn commissions without minimum requirements. The program now offers increased rates of 5% for Serverless, 3% for GPU Pods, and 1% for template usage. By simply referring friends to RunPod, users can earn credits when their referrals spend on RunPod.
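To illustrate how the rates add up, here's a quick sketch that assumes commissions are a flat percentage of a referral's spend in each category (the exact payout mechanics are described in the blog post; the function and category names below are hypothetical):

```python
# Illustrative only: assumes commission is a flat percentage of referral spend.
RATES = {"serverless": 0.05, "gpu_pods": 0.03, "templates": 0.01}

def referral_credits(spend_by_category):
    """Return per-category credits and the total earned from one referral."""
    credits = {cat: spend_by_category.get(cat, 0.0) * rate
               for cat, rate in RATES.items()}
    return credits, sum(credits.values())

# A referral spends $100 on Serverless and $50 on GPU Pods:
credits, total = referral_credits({"serverless": 100.0, "gpu_pods": 50.0})
# → $5.00 from Serverless + $1.50 from GPU Pods = $6.50 in credits
```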

Master the Art of Serverless Scaling: Optimize Performance and Costs on RunPod

Dive into the art of optimizing serverless scaling for AI workloads. Find the "sweet spot" between cost-efficiency and performance, learn about key factors such as active workers, flex workers, idle timeout, and scale type, and see how to tailor your scaling strategy to meet user expectations while minimizing costs.

Run Llama 3.1 405B with Ollama: A Step-by-Step Guide

A step-by-step guide to deploying Meta's groundbreaking Llama 3.1 405B model using Ollama. This open-source AI model, boasting 405 billion parameters, outperforms many leading models in crucial benchmarks. Walk through setting up a RunPod GPU instance and deploying Ollama in a user-friendly chat interface.

How to run SAM 2 on a cloud GPU with RunPod

A step-by-step tutorial on deploying Meta's Segment Anything Model 2 (SAM 2) using RunPod Cloud GPUs. SAM 2 represents a significant advancement in object segmentation, capable of real-time promptable segmentation for both images and videos. Walk through setting up a RunPod GPU instance and deploying SAM 2.

Read previous editions of RunPod Weekly: RunPod Weekly #14


That's all for this week's newsletter. We're constantly striving to improve our platform and services, and your feedback is invaluable in this journey. We welcome you to join our Discord server and share what you've been working on.

Thanks for being part of the RunPod community!

P.S. We're still hiring — learn more here!