Global Networking Expansion: Now Available in 13 Additional Data Centers

RunPod is excited to announce a major expansion of our Global Networking feature, which is now available in 13 additional data centers. Following its successful launch in December 2024, we've seen tremendous adoption of this capability, which enables seamless cross-data-center communication between pods. This expansion significantly increases our global coverage, allowing more users to benefit from our virtual internal network regardless of geographic location.

Expanded Coverage

Global Networking is now available in the following additional data centers:

  • Europe: EU-CZ-1, EU-FR-1, EU-NL-1, EU-SE-1, EUR-IS-2
  • Oceania: OC-AU-1
  • United States: US-CA-2, US-DE-1, US-IL-1, US-NC-1, US-TX-3, US-TX-4, US-WA-1

These join our originally supported locations:

  • CA-MTL-3, US-GA-1, US-GA-2, US-KS-2

Reminder: What is Global Networking?

For those who might have missed our initial announcement, Global Networking allows pods to communicate with each other over a secure virtual internal network facilitated by RunPod. This powerful feature enables your pods to talk to each other without opening TCP or HTTP ports to the Internet, creating a private and secure environment for your applications. You can share data and run client-server applications across multiple pods in real time, while utilizing distributed computing resources across different geographic regions. All communication takes place over the private .runpod.internal network.
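
As a concrete illustration, here is a minimal sketch of two pods exchanging data over the internal network, assuming Python is available on both. The hostname, port, and payload are placeholders; substitute the Global Network Hostname actually assigned to your server pod.

    # server.py -- run on the pod that shares data; note that no TCP or HTTP
    # port needs to be exposed to the public internet for this to work.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello over the internal network\n")

    # Bind to all interfaces so the pod is reachable on the internal network.
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()

Any other pod created with Global Networking can then reach it by hostname:

    # client.py -- run on any other Global Networking pod; the hostname below
    # is hypothetical and should be replaced with the server pod's Global
    # Network Hostname.
    from urllib.request import urlopen

    with urlopen("http://my-server-pod.runpod.internal:8080", timeout=10) as resp:
        print(resp.read().decode())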

How to Use Global Networking

Enabling Global Networking for your pods remains simple:

  1. Check the Global Networking checkbox under the Instance Pricing options while deploying your pod
  2. When the pod is created, it will be assigned a virtual Global Network Hostname
  3. Use this hostname to communicate with any other pods that were also created with Global Networking, regardless of which supported data center they reside in (see the sketch below)
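
Once two such pods are running, a quick way to confirm they can reach each other is to resolve and connect to the peer's hostname from inside one of the pods. This is a minimal sketch; the hostname and port below are placeholders and assume the peer pod has a service listening on that port.

    import socket

    # Placeholder values: substitute your peer pod's Global Network Hostname
    # and whichever port its service actually listens on.
    PEER = "my-other-pod.runpod.internal"
    PORT = 8080

    print(f"{PEER} resolves to {socket.gethostbyname(PEER)}")

    with socket.create_connection((PEER, PORT), timeout=10):
        print(f"TCP connection to {PEER}:{PORT} succeeded")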

Deep Dive: Potential AI Applications with Global Networking

With our expanded Global Networking infrastructure, here are some theoretical implementations that could revolutionize AI workloads:

Distributed Machine Learning Pipelines

AI research teams could construct sophisticated training pipelines that segment workloads across geographic regions. For example, a team might distribute their data preprocessing across pods in US-TX-3 and US-TX-4, while running their primary model training in EU-FR-1 to take advantage of specific GPU availability. Training pods could communicate model gradients and parameter updates seamlessly over the internal network, with intermediate checkpoints flowing between pods without ever touching the public internet. Data scientists could orchestrate the entire pipeline from a central management pod, monitoring training progress and adjusting hyperparameters in real time, regardless of where the actual computation occurs.
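
As a rough sketch of how such a pipeline might bootstrap, the snippet below initializes a PyTorch process group that rendezvouses at one pod's Global Network Hostname; the hostname, port, backend, and world size are illustrative assumptions, not a prescribed setup.

    import os
    import torch
    import torch.distributed as dist

    # Hypothetical values: the rendezvous pod's Global Network Hostname, a free
    # port on it, this pod's rank, and the total number of participating pods.
    MASTER_HOST = "trainer-eu-fr-1.runpod.internal"
    MASTER_PORT = 29500
    RANK = int(os.environ.get("RANK", "0"))
    WORLD_SIZE = int(os.environ.get("WORLD_SIZE", "2"))

    # Gradient and parameter traffic for this process group travels over the
    # private internal network rather than the public internet.
    dist.init_process_group(
        backend="gloo",  # gloo works over plain TCP; NCCL has stricter requirements
        init_method=f"tcp://{MASTER_HOST}:{MASTER_PORT}",
        rank=RANK,
        world_size=WORLD_SIZE,
    )

    # Sanity check: all-reduce a tensor across every participating pod.
    t = torch.ones(1) * RANK
    dist.all_reduce(t, op=dist.ReduceOp.SUM)
    print(f"rank {RANK}: sum of ranks = {t.item()}")

    dist.destroy_process_group()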

Federated Learning Systems

Global Networking could enable powerful federated learning architectures where model training happens across geographically distributed pods while raw data remains in its original location. A pharmaceutical company might deploy model training pods in US-GA-1 and EU-CZ-1 to process regional datasets, with a coordinator pod in US-IL-1 aggregating model updates without ever seeing the raw data. This approach would satisfy data residency requirements while still leveraging the combined knowledge from multiple regions to create more robust models.
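
A minimal sketch of the aggregation step such a coordinator might run is shown below; the model architecture and the in-memory "regional updates" are stand-ins for weights that the regional training pods would send over the internal network.

    import copy
    import torch
    import torch.nn as nn

    # Stand-ins for updates that pods in US-GA-1 and EU-CZ-1 would send; only
    # weight tensors ever reach the coordinator, never the raw regional data.
    def make_regional_update(seed):
        torch.manual_seed(seed)
        model = nn.Linear(16, 2)  # placeholder model architecture
        return copy.deepcopy(model.state_dict())

    regional_updates = [make_regional_update(0), make_regional_update(1)]

    def federated_average(state_dicts):
        """Simple FedAvg: average each weight tensor across regional updates."""
        return {
            key: torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
            for key in state_dicts[0]
        }

    global_model = nn.Linear(16, 2)
    global_model.load_state_dict(federated_average(regional_updates))
    torch.save(global_model.state_dict(), "global_model.pt")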

Multi-Region Model Serving Infrastructure

AI applications requiring low-latency inference could deploy model serving pods across multiple regions (US-WA-1, EU-NL-1, OC-AU-1) to ensure users worldwide receive fast responses. A centralized pod in US-DE-1 could handle continuous model updates, automatically propagating the latest versions to edge serving pods over the secure internal network. This architecture would provide both the performance benefits of edge deployment and the management simplicity of centralized operations.
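
One way the central pod might push a new model version to the edge serving pods is sketched below; the hostnames, port, and upload endpoint are hypothetical and assume each serving pod exposes a small internal API for receiving updates.

    from urllib.request import Request, urlopen

    # Hypothetical Global Network Hostnames of the edge serving pods, plus a
    # hypothetical endpoint each one exposes for receiving model updates.
    EDGE_PODS = [
        "serve-us-wa-1.runpod.internal",
        "serve-eu-nl-1.runpod.internal",
        "serve-oc-au-1.runpod.internal",
    ]
    UPDATE_PORT = 9000

    def push_model(path):
        with open(path, "rb") as f:
            payload = f.read()
        for host in EDGE_PODS:
            req = Request(
                f"http://{host}:{UPDATE_PORT}/update-model",
                data=payload,
                headers={"Content-Type": "application/octet-stream"},
                method="PUT",
            )
            # Traffic stays on the .runpod.internal network end to end.
            with urlopen(req, timeout=60) as resp:
                print(f"{host}: HTTP {resp.status}")

    push_model("model-v2.onnx")  # placeholder artifact name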

Large-Scale Reinforcement Learning Environments

Reinforcement learning projects requiring massive parallel simulations could distribute simulation pods across US-GA-2, US-TX-4, and EUR-IS-2 to take advantage of available computing resources. A central controller pod in US-CA-2 would aggregate experiences and update policies, which would then be distributed back to the simulation pods. This approach could scale to thousands of simultaneous simulations while maintaining efficient policy updates through the secure, high-speed internal network.
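
A rough sketch of the controller's collection loop is shown below, assuming each simulation pod exposes a simple internal HTTP endpoint that returns a batch of experiences as JSON; the hostnames, port, and endpoints are placeholders.

    import json
    from urllib.request import Request, urlopen

    # Hypothetical simulation pods reachable over the internal network.
    SIM_PODS = [
        "sim-us-ga-2.runpod.internal",
        "sim-us-tx-4.runpod.internal",
        "sim-eur-is-2.runpod.internal",
    ]
    PORT = 7000

    def collect_experiences():
        """Pull a batch of experiences from every simulation pod."""
        batch = []
        for host in SIM_PODS:
            with urlopen(f"http://{host}:{PORT}/experiences", timeout=30) as resp:
                batch.extend(json.loads(resp.read()))
        return batch

    def broadcast_policy(policy_bytes):
        """Push updated policy weights back out to every simulation pod."""
        for host in SIM_PODS:
            req = Request(
                f"http://{host}:{PORT}/policy",
                data=policy_bytes,
                headers={"Content-Type": "application/octet-stream"},
                method="PUT",
            )
            urlopen(req, timeout=30).close()

    # One iteration: gather experiences, update the policy (training code not
    # shown), then distribute the new policy to the simulation pods.
    experiences = collect_experiences()
    broadcast_policy(b"updated-policy-weights")  # placeholder payload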

Looking Forward

This expansion represents our ongoing commitment to providing flexible and powerful networking capabilities for our users. If you have questions about how to best utilize Global Networking in your specific use case, please reach out to our support team or join the discussion on our Discord server.

Give it a try today and experience the power of borderless pod communication!