
Multi-Region Deployment

This guide covers deploying La Suite Meet across multiple geographic regions for lower latency and higher availability.

Why multi-region?

In a single-region deployment, all media travels through a single LiveKit server (or cluster in one location) regardless of where participants are located. For international teams or national-scale deployments:

  • A participant in Asia connecting to a LiveKit server in Europe adds 150-300ms of latency
  • A single-region failure takes down the entire service

Multi-region solves both problems.

Architecture overview

                    ┌─────────────────────┐
                    │    Meet Backend     │
                    │  (single region or  │
                    │   multi-AZ HA)      │
                    └─────────┬───────────┘
                              │ issues tokens
                    ┌─────────▼───────────┐
                    │     PostgreSQL      │
                    │  (shared database)  │
                    └─────────────────────┘

    Region EU                    Region US
┌─────────────────┐         ┌─────────────────┐
│  LiveKit EU     │         │  LiveKit US     │
│  (Frankfurt)    │         │  (Virginia)     │
└────────┬────────┘         └────────┬────────┘
         │                           │
    EU participants             US participants

Meet backend placement

The Meet Django backend can remain in a single region with a standard HA setup (multiple replicas, managed PostgreSQL). The backend is not latency-sensitive for media — it only issues tokens and manages room state.

For true active-active multi-region backend, you would need:

  • A distributed database (CockroachDB, PlanetScale, Neon, etc.) or read replicas with primary failover
  • A global load balancer (Cloudflare, AWS Global Accelerator) routing to the nearest backend

This complexity is rarely necessary — the backend handles only control-plane traffic.
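
To give a sense of how light that control-plane work is, here is a minimal sketch of token issuance using the livekit-api Python package; the function and argument names are illustrative, not Meet's actual code.

# issue_token.py: minimal sketch of the backend's control-plane work (token issuance)
# Assumes the livekit-api Python package; this is not Meet's actual implementation.
from livekit import api

def issue_join_token(api_key: str, api_secret: str, room: str, identity: str) -> str:
    # Build a JWT letting `identity` join `room` on whichever LiveKit
    # deployment shares this API key/secret pair.
    grants = api.VideoGrants(room_join=True, room=room)
    token = api.AccessToken(api_key, api_secret).with_identity(identity).with_grants(grants)
    return token.to_jwt()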

LiveKit multi-region

LiveKit can be distributed across regions: each region runs its own LiveKit server cluster, and participants connect to the nearest one. There are two common approaches.

Approach 1: Single LiveKit cluster with multiple nodes

LiveKit can run as a cluster with nodes in different availability zones in the same region. This provides HA but not geographic distribution.

# livekit-server.yaml for clustered mode
redis:
  address: redis-cluster.example.com:6379
  # All nodes must point to the same Redis

Approach 2: Separate LiveKit deployments per region

For true geographic distribution:

  1. Deploy a full LiveKit stack (server + Redis) in each region
  2. Configure the Meet backend to select the appropriate LiveKit endpoint based on participant location or room configuration
  3. Issue tokens pointing to the correct regional LiveKit URL

The Meet backend LIVEKIT_URL environment variable is a single value by default; making it dynamic per region requires custom middleware or a smart DNS setup.
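
As one hedged sketch of the application-side option, the backend could keep a per-region mapping and pick an endpoint from the room's configured region; REGIONAL_LIVEKIT and DEFAULT_REGION below are hypothetical settings, not part of Meet today.

# region_routing.py: hypothetical per-room LiveKit endpoint selection
# REGIONAL_LIVEKIT and DEFAULT_REGION are illustrative, not Meet configuration.
REGIONAL_LIVEKIT = {
    "eu": {"url": "wss://livekit.eu.example.com", "api_key": "eu-key", "api_secret": "eu-secret"},
    "us": {"url": "wss://livekit.us.example.com", "api_key": "us-key", "api_secret": "us-secret"},
}
DEFAULT_REGION = "eu"

def livekit_for_room(room_region: str | None) -> dict:
    # Pick the regional deployment stored in the room's configuration,
    # falling back to the default region if none (or an unknown one) is set.
    return REGIONAL_LIVEKIT.get(room_region or DEFAULT_REGION, REGIONAL_LIVEKIT[DEFAULT_REGION])

The frontend would then receive both the regional URL and a token signed with that region's credentials.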

Region selection strategies

DNS-based geo-routing (simplest):

  • Use a DNS provider with geo-routing (Cloudflare, Route53, etc.)
  • livekit.eu.example.com → Frankfurt LiveKit
  • livekit.us.example.com → Virginia LiveKit
  • The frontend receives the appropriate URL based on DNS resolution

Application-level selection:

  • The backend receives the participant's IP
  • Determines the nearest region
  • Issues a token pointing to that region's LiveKit URL
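
A rough sketch of that flow, where lookup_country() is a placeholder for a real GeoIP lookup (e.g. a MaxMind database) and the country-to-region mapping is purely illustrative:

# nearest_region.py: sketch of application-level region selection from the client IP
# lookup_country() is a placeholder; plug in a real GeoIP lookup (e.g. MaxMind GeoLite2).
EU_COUNTRIES = {"FR", "DE", "ES", "IT", "NL", "BE", "PL"}  # illustrative subset

def lookup_country(ip: str) -> str:
    raise NotImplementedError("replace with a GeoIP lookup")

def nearest_region(participant_ip: str) -> str:
    # Map the participant's country to a deployment region, defaulting to EU.
    try:
        country = lookup_country(participant_ip)
    except Exception:
        return "eu"
    return "eu" if country in EU_COUNTRIES else "us"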

Manual room configuration:

  • Room owners select a region when creating a room
  • Stored in the room's configuration

Redis considerations

Each LiveKit cluster needs its own Redis. Do not share Redis across geographic regions — high-latency Redis connections severely degrade LiveKit performance.

The Meet backend Redis (for Celery and sessions) can be separate from the LiveKit Redis instances.
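
As one illustration of that separation, the backend's Redis can be configured entirely in Django settings while each regional livekit-server.yaml points at a Redis in its own region; the setting names below follow common Django/Celery conventions and may differ from Meet's actual settings.

# settings_redis.py: sketch showing the backend Redis is independent of the LiveKit instances
# Setting names follow Django/Celery conventions; Meet's actual names may differ.
CELERY_BROKER_URL = "redis://backend-redis.internal:6379/0"   # Celery task queue
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://backend-redis.internal:6379/1",   # sessions/cache
    }
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
# Meanwhile each regional livekit-server.yaml uses its own local Redis, e.g.
# redis.address: redis.eu.internal:6379 (Frankfurt) and redis.us.internal:6379 (Virginia).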

Garage / S3 for recordings in multi-region

Recordings are written by LiveKit Egress to S3. In multi-region:

  • Each region's Egress writes to the nearest S3-compatible storage
  • The Meet backend needs to know which bucket/endpoint has which recording
  • Use S3 cross-region replication if you need recordings available from all regions

Simpler approach: use a cloud S3 service (AWS S3, Scaleway Object Storage) that has built-in multi-region replication, and point all Egress instances at the same bucket.
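
If you do keep a bucket per region, the backend needs to resolve each recording to the right endpoint. A hedged sketch of that mapping, assuming boto3 for the S3 client; RECORDING_STORES and the stored region are hypothetical, not Meet's actual data model.

# recording_download.py: sketch of resolving a recording to its regional bucket
# RECORDING_STORES and the region stored with each recording are hypothetical.
import boto3

RECORDING_STORES = {
    "eu": {"endpoint": "https://s3.eu.example.com", "bucket": "meet-recordings-eu"},
    "us": {"endpoint": "https://s3.us.example.com", "bucket": "meet-recordings-us"},
}

def presigned_recording_url(region: str, key: str, expires: int = 3600) -> str:
    # Generate a temporary download URL from the bucket in the recording's region.
    store = RECORDING_STORES[region]
    s3 = boto3.client("s3", endpoint_url=store["endpoint"])
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": store["bucket"], "Key": key},
        ExpiresIn=expires,
    )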

Health checks and failover

Configure health checks for each regional LiveKit:

# Kubernetes liveness probe
livenessProbe:
  httpGet:
    path: /
    port: 7880
  initialDelaySeconds: 10
  periodSeconds: 30

Use a global load balancer or DNS failover to route traffic away from an unhealthy region automatically.
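
Failover can also happen at the application level when issuing tokens. A sketch under the assumption that each regional LiveKit answers plain HTTP on the same port the probe above checks; the URLs and fallback order are illustrative.

# region_failover.py: sketch of routing away from an unhealthy region at token time
# Assumes each LiveKit answers HTTP on the port the liveness probe above uses.
import urllib.request

HEALTH_URLS = {
    "eu": "http://livekit.eu.internal:7880/",
    "us": "http://livekit.us.internal:7880/",
}
FALLBACK_ORDER = ["eu", "us"]  # illustrative preference order

def healthy(region: str, timeout: float = 2.0) -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URLS[region], timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

def pick_region(preferred: str) -> str:
    # Use the preferred region if healthy, otherwise the first healthy fallback.
    for region in [preferred] + [r for r in FALLBACK_ORDER if r != preferred]:
        if healthy(region):
            return region
    return preferred  # nothing healthy: keep the preferred region and surface the error upstream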

Bandwidth costs

In multi-region, media stays within each region — participants in Frankfurt talk to the Frankfurt LiveKit, which only exchanges media with other Frankfurt participants. Cross-region traffic only occurs for the control plane (token issuance, room state), which is minimal.

This is a significant advantage over single-region deployments where all traffic traverses the globe.

Practical starting point

For most organizations, a single well-provisioned LiveKit server in the region where most participants are located is sufficient. Consider multi-region only when:

  • You have significant user populations in 2+ geographic regions
  • You have strict data residency requirements for different regions
  • You need >99.9% availability SLAs

Start simple, add regions as your user base grows.