
Networking

Port reference

Ports that must be open (public firewall)

| Port | Protocol | Service | Purpose |
|------|----------|---------|---------|
| 80 | TCP | nginx-proxy | HTTP — ACME challenge / redirect to HTTPS |
| 443 | TCP | nginx-proxy | HTTPS — all web traffic (Meet, Keycloak, LiveKit WebSocket) |
| 7881 | TCP | LiveKit | TCP media fallback (when UDP is blocked) |
| 7882 | UDP | LiveKit | RTP/RTCP media — critical for video/audio |
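
These public ports correspond to the only port mappings that need to be published in the compose file. A minimal sketch of the idea (service names and mappings are illustrative, not copied from the actual compose file):

```yaml
# Only the reverse proxy and LiveKit publish host ports.
services:
  nginx-proxy:
    ports:
      - "80:80"           # HTTP: ACME challenge / redirect to HTTPS
      - "443:443"         # HTTPS: all web traffic
  livekit:
    ports:
      - "7881:7881"       # TCP media fallback
      - "7882:7882/udp"   # RTP/RTCP media
```

Everything else stays unpublished and is reached only over Docker networks.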

Ports that must NOT be exposed publicly

| Port | Service | Why |
|------|---------|-----|
| 5432 | PostgreSQL | Database — internal only |
| 6379 | Redis | Cache/broker — internal only |
| 7880 | LiveKit | WebSocket — proxied via nginx on 443 |
| 8000 | Django backend | API — proxied via frontend nginx on 443 |
| 8080 | Frontend SPA / Keycloak | Proxied via nginx on 443 |
| 8083 | Frontend routing nginx | Proxied via nginx-proxy on 443 |
| 3900/3901 | Garage | Object storage — internal only |
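
To keep these ports off the public interface, the corresponding services simply declare no ports mapping; containers on a shared Docker network can still reach them by service name. A hedged sketch (the expose entries are optional and purely documentary):

```yaml
services:
  postgresql:
    # No "ports:" section: nothing is bound on the host.
    # Other containers on the same network reach postgresql:5432.
    expose:
      - "5432"
  redis:
    expose:
      - "6379"
```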

Traffic flow

Web request (browser → Meet UI)

Browser :443
  → nginx-proxy (TLS termination)
  → frontend container :8083 (routing nginx)
    → /api/* → backend :8000 (Django)
    → /*     → frontend :8080 (React SPA)
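
With nginx-proxy, the first hop of this flow is typically configured through environment variables rather than a hand-written vhost. A sketch assuming the standard nginx-proxy conventions (the hostname is the example domain used throughout this page):

```yaml
services:
  frontend:
    environment:
      - VIRTUAL_HOST=visio.example.com   # nginx-proxy routes this hostname here
      - VIRTUAL_PORT=8083                # to the routing nginx inside the container
```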

OIDC login

Browser → visio.example.com/oidc/authenticate/
  → backend :8000 → redirect to auth.example.com :443
  → Keycloak login form
  → redirect to visio.example.com/api/v1.0/callback/
  → backend :8000 (code exchange with Keycloak via internal network)
  → browser receives session cookie

LiveKit WebSocket (signaling)

Browser → wss://livekit.example.com :443
  → nginx-proxy (TLS termination, WebSocket upgrade)
  → livekit container :7880 (plain WebSocket)

LiveKit media (audio/video)

Browser ←→ server :7882/UDP  (RTP/RTCP — direct, no proxy)
Browser ←→ server :7881/TCP  (fallback when UDP blocked)
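
These port numbers map directly onto the LiveKit server configuration. A minimal livekit-server.yaml fragment matching them (a sketch, not the full config):

```yaml
port: 7880          # plain WebSocket signaling, proxied by nginx on 443
rtc:
  tcp_port: 7881    # TCP media fallback when UDP is blocked
  udp_port: 7882    # single UDP port (mux) for RTP/RTCP
```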

Recording webhook

Garage/S3 → POST https://visio.example.com/api/v1.0/recordings/storage-hook/
  → nginx-proxy → frontend :8083 → backend :8000

Docker networks

Meet uses two Docker networks:

| Network | Type | Members | Purpose |
|---------|------|---------|---------|
| proxy | Bridge (external) | nginx-proxy, frontend, livekit, keycloak | nginx-proxy routing and TLS |
| internal | Bridge (internal) | backend, frontend, celery, postgresql, redis, livekit, keycloak, kc-postgresql | Service-to-service communication |

The backend container is intentionally NOT on the proxy network — it is only reached via the frontend container's routing nginx.
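
In compose terms, the two networks and the backend's deliberate absence from proxy look roughly like this (a sketch; whether the real stack also sets internal: true, which blocks outbound host access, is an assumption):

```yaml
networks:
  proxy:
    external: true    # shared with the nginx-proxy stack
  internal:
    internal: true    # assumption: no external connectivity

services:
  frontend:
    networks: [proxy, internal]   # on both: nginx-proxy reaches it, it reaches backend
  backend:
    networks: [internal]          # intentionally not on proxy
```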

Internal DNS resolution

Docker resolves container names within a network. Services reference each other by service name:

| From | To | Address |
|------|----|---------|
| backend | postgresql | postgresql:5432 |
| backend | redis | redis:6379 |
| backend | livekit (API) | https://livekit.example.com (via host-gateway) |
| frontend nginx | backend | backend:8000 |
| frontend nginx | frontend SPA | frontend:8080 |
| livekit | redis | redis:6379 |
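
Because Docker's embedded DNS resolves these service names, the backend's connection settings can reference them directly. A hedged sketch (the variable names are illustrative, not the backend's actual settings):

```yaml
services:
  backend:
    environment:
      - DB_HOST=postgresql              # resolved by Docker DNS on the internal network
      - DB_PORT=5432
      - REDIS_URL=redis://redis:6379/1  # illustrative cache/broker URL
```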

extra_hosts for backend

The backend container needs to resolve auth.example.com and livekit.example.com to perform the OIDC token exchange and make LiveKit API calls. Since these are external hostnames, they are mapped to the Docker host using the special host-gateway value in extra_hosts:

backend:
  extra_hosts:
    - "auth.example.com:host-gateway"
    - "livekit.example.com:host-gateway"

This maps those hostnames to the Docker host's gateway IP, so the backend's requests reach nginx-proxy on the host and are routed like any external traffic.

LiveKit media path

Participant A browser
  │ WebSocket (TCP 443) → nginx-proxy → livekit:7880
  │ RTP/RTCP (UDP 7882) → livekit:7882 (direct)
LiveKit server
  │ RTP/RTCP (UDP 7882) → direct
Participant B browser

LiveKit is a Selective Forwarding Unit (SFU) — it does not transcode media, only forwards packets between participants. This makes the media path efficient:

  - Signaling (WebSocket) goes through nginx-proxy
  - Media (UDP) goes directly to/from the LiveKit container

ICE and NAT traversal

LiveKit uses ICE (Interactive Connectivity Establishment) to find the best media path:

  1. Host candidates: Server's IP announced to clients
  2. STUN: If behind NAT, LiveKit auto-detects its public IP (use_external_ip: true in config)
  3. TURN: For clients behind restrictive firewalls where UDP 7882 is blocked

For cloud servers (Hetzner, OVH, AWS, etc.) — always set use_external_ip: true in livekit-server.yaml.
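
In livekit-server.yaml this lives in the rtc section; node_ip can pin the advertised address explicitly instead of relying on STUN auto-detection (a sketch):

```yaml
rtc:
  use_external_ip: true    # STUN-discover the public IP behind cloud NAT
  # node_ip: 203.0.113.10  # alternative: pin the advertised IP explicitly
```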

Firewall configuration

Hetzner Cloud

Firewalls → Add inbound rules:

| Protocol | Port | Source |
|----------|------|--------|
| TCP | 80 | Any |
| TCP | 443 | Any |
| TCP | 7881 | Any |
| UDP | 7882 | Any |

Linux / ufw (if active)

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw allow 7881/tcp
sudo ufw allow 7882/udp

Note: check sudo ufw status first — if ufw is inactive, the OS firewall is not blocking anything and cloud-level rules are sufficient.