Day 5 of 5
⏱ ~60 minutes
WebSockets in 5 Days — Day 5

Scaling WebSockets

Redis adapter, sticky sessions, load balancing, production deploy.

Why Scaling WebSockets Is Hard

A single Node.js WebSocket server can handle thousands of connections, but as you scale horizontally (multiple servers) a problem emerges: each server only knows about its own in-memory connections. If user A is on Server 1 and user B is on Server 2, they can't communicate by default, because Server 1 has no way to reach Server 2's clients.

The solution is a shared message broker — typically Redis — that all servers subscribe to. When Server 1 emits an event, Redis delivers it to Server 2, which forwards it to its connected clients.

Redis Adapter Setup

Socket.io with Redis Adapter
const http = require('http');
const { Server } = require('socket.io');
const { createAdapter } = require('@socket.io/redis-adapter');
const { createClient } = require('redis');

const httpServer = http.createServer();

async function createServer() {
  const pubClient = createClient({ url: process.env.REDIS_URL });
  const subClient = pubClient.duplicate();

  // The adapter needs two connections: one to publish, one to subscribe
  await Promise.all([pubClient.connect(), subClient.connect()]);

  const io = new Server(httpServer);
  io.adapter(createAdapter(pubClient, subClient));

  return io;
}
// Now io.emit() automatically reaches ALL server instances
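The adapter also makes some cross-instance queries possible, which helps with the monitoring you'll want in production. A minimal sketch, assuming Socket.io v4 (where io.fetchSockets() asks every instance for its sockets) and the io instance created above:

```javascript
// Count connections across the whole cluster, not just this process.
// With the Redis adapter installed, fetchSockets() queries every instance.
async function clusterConnectionCount(io) {
  const sockets = await io.fetchSockets();
  return sockets.length;
}

// Example: log the cluster-wide count every 30 seconds.
setInterval(async () => {
  console.log('connections:', await clusterConnectionCount(io));
}, 30_000);
```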

Sticky Sessions for Load Balancers

Socket.io opens a session with several HTTP requests — the handshake, plus any long-polling requests made before the connection upgrades to WebSocket. Every one of those requests must reach the same server instance, or the session breaks with 400 errors. Configure your load balancer to use sticky sessions (also called session affinity) based on a cookie or IP hash.

nginx — Sticky Sessions
upstream websocket_servers {
  ip_hash; # Sticky session by IP
  server ws-server-1:3000;
  server ws-server-2:3000;
  server ws-server-3:3000;
}

server {
  location /socket.io/ {
    proxy_pass http://websocket_servers;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host; # Preserve the original Host header
  }
}
ℹ️
Production checklist: Redis adapter + sticky sessions + connection limits per process + monitoring connection count + graceful shutdown that drains connections before stopping.
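The last checklist item can be sketched like this — a minimal shape, assuming the io, pubClient, and subClient from the adapter setup above, with an arbitrary 10-second grace period:

```javascript
async function shutdown() {
  console.log('SIGTERM received, draining connections...');

  // Stop accepting new connections and disconnect existing clients,
  // so they reconnect to the remaining healthy instances.
  io.close();

  // Safety net: force-exit if draining takes too long.
  // unref() lets the process exit sooner if everything closes cleanly.
  setTimeout(() => process.exit(1), 10_000).unref();

  // Close the Redis connections so the event loop can empty.
  await Promise.all([pubClient.quit(), subClient.quit()]);
  process.exit(0);
}

process.on('SIGTERM', shutdown);
```

Deploy tooling (Kubernetes, systemd, most PaaS platforms) sends SIGTERM before killing a process, so this handler is what makes zero-downtime rollouts possible.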
📝 Day 5 Exercise
Scale Your Chat App
  1. Add the Redis adapter to your Day 3 chat app using a local Redis instance (docker run -d -p 6379:6379 redis).
  2. Start two server instances on different ports: PORT=3001 node server.js and PORT=3002 node server.js.
  3. Connect two browser tabs to different ports and verify they can chat.
  4. Add a graceful shutdown handler that closes the Redis connection and drains WebSocket connections before exiting.

Day 5 Summary

  • Horizontal scaling requires a shared pub/sub broker — Redis is the standard choice.
  • The Redis adapter makes io.emit() work across all server instances automatically.
  • Load balancers need sticky sessions so the WebSocket handshake and frames hit the same server.
  • Add graceful shutdown to drain connections cleanly — important for zero-downtime deploys.