Redis adapter, sticky sessions, load balancing, and production deployment.
A single Node.js WebSocket server can handle thousands of connections, but as you scale horizontally (multiple servers), a problem emerges: Socket.io events are in-memory. If user A is on Server 1 and user B is on Server 2, they can't communicate by default because Server 1 doesn't know about Server 2's connections.
The solution is a shared message broker — typically Redis — that all servers subscribe to. When Server 1 emits an event, Redis delivers it to Server 2, which forwards it to its connected clients.
const http = require('http');
const { Server } = require('socket.io');
const { createAdapter } = require('@socket.io/redis-adapter');
const { createClient } = require('redis');

async function createServer() {
  // Redis requires separate connections for publishing and subscribing,
  // so duplicate the client rather than reusing one connection.
  const pubClient = createClient({ url: process.env.REDIS_URL });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);

  const httpServer = http.createServer();
  const io = new Server(httpServer);
  io.adapter(createAdapter(pubClient, subClient));
  return io;
}
// Now io.emit() automatically reaches ALL server instances
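The same goes for rooms: a broadcast to a room reaches members on every instance, because the adapter publishes the event to Redis and each instance forwards it to its local sockets. A minimal sketch (the room and event names here are illustrative, not part of the setup above):

```javascript
// Assumes `io` was created with the Redis adapter as shown above.
// Room name 'chat:lobby' and event 'message' are illustrative.
io.on('connection', (socket) => {
  socket.join('chat:lobby');

  socket.on('message', (text) => {
    // Delivered to every member of the room, including sockets
    // connected to a different server instance.
    io.to('chat:lobby').emit('message', text);
  });
});
```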
Socket.io starts every session with an HTTP handshake and, unless long-polling is disabled, may send several HTTP requests before upgrading to WebSocket. All of those requests must reach the server that holds the session state, so configure your load balancer to use sticky sessions (also called session affinity) based on a cookie or IP hash.
upstream websocket_servers {
    ip_hash;  # Sticky sessions: the same client IP always reaches the same server
    server ws-server-1:3000;
    server ws-server-2:3000;
    server ws-server-3:3000;
}

server {
    location /socket.io/ {
        proxy_pass http://websocket_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
To try this locally, start Redis (docker run -d -p 6379:6379 redis), then launch two instances of the server: PORT=3001 node server.js and PORT=3002 node server.js. Connect a client to each port and verify that io.emit() now works across all server instances automatically.
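One way to check cross-instance delivery end to end is a small client script. This is a sketch: it assumes the two instances from the commands above are running on ports 3001 and 3002, that socket.io-client is installed, and that the server relays a 'message' event via io.emit() (as in a typical chat handler).

```javascript
// Smoke test: a client on each instance; a broadcast from one
// should arrive at the other via the Redis adapter.
const { io } = require('socket.io-client');

const a = io('http://localhost:3001');
const b = io('http://localhost:3002');

b.on('message', (text) => {
  console.log('received on 3002:', text); // proves cross-instance delivery
  a.close();
  b.close();
});

a.on('connect', () => a.emit('message', 'hello from 3001'));
```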