Elixir is distributed natively: the BEAM can connect multiple nodes into a cluster with no extra libraries. Today you connect nodes, use distributed GenServers, deploy with releases, and look at Elixir in production at scale.
Start a named node: 'iex --sname node1 --cookie secret'. Connect: 'Node.connect(:node2@hostname)'. Once connected, you can spawn processes on remote nodes, send messages to registered processes there, and call functions on other nodes. Distributed Erlang/Elixir extends the actor model across a cluster — the same message-passing code works whether processes are local or remote.
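Message passing is location-transparent: a '{name, node}' tuple addresses a registered process on any connected node. A minimal single-node sketch (the 'Echo' module and ':echo' name are illustrative; on a cluster, the node in the tuple would be the remote node's name):

```elixir
defmodule Echo do
  # Spawn a looping process and register it under a well-known name.
  def start do
    pid = spawn(fn -> loop() end)
    Process.register(pid, :echo)
  end

  defp loop do
    receive do
      {:ping, from} -> send(from, :pong)
    end

    loop()
  end
end

Echo.start()

# {name, node} works the same for a remote node, e.g. {:echo, :bob@host}.
send({:echo, node()}, {:ping, self()})

receive do
  :pong -> IO.puts("got pong")
end
```

The same 'send/2' call reaches a remote ':echo' once nodes are connected — only the node atom changes.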
Mix releases bundle your Elixir app and its dependencies into a self-contained directory that runs without Elixir or Mix installed. Create a release: 'mix release'. Start it with '_build/prod/rel/myapp/bin/myapp start'. Releases embed the BEAM VM, so deployment is essentially copying a directory. Platforms such as Gigalixir, Fly.io, and Render support deploying Elixir releases.
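Releases are configured under the ':releases' key of the project definition. A hypothetical 'mix.exs' fragment (the ':my_app' name and the chosen options are assumptions, not required settings):

```elixir
# mix.exs — illustrative project configuration for `mix release`
def project do
  [
    app: :my_app,
    version: "0.1.0",
    elixir: "~> 1.14",
    releases: [
      my_app: [
        include_executables_for: [:unix],  # skip Windows launch scripts
        strip_beams: true                  # smaller artifacts for deployment
      ]
    ]
  ]
end
```

With no ':releases' key at all, 'mix release' still works using defaults derived from the app name and version.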
Discord handles over 11 million concurrent users on Elixir/Phoenix. WhatsApp famously sustained over 2 million TCP connections on a single Erlang server. Bleacher Report scaled from roughly 150 servers (Ruby) to 5 servers (Elixir) with better performance. The BEAM's preemptive scheduler keeps latency low under load — it gives each process a bounded time slice (measured in reductions) and never lets one process starve others, unlike Node.js's single-threaded event loop, where one long-running callback blocks everything.
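The preemption claim is easy to verify yourself: hog a scheduler with a pure CPU loop and confirm that other processes still get time slices. A minimal sketch:

```elixir
# An anonymous function that calls itself forever: pure CPU work, no I/O.
loop = fn f -> f.(f) end
spawn(fn -> loop.(loop) end)

# Despite the busy loop, this message round-trip completes immediately,
# because the scheduler preempts the loop after its reduction budget.
parent = self()
spawn(fn -> send(parent, :pong) end)

receive do
  :pong -> IO.puts("still responsive")
after
  1_000 -> IO.puts("starved")
end
```

On Node.js, the equivalent `while (true) {}` would block the event loop and the timeout branch would never even fire.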
# Create and run a Mix project
mix new my_app
cd my_app
mix run --no-halt
# Build a production release
MIX_ENV=prod mix release
./_build/prod/rel/my_app/bin/my_app start
# Start two named nodes (in separate terminals)
iex --sname alice --cookie secret
iex --sname bob --cookie secret
# In alice's iex — with --sname the host part is your machine's
# short hostname, so replace `localhost` if node() shows something else:
Node.connect(:bob@localhost)
Node.list() # => [:bob@localhost]
# Spawn process on remote node
Node.spawn(:bob@localhost, fn ->
IO.puts "Running on #{node()}"
end)
# Observer: graphical process inspector
# In iex: :observer.start()
# Check BEAM memory and process count
:erlang.memory()
:erlang.system_info(:process_count)
Build a distributed key-value store across two connected BEAM nodes using GenServer. The primary node handles writes; the secondary replicates state via message passing. Simulate primary failure and verify the secondary can take over.
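A possible starting skeleton for the exercise (the names ':kv_primary' and ':kv_replica' and the synchronous-replication choice are assumptions; shown on a single node for brevity — across nodes, the replica address would become a tuple like '{:kv_replica, :bob@host}'):

```elixir
defmodule KV do
  use GenServer

  # `replica` is an optional server address to forward writes to.
  def start_link(opts) do
    name = Keyword.fetch!(opts, :name)
    replica = Keyword.get(opts, :replica)
    GenServer.start_link(__MODULE__, replica, name: name)
  end

  def put(server, key, value), do: GenServer.call(server, {:put, key, value})
  def get(server, key), do: GenServer.call(server, {:get, key})

  @impl true
  def init(replica), do: {:ok, {%{}, replica}}

  @impl true
  def handle_call({:put, key, value}, _from, {store, replica}) do
    # Replicate synchronously before acknowledging (assumption: we prefer
    # not losing writes over write latency; a cast would be faster).
    if replica, do: :ok = GenServer.call(replica, {:put, key, value})
    {:reply, :ok, {Map.put(store, key, value), replica}}
  end

  def handle_call({:get, key}, _from, {store, _replica} = state) do
    {:reply, Map.get(store, key), state}
  end
end

{:ok, _replica} = KV.start_link(name: :kv_replica)
{:ok, primary} = KV.start_link(name: :kv_primary, replica: :kv_replica)

:ok = KV.put(:kv_primary, :greeting, "hello")

# Simulate primary failure (trap exits so killing the linked server
# does not take this process down with it).
Process.flag(:trap_exit, true)
Process.exit(primary, :kill)

# The replica still holds the write and can take over reads.
IO.puts(KV.get(:kv_replica, :greeting))
```

To run it across two real nodes, start 'KV' with 'name: :kv_replica' on the secondary node and pass 'replica: {:kv_replica, :bob@host}' on the primary — 'GenServer.call/2' accepts that tuple unchanged.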