How Deployer talks to itself
When Deployer deploys to ten hosts at once, it does not run ten SSH sessions inside a single PHP process. It spawns ten subprocesses, one per host, and lets them run in parallel. Each subprocess (a "worker") needs to ask the parent (the "master") for things: the current config for its host, permission to prompt the user, somewhere to push updated values back. v8 ships a rewritten version of that master and worker plumbing. This post walks through what is in there, and why.
Why subprocesses at all
PHP does not have great concurrency primitives. There is no real threading in the CLI runtime, and the async ecosystem (ReactPHP, Amp, Swoole) is capable but heavy to depend on. Deployer's job is to run shell commands on remote hosts, and the simplest model that works for that is a process per host.
The trade-off is coordination. Each worker has its own copy of the recipe, its own config object, its own SSH
connection. If a task calls ask() to prompt the user, only one worker should actually print a prompt and read stdin.
Workers need to talk to the parent.
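The process-per-host model is easy to sketch with nothing but the standard library. This is not Deployer's actual spawning code (it drives workers through Symfony Process); it is a minimal illustration with made-up host names and a stand-in command:

```php
<?php
// One subprocess per "host", all running in parallel. proc_open() with an
// array command avoids shell quoting entirely.
$hosts = ['web1', 'web2', 'web3'];
$procs = [];
$out = [];

foreach ($hosts as $host) {
    // Stand-in for the real worker command; it just echoes its host name.
    $cmd = [PHP_BINARY, '-r', 'echo "deployed " . $argv[1], PHP_EOL;', '--', $host];
    $procs[$host] = proc_open($cmd, [1 => ['pipe', 'w']], $pipes);
    $out[$host] = $pipes[1];
}

// The parent drains each worker's stdout and collects exit codes.
$exitCodes = [];
foreach ($procs as $host => $proc) {
    echo stream_get_contents($out[$host]);
    fclose($out[$host]);
    $exitCodes[$host] = proc_close($proc);
}
```

The real version is messier (non-blocking reads, crash handling), but the shape is the same: spawn everything, then collect.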
A small HTTP server
In Deployer, the master starts a tiny HTTP server bound to 127.0.0.1 on a random port, and each worker is told the
port via command-line arguments. Workers POST JSON to it. The core loop is a non-blocking accept loop with a 60 Hz tick:
while (true) {
    $this->acceptNewConnections();
    $this->handleClientRequests();
    usleep(16_000); // 16ms
    ($this->tickerCallback)();
    if ($this->stop) {
        break;
    }
}
Three things happen on every tick:
- Accept any new TCP clients without blocking.
- Read available bytes from each connected client. If the buffer contains a complete HTTP request (headers terminated by \r\n\r\n, body length matches Content-Length), parse and dispatch it.
- Run the master's "ticker" callback. The master uses this to drain worker stdout/stderr into the deploy log, animate a spinner, and check whether all workers have exited.
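The framing check in the second step is short enough to write out. A sketch under the same assumptions (headers end at the first \r\n\r\n, body length comes from Content-Length, defaulting to zero); the function name is ours, not Deployer's:

```php
<?php
// A request is "complete" once the header block has fully arrived and the
// buffer holds at least Content-Length bytes of body after it.
function isCompleteRequest(string $buffer): bool
{
    $headerEnd = strpos($buffer, "\r\n\r\n");
    if ($headerEnd === false) {
        return false; // headers not fully received yet
    }
    $headers = substr($buffer, 0, $headerEnd);
    $contentLength = 0;
    if (preg_match('/^Content-Length:\s*(\d+)/mi', $headers, $m)) {
        $contentLength = (int) $m[1];
    }
    $body = substr($buffer, $headerEnd + 4);
    return strlen($body) >= $contentLength;
}
```

Until this returns true, the bytes just sit in the per-connection buffer and the loop moves on to the next client.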
There is no event loop library, no fiber, no async framework. Just stream_socket_server,
stream_set_blocking($socket, false), and a buffer per connection. The whole Server class is around 300 lines.
The reason this works is that the traffic is tiny and local. Worker requests are JSON payloads in the kilobyte range, the round-trip stays inside a single machine, and there are at most a few dozen connections at peak. We do not need backpressure or HTTP/2 or websockets. We need "the master can answer before the worker times out", and a simple polling loop on localhost handles that easily.
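The listening side takes a few lines of stdlib PHP. Binding to port 0 lets the OS pick the random port, and stream_set_blocking() makes accept return immediately when no client is waiting. A minimal, illustrative setup, not Deployer's exact code:

```php
<?php
// Listen on localhost; port 0 asks the OS for a free port, as the master does.
$server = stream_socket_server('tcp://127.0.0.1:0', $errno, $errstr);
if ($server === false) {
    throw new RuntimeException("Listen failed: $errstr ($errno)");
}
stream_set_blocking($server, false);

// The actual bound address (with the assigned port), to hand to workers.
$address = stream_socket_get_name($server, false);

// One accept attempt per tick: with a non-blocking socket and a zero
// timeout this returns false immediately when no client is pending,
// so the loop never stalls. @ suppresses the expected warning.
$client = @stream_socket_accept($server, 0);
```

Everything else in the loop is the same pattern: try, get false, move on, try again next tick.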
The routes
The master exposes three endpoints:
$server->router(function (string $path, array $payload) {
    switch ($path) {
        case '/load':
            // Worker requests current config for its host.
            ...
        case '/save':
            // Worker pushes an updated config back to master.
            ...
        case '/proxy':
            // Worker asks master to run an interactive function.
            ...
    }
});
/load and /save
Config in Deployer is a per-host dictionary that can be mutated at runtime. A task might call set('release_path', ...)
after creating a new release directory, and a later task running in a different worker (on the same host) needs to see
that value. In v8, Configuration::load() and Configuration::save() are simple HTTP calls:
public function load(): void
{
    if (!Deployer::isWorker()) {
        return;
    }
    $values = Httpie::post(MASTER_ENDPOINT . '/load')
        ->noTimeout()
        ->bearerToken(MASTER_TOKEN)
        ->jsonBody(['host' => $this->get('alias')])
        ->sendJson();
    $this->update($values);
}
The master holds the canonical config for every host in the run. Workers fetch a snapshot before they start a task and push their changes back when they finish. There is no shared memory, no diffing, no merge logic. The host config is small enough that round-tripping the whole thing on each task boundary is fine.
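The worker's side of that round trip needs nothing beyond PHP's HTTP stream wrapper. Httpie is Deployer's own helper; this sketch does the same POST-JSON-get-JSON dance with file_get_contents(), and postJson() is a made-up name for illustration:

```php
<?php
// POST a JSON payload with a bearer token, decode the JSON response.
// A stand-in for Deployer's Httpie helper, not its actual code.
function postJson(string $url, string $token, array $payload): array
{
    $context = stream_context_create(['http' => [
        'method'  => 'POST',
        'header'  => implode("\r\n", [
            'Content-Type: application/json',
            "Authorization: Bearer $token",
        ]),
        'content' => json_encode($payload),
    ]]);
    $response = file_get_contents($url, false, $context);
    return json_decode($response, true);
}

// Before a task: fetch the canonical snapshot for this host, e.g.
//   $values = postJson("$masterUrl/load", $token, ['host' => 'web1']);
// After a task: push the whole (possibly mutated) config back via /save.
```

That is the entire client: no connection pooling, no retries, because the server is the parent process on the same machine.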
/proxy
Interactive prompts are the awkward case. If three workers all hit a confirm("Continue?") call at the same time, you
do not want three overlapping prompts on the same terminal. The fix is to forward the prompt back to the master, which
is the only process that owns the real stdin and stdout:
case '/proxy':
    ['host' => $host, 'func' => $func, 'arguments' => $arguments] = $payload;
    $allowedFunctions = [
        'Deployer\ask',
        'Deployer\askChoice',
        'Deployer\askConfirmation',
        'Deployer\askHiddenResponse',
    ];
    if (!in_array($func, $allowedFunctions, true)) {
        return new Response(403, ['error' => "Function not allowed: $func"]);
    }
    Context::push(new Context($this->hosts->get($host)));
    $answer = call_user_func($func, ...$arguments);
    Context::pop();
    return new Response(200, $answer);
The allowlist is intentional. /proxy is "let me run a function in the master process", which is a powerful primitive,
so we restrict it to the four interactive helpers it is meant for. Anything else gets a 403.
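Pulled out on its own, the gate is a one-liner, and the third argument to in_array() is doing real work: strict comparison means PHP's type juggling cannot sneak a value past the list. A standalone sketch, with a function name of our own:

```php
<?php
// The four interactive helpers /proxy exists for; anything else is rejected.
const ALLOWED_PROXY_FUNCTIONS = [
    'Deployer\ask',
    'Deployer\askChoice',
    'Deployer\askConfirmation',
    'Deployer\askHiddenResponse',
];

function isProxyAllowed(string $func): bool
{
    // Strict (===) comparison: only an exact, case-sensitive string match passes.
    return in_array($func, ALLOWED_PROXY_FUNCTIONS, true);
}
```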
Why a bearer token
The master listens on 127.0.0.1, not on a public interface. So why does it bother with auth?
$authToken = bin2hex(random_bytes(16));
$server->setAuthToken($authToken);
// ...
$process->setEnv(['DEPLOYER_MASTER_TOKEN' => $authToken]);
On a shared developer machine or a CI runner, any local process can connect to 127.0.0.1:<port>. If your CI pipeline
runs untrusted code in another container on the same network namespace, or if a coworker is logged into the same build
host, that "local" port is not actually private. Without auth, anything that could connect() to it could call /save
and mutate the deploy config, or /proxy and trigger an interactive prompt that hangs the deploy.
A 128-bit random token, passed to workers via an environment variable and required as a bearer header, makes the surface
tiny. Workers know it because the master spawned them. Anyone else does not. The cost is one random_bytes() call and a
header check on each request.
if ($this->authToken !== null) {
    $provided = $headers['authorization'] ?? '';
    if ($provided !== "Bearer {$this->authToken}") {
        $this->sendResponse($socket, new Response(403, ['error' => 'Forbidden']));
        continue;
    }
}
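One nit worth knowing about: !== compares strings byte by byte and bails at the first mismatch, which in principle leaks timing. For a 128-bit random token on localhost that channel is mostly theoretical, but PHP ships hash_equals() for exactly this, and the swap is free. A sketch of the same check, with a function name of our own:

```php
<?php
// Same gate as above, but with PHP's constant-time string comparison.
function isAuthorized(?string $authToken, array $headers): bool
{
    if ($authToken === null) {
        return true; // auth disabled
    }
    $provided = $headers['authorization'] ?? '';
    // hash_equals() takes the same time whether the mismatch is at
    // byte 1 or byte 40.
    return hash_equals("Bearer $authToken", $provided);
}
```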
What it looks like in practice
If you run a deploy with -vvv, you can see the master starting:
[master] Starting server at http://127.0.0.1:54321
Each worker is a separate php bin/dep worker --port 54321 --task ... --host ... subprocess. The master tracks them
all, drains their stdout into the deploy log, and shuts down once every worker has exited. If a worker crashes, the
master's cumulativeExitCode() picks up the non-zero status and propagates it.
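The aggregation that cumulativeExitCode() implies is a few lines. A guess at the shape, not Deployer's actual implementation:

```php
<?php
// A deploy run fails if any worker failed; the first non-zero status
// is propagated as the overall exit code. Illustrative only.
function cumulativeExitCode(array $exitCodes): int
{
    foreach ($exitCodes as $code) {
        if ($code !== 0) {
            return $code;
        }
    }
    return 0;
}
```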
For most users, none of this is visible. You write recipes, you run dep deploy, things deploy. The master server is
plumbing. But it is the kind of plumbing that quietly breaks in interesting ways when it goes wrong.
