Worker tunnels
Long-lived, team-owned tunnels managed by a platform service supervisor — how they differ from ad-hoc tunnels.
A worker tunnel is a tunnel that's meant to stay up. Unlike `justtunnel <port>` — which lives only as long as your terminal session — a worker is registered server-side, gets a stable team-scoped subdomain, and runs under a platform service supervisor (a launchd user agent on macOS, a `systemd --user` unit on Linux).
If you've ever wished a tunnel could survive logout, reboot, and SSH disconnects, that's a worker. If you want a tunnel that pairs with a webhook URL you'll publish to a customer, that's also a worker. Most teams have one or two: a staging-api for shared previews, a webhook-receiver for inbound integrations.
How it works
justtunnel worker install staging-api
│
├─▶ Server: register worker (id=wrk_..., subdomain=staging-api--acme)
├─▶ Local: write per-worker config in ~/.justtunnel/
└─▶ Supervisor: launchd-user (macOS) or systemd --user (Linux) unit
│
▼
every reboot / login → supervisor starts `worker start <name>`
│
▼
long-lived WebSocket attach loop to the edge
Three things happen on worker install:
- **Server-side registration.** The worker gets an id (`wrk_...`) and a derived subdomain `<name>--<team-slug>` (e.g. `staging-api--acme.justtunnel.dev`). The CLI validates that the derived subdomain fits within the 63-character DNS label limit before contacting the server.
- **Local config.** Per-worker config is written to `~/.justtunnel/` so the CLI can attach later without flags.
- **Supervisor install.** A native service definition is installed so the worker starts on login (and across reboots, with linger enabled on Linux).
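The pre-flight subdomain length check can be sketched as follows. This is illustrative only: `derive_subdomain` and its error message are assumed names, not the CLI's actual internals.

```python
# Sketch of the pre-flight check: derive <name>--<team-slug> and
# validate it against the single-label DNS limit before any network call.
MAX_DNS_LABEL = 63  # maximum length of one DNS label (RFC 1035)

def derive_subdomain(name: str, team_slug: str) -> str:
    """Derive the worker subdomain and fail fast if it is too long."""
    label = f"{name}--{team_slug}"
    if len(label) > MAX_DNS_LABEL:
        raise ValueError(
            f"derived subdomain is {len(label)} chars; "
            f"DNS labels are limited to {MAX_DNS_LABEL}"
        )
    return label

print(derive_subdomain("staging-api", "acme"))  # staging-api--acme
```

Doing the check client-side means a too-long name fails before the server ever sees the registration request.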
After install, the supervisor invokes `justtunnel worker start <name>` whenever it should be running. That command runs the same WebSocket attach loop as the foreground path — there is no separate "worker mode" on the wire. See `justtunnel worker` for the full command surface.
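In outline, that attach loop is a reconnect loop with backoff. The sketch below is a hedged model, not the CLI's code: `connect` stands in for the WebSocket dial-and-serve call, and the backoff constants are assumptions.

```python
import time

def attach_loop(connect, should_run=lambda: True, max_backoff=60.0,
                _sleep=time.sleep):
    """Sketch of a long-lived attach loop: connect, serve until the
    connection drops, then reconnect with exponential backoff."""
    backoff = 1.0
    while should_run():
        try:
            connect()          # blocks while the tunnel is attached
            backoff = 1.0      # clean detach: reset backoff, reattach
        except ConnectionError:
            _sleep(backoff)    # transient failure: back off, then retry
            backoff = min(backoff * 2, max_backoff)
```

Because the supervisor restarts the process on crash and the loop restarts the connection on drop, the tunnel survives both kinds of failure.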
Workers vs ad-hoc tunnels
| Aspect | Ad-hoc (`justtunnel <port>`) | Worker (`justtunnel worker ...`) |
|---|---|---|
| Lifetime | Session-bound; dies when the terminal closes | Long-lived; survives logout and reboot |
| Owner | The active context (personal or team) | Always team — workers are team-only |
| Subdomain | Random, requested, or reserved | Derived: `<name>--<team-slug>` |
| Local management | None | Platform supervisor (launchd/systemd) |
| Logs | Stream to the terminal | Rotated daily to `~/.justtunnel/logs/worker-<name>.log` |
| `--local-timeout` | Honored | Not applicable — workers are a pure attach loop |
| Counts against | `MaxTunnels` | `MaxWorkers` (separate quota) |
Use an ad-hoc tunnel for testing on your laptop right now. Use a worker when the URL needs to keep working after you close your laptop lid.
Platform support
Sourced from `justtunnel-cli/cmd/worker*.go`:
| Platform | Foreground (`worker start`) | Managed service (`worker install`) |
|---|---|---|
| macOS | Supported | Supported (launchd user agents survive logout natively) |
| Linux | Supported | Supported (`systemd --user`; linger needed for survival across logout) |
| Windows | Supported | Not supported — use `worker start` under your own supervisor (Task Scheduler, NSSM); see the CLI repo's `docs/windows-recipe.md` |
`worker install` is idempotent across all four combinations of local config and server-side record (both present, both absent, or one without the other) — re-running on a fully-configured worker is a no-op except for a supervisor refresh.
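Those four cases can be made concrete with a small sketch. The step names here are hypothetical, but the no-op-except-refresh behavior matches the description above.

```python
def plan_install(local_present: bool, server_present: bool) -> list:
    """Sketch of install idempotency: do only the missing steps.
    Step names are illustrative, not the CLI's actual internals."""
    steps = []
    if not server_present:
        steps.append("register-on-server")   # create wrk_... record
    if not local_present:
        steps.append("write-local-config")   # files under ~/.justtunnel/
    steps.append("refresh-supervisor")       # always, even when complete
    return steps
```

On a fully-configured worker both guards skip, leaving only the supervisor refresh — exactly the documented no-op behavior.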
Lifecycle
A worker has both a server-side state and a local supervisor state. `justtunnel worker status` reports both:
NAME SERVER LOCAL LAST SEEN
staging-api active launchd:running 2026-05-06 12:34:56Z
`SERVER` is what the JustTunnel API knows about: `active`, `quarantined`, or `<local-only>` (the local config exists but the server has no record). `LOCAL` is what the platform supervisor reports: `launchd:running`, `systemd:stopped`, or `none` (no supervisor managing it).
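A sketch of how the two views might be merged into one status row. The field handling is assumed for illustration, not read from the CLI source.

```python
def status_row(name, server_state, supervisor, supervisor_state):
    """Combine the API's view and the supervisor's view of one worker.
    server_state: "active", "quarantined", or None (no server record).
    supervisor/supervisor_state: e.g. ("launchd", "running"), or None."""
    server = server_state if server_state is not None else "<local-only>"
    local = f"{supervisor}:{supervisor_state}" if supervisor else "none"
    return (name, server, local)
```

The two columns are independent, which is why a worker can be `active` on the server while its local supervisor reports `none`.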
Tearing down a worker has two scopes. `worker uninstall <name>` stops the supervisor and removes local config, leaving the server-side record so you can re-install with the same name later. `worker uninstall <name> --delete-on-server` also quarantines the server-side row — the server soft-deletes for 30 days before a background reaper removes it permanently.
If local config is corrupt or the supervisor is wedged, `worker uninstall <name> --force` continues past per-step failures and exits 0 with a stderr summary. That's the right tool when the normal path fails.
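The --force semantics described here (run every teardown step, collect failures instead of aborting, summarize on stderr, exit 0) can be sketched as:

```python
import sys

def force_uninstall(steps):
    """Sketch of --force teardown. `steps` is a list of (name, callable)
    pairs standing in for stop-supervisor / remove-config / etc."""
    failures = []
    for name, step in steps:
        try:
            step()
        except Exception as exc:  # continue past per-step failures
            failures.append(f"{name}: {exc}")
    if failures:
        print("uninstall finished with errors:", "; ".join(failures),
              file=sys.stderr)
    return 0  # --force always exits 0, even when some steps failed
```

The key property is that one wedged step never blocks the later ones, so as much state as possible gets cleaned up.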
Limits and guarantees
What's enforced today, sourced from `internal/plan/limits.go`:
- **Workers require a paid plan.** Free and Starter plans have `MaxWorkers: 0` (the field defaults to zero and is not populated for those tiers). Pro allows 1 worker per user; Team allows 2 workers per seat.
- **Worker request budget.** Pro: 5,000 worker requests/month. Team: 50,000/month, pooled across the entire team (not per seat). Soft cap; review usage in the dashboard.
- **Worker rate limit.** Pro: 500 req/min per worker. Team: 2,000 req/min per worker.
- **Workers are team-only.** The server checks the active context on every worker command. The CLI also pre-flights this so you fail fast on `personal`.
- **Soft-delete window.** `worker uninstall --delete-on-server` and `worker rm --delete-on-server` quarantine for 30 days before permanent removal. Run `worker list --all` to see quarantined rows.
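The enforced limits above can be summarized in a sketch. The struct shape and function are illustrative; only the numbers come from the documentation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PlanLimits:
    """Illustrative per-plan worker limits, as documented above."""
    max_workers_per_seat: int
    monthly_worker_requests: int  # soft cap; Team's budget is pooled
    worker_req_per_min: int

LIMITS = {
    "free":    PlanLimits(0, 0, 0),        # MaxWorkers defaults to zero
    "starter": PlanLimits(0, 0, 0),
    "pro":     PlanLimits(1, 5_000, 500),
    "team":    PlanLimits(2, 50_000, 2_000),
}

def can_install_worker(plan: str, existing: int) -> bool:
    """Would one more worker fit under this plan's per-seat quota?"""
    return existing < LIMITS[plan].max_workers_per_seat
```

This is the check that makes `worker install` fail immediately on Free and Starter, where the quota is zero.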
Best-effort, not guaranteed:
- **Service supervisor uptime.** The platform supervisor (`launchd`, `systemd --user`) handles restarts on crash, but misconfigured user linger on Linux can leave a worker stopped after logout. The CLI prompts about this on Linux installs.
- **Log retention beyond 7 days / 100 MB.** The writer rotates daily and reaps anything older than the cap. Don't rely on the logs as long-term audit storage; ship them off-box if you need that.
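The 7-day / 100 MB retention policy can be sketched as below. This is a simplified model operating on (mtime, size) tuples; the real reaper works on rotated log files on disk.

```python
MAX_AGE_SECONDS = 7 * 24 * 3600          # 7-day cap
MAX_TOTAL_BYTES = 100 * 1024 * 1024      # 100 MB cap

def reap(files, now):
    """Sketch of log retention: drop anything older than 7 days, then
    evict oldest-first until the total size fits under 100 MB.
    `files` is a list of (mtime, size) tuples, one per rotated log."""
    kept = [f for f in files if now - f[0] <= MAX_AGE_SECONDS]
    kept.sort()                          # oldest first
    while kept and sum(size for _, size in kept) > MAX_TOTAL_BYTES:
        kept.pop(0)                      # evict the oldest file
    return kept
```

Either cap alone can delete a file, which is why the logs are unsuitable as long-term audit storage.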
Related
- `justtunnel worker` — every worker subcommand.
- Run a worker tunnel — task-oriented walkthrough.
- Contexts — workers require a team context.
- Tunnel anatomy — the underlying WebSocket and request path.
- Plans and limits — worker quotas per tier.
- Worker won't start — troubleshooting failed installs.