2026 OpenClaw Gateway on cloud Docker with Tailscale private tailnet access

Teams that run OpenClaw Gateway on a cheap cloud VM usually want the same thing: MCP and Streamable HTTP reachable only on the tailnet, with no public listener on the host or in the security group. The durable pattern is a Docker Compose stack where the official Tailscale image acts as a sidecar: it runs tailscale up, keeps its state under a persistent volume, and handles interface bring-up, while the Gateway container binds to the bridge or 127.0.0.1 and never advertises 0.0.0.0:443 to the Internet. Treat the VM’s provider firewall as necessary but not sufficient — Docker publish rules and systemd socket units can still bypass what security groups show on paper. Exact image tags, health checks, and ports follow the OpenClaw release you pin; this note captures the network and auth layering you can paste into a runbook. For how OpenClaw maps to Docker versus native daemons on a remote Mac, see OpenClaw remote Mac deployment: install paths, Docker versus native daemons, and workflow sketches.

1. Compose sidecars: do not let ports: reopen the public data plane

Place the sidecar and Gateway on the same user-defined bridge, mount separate state directories, and never share a single /var/lib/tailscale between two containers. The classic foot-gun is ports: "443:443" or binding the Gateway to 0.0.0.0 “just to test” — your cloud firewall can be perfect while Docker still publishes a path straight to the Internet. Prefer listening on loopback or the internal bridge IP, then reach the service over Tailscale DNS names from other tailnet members. When you need TLS termination, terminate on a reverse proxy that is reachable only inside the tailnet, not via a host publish. Apply the same discipline when you tighten openclaw.json permissions and tokens; see OpenClaw production hardening: minimal permissions, ClawHub imports, and reproducible doctor runbooks.
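A minimal Compose sketch of that layering, using the official tailscale/tailscale image; the Gateway image name and tag, the clawnet network name, and the openclaw-gw hostname are placeholders for whatever your pinned OpenClaw release specifies:

```yaml
# Sketch only: openclaw/gateway and PINNED-TAG are placeholders; pin your release.
services:
  tailscale:
    image: tailscale/tailscale:stable
    hostname: openclaw-gw              # becomes the MagicDNS name
    environment:
      TS_AUTHKEY: ${TS_AUTHKEY}        # injected at deploy time, never baked in
      TS_STATE_DIR: /var/lib/tailscale
    volumes:
      - ts-state:/var/lib/tailscale    # persistent state; a restart must not force re-login
      - /dev/net/tun:/dev/net/tun
    cap_add:
      - NET_ADMIN
    networks: [clawnet]
    restart: unless-stopped

  gateway:
    image: openclaw/gateway:PINNED-TAG
    depends_on: [tailscale]
    networks: [clawnet]
    # Deliberately no ports: section. Nothing is published on the host;
    # traffic reaches the Gateway over the bridge, via the sidecar.
    restart: unless-stopped

networks:
  clawnet:

volumes:
  ts-state:
```

Note the single ts-state volume belongs to one sidecar only; a second sidecar gets its own volume, never a shared /var/lib/tailscale.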

Rule of thumb: run docker ps and confirm nothing maps to the host public interface before you spend hours tuning ACLs or certificates.
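One way to script that rule of thumb (flag_public_binds is a hypothetical helper, not part of Docker or OpenClaw):

```shell
# Flag docker ps port mappings bound to a public interface.
# Loopback (127.0.0.1) and bridge-only binds pass silently.
flag_public_binds() {
  grep -E '0\.0\.0\.0:[0-9]+|:::[0-9]+|\[::\]:[0-9]+' || true
}

# Typical invocation on the VM:
#   docker ps --format '{{.Names}} {{.Ports}}' | flag_public_binds
```

An empty result means Docker publishes nothing on 0.0.0.0 or the IPv6 wildcard; anything printed deserves attention before you touch ACLs or certificates.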

2. Bind the Gateway to the tailnet: stack HTTPS and Bearer tokens

After tailscale up succeeds, issue certificates with tailscale cert or an internal Caddy instance so clients trust the chain instead of silently rejecting self-signed files. Inject tokens with environment variables or Docker secrets mounted as files — never bake them into image layers. If you terminate TLS in front of the Gateway, validate Authorization: Bearer at the proxy and keep a second check in the Gateway so a mis-routed path cannot skip auth. Log 401s and TLS handshake failures as separate structured events so on-call engineers do not chase the wrong layer. Rotate tokens with a rolling restart; long-lived processes often cache the old secret until they are replaced. Document who owns the Tailscale auth key and who may edit ACLs in the same rotation table you use for API keys.
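To keep those layers apart in scripts, one pattern is to classify a probe from curl's exit code and the HTTP status together; classify_probe is a hypothetical helper, and curl exit 35 (SSL connect error) and 60 (peer certificate not verified) are the TLS-layer codes:

```shell
# Route TLS failures and auth failures into different log streams.
classify_probe() {  # $1 = curl exit code, $2 = HTTP status
  case "$1:$2" in
    0:401)     echo auth-rejected ;;  # TLS fine, token bad: rotate or re-inject
    0:*)       echo "http-$2" ;;
    35:*|60:*) echo tls-failed ;;     # handshake or chain problem: check certs
    *)         echo "curl-exit-$1" ;;
  esac
}

# Typical probe (URL and token variable names are placeholders):
#   status=$(curl -sS -o /dev/null -w '%{http_code}' \
#            -H "Authorization: Bearer $GATEWAY_TOKEN" "$GATEWAY_URL")
#   classify_probe $? "$status"
```

Emitting one stable keyword per failure mode makes the 401-versus-TLS split greppable instead of buried in curl verbosity.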

3. Serve versus Funnel: keep policy boundaries explicit

tailscale serve is usually the right primitive for tailnet-only access, while Funnel intentionally exposes selected paths to the public Internet. If compliance demands zero public exposure, leave Funnel off by default, block it in policy, and audit for “temporary” debugging toggles. Under Serve, hostnames or path prefixes that do not match the Gateway base path look like flaky 404s or redirect loops even when the process is healthy — align MagicDNS names, Serve paths, and OpenClaw route tables in the same change ticket.
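Because Funnel only works on nodes granted the funnel node attribute, the tailnet policy file is where the “block it in policy” half lives. A sketch in the policy file's HuJSON, with a placeholder tag name:

```jsonc
{
  // No node is granted the "funnel" attribute, so tailscale funnel
  // fails tailnet-wide. Re-enable deliberately by scoping a grant, e.g.:
  //   { "target": ["tag:debug-host"], "attr": ["funnel"] }
  "nodeAttrs": []
}
```

On the node itself, tailscale serve status and tailscale funnel status make the current exposure explicit enough to paste into the change ticket.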

4. Connection failures: shrink the search space in a fixed order

Work through failures in a fixed order:

  1) Inside the sidecar, run tailscale status and confirm you joined the intended tailnet with no NeedsLogin state.
  2) From another member, ping the 100.x address and curl -vk https://machine-name to separate ACL denials from application bugs.
  3) curl the Gateway directly with a real Authorization header so TLS errors do not masquerade as 401s.
  4) If Streamable HTTP fails while stdio still works, revisit idle timeouts and body size limits end to end.

To rehearse incidents, tighten an ACL, capture logs, then revert — the rollback should be as scripted as the failure.

  • Did a sidecar restart wipe the state volume and force a fresh login?
  • Are you accidentally running both host tailscaled and a container sidecar on overlapping routes?
  • Did cloud metadata endpoints get pulled into “internal only” routing tables?
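The fixed-order checks compress into a paste-ready plan. Everything below is a sketch: the tailscale service name, the openclaw-gw MagicDNS name, the sample 100.x address, and $TOKEN are all placeholders for your stack:

```shell
# Emit the triage commands in order instead of running them blind,
# so the runbook and the script stay the same artifact.
triage_plan() {
  cat <<'EOF'
docker compose exec tailscale tailscale status
ping -c 3 100.64.0.7
curl -vk https://openclaw-gw
curl -sS -o /dev/null -w '%{http_code}\n' -H "Authorization: Bearer $TOKEN" https://openclaw-gw
EOF
}

# Review first, then run step by step:
#   triage_plan            # print the plan
#   triage_plan | sh -x    # or execute it, echoing each command
```

Keeping the rollback (loosen the ACL again, restart the sidecar) in the same script satisfies the rule that reverting should be as scripted as the failure.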

Closing: Linux gateways, macOS for the heavy half

Keeping Gateway on a small Linux VM saves money, but notarization, code signing, and GUI-heavy debugging still belong on macOS. Splitting control-plane containers on the tailnet from burst workloads on a dedicated Mac mini gives you Apple Silicon memory bandwidth and native Xcode tooling without exposing either side to the public Internet. Mac mini idles at roughly four watts, stays silent under continuous duty, and layers Gatekeeper, SIP, and FileVault for unattended machines. If you want that pairing without sourcing more desk hardware, start from a Mac mini M4 in a nearby region on the Macstripe home page and wire it into the same tailnet as this Compose stack.