
Infra · 8 min read

The boring stack that runs half the lab: Docker + Watchtower + nginx + a wildcard cert

The goal

Every few weeks I want to stand up another self-hosted app — a new dashboard, a small MCP server, a wiki, a file drop, something I'm prototyping. Each one is the same standard shape: a container that speaks HTTP on some port, wants a pretty HTTPS URL, and should keep itself up to date without me babysitting it.

I wanted the marginal cost of "another service in the lab" to be genuinely close to zero. Not "fifteen minutes of YAML"; not "an hour of nginx and certbot"; literally the time to copy a recipe.

This is the stack I settled on.

The four pieces

                    ┌──────────────────────────────────────────┐
    *.dnsif.ca      │ nginx reverse proxy                      │
    *.nginx.dnsif.ca│ wildcard cert from acme.sh               │
            ───────▶│ vhost per service, proxy_pass to docker  │
                    └───────────┬──────────────────────────────┘
                                │
                                ▼
                   ┌─────────────────────────┐
                   │  Docker host            │
                   │ (docker.dnsif.ca, 11.1) │
                   │                         │
                   │  newthing:8080          │
                   │  another:3000           │
                   │  wiki:4000     …        │
                   └───────────┬─────────────┘
                               │ monitors
                               ▼
                     ┌──────────────────────┐
                     │  Watchtower          │
                     │  pulls + redeploys   │
                     │  on new image tags   │
                     └──────────────────────┘
  1. One Docker host, running all the long-running containers. Reachable as docker.dnsif.ca.
  2. Watchtower, a tiny container that watches the other containers and rolls them forward when a newer image is published for the tag they run.
  3. nginx as a reverse proxy, one vhost per service, proxy_passing to the Docker host.
  4. Wildcard DNS + wildcard TLS cert — so neither adding DNS nor issuing a cert is ever part of the "add a new service" path.

None of this is novel. The win is that they compose so well that the marginal cost collapses to almost nothing.

Docker host with several long-running containers

Piece 1 — A single Docker host

My lab has one machine (an old ThinkCentre, 24GB RAM) that exists to run Docker. It's reachable internally as docker.dnsif.ca at 10.33.11.1. Persistent services live here, nowhere else.

Why one host, not Kubernetes? Because for this scale — 30-ish long-running services, no meaningful scaling or scheduling needs — Kubernetes would be pure overhead. Docker Compose is enough. If a container dies, Docker restarts it. If the host reboots, systemd brings everything back up. Simpler fault model, simpler mental model.

Every service I add follows the same pattern:

services:
  newthing:
    image: vendor/newthing:latest
    container_name: newthing
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
    ports:
      - "8080:8080"
    environment:
      - ...

Restart policy: unless-stopped. Label: opt into Watchtower. Port: the service's internal port mapped to a free port on the host. That's it.

Piece 2 — Watchtower, but opt-in

Watchtower is a tiny Go daemon that watches your containers and redeploys them when the image behind their tag is updated upstream. By default it goes after everything, which is the anti-feature: updating a database container mid-flight because of a point release is not a nice surprise.

The fix is the WATCHTOWER_LABEL_ENABLE=true environment variable:

services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_POLL_INTERVAL=86400   # check once a day
      - WATCHTOWER_LABEL_ENABLE=true     # only containers that opt in
      - WATCHTOWER_CLEANUP=true          # remove old images after update
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      - WATCHTOWER_NOTIFICATION_URL=slack://...

Now Watchtower only touches containers with the com.centurylinklabs.watchtower.enable=true label. My databases, anything that holds state, anything critical — I leave unlabelled. They stay frozen at the version I deployed. Stateless things (static sites, most MCPs, small dashboards) get the label and auto-update.
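In a compose file, that split looks like this — service names here are illustrative, but the pattern is the one described above: stateful service pinned and unlabelled, stateless service opted in.

```yaml
services:
  postgres:                       # stateful: pinned tag, no label — Watchtower ignores it
    image: postgres:16.4
    restart: unless-stopped

  dashboard:                      # stateless: labelled, so Watchtower rolls it forward
    image: vendor/dashboard:latest
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
```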

I also get a Slack/shoutrrr ping whenever an update happens, which is the only "ops surface" the whole lab has to expose to me.

Watchtower update log

Piece 3 — nginx as the reverse proxy

I have a separate nginx box (nginx.dnsif.ca, 10.33.11.2) whose only job is reverse proxying. Keeping it off the Docker host is deliberate: if Docker needs a reboot, ingress still works; if I nuke an nginx config, Docker services are fine.

Adding a new service to nginx is one vhost file:

# /etc/nginx/sites-enabled/newthing.nginx.dnsif.ca.conf
server {
  listen 443 ssl http2;
  server_name newthing.nginx.dnsif.ca;

  ssl_certificate     /etc/nginx/ssl/dnsif.ca.cer;
  ssl_certificate_key /etc/nginx/ssl/dnsif.ca.key;

  location / {
    proxy_pass http://docker.dnsif.ca:8080;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}

Every service vhost is a near-clone of that. I have snippets for "WebSocket", "long-lived streaming" and "basic auth" as tiny include files I drop into the location block when needed.
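As a sketch, the WebSocket include is the standard three-directive upgrade dance (the snippet path and timeout value are my assumptions, not necessarily his):

```nginx
# /etc/nginx/snippets/websocket.conf — dropped into a location block when needed
proxy_http_version 1.1;
proxy_set_header Upgrade    $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 3600s;   # keep long-lived sockets from hitting the default 60s timeout
```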

Piece 4 — The wildcard cert and wildcard DNS, doing the boring heavy lifting

Here's the part that cuts the setup time in half:

  1. A wildcard DNS record, *.nginx.dnsif.ca, pointing at the nginx box. Every possible subdomain resolves before the service exists.
  2. A wildcard TLS cert from acme.sh covering *.nginx.dnsif.ca, renewed automatically and dropped at /etc/nginx/ssl/ where every vhost points.

Together, these two mean: adding a new service never touches DNS, never touches a cert. The wildcard covers the subdomain the moment I write the nginx vhost. TLS just works.
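For reference, issuing and installing a wildcard like this with acme.sh looks roughly like the following. Wildcard certs require the DNS-01 challenge, so acme.sh needs API access to your DNS provider — the dns_cf (Cloudflare) hook here is an assumption, not necessarily what I use.

```shell
# issue once; acme.sh's cron job handles renewals from then on
acme.sh --issue --dns dns_cf -d dnsif.ca -d '*.nginx.dnsif.ca'

# copy the cert where nginx expects it, reloading nginx on every renewal
acme.sh --install-cert -d dnsif.ca \
  --fullchain-file /etc/nginx/ssl/dnsif.ca.cer \
  --key-file       /etc/nginx/ssl/dnsif.ca.key \
  --reloadcmd      "systemctl reload nginx"
```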

The actual "add a new service" recipe

End-to-end, here's what launching newthing.nginx.dnsif.ca looks like:

  1. SSH to the Docker host. Add the service to docker-compose.yml:

      newthing:
        image: vendor/newthing:1.2
        restart: unless-stopped
        labels: ["com.centurylinklabs.watchtower.enable=true"]
        ports: ["8080:8080"]
    
  2. docker compose up -d newthing.

  3. SSH to the nginx box. Copy the vhost template, change three fields (subdomain, upstream host:port, any service-specific bits). nginx -t && systemctl reload nginx.

  4. Open https://newthing.nginx.dnsif.ca in the browser.

No DNS change. No cert request. No restarts besides the service itself. About four minutes if I'm not distracted.
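Step 3 is mechanical enough to script. A minimal sketch — the __NAME__/__PORT__ placeholders and the helper itself are my own invention, not part of the post's setup:

```shell
# stamp out a vhost from an inline template; prints to stdout
new_vhost() {
  name="$1"; port="$2"
  sed -e "s/__NAME__/${name}/g" -e "s/__PORT__/${port}/g" <<'EOF'
server {
  listen 443 ssl http2;
  server_name __NAME__.nginx.dnsif.ca;

  ssl_certificate     /etc/nginx/ssl/dnsif.ca.cer;
  ssl_certificate_key /etc/nginx/ssl/dnsif.ca.key;

  location / {
    proxy_pass http://docker.dnsif.ca:__PORT__;
    proxy_set_header Host              $host;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
  }
}
EOF
}
```

Usage: `new_vhost newthing 8080 > /etc/nginx/sites-enabled/newthing.nginx.dnsif.ca.conf`, then the usual `nginx -t && systemctl reload nginx`.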

Nginx vhost template

Things I don't do — and why

Results

This is the least exciting part of the lab, which is exactly why it's the most productive. When docker compose up -d feels like it publishes a fully-HTTPS service at a pretty URL on your own infra, self-hosting becomes a habit instead of a project.