The boring stack that runs half the lab: Docker + Watchtower + nginx + a wildcard cert
The goal
Every few weeks I want to stand up another self-hosted app — a new dashboard, a small MCP server, a wiki, a file drop, something I'm prototyping. Each one is a standard shape:
- Runs as a Docker container that wants to stay up.
- Needs its own HTTPS URL — ideally newthing.nginx.dnsif.ca — without editing DNS.
- Should stay patched automatically if upstream releases a new image.
- Shouldn't require me to think about certs, reverse proxies or DNS every time.
I wanted the marginal cost of "another service in the lab" to be genuinely close to zero. Not "fifteen minutes of YAML"; not "an hour of nginx and certbot"; literally the time to copy a recipe.
This is the stack I settled on.
The four pieces
                ┌──────────────────────────────────────────┐
*.dnsif.ca      │ nginx reverse proxy                      │
*.nginx.dnsif.ca│ wildcard cert from acme.sh               │
        ───────▶│ vhost per service, proxy_pass to docker  │
                └───────────┬──────────────────────────────┘
                            │
                            ▼
                ┌─────────────────────────┐
                │ Docker host             │
                │ (docker.dnsif.ca, 11.1) │
                │                         │
                │ newthing:8080           │
                │ another:3000            │
                │ wiki:4000 …             │
                └───────────┬─────────────┘
                            │ monitors
                            ▼
                ┌──────────────────────┐
                │ Watchtower           │
                │ pulls + redeploys    │
                │ on new image tags    │
                └──────────────────────┘
- One Docker host, running all the long-running containers. Reachable as docker.dnsif.ca.
- Watchtower, a tiny container that watches the other containers and rolls them forward when a newer tagged image appears.
- nginx as a reverse proxy, one vhost per service, proxy_passing to the Docker host.
- Wildcard DNS + wildcard TLS cert — so neither adding DNS nor issuing a cert is ever part of the "add a new service" path.
None of this is novel. The win is that they compose so well that the marginal cost collapses to almost nothing.

Piece 1 — A single Docker host
My lab has one machine (an old ThinkCentre, 24GB RAM) that exists to run Docker. It's reachable internally as docker.dnsif.ca at 10.33.11.1. Persistent services live here, nowhere else.
Why one host, not Kubernetes? Because for this scale — 30-ish long-running services, no meaningful scaling or scheduling needs — Kubernetes would be pure overhead. Docker Compose is enough. If a container dies, Docker restarts it. If the host reboots, systemd brings everything back up. Simpler fault model, simpler mental model.
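The only host-level setup that matters here is making sure the Docker daemon itself comes back after a reboot; the restart policies handle the rest. On a standard systemd distro that's one command (shown as a sketch; your distro's Docker package may already enable this for you):

sudo systemctl enable --now docker
# after a reboot, dockerd starts under systemd and restarts every container
# whose restart policy is "unless-stopped" (unless it was explicitly stopped)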
Every service I add follows the same pattern:
services:
  newthing:
    image: vendor/newthing:latest
    container_name: newthing
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"
    ports:
      - "8080:8080"
    environment:
      - ...
Restart policy: unless-stopped. Label: opt into Watchtower. Port: the service's internal port mapped to a free port on the host. That's it.
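A quick way to sanity-check that a container actually picked up both the policy and the label is docker inspect with a Go template (the container name newthing here just matches the snippet above):

docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' newthing
# expect: unless-stopped
docker inspect -f '{{ index .Config.Labels "com.centurylinklabs.watchtower.enable" }}' newthing
# expect: true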
Piece 2 — Watchtower, but opt-in
Watchtower is a tiny Go daemon that watches your containers and redeploys them when a new image tag is available. By default it goes after everything, which is the anti-feature: updating a database container mid-flight because of a point release is not a nice surprise.
The fix is the WATCHTOWER_LABEL_ENABLE=true environment variable:
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_POLL_INTERVAL=86400   # check once a day
      - WATCHTOWER_LABEL_ENABLE=true     # only containers that opt in
      - WATCHTOWER_CLEANUP=true          # remove old images after update
      - WATCHTOWER_NOTIFICATIONS=shoutrrr
      - WATCHTOWER_NOTIFICATION_URL=slack://...
Now Watchtower only touches containers with the com.centurylinklabs.watchtower.enable=true label. My databases, anything holding state, anything critical — I leave unlabelled. They stay frozen at the version I deployed. Stateless things (static sites, most MCPs, small dashboards) get the label and auto-update.
I also get a Slack/shoutrrr ping whenever an update happens, which is the only "ops surface" the whole lab has to expose to me.
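If you're ever unsure whether a freshly added container will be picked up, Watchtower can also run as a one-shot container. The --run-once, --label-enable and --monitor-only flags are the CLI counterparts of the environment settings above, so something like this reports what it would update without actually touching anything:

docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --run-once --label-enable --monitor-only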

Piece 3 — nginx as the reverse proxy
I have a separate nginx box (nginx.dnsif.ca, 10.33.11.2) whose only job is reverse proxying. Keeping it off the Docker host is deliberate: if Docker needs a reboot, ingress still works; if I nuke an nginx config, Docker services are fine.
Adding a new service to nginx is one vhost file:
# /etc/nginx/sites-enabled/newthing.nginx.dnsif.ca.conf
server {
    listen 443 ssl http2;
    server_name newthing.nginx.dnsif.ca;

    ssl_certificate     /etc/nginx/ssl/dnsif.ca.cer;
    ssl_certificate_key /etc/nginx/ssl/dnsif.ca.key;

    location / {
        proxy_pass http://docker.dnsif.ca:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Every service vhost is a near-clone of that. I have snippets for "WebSocket", "long-lived streaming" and "basic auth" as tiny include files I drop into the location block when needed.
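As an illustration, the WebSocket include can be as small as the standard upgrade headers. The file path below is just an example, not necessarily the one I use:

# /etc/nginx/snippets/websocket.conf (example path)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_read_timeout 1h;   # stop nginx from cutting long-lived connections at the 60s default

In the vhost above that's one extra line inside location /: include /etc/nginx/snippets/websocket.conf;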
Piece 4 — The wildcard cert and wildcard DNS, doing the boring heavy lifting
Here's the part that cuts the setup time in half:
- Wildcard DNS: *.nginx.dnsif.ca is a single A record in DigitalOcean DNS pointing at the nginx box. Adding a new subdomain doesn't require a DNS change at all — it already resolves.
- Wildcard TLS: acme.sh issues *.nginx.dnsif.ca via DNS-01 (I wrote about that in the previous post). It lives on the nginx box as /etc/nginx/ssl/dnsif.ca.cer. The cert renews itself via the acme.sh cron. The nginx deploy hook reloads nginx automatically.
Together, these two mean: adding a new service never touches DNS, never touches a cert. The wildcard covers the subdomain the moment I write the nginx vhost. TLS just works.
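For completeness, the acme.sh side looks roughly like this. It's a sketch of the setup from the previous post rather than a copy of my exact commands; dns_dgon is acme.sh's DigitalOcean DNS plugin and reads a DO_API_KEY token from the environment:

export DO_API_KEY="..."   # DigitalOcean API token with DNS write access
acme.sh --issue --dns dns_dgon -d '*.nginx.dnsif.ca'
acme.sh --install-cert -d '*.nginx.dnsif.ca' \
  --fullchain-file /etc/nginx/ssl/dnsif.ca.cer \
  --key-file /etc/nginx/ssl/dnsif.ca.key \
  --reloadcmd "systemctl reload nginx"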
The actual "add a new service" recipe
End-to-end, here's what launching newthing.nginx.dnsif.ca looks like:
1. SSH to the Docker host. Add the service to docker-compose.yml:

   newthing:
     image: vendor/newthing:1.2
     restart: unless-stopped
     labels: ["com.centurylinklabs.watchtower.enable=true"]
     ports: ["8080:8080"]

2. docker compose up -d newthing.
3. SSH to the nginx box. Copy the vhost template, change three fields (subdomain, upstream host:port, any service-specific bits).
4. nginx -t && systemctl reload nginx.
5. Open https://newthing.nginx.dnsif.ca in the browser.
No DNS change. No cert request. No restarts besides the service itself. About four minutes if I'm not distracted.
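If I want a non-browser check that the whole chain (DNS, TLS, proxy, container) is up, one curl from anywhere on the LAN does it:

curl -sSI https://newthing.nginx.dnsif.ca | head -n 1
# expect something like: HTTP/2 200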

Things I don't do — and why
- I don't use Traefik or Caddy with Docker labels to auto-route. Auto-routing is cool, but every time the Docker host reboots, everything churns through reconfiguration. A static nginx config is debuggable, grep-able, and doesn't randomly forget how to route to a container.
- I don't use Kubernetes. For this workload the scheduling and healing features don't buy anything that "systemd + restart=unless-stopped" doesn't already deliver.
- I don't put Watchtower on stateful containers. Databases, tools holding secret material, anything writing its own configs — all frozen. Updates to those happen deliberately.
- I don't run certs per service. One wildcard covers all of *.nginx.dnsif.ca. The pattern of "cert per service" is where most people start drowning in certbot plugin hell.
Results
- Uptime is whatever Docker's restart: unless-stopped delivers, which — on bare metal with no dramatic hardware — is essentially the host's uptime.
- Patching happens on its own. I get a Slack ping when Watchtower redeploys something, and I scroll through occasionally to see what changed.
- New services take four minutes instead of forty. This is the biggest compounding win — low friction means I actually try ideas I'd otherwise skip.
- Certificate operations have effectively dropped out of my awareness. The wildcard + acme.sh + nginx deploy hook make it genuinely invisible.
This is the least exciting part of the lab, which is exactly why it's the most productive. When docker compose up -d feels like it publishes a fully-HTTPS service at a pretty URL on your own infra, self-hosting becomes a habit instead of a project.