Running GitLab MCP from my couch (via Cloudflare Tunnel)
The goal
My GitLab runs in the home lab at gitlab.dnsif.ca. My laptop talks to it through WireGuard. In Claude Code — the CLI on my laptop — I use a community GitLab MCP server to let Claude list projects, open merge requests, edit files, triage issues, push branches, the works. It's ridiculously useful for "hey, refactor this thing and open an MR" style requests.
But I don't always have my laptop.
What I really wanted: use Claude on mobile or claude.ai from any browser, ask it to do something in GitLab, and have it actually do it. Walking around, on the train, with zero local tooling. Same agent superpowers, different client.
This post is the story of why that's harder than it sounds and how a Cloudflare Tunnel closed the gap.

Why Claude Code was enough — until it wasn't
Claude Code runs as a process on my laptop. When I register a local MCP server — say, http://gitlab-mcp.local:8080/mcp — Claude Code just connects to it over the loopback or lab network. The MCP can live on an internal-only IP, bound to WireGuard, hidden behind the home firewall, and Claude Code reaches it fine. No public endpoint needed.
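Registering a LAN-only MCP with Claude Code looks roughly like this (a sketch using the `claude mcp add` subcommand; the hostname and port are the lab-internal placeholders from above, not real endpoints):

```shell
# Point Claude Code at an HTTP MCP server on the lab network.
# Works because the CLI itself makes the connection, so a
# private, VPN-only address is perfectly fine here.
claude mcp add --transport http gitlab http://gitlab-mcp.local:8080/mcp

# Sanity check: list registered servers and their status.
claude mcp list
```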
So for months, the setup was:
┌──────────────────────────┐
│ laptop (Claude Code CLI) │ ──► http://gitlab-mcp.lab:xxxx ──► gitlab.dnsif.ca
└──────────────────────────┘
Clean. Private. No attack surface. But only useful from the one place my lab VPN is set up.
Why Claude web is different
Claude web (claude.ai) is a browser app running a model session on Anthropic's servers. When it calls a tool, that tool call originates from Anthropic's infrastructure — not from my laptop, not from my browser. So:
If I want Claude web to reach an MCP server, that MCP has to be reachable from the public internet.
My laptop seeing gitlab-mcp.lab doesn't help. Anthropic's servers have never heard of it and never will. The MCP needs a publicly resolvable hostname, valid TLS, and at least token-level auth — otherwise anyone on the internet could drive my GitLab.
(the missing bit)
│
▼
┌──────────────┐ ┌────────────────┐ ┌──────────────────┐
│ claude.ai │ ───▶ │ public HTTPS │ ───▶ │ GitLab MCP │
│ (web/mobile)│ │ hostname │ │ (inside my lab) │
└──────────────┘ └────────────────┘ └──────────────────┘
Why not just open a port?
I could port-forward the MCP through my home router. In theory. In practice: my ISP hands out CG-NAT-ish addresses, I'd have to manage TLS myself, deal with Let's Encrypt DNS-01 challenges, and hope my upstream never changes. Also I'd be exposing an internal service directly — no intermediate hardening, no geo-blocking, no DDoS buffer. For a personal lab that's more operational risk than I want.
Cloudflare Tunnel as the glue
Cloudflare Tunnel (formerly Argo Tunnel) is purpose-built for this shape of problem: a small daemon, cloudflared, runs on a machine in my lab and makes an outbound connection to Cloudflare's edge. Cloudflare then routes traffic for a specified public hostname into that tunnel. No inbound ports open on my router, no public IP required on my end, TLS terminated automatically at Cloudflare's edge.
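For a dashboard-managed tunnel, the lab side reduces to one container holding the token (a sketch; `$TUNNEL_TOKEN` comes from the Zero Trust dashboard and the Docker network name is illustrative):

```shell
# Run cloudflared next to the MCP container. It dials OUT to
# Cloudflare's edge; no inbound port is ever opened at home.
docker run -d --name cloudflared --restart unless-stopped \
  --network lab \
  cloudflare/cloudflared:latest tunnel run --token "$TUNNEL_TOKEN"
```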
The shape becomes:
┌────────────────┐ ┌────────────────┐ ┌──────────────────┐
│ claude.ai │ HTTPS │ Cloudflare │ tunnel │ cloudflared in │
│ agent session │ ──────▶ │ edge │ ──────▶ │ my lab │
└────────────────┘ └────────────────┘ └────────┬─────────┘
│
▼
┌──────────────────┐
│ GitLab MCP │
│ (LAN-only) │
└────────┬─────────┘
▼
gitlab.dnsif.ca
The MCP itself still binds to a local port on a LAN-only IP. cloudflared is the only thing that knows both sides.

The setup, roughly
- Run cloudflared in the lab — I use the Docker image, one container on the same Docker host as the MCP. It takes a tunnel token from the Cloudflare Zero Trust dashboard.
- Create the tunnel in the Zero Trust dashboard, give it a name, note the ID.
- Map a public hostname to an internal service: gitlab-mcp.awkto.dev → http://gitlab-mcp.docker:8080. Cloudflare provisions TLS automatically.
- Add auth — I put a Cloudflare Access policy in front of the hostname: only my identity (email login) can reach it. The MCP itself also expects a bearer token that Claude includes with every tool call.
- Register the MCP with Claude — in Claude's settings UI, add a new MCP server pointing at the public URL, paste the bearer token. Claude.ai pulls the tool list, and from that moment the tools show up in every conversation.
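The hostname-to-service mapping from the steps above can also be expressed as a locally-managed cloudflared config file, if you prefer files over dashboards (a sketch; the tunnel ID and credentials path are placeholders):

```yaml
# /etc/cloudflared/config.yml — locally-managed tunnel variant.
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json
ingress:
  # Public hostname -> internal MCP container
  - hostname: gitlab-mcp.awkto.dev
    service: http://gitlab-mcp.docker:8080
  # cloudflared requires a catch-all rule as the last entry
  - service: http_status:404
```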

What about authentication?
Two layers, defense in depth:
- Cloudflare Access in front of the hostname — a session cookie / service-token gate enforced at the edge. Claude sends its API call with a service token; anyone stumbling on the hostname without one gets a login page.
- Bearer token inside the MCP — the MCP server itself still checks an Authorization: Bearer … header before it honours any tool call. So even if the edge were bypassed, you'd still need the token.
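The application-layer check is tiny. A minimal sketch of what the MCP-side bearer check amounts to (the token value and header handling here are illustrative, not the actual server code — in practice the secret is injected from a store, not hardcoded):

```python
import hmac

# Illustrative only — in the real setup this is injected at container start.
EXPECTED_TOKEN = "s3cret-example-token"

def is_authorized(headers: dict) -> bool:
    """Return True only when the Authorization header carries the expected bearer token."""
    auth = headers.get("Authorization", "")
    scheme, _, token = auth.partition(" ")
    if scheme != "Bearer":
        return False
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(token, EXPECTED_TOKEN)
```

Every tool call runs through this gate before the server touches GitLab, so a bypassed edge still yields nothing without the token.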
The GitLab token itself is a narrowly-scoped personal access token (read/write on specific projects only, no admin), stored in OpenBao and injected into the MCP container at start. If the tunnel gets compromised, the blast radius is "the projects I use Claude on", not the whole GitLab install.
Claude Code still wins for LAN stuff
This public path is only for Claude web and mobile. When I'm at my laptop, Claude Code keeps talking to the MCP over my LAN directly — faster, lower-latency, no Cloudflare hop, no tunnel. Same MCP server, two clients, two paths.
┌──────────────────┐
│ Claude Code CLI │ ── LAN ──▶ http://gitlab-mcp.lab:8080 (fast, private)
└──────────────────┘
┌──────────────────┐ CF tunnel
│ claude.ai web │ ── HTTPS ──▶ gitlab-mcp.awkto.dev ──────▶ same MCP
└──────────────────┘
The MCP exposes the same tools regardless of who called. Claude just doesn't care.
Results
- I can now open my phone, load Claude, and ask it to "create a merge request on project X that bumps the go-toolchain version and drops the ci cache step" — and it does. Same as it's been doing from Claude Code for months, but now from a bus.
- No ports opened on my home router. cloudflared handles egress only.
- TLS I didn't configure. Cloudflare does it; I don't own a single cert.
- Access-gated at the edge. Random probing scripts hit a login page before they hit the MCP.
- Narrow GitLab token keeps the blast radius small if anything leaks.
The broader lesson, which keeps coming up across the lab: self-hosted is great; "self-hosted with a carefully scoped public edge" is the actual superpower. Keep the service LAN-only, keep the data LAN-only, and let a tunnel expose exactly the 90 bytes of HTTP surface you need, authenticated both at the edge and at the application.
Next up: I want to publish a second MCP — probably the Atlas inventory MCP — through the same tunnel, so Claude web can answer questions about the lab itself. Same pattern, different backend. That's a good kind of boring.