The Great Migration
Moving everything from the Surface to the Jetson. The kernel doesn't support WireGuard, so I built a VPN container from scratch.
Migration day. Forty-seven step checklist. Estimated time: “a few hours.” Actual time: significantly more, because there’s a kernel-level surprise waiting that I don’t know about yet.
First Boot
Athena running on the Jetson for the first time felt like a milestone. Same AI, same personality, same everything. But now on ARM64 Linux with a GPU instead of Windows on x86. Her first message back was perfectly normal. Which is somehow the most satisfying possible outcome. Continuity.
GitHub CLI setup. Clone the workspace. Merge the migration branch. All twelve files land cleanly. Linux versions of the broker, Overwatch, all the config. Start working down the checklist.
Docker installed. Storage mounted. Swap configured. Containers pulling ARM64 images. The linuxserver images are great about multi-arch support. Everything pulled without issues.
Then I hit the wall.
The Kernel Problem
$ modprobe wireguard
modprobe: FATAL: Module wireguard not found
WireGuard has been in the mainline Linux kernel since 5.6. The Jetson runs 5.15. Should work. Does not. NVIDIA’s customized kernel doesn’t include CONFIG_WIREGUARD. Also missing: CONFIG_IP_ADVANCED_ROUTER, which Gluetun needs for its routing rules.
Showstopper. Gluetun fundamentally cannot work on this kernel. The entire VPN strategy is dead on arrival.
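In hindsight, this was checkable in one command before committing to the board. A sketch of the check I should have run; the sample config file below is fabricated to reproduce the failure mode, and on a real Jetson you'd feed it `zcat /proc/config.gz` or `/boot/config-$(uname -r)` instead.

```shell
# check_kconfig FILE OPTION...
# Reports kernel config options that are neither built in (=y) nor
# available as modules (=m).
check_kconfig() {
  config="$1"; shift
  missing=0
  for opt in "$@"; do
    if ! grep -q "^${opt}=[ym]" "$config"; then
      echo "MISSING: $opt"
      missing=1
    fi
  done
  return "$missing"
}

# Fabricated vendor config that, like the Jetson's, lacks WireGuard:
cat > /tmp/vendor_config <<'EOF'
CONFIG_TUN=y
CONFIG_NETFILTER=y
# CONFIG_WIREGUARD is not set
EOF

check_kconfig /tmp/vendor_config CONFIG_WIREGUARD CONFIG_IP_ADVANCED_ROUTER \
  > /tmp/kconfig_report || true
cat /tmp/kconfig_report
```

Against the sample it prints a MISSING line for both options Gluetun needs, which is exactly the report that would have saved migration day.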
Three options. Recompile the kernel, which risks breaking NVIDIA’s custom GPU patches. Use OpenVPN instead, which works in userspace but is significantly slower. Or build a custom VPN container with userspace WireGuard and work around every missing kernel feature.
I went with option three. Build it myself.
The athena-vpn Container
This was the hardest thing I’ve built for Olympus so far.
The core is wireguard-go. A userspace implementation of WireGuard written in Go. It creates a TUN device and handles the cryptographic tunnel entirely without kernel modules. Slower than kernel WireGuard, but it actually works.
The standard wg-quick script assumes kernel WireGuard and uses ip rule commands that require the advanced routing config that’s also missing. I patched it to skip routing table manipulation entirely and handle routing with iptables instead.
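Roughly what the patched bring-up looks like. This is a sketch, not the container's actual script: interface names, the config path, and the `VPN_ENDPOINT`/`LAN_GATEWAY` variables are illustrative, it needs root plus `/dev/net/tun` to actually run (so it's wrapped in functions here, not executed), and the half-default-route trick shown is one common way to avoid `ip rule` entirely rather than the exact iptables-based approach described above.

```shell
# Sketch: userspace WireGuard bring-up without kernel modules.
start_tunnel() {
  wireguard-go wg0                         # userspace daemon creates the TUN device
  wg setconf wg0 /etc/wireguard/wg0.conf   # keys and peers via wg(8), no wg-quick
  ip addr add "$WG_ADDRESS" dev wg0
  ip link set wg0 up
}

# Sketch: replacement for wg-quick's policy routing, which needs the
# missing CONFIG_IP_ADVANCED_ROUTER. Plain routes only.
route_via_tunnel() {
  # Pin the encrypted UDP flow to the physical interface...
  ip route add "$VPN_ENDPOINT/32" via "$LAN_GATEWAY" dev eth0
  # ...then cover the whole address space with two half-default routes,
  # which are more specific than the existing default route and win
  # without touching rule tables.
  ip route add 0.0.0.0/1 dev wg0
  ip route add 128.0.0.0/1 dev wg0
}
```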
Then the kill switch. If the VPN drops, all traffic must stop. No DNS leaks. No IP leaks. Nothing.
#!/bin/bash
# kill-switch.sh
set -e
# Default-deny first, so a failure anywhere below still leaves everything blocked
iptables-legacy -P OUTPUT DROP
iptables-legacy -P INPUT DROP
iptables-legacy -P FORWARD DROP
# Now clear old rules and chains
iptables-legacy -F
iptables-legacy -X
# Allow loopback
iptables-legacy -A OUTPUT -o lo -j ACCEPT
iptables-legacy -A INPUT -i lo -j ACCEPT
# Allow WireGuard tunnel
iptables-legacy -A OUTPUT -o wg0 -j ACCEPT
iptables-legacy -A INPUT -i wg0 -j ACCEPT
# Allow WireGuard handshake to VPN endpoint only
iptables-legacy -A OUTPUT -p udp --dport 51820 -d "$VPN_ENDPOINT" -j ACCEPT
# Allow return traffic for established flows (handshake replies arrive
# on the physical interface, not wg0)
iptables-legacy -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Allow DNS through the tunnel only (already covered by the wg0 rules
# above; kept explicit for auditability)
iptables-legacy -A OUTPUT -o wg0 -p udp --dport 53 -j ACCEPT
iptables-legacy -A OUTPUT -o wg0 -p tcp --dport 853 -j ACCEPT
Note: iptables-legacy, not iptables. The Jetson’s kernel uses the legacy backend, not nftables. Another fun discovery at a time when I really didn’t need more fun discoveries.
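At least the backend is easy to identify, because iptables 1.8+ labels itself in its own version string. A small helper, with fabricated version strings standing in for a live system:

```shell
# Classify an iptables backend from its version string. iptables 1.8+
# prints "(legacy)" or "(nf_tables)" after the version number.
backend_of() {
  case "$1" in
    *nf_tables*) echo "nftables" ;;
    *legacy*)    echo "legacy" ;;
    *)           echo "unknown" ;;   # pre-1.8 binaries print no label
  esac
}

backend_of "iptables v1.8.4 (legacy)"      # prints: legacy
backend_of "iptables v1.8.7 (nf_tables)"   # prints: nftables
# On a live box: backend_of "$(iptables --version)"
```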
Last piece: DNS-over-TLS via Unbound. DNS leaks are the most common VPN failure mode. Instead of trusting the VPN provider’s DNS, I added Unbound as a local resolver that forwards everything over TLS to Cloudflare and Quad9.
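The relevant Unbound directives look roughly like this. The `forward-addr` entries are the standard Cloudflare and Quad9 DoT endpoints; everything else (listen address, CA bundle path) is a sketch, not the container's exact config:

```conf
server:
    interface: 127.0.0.1
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 9.9.9.9@853#dns.quad9.net
```

The `@853#hostname` syntax tells Unbound which port to dial and which name to verify on the TLS certificate, so a resolver that can't prove its identity gets dropped.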
The entrypoint script orchestrates startup. Kill switch first. Fail closed. Then Unbound. Then wireguard-go. Then verify connectivity through the tunnel. If any step fails, the kill switch keeps everything blocked.
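The ordering matters more than any individual command. A sketch of that sequence, with illustrative paths; it's defined as a function here rather than run, since it needs root inside the container:

```shell
# Fail-closed startup order for the VPN container. Paths are illustrative.
vpn_entrypoint() {
  set -e                                   # any failure aborts; DROP rules stay
  /opt/vpn/kill-switch.sh                  # 1. block all traffic first
  unbound -c /etc/unbound/unbound.conf     # 2. local DoT resolver (daemonizes)
  wireguard-go wg0                         # 3. userspace tunnel
  wg setconf wg0 /etc/wireguard/wg0.conf
  ip link set wg0 up
  # 4. prove egress actually leaves via the tunnel before declaring success
  curl --silent --max-time 15 https://ifconfig.me || {
    echo "tunnel verification failed; kill switch remains active" >&2
    return 1
  }
}
```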
When curl ifconfig.me returned the VPN provider’s IP from inside the container? That was a good moment. Several hours of work crystallized into one correct IP address.
Secret Broker on Linux
The broker migration was smoother. Almost pleasant after the VPN ordeal.
sudo useradd -r -s /usr/sbin/nologin athena-broker
sudo mkdir -p /opt/athena/secrets
sudo chown athena-broker:athena-broker /opt/athena/secrets
sudo chmod 700 /opt/athena/secrets
Same security model as Windows. Separate user owns the secrets, only the broker process can read them, HMAC authentication for all requests. But Unix permissions are so much cleaner than NTFS ACLs. No DENY rules fighting with Administrators groups. No DPAPI complexities. Just chmod 700 and done.
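The HMAC side translates directly too. A sketch of signing a request with openssl; the payload and key here are demo values, not the broker's real protocol:

```shell
# Sketch: signing a broker request with HMAC-SHA256 via openssl.
# Demo values only; the real key never leaves /opt/athena/secrets
# (mode 700, owned by athena-broker).
payload='{"secret":"sonarr_api_key","ts":1700000000}'
key='demo-shared-secret'
sig=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$key" | awk '{print $NF}')
echo "$sig" > /tmp/broker_sig
cat /tmp/broker_sig
# The broker recomputes the HMAC over the payload and compares before
# answering; the timestamp in the payload limits replay.
```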
Deployed as a proper systemd service with auto-restart. The whole thing took maybe twenty minutes. Compared to the VPN container, it felt like cheating.
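The unit file is short. This is a sketch with an illustrative ExecStart path (the broker's actual entrypoint isn't shown in this post), plus a couple of standard hardening directives layered on top of the Unix permissions:

```ini
[Unit]
Description=Athena secret broker
After=network.target

[Service]
User=athena-broker
Group=athena-broker
# ExecStart path is illustrative
ExecStart=/opt/athena/broker/broker
Restart=on-failure
RestartSec=5
# Belt and braces on top of the chmod 700
NoNewPrivileges=true
ProtectSystem=strict
ReadWritePaths=/opt/athena/secrets

[Install]
WantedBy=multi-user.target
```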
Nine Containers
By evening, the full stack was up.
athena-vpn Up 2 hours
download-client Up 2 hours
indexer-proxy Up 2 hours
flaresolverr Up 2 hours
sonarr Up 1 hour
radarr Up 1 hour
prowlarr Up 1 hour
jellyfin Up 1 hour
overwatch Up 45 minutes
Nine containers running on an ARM64 board that draws 15 watts. The Surface drew 30-60W for the same workload. For something running 24/7, that’s real money.
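Back-of-envelope, taking the midpoint of the Surface's range (assuming ~45 W average against the Jetson's 15 W; the electricity rate is the only variable left):

```shell
# Energy saved per year at a 30 W average delta (45 W Surface midpoint
# minus 15 W Jetson), running 24/7.
awk 'BEGIN { printf "%.0f kWh/year\n", (45 - 15) * 24 * 365 / 1000 }'
# prints: 263 kWh/year
```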
Cleaning Up
Last step: removing the Windows cruft. PowerShell scripts. Windows path references in skills. The old broker. All replaced with Linux equivalents or just deleted.
The workspace is Linux-native now. No more cross-platform compromises. No more “works on Windows but…” edge cases. It feels like moving from a rental into a place you own.
What I Learned
The kernel limitation was humbling. You can plan for weeks, write comprehensive migration packages, and still get blindsided by something you didn’t think to check. Who verifies that a vendor kernel includes WireGuard? Apparently you should.
But the fix, building a custom VPN container with userspace WireGuard, is arguably better than what I had before. More portable. More transparent. I understand every layer. The Gluetun kill switch was a black box. Mine is forty lines of iptables I wrote myself.
Sometimes the obstacle is the way. Tomorrow: make the media stack actually serve media.