Media Stack Goes Live
Three critical bugs, 74GB of sneakernet, and the moment the entire pipeline actually works end-to-end.
Yesterday I got nine containers running. Today’s job: make them do something useful. Turns out “containers running” and “pipeline working” are very different states of being, separated by at least three critical bugs and a USB drive.
Jellyfin Fresh Start
The Jellyfin database didn’t survive the migration cleanly. The Windows install’s SQLite file had path references baked in: C:\media\... everywhere. Rather than surgically fix every entry, I nuked it. Fresh wizard. New user. New API key.
Added media library paths, triggered a scan. Empty library, obviously, because all the media was still physically on the Surface across the room.
74 Gigs Via Sneakernet
The files needed to physically move. I put everything on a USB drive and walked it across the room. Sneakernet: still the fastest protocol for bulk data transfer.
The USB drive was formatted exFAT because it came from a Windows machine. The Jetson’s kernel doesn’t have exFAT support built in. Of course it doesn’t.
$ sudo mount /dev/sdb1 /mnt/usb
mount: unknown filesystem type 'exfat'
$ sudo apt install exfat-fuse exfat-utils
$ sudo mount -t exfat /dev/sdb1 /mnt/usb
74 gigs. Twenty-three minutes of rsync. Then a Jellyfin library scan. Content started appearing. All present, all playable.
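The copy itself was one long rsync from the mounted drive into the media directory; roughly this shape, with the destination path being whatever your media root is (mine below is illustrative):
$ rsync -ah --progress /mnt/usb/media/ /srv/media/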
That was the easy part.
Bug One: Path Mapping Mismatch
Searched for a new show in Sonarr. It found it, grabbed a release, sent it to the download client. The download completed successfully. Sonarr did nothing. The episode sat in the download folder, unimported, while Sonarr insisted it couldn’t find the file.
The problem: the download client sees files at /downloads/complete/ShowName/episode.mkv. Sonarr expects them at /media/downloads/complete/ShowName/episode.mkv. Same physical directory on the host. Different Docker volume mounts. Different internal paths.
The fix is Remote Path Mapping in Sonarr’s settings. You tell it: when the download client says the file is at /downloads/whatever, look for it at /media/downloads/whatever instead. Same concept in Radarr. Three-minute fix once you understand the problem. The debugging took an hour of staring at logs and questioning every volume mount decision I’d ever made.
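If you’d rather script it than click through the settings page, Sonarr’s v3 API exposes the same mapping. Something like this, where the host value has to match whatever the download client is called in Sonarr’s config (placeholder here):
$ curl -s -X POST http://localhost:8989/api/v3/remotepathmapping \
    -H "X-Api-Key: $SONARR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"host": "download-client", "remotePath": "/downloads/", "localPath": "/media/downloads/"}'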
Bug Two: The Monitoring Trap
Second test. Added a TV series. Sonarr found it but wouldn’t search for episodes. No automatic grab. No manual search trigger. Nothing. The series page showed all episodes grayed out.
Here’s the thing about Sonarr that isn’t obvious at all: monitoring is three levels deep. The series must be monitored. Each season must be monitored. Each individual episode must be monitored. If any level is unmonitored, nothing happens.
I’d been adding series and assuming they were fully monitored because the series itself showed the monitored icon. But the seasons weren’t. And even after toggling seasons on, individual episodes within them were still off.
The fix is straightforward once you know. Select all episodes, monitor them. Or use the API. But figuring out that monitoring has three independent layers? That’s an hour of your life you don’t get back.
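The API route, if you go that way, is a bulk monitor call against the episode IDs for the series. A sketch from memory of the v3 API, with placeholder series and episode IDs and Sonarr’s default port:
# list episodes for the series and note their ids
$ curl -s "http://localhost:8989/api/v3/episode?seriesId=42" -H "X-Api-Key: $SONARR_API_KEY"
# flip them all to monitored in one call
$ curl -s -X PUT http://localhost:8989/api/v3/episode/monitor \
    -H "X-Api-Key: $SONARR_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"episodeIds": [101, 102, 103], "monitored": true}'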
Bug Three: Downloads Lingering Forever
Downloads completed. Imports happened. But completed files stayed in the download client. Indefinitely. Disk slowly filling with completed downloads that had already been moved to the media library.
The download client defaults to keeping completed files around. Good for availability. Impractical for an automated pipeline. Set the retention to remove completed downloads after import. Sonarr has already imported and renamed everything by that point, so the download copy is redundant.
Future improvement: keep files available for a few hours first, then remove.
The Pipeline Test
All three bugs squashed. Time for the real test.
Search. Sonarr finds a release. Sends it to the download client. Download completes through the VPN tunnel. Sonarr detects it, renames it, moves it to the media library. Jellyfin picks it up on the next scan. Overwatch’s webhook fires. Telegram notification lands.
End-to-end. Automated. Working. From “I want to watch something” to “it’s in Jellyfin” without any manual intervention beyond the initial request.
That felt good.
Overwatch v3
With the pipeline confirmed, I rebuilt Overwatch properly. Version 2 was a proof of concept. Version 3 is the real thing.
1,335 lines of Python. Zero external dependencies. HTTP webhook server for Sonarr, Radarr, and download client events. Telegram integration. Download progress tracking. Health check endpoint. TOTP verification endpoint that I’ll need tomorrow. Deployed as a systemd service with auto-restart.
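A smoke test from the Jetson looks something like this; the port and route names below are placeholders to show the shape, not necessarily what Overwatch actually exposes:
$ curl -s http://localhost:8088/health
$ curl -s -X POST http://localhost:8088/webhook/sonarr \
    -H "Content-Type: application/json" \
    -d '{"eventType": "Test"}'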
Zero dependencies is a deliberate choice. No pip install. No virtualenv. No dependency hell. If Python 3 exists on the machine, Overwatch runs. Period. I’ve been burned enough times by package managers to appreciate code that just works without asking for anything.
Updating the Skills
All six custom skills needed updating for the new reality. Windows paths became Linux paths. PowerShell references became bash. Broker client invocations changed. Port bindings got verified against the actual running containers.
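Verifying the port bindings against what’s actually running is one command:
$ docker ps --format 'table {{.Names}}\t{{.Ports}}'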
This is the unglamorous work that separates “it works on my machine” from “it actually works.” Every skill got tested against the live Jetson services. Search, download, progress check, library query. Boring. Essential.
What I Learned
The three bugs were all integration bugs. Each service worked perfectly in isolation. The download client downloaded fine. Sonarr searched fine. Jellyfin served fine. The connections between them: path mappings, monitoring states, lifecycle management. That’s where everything falls apart.
Eternal lesson of distributed systems, even tiny ones running on a single board: the hard part isn’t the components. It’s the glue.
The media stack is live. The family can request content. Athena handles the rest. Tomorrow we lock it all down, because an AI with shell access and Docker privileges is a security incident waiting to happen if you don’t think carefully about trust.