I got tired of paying for streaming apps, yet still having to fight them. YouTube downloads on my phone were unreliable, Netflix would randomly lock me out while traveling, and most smart-TV apps are even worse.
I built my own pipeline: local [[intake|subscription list]], download, organize, re-encode, and watch. It works on my phone and on my TV.
I still pay for YouTube Premium, Amazon Prime, and Netflix; I consider this pipeline a more useful version of a DVR. At the end of the day the quality is actually worse (since I’m optimizing for storage, especially for downloading to my phone), but the interface is exactly what I want.
# Media downloader + player setup (Fedora KDE, rootless containers)
This runs on Fedora KDE (immutable) with rootless Podman containers managed by systemd user services (with linger). I wanted something that behaves like a small service: predictable startup, explicit networks, explicit paths, and as few host changes as possible.
In day-to-day use it:
- Finds media automatically, downloads it, and files it away correctly
- Handles both torrents and YouTube subscriptions
- Reboots cleanly, since it runs on my main PC, which is shut off when I’m not home (ideally/sometimes… I’m still missing something obvious about the Podman startup sequence)
## What’s running
Host choices:
- OS: Fedora KDE Immutable
- Containers: rootless Podman
- Service manager: systemd user units + linger
Services:
- Jellyfin (reads libraries only)
- Prowlarr (indexer management)
- Sonarr (TV)
- Radarr (movies)
- qBittorrent (torrents)
- FlareSolverr (Cloudflare challenges)
- Tdarr + Tdarr Node (post-import re-encoding)
- ytdl-sub + yt-dlp (YouTube)
## Basic Flow
- Prowlarr feeds Sonarr/Radarr
- qBittorrent and ytdl-sub handle downloads
- Sonarr/Radarr import and organize
- Tdarr re-encodes
- Jellyfin serves the library read-only to clients
## Networking
I ended up with two networking setups.
Internal services:
- Everything that needs to talk to everything else sits on a shared Podman bridge network: `media-net`
- Containers reach each other by container name, not `localhost:1234`
- This covers Sonarr, Radarr, Prowlarr, FlareSolverr, Jellyfin, and Tdarr
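Setting that up is only a couple of commands. A sketch (the config subdirectory is my guess at how the stated config root is laid out, and 8989 is Sonarr’s default port):

```shell
# One-time: create the shared bridge network
podman network create media-net

# Join a service to it; other containers on media-net then reach it
# by container name, e.g. http://sonarr:8989, instead of a localhost port.
podman run -d --name sonarr --network media-net \
  -v ~/.containers/media/sonarr:/config:Z \
  lscr.io/linuxserver/sonarr:latest
```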
qBittorrent:
- Runs on host networking
- I bound it to a VPN interface which has a kill switch
## Storage layout and the permissions mess
I keep all persistent state in two places:
- Container config: `~/.containers/media/...`
- Media and downloads: `~/containers-media/...`
Things I screwed up at first:
- LinuxServer images run as a fixed internal user (`abc`)
- Bind mounts need ownership and permissions that match what the container expects
- SELinux relabeling with `:Z` can break when the same mount is shared across multiple containers
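A hedged sketch of how these pieces fit together, using the LinuxServer `PUID`/`PGID` convention (flags, IDs, and paths here are illustrative, not my exact invocation):

```shell
# LinuxServer images drop privileges to the internal 'abc' user, remapped
# via PUID/PGID — point these at the IDs that own the bind mounts.
podman run -d --name radarr --network media-net \
  -e PUID=1000 -e PGID=1000 \
  -v ~/.containers/media/radarr:/config:Z \
  -v ~/containers-media:/media \
  lscr.io/linuxserver/radarr:latest
# Note: no :Z on the shared media mount — relabeling a mount that is
# shared across containers is the SELinux gotcha mentioned above.
```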
## The Sonarr/Radarr import problem and the real fix
The most annoying problem was that media would get lost between subscription and Jellyfin. The issue came down to paths: qBittorrent (host network) and Sonarr/Radarr (bridge network) were looking at different filesystem paths for the same downloads.
Fix:
- Use Remote Path Mapping in Sonarr and Radarr:
  - Remote: `/downloads`
  - Local: `/media/downloads`
- Add explicit host gateway resolution so the bridge-network containers can reach the host-based downloader
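The host-gateway piece can be done at container creation. A sketch (newer Podman releases understand Docker’s `host-gateway` special value, and some inject `host.containers.internal` automatically; 8080 is qBittorrent’s default WebUI port):

```shell
# Give the bridge-network container an explicit alias for the host,
# so it can reach the host-network qBittorrent.
podman run -d --name sonarr --network media-net \
  --add-host=host.containers.internal:host-gateway \
  lscr.io/linuxserver/sonarr:latest

# In Sonarr's download-client settings, qBittorrent is then reachable
# at host.containers.internal, port 8080.
```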
## Indexers and FlareSolverr
- Indexers live only in Prowlarr
- Sonarr/Radarr are added as apps inside Prowlarr
- Prowlarr pushes indexers downstream
FlareSolverr:
- Only kicks in when assigned to a given indexer
This avoids copy/pasting indexer settings into three different UIs.
## YouTube subscriptions (ytdl-sub)
- Channels are defined in a YAML file
- Downloads land in the same `/downloads` area as torrents
- Output formatting matches what Jellyfin expects
- A “recent only” policy keeps storage from creeping upward forever
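The subscription file is short YAML. A hedged sketch of the shape (channel names and URLs are placeholders, and the prebuilt preset name is an assumption — ytdl-sub’s preset names vary by version, but it does ship “only recent”-style presets that match the retention policy above):

```yaml
# Illustrative subscriptions.yaml — names, paths, and preset are examples.
__preset__:
  overrides:
    tv_show_directory: "/downloads/youtube"

Some Channel:
  preset: "TV Show Only Recent"
  overrides:
    url: "https://www.youtube.com/@somechannel"
```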
When automation breaks, I can still trigger the downloader manually inside the container.
This needs to be updated if YouTube changes their interface, and is the least stable part of my system.
I also use this for music off YouTube, although I still mostly just use YouTube Music. The Premium version isn’t enshittified yet.
## Tdarr post-processing
Tdarr runs after import to compress the files. I get roughly 90% size reduction on most BitTorrent downloads, and the result is still roughly HD.
- It re-encodes in place to reduce storage use
- A separate Tdarr Node does the heavy work
### What it’s trying to do
This flow targets oversized, high-bitrate files and recompresses them to a controlled HEVC profile. The goal is disk savings with predictable output, and no pointless churn.
### How it behaves
- Skip anything already processed
- Skip anything that won’t benefit
- Encode with one consistent profile
- If it errors, try a fallback path
- Record the outcome so it won’t loop
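The skip logic is simple enough to show in miniature. A hedged shell sketch, not Tdarr’s actual plugin code (function name, return strings, and the skiplist file are illustrative; the ~200 MB and ~1 Mbps thresholds are the ones from the details below):

```shell
# Sketch of the skip decision — illustrative, not the real Tdarr flow.
SKIPLIST="${SKIPLIST:-skiplist.txt}"

should_encode() {
  # $1 = file path, $2 = size in MB, $3 = video bitrate in kbps
  if grep -qxF "$1" "$SKIPLIST" 2>/dev/null; then
    echo "skip:already-done"; return
  fi
  if [ "$2" -lt 200 ]; then
    echo "$1" >> "$SKIPLIST"     # small files never need another look
    echo "skip:too-small"; return
  fi
  if [ "$3" -lt 1000 ]; then
    echo "skip:low-bitrate"; return
  fi
  echo "encode"                  # hand off to the FFmpeg flow
}
```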
Details:
- Input file → skiplist check: if it’s already been handled, it stops there.
- Early exits:
  - File size under ~200 MB: skip and add to skiplist
  - Video bitrate under ~1 Mbps: skip
  This is about not wasting CPU/GPU time on files that are already “small enough” in practice.
- Primary encode path (FFmpeg):
  - Video: `libx265`, medium preset
  - Target bitrate ~550 kbps with a capped peak
  - Audio: Opus @ 128 kbps
  - Subtitles copied
  - Metadata preserved
  After a successful encode, it replaces the original file atomically.
- Fallback path, if the primary path errors:
  - Reset the flow error
  - Run a second FFmpeg pass with the same conservative HEVC settings
  - Rename output to `.mkv` before replacement
  The point is to avoid half-written files and stuck queues.
- Post-processing: successful files go on the skiplist. That gives me idempotence: a file doesn’t get “optimized” five times.
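As a plain FFmpeg command, the primary encode path corresponds to roughly the following. This is my reconstruction, since Tdarr assembles its own command line; the `-maxrate`/`-bufsize` cap values in particular are assumptions:

```shell
# Approximate equivalent of the primary encode profile described above.
ffmpeg -i input.mkv \
  -map 0 -map_metadata 0 \
  -c:v libx265 -preset medium \
  -b:v 550k -maxrate 1100k -bufsize 2200k \
  -c:a libopus -b:a 128k \
  -c:s copy \
  output.mkv
```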
## Boot and reliability
- Each container has a generated systemd user service
- Linger is enabled so they start at boot
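The host-side setup is stock Podman. A sketch (service name is an example; `podman generate systemd` is deprecated in favor of Quadlet files but still works):

```shell
# Let user services start at boot without an active login session
loginctl enable-linger "$USER"

# Generate a user unit from an existing container and enable it
podman generate systemd --new --files --name jellyfin
mv container-jellyfin.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-jellyfin.service
```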
Early failures were mostly self-inflicted:
- Stale mount paths
- Small systemd unit syntax mistakes
Once cleaned up, everything comes up reliably. (Except qBittorrent… sometimes the VPN fails to connect and it just gets stuck downloading metadata. Sometimes other pods too…)