Self-Hosting Everything: Why I Run k3s at Home

There's a moment every developer hits where they look at their monthly cloud bill and think: "I'm paying HOW much for this?"
That was me, a couple years ago. And instead of accepting it like a normal person, I went full overcorrection mode and now run a two-node Kubernetes cluster in my office. One machine is a beefy mini-PC I've had for years. The other is a decommissioned Dell R710 I bought off eBay that sounds like a jet engine and I refuse to apologize for it.
No regrets. Mostly.
Why Self-Host Anything?
Three things: cost, control, and the fact that I genuinely find this fun.
Cost is obvious. When you're running a handful of apps, managed hosting adds up fast. A database here, a deploy pipeline there, object storage somewhere else. Before you know it you're paying $200/month for things sitting idle 80% of the time. I'd rather buy server hardware once and call it done.
Control is the part people underestimate. When you self-host, you know exactly where your data lives. You know what version of Postgres you're running. You know why your app is slow (it's always the query you didn't index). There's no opaque platform layer between you and your code. When something breaks, you understand why, which means you can actually fix it.
The fun part is purely subjective. If debugging a Kubernetes networking issue at 11 PM on a Saturday sounds like a nightmare, self-hosting is probably not for you. If it sounds like a puzzle worth solving, welcome to the club.
The Journey from Docker Compose to k3s
I started where everyone starts: Docker Compose on a single VPS. It works great until it doesn't. You outgrow one machine, something crashes and takes everything with it, or you start wanting proper deploys without maintaining a 200-line bash script.
I poked at a few managed Kubernetes offerings before landing on k3s. They were fine. But I kept paying for compute I wasn't fully using, and "managed" meant I couldn't see what was actually happening under the hood.
k3s is a stripped-down Kubernetes distribution from Rancher. It's lightweight enough to run on a Raspberry Pi, which means it runs effortlessly on real server hardware. The install is almost offensively simple:
curl -sfL https://get.k3s.io | sh -
That's it. Working Kubernetes cluster. Adding a second node is one more command with an agent token. The whole thing took me maybe 20 minutes to get running, which felt wrong after everything I'd heard about how hard Kubernetes was.
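That second command, roughly (my-server is a stand-in for your control-plane host; the token gets written to /var/lib/rancher/k3s/server/node-token on the first node):

curl -sfL https://get.k3s.io | K3S_URL=https://my-server:6443 K3S_TOKEN=<node-token> sh -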
What I'm Actually Running
Two nodes. My control-plane machine (named gmork, because of course it is) handles the cluster brain work and runs a bunch of services. My eBay R710 (carrion) is the beefy worker, with enough RAM that I've stopped worrying about resource limits almost entirely.
Rough inventory of what's running:
- Media stack: The full arr ecosystem (Sonarr, Radarr, and friends). Plex handles playback.
- Home automation: Home Assistant, some custom integrations Rēsse will never know I spent 6 hours wiring up
- Supabase: Fully self-hosted. Postgres, Kong API gateway, storage, auth, the works
- Nextcloud: Because I'm not paying for iCloud when I have a NAS three feet away
- A bunch of web apps: Things I've shipped, things in progress, things in that permanent state of "I'll finish this weekend"
- ArgoCD: The thing that ties all of this together
External traffic goes through a Cloudflare Tunnel so I don't expose my home IP or punch holes in my firewall. Internal services talk over the cluster network. It all just works, which still surprises me sometimes.
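For the curious, the tunnel side is a short config file: an ordered list of ingress rules mapping public hostnames to in-cluster services, ending in a mandatory catch-all. A sketch, with app.example.com and my-app standing in for real names:

tunnel: <your-tunnel-id>
credentials-file: /etc/cloudflared/credentials.json
ingress:
  - hostname: app.example.com
    service: http://my-app.my-app.svc.cluster.local:80   # <service>.<namespace> in-cluster DNS
  - service: http_status:404                             # required catch-all for unmatched hosts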
GitOps with ArgoCD Is the Part That Changed Everything
Before ArgoCD, "deploying" meant SSH-ing into a machine and running something. Or maintaining a collection of scripts. Or just winging it and hoping nothing broke in a weird way.
ArgoCD watches a Git repo. When you push a change to your Kubernetes manifests, it sees the diff and syncs the cluster to match. That's basically the whole thing. Your Git repo becomes the source of truth for everything running in the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/you/infrastructure
    targetRevision: HEAD
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
My deploy workflow now: push code to GitHub, GitHub Actions builds and pushes a Docker image, update the image tag in the infrastructure repo, ArgoCD picks it up and rolls it out. No SSH. No scripts. No "wait which environment am I pointed at."
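The build half of that pipeline is short enough to sketch. A minimal GitHub Actions workflow, assuming ghcr.io as the registry and you/my-app as a placeholder image name:

name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write          # lets GITHUB_TOKEN push to ghcr.io
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/you/my-app:${{ github.sha }}   # tag by commit so ArgoCD diffs cleanly

The "update the image tag" step is the one piece with options: a sed in the workflow, a kustomize edit, or a tool like Argo CD Image Updater if you'd rather not commit from CI.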
The first time I pushed a commit and watched ArgoCD automatically redeploy everything correctly, I felt like a wizard. A very tired wizard who had spent two weekends setting this up, but still.
The Part Where I Admit It's Not All Sunshine
I'm not going to pretend this is zero friction.
The learning curve is real. Kubernetes has opinions about everything, and learning those opinions takes time. You will lose an afternoon to a networking issue that turns out to be one wrong annotation on a service. You will misconfigure resource requests and wonder why your pod is getting evicted. These are solvable problems, but they require patience.
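The eviction one, at least, has a mostly mechanical fix. Requests are what the scheduler reserves for you; memory limits are hard ceilings. A sketch with illustrative numbers, nothing more:

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          memory: "256Mi"   # scheduler reserves this; under node pressure, pods running over their requests get evicted first
          cpu: "100m"
        limits:
          memory: "512Mi"   # exceed this and the container is OOM-killed, not politely evicted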
Storage is the hardest part of running stateful workloads. Out of the box, Kubernetes doesn't give you great answers here. I've ended up with a combination of local-path storage for things that need to stay on a specific node and NFS mounts from a NAS for shared stuff. It works, but it required actual thought and more than one do-over.
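Concretely, the node-pinned half is just claims against local-path, the provisioner k3s ships with (app-data is a made-up name):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: local-path   # k3s's bundled local-path-provisioner
  accessModes:
    - ReadWriteOnce              # one node at a time, which is the point
  resources:
    requests:
      storage: 10Gi

The trade-off: local-path volumes are directories on whichever node the pod first schedules to, so the workload is pinned there afterward.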
The other thing nobody talks about: hardware fails. When you're running this at home, you are the SRE on call. I caught a failing drive early because I have monitoring set up. Without that, it would have been a very bad day. Redundancy matters. Backups matter more. Don't skip either of those just because it feels like overkill for a home lab.
Should You Do This?
Depends entirely on what you're optimizing for.
If you want the fastest path to getting something deployed: use a PaaS. Vercel, Fly.io, Railway. They're good products. There's no shame in using them.
If you want to deeply understand your infrastructure, if you're running enough workloads that managed costs are actually biting you, or if you just want a lab that teaches you real production ops skills: k3s at home is a genuinely good answer.
I've learned more about networking, storage, and distributed systems from running this cluster than from years of using managed services. When something breaks, I have to actually understand why. That's frustrating in the moment and useful in every job conversation afterward.
And honestly? There's something satisfying about running your whole digital life on hardware you own, sitting in your office, that you can touch. Maybe that's just me. But I doubt it.
Where to Start
Start with a single-node k3s install on any Linux machine you have available. An old PC, a used mini-PC off Amazon, whatever. Get comfortable with kubectl. Deploy a few things. Break them and fix them. Then decide if you want to go deeper.
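A reasonable first lap once the node is up, assuming nothing beyond a working kubectl:

kubectl create deployment hello --image=nginx        # one pod, managed by a deployment
kubectl expose deployment hello --port=80            # give it a stable in-cluster address
kubectl get pods -w                                  # watch it come up (Ctrl-C to exit)
kubectl delete deployment/hello service/hello        # tear it down and go again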
The k3s docs are actually solid. The ArgoCD getting started guide is worth working through. From there you're just adding pieces until you have something useful.
It'll take longer than you expect. You'll definitely break something at a bad time. And you'll end up with a setup you actually understand end-to-end, which is worth more than any managed service ever gave me.