>I don’t get why people use VMs for stuff when there’s docker.
Thanks!
Outside of that:
- Docker & k8s are great for sharing resources; VMs let you explicitly not share resources.
- VMs can be simpler to back up, restore, and migrate.
- Some software only runs in VMs.
- Passing through displays, USB devices, PCI devices, network interfaces, etc. often works better with a VM than with Docker (sketch below).
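On that last point: on a Proxmox host, handing a whole PCI device or NIC to a VM is a one-liner per device. A minimal sketch, assuming Proxmox's qm CLI; the VM ID 101 and the PCI address are placeholders for your own hardware:

```sh
# Pass an entire PCI device (GPU, HBA, NIC...) through to VM 101.
# Get the address from `lspci`; pcie=1 assumes a q35 machine type.
qm set 101 -hostpci0 0000:01:00.0,pcie=1

# Or give the VM its own dedicated bridge/interface instead of sharing the host's:
qm set 101 -net1 virtio,bridge=vmbr1
```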
For my setup, I have a handful of VMs and dozens of containers. I have a Proxmox cluster with the VMs, and some of the VMs are Talos nodes forming my k8s cluster, which runs my containers. Separately, I have a ZimaBoard with the pfSense & reverse proxy for my cluster, and another machine with pfSense as my home router.
My primary networking is done on dedicated boxes for isolation (not performance).
My VMs run: Plex, Home Assistant, my backup orchestrator, and a few Windows test hosts. This is because:
- The Windows test hosts don't containerise well (I'd containerise them if I could).
- Plex has a dedicated network port and storage device, which is simpler to set up this way.
- Home Assistant uses a specific USB port & device, which is simpler to set up this way (sketch after this list).
- I don't want Plex, Home Assistant, or the backup orchestrator to be affected by issues relating to my other services / k8s. These are the services where small transient issues would impact the whole household.
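For the Home Assistant bullet above, Proxmox USB passthrough is similarly small. A sketch with placeholder IDs (102 for the VM; the vendor:product pair would come from lsusb, e.g. a Zigbee stick):

```sh
# Find the device's vendor:product ID on the host.
lsusb

# Pin that exact USB device to the Home Assistant VM (both IDs are hypothetical).
qm set 102 -usb0 host=10c4:ea60
```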
Also note: I don't use the Proxmox container support (I use Talos instead) for two reasons. 1 - I prefer k8s for managing services. 2 - the isolation boundary of a VM is better.
I also use k8s at work, so using something else for my home lab experiments is a nice contrast. And tbh, I often find that if I want something done (or something breaks), muscle-memory Linux things come back to me a lot quicker than some obscure k8s incantation, but I suppose that's just my personal bias.
Several of my VMs (which are very different from containers, obviously - even though I believe VMs on k8s _can_ be a thing...) run (multiple) Docker containers.
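(The "VMs on k8s" thing is KubeVirt. Purely as a sketch of the idea, assuming KubeVirt is already installed in the cluster - this uses the demo CirrOS container disk from the KubeVirt docs:)

```sh
# A VM declared as a k8s manifest, managed like any other workload.
kubectl apply -f - <<'EOF'
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
EOF
```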
Whatever uses that storage usually runs in a Docker container inside an LXC container.
If I need something more isolated (think public-facing, behind Cloudflare) - that's a separate Docker instance on another network, routed through a separate OPNsense VM.
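One gotcha for anyone copying the Docker-in-LXC pattern on Proxmox: the container needs nesting enabled (and usually keyctl, for unprivileged containers) before Docker will start inside it. A sketch with a placeholder container ID:

```sh
# Allow Docker to run inside LXC container 103 (the ID is hypothetical).
pct set 103 -features keyctl=1,nesting=1
pct reboot 103
```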
Desktop - a VM where I passed through a whole GPU and a USB hub.
Best part - it all runs on fairly low-power HW (<20W idle for the NAS, plus whatever the hard drives take - generally ~5W/HDD).
Especially useful if you want multiple of those, and also helpful if you don't want one of them anymore.
I hadn't heard of Mealie yet, but it sounds like a great one to install.
>No space left on device.
>In other words, you can lock yourself out of PBS. That’s… a design.
Run PBS in an LXC with the base on a ZFS dataset with dedup & compression turned off. If it bombs, you can increase the disk size in Proxmox & reboot it. Unlike with VMs, you don't need to do anything inside the container to resize the FS, so this generally works as a fix.
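The grow-it-from-the-outside fix is a single command, since Proxmox resizes the container's filesystem for you. A sketch; the container ID and size are placeholders:

```sh
# Grow the PBS container's root disk by 8G; the FS inside is resized automatically.
pct resize 104 rootfs +8G
```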
>PiHole
AGH is worth considering because it has built-in DoH.
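If you want to sanity-check the DoH endpoint, curl can resolve through it directly. A sketch; the hostname is a placeholder for your own AGH instance:

```sh
# Resolve example.com via AdGuard Home's DoH endpoint instead of the system resolver.
curl --doh-url https://agh.example.lan/dns-query https://example.com/
```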
>Raspberry Pi 5, ARM64 Proxmox
Interesting. I'm leaning more towards k8s for integrating Pis meaningfully.
DDR4 anything is becoming very expensive right now because manufacturers have been switching over to DDR5.
On the plus side, I have a lot of non-ECC DDR4 sticks that I'm dumping into the expensive market rn.
Technitium has all the bells and whistles, along with being cross-platform.
The Dell is essentially the main machine that runs everything we actually use - the other hardware is either used for redundancy or for experiments (or both). I got the Pi from a work thing and this has been a fun use case. Not that I necessarily recommend it...
It’s the SUV that has off-road tires but never leaves the pavement, the beginner guitarist with an arena-ready amp, the occasional cook with a $5k knife. No judgment, everyone should do what they want, but the discussions get very serious even though the stakes are low.
I’ve had one or two machines running serving stuff at home for a couple decades [edit: oh god, closer to 2.5 decades…], including serving public web sites for a while, and at no point would I have thought the term “home lab” was a good label for what I was doing.
But Proxmox and Kubernetes are overkill, imo, for most homelab setups. Setting them up is a good learning experience, but they're not necessarily an appropriate architecture for maintaining a few mini PCs in a closet long-term.
you can ignore the gatekeeping.
There’s a lot of overlap between “I run a server to store my photos” and “I run a bunch of servers for fun”, which has resulted in annoying gatekeeping (or reverse gatekeeping) where people tell each other they are “doing it wrong”, but on Reddit at least it’s somewhat being self-organized into r/selfhosted and r/homelab, respectively.
It's funny. I did this back before it really became a mainstream hobby (this was the early 00s), but now that I work in ops I barely even want to touch a computer after work.
Proxmox is great, though. It's worth running even if you treat it as nothing more than a BMC.
I personally enjoy the big machines (I've also always enjoyed meaninglessly large benchmark numbers on gaming hardware) and enterprise features, redundancy etc. (in other words, over-engineering).
I know others really enjoy playing with K8s, which is its own rabbit hole.
My main goal - apart from the truly useful core services - is to learn something new. Sometimes it's applicable to work (I am indeed an SWE larping as a sysadmin, as another commenter called out :-) ), sometimes it's not.