I'm aware bare metal exists and it's not always practical to just provision more servers, yet I think for most workloads you're not getting the benefit of Kubernetes if you have, say, 3 servers and lose 1/3 of your capacity just to do software updates.
Even with a small 3-node cluster of Raspberry Pis, you can run anything you can run in plain Docker, and have it survive outages, reboots, etc.
At home, I have a few Raspberry Pis, an Orange Pi RV (RISC-V node), and my main nodes are large high-core, high-RAM VMs running on Proxmox.
Each one has different capabilities. Some have lots of fast storage attached for Longhorn, some have 10Gb/25Gb networking, etc.
And the great part is if I wanted to collapse down to just the SBCs? I would just need to scale down some replicas of the high-mem or high-CPU stuff I'm testing.
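A minimal sketch of what that looks like, assuming a hypothetical workload name and a hypothetical `node-class` label on the big Proxmox VMs: pinning the heavy stuff to the large nodes means collapsing to SBCs is just a scale-to-zero.

```yaml
# Hypothetical example: a memory-heavy test workload pinned to the
# big Proxmox VMs via nodeSelector. To collapse down to the SBCs,
# run: kubectl scale deployment/memory-hog --replicas=0
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memory-hog          # hypothetical name
spec:
  replicas: 3               # scale to 0 when running SBC-only
  selector:
    matchLabels:
      app: memory-hog
  template:
    metadata:
      labels:
        app: memory-hog
    spec:
      nodeSelector:
        node-class: big-vm  # hypothetical label on the Proxmox nodes
      containers:
      - name: main
        image: example/memory-hog:latest   # hypothetical image
        resources:
          requests:
            memory: 16Gi
            cpu: "8"
```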
Of course, at work I just pick the node shape and capabilities I need and don't think about it.
Yeah, I'm probably the exception for running Kubernetes at home, but I would argue if you are running more than a handful of Docker containers, you should probably be using Kubernetes anyway.
Especially if you care about things being up, or want to be able to seamlessly shuffle stuff around for maintenance. Not to mention my entire infrastructure is reproducible from just a small git repo of FluxCD manifests.
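The GitOps entry point is small: roughly one GitRepository source plus one Kustomization that reconciles everything under a path. A sketch, assuming a hypothetical repo URL and directory layout:

```yaml
# Hypothetical FluxCD bootstrap: point Flux at a git repo and
# reconcile everything under ./clusters/home from it.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/homelab   # hypothetical repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 10m
  path: ./clusters/home    # hypothetical layout
  prune: true              # delete things removed from git
  sourceRef:
    kind: GitRepository
    name: homelab
```

With `prune: true`, anything deleted from the repo gets removed from the cluster too, which is what makes the whole setup repeatable from git alone.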
When we restart one node, PostgreSQL fails over automatically; the fe/be is webscale anyway.
It works very well.
Looking at the issues, people try to shoehorn a thousand unique behaviours into a general-purpose tool, just to avoid a bit of old-school sysadmin-ing. There's a guy wanting to change the TZ of a running cluster, and wanting Kured to support that use case so it only updates during the night - in an ever-changing TZ.
What's also missing is rebalancing of pods - a descheduler.
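The Kubernetes Descheduler project covers some of this: it evicts pods from overloaded nodes so the scheduler can place them somewhere better. A sketch of a policy, using the v1alpha1 format (thresholds here are made-up numbers, tune to taste):

```yaml
# Hypothetical Descheduler policy: evict pods from nodes above the
# target utilization so underused nodes (e.g. freshly rebooted ones)
# pick up work again.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "LowNodeUtilization":
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:          # nodes below ALL of these are "underutilized"
          cpu: 20
          memory: 20
          pods: 20
        targetThresholds:    # nodes above ANY of these are "overutilized"
          cpu: 50
          memory: 50
          pods: 50
```

It runs as a Job or CronJob rather than a controller, so rebalancing still isn't continuous out of the box.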