It makes sense because you are unlikely to run production workloads at home.
So you don't really need half a terabyte of RAM and a 220 V power supply for the world's most expensive electric space heater.
Instead, people are most often interested in developing infrastructure-as-code, testing deployment strategies, or seeing what happens during outages: logging, metrics collection, simulating network failures, simulating software attacks, etc.
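For the network-failure piece, Linux's tc/netem is the usual tool. A minimal sketch (Python wrapping the tc CLI) that injects latency and packet loss on an interface and then restores it -- the interface name and numbers are assumptions, and it needs root on a Linux box:

    import subprocess

    IFACE = "eth0"  # assumption: whatever lab interface you're impairing

    def tc(*args):
        # thin wrapper around the tc CLI; needs root
        subprocess.run(["tc", "qdisc", *args], check=True)

    # add an egress netem qdisc: 200 ms delay plus 5% random packet loss
    tc("add", "dev", IFACE, "root", "netem", "delay", "200ms", "loss", "5%")
    try:
        input("Impairment active -- run your failure test, then press Enter... ")
    finally:
        # restore the interface to normal
        tc("del", "dev", IFACE, "root")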
In most of those cases, having a number of smaller machines makes more sense than trying to emulate a small datacenter on one or two big ones.
In practice I think most people end up with 2 or 3 'big machines' for times when they do need the oomph or want to have a big storage array for their "Linux ISO collections". After that, a few Pis or HP mini desktops in an array is just for good fun.
If I want to simulate full-blown workloads and do benchmarking, I can just use AWS or Azure for that. It's a lot cheaper to lease virts for an evening or two than to buy big machines and leave them idle 99.8% of the time.
So now I am left with building another system, and I need to decide on a form factor. Is this going to be headless, or run a GUI of some kind with a monitor attached? Should I buy a big ole tower case or move to a 6U or 12U rack system? I want more VRAM and I need as much PCIe as possible. One thing is for sure: I don't want it to be Raspberry Pi based. I have two Pi 4s collecting dust that were fun and impressive for what they are.
I saw these mini racks and wondered how they would work with an extended-ATX board. Could these be useful as some kind of "open air" or mining-type case where you simply bolt stuff on? Definitely going to investigate, so while the exact application of mini-racking Pis is not my jam, I am thankful that it was brought up.
Or you could just put the Dell sideways on a rack shelf and be OK with it for now... while you decide what will go there.
On the other hand, I don't go to the trouble this guy goes to. I just have a cheap mini PC plugged into Ethernet sitting on top of my router.
There are also other Arm SBCs that are much faster (and more efficient) than a Pi, like the Orange Pi 5 Max. Many homelab-related apps run just as well, if not better, on those; you just have to make sure to settle on a supported OS distro.
At least in theory, a Pi cluster has better failure modes than a single machine even if it's less powerful overall. And yes, I'm currently running on an old laptop -- but it's all a bit ad-hoc and I really want something a bit more uniform, ideally with at least some of the affordances I'm enjoying when deploying stuff professionally.
Do yourself a favor and buy a NAS and a compatible UPS. Any modern NAS software will speak one of the UPS IP protocols (NUT, for example) to handle graceful shutdown if your power goes off. Once you have the money, buy a second NAS, put it in a relative's house, set up a WireGuard/Tailscale tunnel between the two devices, and use it for offsite backups.
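The shutdown handshake is simple enough to see with a toy client. A hedged sketch of polling a NUT (Network UPS Tools) server over its plain-TCP protocol on port 3493 -- the host address and the UPS name "myups" are placeholders, and in real life you'd let upsmon do this for you:

    import socket
    import subprocess
    import time

    NUT_HOST, UPS_NAME = "192.168.1.10", "myups"  # placeholders

    def ups_status():
        # NUT speaks a line-based text protocol over TCP port 3493
        with socket.create_connection((NUT_HOST, 3493), timeout=5) as s:
            s.sendall(f"GET VAR {UPS_NAME} ups.status\n".encode())
            reply = s.recv(1024).decode()  # e.g. VAR myups ups.status "OL"
        return reply.split('"')[1]

    while True:
        if "OB" in ups_status():  # OB = on battery, OL = on line power
            subprocess.run(["shutdown", "-h", "+1"])  # needs root
            break
        time.sleep(30)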
For what it's worth, my house isn't a single power domain -- I'm running some of my services on an old laptop that's still got some battery. But it goes to sleep if the power goes off, and needs physical intervention to recover. Which is an excellent example of redundant systems providing multiple single points of failure and of the perils of using random left-over hardware to run stuff.
A pre-built NAS (configured as RAID 5 at least) is worth the cost - storage should be set-and-forget: drives will fail, and hot-swapping a drive and letting the array rebuild automatically should mean zero downtime to life. Commercial NAS solutions have proven backup workflows.
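The reason a RAID 5 rebuild works at all is just XOR parity: the parity block is the XOR of the data blocks, so any single lost block can be recomputed from the rest. A toy illustration (real arrays stripe and rotate parity across drives, but the arithmetic is the same):

    from functools import reduce

    # three "drives" holding equal-sized data blocks
    blocks = [b"disk0 data", b"disk1 data", b"disk2 data"]

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    parity = reduce(xor, blocks)  # what the array writes to the parity drive

    # simulate losing drive 1, then rebuild it from the survivors + parity
    lost = 1
    survivors = [b for i, b in enumerate(blocks) if i != lost]
    rebuilt = reduce(xor, survivors + [parity])
    assert rebuilt == blocks[lost]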
For compute and databases, home setups can mirror to the cloud or remote locations. Proxmox makes this straightforward with its web admin UI - just a few clicks to spin up a replication job.
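The same thing is scriptable against the Proxmox API if you'd rather not click. A rough sketch using the proxmoxer Python library - the hostname, credentials, VM id, and node names are all placeholders, and the parameters are my best reading of the /cluster/replication endpoint:

    from proxmoxer import ProxmoxAPI  # pip install proxmoxer

    pve = ProxmoxAPI("pve1.lan", user="root@pam",
                     password="secret", verify_ssl=False)

    # replicate VM 100's disks to node "pve2" every 15 minutes
    pve.cluster.replication.post(
        id="100-0",       # <vmid>-<job number>
        type="local",
        target="pve2",
        schedule="*/15",
    )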
Modern consumer hardware and internet are quite reliable now. Business-grade equipment performs even better.
Noise and power consumption are a huge consideration. I think about it as the cost of GPU or computing power per watt. While hard numbers might not be out there, one or a few USFF devices like the Lenovo ThinkCentre M920q work pretty flawlessly with Proxmox or anything else. In a rack, they can be treated as quasi blade servers stacked on a shelf, and there are dedicated cases for them as well. Again, it's down to personal preference.
These types of USFF devices often pair a desktop-grade CPU with a mobile-style, industrial-grade build and a 130 W power supply, and they spend most of their time idling. There are newer generations of these devices from most of the manufacturers too.
The cost of electricity does add up, but more than that, keeping your wattage footprint low lets you use a more reasonably sized UPS for battery backup; if there's ever a need, everything can stay up much, much longer.
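Back-of-the-envelope, runtime is just usable battery watt-hours divided by load. With made-up but plausible numbers (a small UPS with two 12 V / 9 Ah batteries, ~85% usable after inverter losses):

    # made-up but plausible numbers, not measurements
    battery_wh = 2 * 12 * 9 * 0.85   # two 12 V 9 Ah batteries, ~85% usable
    for load_w in (40, 200):         # a USFF node vs. a big tower
        print(f"{load_w:>4} W load -> ~{battery_wh / load_w:.1f} h runtime")

Same UPS, roughly 4.6 hours at 40 W versus 0.9 hours at 200 W (and lead-acid actually does worse than linear at high draw, so the gap is wider in practice).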
I have an M1 Mac mini, which is more powerful than any of these, but macOS is not really suitable for tinkering or anything that's not Apple(TM).
GPU-intensive tasks are more purpose-driven than a general homelab. For that, you can get a NAS with a dedicated GPU for transcoding if you wanted, etc.
The call for large amounts of compute/GPU makes a lot of sense, and there are a lot of ways to get there depending on what's needed, relative to the electricity bill you're OK with if it ends up idling a lot more than anticipated.
Adding a Mac mini/Studio for crunching certain things might be enough for a single person or household. Adding other demands or users beyond that could change the math.
I'm familiar with racks and gear, and had way too much of it when I pulled out of datacenters and went more virtual and cloud. The nice thing now is a lot of that virtualization can come home with a bit of the data center (power backups, internet backups, etc)
Also, I learned about this device from this post and immediately bought one for my existing home server remote access: https://jetkvm.com/
I'll mention the broken link on their Discord.
Holy moly, these are getting expensive. $1k for something that goes in the closet is wild.
This was posted a while back; it has some good resources.
https://mikrotik.com/product/rmk2_10
looks like this: https://cdn.mikrotik.com/web-assets/rb_images/2242_hi_res.pn...
I wish there were some kind of firmly defined standard for exactly half of a 1U width, so that different manufacturers' devices could be attached together.