The best way I can describe it is:
There are people who just want to use a car to get from A to B; there are those who enjoy the act of driving, maybe take it to the track on a lapping day; and there are those who enjoy having a shell of a car in the garage to work on. There's of course plenty of overlap in that Venn diagram :-).
My approach / suggestion: understand what type you are in relation to any given technology vs. what the author's perspective is.
I will never resent the time (oh God, so much time!) I've spent in the past mucking with homelabs and storage systems. Good memories and tons of learning! Today I have a family and kids and just need my storage to work. I'm in a different Venn circle than the author - sure, I have the knowledge and experience and could conceivably save a few bucks (eh, not as much of a given as articles make it seem ;), as long as I value my time appropriately low and don't mind the necessary upkeep and the potential scheduled and unscheduled "maintenance windows" for my non-techie users.
But I must admit I'm in the turn-key solution phase of my life and have cheerfully enjoyed a big-name NAS over the last 5 years or so :).
The trick with old computers pressed into service as a NAS is that they often demand more space, power, and setup/patching/maintenance work, in exchange for (hopefully) some learning and a sense of control.
You know, I thought I was too, so I threw in the towel and migrated one of my NASes to TrueNAS, since it's supposed to be one of those "turn-key solutions that doesn't require maintenance". Everything got slower and harder to maintain, and it even managed to somehow screw up one of my old disks when I added it to my pool.
The next step after that was to migrate to NixOS and bite the bullet to ensure the stuff actually works. I'd love to just give someone money and not have to care, but it seems the motto of "If you want something done correctly, you have to do it yourself" lives deep in me, and I just cannot stomach losing the data on my NAS, so it ends up really hard to trust any of those paid-for solutions when they're so crap.
Of course, Gitea and its surroundings, or a similar CI/CD setup, can be a fun thing to dabble with if you aren't totally over that from work.
Another fun idea is to run the rapidly developing Immich as a photo storage solution. But in general, the best inspiration is the awesome-selfhosted list.
- Other network protocols (NFS, ftp, sftp, S3)
- Apps that need bulk storage (e.g., Plex, Immich)
- Syncthing node
- SSH support (for some backup tools, for rsync, etc)
- You're already running a tiny Linux box in your home, so maybe also Pihole / VPN server / host your blog?
You've got compute attached to storage, and people find lots of ways to use that. Synology even has an app store.
I've obviously got a bunch of datasets just for storage: one for Time Machine backups over the network, and then dedicated ones for apps.
I'm using it for almost all my self-hosted apps:
Home Assistant, Plex, Calibre, Immich, Paperless NGX, Code Server, Pi-Hole, Syncthing and a few others.
I've got Tailscale on it, and I'm using a convenience package called caddy-reverse-proxy-cloudflare to make my apps available on subdomains of my personal domain (which is on Cloudflare) just by adding labels to the Docker containers.
And since I'm pointing the DNS entries on Cloudflare at the Tailscale address, the apps can only be accessed by my devices when they're connected to Tailscale.
I think at this point what's amazing is the ease with which I can deploy new apps if I need something or want to try something.
I can have Claude whip up a docker compose and deploy it with Dockge.
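For flavor, a new app is usually just a few lines of compose (a hedged sketch; the labels follow the caddy-docker-proxy convention, which I'm assuming the caddy-reverse-proxy-cloudflare package shares, and the image and hostname are made up):

    services:
      myapp:
        image: example/myapp          # hypothetical app image
        networks: [caddy]
        labels:
          # route myapp.example.com to this container's port 80
          caddy: myapp.example.com
          caddy.reverse_proxy: "{{upstreams 80}}"

    networks:
      caddy:
        external: true                # shared with the Caddy container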
- Home Assistant
- GitHub backups
- Self-hosting personal projects
- File sync
- golink service
- freshrss RSS reader
- Media server
- Alternative frontends for reddit/youtube
- GitHub Actions runners
- Coder instance
- Game servers (Minecraft, Factorio)
Admittedly, this is more of a project for fun than for the end result. You could achieve all of the above by paying for services or doing something else.
https://github.com/shepherdjerred/homelab/tree/main/src/cdk8...
I tend to be cloud-antagonistic because I value control more than ease.
Some of that is practical due to living on the Gulf coast where local infra can disappear for a week+ at a time.
Past that, I find that cloud environments have earned some mistrust because internal integrity is at risk from external pressures (shareholders, governments, other bad actors). Safeguarding from that means local storage.
To be fair to my perspective, much of my day job is restoring functionality lost to the endless stream of anti-user decisions by corporations (and sometimes governments).
Cloud costs would be... exorbitant. 19 TB and I'm nowhere near done ripping my movies. Dropbox would be $96/month, Backblaze $114/month, and OneDrive won't let me buy that much capacity.
I've been running a server with multiple TB of storage for many years and have been using an old PC in a full tower case for the purpose. I keep thinking about replacing the hardware, but it just never seems worth the money spent although it'd reduce the power usage.
I have it sharing data mainly via SSHFS and NFS (a bit of SMB for the wife's Windows laptop and phone). I run NextCloud and a few *arr services (for downloading Linux ISOs) in Docker.
(Currently 45TB in use on my system)
Edit: as no-one is asking, I base my system on mergerfs which was inspired by this excellent site: https://perfectmediaserver.com/02-tech-stack/mergerfs/
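For anyone curious, the sharing side of a setup like this is only a couple of lines (a hedged sketch; the path and subnet are made up):

    # /etc/exports on the server, then run: exportfs -ra
    /srv/storage  192.168.1.0/24(rw,sync,no_subtree_check)

    # client side, the SSHFS equivalent over plain SSH
    sshfs nas:/srv/storage /mnt/storage -o reconnect,ServerAliveInterval=15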
My (Synology) NAS also serves as a Time Machine backup target and hosts an LDAP backend.
For my personal NAS machine, I've used a Debian server with SnapRAID and mergerfs for nearly a decade now, using a combination of old and new HDDs. Debian is rock-solid, and I've gone through a couple of major version upgrades without issues. This setup is flexible, robust, easy/cheap to expand, and requires practically zero maintenance. I could automate the SnapRAID sync and "scrub", but I like doing it manually. Best of all, it's conceptually and technically simple to understand, and doesn't rely on black magic at the filesystem level. All my drives are encrypted with LUKS and use standard ext4. SnapRAID is great, since if one data drive fails, I don't lose access to the entire array. I've yet to experience a drive failure, though, so I haven't actually tested that in practice.
So I would recommend this approach if you want something simple, mostly maintenance-free, while remaining fully in control.
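For anyone wanting to try it, a minimal sketch of what the stack looks like (the mount points and drive count are made up, not my actual layout, and the LUKS layer is omitted):

    # /etc/snapraid.conf
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/.snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/

    # /etc/fstab - pool the data drives into one mount with mergerfs
    /mnt/disk*  /mnt/storage  fuse.mergerfs  allow_other,category.create=mfs  0 0

    # the manual routine mentioned above
    snapraid sync     # update parity after adding/changing files
    snapraid scrub    # verify a portion of the array against parity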
Non-production is my Kubernetes cluster running all the various websites, AI workflows, and other cool tools I love playing with.
Production is everything in between my wife typing google.com and Google, or between my kids and their favorite shows on Jellyfin.
You can guess which one has the managed solutions, and which one has my admittedly-reliable-but-still-requires-technical-expertise-to-fix-when-down unmanaged solutions.
It's wild how much more cost-effective this would be than pretty much any commercial NAS offering. It's ridiculous when you consider total system lifecycle cost (with how easy it is to upgrade Unraid storage pools).
Looking right now, my local Microcenter builds essentially three things: desktop PCs, some kind of "studio" PC, and "racing simulators". Turnkey NASes would move a lot of inventory, I'd wager.
That said, I prefer straight Debian to Unraid. I feel Unraid saves you a weekend on the command line setting things up the first time (nothing wrong with that!), but after playing with the trial I just went back to Debian; I didn't feel like there was $250 of value there for me ¯\_(ツ)_/¯. Almost everything on my server is in Linuxserver.io Docker containers anyway, and I greatly prefer just writing a Docker Compose file over clicking through a ton of GUI drop-downs. Once you're doing anything beyond SMB shares, you're likely either technically savvy or blindly following a guide anyway, and running commands over SSH is actually easier to follow along with a guide than clicking in a UI, since you can just copy and paste. YMMV.
I don't have unlimited bandwidth or time and want to continue the tinkering phase on things that interest me rather than the tools that enable such.
Similarly, what I was once told when looking at private planes was "What's your mission?", and that question has stuck with me ever since, even if I'm never gonna buy a plane.
One person's mission might be backing up their family photos while someone else's mission is a full *arr stack.
It is not some sort of learning and growing experience. The entirety of the maintenance on the first one I put together somewhere between 10-15 years ago is to apt-get update and dist-upgrade on it periodically, upgrade the OS to the latest stable whenever I get around to it, and when I log in and get a message that a disk is failing or failed, shut it down until I can buy a replacement. This happens once every 4 or 5 years.
The trick with big-name NASes is that the vendors go out of business, change their terms, or install spyware on your computer, and you end up involved in tons of drama over your own data. This guide is even a bit overblown. Just use mdadm.* It will always be there, it will always work, and you can switch OSes or move the drives to another system and the new one will instantly understand them - the drives really become independent of the computer altogether. When it comes to encryption, all of the above goes for LUKS through cryptsetup. The box is really just a dumb box that serves shares; it's the drives that are smart.
I guess mdadm is a (short) learning experience, but it's not one that expires. LUKS through cryptsetup is also very little to learn (remember to write zeros to the drive after encrypting it), but it's something that turnkey solutions are likely to ignore, screw up, or use to lock you into something proprietary. Instead of getting a big SSD for a boot drive, just use one of those tiny PCIe cards, as small and cheap as you can get. If it dies, just buy another one, slap it in, install Debian, and you'll be running again in an hour.
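For the curious, the whole stack is roughly this (a hedged sketch with a two-disk mirror; device names are made up):

    # assemble the array, then layer LUKS on top of it
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    cryptsetup luksFormat /dev/md0
    cryptsetup open /dev/md0 storage

    # write zeros through the mapping so the whole drive reads as random
    # ciphertext (dd exits with "no space left" when it finishes - expected)
    dd if=/dev/zero of=/dev/mapper/storage bs=1M status=progress

    mkfs.ext4 /dev/mapper/storage
    # persist the array so it assembles on boot (Debian)
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf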
With all this I'm not talking about a "homelab" or any sort of social club, just a computer that serves storage. The choice isn't between making it into a lifestyle/personality or subscribing to the managed experience. Somehow people always seem to make it into that.
tl;dr: use any old desktop, just use Debian Stable, mdadm, and cryptsetup. Put the OS on a 64G PCIe drive or even a thumb drive (whatever you have lying around).
* Please don't use ZFS, you don't need it and you don't understand it (if you do, ignore me), if somebody tells you your NAS needs 64G of RAM they are insane. All it's going to do is turn you into somebody who says that putting together a NAS is too hard and too expensive.
Consider mergerfs + snapraid.
I'd also argue that if you can set up md, you can probably figure out how to set up ZFS. It looks scary on the RAM, because it uses "idle" RAM, but it will immediately release it when any other app needs it. People use ZFS on Raspberry Pis all the time without problems.
While using desktops for this has sometimes been nice, the big things I want out of a server are
- low power usage when running 24/7
- reliable operation
- quiet operation
- performance, but they don't need much
So I've had dual-Xeon servers and 8-core Ryzen servers, but my favorites are a Minisforum with a mobile Ryzen quad-core and my UGREEN NAS. They check all the boxes for a server / NAS. Plus, both were under $300 before upgrades / storage drives.
Often my previous gaming desktop sells for a lot more than that ... I just sold my 4 year old video card for $220. Not sure what the rest of the machine will be used for, but it's not a good server because the 12-core CPU simply isn't power efficient enough.
I just ordered my first Minisforum box (MS-02 Ultra) to serve as my main storage NAS + homelab... first time ordering any of these Chinese boxes, but nothing else checked off all the requirements I had as well as it did. Hopefully it works out well for me.
I run Windows Server 2022 to support IIS / SQL Server so it's not a perfect fit for me personally, but I suspect for many home servers or NAS setup it would work well.
I was pretty disappointed to find out that none of the MS-01, MS-A1, or MS-A2 have an ATX power-button header. This means you need to solder wires to the tiny tactile switch and connect those to something like a PiKVM to get true power control/status and IPMI/Redfish.
Just seems like something simple they could have easily included if they wanted to really target the homelab space
https://www.reddit.com/r/UgreenNASync/comments/1nr2j39/encry...
It's possible because you can install a different OS, TrueNAS, etc. but it's not something I personally worry about.
It's even relatively straightforward: start it up with a keyboard and monitor attached, enter the BIOS, and turn off the watchdog settings. I'd also recommend turning off the onboard eMMC altogether, for the reason in the FYI below.
Just FYI: If you blow away the UGREEN OS off the eMMC, restoring it requires opening a support ticket with them, and it's some weird dance to restore it because apparently they've locked down their 'custom' Debian just enough for 'their' hardware.
As per someone on a Facebook group, "you CANNOT share the file as their system logs once you restore your device and flags it as used. It will fail the hardware test if the firmware has been installed again".
Because I've installed something that can't feed the watchdog, I just turn the watchdog off.
As for their OS-install crap, I assume they're just trying to make sure that you can't put it on your own hardware (sort of like how people pirate Synology's DiskStation software).
Sell the gaming GPU and put in something that does video out, or use a CPU with an iGPU.
Big gaming cases with quiet fans are quiet.
Selling the GPU and tuning or swapping the CPU can put money in your pocket to pay for storage.
Big case also means big space.
Wouldn't running something like this 24/7 cause substantial energy consumption? Cost of electricity is one thing, carbon footprint another. Do we really want such a setup running in each household, in addition to X other devices?
Are you saying it’s fine to drive a huge truck if you’re single and just need to get around the block to buy a pack of eggs, just because the emissions are nothing compared to those required for making that smaller, more efficient car that you could buy instead?
Of course this is a contrived example that ignores the used vehicle market or the possibility of walking around the block.
Obviously it depends on the actual usage and the parent's specific setup; lots of motherboards/CPUs/GPUs/RAM let you tune frequencies and downclock almost anything. Finally, we have no idea about the energy source in this case; if we're being charitable, they could live in a country with lots of wind and solar power.
Because solar wind and hydro have no impact on the environment at all. Or nuclear.
I wish people would understand that waste is waste. Even less waste is still waste.
(I don't argue for fossil fuels here, mind you.)
Plus, countries have shared grids. Any kWh you use can't be used by someone else, so it may come from coal when they draw theirs, for all you know. It's a false rationalization.
> I wish people would understand that waste is waste. Even less waste is still waste.
So if I have 10 mining rigs connected to the state power grid, the source of that energy matters nothing for the environment? If I have a contract that 100% guarantees it comes from solar, does it have the same environmental impact as a cheaper contract that guarantees 100% coal power?
I'm not sure if I misunderstand what you're saying, or you're misunderstanding what I said before, but something along the lines got lost in transmission I think.
> I wish people would understand that waste is waste
I think the point is that the configuration from the post can easily run as low as maybe 30-40W on idle, but as high as a couple hundred depending on utilization. An off-the-shelf NAS probably spikes at most in the ~35W range, with idle/spindle-off utilization in the 10W range (I'm using my 4-bay Synology DS920+ as a reference). Normally the biggest contributor to NAS energy usage is the number of HDDs, so the more you add, the more it consumes, but in this configuration the CPU, the RAM, and the GPU are all "oversized" for the NAS purpose.
While reusing parts for longer helps a lot for carbon footprint of the material itself, running that machine 24/7/365 is definitely more CO2-heavy w.r.t. electricity usage than an off-the-shelf NAS. And additional entropy in the environment in the form of heat is still additional entropy, whether it comes from coal or solar panels.
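For rough scale, using the idle figures above and a made-up $0.15/kWh rate:

    (40 W - 10 W) x 24 h x 365 days ~= 263 kWh/year
    263 kWh x $0.15/kWh            ~= $40/year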
1. You affect the mix! Your rigs create new solar and decommission coal plants! The world is cleaner!
2. You claim a "clean slice" of the existing mix. You feel good because you use only solar, but MRI machines still use power, so their mix is now "dirtier" without changing the actual state of the world.
In real systems, it's probably a combination of the above. I assume our decisions only meaningfully matter by exerting market pressures over longer timescales.
>In general, you want to get the fastest boot drive you can.
Pretty much all NAS-like operating systems run in memory, so in general you're better off running the OS from some shitty 128 GB SATA SSD and using the NVMe for data/cache/similar, where it actually matters. Some OSes are even happy to run from a USB stick, but that only works for an OS designed to accommodate it (Unraid does, I think). Something like Proxmox would destroy the stick.
Also, on HDDs - worth reading up on SMR drives before buying. And these days considering an all flash build if you don't have TBs of content
Never used Proxmox myself, but is that the common issue of logs written to flash consuming the write endurance? Or something else? If it's just that, the former is probably a one-line config change to fix.
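If it is just journal writes, the one-line fix I'd guess at (an assumption on my part that systemd-journald is the culprit; Proxmox's cluster services are a separate write source) is:

    # /etc/systemd/journald.conf - keep the journal in RAM instead of on flash
    [Journal]
    Storage=volatile

    # then: systemctl restart systemd-journald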
> And these days considering an all flash build if you don't have TBs of content
Maybe we're thinking on different scales, but don't almost all NASes have more than 1 TB of content? My own personal NAS currently has 16 TB in total; I don't want to even imagine what that would cost if I went with SSDs instead of HDDs. I still have an SSD for caching, but the main data store in a NAS should most likely be HDDs, unless you have so much money you just have to spend it.
Depends on what you're storing. With fast gigabit internet there just isn't much of a need to store ahem Linux ISOs locally anymore, as anything can be procured in a couple of minutes. Most people just aren't producing that much original data on their own either (exceptions exist, of course - people in the video-making space, etc.).
Plus, it's not that expensive anymore. I've got around 6 TB of 100% mirrored flash without even trying (I was aiming for speed and redundancy). Most of it is used enterprise drives. I think I paid around 50 a TB.
Re: Proxmox - some of its multi-node orchestration stuff is famous for chewing up drives at wild rates for some people, losing 1% of SSD life every couple of days. It hasn't affected me, so I haven't looked into the details.
SSDs in a NAS are generally expected to be used as caches in front of the main disk pool. However, if you have a bunch, you can add them to a ZFS array outright and it works pretty much flawlessly.
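Either way is a one-liner (a hedged sketch; the pool and device names are made up):

    # add an SSD as an L2ARC read cache on an existing pool
    zpool add tank cache /dev/sdf

    # or give a pair of SSDs their own mirrored pool
    zpool create fast mirror /dev/sdg /dev/sdh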
* Don't use raid5. Use btrfs-raid1 or use mdraid10 with >=2 far-copies.
* Don't use raid6. Use btrfs-raid1c3 or use mdraid10 with >=3 far-copies.
* Don't use ZFS on Linux. If you really want ZFS, run FreeBSD.
The multiple copy formats outperform the parity formats on reads by a healthy margin, both in btrfs and in mdraid. They're also remarkably quieter in operation and when scrubbing, night and day, which matters to me since mine sits in a corner of my living room. When I switched from raid6 to 3-far-copy-mdraid10, the performance boost was nice, but I was completely flabbergasted by the difference in the noise level during scrubs.
Yes, they're a bit less space efficient, but modern storage is so cheap it doesn't matter; I only store about 10TB of data on it.
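Both layouts are a single command if anyone wants to try them (a hedged sketch; device names and counts are made up):

    # btrfs with three copies of data and metadata (needs >= 3 drives)
    mkfs.btrfs -d raid1c3 -m raid1c3 /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # mdraid10 with 3 far copies across 4 drives
    mdadm --create /dev/md0 --level=10 --layout=f3 --raid-devices=4 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd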
I use btrfs: it's the most actively tested and developed filesystem in Linux today, by a very wide margin. The "best" filesystem is the one which is the most widely tested and developed, IMHO. If btrfs pissed in your cheerios ten years ago and you can't figure out how to get over it, use ext4 with metadata_csum enabled, I guess.
I use external USB enclosures, which is something a lot of people will say not to do. I've managed to get away with it for a long time, but btrfs is catching some extremely rare corruption on my current NAS, I suspect it's a firmware bug somehow corrupting USB3 transfer data but I haven't gotten to the bottom of it yet: https://lore.kernel.org/linux-btrfs/20251111170142.635908-1-...
The drives stay spun down 99% of the time, because I also use a ZFS mirrored pool on SSDs for “hot” files, although Btrfs could also work if you're opposed to ZFS because it's out of tree.
Basically using this idea, but with straight Debian instead of ProxMox: https://perfectmediaserver.com/05-advanced/combine-zfs-and-o...
I also use mergerfs 'ff' (first found) create order, and put the SSDs first in the ordered fstab list of the mergerfs mount point. This gives me tiered storage: newly created files and reads hit the SSDs first. I use a mover script that runs nightly with the SnapRAID sync/scrub to keep space on the SSDs open.
https://github.com/trapexit/mergerfs/blob/master/tools/merge...
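The fstab line for that tiering trick looks roughly like this (a hedged sketch; branch paths are made up - the SSD branch is listed first so 'ff' fills it before the spinning disks):

    /mnt/ssd:/mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  allow_other,category.create=ff  0 0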
If ZFS ever goes upstream, I will certainly enjoy tinkering with it. But until it does, I just don't see the point, I build my own kernels and dealing with the external code isn't worth the trouble. There's already more than enough to tinker with :)
All my FreeBSD machines run ZFS, FWIW.
I've figured out it only happens if I'm streaming data over the NIC at the same time as writing to the disks (while copying from one local volume to another), but that's all I really know right now. I seriously doubt it's a software bug.
Using an old gaming PC for a NAS is kind of like using an old track car (interior stripped out, roll cage welded in, doors welded shut) as your kid's first car. Yeah, it will totally work, and they can impress all of their friends as they cosplay as the Dukes of Hazzard, but it's really not optimal for the task at hand.
I just upgraded my NAS setup to a Terramaster F4-425 Plus (running Debian) and it's great. The N150 CPU in it sips power, and the whole thing is tiny and easy to hide away in a media cabinet. One ultra-quiet Noctua fan is all that's needed to keep it cool. It's so nice to use the right tool for the job.
EDIT: I'd recommend all of these guides / articles, I basically cherry-picked what I liked from all of them and ended up with something I'm really happy with:
* https://perfectmediaserver.com
* https://github.com/trapexit/mergerfs/blob/master/mkdocs/docs...
* https://blog.muffn.io/posts/muffins-awesome-nas-stack/
It's difficult for me to accept it's better given all the above.
On a side note: I hate web GUIs. I used to think they were the best thing since sliced bread, but the constant churn, combined with endless menus and config options with zero hints or direct help links, led me to hate them. The best part is that the documentation is always a version or two behind and doesn't match the latest and greatest furniture arrangement. Maybe that has improved, but I'd rather understand the tools themselves.
ECC DDR5 boots insanely fast since the BIOS can quickly verify the tune passes. This is even true when doing your initial adjustment / verification of manufacturer spec.
Do you know a system that does this? I'm looking for this too.
ZFS is the “gold standard” here
By fine I mean running all of these at the same time: Firefox with several tabs, development tools, Blender, and GIMP. All snappy and fast. Even the HDD in the laptop is only an annoyance during/after a cold boot; after that it makes no difference. I've daily-driven both for the past 8-15 years. The laptop sits at ~10-15W idle, and the i5 in it is a workhorse if needed.
Of course there are uses for better hardware, I am not dismissing upgrades. But the whole modern hw/sw situation is a giant shipwreck and a huge waste of resources/energy. I've tried very expensive new laptops for work (look up "embodied energy"), and Windows 11 right-click takes half a second to respond and Unity3D can take several minutes to boot up. It's really sad.
edit: To be honest I have to add a counter-example: streaming >=1080p60 video from YT is kind of a no-no, but that's related to the first sentence of my post.
I am not saying you are wrong in general.
"Old", right. That old PC I'm about to throw away has 2 GB of RAM.
I have a be quiet! case and six 30TB HDDs, and I plan to put Ubuntu with a Plex server on an NVMe SSD and run ZFS in a 4+2 (raidz2) layout.
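For reference, the pool creation for that layout would be something like this (a hedged sketch; in practice use /dev/disk/by-id paths rather than bare device names):

    # six-drive raidz2 ("4+2": four data, two parity)
    zpool create -o ashift=12 tank raidz2 sda sdb sdc sdd sde sdf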
Can anyone point me to a better/quieter set-up? Thank you in advance.
I'll also point out that there are a lot of folks out there who don't have very large demands when it comes to computing and would be served perfectly well by a 5-10 year old system. Even low-end gaming (Fortnite, GTA V, Minecraft, Roblox, etc.) can run perfectly fine on a computer built with $300-400 of used parts.
But don't do this just so you can upgrade your current PC.
I'd vouch more for old laptops, which are generally not upgradeable, come with a built-in UPS, and, if you remove the screen, are as thin as a notebook and can handle low usage. Then you can connect a bunch of disks, either directly or via other interfaces, and you're golden.
But for anything where your data is important, isn't ECC memory still critical for a NAS in this day and age?
E.g., a Steam Deck or a smartphone are both relegated to toy devices that are not for serious computing.
Initially I naively tried to run the two drives right off the USB3 ports on the Pi, and that basically crashed within a day - but that is of course because I was exceeding the available power draw. An external hub and supply helped, but didn't fully fix the issue.
I'm not buying anything else, and I'm also swapping out any non-Noctua fan in my parts where possible (e.g., I bought a Scythe cooler due to 'interesting' dimensional constraints and swapped its fan for a Noctua one).
They almost never run 100%, though, and I have a recurring task set up to clean dust outta my filters, computers and servers.
Also a fan is like $10?
Things more vital than the fans are the disks, power supply, and RAM.
All the music and videos I watch are through streaming. I don't have a personal business or anything that requires more than 1 TB.
Now if your NAS use case is streaming media files to multiple devices (TV set top boxes, etc), sure, NAS makes sense if the NAS you build is very low idle power. But if you just need the storage for actual computing it is a waste of time and money.
KISS.