Can you give a pic or a link to what you are using?
HDDs don't like micromovements. If you put one on a pink foam mat (either a computer mat or a yoga one) it wouldn't matter. But if you 'rigid mount' it and the screws come loose, your HDD won't like it, because that would result in microvibrations from the self-induced oscillations.
Rubber washers are good because they eat those microvibrations. The hard foam discussed in the linked article is not good because it fails on both counts: too hard to eat up the microvibrations, too soft to be a rigid mount.
The worst thing you can do is to rigid mount an HDD to a case that is subject to a constant vibration load, e.g. from a heavy-duty fan or some engine.
Here's a Wayback Machine copy of the page for when that happens: https://web.archive.org/web/20251006052340/https://ounapuu.e...
Sometimes larger sized models of those (15TB+) can be found with very good pricing. :)
I'm always interested in these DIY NAS builds, but they also feel just an order of magnitude too small to me. How do you store ~100 TB of content with room to grow without a wide NAS? Archiving rarely used stuff out to individual pairs of disks could work, as could running some kind of cluster FS on cheap nodes (tinyminimicro, raspberry pi, framework laptop, etc) with 2 or 4x disks each off USB controllers. So far none of this seems to solve the problem that is solved quite elegantly by the 1U enterprise box... if only you don't look at the power bill.
In the cloud (S3) or offline (unpowered HDDs, tapes, or optical media), I suppose. Most people just don't store that much content.
> So far none of this seems to solve the problem that is solved quite elegantly by the 1U enterprise box... if only you don't look at the power bill.
What kind of power bill are you talking about? I'd expect the drives to be about 10W each steady state (more when spinning up), so 180W. I'd expect a lower-power motherboard/CPU running near idle to be another 40W (or less). If you have a 90% efficient PSU, then maybe 250W in total.
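Back-of-the-envelope version of that estimate (the 18-drive count is inferred from the 180 W figure; all numbers are assumptions from this comment, not measurements):

```python
drives = 18
watts_per_drive = 10      # steady state; spin-up peaks are higher
board_watts = 40          # low-power motherboard/CPU/RAM near idle
psu_efficiency = 0.90

dc_load = drives * watts_per_drive + board_watts
wall_watts = dc_load / psu_efficiency
print(f"{dc_load} W DC load, ~{wall_watts:.0f} W at the wall")  # 220 W DC, ~244 W at the wall
```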
If you're drawing way more than that, you can probably swap out the old enterprisey motherboard/RAM/CPU/PSU for something more modern and do a lot better. Maybe in the same case.
I'm learning 1U is pretty unpleasant though. E.g. I tried an ASRock B650M-HDV/M.2 in a Supermicro CSE-813M. A standard IO panel is taller than 1U. If I remove the IO panel, the motherboard does fit...but the VRM heatsink was also tall enough that the top of the case bows a bit when I put it on. I guess you can get smaller third-party VRM heatsinks, but that's another thing to deal with. The CPU cooler options are limited (the Dynatron A42 works, but it's loud when the CPU draws a lot of power). 40mm case fans are also quite loud at the required airflow. You can buy Noctuas or whatever, but they won't really keep it cool; the ones that actually do spin very fast and so are very loud. You must have noticed this too, although maybe you have a spot for the machine where you don't hear the noise all the time.
I'm trying 2U now. I bought and am currently setting up an Innovision AS252-A06 chassis: 8 3.5" hot-swap bays, 2U, 520mm depth. (Of course you can fit a lot more drives if you go to 2.5" drives, give up hot swap, and/or have room for a deeper chassis.) Less worry about whether stuff will fit, more room for airflow without noise.
And if you need a good fan that’s quiet enough for the CPU, you’re looking at 4U. Otherwise, you’ll need AIOs hooked up to the aforementioned 120s.
Depends on the CPU, I imagine. I'm using one with a 65W TDP. I'm hopeful that I can cool that quietly with air in 2U, without having to nerf it with lower BIOS settings. Many NASs have even lower power CPUs like the Intel N97 and friends.
"Old server hardware" for $300 is a bit of a variation, in that you're just buying something from 5 years ago so that its cheaper. But if you want to improve power-efficiency, buy a CPU from today rather than an old one.
--------
IIRC, the "5 year old used market" for servers is particularly good because many datacenters and companies opt for a ~5-year upgrade cycle. That means 5-year-old equipment is always being sold off at incredible rates.
Any 5-year-old server will obviously have all the features you need for a NAS (likely excellent connectivity, expandability, a BMC, physical space, etc.). You just have to put up with the power-efficiency specs of 5 years ago.
Usually the chips with explicitly integrated GPUs (G-suffix, or laptop chips) are monolithic and can hit 10W or lower.
Check for PCIe bifurcation support. If that's there you can pop in a PCIe to quad M.2 adapter. That will split a PCIe x16 slot into 4 x M.2s. Each of those (and the M.2s already on the motherboard) can then be loaded with either an NVMe drive or an M.2 to SATA adapter, with each adapter providing 6 x SATA ports. That setup gives a lot of flexibility to build out a fairly extensive storage array with both NVMe and spinning platters and no USB in sight.
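Rough port math for that build-out (all numbers are assumptions rather than any particular board's manual; ASM1166-based cards are a common example of the 6-port M.2-to-SATA adapters):

```python
# one x16 slot bifurcated x4/x4/x4/x4 into a passive quad-M.2 card,
# plus the M.2 slots already on the board
bifurcated_m2 = 4
onboard_m2 = 2            # assumption; varies by board
sata_per_adapter = 6      # ports per M.2-to-SATA adapter

total_m2 = bifurcated_m2 + onboard_m2
print(total_m2)                        # up to 6 NVMe drives if every slot gets an SSD
print(total_m2 * sata_per_adapter)     # up to 36 SATA ports if every slot gets an adapter
```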
As a nice side effect of the honestly bonkers amount of compute in those boards, there's also plenty of capacity to run other VM workloads on the same metal, which lets a lot of the storage access happen locally rather than over the network. For me, that means the on-board 2.5GbE NIC is more than fine, but if not, you can also load an M.2-to-10GbE adapter (or two) as needed.
Otherwise, you can have a couple of HDD racks in which you can insert HDDs when needed (SATA allows live insertion and extraction, like USB).
Then you have an unlimited amount of offline storage, which can be accessed in a minute by swapping HDDs. You can keep an index of all files stored offline on the SSD of your PC, for easy searches without access to the HDDs. The index should have all relevant metadata, including content hashes, for file integrity verification and for identifying duplicate files.
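A minimal sketch of that index, assuming Python and SQLite and made-up disk labels/paths (one row per file, keyed by which offline disk it lives on):

```python
import hashlib, os, sqlite3, sys

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def index_disk(mountpoint, disk_label, db_path="offline-index.sqlite"):
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS files "
               "(disk TEXT, path TEXT, size INTEGER, mtime REAL, sha256 TEXT)")
    for root, _, names in os.walk(mountpoint):
        for name in names:
            p = os.path.join(root, name)
            st = os.stat(p)
            db.execute("INSERT INTO files VALUES (?,?,?,?,?)",
                       (disk_label, os.path.relpath(p, mountpoint),
                        st.st_size, st.st_mtime, sha256(p)))
    db.commit()
    db.close()

if __name__ == "__main__":
    # e.g. python3 index_disk.py /mnt/hdd07 hdd07
    index_disk(sys.argv[1], sys.argv[2])
```

Duplicates then fall out of a simple `SELECT sha256, COUNT(*) FROM files GROUP BY sha256 HAVING COUNT(*) > 1`, and re-hashing a disk later verifies its integrity against the stored values.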
Having 2 HDD racks instead of just 1 allows direct copies between HDDs and doubles the capacity accessible without swapping HDDs. Adding more than 2 adds little benefit. Moreover, some otherwise suitable MBs have only 2 SATA connectors.
Or else you can use an LTO drive, which is a very steep initial investment, but its cost is recovered after a few hundred TB by the much cheaper magnetic tapes.
Tapes have a worse access time, on the order of one minute after tape insertion, but they have much higher sequential transfer speeds than cheap SATA HDDs, so for retrieving big archive files or movies they save time. Transfers from magnetic tape must go either directly to an NVMe SSD or to an NVMe SSD over Ethernet of 10 Gb/s or faster, otherwise the tape's intrinsic transfer speed will not be reached.
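A quick sanity check of that claim, assuming an LTO-9 drive's ~400 MB/s native rate (older generations are slower) and rough assumed figures for HDD and usable Ethernet throughput:

```python
tape_mb_s = 400                       # LTO-9 native, uncompressed
cheap_hdd_mb_s = 180                  # rough sequential rate of a cheap SATA HDD (assumption)
gbe_mb_s = 1_000 / 8 * 0.94           # ~117 MB/s usable on 1 GbE (overhead factor assumed)
ten_gbe_mb_s = 10_000 / 8 * 0.94      # ~1175 MB/s usable on 10 GbE

print(cheap_hdd_mb_s < tape_mb_s)     # True: a lone HDD target makes the tape stop-start
print(gbe_mb_s < tape_mb_s)           # True: so does 1 GbE
print(ten_gbe_mb_s > tape_mb_s)       # True: 10 GbE to an NVMe target has headroom
```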
https://www.supermicro.com/en/products/motherboard/A2SDi-H-T...
I'm moving to Lenovo tiny m75q series for now due to low idle power and heat generated.
Personally, I just don't have that much data; 24TB mirrored for important data is probably enough, and I have my old mirror set available for media like recorded TV, and maybe DVDs and Blu-rays if I can figure out a way to play them that I like better than just putting the discs in the machine.
if you're willing to wait and bid-snipe you can find deals like that routinely; just wait to find one with the size drives you want.
if you just need the drives, similar lot sales are available for enterprise drives with high power-on time and zero errors. I bought a lot of 6x 6TB drives two weeks ago for 120 USD and they all worked fine. If you have the bay space and a software solution that lets you swap them in and out as needed without disturbing data, then there is a lot of 'hobby fun' to be had with managing a storage rack.
Depending on your drive enclosure it should also be able to power down drives that aren't actively being used.
Recertified/used enterprise equipment is the only way to affordably host 100s of terabytes at home.
The author is using a ThinkPad T430.
Any experiences?
Personally I’m in the process of building a NAS with an old 9th gen Intel i5. Many mobos support 6 SATA ports and three mirrored 20 TB pairs is enough storage for me. I’m guessing it’ll be a bit more power hungry than a ugreen/synology/etc appliance but there will also be plenty of headroom for running other services.
[1] https://www.truenas.com/docs/core/13.0/gettingstarted/coreha...
Obviously, direct SATA is still better if possible, but if not, these are probably the next best thing.
Also, HDD power management is often complicated by the USB-SATA bridge chip intervening.
Not recommended for long-term use.
I hate blanket recommendations like this by docs. To me, it just sounds like some guy had a problem a few times and now it's canon. It's like saying "avoid Seagate because their 3tb drives sucked." Well they did, but now they seem to be fine.
If you're occasionally copying data to an external USB drive, that's totally fine. That's what they were designed for.
The issue is that they were not designed for continuous use, or much more demanding applications like rebuilding/resilvering a drive. It's during these applications that issues occur, which is a double whammy, because it can cause permanent data loss if your USB drive fails during a recovery operation. I did a little more research after posting my last comment and came across this helpful post on the TrueNAS forums going into more depth: https://forums.truenas.com/t/why-you-should-avoid-usb-attach...
Mainly I wouldn't do it because if there's space and SATA ports, it seems stupid. Hotter. Worse HW.
Can't really see much good reason to do it tbh, except that the drive comes in a small, hot case which is relatively easy to move around. Maybe if you only do occasional backups and you don't care about scrubbing and redundancy? Otherwise, why not shuck them and throw them in a case?
Yes. It's pricey but it's never been a problem. It can connect like 12 HDDs with 256GB RAM, has 10GbE, and runs at a tiny TDP. Has IPMI. Fits in a tiny case.
The only issue I had with this motherboard was that it was difficult to find someone who sold it. Love it
Also, I don't see the built-in UPS benefit: the external drives still use external power.
Would not recommend; if you want a UPS just buy one, the small ones are not that expensive, like 70 USD.
Makes batteries live way longer.
I use a USB chassis of hard drives as the "NAS" part, and it works fairly well, and this box is also my router (using a 10 GbE Thunderbolt adapter), though my biggest issue comes with large updates in NixOS.
For reasons that are still not completely clear to me, when I do a very large system update (rebuilding Triton-llvm for Immich seems to really do it), the internal network will consistently cut out until I reboot the machine. I can log in through the external interface with Tailscale and my phone, so the machine itself is fine, but for whatever reason the internal network will die.
And that's kind of the price you pay for using a non-server to do server work. It will generally work pretty well, but I find that it does require a bit more babysitting than a rack mount server did.
However, there are disconnects/reconnects every now and then. If you use standard RAID over these USB drives, almost every disconnect/reconnect will trigger a rebuild, and rebuilds take many hours. If you are unlucky enough to have multiple disconnects during a rebuild, you are in trouble.
They all had onboard gige so it worked fine - native vlan for the inbound Comcast connection, tagged vlans out to a switch for the various LAN connections.
They were from the era of DVD drives, so I was able to put an extra HDD in the DVD slot to expand storage. One model even had an eSATA port.
They worked great. Built-in UPS and they come with a reliable crash cart built-in!
If you want redundancy, look at something like SnapRAID, http://www.snapraid.it
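For reference, a minimal snapraid.conf sketch (paths and disk labels are made up; add a `2-parity` line and a second parity disk for RAID6-like protection):

```
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
exclude /lost+found/
```

Then `snapraid sync` after adding data, and an occasional `snapraid scrub` to catch silent corruption.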
If you want to combine the disks into a single volume, consider rclone; a few of its remotes specifically seem like they could be useful here.
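rclone's union backend is the piece that pools multiple paths into one volume; a minimal sketch of the rclone.conf section (remote name and mountpoints are made up):

```
[pool]
type = union
upstreams = /mnt/disk1 /mnt/disk2
```

After that, `rclone mount pool: /mnt/storage` (or plain `rclone copy`/`rclone ls` against `pool:`) treats the disks as one tree.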
Good luck o7
The UNAS Pro 8 just came out and I'm thinking about getting it, switching away from my aging Synology setup ... only thing I wish it had was a UPS server as my Synology currently serves that purpose to trigger other machines to shut down ...
4+ years ago I bought 20 "new" (can't validate), "Seagate manufactured" (can't validate) "OS" SAS drives, and 2 started throwing errors in TrueNAS quickly (sadly after I'd lost the ability to return them). I had another 20 WD and Seagate drives I shucked at the same time (all of which was going into 3 12x SAS/SATA machines and 1 4x SATA NAS). The NAS got sidelined, as I had to use the SATA drives it was meant for elsewhere, and I no longer trusted the SAS drives so I wanted to keep the 2 extra drives as backup. Which was a good idea, as over the next 4 years another 2 of the SAS drives started throwing similar errors.
So 20% of the white-label drives didn't really last, while 100% of the shucked drives have. What was even worse, the firmware on the "OS" drives was crap: while it "technically" had SMART data, it didn't provide any stats, just passed/not passed. (Main lesson learned from this: don't accept
Another anecdote: for a long time I wasn't sure what to do with the SAS drives, as in the past I've used spare drives like this for cold offline storage, but SAS docks were very expensive ($200+). Recently they seem to have come down in price to under $50, so I bought one and was able to fill the drives up, albeit very slowly. It seems they did have problems (I was only getting 10-20MB/s), but at least I was able to validate their contents a few times after that, a bit less slowly (80MB/s).
Aside: 3 weeks ago I had multiple power outages that I thought created problems in one of the shucked drives (it was getting uncorrectable reads, though ZFS handled it ok) and a SMART long test showed pending sectors. But after force-writing all the pending sectors with hdparm, none of the sectors were reallocated. I now think it just had bad partial writes when the power outage hit, so the sectors literally held bad data whose error-correcting code didn't match up (which also explains why they were all in blocks of 8). Multiple SMART long tests later and "fingers crossed", everything seems fine.
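For anyone curious, the commands involved look roughly like this (device and LBA are made-up examples; `--write-sector` overwrites that sector, so only point it at sectors SMART already reports as pending/unreadable):

```
smartctl -t long /dev/sdX                 # long self-test reports the first failing LBA
hdparm --read-sector 123456789 /dev/sdX   # confirm the sector really is unreadable
hdparm --write-sector 123456789 --yes-i-know-what-i-am-doing /dev/sdX
smartctl -A /dev/sdX                      # watch Current_Pending_Sector / Reallocated_Sector_Ct
```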
But even the cheapest high capacity SSD deals are still a lot more expensive than hard drive array.
I’ll continue replacing failing hard drives for a few more years. For me that has meant zero replacements over a decade, though I planned for a 5% annual failure rate and have a spare drive in the case ready to go. I could replace a failed drive from the array in the time it takes to shut down, swap a cable to the spare drive, and boot up again.
SSDs also need to be examined for power loss protection. The results with consumer drives are mixed and it's hard to find good info about how common drives behave. Getting enterprise-grade drives with guaranteed PLP from large onboard capacitors is ideal, but those are expensive. Spinning hard drives have the benefit of using their rotational inertia to power the drive long enough to finish outstanding writes.
After going deep on the spec sheets and realizing that all but the best consumer drives have miserably low DWPD numbers I switched to enterprise (U.2 style) two years ago. I slam them with logs, metrics data, backups, frequent writes and data transfers, and have had 0 failures.
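The DWPD-to-total-writes arithmetic behind that, with made-up example drives (a 2 TB consumer SSD at 0.3 DWPD vs. a 7.68 TB enterprise U.2 at 1 DWPD, both over a 5-year warranty):

```python
def endurance_tb(capacity_tb, dwpd, warranty_years=5):
    # total terabytes you can write before the endurance rating is exhausted
    return capacity_tb * dwpd * 365 * warranty_years

print(endurance_tb(2.0, 0.3))    # ~1095 TB
print(endurance_tb(7.68, 1.0))   # ~14016 TB
```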
https://www.servethehome.com/we-bought-1347-used-data-center...
I bought over 200 over the last year, and the average wear level was 96%, and 95% had a wear above 88%.
Not to say you shouldn't back up your data, but personally I wouldn't be too affected if one of my personal drives errored out, especially if it contained unused personal files from 10+ years ago (legal/tax/financials are another matter).
Mainly I don't want to lose anything that took work to make or get. Personal photos, videos, source code, documents, and correspondence are the highest priority.
Software solutions like Windows Storage Spaces, ZFS, XFS, unRAID, etc. etc are "just better" than traditional RAID.
Yes, focus on 2x parity drive solutions, such as ZFS's "raidz2", or other such "equivalent to RAID6" systems. But just focus on software solutions that more easily allow you to move hard drives around without tying them to motherboard-slots or other such hardware issues.
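For example, a six-disk raidz2 pool created against stable disk IDs instead of sdX names, which is what lets the drives move between ports and controllers freely (the IDs below are placeholders):

```
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
  /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 /dev/disk/by-id/ata-DISK6
```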
RAID does not mean or imply hardware RAID controllers, which you seem to incorrectly assume.
Software RAID is still 100% RAID.
------
The best advice I can give is to use a real solution like ZFS, Storage Spaces and the like.
It's not sufficient to say 'Use RAID' because within the Venn Diagram of things falling under RAID is a whole bunch of shit solutions and awful experiences.
It's still enabled in the firmware of some vendors' laptops -- ones deep in Microsoft's pockets, like Dell, who personally I would not touch unless the kit were free, but gullible IT managers buy the things.
My personal suspicion is that it's an anti-Linux measure. It's hard to convert such a machine to AHCI mode without reformatting unless you have more clue than the sort of person who buys Dell kit.
In real life it's easy: set Windows to start in Safe Mode, reboot, go into the firmware, change RAID mode to AHCI, reboot, exit Safe Mode.
Result: Windows detects a new disk controller and boots normally, and now all you need to do is disable BitLocker and you can dual-boot happily.
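Concretely, the Safe Mode dance is just a couple of bcdedit calls from an admin prompt (the firmware step in between is whatever your BIOS calls RAID/RST vs. AHCI):

```
:: next boot goes to Safe Mode
bcdedit /set {current} safeboot minimal
:: reboot, switch the firmware's SATA mode from RAID/RST to AHCI, boot into Safe Mode,
:: then turn Safe Mode back off and reboot once more:
bcdedit /deletevalue {current} safeboot
```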
However that's more depth of knowledge than I've met in a Windows techie in a decade, too.
I like btrfs for this purpose since it's extremely easy to setup over cli, but any of the other options mentioned will work.
> btrfs is quite infamous for eating your data.
This is the reason for the slogan on the bcachefs website:
"The COW filesystem for Linux that won't eat your data".
After over a decade of in-kernel development, Btrfs still can't either give an accurate answer to `df -h`, or repair a damaged volume.
Because it can't tell a program how much space is free, it's trivially easy to fill a volume. In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.
IMHO this is entirely unacceptable in an allegedly enterprise-ready filesystem.
The fact that its RAID is even more unstable merely seals the deal.
> In my personal experience, writing to a full volume corrupts it irretrievably 100% of the time, and then it cannot be repaired.
While I get the frustration, I think you could have probably resolved both of them by reading the manual. Btrfs separates metadata and regular data, meaning if you create a lot of small files your filesystem may be 'full' (out of metadata space) while still having space available for data; `btrfs f df -h <path>` gives you the breakdown. Since everything is journaled and CoW, it will disallow most actions to prevent actual damage. If you run into this you can recover by adding an additional disk for metadata (it can just be a loopback image), rebalancing, and then taking steps to resolve the root cause, finally removing the additional disk.
May seem daunting but it's actually only about 6 commands.
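Something like this, assuming the full filesystem is mounted at /mnt (loop device number and image size are arbitrary):

```
truncate -s 4G /tmp/btrfs-spare.img
losetup /dev/loop9 /tmp/btrfs-spare.img
btrfs device add /dev/loop9 /mnt        # temporary extra device gives the allocator headroom
btrfs balance start -dusage=10 /mnt     # compact mostly-empty data chunks, freeing unallocated space
btrfs device remove /dev/loop9 /mnt     # migrates anything placed on the loop device back off it
losetup -d /dev/loop9
```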
What that means is I wrote the manual.
Now, disclaimer, not that manual: I did not work on filesystems or Btrfs, not at all. (I worked on SUSE's now-axed-because-of-Rancher container distro CaaSP, and on SLE's support for persistent memory, and lots of other stuff that I've now forgotten because it was 4 whole years and it was very nearly 4 years ago.)
I am however one of the many people who have contributed to SUSE's excellent documentation, and while I didn't write the stuff about filesystems, it is an error to assume that I don't know anything about this. I really do. I had meetings with senior SUSE people where I attempted to discuss the critical weaknesses of Btrfs, and my points were pooh-poohed.
Some of them still stalk me on social media and regularly attack me, my skills, my knowledge, and my reputation. I block them where I can. Part of the price of being online and using one's real name. I get big famous people shouting that I am wrong sometimes. It happens. Rare indeed is the person who can refute me and falsify my claims. (Hell, rare enough is the person who knows the difference between "rebut" and "refute".)
So, no, while I accept that there may be workarounds that a smart human may be able to perform, I strongly suspect that these things are not accessible to software, to tools such as Zypper and Snapper.
In my repeated direct personal experience, using openSUSE Leap and openSUSE Tumbleweed, routine software upgrades can fill up the root filesystem. I presume this is because the packaging tools can't get accurate values for free space, probably because Btrfs can't accurately account for space used or about to be used by snapshots, and a corrupt Btrfs root filesystem can't be turned back into a valid consistent one using the automated tools provided.
Which is why both SUSE's and Btrfs's own docs say "do not use the repair tools unless you are instructed to by an expert."
"This release introduces the [Btrfs] RAID stripe tree, a new tree for logical file extent mapping where the physical mapping may not match on multiple devices. This is now used in zoned mode to implement RAID0/RAID1* profiles, but can be used in non-zoned mode as well. The support for RAID56 is in development and will eventually fix the problems with the current implementation."
I've not kept up with more recent releases, but there has been progress on the issue.
RAID0/1/10 has been stable for a while.
Redundancy rather than individual reliability.
Storing 18TB (let alone with raid) on SSDs is something only those earning Silicon Valley tech wages can afford.
In essence, what we together are saying is that people with super-sensitive sleep who are also easily upset, and who don't have ultra-high salaries, cannot really afford 18 TB of data (even though they can afford an HDD), and that's true.
Quote from Toshiba's paper on this. [1]
Hard disk drives for enterprise server and storage usage (Enterprise Performance and Enterprise Capacity Drives) have MTTF of up to 2 million hours, at 5 years warranty, 24/7 operation. Operational temperature range is limited, as the temperature in datacenters is carefully controlled. These drives are rated for a workload of 550TB/year, which translates into a continuous data transfer rate of 17.5 Mbyte/s. In contrast, desktop HDDs are designed for lower workloads and are not rated or qualified for 24/7 continuous operation.
From Synology: [2]
With support for 550 TB/year workloads and rated for a 2.5 million hours mean time to failure (MTTF), HAS5300 SAS drives are built to deliver consistent and class-leading performance in the most intense environments. Persistent write cache technology further helps ensure data integrity for your mission-critical applications.
[1] https://toshiba.semicon-storage.com/content/dam/toshiba-ss-v...
[2] https://www.synology.com/en-us/company/news/article/HAS5300/...
If you're buying them from the second-hand market, you don't likely get the warranty (which is likely why they're on the second-hand market).
Max operating range is ~60C for spinning disks and ~70C for SSDs. Optimal is <40-45C. The larger operators' facilities afaik tend to run as hot as they can.
It doesn't apply to a single drive, only to a large number of drives. E.g. if you have 100,000 drives (2.4 million hours MTTF) in a server building with the required environmental conditions and maximum workload, be prepared to replace a drive about once a day on average.
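The arithmetic behind that, for anyone who wants to plug in their own fleet size:

```python
drives = 100_000
mttf_hours = 2_400_000
failures_per_day = drives * 24 / mttf_hours
print(failures_per_day)   # 1.0, i.e. about one replacement per day across the fleet
```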
1: https://datablocks.dev/blogs/news/white-label-vs-recertified...
And you should use some form of redundancy/backups anyway. It's also a good idea to not use all disks from the same batch to avoid correlated failures.
1. A large company (think cloud storage provider or something) wanting to build out storage infrastructure buys a large amount of drives from Seagate.
2. When the company receives the drives from Seagate, they randomly sample from the lot to make sure the drives are fully functional and meet specifications.
3. The company identifies issues from the sampled drives. These can range from dents/dings in the casing or torn labels to firmware or reliability issues.
4. The company returns the entire lot to Seagate as defective. Seagate now doesn't want anything to do with these drives, so they relabel them as "OS" with no Seagate branding and sell them as-is at a discount to drive resellers.
5. The drive resellers may or may not do further testing on the drives (you can probably tell by how much of a warranty a given reseller offers) before selling them onto people wanting cheap storage.
thanks for the great article!!
2 remarks from my side:
* some smartctl -a ... output would have been nice ~ i don't care if it is from "when the drives were shipped" or from any later point in time
* prices are somewhat ... aehm ... let's call them "uncompetitive", at least for where i'm at (austria, central europe, eu)
i compared prices normalized by cost per TB with new (!) drives from the austrian price portal "geizhals"
for example: for 3.5 inch HDDs sorted by "price / TB"
* https://geizhals.at/?cat=hde7s&xf=5704_3.5%22~5717_SATA%203G...
sometimes the prices are slightly higher for the used (!) drives ... sometimes also a bit lower, but imho (!) not enough to justify buying refurbished drives over new (!) ones ...
just my 0.02€
The author is Estonian; the website name (and his name) 'õunapuu' means 'apple tree'. I love Estonian names: often closely tied to nature.
godspeed!
> Half of tech YouTube has been sponsored by companies like...
It just struck me that product reviews are a part of the social realm that is barely explored.
Imagine a video website like TikTok or YouTube etc where all videos are organized under products. Priority to those who purchased the product and a category ranked by how many similar products you've purchased.
The thing sort of exists currently in some hard-to-find corner of Temu etc., but there are no channels or playlists.
Viewers want to see opinions from specific people they’ve come to trust, not the first video that comes up for a product.
I just purchased a bicycle chain cleaning device. It was absurdly cheap. The plastic was extruded poorly, it was hard to assemble, and it was not entirely obvious how to use it. However! It did the job and it barely got dirty. I expected it to be full of rusty oil both inside and outside, but it accumulated just a tiny smudge on the inlet. If anyone made a video of it, it would come across as a fantastic product.
Amazon is just not interested in organizing it properly.
You should have a look at what a river of fresh nonsense is uploaded to YouTube. The difference is that Amazon has you look at it as if it were something valuable.
2. The world is filled to the brim with videos about "fantastic products".
I’ve seen how PR firms interact with creators. It’s much easier to get the small time creators to take your product and make a positive video because getting some free product is the biggest payout they’re getting from their channel. They will always give positive reviews because they have more to gain from flattering the companies that send them free stuff than from the $1.50 they’re going to earn in ad money.
The PR firms who worked with the company I was at had a long list of small time video creators who would reliably produce positive videos of something as long as you sent them a free product. The creators know this game.
In comparison, I’d rather read a general review magazine with a long history. At least they don’t try to trick me into believing they are working out of the goodness of their hearts, and they usually aren’t married to a single big sponsor.
Online reviews are broken beyond repair.
Do any of these still exist?
1: https://review.kakaku.com/review/K0001682323/ | https://review-kakaku-com.translate.goog/review/K0001682323/...
White labeling avoids lawsuits.