TXT/DRTM can enable AEM (Anti Evil Maid) with Qubes, SystemGuard with Windows IoT and hopefully future support from other operating systems. It would be a valuable feature addition to Proxmox, FreeNAS and OPNsense.
Some (many?) N150 devices from Topton (China) ship without Bootguard fused, which _may_ enable coreboot to be ported to those platforms. Hopefully ODROID (Korea) will ship N150 devices. Then we could have fanless N150 devices with coreboot and DRTM for less-insecure [2] routers and storage.
[1] Gracemont (E-core): https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-c... | https://youtu.be/agUwkj1qTCs (Intel Austin architect, 2021)
[2] "Xfinity using WiFi signals in your house to detect motion", 400 comments, https://news.ycombinator.com/item?id=44426726#44427986
I currently do not have time for a clear how-to, but some relevant references would be:
https://www.freedesktop.org/software/systemd/man/latest/syst...
https://www.krose.org/~krose/measured_boot
Integrating this better into Proxmox projects is definitely something I'd like to see sooner or later.
You give up so much by using an all-in-one mini device...
No Upgrades, no ECC, harder cooling, less I/O.
I have had a Proxmox Server with a used Fujitsu D3417 and 64gb ecc for roughly 5 years now, paid 350 bucks for the whole thing and upgraded the storage once from 1tb to 2tb. It draws 12-14W in normal day use and has 10 docker containers and 1 windows VM running.
So I would prefer a mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over these toy boxes...
However, Jeff's content is awesome like always
And these prices are getting low enough, especially with these NUC-based solutions, to actually be price competitive with the low tiers of Drive & Dropbox while also being something you actually own and control. Dropbox still charges $120/yr for the entry-level plan of just 2TB, after all. 3x WD Blue NVMes + an N150 and you're at break-even in 3 years or less.
Any idea what that failure mode could have been? It worries me tremendously to keep data on an SSD now.
Then shortly after, I had a BTRFS failure on another drive without any failing hardware.
Just backup your stuff with 3-2-1 strategy and you're OK.
I'd recommend a combination of syncthing, restic and ZFS (with zfs-auto-snapshot, sanoid or zrepl) and maybe bluray (as readonly medium)
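For anyone who wants a concrete starting point, a minimal sketch of that combination (pool names, the restic repo location and retention numbers are all made up, adjust to taste):

    # local history: rolling ZFS snapshots (zfs-auto-snapshot, sanoid or zrepl automate this)
    zfs snapshot tank/data@manual-$(date +%F)
    # offsite copy: encrypted, deduplicated restic repository on another machine
    restic -r sftp:backuphost:/srv/restic init                        # once
    restic -r sftp:backuphost:/srv/restic backup /tank/data
    restic -r sftp:backuphost:/srv/restic forget --keep-daily 7 --keep-monthly 12 --prune

Syncthing then just keeps the live copies on your devices in sync; the snapshots and restic provide the history and offsite legs of 3-2-1.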
When it comes to self hosted servers for example, using tiny computers as servers often gets you massive power savings that do make a difference compared to buying off-lease rack mount servers that can idle in the hundreds of watts.
Even the lower tier IronWolf drives from Seagate specify 600k load/unload cycles (not spin down, granted, but gives an idea of the longevity).
There are many similar articles.
Or perhaps the fact that my IronWolf drives are 5400rpm rather than 7200rpm means they're still going strong after 4 years with no issues spinning down after 20 minutes.
Or maybe I'm just insanely lucky? Before I moved to my desktop machine being 100% SSD I used hard drives for close to 30 years and never had a drive go bad. I did tend to use drives for a max of 3-5 years though before upgrading for more space.
If the stuff you access often can be cached to SSDs, you rarely touch the spinning disks. Depending on your file system and operating system, only drives that are in use need to be spun up. If you have multiple drive arrays with media, some of it won't be accessed as often.
In an enterprise setting it generally doesn't make sense. In a home environment you generally don't access the data on the disks that often. Automatic downloads and seeding change that.
(see above, same question)
If a read request can be filled by the OS cache, it will be. Then it will be filled by the ARC, if possible. Then it will be filled by the L2ARC, if it exists. Then it will be filled by the on-disk cache, if possible; finally, it will be filled by a read.
An async write will eventually be flushed to a disk write, possibly after seconds of realtime. The ack is sent after the write is complete... which may be while the drive has it in a cache but hasn't actually written it yet.
A sync write will be written to the ZIL SLOG, if it exists, while it is being written to the disk. It will be acknowledged as soon as the ZIL finishes the write. If the SLOG does not exist, the ack comes when the disk reports the write complete.
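For reference, a rough sketch of how those tiers map onto commands (pool and device names are made up):

    # add an L2ARC cache device and a mirrored SLOG to an existing pool
    zpool add tank cache /dev/nvme0n1
    zpool add tank log mirror /dev/nvme1n1 /dev/nvme2n1
    # watch how often reads are actually served from the ARC/L2ARC
    arc_summary | less        # or: arcstat 1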
You are thinking in dimensions normal people have no need for. The numbers alone speak volumes: 12TB, 6 HDDs, 8TB NVMes, 2.5GbE LAN.
Linux ISOs?
There are several sub-$150 units that allow you to upgrade the RAM, limited to one 32GB stick max. You can use an NVMe-to-SATA adapter to add plenty of spinning rust, or connect it to a DAS.
While I wouldn't throw any VMs on these, you have enough headroom for non-AI home server apps.
Intel also means it has QuickSync, so you won't need to buy an N150. However, I tend to be sceptical about these AliExpress boxes, too. Established server manufacturers like Dell, HP, Lenovo or Fujitsu (RIP) are way more reliable.
I do think they bridge the server gap between using a rpi and a full server. Likely for the hobbyist that doesn't yet have the need (or space) for better hardware.
No IPMI and not very many NVME slots. So I think you're right that a good mATX board could be better.
Running it with encrypted ZFS volumes, and even with a 5-bay 3.5-inch HDD dock attached via USB.
https://www.aliexpress.com/item/1005006369887180.html
Not totally upgradable, but at least pretty low cost and modern, with an optional SATA + NVMe combination for Proxmox. Shovel in an enterprise SATA drive and a consumer 8TB WD SN850X and this should work pretty well. Even Optane is supported.
IPMI could be replaced with NanoKVM or JetKVM...
You could also go with a 32GB+ Intel Optane boot drive and enterprise SATA for data, depending on your use case.
I don't really understand the general public, or even most use cases, requiring upgrade paths beyond getting a new device.
By the time the need to upgrade comes, the tech stack is likely faster and you're basically just talking about gutting the PC and doing everything over again, except maybe power supply.
Modern power MOSFETs are cheaper and more efficient. Ten years ago 80 Plus Gold efficiency was a bit expensive and 80 Plus Bronze was common.
Today, 80 Plus Gold is cheap and common, and only 80 Plus Platinum reaches into the exotic level.
If your peak power draw is <200W, I would recommend an efficient <450W power supply.
Another aspect: Buying a 120 bucks power supply that is 1.2% more efficient than a 60 bucks one is just a waste of money.
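Rough back-of-the-envelope maths on that, with assumed numbers (30 W average draw, 0.30 EUR/kWh, 90% vs 91.2% efficiency):

    awk 'BEGIN { load=30; e1=0.90; e2=0.912; price=0.30; premium=60;
      saved_w = load/e1 - load/e2;               # ~0.44 W less pulled from the wall
      eur_yr  = saved_w/1000 * 24*365 * price;   # ~1.15 EUR/year
      printf "%.2f W saved, %.2f EUR/year, ~%.0f years to recoup the premium\n",
             saved_w, eur_yr, premium/eur_yr }'

At low loads the payback time is measured in decades.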
Another upgrade path is to keep the case, fans, cooling solution and only switch Mainboard, CPU and RAM.
I'm also not a huge fan of non-x64 devices, because they still often require jumping through some hoops regarding boot order, booting from external devices, or behaviour after power loss.
My use case is a backup server for my macs and cold storage for movies.
6x 2TB drives will give me a 9TB RAID-5 for $809 ($100 each for the drives, $209 for the NAS).
Very quiet so I can have it in my living room plugged into my TV. < 10W power.
I have no room for a big noisy server.
While my server is quite big compared to a "mini" device, it's silent. No CPU fan, only 120mm case fans spinning around 500rpm, maybe 900rpm on load - hardly noticeable. I also have a completely passive backup solution with a Streacom FC5, but I don't really trust it for the chipsets, so I also installed a low-rpm 120mm fan.
How did you fit 6 drives in a "mini" case? Using Asus Flashstor or beelink?
Fujitsu D3417-B12
Intel Xeon 1225
64GB ecc
WD SN850x 2TB
mATX case
Pico PSU 150
For backup I use a 2TB enterprise HDD and ZFS send. For snapshotting I use zfs-auto-snapshot.
So really nothing recommendable for buying today. You could go for this
https://www.aliexpress.com/item/1005006369887180.html
Or an old Fujitsu Celsius W580 Workstation with a Bojiadafast ATX Power Supply Adapter, if you need harddisks.
Unfortunately there is no silver bullet these days. The old stuff is... well, too old or no longer available, and the new stuff is either too pricey, lacks features (ECC and 2.5GbE mainly) or too power hungry.
A year ago there were bargains on the Gigabyte MC12-LE0 board for under 50 bucks, but nowadays it costs about 250 again. These boards also had the problem of drawing too much power for an ultra-low-power homelab.
If I HAD to buy one today, I'd probably go for a Ryzen Pro 5700 with a gaming board (like ASUS ROG Strix B550-F Gaming) with ECC RAM, which is supported on some boards.
Storage is easier as an appliance that just runs.
What would you use instead?
ZFS is better than raw RAID, but 1 parity per 5 data disks is a pretty good match for the reliability you can expect out of any one machine.
Much more important than better parity is having backups. Maybe more important than having any parity, though if you have no parity please use JBOD and not RAID-0.
Since RAID is not meant for backup but for availability, losing another drive while rebuilding will kill your storage pool, and having to restore all the data from a backup (e.g. from a cloud drive) is probably not what you want, since it takes time during which the device is offline. If you rely on RAID5 without having a backup, you're done.
So I have a RAID1, which is simple, reliable and easy to maintain. Replacing 2 drives with higher capacity ones and increasing the storage is easy.
But agree about backups.
If your odds of disk failure in a rebuild are "only" 10x normal failure rate, and it takes a week, 5 disks will all survive that week 98% of the time. That's plenty for a NAS.
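That 98% checks out if you assume something like a 2% annual failure rate per disk; a quick sanity check:

    awk 'BEGIN { afr=0.02; mult=10; disks=5;
      p_week = afr/52 * mult;          # per-disk failure probability during the rebuild week
      printf "all %d disks survive the week: %.1f%%\n", disks, (1-p_week)^disks*100 }'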
Hence the first sentence of my three sentence post.
So I made a simple comment to point out the conflict, a little bit rude but not intended to escalate the level of rudeness, and easier for both of us than writing out a whole big thing.
The only place I can put a NAS is my living room. I'm not putting a fucking 4-bay synology on my entertainment shelf. And if I can hear it, it is too loud.
these mini-NAS boxes are about the size of a single 3.5 HDD
https://buy.hpe.com/us/en/compute/tower-servers/proliant-mic...
Pity they're Intel CPUs though. :(
HPE have announced 12th Gen servers for their other lines recently, so maybe the Microservers will get a 12th Gen update this year too. Hopefully with AMD CPUs rather than the Intel crap.
The hardware is great, but >1000 bucks is a pretty hard sell even for enthusiasts.
Turns out I actually have power supplies that alone draw over 30W with zero load; when trying for the lowest idle power consumption I've found that the choice of power supply matters a lot.
Turns out that the power supply and motherboard are the most important for saving power - besides reaching low C-states (powertop). I had the best results with Fujitsu D3x17 / D3644 and Gigabyte C246 WU2 boards.
Today these are unicorns not worth hunting for. Like I said: No modern server grade board is that good while being cheap. You could take a look at
Kontron K3851-R ATX
If I remember correctly, Kontron bought Fujitsu's mainboard segment a while ago.
This seems useful. But it seems quite different from his previous (80TB) NAS.
What is the idle power draw of an SSD anyway? I guess they usually have a volatile ram cache of some sort built in (is that right?) so it must not be zero…
Small/portable low-power SSD-based NASs have been commercialized since 2016 or so. Some people call them "NASbooks", although I don't think that term ever gained critical MAS (little joke there).
Examples: https://www.qnap.com/en/product/tbs-464, https://www.qnap.com/en/product/tbs-h574tx, https://www.asustor.com/en/product?p_id=80
Not really seeing that in these minis. Either the devices under test haven't been optimized for low power, or their Linux installs have non-optimal configs for low power. My NUC 12 draws less than 4W, measured at the wall, when operating without an attached display and with Wi-Fi but no wired network link. All three of the boxes in the review use at least twice as much power at idle.
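If anyone wants to check whether their box is actually reaching the deep package C-states, a hedged sketch of the usual knobs:

    # report and (optionally) apply the tunables powertop recommends
    sudo powertop --auto-tune
    # interactive view: the "Idle stats" tab shows how deep the package C-states go
    sudo powertop
    # PCIe ASPM policy is often the difference between a few watts and ~10 W at idle
    cat /sys/module/pcie_aspm/parameters/policy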
- Warm storage between mobile/tablet and cold NAS
- Sidecar server of functions disabled on other OSes
- Personal context cache for LLMs and agents
One curiosity for @geerlingguy, does the Beelink work over USB-C PD? I doubt it, but would like to know for sure.
If you're running on consumer nvmes then mirrored is probably a better idea than raidz though. Write amplification can easily shred consumer drives.
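For example, striped mirrors plus keeping an eye on wear, with made-up device names:

    # two mirrored vdevs instead of a single raidz vdev
    zpool create tank mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1
    # NVMe wear indicators worth watching on consumer drives
    sudo smartctl -A /dev/nvme0n1 | grep -Ei 'percentage used|data units written'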
They're on 24/7 and run monthly scrubs, as well as monthly checksum verification of my backup images, and I haven't noticed any issues so far.
I had some correctable errors which got fixed when changing SATA cable a few times, and some from a disk that after 7 years of 24/7 developed a small run of bad sectors.
That said, you got ECC so you should be able to monitor corrected memory errors.
Matt Ahrens himself (one of the creators of ZFS) has said there's nothing particular about ZFS:
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...
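For reference, a sketch of how that (unsupported) flag can be set on Linux OpenZFS:

    # check the current debug flags
    cat /sys/module/zfs/parameters/zfs_flags
    # enable ZFS_DEBUG_MODIFY (0x10) at runtime
    echo 0x10 | sudo tee /sys/module/zfs/parameters/zfs_flags
    # or make it persistent across reboots
    echo 'options zfs zfs_flags=0x10' | sudo tee /etc/modprobe.d/zfs-flags.conf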
Sun (and now Oracle) officially recommended using ECC ever since it was intended to be an enterprise product running on 24/7 servers, where it makes sense that anything that is going to be cached in RAM for long periods is protected by ECC.
In that sense it was a "must-have", as business-critical functions require that guarantee.
Now that you can use ZFS on a number of operating systems, on many different architectures, even a Raspberry Pi, the business-critical-only use-case is not as prevalent.
ZFS doesn't intrinsically require ECC but it does trust that the memory functions correctly which you have the best chance of achieving by using ECC.
https://www.phoronix.com/news/Intel-IGEN6-IBECC-Driver
Not every new CPU has it: for example, the Intel N95, N97, N100, N200, i3-N300, and i3-N305 all have it, but the N150 doesn't!
It's kind of disappointing that, of the low-power NAS devices reviewed here, the only one with support for IBECC had a limited BIOS that most likely was missing this option. The ODROID H4 series, CWWK NAS products, AOOSTAR, and various N100 ITX motherboards all support it.
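If the BIOS does expose and enable IBECC, the igen6 EDAC driver from that Phoronix link should register a memory controller; a quick way to check (sketch):

    dmesg | grep -iE 'edac|igen6'
    ls /sys/devices/system/edac/mc/
    # corrected / uncorrected error counters, if the driver loaded
    grep . /sys/devices/system/edac/mc/mc0/ce_count /sys/devices/system/edac/mc/mc0/ue_count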
https://www.minisforum.com/pages/n5_pro
https://store.minisforum.com/en-de/products/minisforum-n5-n5...
no RAM 1.399€
16GB RAM 1.459€
48GB RAM 1.749€
96GB RAM 2.119€
96GB DDR5 SO-DIMM costs around 200€ to 280€ in Germany: https://geizhals.de/?cat=ramddr3&xf=15903_DDR5~15903_SO-DIMM...
I wonder if that 128GB kit would work, as the CPU supports up to 256GB
https://www.amd.com/en/products/processors/laptop/ryzen-pro/...
I can't force the page to show USD prices.
Either way, on my most recent NAS build, I didn't bother with a server-grade motherboard, figuring that the standard consumer DDR5 ECC was probably good enough.
DDR5's on-die ECC is not good enough. What if you have faulty RAM and ECC is constantly correcting it without you knowing it? There's no value in that. You need the OS to be informed so that you are aware of it. It also does not protect against errors which occur between the RAM and the CPU.
This is similar to HDDs using ECC. Without SMART you'd have a problem, but part of SMART is that it allows you to get a count of ECC-corrected errors so that you can be aware of the state of the drive.
True ECC takes the role of SMART in regard to RAM; it's just that it only reports that: ECC-corrected errors.
On a NAS, where you likely store important data, true ECC does add value.
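Monitoring both sides is straightforward; a hedged sketch (device name is an example):

    # drive side: SMART attributes, including reallocated/ECC-related counters
    sudo smartctl -A /dev/sda
    # RAM side: corrected/uncorrected error counts from the EDAC subsystem
    sudo edac-util -v                 # edac-utils package
    sudo ras-mc-ctl --error-count     # rasdaemon, if installed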
https://geizhals.de/?cat=ramddr3&sort=r&xf=1454_49152%7E1590...
Kingston Server Premier SO-DIMM 48GB, DDR5-5600, CL46-45-45, ECC KSM56T46BD8KM-48HM for 250€
Which then means 500€ for the 96GB
FLASHSTOR 6 Gen2 (FS6806X) $1000 - https://www.asustor.com/en/product?p_id=90
LOCKERSTOR 4 Gen3 (AS6804T) $1300 - https://www.asustor.com/en/product?p_id=86
At some point though, SSDs will beat hard drives on total price (including electricity). I’d like a small and efficient ECC option for then.
With modern HDDs I can, however, saturate the SATA controller: whilst it has 2x SATA 3 and 2x SATA 2, I can only achieve ~5-6 Gbps cumulative, not the 18 Gbps you would expect. It's certainly not a disaster, but it does mean writes especially are slower than expected (2x write amplification).
Most models I find reuse the most powerful USB-C port as ... the recharging port, so it's unusable as a DC UPS.
Context: my home server is my old https://frame.work motherboard running proxmox VE with 64GB RAM and 4 TB NVME, powered by usb-c and drawing ... 2 Watt at idle.
I've had the River Pro for a few months and it's worked perfectly for that use case. And UnRaid supports it as of a couple months ago.
Lots of results on Ali for a query “usb-c ups battery”.
Check some models from cuktech and anker
Something like a Ryzen 7745, 128gb ecc ddr5-5200, no less than two 10gbe ports (though unrealistic given the size, if they were sfp+ that'd be incredible), drives split across two different nvme raid controllers. I don't care how expensive or loud it is or how much power it uses, I just want a coffee-cup sized cube that can handle the kind of shit you'd typically bring a rack along for. It's 2025.
Not the "cube" sized, but surprisingly small still. I've got one under the desk, so I don't even register it is there. Stuffed it with 4x 4TB drives for now.
unfortunately most people still consider ECC unnecessary, so options are slim
(I assume M.2 cards are the same, but have not confirmed.)
If this isn’t running 24/7, I’m not sure I would trust it with my most precious data.
Also, these things are just begging for a 10Gbps Ethernet port, since you're going to lose out on a ton of bandwidth over 2.5Gbps... though I suppose you could probably use the USB-C port for that.
Why not a single large-capacity M.2 SSD using 4 full lanes, and proper backup with a cheaper, larger-capacity and more reliable spinning disk?
It’d be great if you could fully utilise the M.2 speed but they are not about that.
Why not a single large M.2? Price.
I'm hopeful 4/8 TB NVMe drives will come down in price someday but they've been remarkably steady for a few years.
No issues so far. The system is completely stable. Though, I did add a separate fan at the bottom of the Odroid case to help cool the NVMe SSDs. Even with the single lane of PCIe, the 2.5gbit/s networking gets maxed out. Maybe I could try bonding the 2 networking ports but I don't have any client devices that could use it.
I had an eye on the Beelink ME Mini too, but I don't think the NVMe disks are sufficiently cooled under load, especially on the outer side of the disks.
If you have any other questions, feel free to contact our support team at support-pc@bee-link.com — we’re always happy to help!
I have the same problem, but it is not a problem for my Seagate X16s, that have been going strong for years.
I know of FriendlyElec CM3588, are there others?
Why buy a tiny, m.2 only mini-NAS if your need is better met by a vanilla 2-bay NAS?
I have an 8 drive NAS running 7200 RPM drives, which is on a wall mounted shelf drilled into the studs.
On the other side of that wall is my home office.
I had to put the NAS on speaker springs [1] to not go crazy from the hum :)
[1] https://www.amazon.com.au/Nobsound-Aluminum-Isolation-Amplif...
You can install a third-party OS on it.
Just something to be aware of.
Recovery from a lost drive would be slower, for sure.
Not saying premium drives don't have their place - but for 95% of people $200/TB ($100 premium over lower tiers) is a waste.
I know you can patch microcode at runtime/boot, but I don't think that covers all vulnerabilities.
I would remove points for a built-in non-modular standardized power supply. It's not fixable, and it's not comparable to Apple in quality.
Helps a ton with response times on any NAS that's primarily spinning rust, especially if dealing with a decent amount of small files.
What in the WORLD is preventing these systems from getting at least 10gbps interfaces? I have been waiting for years and years and years and years and the only thing on the market for small systems with good networking is weird stuff that you have to email Qotom to order direct from China and _ONE_ system from Minisforum.
I'm beginning to think there is some sort of conspiracy to not allow anything smaller than a full size ATX desktop to have anything faster than 2.5gbps NICs. (10gbps nics that plug into NVMe slots are not the solution.)
Price and price. Like another commenter said, there is at least one 10Gbe mini NAS out there, but it's several times more expensive.
What's the use case for the 10GbE? Is ~200MB/sec not enough?
I think the segment for these units is low price, small size, shared connectivity. The kind of thing you tuck away in your house invisibly and silently, or throw in a bag to travel with if you have a few laptops that need shared storage. People with high performance needs probably already have fast NVMe local storage, is probably the thinking.
Yes; it's far from close enough. My litmus test for consumers is: can they put their video files on network storage and work with them? And with modern cellphone video, the answer is no, it is not convenient to do that. Consumer networking is in a tarpit. Business-grade systems have networking that is three orders of magnitude faster; there is no other component between systems that are that dissimilar in performance, not even GPUs.
When I'm talking to an array of NVMe? Nowhere near enough, not when each drive could do 1000MB/s of sequential writes without breaking a sweat.
If I could get the same unit for like $299 I'd run it like that for my NAS too, as long as I could run a full backup to another device (and a 3rd on the cloud with Glacier of course).
Power hungry yes, good cabling maybe?
I run 10G-Base-T on two Cat5e runs in my house that were installed circa 2001. I wasn't sure it would work, but it works fine. The spec is for 100 meter cable in dense conduit. Most home environments with twisted pair in the wall don't have runs that long or very dense cabling runs, so 10g can often work. Cat3 runs probably not worth trying at 10G, but I've run 1G over a small section of cat3 because that's what was underground already.
I don't do much that really needs 10G, but I do have a 1G symmetric connection and I can put my NAT on a single 10G physical connection and also put my backup NAT router in a different location with only one cable run there... the NAT routers also do NAS and backup duty, so I can have a little bit of physical separation between them, plus I can reboot one at a time without losing NAT.
Economical consumer-oriented 10G is coming soon; lots of announcements recently and reasonable-ish products on AliExpress. All of my current 10G NICs are used enterprise stuff, and the switches are used high-end (fairly loud) SMB gear. I'm looking forward to getting a few more ports in the not-too-distant future.
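It's also easy to verify what a given run actually negotiated and whether it holds up under load (interface and host names are examples):

    # what did the NIC negotiate over the old cat5e run?
    ethtool eth0 | grep -E 'Speed|Duplex'
    # sustained throughput between two hosts
    iperf3 -s                      # on the NAS
    iperf3 -c nas.lan -P 4 -t 30   # from the client, 4 parallel streams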
They definitely exist, two examples with 10 GbE being the QNAP TBS-h574TX and the Asustor Flashstor 12 Pro FS6712X.
To be sure... is the data compressible, or repeated? I have encountered an SSD that silently performed compression on the data I wrote to it (verified by counting its stats on blocks written). I don't know if there are SSDs that silently deduplicate the data.
(An obvious solution is to copy data from /dev/urandom. But beware of the CPU cost of /dev/urandom; on a recent machine, it takes 3 seconds to read 1GB from /dev/urandom, so that would be the bottleneck in a write test. But at least for a read test, it doesn't matter how long the data took to write.)
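One way to sidestep that bottleneck is to generate the incompressible data once and reuse it for the write test (paths are examples):

    # generate 1 GiB of incompressible data once (slow, but only done once)
    dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=1024
    # time the actual write to the SSD under test, bypassing the page cache
    dd if=/tmp/rand.bin of=/mnt/ssd/testfile bs=1M oflag=direct conv=fdatasync status=progress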
It's a file server (when did we start calling these "NAS"?) with Samba, NFS, but also some database stuff. No VMs or Docker containers. Just a file and database server.
It has full disk encryption with TPM unlocking with my custom keys so it can boot unattended. I'm quite happy with it.
I start with normal full disk encryption and enrolling my secure boot keys into the device (no vendor or MS keys) then I use systemd-cryptenroll to add a TPM2 key slot into the LUKS device. Automatic unlock won't happen if you disable secure boot or try to boot anything other than my signed binaries (since I've opted to not include the Microsoft keys).
systemd-cryptenroll has a bunch of stricter security levels (PCRs) you can choose. Have a look at their documentation.
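For anyone wanting to replicate it, a minimal sketch of the enrollment step (device path and PCR selection are examples; the man page documents the stricter PCR policies mentioned above):

    # bind a new LUKS2 key slot to the TPM, sealed against Secure Boot state (PCR 7)
    sudo systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
    # then let the initrd try the TPM at boot via /etc/crypttab:
    #   root  UUID=<luks-uuid>  none  tpm2-device=auto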
For instance, most reads from a media NAS will probably be biased towards newly written files and sequential access (the next episode). This is a use case the CPU cache usually deals with transparently when reading from RAM.
I do this. One mergerfs mount with an ssd and three hdds made to look like one disk. Mergerfs is set to write to the ssd if it’s not full, and read from the ssd first.
A cron job moves out the oldest files on the SSD once per night to the HDDs (via a second mergerfs mount without the SSD) if the SSD is getting full.
I have a fourth hdd that uses snap raid to protect the ssd and other hdds.
Can’t tell you how it worked out performance-wise, because I didn’t really benchmark it. But it was easy enough to set up.
These days I just use SATA SSDs for the whole array.
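For reference, a rough sketch of that kind of mergerfs tiering (mount points, policies and the 30-day retention are all made up):

    # /etc/fstab: ssd-first pool for day-to-day use, hdd-only pool for the mover
    /mnt/ssd:/mnt/hdd1:/mnt/hdd2:/mnt/hdd3  /mnt/pool     fuse.mergerfs  category.create=ff,moveonenospc=true  0 0
    /mnt/hdd1:/mnt/hdd2:/mnt/hdd3           /mnt/archive  fuse.mergerfs  category.create=mfs                   0 0

    # nightly cron job: push files untouched for 30+ days off the ssd onto the hdds
    cd /mnt/ssd && find . -type f -mtime +30 -print0 | \
      rsync -a --from0 --files-from=- --remove-source-files ./ /mnt/archive/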
I was thinking of replacing it with a Asustor FLASHSTOR 12, much more compact form factor and it fits up to 12 NVMes. I will miss TrueNAS though, but it would be so much smaller.
For me, the media library is less than 4TB. I have some datasets that, put together, go to 20TB or so. All this is handled with a microserver with 4 SATA spinning metal drives (and a RAID-1 NVMe card for the OS and).
I would imagine most HN'ers to be closer to the 4TB bracket than the 40TB one. Where do you sit?
Not sure if anyone else has dealt with this and/or how this setup works over wifi.
My first experience with these cheap mini PCs was with a Beelink; it was very positive, but it still makes me question the longevity of the hardware. For a NAS, that's important to me.
The entire cabinet uses under 1kwh/day, costing me under $40/year here, compared to my previous Synology and home-made NAS which used 300-500w, costing $300+/year. Sure I paid about $1500 in total when I bought the QNAP and the NVMe drives but just the electricity savings made the expense worth it, let alone the performance, features etc.
SSD = Solid State Drive
So you're moving from solid state to solid state?
I'm dreaming of this: a mini-NAS connected directly to my TV via HDMI or USB. I think I'd want HDMI and let the NAS handle streaming/decoding. But if my TV can handle enough formats, maybe USB will do.
anyone have experience with this?
I've been using a combination of media server on my Mac with client on Apple TV and I have no end of glitches.
It gets a lot of use in my household. I have my server (a headless Intel iGPU box) running it in docker with the Intel iGPU encoder passed through.
I let the iGPU default to encoding everything in realtime, and now that Plex has automatic subtitle sync, my main source of complaints is gone. I end up with a wide variety of formats as my wife enjoys obscure media.
One of the key things that helped a lot was segregating anime into its own TV collection so that anime-specific defaults can be applied there.
You can also run a client on one of these machines directly, but then you are dealing with desktop Linux.
As you want to bring the data server right to the TV, and you'll output the video via HDMI, just use any PC. There are plenty of them designed for this (usually they're fanless for reducing noise)... search "home theater PC."
You can install Kodi as the interface/organizer for playing your media files. It handles all the formats... the TV is just the output.
A USB CEC adapter will also allow you to use your TV remote with Kodi.
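On a Debian-ish box that's roughly (package names may differ by distro, and the CEC adapter shows up as a USB device):

    sudo apt install kodi cec-utils
    # sanity-check that the TV's CEC bus is visible through the adapter
    echo scan | cec-client -s -d 1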
I just want a backup (with history) of the data-SSD. The backup can be a single drive + perhaps remote storage
Really hoping we see 25/40GBASE-T start to show up, so the lower market segments like this can do 10Gbit. Hopefully we see some embedded Ryzens (or other more PCIe-willing contenders) in this space, at a value-oriented price. But I'm not holding my breath.
Until there is something in this class with PCIe 4.0, I think we're close to maxing out the IO of these devices.
I only came across the existence of this CPU a few months ago. It is nearly in the same price class as an N100, but has a full Alder Lake P-core in addition. It is a shame it seems to only be available in six-port routers; then again, that is probably a pretty optimal application for it.
I want smaller, cooler, quieter, but isn't the key attribute of SSDs their speed? A RAID array of SSDs can surely achieve vastly more than 2.5Gbps.
4 7200 RPM HDDs in RAID 5 (like WD Red Pro) can saturate a 1Gbps link at ~110MBps over SMB 3. But that comes with the heat and potential reliability issues of spinning disks.
I have seen consumer SSDs, namely Samsung 8xx EVO drives have significant latency issues in a RAID config where saturating the drives caused 1+ second latency. This was on Windows Server 2019 using either a SAS controller or JBOD + Storage Spaces. Replacing the drives with used Intel drives resolved the issue.
Probably a silly arrangement but I like it.
2TB SSDs are super cheap. But most systems don't have the expandability to add a bunch of them. So I fully get the incentive here, being able to add multiple drives. Even if you're not reaping additional speed.
But yeah, if you want fast storage just stick the SSD in your workstation, not on a mini PC hanging off your 2.5Gbps network.
*Well, they allowed it on all CPUs, but after Zen 3 they saw how much money Intel was making and joined in. Now you must get a "PRO" CPU to get ECC support, even on mobile (but good luck finding ECC SO-DIMMs).
There was some stuff in DDR5 that made ECC harder to implement (unlike DDR4, where pretty much everything AMD made supported unbuffered ECC by default), but it's still ridiculous how hard it is to find something that supports DDR5 ECC that doesn't suck down 500W at idle.