Intel N150 is the first consumer Atom [1] CPU (in 15 years!) to include TXT/DRTM for measured system launch with owner-managed keys. At every system boot, this can confirm that immutable components (anything from BIOS+config to the kernel to immutable partitions) have the expected binary hash/tree.
TXT/DRTM can enable AEM (Anti Evil Maid) with Qubes, SystemGuard with Windows IoT and hopefully future support from other operating systems. It would be a valuable feature addition to Proxmox, FreeNAS and OPNsense.
Some (many?) N150 devices from Topton (China) ship without Bootguard fused, which _may_ enable coreboot to be ported to those platforms. Hopefully ODROID (Korea) will ship N150 devices. Then we could have fanless N150 devices with coreboot and DRTM for less-insecure [2] routers and storage.
[1] Gracemont (E-core): https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-c... | https://youtu.be/agUwkj1qTCs (Intel Austin architect, 2021)
[2] "Xfinity using WiFi signals in your house to detect motion", 400 comments, https://news.ycombinator.com/item?id=44426726#44427986
With a currently still somewhat hands-on approach, you can set up measured boot that measures everything from the BIOS (and its settings) through the kernel and the initrd, and also the kernel command line parameters.
I currently do not have time for a clear how-to, but some relevant references would be:
https://www.freedesktop.org/software/systemd/man/latest/syst...
https://www.krose.org/~krose/measured_boot
Integrating this better into the Proxmox projects is definitely something I'd like to see sooner or later.
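As a small illustration of what "measured" means here, this is a minimal sketch (assuming a Linux 5.12+ kernel exposing a TPM 2.0 at /sys/class/tpm/tpm0; the paths and PCR range are illustrative, not a recommendation) that dumps the PCR values such a boot chain extends:

```python
# Minimal sketch: dump the TPM2 PCR values that a measured-boot chain extends.
# Assumes Linux >= 5.12 with a TPM 2.0 exposed at /sys/class/tpm/tpm0.
from pathlib import Path

PCR_DIR = Path("/sys/class/tpm/tpm0/pcr-sha256")

def read_pcrs(indices=range(0, 16)):
    """Return {index: hex digest} for the requested PCRs, if present."""
    values = {}
    for i in indices:
        p = PCR_DIR / str(i)
        if p.exists():
            values[i] = p.read_text().strip()
    return values

if __name__ == "__main__":
    for idx, digest in read_pcrs().items():
        # PCRs 0-7 cover firmware/bootloader; higher PCRs hold OS-managed measurements.
        print(f"PCR[{idx:2}] = {digest}")
```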
Where are you seeing devices without Bootguard fused? I'd be very curious to get my hands on some of those...
As a Schrödinger-like property, it may vary by observer and is not publicly documented. One could start with a commercial product that ships with coreboot, then try to find identical hardware from an upstream ODM. A search for "bootguard" or "coreboot" on the ServeTheHome forums, the ODROID/Hardkernel forums, Phoronix or even HN may be helpful.
While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost effective.
You give up so much by using an all-in-one mini device...
No Upgrades, no ECC, harder cooling, less I/O.
I have had a Proxmox server with a used Fujitsu D3417 and 64GB ECC RAM for roughly 5 years now, paid 350 bucks for the whole thing and upgraded the storage once from 1TB to 2TB. It draws 12-14W in normal day-to-day use and has 10 Docker containers and 1 Windows VM running.
So I would prefer a mATX board with ECC, IPMI, 4x NVMe and 2.5GbE over these toy boxes...
However, Jeff's content is awesome as always.
The selling point for the people in the Plex community is that the N100/N150 include Intel's Quick Sync, which gives you hardware video transcoding without a dedicated video card. It'll handle 3 to 4 transcoded 4K streams.
There are several sub-$150 units that allow you to upgrade the RAM, limited to one 32GB stick max. You can use an NVMe-to-SATA adapter to add plenty of spinning rust, or connect it to a DAS.
While I wouldn't throw any VMs on these, you have enough headroom for non-AI home server apps.
Another thing is that unless you have a very specific need for SSDs (such as heavily random access focused workloads, very tight space constraints, or working in a bumpy environment), mechanical hard drives are still way more cost effective for storing lots of data than NVMe. You can get a manufacturer refurbished 12TB hard drive with a multi-year warranty for ~$120, while even an 8TB NVMe drive goes for at least $500. Of course for general-purpose internal drives, NVMe is a far better experience than a mechanical HDD, but my NAS with 6 hard drives in RAIDz2 still gets bottlenecked by my 2.5GBit LAN, not the speeds of the drives.
> […] mechanical hard drives are still way more cost effective for storing lots of data than NVMe.
Linux ISOs?
It depends on what you consider "lots" of data. For >20TB, yes, absolutely, obviously, by a landslide. But if you just want self-hosted Google Drive or Dropbox you're in the 1-4TB range, where mechanical drives are a very bad value as they have a pretty significant price floor. A WD Blue 1TB HDD is $40 while a WD Blue 1TB NVMe is $60. The HDD still has a strict price advantage, but the NVMe drive uses way less power, is more reliable, and doesn't have spin-up time (consumer data is accessed very infrequently, so keeping mechanical drives spinning continuously gets into that awkward zone of questionable worth).
And these prices are getting low enough, especially with these NUC-based solutions, to actually be price-competitive with the low tiers of Drive & Dropbox while also being something you actually own and control. Dropbox still charges $120/yr for the entry-level plan of just 2TB, after all. 3x WD Blue NVMes + an N150 and you're at break-even in 3 years or less.
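A quick back-of-the-envelope using only the prices quoted in this thread (electricity and drive failures ignored) supports the 3-year figure:

```python
# Back-of-the-envelope break-even, using only the numbers mentioned above:
# ~$60 per 1TB WD Blue NVMe, a sub-$150 N150 box, Dropbox at $120/yr for 2TB.
nvme_price = 60          # USD per 1TB WD Blue NVMe
box_price = 150          # USD, sub-$150 N150 mini PC (upper bound)
dropbox_per_year = 120   # USD/yr for the 2TB entry plan

upfront = 3 * nvme_price + box_price          # 330
break_even_years = upfront / dropbox_per_year
print(f"Upfront: ${upfront}, break-even after ~{break_even_years:.1f} years")
# -> Upfront: $330, break-even after ~2.8 years
```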
I appreciate you laying it out like that. I've seen these NVME NAS things mentioned and had been thinking that the reliability of SSDs was so much worse than HDDs.
Low power, low noise, low profile system, LOW ENTRY COST. I can easily get a beelink me mini or two and build a NAS + offsite storage.
Two 1TB SSDs for a mirror are around 100€, two new 1TB HDDs are around 80€.
You are thinking in dimensions normal people have no need for. The numbers alone speak volumes: 12TB, 6 HDDs, 8TB NVMes, 2.5GbE LAN.
Don’t forget about power. If you’re trying to build a low-power NAS, those HDDs idle around 5W each, while an SSD is closer to 5mW. Once you’ve got a few disks, the HDDs can account for half the power or more. The cost penalty for 2TB or 4TB SSDs is still big, but not as bad as at the 8TB level.
such power claims are problematic - you're not letting the HDs spin down, for instance, and not crediting the fact that an SSD may easily dissipate more power than an HD under load. (in this thread, the host and network are slow, so it's not relevant that SSDs are far faster when active.)
I experimented with spindowns, but the fact is, many applications need to write to disk several times per minute. Because of this I only use SSDs now. Archived files are moved to the cloud. I think Google Drive is one of the best alternatives out there, as it has true data streaming built into the macOS and Windows clients. It feels like an external hard drive.
There's a lot of "never let your drive spin down! They need to be running 24/7 or they'll die in no time at all!" voices in the various homelab communities sadly.
Even the lower-tier IronWolf drives from Seagate specify 600k load/unload cycles (not spin-down, granted, but it gives an idea of the longevity).
Is there any (semi-)scientific proof to that (serious question)? I did search a lot on this topic but found nothing...
Here is someone that had significant corruption until they stopped: https://www.xda-developers.com/why-not-to-spin-down-nas-hard...
There are many similar articles.
I wonder if it has to do with the type of HDD. The red NAS drives may not like to be spun down as much. I spin down my drives and have not had a problem except for one drive, after 10 years continuous running, but I use consumer desktop drives which probably expect to be cycled a lot more than a NAS.
I wonder if they were just hit with the bathtub curve?
Or perhaps the fact that my IronWolf drives are 5400rpm rather than 7200rpm means they're still going strong after 4 years with no issues spinning down after 20 minutes.
Or maybe I'm just insanely lucky? Before I moved to my desktop machine being 100% SSD I used hard drives for close to 30 years and never had a drive go bad. I did tend to use drives for a max of 3-5 years though before upgrading for more space.
Letting hdds spin down is generally not advisable in a NAS, unless you access it really rarely perhaps.
Spin down isn't as problematic today. It really depends on your setup and usage.
If the stuff you access often can be cached to SSDs, you rarely access the spinning disks.
Depending on your file system and operating system only drives that are in use can be spun up. If you have multiple drive arrays with media some of it won't be accessed as often.
In an enterprise setting it generally doesn't make sense. For a home environment disks you generally don't access the data that often. Automatic downloads and seeding change that.
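If you do want to experiment with spindown, here is a hedged sketch of setting a 20-minute timeout via hdparm (device names are placeholders, it needs root, and hdparm's -S encoding uses 5-second steps up to a value of 240):

```python
# Hypothetical helper: set an HDD spindown timeout with hdparm (run as root).
# hdparm -S takes 1-240 as multiples of 5 seconds, so 240 = 20 minutes;
# 0 disables spindown entirely. Device paths below are placeholders.
import subprocess

def set_spindown(device: str, minutes: int = 20) -> None:
    if not 1 <= minutes <= 20:
        raise ValueError("values 1-240 cover 5s steps, i.e. up to 20 minutes")
    ticks = minutes * 60 // 5
    subprocess.run(["hdparm", "-S", str(ticks), device], check=True)

for dev in ["/dev/sda", "/dev/sdb"]:   # placeholder device names
    set_spindown(dev, minutes=20)
```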
It's probably decades-old anecdata from people who recommissioned old drives that were on the shelf for many years. The theory is that the grease on the spindle dries up and seizes the platters.
I've put all of my surveillance cameras on one volume in _hopes_ that I can let my other volumes spin down. But nope. They spend the vast majority of their day spinning.
Did you consider ZFS with L2ARC? The extra caching device might make this possible...
That's not how L2ARC works. It's not how the ZIL SLOG works, either.
If a read request can be filled by the OS cache, it will be. Then it will be filled by the ARC, if possible. Then it will be filled by the L2ARC, if it exists. Then it will be filled by the on-disk cache, if possible; finally, it will be filled by a read.
An async write will eventually be flushed to a disk write, possibly after seconds of realtime. The ack is sent after the write is complete... which may be while the drive has it in a cache but hasn't actually written it yet.
A sync write will be written to the ZIL SLOG, if it exists, while it is being written to the disk. It will be acknowledged as soon as the ZIL finishes the write. If the SLOG does not exist, the ack comes when the disk reports the write complete.
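A toy sketch of that read-priority chain (not real ZFS internals, just the fallback order described above):

```python
# Toy model of the read priority described above (not actual ZFS code):
# OS page cache -> ARC -> L2ARC -> on-disk cache -> actual disk read.
def serve_read(block_id, os_cache, arc, l2arc, disk_cache, disk):
    for tier in (os_cache, arc, l2arc, disk_cache):
        if tier is not None and block_id in tier:
            return tier[block_id]
    return disk[block_id]  # final fallback: a real read from the pool
```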
I think you're right generally, but I wanna call out the ODROID H4 models as an exception to a lot of what you said. They are mostly upgradable (SODIMM RAM, SATA ports, M.2 2280 slots), and it does support in-band ECC which kinda checks the ECC box. They've got a Mini-ITX adapter for $15 so it can fit into existing cases too.
No IPMI and not very many NVME slots. So I think you're right that a good mATX board could be better.
Well, if you would like to go mini (with ECC and 2.5G) you could take a look at this one:
https://www.aliexpress.com/item/1005006369887180.html
Not totally upgradable, but at least pretty low-cost and modern, with an optional SATA + NVMe combination for Proxmox. Shovel in an enterprise SATA drive and a consumer 8TB WD SN850X and this should work pretty well. Even Optane is supported.
IPMI could be replaced with a NanoKVM or JetKVM...
That looks pretty slick with a standard heatsink/fan for the CPU, thanks for sharing.
Not sure about the ODROID, but I got myself the NAS kit from FriendlyElec. With the largest RAM option it was about 150 bucks, and it comes with 2.5G Ethernet and 4 NVMe slots. No fan, and it keeps fairly cool even under load.
I'm running it with encrypted ZFS volumes, and even with a 5-bay 3.5-inch HDD dock attached via USB.
https://wiki.friendlyelec.com/wiki/index.php/CM3588_NAS_Kit
You can get a 1 -> 4 M.2 adapter for these as well which would give each one a 1x PCIe lane (same as all these other boards). If you still want spinning rust, these also have built-in power for those and SATA ports so you only need a 12-19v power supply. No idea why these aren't more popular as a basis for a NAS.
No ECC is the biggest trade-off for me, but the C236 Express chipset has very little choice of CPUs; they are all 4-core, 8-thread. I've got multiple X99-platform systems and for a long time they were the king of cost efficiency, but lately the Ryzen laptop chips are becoming too good to pass up, even without ECC, e.g. Ryzen 5825U minis.
For a home NAS, ECC is as needed as it is on your laptop.
ECC is essential indeed for any computer. But the laptop situation is truly dire, while it's possible to find some NAS with ECC support.
Most computers don't have ECC. So it might be essential in theory but in practice things work fine without (for standard personal, even work, use cases).
These little boxes are perfect for my home.
My use case is a backup server for my Macs and cold storage for movies.
6x 2TB drives will give me a 9TB RAID-5 for $809 ($100 each for the drives, $209 for the NAS).
Very quiet, so I can have it in my living room plugged into my TV. <10W power.
I have no room for a big noisy server.
While I get your point about size, I'd not use RAID-5 for my personal homelab. I'd also say that 6x2TB drives are not the optimal solution for low power consumption. You're also missing out on a server-quality BIOS, design/stability/x64 and remote management. However, not bad.
While my server is quite big compared to a "mini" device, it's silent. No CPU fan, only 120mm case fans spinning around 500rpm, maybe 900rpm under load - hardly noticeable. I also have a completely passive backup solution with a Streacom FC5, but I don't really trust it for the chipsets, so I also installed a low-rpm 120mm fan.
How did you fit 6 drives in a "mini" case? Using Asus Flashstor or beelink?
I'm interested in learning more about your setup. What sort of system did you put together for $350? Is it a normal ATX case? I really like the idea of running proxmox but I don't know how to get something cheap!
My current config:
For backup I use a 2TB enterprise HDD and ZFS send. For snapshotting I use zfs-auto-snapshot.
So really nothing recommendable for buying today. You could go for this
https://www.aliexpress.com/item/1005006369887180.html
Or an old Fujitsu Celsius W580 workstation with a Bojiadafast ATX power supply adapter, if you need hard disks.
Unfortunately there is no silver bullet these days. The old stuff is... well, too old or no longer available, and the new stuff is either too pricey, lacks features (ECC and 2.5G mainly) or is too power hungry.
A year ago there were bargains for Gigabyte MC12-LE0 board available for < 50bucks, but nowadays these cost about 250 again. These boards also had the problem of drawing too much power for an ultra low power homelab.
If I HAD to buy one today, I'd probably go for a Ryzen Pro 5700 with a gaming board (like ASUS ROG Strix B550-F Gaming) with ECC RAM, which is supported on some boards.
ZFS is better than raw RAID, but 1 parity per 5 data disks is a pretty good match for the reliability you can expect out of any one machine.
Much more important than better parity is having backups. Maybe more important than having any parity, though if you have no parity please use JBOD and not RAID-0.
I'd almost always use RAID-1 or, if I had >4 disks, maybe RAID-6. RAID-5 seems very cost effective at first, but if you lose a drive the probability of losing another one in the restoring process is pretty high (I don't have the numbers, but I researched that years ago). The disk-replacement process puts very high load on the non-defective disks, and the more you have, the riskier the process. Another aspect is that 5 drives draw way more power than 2, and you cannot (easily) upgrade the capacity, although ZFS now offers RAIDZ expansion.
Since RAID is not meant for backup, but for reliability, losing a drive while restoring will kill your storage pool, and having to restore the whole data set from a backup (e.g. from a cloud drive) is probably not what you want, since it takes time during which the device is offline. If you rely on RAID5 without having a backup, you're done.
So I have a RAID1, which is simple, reliable and easy to maintain. Replacing 2 drives with higher capacity ones and increasing the storage is easy.
I would run 2 or more parity disks always. I have had disks fail and rebuilding with only one parity drive is scary (have seen rebuilds go bad because a second drive failed whilst rebuilding).
Were those arrays doing regular scrubs, so that they experience rebuild-equivalent load every month or two and it's not a sudden shock to them?
If your odds of disk failure in a rebuild are "only" 10x normal failure rate, and it takes a week, 5 disks will all survive that week 98% of the time. That's plenty for a NAS.
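Rough math behind that figure, assuming a ~2% annual failure rate per disk (an assumed number, not from this thread) and a 10x elevated rate for the one-week rebuild:

```python
# Rough math behind the "98%" figure above, assuming ~2% annual failure
# rate per disk and a 10x elevated rate during a one-week rebuild.
afr = 0.02
weekly_failure = afr / 52
rebuild_failure = 10 * weekly_failure        # ~0.4% per disk for that week
disks = 5
all_survive = (1 - rebuild_failure) ** disks
print(f"P(all {disks} disks survive the rebuild week) = {all_survive:.1%}")
# -> about 98.1%
```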
If the drives are the same age and large parts of a drive haven't been read for a long time until the rebuild, you might find it has already failed. Anecdotally, around 12 years ago the chance of a second disk failing during a RAID 5 rebuild (in our setup) was probably more like 10-20%.
My goal isn't to be rude, but when you skip over a critical part of what I'm saying it causes a communication issue. Are you correcting my numbers, or intentionally giving numbers for a completely different scenario, or something in between? Is it none of those and you weren't taking my comment seriously enough to read 50 words? The way you replied made it hard to tell.
So I made a simple comment to point out the conflict, a little bit rude but not intended to escalate the level of rudeness, and easier for both of us than writing out a whole big thing.
I agreed with this generally until learning the long way why RAID 5 is the minimum needed for some peace of mind, and why you should always get a NAS with at least 1-2 more bays than you need.
I've had a synology since 2015. Why, besides the drives themselves, would most home labs need to upgrade?
I don't really understand the general public, or even most use cases, requiring upgrade paths beyond "get a new device."
By the time the need to upgrade comes, the tech stack is likely faster and you're basically just talking about gutting the PC and doing everything over again, except maybe power supply.
An 80 Plus Bronze 300W unit can still be more efficient than a 750W 80 Plus Platinum at mostly low loads. Additionally, some devices are way more efficient than they are certified for. A well-known example is the Corsair RM550x (2021).
If your peak power draw is <200W, I would recommend an efficient <450W power supply.
Another aspect: Buying a 120 bucks power supply that is 1.2% more efficient than a 60 bucks one is just a waste of money.
Understandable... Well, the bottleneck for a Proxmox Server often is RAM - sometimes CPU cores (to share between VMs). This might not be the case for a NAS-only device.
Another upgrade path is to keep the case, fans, cooling solution and only switch Mainboard, CPU and RAM.
I'm also not a huge fan of non-x64 devices, because they still often require jumping through some hoops regarding boot order, booting from external devices, or behavior after power loss.
Should a mini-NAS be considered a new type of thing with a new design goal? He seems to be describing about a desktop worth of storage (6TB), but always available on the network and less power consuming than a desktop.
This seems useful. But it seems quite different from his previous (80TB) NAS.
What is the idle power draw of an SSD anyway? I guess they usually have a volatile ram cache of some sort built in (is that right?) so it must not be zero…
> Should a mini-NAS be considered a new type of thing with a new design goal?
Small/portable low-power SSD-based NASs have been commercialized since 2016 or so. Some people call them "NASbooks", although I don't think that term ever gained critical MAS (little joke there).
HDD-based NASes are used for all kinds of storage amounts, from as low as 4TB to hundreds of TB. The SSD NASes aren’t really much different in use case, just limited in storage amount by available (and affordable) drive capacities, while needing less space, being quieter, but having a higher cost per TB.
Not really seeing that in these minis. Either the devices under test haven't been optimized for low power, or their Linux installs have non-optimal configs for low power. My NUC 12 draws less than 4W, measured at the wall, when operating without an attached display and with Wi-Fi but no wired network link. All three of the boxes in the review use at least twice as much power at idle.
Looks like it only draws 45W, which could allow this to be powered over PoE++ with a splitter, but it has an integrated AC input and PSU - that's impressive regardless, considering how small it is, but it's not set up for PD or PoE.
I've been running one of these quad-NVMe mini-NAS boxes for a while. They're a good compromise if you can live with no ECC. With some DIY shenanigans they can even run fanless.
If you're running on consumer NVMe drives then mirrors are probably a better idea than RAIDZ, though. Write amplification can easily shred consumer drives.
I’m a TrueNAS/FreeNAS user, currently running an ECC system. The traditional wisdom is that ECC is a must-have for ZFS. What do you think? Is this outdated?
Been running without it for 15+ years on my NAS boxes, built using my previous desktop hardware fitted with NAS disks.
They're on 24/7 and run monthly scrubs, as well as monthly checksum verification of my backup images, and I've not noticed any issues so far.
I had some correctable errors which got fixed when changing SATA cable a few times, and some from a disk that after 7 years of 24/7 developed a small run of bad sectors.
That said, you got ECC so you should be able to monitor corrected memory errors.
Matt Ahrens himself (one of the creators of ZFS) had said there's nothing particular about ZFS:
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
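For completeness, a sketch of flipping that (explicitly unsupported) debug flag on Linux; /sys/module/zfs/parameters/zfs_flags is the module parameter the quote refers to, and this only makes sense for experimentation:

```python
# Unsupported debug option mentioned above: checksum data while in memory.
# 0x10 == 16; OR it into the existing flags. Requires root and a loaded zfs module.
from pathlib import Path

param = Path("/sys/module/zfs/parameters/zfs_flags")
current = int(param.read_text().strip(), 0)   # handles decimal or 0x-prefixed values
param.write_text(str(current | 0x10))
```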
That traditional wisdom is wrong. ECC is a must-have for any computer. The only reason people think ECC is mandatory for ZFS is because it exposes errors due to inherent checksumming and most other filesystems don't, even if they suffer from the same problems.
Don't you have to read that data into RAM before you can generate the CRC? Which means without ECC it could get silently corrupted on the way to the cache?
ECC is a must-have if you want to minimize the risk of corruption, but that is true for any filesystem.
Sun (and now Oracle) officially recommended using ECC ever since it was intended to be an enterprise product running on 24/7 servers, where it makes sense that anything that is going to be cached in RAM for long periods is protected by ECC.
In that sense it was a "must-have", as business-critical functions require that guarantee.
Now that you can use ZFS on a number of operating systems, on many different architectures, even a Raspberry Pi, the business-critical-only use-case is not as prevalent.
ZFS doesn't intrinsically require ECC but it does trust that the memory functions correctly which you have the best chance of achieving by using ECC.
One way to look at it is ECC has recently become more affordable due to In-Band ECC (IBECC) providing ECC-like functionality for a lot of newer power efficient Intel CPUs.
Not every new CPU has it: for example, the Intel N95, N97, N100, N200, i3-N300, and i3-N305 all have it, but the N150 doesn't!
It's kind of disappointing that, of the low-power NAS devices reviewed here, the only one with support for IBECC had a limited BIOS that most likely was missing this option. The ODROID H4 series, CWWK NAS products, AOOSTAR, and various N100 ITX motherboards all support it.
Is this "full" ECC, or just the baseline improved ECC that all DDR5 has?
Either way, on my most recent NAS build, I didn't bother with a server-grade motherboard, figuring that the standard consumer DDR5 ECC was probably good enough.
This is full ECC, the CPU supports it (AMD Pro variant).
DDR5 on-die ECC is not good enough. What if you have faulty RAM and ECC is constantly correcting it without you knowing? There's no value in that. You need the OS to be informed so that you are aware of it. It also does not protect against errors which occur between the RAM and the CPU.
This is similar to HDDs using ECC. Without SMART you'd have a problem, but part of SMART is that it allows you to get a count of ECC-corrected errors so that you can be aware of the state of the drive.
True ECC takes the role of SMART with regard to RAM; it's just that it only reports that: ECC-corrected errors.
On a NAS, where you likely store important data, true ECC does add value.
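One concrete way to check that the OS is actually being told about corrected errors (this assumes Linux with an EDAC driver loaded for your memory controller; the counters simply don't appear otherwise):

```python
# Read the corrected/uncorrected error counters Linux EDAC exposes per
# memory controller; on a working side-band ECC setup these should exist.
from pathlib import Path

mc_root = Path("/sys/devices/system/edac/mc")
for mc in sorted(mc_root.glob("mc[0-9]*")):
    ce = mc / "ce_count"   # corrected errors
    ue = mc / "ue_count"   # uncorrected errors
    if ce.exists() and ue.exists():
        print(f"{mc.name}: corrected={ce.read_text().strip()} "
              f"uncorrected={ue.read_text().strip()}")
```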
The DDR5 on-die ECC doesn’t report memory errors back to the CPU, which is why you would normally want ECC RAM in the first place. Unlike traditional side-band ECC, it also doesn’t protect the memory transfers between CPU and RAM. DDR5 requires the on-die ECC in order to still remain reliable in face of its chip density and speed.
The Aoostar WTR max is pretty beefy, supports 5 nvme and 6 hard drives, and up to 128GB of ECC ram. But it’s $700 bare bones, much more than these devices in the article.
Aoostar WTR series is one change away from being the PERFECT home server/nas. Passing the storage controller IOMMU to a VM is finicky at best. Still better than the vast majority of devices that don't allow it at all. But if they do that, I'm in homelab heaven. Unfortunately, the current iteration cannot due to a hardware limitation in the AMD chipset they're using.
I have the pro. I'm not sure if the Max will do passthrough but a quick google seems to indicate that it won't. (There's a discussion on the proxmox forum)
One of the arm ones is yes. Can't for the life of me remember which though - sorry - either something in bananapi or lattepanda part of universe I think
This discussion got me curious: how much data are you all hoarding?
For me, the media library is less than 4TB. I have some datasets that, put together, go to 20TB or so. All this is handled with a microserver with 4 SATA spinning-metal drives (and a RAID-1 NVMe card for the OS).
I would imagine most HN'ers to be closer to the 4TB bracket than the 40TB one. Where do you sit?
That's cool, except that NAND memory is horrible for hoarding data. It has to be powered all the time, as the cells need to be refreshed periodically, and if you exceed a threshold of around 80 percent occupied storage you get a huge performance penalty due to the internal memory organization.
My main challenge is that we don't have wired ethernet access in our rooms so even if I bought a mini-NAS and attached it to the router over ethernet, all "clients" will be accessing it over wifi.
Not sure if anyone else has dealt with this and/or how this setup works over wifi.
There are also powerline Ethernet adapters that are well above 300Mbps. Even if that's the nameplate speed on your WiFi router, you won't have as much fluctuation on the powerline. But both adapters need to be on the same breaker.
Related question: does anyone know of a USB-C power bank that can be effectively used as a UPS? That is to say, one that is able to charge while maintaining power to the load (obviously with a rate of charge greater by a few watts than the load).
Most models I find reuse the most powerful USB-C port as... the recharging port, so they're unusable as a DC UPS.
Context: my home server is my old https://frame.work motherboard running proxmox VE with 64GB RAM and 4 TB NVME, powered by usb-c and drawing ... 2 Watt at idle.
This isn't a power bank, but the EcoFlow River makes for a great mobile battery pack for many uses (like camping, road trips, etc.) and also qualifies as a UPS (which means it has to be able to switch over to battery power within a certain number of milliseconds... I'm not sure of that part, but professional UPSs switch over in <10ms. I think EcoFlow is <30ms, but I'm not 100% sure).
I've had the River Pro for a few months and it's worked perfectly for that use case. And UnRaid supports it as of a couple months ago.
Powerbank is a wrong keyword here, what you want to look for is something like “USB-C power supply with battery”, “USB-C uninterruptible power supply”, etc.
Lots of results on Ali for a query “usb-c ups battery”.
These are cute, I'd really like to see the "serious" version.
Something like a Ryzen 7745, 128gb ecc ddr5-5200, no less than two 10gbe ports (though unrealistic given the size, if they were sfp+ that'd be incredible), drives split across two different nvme raid controllers. I don't care how expensive or loud it is or how much power it uses, I just want a coffee-cup sized cube that can handle the kind of shit you'd typically bring a rack along for. It's 2025.
Not the "cube" sized, but surprisingly small still. I've got one under the desk, so I don't even register it is there. Stuffed it with 4x 4TB drives for now.
I use a 12600H MS-01 with 5x4tb nvme. Love the SFP+ ports since the DAC cable doesn't need ethernet to SFP adapters. Intel vPro is not perfect but works just fine for remote management access. I also plug a bus powered dual ssd enclosure to it which is used for Minio object storage.
It's a file server (when did we start calling these "NAS"?) with Samba and NFS, but also some database stuff. No VMs or Docker. Just a file and database server.
It has full-disk encryption with TPM unlocking using my custom keys, so it can boot unattended. I'm quite happy with it.
An evil maid is not on my threat level. I'm more worried about a burglar getting into my house and stealing my stuff and my data with it. It's a 1l PC with more than 10TBs of data so it fits in a small bag.
I start with normal full disk encryption and enrolling my secure boot keys into the device (no vendor or MS keys) then I use systemd-cryptenroll to add a TPM2 key slot into the LUKS device. Automatic unlock won't happen if you disable secure boot or try to boot anything other than my signed binaries (since I've opted to not include the Microsoft keys).
systemd-cryptenroll has a bunch of stricter security levels you can choose (PCRs). Have a look at their documentation.
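A hedged sketch of that enrollment step as a thin wrapper (the device path is a placeholder; binding to PCR 7 ties unlocking to the Secure Boot state, and the man page documents stricter PCR combinations):

```python
# Sketch of the TPM2 enrollment step described above, wrapping
# systemd-cryptenroll. Device path and PCR list are examples, adjust to taste.
import subprocess

def enroll_tpm2(luks_device: str, pcrs: str = "7") -> None:
    subprocess.run(
        ["systemd-cryptenroll",
         "--tpm2-device=auto",
         f"--tpm2-pcrs={pcrs}",
         luks_device],
        check=True,
    )

enroll_tpm2("/dev/nvme0n1p2")   # placeholder LUKS device
```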
Which SSDs do people rely on? Considering PLP (power loss protection), write endurance/DWPD (no QLC), and other bugs that affect ZFS especially? It is hard to find options that do these things well for <$100/TB, with lower-end datacenter options (e.g., Samsung PM9A3) costing maybe double what you see in a lot of builds.
QNAP TS435XeU 1U short-depth NAS based on Marvell CN913x (SoC successor to Armada A388) with 4xSATA, 2xM.2, 2x10GbE, optional ECC RAM and upstream Linux kernel support, https://news.ycombinator.com/item?id=43760248
I was about to order that GMKtec G9 and then saw Jeff's video about it on the same day. All those issues, even with the later fixes he showed, are a big no-no for me. Instead, I went with an ODROID H4-Ultra with an Intel N305, 48GB Crucial DDR5 and 4x 4TB Samsung 990 Evo SSDs (low power usage) + a 2TB SATA SSD to boot from. Yes, the SSDs are way overkill and pretty expensive at $239 per Samsung 990 Evo (got them with a deal at Amazon). It's running TrueNAS.
I am somewhat space-limited with this system, didn't want spinning disks (as the whole house slightly shakes when pickup or trash trucks pass by), wanted a fun project and I also wanted to go as small as possible.
No issues so far. The system is completely stable. Though, I did add a separate fan at the bottom of the Odroid case to help cool the NVMe SSDs. Even with the single lane of PCIe, the 2.5gbit/s networking gets maxed out. Maybe I could try bonding the 2 networking ports but I don't have any client devices that could use it.
I had an eye on the Beelink ME Mini too, but I don't think the NVMe disks are sufficiently cooled under load, especially on the outer side of the disks.
Yes, it is a cheaply built wooden house, as is typical in Southern California, from the 70s. The backside of the house is directly at the back alley, where the trash bins and through traffic are (see [1] for an example). The NAS is maybe 4m away from the trash bins.
Fair point! I agree.
The NAS sits in a corner with no airflow/ventilation and there is no AC here. In the corner it does get 95F-105F in late summer and I did not want to take the risk of it getting too hot.
Question regarding these mini PCs: how do you connect them to plain old hard drives? Is Thunderbolt/USB these days reliable enough to run 24/7 without disconnects, like onboard SATA?
I've run a massive farm (2 petabytes) of ZFS on FreeBSD servers with RAIDZ over consumer USB for about fifteen years and haven't had a problem: directly attaching to the motherboard USB ports and using good but boring controllers on the drives, like the WD Elements series.
Good question. I imagine for the silence and low power usage without needing huge amounts of storage. That said, I own an n100 dual 3.5 bay + m.2 mini PC that can function as a NAS or as anything and I think it's pretty neat for the price.
I have experienced them - I have a B650 AM5 motherboard, and if I connect an Orico USB HDD enclosure to the fastest USB ports, the ones coming directly from the AMD CPU (yes, that's a thing now), after 5-10 min the HDD just disappears from the system. It doesn't happen on the other USB ports.
Well, AMD makes a good core but there are reasons that Intel is preferred by some users in some applications, and one of those reasons is that the peripheral devices on Intel platforms tend to work.
> Testing it out with my disk benchmarking script, I got up to 3 GB/sec in sequential reads.
To be sure... is the data compressible, or repeated? I have encountered an SSD that silently performed compression on the data I wrote to it (verified by counting its stats on blocks written). I don't know if there are SSDs that silently deduplicate the data.
(An obvious solution is to copy data from /dev/urandom. But beware of the CPU cost of /dev/urandom; on a recent machine, it takes 3 seconds to read 1GB from /dev/urandom, so that would be the bottleneck in a write test. But at least for a read test, it doesn't matter how long the data took to write.)
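A sketch of that workaround: generate one incompressible buffer up front with os.urandom and reuse it for the timed writes (note that a dedup-capable drive could still recognize the repeated chunk; the target path is a placeholder):

```python
# Sketch of the workaround suggested above: make one incompressible buffer
# once (cheap) instead of streaming /dev/urandom during the timed write,
# so the CPU isn't the bottleneck. Target path is a placeholder.
import os, time

CHUNK = os.urandom(64 * 1024 * 1024)   # 64 MiB of random data, generated once
TOTAL = 8 * 1024**3                    # write 8 GiB in total

start = time.monotonic()
with open("/mnt/nas/testfile", "wb") as f:   # placeholder mount point
    written = 0
    while written < TOTAL:
        f.write(CHUNK)
        written += len(CHUNK)
    f.flush()
    os.fsync(f.fileno())
elapsed = time.monotonic() - start
print(f"{TOTAL / elapsed / 1e9:.2f} GB/s sequential write")
```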
I’ve been always puzzled by the strange choice of raiding multiple small capacity M.2 NVMe in these tiny low-end Intel boxes with severely limited PCIe lanes using only one lane per SSD.
Why not a single large-capacity M.2 SSD using 4 full lanes, and proper backup with a cheaper, larger-capacity and more reliable spinning disk?
The latest small M.2 NAS’s make very good consumer grade, small, quiet, power efficient storage you can put in your living room, next to the tv for media storage and light network attached storage.
It’d be great if you could fully utilise the M.2 speed but they are not about that.
I still think it's highly underrated to use fs-cache with NASes (usually configured with cachefilesd) for some local, dynamically scaling client-side NVMe caching.
It helps a ton with response times on any NAS that's primarily spinning rust, especially if dealing with a decent amount of small files.
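For reference, a sketch of the client side (assuming your distro ships a cachefilesd unit and the share is NFS; server, export, and mount point are placeholders):

```python
# Sketch: start the local cache daemon, then mount the NFS share with "fsc"
# so reads get cached on the client's NVMe. All names below are placeholders.
import subprocess

subprocess.run(["systemctl", "enable", "--now", "cachefilesd"], check=True)
subprocess.run(
    ["mount", "-t", "nfs", "-o", "fsc,vers=4.2",
     "nas.local:/tank/media", "/mnt/media"],
    check=True,
)
```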
I was recently looking for a mini PC to use as a home server with, extendable storage. After comparing different options (mostly Intel), I went with the Ryzen 7 5825U (Beelink SER5 Pro) instead. It has an M.2 slot for an SSD and I can install a 2.5" HDD too. The only downside is that the HDD is limited by height to 7 mm (basically 2 TB storage limit), but I have a 4 TB disk connected via USB for "cold" storage. After years of using different models with Celeron or Intel N CPUs, Ryzen is a beast (and TDP is only 15W). In my case, AMD now replaced almost all the compute power in my home (with the exception of the smartphone) and I don't see many reasons to go back to Intel.
Is it possible (and easy) to make a NAS with harddrives for storage and an SSD for cache? I don't have any data that I use daily or even weekly, so I don't want the drives spinning needlessly 24/7, and I think an SSD cache would stop having to spin them up most of the time.
For instance, most reads from a media NAS will probably be biased towards both newly written files, and sequentially (next episode). This is a use case CPU cache usually deals with transparently when reading from RAM.
I do this. One mergerfs mount with an ssd and three hdds made to look like one disk. Mergerfs is set to write to the ssd if it’s not full, and read from the ssd first.
A cron job moves the oldest files off the SSD once per night to the HDDs (via a second mergerfs mount without the SSD) if the SSD is getting full.
I have a fourth HDD that uses SnapRAID to protect the SSD and the other HDDs.
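A rough sketch of that nightly mover, with made-up paths and an assumed 80% usage threshold (the real setup moves files via a second mergerfs mount that excludes the SSD branch):

```python
# Nightly "move oldest files off the SSD" job, roughly as described above.
# Paths and the 80% threshold are placeholder examples.
import os
import shutil
from pathlib import Path

SSD = Path("/mnt/ssd")            # cache branch
HDD_POOL = Path("/mnt/hdd-pool")  # mergerfs mount without the SSD branch
THRESHOLD = 0.80                  # start evicting above 80% usage

def ssd_usage() -> float:
    st = os.statvfs(SSD)
    return 1 - st.f_bavail / st.f_blocks

def evict_oldest() -> None:
    files = sorted(
        (p for p in SSD.rglob("*") if p.is_file()),
        key=lambda p: p.stat().st_mtime,   # oldest first
    )
    for f in files:
        if ssd_usage() < THRESHOLD:
            break
        dest = HDD_POOL / f.relative_to(SSD)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), dest)          # preserves the relative path

if __name__ == "__main__":
    evict_oldest()
```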
Thanks. I looked it up and it seems that lvmcache uses dm-cache and is easier to use, I guess putting that in front of some kind of RAID volume could be a good solution.
NVMe NAS is completely and totally pointless with such crap connectivity.
What in the WORLD is preventing these systems from getting at least 10gbps interfaces? I have been waiting for years and years and years and years and the only thing on the market for small systems with good networking is weird stuff that you have to email Qotom to order direct from China and _ONE_ system from Minisforum.
I'm beginning to think there is some sort of conspiracy to not allow anything smaller than a full size ATX desktop to have anything faster than 2.5gbps NICs. (10gbps nics that plug into NVMe slots are not the solution.)
>What in the WORLD is preventing these systems from getting at least 10gbps interfaces?
Price and price. Like another commenter said, there is at least one 10Gbe mini NAS out there, but it's several times more expensive.
What's the use case for the 10GbE? Is ~200MB/sec not enough?
I think the segment for these units is low price, small size, shared connectivity. The kind of thing you tuck away in your house invisibly and silently, or throw in a bag to travel with if you have a few laptops that need shared storage. The thinking is probably that people with high performance needs already have fast NVMe local storage.
You can order the Mac mini with 10gbps networking and it has 3 thunderbolt 4 ports if you need more. Plus it has an internal power supply making it smaller than most of these mini PCs.
That's what I'm running as my main desktop at home, and I have an external 2TB TB5 SSD, which gives me 3 GB/sec.
If I could get the same unit for like $299 I'd run it like that for my NAS too, as long as I could run a full backup to another device (and a 3rd on the cloud with Glacier of course).
It's annoying, around 10 years ago 10gbps was just starting to become more and more standard on bigger NAS, and 10gbps switches were starting to get cheaper, but then 2.5GbE came out and they all switched to that.
That's because 10GbE tech is not there yet. Everything overheats and drops out all the time, while 2.5GbE just works. In several years from now, this will all change, of course.
It especially sucks when even low end mini PCs have at least multiple 5Gbps USB ports, yet we are stuck with 1Gbps (or 2.5, if manufacturer is feeling generous) ethernet. Maybe IP over Thunderbolt will finally save us.
> Copper 10gig is power hungry and demands good cabling.
Power hungry yes, good cabling maybe?
I run 10G-Base-T on two Cat5e runs in my house that were installed circa 2001. I wasn't sure it would work, but it works fine. The spec is for 100 meter cable in dense conduit. Most home environments with twisted pair in the wall don't have runs that long or very dense cabling runs, so 10g can often work. Cat3 runs probably not worth trying at 10G, but I've run 1G over a small section of cat3 because that's what was underground already.
I don't do much that really needs 10G, but I do have a 1G symmetric connection, and I can put my NAT on a single 10G physical connection and also put my backup NAT router in a different location with only one cable run there... the NAT routers also do NAS and backup duty, so I can have a little bit of physical separation between them, plus I can reboot one at a time without losing NAT.
Economical consumer oriented 10g is coming soon, lots of announcements recently and reasonableish products on aliexpress. All of my current 10G NICs are used enterprise stuff, and the switches are used high end (fairly loud) SMB. I'm looking forward to getting a few more ports in the not too distant future.
I think the N100 and N150 suffer the same weakness for this type of use case in the context of SSD storage and 10Gb networking. We need a next-generation chip that can offer more PCIe lanes with roughly the same power efficiency.
I would remove points for a built-in non-modular standardized power supply. It's not fixable, and it's not comparable to Apple in quality.
I am currently running an 8x 4TB NVMe NAS via OpenZFS on TrueNAS (Linux). It is good, but my box is quite large. I built it with a standard AMD motherboard using both the built-in NVMe slots and a bunch of PCIe expansion cards. It is very fast.
I was thinking of replacing it with a Asustor FLASHSTOR 12, much more compact form factor and it fits up to 12 NVMes. I will miss TrueNAS though, but it would be so much smaller.
Whenever these things come up I have to point out that most of these manufacturers don't do BIOS updates. Since Spectre/Meltdown we see CPU and BIOS vulnerabilities every few months to a year.
I know you can patch microcode at runtime/boot, but I don't think that covers all vulnerabilities.
I've been thinking about moving my NAS from HDDs to solid state. The drives are so loud, all the time; it's very annoying.
My first experience with these cheap mini PCs was with a Beelink, and while it was very positive, it makes me question the longevity of the hardware. For a NAS, that's important to me.
I've been using a QNAP TBS-464 [1] for 4 years now with excellent results. I have 4x 4TB NVMe drives and get about 11TB usable after RAID. It gets slightly warm but I have it in my media cabinet with a UPS, Mikrotik router, PoE switches, and ton of other devices. Zero complaints about this setup.
The entire cabinet uses under 1kWh/day, costing me under $40/year here, compared to my previous Synology and home-made NAS which used 300-500W, costing $300+/year. Sure, I paid about $1500 in total when I bought the QNAP and the NVMe drives, but just the electricity savings made the expense worth it, let alone the performance, features, etc.
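Those numbers line up if electricity is around $0.10/kWh (an assumption, not stated above; plug in your own rate):

```python
# Sanity check of the electricity figures above, assuming ~$0.10/kWh.
rate = 0.10               # USD per kWh (assumed)
cabinet_kwh_day = 1.0     # "under 1 kWh/day"
old_nas_watts = 400       # midpoint of the 300-500W range

cabinet_per_year = cabinet_kwh_day * 365 * rate
old_per_year = old_nas_watts / 1000 * 24 * 365 * rate
print(f"Cabinet: ~${cabinet_per_year:.0f}/yr, old NAS: ~${old_per_year:.0f}/yr")
# -> Cabinet: ~$37/yr, old NAS: ~$350/yr
```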
Thanks, I’ll give it a look. I’m running a Synology right now. It only has 2 drives, so just swapping those out for SSDs would cost as much as a whole 4xNVMe setup, as I have 8TB HDDs in there now.
Ceph or MooseFS are the two that I've seen most popular. All networked FS have drawbacks, I used to run a lot of Gluster, and it certainly added a few grey hairs.
I'm dreaming of this: a mini-NAS connected directly to my TV via HDMI or USB. I think I'd want HDMI and let the NAS handle streaming/decoding. But if my TV can handle enough formats, maybe USB will do.
anyone have experience with this?
I've been using a combination of media server on my Mac with client on Apple TV and I have no end of glitches.
I've been running Plex on my AppleTV 4k for years with few issues.
It gets a lot of use in my household. I have my server (a headless Intel iGPU box) running it in docker with the Intel iGPU encoder passed through.
I let the iGPU default encode everything realtime, and now that plex has automatic subtitle sync, my main source of complaints is gone. I end up with a wide variety of formats as my wife enjoys obscure media.
One of the key things that helped a lot was segregating anime into its own TV collection so that anime-specific defaults can be applied there.
You can also run a client on one of these machines directly, but then you are dealing with desktop Linux.
Streaming (e.g., Plex or Jellyfin or some UPnP server) helps you send the data to the TV client over the network from a remote server.
As you want to bring the data server right to the TV, and you'll output the video via HDMI, just use any PC. There are plenty of them designed for this (usually they're fanless for reducing noise)... search "home theater PC."
You can install Kodi as the interface/organizer for playing your media files. It handles all the formats... the TV is just the output.
A USB CEC adapter will also allow you to use your TV remote with Kodi.
thanks!
I've tried Plex, Jellyfin etc on my Mac. I've tried three different Apple TV apps as streaming client (Infuse, etc). They are all glitchy. Another key problem is if I want to bypass the streaming server on my Mac and have Infuse on the Apple TV just read files from the Mac the option is Windows NFS protocol...which gives way too much sharing by providing the Infuse app with a Mac id/password.
Just get an Nvidia Shield. It plays pretty much anything, still, even though it's a fairly old device. Your aim should not be to transcode but to just send the data when it comes to video.
(I assume M.2 cards are the same, but have not confirmed.)
If this isn’t running 24/7, I’m not sure I would trust it with my most precious data.
Also, these things are just begging for a 10Gbps Ethernet port, since you're going to lose out on a ton of bandwidth over 2.5Gbps... though I suppose you could probably use the USB-C port for that.
True, but it's still concerning. For example, I have a NAS with some long-term archives that I power on maybe once a month. Am I going to see SSD data loss from a usage pattern like that?
Would be nice to see what those little N100 / N150 (or big brother N305 / N350) can do with all that NVMe. Raw throughput is pretty whatever but hypothetically if the CPU isn't too gating, there's some interesting IOps potential.
Really hoping we see 25/40GBase-T start to show up, so the lower market segments like this can do 10Gbit. Hopefully we see some embedded Ryzens (or other more PCIe-willing contenders) in this space, at a value-oriented price. But I'm not holding my breath.
Not only the lanes, but putting through more than 6 Gbps of IO on multiple PCIe devices on the N150 bogs things down. It's only a little faster than something like a Raspberry Pi, there are a lot of little IO bottlenecks (for high speed, that is, it's great for 2.5 Gbps) if you do anything that hits CPU.
The CPU bottleneck would be resolved by the Pentium Gold 8505, but it still has the same 9 lanes of PCIe 3.0.
I only came across the existence of this CPU a few months ago; it is nearly in the same price class as an N100, but has a full Alder Lake P-core in addition. It is a shame it seems to only be available in six-port routers; then again, that is probably a pretty optimal application for it.
A single SSD can (or at least NVMe can). You have to question whether or not you need it -- what are you doing that you would run at line speed a large enough portion of the time that the time savings are worth it? Or it's just a toy, which is totally cool too.
4 7200 RPM HDDs in RAID 5 (like WD Red Pro) can saturate a 1Gbps link at ~110MBps over SMB 3. But that comes with the heat and potential reliability issues of spinning disks.
I have seen consumer SSDs, namely Samsung 8xx EVO drives have significant latency issues in a RAID config where saturating the drives caused 1+ second latency. This was on Windows Server 2019 using either a SAS controller or JBOD + Storage Spaces. Replacing the drives with used Intel drives resolved the issue.
My use is a bit into the cool-toy category. I like having VMs where the NAS has the VMs and the backups, and like having the server connect to the NAS to access the VMs.
Even if the throughput isn't high, it sure is nice having the instant response time & amazing random access performance of a ssd.
2TB SSDs are super cheap. But most systems don't have the expandability to add a bunch of them. So I fully get the incentive here: being able to add multiple drives, even if you're not reaping additional speed.
I want a NAS I can put 4TB NVMes in, and a 12TB HDD running backup every night, with the ability to shove a 50Gbps SFP card in it so I can truly have a detached storage solution.
The lack of highspeed networking on any small system is completely and totally insane. I have come to hate 2.5gbps for the hard stall it has caused on consumer networking with such a passion that it is difficult to convey. You ship a system with USB5 on the front and your networking offering is 3.5 orders of magnitude slower? What good is the cloud if you have to drink it through a straw?
How often do you use IPMI on a server? I have a regular desktop running Proxmox, and I haven't had to plug in a monitor since I first installed it like 2 years ago
I will wait until they have an efficient AMD chip, for one very simple reason: AMD graciously allows ECC on some* CPUs.
*Well, they allowed it on all CPUs, but after Zen 3 they saw how much money Intel was making and joined in. Now you must get a "PRO" CPU to get ECC support, even on mobile (but good luck finding ECC SODIMMs).
And good luck finding a single fucking computer for sale that even uses these "Pro" CPUs, because they sure as hell don't sell them to the likes of Minisforum and Beelink.
There was some stuff in DDR5 that made ECC harder to implement (unlike DDR4 where pretty much everything AMD made supported unbuffered ECC by default), but its still ridiculous how hard it is to find something that supports DDR5 ECC that doesn't suck down 500W at idle.
Intel N150 is the first consumer Atom [1] CPU (in 15 years!) to include TXT/DRTM for measured system launch with owner-managed keys. At every system boot, this can confirm that immutable components (anything from BIOS+config to the kernel to immutable partitions) have the expected binary hash/tree.
TXT/DRTM can enable AEM (Anti Evil Maid) with Qubes, SystemGuard with Windows IoT and hopefully future support from other operating systems. It would be a valuable feature addition to Proxmox, FreeNAS and OPNsense.
Some (many?) N150 devices from Topton (China) ship without Bootguard fused, which _may_ enable coreboot to be ported to those platforms. Hopefully ODROID (Korea) will ship N150 devices. Then we could have fanless N150 devices with coreboot and DRTM for less-insecure [2] routers and storage.
[1] Gracemont (E-core): https://chipsandcheese.com/p/gracemont-revenge-of-the-atom-c... | https://youtu.be/agUwkj1qTCs (Intel Austin architect, 2021)
[2] "Xfinity using WiFi signals in your house to detect motion", 400 comments, https://news.ycombinator.com/item?id=44426726#44427986
With some currently still a bit of hands-on approach you can set up measured boot that can measure everything from the BIOS (settings) through the kernel, the initrd, and also kernel command line parameters.
I currently do not have time for a clear how to, but some relevant references would be:
https://www.freedesktop.org/software/systemd/man/latest/syst...
https://www.krose.org/~krose/measured_boot
Integrating this better into Proxmox projects is definitively something I'd like to see sooner or later.
Where are you seeing devices without Bootguard fused? I'd be very curious to get my hands on some of those...
As a Schrödinger-like property, it may vary by observer and not be publicly documented.. One could start with a commercial product that ships with coreboot, then try to find identical hardware from an upstream ODM. A search for "bootguard" or "coreboot" on servethehome forums, odroid/hardkernel forums, phoronix or even HN, may be helpful.
While it may be tempting to go "mini" and NVMe, for a normal use case I think this is hardly cost effective.
You give up so much by using an all in mini device...
No Upgrades, no ECC, harder cooling, less I/O.
I have had a Proxmox Server with a used Fujitsu D3417 and 64gb ecc for roughly 5 years now, paid 350 bucks for the whole thing and upgraded the storage once from 1tb to 2tb. It draws 12-14W in normal day use and has 10 docker containers and 1 windows VM running.
So I would prefer a mATX board with ECC, IPMI 4xNVMe and 2.5GB over these toy boxes...
However, Jeff's content is awesome like always
The selling point for the people in the Plex community is the N100/N150 include Intel’s Quicksync which gives you video hardware transcoding without a dedicated video card. It’ll handle 3 to 4 4K transcoded streams.
There are several sub $150 units that allow you to upgrade the ram, limited to one 32gb stick max. You can use an nvme to sata adapter to add plenty of spinning rust or connect it to a das.
While I wouldn’t throw any vms on these, you have enough headroom for non-ai home sever apps.
Another thing is that unless you have a very specific need for SSDs (such as heavily random access focused workloads, very tight space constraints, or working in a bumpy environment), mechanical hard drives are still way more cost effective for storing lots of data than NVMe. You can get a manufacturer refurbished 12TB hard drive with a multi-year warranty for ~$120, while even an 8TB NVMe drive goes for at least $500. Of course for general-purpose internal drives, NVMe is a far better experience than a mechanical HDD, but my NAS with 6 hard drives in RAIDz2 still gets bottlenecked by my 2.5GBit LAN, not the speeds of the drives.
It depends on what you consider "lots" of data. For >20tb yes absolutely obviously by a landslide. But if you just want self-hosted Google Drive or Dropbox you're in the 1-4TB range where mechanical drives are a very bad value as they have a pretty significant price floor. WD Blue 1tb hdd is $40 while WD Blue 1tb nvme is $60. The HDD still has a strict price advantage, but the nvme drive uses way less power, is more reliable, doesn't have spinup time (consumer usage is very infrequently accessed, keeping the mechanical drives spinning continuously gets into that awkward zone of worthwhile)
And these prices are getting low enough, especially with this NUC-based solutions, to actually be price competitive with the low tiers of drive & dropbox while also being something you actually own and control. Dropbox still charges $120/yr for the entry level plan of just 2TB after all. 3x WD Blue NVMEs + an N150 and you're at break-even in 3 years or less
I appreciate you laying it out like that. I've seen these NVME NAS things mentioned and had been thinking that the reliability of SSDs was so much worse than HDDs.
Low power, low noise, low profile system, LOW ENTRY COST. I can easily get a beelink me mini or two and build a NAS + offsite storage. Two 1TB SSDs for a mirror are around 100€, two new 1TB HDDs are around 80€.
You are thinking in dimensions normal people have no need for. Just the numbers alone speaks volumes, 12TB, 6 hdds, 8TB NVMes, 2.5GB LAN.
Don’t forget about power. If you’re trying to build a low power NAS, those hdds idle around 5w each, while the ssd is closer to 5mw. Once you’ve got a few disks, the HDDs can account for half the power or more. The cost penalty for 2TB or 4TB ssds is still big, but not as bad as at the 8TB level.
such power claims are problematic - you're not letting the HDs spin down, for instance, and not crediting the fact that an SSD may easily dissipate more power than an HD under load. (in this thread, the host and network are slow, so it's not relevant that SSDs are far faster when active.)
I experimented with spindowns, but the fact is, many applications needs to write to disk several times per minute. Because of this I only use SSD's now. Archived files are moved to the Cloud. I think Google Disk is one of the best alternatives out there, as it has true data streaming built in the MacOS or Windows clients. It feels like an external hard drive.
There's a lot of "never let your drive spin down! They need to be running 24/7 or they'll die in no time at all!" voices in the various homelab communities sadly.
Even the lower tier IronWolf drives from Seagate specify 600k load/unload cycles (not spin down, granted, but gives an idea of the longevity).
Is there any (semi-)scientific proof to that (serious question)? I did search a lot to this topic but found nothing...
Here is someone that had significant corruption until they stopped: https://www.xda-developers.com/why-not-to-spin-down-nas-hard...
There are many similar articles.
I wonder if it has to do with the type of HDD. The red NAS drives may not like to be spun down as much. I spin down my drives and have not had a problem except for one drive, after 10 years continuous running, but I use consumer desktop drives which probably expect to be cycled a lot more than a NAS.
I wonder if they were just hit with the bathtub curve?
Or perhaps the fact that my IronWolf drives are 5400rpm rather than 7200rpm means they're still going strong after 4 years with no issues spinning down after 20 minutes.
Or maybe I'm just insanely lucky? Before I moved to my desktop machine being 100% SSD I used hard drives for close to 30 years and never had a drive go bad. I did tend to use drives for a max of 3-5 years though before upgrading for more space.
Letting hdds spin down is generally not advisable in a NAS, unless you access it really rarely perhaps.
Spin down isn't as problematic today. It really depends on your setup and usage.
If the stuff you access often can be cashed to SSDs you rarely access it. Depending on your file system and operating system only drives that are in use can be spun up. If you have multiple drive arrays with media some of it won't be accessed as often.
In an enterprise setting it generally doesn't make sense. For a home environment disks you generally don't access the data that often. Automatic downloads and seeding change that.
Is there any (semi-)scientific proof to that (serious question)? I did search a lot to this topic but found nothing...
(see above, same question)
It's probably decades old anecdata from people who re commissioned old drives that were on the shelf for many years. The theory is that the grease on the spindle dries up and seizes up the platters.
I've put all of my surveillance cameras on one volume in _hopes_ that I can let my other volumes spin down. But nope. They spend the vast majority of their day spinning.
Did you consider ZFS with L2ARC? The extra caching device might make this possible...
That's not how L2ARC works. It's not how the ZIL SLOG works, either.
If a read request can be filled by the OS cache, it will be. Then it will be filled by the ARC, if possible. Then it will be filled by the L2ARC, if it exists. Then it will be filled by the on-disk cache, if possible; finally, it will be filled by a read.
An async write will eventually be flushed to a disk write, possibly after seconds of realtime. The ack is sent after the write is complete... which may be while the drive has it in a cache but hasn't actually written it yet.
A sync write will be written to the ZIL SLOG, if it exists, while it is being written to the disk. It will be acknowledged as soon as the ZIL finishes the write. If the SLOG does not exist, the ack comes when the disk reports the write complete.
> […] mechanical hard drives are still way more cost effective for storing lots of data than NVMe.
Linux ISOs?
I think you're right generally, but I wanna call out the ODROID H4 models as an exception to a lot of what you said. They are mostly upgradable (SODIMM RAM, SATA ports, M.2 2280 slots), and it does support in-band ECC which kinda checks the ECC box. They've got a Mini-ITX adapter for $15 so it can fit into existing cases too.
No IPMI and not very many NVME slots. So I think you're right that a good mATX board could be better.
Well, if you would like to go mini (with ECC and 2.5G) you could take a look at this one:
https://www.aliexpress.com/item/1005006369887180.html
Not totally upgradable, but at least pretty low cost and modern with an optional SATA + NVMe combination for Proxmox. Shovel in an enterprise SATA and a consumer 8TB WD SN850x and this should work pretty good. Even Optane is supported.
IPMI could be replaced with NanoKVM or JetKVM...
That looks pretty slick with a standard hsf for the CPU, thanks for sharing
Not sure about the ODROID, but I got myself the NAS kit from FriendlyElec. With the largest RAM option it was about 150 bucks and comes with 2.5G ethernet and 4 NVMe slots. No fan, and it keeps fairly cool even under load.
Running it with encrypted ZFS volumes, even with a 5-bay 3.5" HDD dock attached via USB.
https://wiki.friendlyelec.com/wiki/index.php/CM3588_NAS_Kit
You can get a 1 -> 4 M.2 adapter for these as well which would give each one a 1x PCIe lane (same as all these other boards). If you still want spinning rust, these also have built-in power for those and SATA ports so you only need a 12-19v power supply. No idea why these aren't more popular as a basis for a NAS.
No ECC is the biggest trade-off for me, but the C236 chipset has very little choice of CPUs; they are all 4-core/8-thread. I've got multiple X99-platform systems, and for a long time they were the king of cost efficiency, but lately the Ryzen laptop chips are becoming too good to pass up, even without ECC, e.g. Ryzen 5825U minis.
For a home NAS, ECC is as needed as it is on your laptop.
ECC is essential indeed for any computer. But the laptop situation is truly dire, while it's possible to find some NAS with ECC support.
Most computers don't have ECC. So it might be essential in theory but in practice things work fine without (for standard personal, even work, use cases).
these little boxes are perfect for my home
My use case is a backup server for my macs and cold storage for movies.
6x2Tb drives will give me a 9Tb raid-5 for $809 ($100 each for the drives, $209 for the nas).
Very quiet so I can have it in my living room plugged into my TV. < 10W power.
I have no room for a big noisy server.
While I get your point about size, I'd not use RAID-5 for my personal homelab. I'd also say that 6x2TB drives are not the optimal solution for low power consumption. You're also missing out on a server-quality BIOS, design/stability/x64 and remote management. However, not bad.
While my server is quite big compared to a "mini" device, it's silent. No CPU fan, only 120mm case fans spinning around 500rpm, maybe 900rpm under load - hardly noticeable. I also have a completely passive backup solution with a Streacom FC5, but I don't really trust it for the chipsets, so I also installed a low-rpm 120mm fan.
How did you fit 6 drives in a "mini" case? Using Asus Flashstor or beelink?
I'm interested in learning more about your setup. What sort of system did you put together for $350? Is it a normal ATX case? I really like the idea of running proxmox but I don't know how to get something cheap!
My current config:
For backup I use a 2TB enterprise HDD and ZFS send. For snapshotting I use zfs-auto-snapshot.
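A minimal sketch of that zfs-auto-snapshot + zfs send approach, with placeholder dataset and pool names (tank/data for the live data, backup for the pool on the HDD):

    # daily snapshots via the stock zfs-auto-snapshot cron job, e.g.:
    zfs-auto-snapshot --quiet --syslog --label=daily --keep=31 //
    # incremental replication of everything since the last sent snapshot to the backup pool
    zfs send -R -I tank/data@prev tank/data@today | zfs receive -Fdu backup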
So really nothing recommendable for buying today. You could go for this
https://www.aliexpress.com/item/1005006369887180.html
Or an old Fujitsu Celsius W580 Workstation with a Bojiadafast ATX Power Supply Adapter, if you need harddisks.
Unfortunately there is no silver bullet these days. The old stuff is... well, too old or no longer available, and the new stuff is either too pricey, lacks features (ECC and 2.5G mainly) or is too power hungry.
A year ago there were bargains on Gigabyte MC12-LE0 boards available for < 50 bucks, but nowadays these cost about 250 again. These boards also had the problem of drawing too much power for an ultra-low-power homelab.
If I HAD to buy one today, I'd probably go for a Ryzen Pro 5700 with a gaming board (like ASUS ROG Strix B550-F Gaming) with ECC RAM, which is supported on some boards.
> I'd not use RAID-5 for my personal homelab.
What would you use instead?
ZFS is better than raw RAID, but 1 parity per 5 data disks is a pretty good match for the reliability you can expect out of any one machine.
Much more important than better parity is having backups. Maybe more important than having any parity, though if you have no parity please use JBOD and not RAID-0.
I'd almost always use RAID-1, or if I had > 4 disks, maybe RAID-6. RAID-5 seems very cost effective at first, but if you lose a drive the probability of losing another one during the restore process is pretty high (I don't have the numbers, but I researched that years ago). The disk-replacement process produces very high load on the non-defective disks, and the more you have, the riskier the process. Another aspect is that 5 drives draw way more power than 2, and you cannot (easily) upgrade the capacity, although ZFS now offers RAIDZ expansion.
Since RAID is not meant for backup but for reliability, losing a drive while restoring will kill your storage pool, and having to restore all the data from a backup (e.g. from a cloud drive) is probably not what you want, since it takes time during which the device is offline. If you rely on RAID5 without having a backup you're done.
So I have a RAID1, which is simple, reliable and easy to maintain. Replacing 2 drives with higher capacity ones and increasing the storage is easy.
I would run 2 or more parity disks always. I have had disks fail and rebuilding with only one parity drive is scary (have seen rebuilds go bad because a second drive failed whilst rebuilding).
But agree about backups.
Were those arrays doing regular scrubs, so that they experience rebuild-equivalent load every month or two and it's not a sudden shock to them?
If your odds of disk failure in a rebuild are "only" 10x normal failure rate, and it takes a week, 5 disks will all survive that week 98% of the time. That's plenty for a NAS.
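For what it's worth, the arithmetic behind that 98% figure, assuming a ~2%/year baseline failure rate (my assumption, not a measured number):

    # 2%/yr -> ~0.04%/wk; 10x higher during a week-long rebuild; 5 surviving disks
    awk 'BEGIN { p = 0.02/52*10; printf "%.3f per disk, %.1f%% all survive\n", p, (1-p)^5*100 }'
    # -> 0.004 per disk, 98.1% all survive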
If the drives are the same age and large parts of the drive haven't been read from for a long time until the rebuild, you might find it has already failed. Anecdotally, around 12 years ago the chances of a second disk failing during a RAID 5 rebuild (in our setup) were probably more like 10-20%.
> and large parts of the drive haven't been read from for a long time
Hence the first sentence of my three sentence post.
If I wanted to deal with snark I'd reply to people on Reddit.
My goal isn't to be rude, but when you skip over a critical part of what I'm saying it causes a communication issue. Are you correcting my numbers, or intentionally giving numbers for a completely different scenario, or something in between? Is it none of those and you weren't taking my comment seriously enough to read 50 words? The way you replied made it hard to tell.
So I made a simple comment to point out the conflict, a little bit rude but not intended to escalate the level of rudeness, and easier for both of us than writing out a whole big thing.
I agreed with this generally until I learned the long way why RAID 5 at minimum is the only way to have some peace of mind, and why you always want a NAS with at least 1-2 more bays than you need.
Storage is easier as an appliance that just runs.
Storing backups and movies on NVMe ssds is just a waste of money.
Absolutely. I don't store movies at all but if I would, I would add a USB-based solution that could be turned off via shelly plug / tasmota remotely.
I've had a synology since 2015. Why, besides the drives themselves, would most home labs need to upgrade?
I don't really understand the general public, or even most usages, requiring upgrade paths beyond getting a new device.
By the time the need to upgrade comes, the tech stack is likely faster and you're basically just talking about gutting the PC and doing everything over again, except maybe power supply.
> except maybe power supply.
Modern power MOSFETs are cheaper and more efficient. 10 years ago 80 Plus Gold efficiency was a bit expensive and 80 Plus Bronze was common.
Today, 80 Plus Gold is cheap and common and only 80 Plus Platinum reaches into the exotic level.
An 80 Plus Bronze 300W unit can still be more efficient than a 750W 80 Plus Platinum at mostly low loads. Additionally, some devices are more efficient than they are certified for; a well-known example is the Corsair RM550x (2021).
If your peak power draw is <200W, I would recommend an efficient <450W power supply.
Another aspect: Buying a 120 bucks power supply that is 1.2% more efficient than a 60 bucks one is just a waste of money.
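Rough numbers behind that claim, assuming ~20 W average draw and 0.30 EUR/kWh (both assumptions):

    awk 'BEGIN { w = 20*0.012; kwh = w*24*365/1000; printf "%.2f W saved, %.1f kWh/yr, %.2f EUR/yr\n", w, kwh, kwh*0.30 }'
    # -> 0.24 W saved, 2.1 kWh/yr, 0.63 EUR/yr, so the 60-buck price gap never pays back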
Understandable... Well, the bottleneck for a Proxmox Server often is RAM - sometimes CPU cores (to share between VMs). This might not be the case for a NAS-only device.
Another upgrade path is to keep the case, fans, cooling solution and only switch Mainboard, CPU and RAM.
I'm also not a huge fan of non-x64 devices, because they still often require jumping through some hoops regarding boot order, booting from external devices, or recovering after power loss.
Should a mini-NAS be considered a new type of thing with a new design goal? He seems to be describing about a desktop worth of storage (6TB), but always available on the network and less power consuming than a desktop.
This seems useful. But it seems quite different from his previous (80TB) NAS.
What is the idle power draw of an SSD anyway? I guess they usually have a volatile ram cache of some sort built in (is that right?) so it must not be zero…
> Should a mini-NAS be considered a new type of thing with a new design goal?
Small/portable low-power SSD-based NASs have been commercialized since 2016 or so. Some people call them "NASbooks", although I don't think that term ever gained critical MAS (little joke there).
Examples: https://www.qnap.com/en/product/tbs-464, https://www.qnap.com/en/product/tbs-h574tx, https://www.asustor.com/en/product?p_id=80
HDD-based NASes are used for all kinds of storage amounts, from as low as 4TB to hundreds of TB. The SSD NASes aren’t really much different in use case, just limited in storage amount by available (and affordable) drive capacities, while needing less space, being quieter, but having a higher cost per TB.
With APST/ASPM power management enabled, the idle draw of an SSD is in the range of low tens of milliwatts.
> Should a mini-NAS be considered a new type of thing with a new design goal?
> less power consuming than a desktop
Not really seeing that in these minis. Either the devices under test haven't been optimized for low power, or their Linux installs have non-optimal configs for low power. My NUC 12 draws less than 4W, measured at the wall, when operating without an attached display and with Wi-Fi but no wired network link. All three of the boxes in the review use at least twice as much power at idle.
I love reviews like these. I'm a fan of the N100 series for what they are in bringing low power x86 small PCs to a wide variety of applications.
One curiosity for @geerlingguy, does the Beelink work over USB-C PD? I doubt it, but would like to know for sure.
That, I did not test. But as it's not listed in specs or shown in any of their documentation, I don't think so.
Looks like it only draws 45W, which could allow it to be powered over PoE++ with a splitter, but it has an integrated AC input and PSU - that's impressive regardless considering how small it is, but it's not set up for PD or PoE.
I've been running one of these quad nvme mini-NAS for a while. They're a good compromise if you can live with no ECC. With some DIY shenanigans they can even run fanless
If you're running on consumer nvmes then mirrored is probably a better idea than raidz though. Write amplification can easily shred consumer drives.
I’m a TrueNAS/FreeNAS user, currently running an ECC system. The traditional wisdom is that ECC is a must-have for ZFS. What do you think? Is this outdated?
Been running without it for 15+ years on my NAS boxes, built using my previous desktop hardware fitted with NAS disks.
They're on 24/7 and run monthly scrubs, as well as monthly checksum verification of my backup images, and I've not noticed any issues so far.
I had some correctable errors which got fixed when changing SATA cable a few times, and some from a disk that after 7 years of 24/7 developed a small run of bad sectors.
That said, you got ECC so you should be able to monitor corrected memory errors.
Matt Ahrens himself (one of the creators of ZFS) has said there's nothing particular about ZFS:
There's nothing special about ZFS that requires/encourages the use of ECC RAM more so than any other filesystem. If you use UFS, EXT, NTFS, btrfs, etc without ECC RAM, you are just as much at risk as if you used ZFS without ECC RAM. Actually, ZFS can mitigate this risk to some degree if you enable the unsupported ZFS_DEBUG_MODIFY flag (zfs_flags=0x10). This will checksum the data while at rest in memory, and verify it before writing to disk, thus reducing the window of vulnerability from a memory error.
I would simply say: if you love your data, use ECC RAM. Additionally, use a filesystem that checksums your data, such as ZFS.
https://arstechnica.com/civis/viewtopic.php?f=2&t=1235679&p=...
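For reference, zfs_flags is an OpenZFS module parameter on Linux, so flipping on that (unsupported, debug-oriented) flag looks roughly like:

    # runtime (lost on reboot)
    echo 0x10 | sudo tee /sys/module/zfs/parameters/zfs_flags
    # persistent across reboots
    echo "options zfs zfs_flags=0x10" | sudo tee /etc/modprobe.d/zfs-debug.conf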
That traditional wisdom is wrong. ECC is a must-have for any computer. The only reason people think ECC is mandatory for ZFS is because it exposes errors due to inherent checksumming and most other filesystems don't, even if they suffer from the same problems.
I'm curious if it would make sense for write caches in RAM to just include a CRC32 on every block, to be verified as it gets written to disk.
Don't you have to read that data into RAM before you can generate the CRC? Which means without ECC it could get silently corrupted on the way to the cache?
that's just as true with ecc as without
ECC is a must-have if you want to minimize the risk of corruption, but that is true for any filesystem.
Sun (and now Oracle) officially recommended using ECC ever since it was intended to be an enterprise product running on 24/7 servers, where it makes sense that anything that is going to be cached in RAM for long periods is protected by ECC.
In that sense it was a "must-have", as business-critical functions require that guarantee.
Now that you can use ZFS on a number of operating systems, on many different architectures, even a Raspberry Pi, the business-critical-only use-case is not as prevalent.
ZFS doesn't intrinsically require ECC but it does trust that the memory functions correctly which you have the best chance of achieving by using ECC.
One way to look at it is ECC has recently become more affordable due to In-Band ECC (IBECC) providing ECC-like functionality for a lot of newer power efficient Intel CPUs.
https://www.phoronix.com/news/Intel-IGEN6-IBECC-Driver
Not every new CPU has it: for example, the Intel N95, N97, N100, N200, i3-N300, and i3-N305 all have it, but the N150 doesn't!
It's kind of disappointing that, of the low-power NAS devices reviewed here, the only one with support for IBECC had a limited BIOS that most likely was missing this option. The ODROID H4 series, CWWK NAS products, AOOSTAR, and various N100 ITX motherboards all support it.
https://danluu.com/why-ecc/ has an argument for it with an update from 2024.
Ultimately it comes down to how important the data is to you. It's not really a technical question but one of risk tolerance.
That makes sense. It’s my family photo library, so my risk tolerance is very low!
Are there any mini NAS with ECC ram nowadays? I recall that being my personal limiting factor
Minisforum N5 Pro Nas has up to 96 GB of ECC RAM
https://www.minisforum.com/pages/n5_pro
https://store.minisforum.com/en-de/products/minisforum-n5-n5...
96GB DDR5 SO-DIMM costs around 200€ to 280€ in Germany: https://geizhals.de/?cat=ramddr3&xf=15903_DDR5~15903_SO-DIMM...
I wonder if that 128GB kit would work, as the CPU supports up to 256GB
https://www.amd.com/en/products/processors/laptop/ryzen-pro/...
I can't force the page to show USD prices.
Note the RAM list linked above doesn't show ECC SODIMM options.
Thank you. I thought I had it selected in the beginning, but no. The list then contains only one entry
https://geizhals.de/?cat=ramddr3&sort=r&xf=1454_49152%7E1590...
Kingston Server Premier SO-DIMM 48GB, DDR5-5600, CL46-45-45, ECC KSM56T46BD8KM-48HM for 250€
Which then means 500€ for the 96GB
Is this "full" ECC, or just the baseline improved ECC that all DDR5 has?
Either way, on my most recent NAS build, I didn't bother with a server-grade motherboard, figuring that the standard consumer DDR5 ECC was probably good enough.
This is full ECC, the CPU supports it (AMD Pro variant).
DDR5 on-die ECC is not good enough. What if you have faulty RAM and ECC is constantly correcting it without you knowing? There's no value in that. You need the OS to be informed so that you are aware of it. It also does not protect against errors which occur between the RAM and the CPU.
This is similar to HDDs using ECC. Without SMART you'd have a problem, but part of SMART is that it allows you to get a count of ECC-corrected errors so that you can be aware of the state of the drive.
True ECC takes the role of SMART with regard to RAM; it's just that it only reports that: ECC-corrected errors.
On a NAS, where you likely store important data, true ECC does add value.
The DDR5 on-die ECC doesn’t report memory errors back to the CPU, which is why you would normally want ECC RAM in the first place. Unlike traditional side-band ECC, it also doesn’t protect the memory transfers between CPU and RAM. DDR5 requires the on-die ECC in order to still remain reliable in face of its chip density and speed.
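On Linux you can check whether corrected errors are actually being reported to the OS; with side-band ECC and a loaded EDAC driver, the standard sysfs counters look like this (memory-controller numbering varies by platform):

    # corrected / uncorrected error counts per memory controller
    grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count
    # or, with edac-utils installed
    edac-util -v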
The Aoostar WTR max is pretty beefy, supports 5 nvme and 6 hard drives, and up to 128GB of ECC ram. But it’s $700 bare bones, much more than these devices in the article.
Aoostar WTR series is one change away from being the PERFECT home server/NAS. Passing the storage controller's IOMMU group to a VM is finicky at best. Still better than the vast majority of devices that don't allow it at all. But if they do that, I'm in homelab heaven. Unfortunately, the current iteration can't, due to a hardware limitation in the AMD chipset they're using.
Good info! Is it the same limitation on the WTR Pro and Max? The Max is an 8845HS versus the 5825U in the Pro.
I have the pro. I'm not sure if the Max will do passthrough but a quick google seems to indicate that it won't. (There's a discussion on the proxmox forum)
Yes, but not particularly cheap: https://www.asustor.com/en/product?p_id=89
Asustor has some cheaper options that support ECC. Though not as cheap as those in the OP article.
FLASHSTOR 6 Gen2 (FS6806X) $1000 - https://www.asustor.com/en/product?p_id=90
LOCKERSTOR 4 Gen3 (AS6804T) $1300 - https://www.asustor.com/en/product?p_id=86
One of the Arm ones is, yes. Can't for the life of me remember which though - sorry - something in the Banana Pi or LattePanda part of the universe, I think.
HP Microservers.
I got myself a gen8, they’re quite cheap. They do have ECC RAM and take 3.5” hard drives.
At some point though, SSDs will beat hard drives on total price (including electricity). I’d like a small and efficient ECC option for then.
This discussion got me curious: how much data are you all hoarding?
For me, the media library is less than 4TB. I have some datasets that, put together, go to 20TB or so. All this is handled with a microserver with 4 SATA spinning metal drives (and a RAID-1 NVMe card for the OS).
I would imagine most HN'ers to be closer to the 4TB bracket than the 40TB one. Where do you sit?
More than a hundred TB
That's cool, except that NAND memory is horrible for hoarding data. It has to be powered all the time, as cells need to be refreshed periodically, and if you exceed a threshold of around 80 percent of occupied storage you get a huge performance penalty due to the internal memory organization.
My main challenge is that we don't have wired ethernet access in our rooms so even if I bought a mini-NAS and attached it to the router over ethernet, all "clients" will be accessing it over wifi.
Not sure if anyone else has dealt with this and/or how this setup works over wifi.
There are also powerline Ethernet adapters rated well above 300Mbps. Even if that's the nameplate speed on your Wi-Fi router, you won't have as much fluctuation on the power line. But both adapters need to be on the same breaker.
Do you have a coax drop in the room? MoCA adapters are a decent compromise - not as good as an ethernet connection, but better than wireless.
Related question: does anyone know of a USB-C power bank that can effectively be used as a UPS? That is, one able to be charged while maintaining power to the load (obviously with a rate of charge greater by a few watts than the load).
Most models I find reuse the most powerful USB-C port as ... the recharging port, so they're unusable as a DC UPS.
Context: my home server is my old https://frame.work motherboard running proxmox VE with 64GB RAM and 4 TB NVME, powered by usb-c and drawing ... 2 Watt at idle.
This isn't a power bank, but the EcoFlow River makes for a great mobile battery pack for many uses (like camping, road trips, etc.) and also qualifies as a UPS (which means it has to be able to switch over to battery power within a certain number of milliseconds... that part I'm not sure about, but professional UPSs switch over in < 10ms. I think EcoFlow is < 30ms but I'm not 100% sure).
I've had the River Pro for a few months and it's worked perfectly for that use case. And UnRaid supports it as of a couple months ago.
Powerbank is a wrong keyword here, what you want to look for is something like “USB-C power supply with battery”, “USB-C uninterruptible power supply”, etc.
Lots of results on Ali for a query “usb-c ups battery”.
Any reason you can't run a USB-C brick attached to a UPS? Some UPSes likely have USB plugs in them too.
These are cute, I'd really like to see the "serious" version.
Something like a Ryzen 7745, 128gb ecc ddr5-5200, no less than two 10gbe ports (though unrealistic given the size, if they were sfp+ that'd be incredible), drives split across two different nvme raid controllers. I don't care how expensive or loud it is or how much power it uses, I just want a coffee-cup sized cube that can handle the kind of shit you'd typically bring a rack along for. It's 2025.
the minisforum devices are probably the closest thing to that
unfortunately most people still consider ECC unnecessary, so options are slim
Best bet probably Flashstor FS6812X https://www.asustor.com/en-gb/product?p_id=91
Not the "cube" sized, but surprisingly small still. I've got one under the desk, so I don't even register it is there. Stuffed it with 4x 4TB drives for now.
The Mac Studio is pretty close, plus silent and power efficient. But it isn't cheap like an N100 PC.
I use a 12600H MS-01 with 5x4tb nvme. Love the SFP+ ports since the DAC cable doesn't need ethernet to SFP adapters. Intel vPro is not perfect but works just fine for remote management access. I also plug a bus powered dual ssd enclosure to it which is used for Minio object storage.
It's a file server (when did we start calling these "NAS"?) with Samba and NFS, but also some database stuff. No VMs or Docker containers. Just a file and database server.
It has full disk encryption with TPM unlocking using my custom keys, so it can boot unattended. I'm quite happy with it.
Can you expand on the TPM unlocking? Wouldn't this be vulnerable to evil maid attacks?
An evil maid is not in my threat model. I'm more worried about a burglar getting into my house and stealing my stuff and my data with it. It's a 1L PC with more than 10TB of data, so it fits in a small bag.
I start with normal full disk encryption and enrolling my secure boot keys into the device (no vendor or MS keys) then I use systemd-cryptenroll to add a TPM2 key slot into the LUKS device. Automatic unlock won't happen if you disable secure boot or try to boot anything other than my signed binaries (since I've opted to not include the Microsoft keys).
systemd-cryptenroll has a bunch of stricter security levels you can choose (PCRs). Have a look at their documentation.
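The core of that setup is roughly the following sketch (the device path is a placeholder; pick PCRs per the man page, PCR 7 tracks the Secure Boot state):

    # bind the LUKS volume to the TPM, sealed against Secure Boot state (PCR 7)
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2
    # example /etc/crypttab entry so the initrd tries the TPM automatically at boot
    # cryptroot  /dev/nvme0n1p2  -  tpm2-device=auto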
There are lots of used mini ex-corporate desktops on eBay. Dell, Lenovo, etc.; they're probably the best value.
Which SSDs do people rely on? Considering PLP (power loss protection), write endurance/DWPD (no QLC), and other bugs that affect ZFS especially? It is hard to find options that do these things well for <$100/TB, with lower-end datacenter options (e.g., Samsung PM9A3) costing maybe double what you see in a lot of builds.
ZFS isn't more affected by those; you're just more likely to notice them with ZFS. You'll probably never notice write endurance issues on a home NAS.
QLC isn't an issue for a consumer NAS - are 'you' seriously going to write 160GB/day, every day?
QLC have quite the write performance cliff though, which could be an issue during use or when rebuilding the array.
Just something to be aware of.
Writing over 2.5GbE to a RAID-Z1 config of 4 drives keeps the sustained write speed below what most QLC drives can handle.
Recovery from a lost drive would be slower, for sure.
What are the non-Intel mini NAS options for lower idle power?
I know of FriendlyElec CM3588, are there others?
QNAP TS435XeU 1U short-depth NAS based on Marvell CN913x (SoC successor to Armada A388) with 4xSATA, 2xM.2, 2x10GbE, optional ECC RAM and upstream Linux kernel support, https://news.ycombinator.com/item?id=43760248
I was about to order that GMKtek G9 and then saw Jeff's video about it on the same day. All those issues, even with the later fixes he showed, are a big no-no for me. Instead, I went with a Odroid H4-Ultra with an Intel N305, 48GB Crucial DDR5 and 4x4TB Samsung 990 Evo SSDs (low-power usage) + a 2TB SATA SSD to boot from. Yes, the SSDs are way overkill and pretty expensive at $239 per Samsung 990 Evo (got them with a deal at Amazon). It's running TrueNAS. I am somewhat space-limited with this system, didn't want spinning disks (as the whole house slightly shakes when pickup or trash trucks pass by), wanted a fun project and I also wanted to go as small as possible.
No issues so far. The system is completely stable. Though, I did add a separate fan at the bottom of the Odroid case to help cool the NVMe SSDs. Even with the single lane of PCIe, the 2.5gbit/s networking gets maxed out. Maybe I could try bonding the 2 networking ports but I don't have any client devices that could use it.
I had an eye on the Beelink ME Mini too, but I don't think the NVMe disks are sufficiently cooled under load, especially on the outer side of the disks.
> (as the whole house slightly shakes when pickup or trash trucks pass by)
I have the same problem, but it is not a problem for my Seagate X16s, that have been going strong for years.
How does this happen? Wooden house? Only 2-3 metres from the road?
Yes, it is a cheaply built wooden house, as is typical in Southern California, from the 70s. The backside of the house is directly at the back alley, where the trash bins and through traffic are (see [1] for an example). The NAS is maybe 4m away from the trash bins.
[1] https://images.app.goo.gl/uviWh9B293bpE1i97
Which load, 250MB/s? Modern NVMe drives are rated for ~20x that speed. Running at such low bandwidth, they'll stay at idle temperatures at all times.
Fair point! I agree. The NAS sits in a corner with no airflow/ventilation and there is no AC here. In the corner it does get 95F-105F in late summer and I did not want to take the risk of it getting too hot.
Question regarding these mini PCs: how do you connect them to plain old hard drives? Is Thunderbolt / USB these days reliable enough to run 24/7 without disconnects, like onboard SATA?
There are nvme to sata adaptors. It’s a little janky with these as you’ll need to leave a cover off to have access to the ports.
I've run a massive farm (2 petabytes) of ZFS on FreeBSD servers with Zraid over consumer USB for about fifteen years and haven't had a problem: directly attaching to the motherboard USB ports and using good but boring controllers on the drives like the WD Elements series.
The last sata controller (onboard or otherwise) that I had with known data corruption and connection issues is old enough to drive now
Would you not simply buy a regular NAS?
Why buy a tiny, m.2 only mini-NAS if your need is better met by a vanilla 2-bay NAS?
Good question. I imagine for the silence and low power usage without needing huge amounts of storage. That said, I own an n100 dual 3.5 bay + m.2 mini PC that can function as a NAS or as anything and I think it's pretty neat for the price.
Noise is definitely an issue.
I have an 8 drive NAS running 7200 RPM drives, which is on a wall mounted shelf drilled into the studs.
On the other side of that wall is my home office.
I had to put the NAS on speaker springs [1] to not go crazy from the hum :)
[1] https://www.amazon.com.au/Nobsound-Aluminum-Isolation-Amplif...
This sounds exactly like what I'm looking for. Care to share the brand & model?
AOOSTAR R1
Power regularly hits 50 cents a kilowatt-hour where I live. Most of these seem to treat power like it's free.
I have been running usb hdds 24/7 connected to raspberry pi as a nas for 10 years without problems
I've never heard of these disconnects. The OWC ThunderBay works well.
I have experienced them - I have a B650 AM5 motherboard, and if I connect an Orico USB HDD enclosure to the fastest USB ports, the ones coming directly from the AMD CPU (yes, that's a thing now), after 5-10 min the HDD just disappears from the system. It doesn't happen on the other USB ports.
Well, AMD makes a good core but there are reasons that Intel is preferred by some users in some applications, and one of those reasons is that the peripheral devices on Intel platforms tend to work.
For that money it can make more sense to get a UGreen DXP4800 with built-in N100: https://nas.ugreen.com/products/ugreen-nasync-dxp4800-nas-st...
You can install a third-party OS on it.
> Testing it out with my disk benchmarking script, I got up to 3 GB/sec in sequential reads.
To be sure... is the data compressible, or repeated? I have encountered an SSD that silently performed compression on the data I wrote to it (verified by counting its stats on blocks written). I don't know if there are SSDs that silently deduplicate the data.
(An obvious solution is to copy data from /dev/urandom. But beware of the CPU cost of /dev/urandom; on a recent machine, it takes 3 seconds to read 1GB from /dev/urandom, so that would be the bottleneck in a write test. But at least for a read test, it doesn't matter how long the data took to write.)
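One way around both issues is to generate random data once and reuse it, or to let fio produce incompressible buffers itself; a sketch with placeholder paths:

    # generate random data once, then write it to the target without the urandom bottleneck
    dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=1024
    dd if=/tmp/rand.bin of=/mnt/test/bench.bin bs=1M oflag=direct conv=fdatasync
    # or have fio fill buffers with incompressible, non-repeating data
    fio --name=seqwrite --filename=/mnt/test/bench.bin --rw=write --bs=1M --size=4G \
        --direct=1 --refill_buffers --buffer_compress_percentage=0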
I've always been puzzled by the strange choice of raiding multiple small-capacity M.2 NVMe drives in these tiny low-end Intel boxes with severely limited PCIe lanes, using only one lane per SSD.
Why not a single large-capacity M.2 SSD using 4 full lanes, with proper backup to a cheaper, larger-capacity and more reliable spinning disk?
The latest small M.2 NASes make very good consumer-grade, small, quiet, power-efficient storage you can put in your living room, next to the TV, for media storage and light network-attached storage.
It’d be great if you could fully utilise the M.2 speed but they are not about that.
Why not a single large M.2? Price.
Would four 2TB SSDs be more or less expensive than one 8TB SSD? And also counting power efficiency and RAID complexity?
4 small drives+raid gives you redundancy.
And often are about the same price or less expensive than the one 8TB NVMe.
I'm hopeful 4/8 TB NVMe drives will come down in price someday but they've been remarkably steady for a few years.
Given the write patterns of RAID and the wear issues of flash, it's not obvious at all that 4xNVME actually gives you meaningful redundancy.
Still think it's highly underrated to use fs-cache with NASes (usually configured with cachefilesd) for some local, dynamically scaling client-side NVMe caching.
Helps a ton with response times on any NAS that's primarily spinning rust, especially if dealing with a decent amount of small files.
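For NFS clients the setup is small; a rough sketch (export name and paths are placeholders):

    # /etc/cachefilesd.conf: point 'dir' at an SSD-backed path, e.g.
    #   dir /var/cache/fscache
    sudo systemctl enable --now cachefilesd
    # mount the NAS export with fs-cache enabled (the fsc option)
    sudo mount -t nfs -o fsc,vers=4.2 nas:/export/media /mnt/media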
I have 4 M.2 drives in RAID0 with HDD and cloud backup. So far so good! I'm sure I am about to regret saying that...
I was recently looking for a mini PC to use as a home server with extendable storage. After comparing different options (mostly Intel), I went with the Ryzen 7 5825U (Beelink SER5 Pro) instead. It has an M.2 slot for an SSD and I can install a 2.5" HDD too. The only downside is that the HDD is limited by height to 7 mm (basically a 2 TB storage limit), but I have a 4 TB disk connected via USB for "cold" storage. After years of using different models with Celeron or Intel N CPUs, Ryzen is a beast (and the TDP is only 15W). In my case, AMD has now replaced almost all the compute power in my home (with the exception of the smartphone) and I don't see many reasons to go back to Intel.
Is it possible (and easy) to make a NAS with harddrives for storage and an SSD for cache? I don't have any data that I use daily or even weekly, so I don't want the drives spinning needlessly 24/7, and I think an SSD cache would stop having to spin them up most of the time.
For instance, most reads from a media NAS will probably be biased towards both newly written files, and sequentially (next episode). This is a use case CPU cache usually deals with transparently when reading from RAM.
https://github.com/trapexit/mergerfs/blob/master/mkdocs/docs...
I do this. One mergerfs mount with an ssd and three hdds made to look like one disk. Mergerfs is set to write to the ssd if it’s not full, and read from the ssd first.
A cron job moves the oldest files off the SSD to the HDDs once per night (via a second mergerfs mount without the SSD) if the SSD is getting full - see the sketch below.
I have a fourth hdd that uses snap raid to protect the ssd and other hdds.
Also, https://github.com/bexem/PlexCache which moves files between disks based on their state in a Plex DB
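A nightly mover like that can be just a few lines of shell; a sketch assuming /mnt/ssd is the cache branch and /mnt/slow is the HDD-only mergerfs mount (both placeholders; the mergerfs docs linked above have fuller examples):

    #!/bin/sh
    # run from cron each night: drain the oldest files until the SSD is below 80% full
    CACHE=/mnt/ssd
    BACKING=/mnt/slow
    while [ "$(df --output=pcent "$CACHE" | tail -1 | tr -dc '0-9')" -gt 80 ]; do
        FILE=$(find "$CACHE" -type f -printf '%A@ %P\n' | sort -n | head -n1 | cut -d' ' -f2-)
        [ -n "$FILE" ] || break
        rsync -a --relative --remove-source-files "$CACHE/./$FILE" "$BACKING/"
    done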
Yes. You can use dm-cache.
Thanks. I looked it up and it seems that lvmcache uses dm-cache and is easier to use, I guess putting that in front of some kind of RAID volume could be a good solution.
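A minimal lvmcache sketch of that idea, assuming the HDDs already form a RAID LV vg0/data and the SSD has been added as a PV to the same VG (all names are placeholders):

    # carve a cache LV out of the SSD and attach it to the RAID LV
    lvcreate -L 400G -n datacache vg0 /dev/nvme0n1
    lvconvert --type cache --cachevol datacache --cachemode writethrough vg0/data
    # detach later with: lvconvert --uncache vg0/data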
I used to run a zfs setup with an ssd for L2ARC and SLOG.
Can’t tell you how it worked out performance-wise, because I didn’t really benchmark it. But it was easy enough to set up.
These days I just use SATA SSDs for the whole array.
NVMe NAS is completely and totally pointless with such crap connectivity.
What in the WORLD is preventing these systems from getting at least 10gbps interfaces? I have been waiting for years and years and years and years and the only thing on the market for small systems with good networking is weird stuff that you have to email Qotom to order direct from China and _ONE_ system from Minisforum.
I'm beginning to think there is some sort of conspiracy to not allow anything smaller than a full size ATX desktop to have anything faster than 2.5gbps NICs. (10gbps nics that plug into NVMe slots are not the solution.)
>What in the WORLD is preventing these systems from getting at least 10gbps interfaces?
Price and price. Like another commenter said, there is at least one 10Gbe mini NAS out there, but it's several times more expensive.
What's the use case for the 10GbE? Is ~200MB/sec not enough?
I think the segment for these units is low price, small size, shared connectivity. The kind of thing you tuck away in your house invisibly and silently, or throw in a bag to travel with if you have a few laptops that need shared storage. People with high performance needs probably already have fast nvme local storage is probably the thinking.
> What's the use case for the 10GbE? Is ~200MB/sec not enough?
When I'm talking to an array of NVMe drives? Nowhere near enough, not when each drive could do 1000MB/s of sequential writes without breaking a sweat.
You can order the Mac mini with 10gbps networking and it has 3 thunderbolt 4 ports if you need more. Plus it has an internal power supply making it smaller than most of these mini PCs.
That's what I'm running as my main desktop at home, and I have an external 2TB TB5 SSD, which gives me 3 GB/sec.
If I could get the same unit for like $299 I'd run it like that for my NAS too, as long as I could run a full backup to another device (and a 3rd on the cloud with Glacier of course).
It's annoying, around 10 years ago 10gbps was just starting to become more and more standard on bigger NAS, and 10gbps switches were starting to get cheaper, but then 2.5GbE came out and they all switched to that.
The SFP+ transceivers are hot and I mean literally.
That's because 10GbE tech is not there yet. Everything overheats and drops-out all the time, while 2.5GbE just works. In several years from now, this will all change, of course.
Speak for yourself. I have AQC cards in a PC and a Mac, Intel gear in my servers, and I can easily sustain full speed.
It especially sucks when even low end mini PCs have at least multiple 5Gbps USB ports, yet we are stuck with 1Gbps (or 2.5, if manufacturer is feeling generous) ethernet. Maybe IP over Thunderbolt will finally save us.
> What in the WORLD is preventing these systems from getting at least 10gbps interfaces?
They definitely exist, two examples with 10 GbE being the QNAP TBS-h574TX and the Asustor Flashstor 12 Pro FS6712X.
Not many people have fiber at home. Copper 10gig is power hungry and demands good cabling.
> Copper 10gig is power hungry and demands good cabling.
Power hungry yes, good cabling maybe?
I run 10G-Base-T on two Cat5e runs in my house that were installed circa 2001. I wasn't sure it would work, but it works fine. The spec is for 100 meter cable in dense conduit. Most home environments with twisted pair in the wall don't have runs that long or very dense cabling runs, so 10g can often work. Cat3 runs probably not worth trying at 10G, but I've run 1G over a small section of cat3 because that's what was underground already.
I don't do much that really needs 10G, but I do have a 1G symmetric connection and I can put my NAT on a single 10G physical connection and also put my backup NAT router in a different location with only one cable run there... the NAT routers also do NAS and backup duty, so I can have a little bit of physical separation between them, plus I can reboot one at a time without losing NAT.
Economical consumer-oriented 10G is coming soon; lots of announcements recently and reasonable-ish products on AliExpress. All of my current 10G NICs are used enterprise stuff, and the switches are used high-end (fairly loud) SMB gear. I'm looking forward to getting a few more ports in the not-too-distant future.
Consider the TerraMaster F8 SSD.
So Jeff is a really decent guy that doesn't keep terabytes of Linux ISOs.
Something that Apple should have done with the Time Capsule and iOS, but they were too focused on services revenue.
I think the N100 and N150 suffer the same weakness for this type of use case in the context of SSD storage and 10Gb networking. We need a next-generation chip that can offer more PCIe lanes at roughly the same power efficiency.
I would remove points for the built-in, non-modular, non-standardized power supply. It's not fixable, and it's not comparable to Apple in quality.
I am currently running an 8x 4TB NVMe NAS via OpenZFS on TrueNAS Linux. It is good, but my box is quite large. I built it with a standard AMD motherboard using both the built-in NVMe slots and a bunch of PCIe expansion cards. It is very fast.
I was thinking of replacing it with a Asustor FLASHSTOR 12, much more compact form factor and it fits up to 12 NVMes. I will miss TrueNAS though, but it would be so much smaller.
You can install TrueNAS on it: https://www.jeffgeerling.com/blog/2023/how-i-installed-truen...
You can install TrueNAS Linux on the Flashstor 12. It has no GPU or video out, but I installed an M.2 GPU to attach an HDMI monitor.
Whenever these things come up I have to point out that most of these manufacturers don't do BIOS updates. Since Spectre/Meltdown we see CPU and BIOS vulnerabilities every few months to yearly.
I know you can patch microcode at runtime/boot, but I don't think that covers all vulnerabilities.
Hence the need for coreboot support.
I've been thinking about moving from SSDs for my NAS to solid state. The drives are so loud, all the time, it's very annoying.
My first experience with these cheap mini PCs was with a Beelink, and while it was very positive, it makes me question the longevity of the hardware. For a NAS, that's important to me.
I've been using a QNAP TBS-464 [1] for 4 years now with excellent results. I have 4x 4TB NVMe drives and get about 11TB usable after RAID. It gets slightly warm but I have it in my media cabinet with a UPS, Mikrotik router, PoE switches, and ton of other devices. Zero complaints about this setup.
The entire cabinet uses under 1kWh/day, costing me under $40/year here, compared to my previous Synology and home-made NAS which used 300-500W, costing $300+/year. Sure, I paid about $1500 in total when I bought the QNAP and the NVMe drives, but just the electricity savings made the expense worth it, let alone the performance, features, etc.
1. https://www.qnap.com/en-us/product/tbs-464
Thanks, I’ll give it a look. I’m running a Synology right now. It only has 2 drives, so just swapping those out for SSDs would cost as much as a whole 4xNVMe setup, as I have 8TB HDDs in there now.
HDD -> SSD, I assume. For me it's more about random access times.
> moving from SSDs for my NAS to solid state.
SSD = Solid State Drive
So you're moving from solid state to solid state?
That should have been HDD. Typo. Seems too late to edit.
What types of distributed/network filesystem are people running nowadays on Linux?
Ceph or MooseFS are the two that I've seen most popular. All networked FS have drawbacks, I used to run a lot of Gluster, and it certainly added a few grey hairs.
I use Ceph. 5 nodes, 424TiB of raw space so far.
thanks for the article.
I'm dreaming of this: a mini-NAS connected directly to my TV via HDMI or USB. I think I'd want HDMI and let the NAS handle streaming/decoding. But if my TV can handle enough formats, maybe USB will do.
anyone have experience with this?
I've been using a combination of media server on my Mac with client on Apple TV and I have no end of glitches.
I've been running Plex on my AppleTV 4k for years with few issues.
It gets a lot of use in my household. I have my server (a headless Intel iGPU box) running it in docker with the Intel iGPU encoder passed through.
I let the iGPU default encode everything realtime, and now that plex has automatic subtitle sync, my main source of complaints is gone. I end up with a wide variety of formats as my wife enjoys obscure media.
One of the key things that helped a lot was segregating anime to its own TV collection so that anime-specific defaults can be applied there.
You can also run a client on one of these machines directly, but then you are dealing with desktop Linux.
Streaming (e.g., Plex or Jellyfin or some UPnP server) helps you send the data to the TV client over the network from a remote server.
As you want to bring the data server right to the TV, and you'll output the video via HDMI, just use any PC. There are plenty of them designed for this (usually they're fanless for reducing noise)... search "home theater PC."
You can install Kodi as the interface/organizer for playing your media files. It handles all the formats... the TV is just the output.
A USB CEC adapter will also allow you to use your TV remote with Kodi.
thanks! I've tried Plex, Jellyfin etc. on my Mac. I've tried three different Apple TV apps as streaming clients (Infuse, etc.). They are all glitchy. Another key problem is that if I want to bypass the streaming server on my Mac and have Infuse on the Apple TV just read files from the Mac, the option is the Windows NFS protocol... which gives way too much sharing by providing the Infuse app with a Mac ID/password.
Just get an Nvidia Shield. It plays pretty much anything, even though it's a fairly old device. Your aim should not be to transcode but to just send data when it comes to video.
These look compelling, but unfortunately, we know that SSDs are not nearly as reliable as spinning rust hard drives when it comes to data retention: https://www.tomshardware.com/pc-components/storage/unpowered...
(I assume M.2 cards are the same, but have not confirmed.)
If this isn’t running 24/7, I’m not sure I would trust it with my most precious data.
Also, these things are just begging for a 10Gbps Ethernet port, since you're going to lose out on a ton of bandwidth over 2.5Gbps... though I suppose you could probably use the USB-C port for that.
Your link is talking about leaving drives unpowered for years. That would be a very odd use of a NAS.
True, but it's still concerning. For example, I have a NAS with some long-term archives that I power on maybe once a month. Am I going to see SSD data loss from a usage pattern like that?
no. SSD data loss is in the ~years range
Would be nice to see what those little N100 / N150 (or big brother N305 / N350) can do with all that NVMe. Raw throughput is pretty whatever but hypothetically if the CPU isn't too gating, there's some interesting IOps potential.
Really hoping we see 25/40GBASE-T start to show up, so the lower market segments like this can do 10Gbit. Hopefully we see some embedded Ryzens (or other contenders more generous with PCIe) in this space, at a value-oriented price. But I'm not holding my breath.
The problem quickly becomes PCIe lanes. The N100/150/305 only have 9 PCIe 3.0 lanes. 5Gbe is fine, but to go to 10Gbe you need x2.
Until there is something in this class with PCIe 4.0, I think we're close to maxing out the IO of these devices.
Not only the lanes, but putting through more than 6 Gbps of IO on multiple PCIe devices on the N150 bogs things down. It's only a little faster than something like a Raspberry Pi, there are a lot of little IO bottlenecks (for high speed, that is, it's great for 2.5 Gbps) if you do anything that hits CPU.
The CPU bottleneck would be resolved by the Pentium Gold 8505, but it still has the same 9 lanes of PCIe 3.0.
I only came across the existence of this CPU a few months ago; it is nearly the same price class as an N100, but has a full Alder Lake P-core in addition. It is a shame it seems to only be available in six-port routers; then again, that is probably a pretty optimal application for it.
This is what baffles me - 2.5gbps.
I want smaller, cooler, quieter, but isn’t the key attribute of SSDs their speed? A raid array of SSDs can surely achieve vastly better than 2.5gbps.
A single SSD can (or at least NVMe can). You have to question whether or not you need it: what are you doing that would run at line speed a large enough portion of the time that the time savings are worth it? Or it's just a toy, which is totally cool too.
4 7200 RPM HDDs in RAID 5 (like WD Red Pro) can saturate a 1Gbps link at ~110MBps over SMB 3. But that comes with the heat and potential reliability issues of spinning disks.
I have seen consumer SSDs, namely Samsung 8xx EVO drives have significant latency issues in a RAID config where saturating the drives caused 1+ second latency. This was on Windows Server 2019 using either a SAS controller or JBOD + Storage Spaces. Replacing the drives with used Intel drives resolved the issue.
My use is a bit into the cool-toy category. I like having VMs where the NAS has the VMs and the backups, and like having the server connect to the NAS to access the VMs.
Probably a silly arrangement but I like it.
Even if the throughput isn't high, it sure is nice having the instant response time & amazing random access performance of a ssd.
2TB SSDs are super cheap. But most systems don't have the expandability to add a bunch of them. So I fully get the incentive here, being able to add multiple drives, even if you're not reaping additional speed.
2.5Gbps is selected for price reasons. Not only is the NIC cheap, but so is the networking hardware.
But yeah, if you want fast storage just stick the SSD in your workstation, not on a mini PC hanging off your 2.5Gbps network.
I want a NAS I can put 4TB NVMe drives in, plus a 12TB HDD running a backup every night, with the ability to shove a 50Gbps SFP card in it so I can truly have a detached storage solution.
The lack of highspeed networking on any small system is completely and totally insane. I have come to hate 2.5gbps for the hard stall it has caused on consumer networking with such a passion that it is difficult to convey. You ship a system with USB5 on the front and your networking offering is 3.5 orders of magnitude slower? What good is the cloud if you have to drink it through a straw?
10gbps would be a good start. The lack of wifi is mentioned as a downside, but do many people want that on a NAS?
Yeah that’s what I want too. I don’t necessarily need a mirror of most data, some I do prefer, but that’s small.
I just want a backup (with history) of the data-SSD. The backup can be a single drive + perhaps remote storage
Would you really want the backup on a single disk? Or is this backing up data that is also versioned on the SSDs?
These need remote management capabilities (IPMI) to not be a huge PITA.
How often do you use IPMI on a server? I have a regular desktop running Proxmox, and I haven't had to plug in a monitor since I first installed it like 2 years ago
A JetKVM, NanoKVM, or the like is useful if you want to add on some capability.
I haven't even thought about my NAS in years. No idea what you're talking about.
I will wait until they have an efficient AMD chip for one very simple reason: AMD graciously allows ECC on some* CPUs.
*Well, they allowed it on all CPUs, but after Zen 3 they saw how much money Intel was making and joined in. Now you must get a "PRO" CPU to get ECC support, even on mobile (but good luck finding ECC SO-DIMMs).
And good luck finding a single fucking computer for sale that even uses these "Pro" CPUs, because they sure as hell don't sell them to the likes of Minisforum and Beelink.
There was some stuff in DDR5 that made ECC harder to implement (unlike DDR4, where pretty much everything AMD made supported unbuffered ECC by default), but it's still ridiculous how hard it is to find something that supports DDR5 ECC and doesn't suck down 500W at idle.