I wish one or more HD manufacturers would get together and sell a NAS that runs TrueNAS. Or even an existing NAS manufacturer (UGreen, etc.).
All these NAS manufacturers are spending time developing their own OS when TrueNAS is well established.
TrueNAS isn’t nearly friendly enough for the average user. HexOS may fit that bill, although it seems rather immature. It runs on top of TrueNAS.
I currently run 2 Synology NASes in my setup. I am very satisfied with their performance, but I will be phasing them out because their offerings are evolving in line not with customer satisfaction but with profit maximization through segmentation and vertical lock-in.
I'm in a similar position. I'm on my second NAS in the last 12 years. I've been very satisfied with their performance, but this kind of behavior is just completely unacceptable. I guess I'll need to look into QNAP or some other brand. Also, I think my four-disk setup is RAID 5, but it might be Synology's proprietary version, so I'll need to figure out how to migrate off of that. I don't think I'll be able to just insert the drives into a different NAS and have it work.
Do you have a plan on what you’re going to move to?
I’ve used UnRaid before (and still use it), but I switched to Synology for my data a while back due to both the plug-and-play nature of their systems (it’s been rock solid for me) and the easily accessible hard drive trays.
I’ve built 3 UnRaid servers and while I like the software, hardware was always an issue for me. I’d love a 12-bay-style Synology hardware device that I could install whatever I wanted on. I’m just not interested in having to halfway deconstruct a tower to get at 1 hard drive. Hotswap bays are all I want to deal with now.
I have an 8-bay NAS from Synology and I’m now considering moving elsewhere when I have to replace it.
Is there something with 6-8 drive slots on which I could install whatever OS I want? Ideally with a small form factor; I don’t want a giant desktop again for my NAS purposes.
I'm going to buck the nerds and say I wish Drobo was back. I love my 5N, but had to retire it as it began to develop Type B Sudden Drobo Death Syndrome* and switch out to QNAP.
It was simple, it just worked, and I didn't have to think about it.
* TB SDDS - a multi-type phenomenon of Drobo units suddenly failing. There were three 'types' of SDDS I and a colleague discovered: "Type A" power management IC failures, "Type B" unexplainable lockups and catatonia, and "Type C" failed batteries. Type B units' SoCs have power and clock going in and nothing coming out.
Storing encrypted blobs in S3 is my new strategy for bulk media storage. You'll never beat the QoS and resilience of the cloud storage product with something at home. I have completely lost patience with maintaining local hardware like this. If no one has a clue what is inside your blobs, they might as well not exist from their perspective. This feels like smuggling cargo on a federation starship, which is way cooler to me than filling up a bunch of local disks.
I don't need 100% of my bytes to be instantly available to me on my network. The most important stuff is already available. I can wait a day for arbitrary media to thaw out for use. Local caching and pre-loading of read-only blobs is an extremely obvious path for smoothing over remote storage.
Other advantages should be obvious. There are no limits to the scale of storage, and unless you are a top-1% hoarder, the cost will almost certainly be offset by the capex you would otherwise have spent on all that hardware.
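To make the workflow concrete, here is a minimal Python sketch of the "encrypt locally, store only ciphertext remotely" idea, assuming boto3 and the cryptography package are available; the bucket name, key file, and STANDARD_IA storage class are illustrative choices, not part of the comment above, and a real setup would stream large media files rather than reading them whole into memory.

    # Minimal sketch: encrypt a file locally, upload only the ciphertext to S3.
    # Bucket name, key file, and storage class are illustrative assumptions.
    import boto3
    from cryptography.fernet import Fernet

    BUCKET = "my-bulk-media"   # hypothetical bucket
    KEY_FILE = "blob.key"      # symmetric key that never leaves your machine

    def load_or_create_key(path: str = KEY_FILE) -> bytes:
        try:
            with open(path, "rb") as fh:
                return fh.read()
        except FileNotFoundError:
            key = Fernet.generate_key()
            with open(path, "wb") as fh:
                fh.write(key)
            return key

    def upload_encrypted(local_path: str, object_key: str) -> None:
        """Encrypt a local file and store only the opaque blob in S3."""
        fernet = Fernet(load_or_create_key())
        with open(local_path, "rb") as fh:
            ciphertext = fernet.encrypt(fh.read())
        boto3.client("s3").put_object(
            Bucket=BUCKET,
            Key=object_key,
            Body=ciphertext,
            StorageClass="STANDARD_IA",  # or a Glacier class for colder data
        )

    # upload_encrypted("holiday-footage.mkv", "media/holiday-footage.mkv.enc")

Restoring colder objects (the "thaw out" step mentioned above) would go through S3's restore operation before download.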
> If no one has a clue what is inside your blobs, they might as well not exist from their perspective.
This is not the perspective of actors working on longer timescales. For a number of agencies, preserving some encrypted data is beneficial, because it may become recoverable in N years, whether through improvements in classical cryptanalysis, bugs found in key generators, or advances in quantum computing.
Very few people here will be that interesting, but... worth keeping in mind.
The point of encryption in this context is to defeat content fingerprinting techniques, not the focused resources of a nation state.
S3 or glacier? Glacier is cost competitive with local disk but not very practical for the sorts of things people usually need lots of local disk for (media & disk images). Interested in how you use this!
20TB, which you can keep in a cute little 2-bay NAS, will cost you about $4k USD/year on the S3 Infrequent Access tier in APAC (where I am). So the "payback time" of local hardware is just ~6 months vs S3 IA. That's before you pay for any data transfers.
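As a rough sanity check of those numbers, here is the back-of-envelope arithmetic in Python; the per-GB rate and the hardware cost are assumed illustrative figures, not quoted AWS or retail prices.

    # Back-of-envelope for the figures above. The per-GB rate and hardware
    # cost are assumed illustrative values, not quoted prices.
    data_gb = 20 * 1000                  # 20 TB, decimal units as AWS bills
    s3_ia_per_gb_month = 0.019           # assumed APAC Standard-IA rate, USD
    monthly = data_gb * s3_ia_per_gb_month
    annual = monthly * 12                # ~$4,560/year at this assumed rate

    local_hw_cost = 2000.0               # assumed 2-bay NAS + 2x 20TB drives
    payback_months = local_hw_cost / monthly   # ~5-6 months

    print(f"S3 IA: ~${annual:,.0f}/yr; local payback ~{payback_months:.1f} months")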
> S3 or glacier
This is the same product.
> 20TB
I think we might be pushing the 1% case here.
Just because we can shove 20TB of data into a cute little nas does not mean we should.
For me, knowledge that the data will definitely be there is way more important than having "free" access to a large pool of bytes.
20TB isn't all that much anymore, especially if you do anything like filming, streaming, photography, etc. Even a handful of HQ TV shows can reach several TB rather quickly.
Did you factor in the resilience and redundancy that S3 gives you and that you cannot opt out of? I have my NAS, and it is cheaper than S3 if I ignore those, but having to run two offsite backups would make it much less compelling.
Synology became so bad that they measure disk space in percent, and the threshold cannot be configured lower than 5%. This may have been okay when volume sizes were in gigabytes, but with today's multi-TB drives, 5% is a lot of space.

The result is a NAS in a permanent alarm state because less than 5% of the space is free. And that makes it less likely for the user to notice when an actual alarm happens, because they are desensitised to warnings.

I submitted this to them at least four times, and the reply is always that this is fine, it has already been decided to be like that, so they will not change it.

Another stupid thing: notifications about low disk space are sent via email and push only until about 30 GB is free. Once free space drops below 30 GB and heads toward zero, notifications are not sent anymore.

My multiple reports about this issue were always answered along the lines of “it’s already done like that, so we will not change it”.
Most modern companies, especially software companies, choose not to fix relatively small but critical problems, yet they employ sometimes hundreds of customer-support yes-people whose job seems to be defusing customer complaints. Nothing is ever fixed anymore.
I think preventing alarm fatigue is a very good reason to fix issues.
But 5% free is very low. You may want to use every single byte you feel you paid for, but allocation algorithms really break down when free space gets so low. Remember that there's not just a solid chunk of that 5% sitting around at the end of the space. That's added up over all the holes across the volume. At 20-25% free, you should already be looking at whether to get more disks and/or deciding what stuff you don't actually need to store on this volume. So a hard alarm at 5% is not unreasonable, though there should also be a way to set a soft alarm before then.
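As a small illustration of that "soft alarm plus hard alarm" suggestion, a check along the following lines would cover both cases; the thresholds and the /volume1 mount point are assumptions for the example, not anything DSM actually exposes.

    # Sketch of a dual free-space alarm: a relative soft warning plus an
    # absolute, user-configurable hard floor. Thresholds and mount point
    # are assumptions for illustration only.
    import shutil

    SOFT_FREE_FRACTION = 0.20        # warn early, while rebalancing is easy
    HARD_FREE_BYTES = 50 * 1024**3   # e.g. a 50 GiB absolute floor

    def check_free_space(mount: str = "/volume1") -> str:
        usage = shutil.disk_usage(mount)
        free_fraction = usage.free / usage.total
        if usage.free < HARD_FREE_BYTES:
            return "CRITICAL: below absolute free-space floor"
        if free_fraction < SOFT_FREE_FRACTION:
            return f"WARNING: only {free_fraction:.0%} free, plan more capacity"
        return "OK"

    # print(check_free_space())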
100%. Those disks are likely working much harder, moving the head all over the place to find those empty spaces when writing.
5% of my 500 GB is 25 GB, which is already a lot of space but understandable. Not many things would fit in there nowadays.
But 5% of a 5 TB volume is 250 GB; that's the size of my whole system disk! Probably not so understandable to the lay person.
This is partly why SSDs just lie nowadays and tell you they only have 75-90% of the capacity that is actually built into them. You can't directly access that excess capacity but the drive controller can when it needs to (primarily to extend the life of the drive).
Some filesystems do stake out a reservation but I don't think any claim one as large as 5% (not counting the effect of fixed-size reservations on very small volumes). Maybe they ought to, as a way of managing expectations better.
For people who used computers when the disks were a lot smaller, or who primarily deal in files much much smaller than the volumes they're stored on, the absolute size of a percentage reservation can seem quite large. And, in certain cases, for certain workloads, the absolute size may actually be more important than the relative size.
But most file systems are designed for general use, and across a variety of different workloads, spare capacity and the impact of (not) keeping it open is more about relative than absolute sizes. Besides fragmentation, there are also bookkeeping issues, like adding one more file to a directory cascading into a complete rearrangement of the internal data structures.
Can you “btrfs send” snapshots from a RAID array in DSM to a Linux server?
If it were a ZFS NAS, I could zfs send to another system.
I want to get the historical data out to an open portable system.
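For reference, this is what the generic send/receive pipeline looks like on plain Linux, sketched here with Python's subprocess module; whether DSM permits running it against its volumes is exactly the open question above, and the paths and host are hypothetical.

    # Generic btrfs send/receive over SSH, as a reference sketch. Whether DSM
    # allows this is the open question; paths and host are hypothetical, and
    # the source snapshot must be read-only.
    import subprocess

    def send_snapshot(snapshot_path: str, remote: str, remote_dir: str) -> None:
        """Stream a read-only btrfs snapshot to another Linux box over SSH."""
        sender = subprocess.Popen(
            ["btrfs", "send", snapshot_path], stdout=subprocess.PIPE
        )
        subprocess.run(
            ["ssh", remote, f"btrfs receive {remote_dir}"],
            stdin=sender.stdout, check=True
        )
        sender.stdout.close()
        if sender.wait() != 0:
            raise RuntimeError("btrfs send failed")

    # send_snapshot("/volume1/@snapshots/data-2024-01", "backup-host", "/srv/backups")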
I think they are on their way out of the consumer market. Their Linux kernel is old, and DSM doesn't update the kernel.
What's better for running Plex?
Assuming I want 4 drives and something that can transcode multiple files in real time.
About a year ago I bit the bullet and Frankensteined together a TrueNAS Scale box. I hated the idea of the DIY route and stuck with frustrating SOHO NAS devices for way too long. An all-in-one NAS product that has an app store or supports containers is nice in theory, but in my experience it always ended up with a NUC running Proxmox sitting next to the NAS.
Managing it is fine. It expects you to understand ZFS more than a turnkey RAID 5 + btrfs setup does, but it has an OK UI that seems born out of ZFS people wanting that customization, not out of a desire to make you fend for yourself. I read a 15-minute explainer, built a pool, and didn't have to think about it at all other than replacing a failed drive last month. And all that took was a quick Google to make sure I was doing it right, which is exactly how I replaced drives in a standalone NAS.
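For what that drive swap amounts to at the command level, here is a rough sketch wrapping the standard ZFS commands from Python; the pool name and device paths are hypothetical, and on TrueNAS you would normally do this through the web UI instead.

    # Sketch of replacing a failed disk in a ZFS pool via the standard zpool
    # commands. Pool name and device paths are hypothetical examples.
    import subprocess

    def replace_failed_drive(pool: str, old_dev: str, new_dev: str) -> None:
        """Swap a failed disk for a new one and let ZFS resilver onto it."""
        subprocess.run(["zpool", "replace", pool, old_dev, new_dev], check=True)
        subprocess.run(["zpool", "status", pool], check=True)  # watch resilver

    # replace_failed_drive("tank", "/dev/sdc", "/dev/sdf")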
Plex runs very happily on my Synology box. So I'd want something that was a reasonably drop-in replacement, and not something that required more work.
Their hardware has been dogshit for years, TBH. This year's upgrades were to roughly 2020-era tech, and some of these models won't be upgraded again until 2030!
The only parts of Synology I really like are some of their media apps, which are a very tidy package. I've previously written a compatible server in NodeJS that can work with their apps, so I think I'll have to pursue that idea further given the vastly superior consumer hardware options that exist for NAS.
Do you have some examples of the superior consumer hardware options? I currently use 2x 12-bay Synology NAS units, and one of my favorite parts is the hardware: the easy access to the hot-swap hard drives.
If I could get that form factor, but with a custom NAS software solution, I’d be very interested.
I run a Sliger 3701 [0] with an ASRock Rack Ryzen motherboard (choose B650D4U or X570D4U to suit) using TrueNAS Scale. This replaced my old FreeNAS Atom C2000 setup that ran for over a decade with zero issues.
Add in a Mikrotik CCR2004-1G-2XS-PCIE [1] for high speed networking. Choose your own HBA.
0: https://www.sliger.com/products/rackmount/storage/cx3701/
1: https://mikrotik.com/product/ccr2004_1g_2xs_pcie
I don't think you'd get 12 bays too easily, but one recent post here covers an AMD Hawk Point box with 6x 3.5" drives + 5x M.2 drives + OCuLink + up to 128GB of RAM.
https://liliputing.com/?s=nas
Thank you for the link. Currently my needs are more about 3.5” high-capacity drives and less about SSD storage (aside from a TB or 2 of cache). I just don't need SSDs for my media collection; it seems like a waste when I can pick up 24-26+ TB drives for so much cheaper.
My next NAS is going to be the Ubiquiti UNAS Pro. 7 drive bays for $499. Can’t beat it.
Can you put a custom OS on it, wiping their software? I have a network rack in my basement (so, shallow depth, can’t fit a “full” depth server) and I’m looking for a dumb box of hard drives with otherwise simple/supported-by-linux hardware that I can use to build my own NAS. I don’t really want to use unifi’s software for it (plus it’s nice to just run plex/jellyfin/etc directly on the box)
I currently plug a USB3 4-bay disk enclosure into my homelab server for this, but the cabling is messy and it doesn’t support 20TB drives. I could upgrade to a newer enclosure, but I’d rather have a “real” rack mount system with drive bays.
What is the file system? Btrfs or ZFS?
I only wish they did a 2-bay or 4-bay version. Or better yet, something like a Time Capsule: 2-bay + Ubnt Express.
At first I was going to balk, but then I remembered I paid ~$1.5K for a 12-bay Synology (and again for the 12-bay expansion unit).
This is much larger I think (and it bugs me that it’s not an even number of drives, and the offset of the drives is unpleasant) but it’s rackable so that’s a plus.
The only thing I’d want to know is sound (and I’m sure I can find a YouTube video).
I’ve been looking for an excuse to go all-in on a Ubiquiti setup… Thanks for mentioning this, I wasn’t aware Ubiquiti had a NAS product.
Looks like their $300/4GB/1U/4bay and $500/8GB/2U/7bay half-depth devices with AWS-Annapurna SoC can run either NAS or NVR Linux. Bluetooth in a rackable device is unusual. OS might be replaceable with mainline Debian.
https://github.com/NeccoNeko/UNVR-diy-os/blob/main/IMAGES.md
If it can run Gentoo, it might be a big energy savings vs my old off-lease machine with ZFS…
There's also a QNAP 1U for $600, which adds M.2 NVME and optional 32GB ECC RAM https://news.ycombinator.com/item?id=40868855
We need more Thunderbolt/USB4-to-JBOD 40Gbps storage enclosure options, for use with Ryzen mini PC or Lenovo Tiny.