nemesis1782
Contributor
Joined: Mar 2, 2021
Messages: 105
Hi,
I've been using Synology NAS units for quite some time and currently have a 2415+, which I got working again after a stressful day of digging through the internet (the C2000 issue :( ).
Now, while stressed out of my mind, I was also looking to replace what I have, and I've wanted to get familiar with ZFS for a long time. A dead NAS is always a good starting point for that, right? :P Since my current one is back, I do have some time to figure out what would be best in my case without breaking the bank.
I have been looking into TrueNAS and I'm quite intrigued. However, for ZFS I seem to be finding a lot of conflicting information, which concerns me. Some topics on which I find conflicting information are:
- stability
- performance
- usability
The consensus, however, seems to be that ZFS has real benefits but needs more careful planning, and the penalties for getting it wrong can be high. This will be a fairly long post, so feel free to weigh in on just a part of it.
The use case:
My use case is fairly wide. I would like the following, in descending order of importance:
- Reliable backups for my important data set (currently backed up to 2 external Synologies and to an Azure share, all of them encrypted; a rough sketch of how this might look on ZFS follows after this list)
- Shares for Documents/Photos/Audio/Video/Binaries
- At least 30TB of usable storage capacity with at least 1-disk failure tolerance
- Plex server
- iSCSI
- Docker/Kubernetes capability
- VMware (or other virtualization) capability
- Anything else that pops into my head :P
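For the backup item, I imagine ZFS-native snapshots and replication could cover the on-site part; a minimal sketch, assuming hypothetical pool/dataset names and a reachable backup host:

```sh
# Snapshot the important dataset, then replicate it to another machine.
# "tank/important", "backup-host" and "backuppool" are placeholders.
zfs snapshot tank/important@2021-03-02
zfs send tank/important@2021-03-02 | ssh backup-host zfs receive backuppool/important
```

Incremental sends (zfs send -i) would presumably keep follow-up runs small, but I'd still want an off-site copy like the Azure share on top of this.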
So basically a homelab masquerading as a NAS :P
What I find important:
Stability and reliability (I like fiddling with systems, but my wife is a user and she needs to stay happy!)
Performance (IO, sequential as well as random read)
Rebuilds that are as low-risk as possible
1-disk failure recovery; anything more than 1-disk failure recovery is just a cherry on top
Power usage and cost
The hardware (For the first two planned steps):
Mainboard: Gigabyte GA-Z68X-UD4-B3
CPU: Intel i7-2700K
Memory: Corsair Vengeance LP 4x4GB 1600MHz DDR3 (non-ECC)
Video card: MSI GTX 560 Ti (will be replaced with something low-power if it ever goes into production)
Power Supply: Corsair TX750
Network: initially the single onboard 1GbE port; in the future this might change to a 4-port 1GbE card (with LAG) or a 1-port 10GbE PCIe card
Disk controller: 6x SATA onboard + 2x SATA onboard + 2x SATA onboard, plus 2x eSATA onboard (which would give 10 internal drives; I might extend this with an 8-port LSI HBA at some point)
Disks: not sure yet, I'll have to rummage around for the test setup. In my Synology I have 5x 6TB WD60EFRX-68L0BN1 and 2x 10TB ST10000NM0478-2H7100, which may get migrated. I'm also thinking of adding 3 more 10TB ST10000NM0478-2H7100 drives for the final setup.
SSDs: a 240GB Kingston as a boot drive (and maybe to offload some things onto), 2x 1TB Samsung 860s which are in use in the Synology, and possibly some small SSDs for the IO-heavy side operations of ZFS.
The Plan (Test):
Download TrueNAS, throw some small orphaned disks I have lying around at it, and see how far I get.
The Plan (First actual setup):
Delete everything on the test setup and restart with what I've learned. Create 1 vdev of 5x10TB in RAIDZ1, add a cache of 2x 1TB SSDs, and 2 smaller SSDs for the IO-heavy ZFS operations (I don't remember the correct terminology).
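If I've understood the CLI correctly, the pool I have in mind would look roughly like the sketch below; the device names are placeholders, and on TrueNAS you'd normally do all of this through the GUI anyway:

```sh
# Rough sketch of the planned pool; sdb..sdj are placeholder device names.
zpool create tank raidz1 sdb sdc sdd sde sdf   # 5x10TB data vdev, 1 disk of parity
zpool add tank cache sdg sdh                   # L2ARC read cache (the 2x 1TB SSDs)
zpool add tank log mirror sdi sdj              # SLOG for sync writes (small SSDs, mirrored)
```

Then: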
- Configure the base system
- Configure backups
- Migrate the data.
- Have a database running on it.
- Have Plex running.
- Get Docker up and running and set it up as the master for my Kubernetes cluster (with 3 Raspberry Pi 4 slaves); one possible approach is sketched after this list.
- Set up Radarr/Sonarr/SABnzbd/etc.
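For the Kubernetes part I'm assuming something lightweight like k3s (my assumption; I haven't settled on a distribution), which would make the setup roughly:

```sh
# On the NAS: install the k3s server (control plane)
curl -sfL https://get.k3s.io | sh -
# Read the join token the server generated
sudo cat /var/lib/rancher/k3s/server/node-token
# On each Raspberry Pi: join as an agent (<nas-ip> and <token> are placeholders)
curl -sfL https://get.k3s.io | K3S_URL=https://<nas-ip>:6443 K3S_TOKEN=<token> sh -
```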
The Plan (Future setup):
A multi-Xeon system with at least 128GB of ECC RAM. There isn't much more to say about this yet ;)
My questions and possible caveats:
1. ZFS disk space (in)efficiencies: From what I've read, ZFS is far less efficient than the alternatives when it comes to usable disk space at the same cost point.
1.a. An 80% maximum pool usage is something I see floating around often; above 80% full, ZFS seems to lose performance drastically. This means in a 5-disk setup you lose 1 disk's worth right off the bat. If that's really required, fine. What confuses me is that ZFS does not seem to protect itself against this. I mean, there must be guards that make sure this does not happen, right?
1.b. Resilver time and risks: because of the long resilver times and the high stress on the disks during a resilver, I read that 2-disk parity is almost a requirement.
1.c. VDEV width: basically, if you want performance, you want vdevs which aren't very wide (i.e. have few disks). However, each vdev pays its own parity cost, which makes narrow vdevs space-inefficient. Also, putting multiple vdevs into one pool is not really what you want, since if one vdev fails the whole pool fails. That means splitting the data across separate pools, which makes managing the storage much more cumbersome and inefficient.
1. Conclusion/Confusion: Taking all this in, it feels like for a 5-disk vdev you'd lose 4 disks' worth (assuming 3 disks of parity plus a disk due to the 80% factor). That is a steep price! My back-of-envelope math is below.
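Here is that back-of-envelope math as I understand it (decimal TB, ignoring metadata overhead and the TB/TiB gap, so these numbers are optimistic):

```sh
# Usable space of a 5x10TB vdev at each parity level, plus the 80% guideline
for parity in 1 2 3; do
  usable=$(( (5 - parity) * 10 ))
  echo "RAIDZ${parity}: ${usable} TB usable raw, ~$(( usable * 80 / 100 )) TB at 80% full"
done
```

So even the most pessimistic combination (RAIDZ3 plus the 80% rule) would leave 16 TB, rather than the 10 TB my "lose 4 disks" gut feeling suggests; am I counting this right?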
2. ZFS performance: From what I read, this is either superb or terrible depending on whom you ask.
2.a. The 80% pool usage figure: does performance only degrade from 80% upwards, or does it start at a lower percentage?
2.b. I read that ZFS has two IO-heavy helper devices; I forget what they're called. How large should the SSDs for these be, and what IO speed would be recommended for them?
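I assume the two things I'm thinking of are the SLOG (a separate device for the ZFS intent log) and the L2ARC (a second-level read cache); that's my guess at the terminology. If so, one sizing heuristic I've seen for the SLOG is a few transaction groups' worth of sync writes (my assumption, not an official rule):

```sh
# Rough SLOG sizing: it only ever holds a few transaction groups of
# pending sync writes, bounded by how fast data can arrive.
link_MBps=125     # 1 GbE tops out around 125 MB/s
txg_seconds=5     # default transaction group flush interval
headroom=4        # keep a few txgs of slack
echo "~$(( link_MBps * txg_seconds * headroom / 1000 )) GB would already be generous on 1GbE"
```

Low write latency would presumably matter more than capacity for a SLOG, while the L2ARC (the 2x 1TB SSDs) mainly wants read throughput.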
3. Memory:
3.a. I have 16GB of memory, which I gather should be way too little for the pool size I'm aiming for; is this correct? My quick math is below.
3.b. I have non-ECC memory. Can I get a definitive answer on whether you need ECC or not? Some say yes, some say no. Can we get a consensus? (Of course I understand that ECC is better, and why.) The question is whether it will function well without it.
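For 3.a, the oft-quoted rule of thumb is roughly 1GB of RAM per 1TB of storage (my assumption that it even still applies; I've seen it disputed):

```sh
# The classic "1GB RAM per 1TB storage" rule of thumb against my target
target_tb=30   # usable capacity I'm aiming for
have_gb=16     # RAM currently in the box
echo "Rule of thumb suggests ~${target_tb} GB of RAM; I have ${have_gb} GB, so ~$(( target_tb - have_gb )) GB short"
```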
I'll leave it at that for now. Any input would be appreciated, preferably with your reasoning and references included.
Thank you all for taking the time to read this. Regards,
Davy Vaessen