New member, little knowledge, first-time NAS build, compatibility, need help/advice

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I'm thinking about the Fractal Design Define 7, which I think is good for a NAS build.
It is, but Fractal's Meshify 2 is better since it has better airflow (and last time I checked it costs less!).
With zraid2 that would mean that performance goes down by 1 disk and reliability goes up by 1 disk also.
It's not exactly so, and the performance drop is not as drastic as you think. Cache also only helps with reads, not writes. If you absolutely seek performance, you seek flash memory. Also, it's RAIDZ and not ZRAID.
Can I ask also if an 11th-gen motherboard P12R-E-10G-2T will work with zraid? It also has dual 10GbE on board, but I don't think it will be supported under TrueNAS...
As a rule of thumb, if it's not a Realtek NIC, it is supported.
I was working back in 2008-2013 in a big computer sales company, where I was in the IT department and also in sales. The hardware parts with the most issues were HDDs and RAM modules, the latter because even touching the pins with your hands may apply a higher voltage than 1.2-1.35 V and destroy the modules. Also, Memtest sometimes needed 2-3 days or more to show just 1-2 errors! I'm scared when thinking about used RAM, especially since in my case I will buy the modules from the EU, and if something happens the procedure will take time...
In my humble opinion, which is based on my (absolutely incomparable and more limited) experience, you are worrying a bit too much about RAM.
In the EU you are legally entitled to a minimum of 30 days of warranty.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
No, I meant the WS C246 with 8x SATA ports and another case; I'm thinking about the Fractal Design Define 7, which I think is good for a NAS build
Converting a Define 7 to "storage layout" is a bit of a pain, compounded by screwing each drive to its tray and then fixing the trays…
I've found the Nanoxia Deep Silence 8 Pro, which is essentially a cheaper knock-off of the Define, to be much easier to work with for hosting drives. (There may be better cases around but I happen to own these two.)
With zraid2 that would mean that performance goes down by 1 disk and reliability goes up by 1 disk also. I don't know how much faster RAM and NVMe can provide while caching? What should I expect with zraid2, 6 mechanical drives and 1 NVMe drive, for example a WD Black 1TB SN850X?
With an NVMe L2ARC you may expect that data held in cache will be served as fast as the 10 GbE link allows.
For sustained writes, the limit will always be the sustained write speed of the pool. There is no way around that.
Note that L2ARC must be sized according to RAM, and you'd really need 128 GB RAM to have a 1 TB L2ARC. Too large an L2ARC actually hurts performance by evicting ARC…
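If it helps to make this concrete, a rough sketch of how an L2ARC device would be attached, with the roughly 8:1 ratio above as a comment (pool and device names are placeholders; TrueNAS exposes the same thing in the web UI):
Code:
# rule of thumb from above: keep L2ARC to roughly 8x RAM or less
#   64 GB RAM  -> L2ARC of ~500 GB
#   128 GB RAM -> a 1 TB L2ARC becomes reasonable
zpool add tank cache nvd0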

Can I ask also if an 11th-gen motherboard P12R-E-10G-2T will work with zraid? It also has dual 10GbE on board, but I don't think it will be supported under TrueNAS...
ZFS does its magic in software, so if the hardware can run TrueNAS it will do raidz2.
I see no issue with this P12R board, and the X710 is a high-end NIC with full support in TrueNAS, but you'll need a Xeon E-2300 (10/11th gen. Core won't even work) and I suspect that the cost of P12R + E-2300 will be distinctly higher than C246 + Core i3 + Chelsio T520.

Raidz1/2/3 vdevs cannot be modified after creation: One can replace drives with larger ones, but it is currently not possible to widen a vdev and go from, say, 6 drives to 7 or 8. So if you have 8 SATA ports, it would be best to have 8 HDDs from the start.
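To visualise the layouts being discussed, a minimal command-line sketch (device names are placeholders; in practice TrueNAS builds the pool through the web UI):
Code:
# 6-wide raidz2 vdev -- fixed width once created
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5
# you can replace a member with a bigger drive...
zpool replace tank ada0 ada6
# ...but you cannot add a 7th data disk to this vdev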
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
It's not exactly so, and the performance drop is not as drastic as you think. Cache also only helps with reads, not writes. If you absolutely seek performance, you seek flash memory. Also, it's RAIDZ and not ZRAID.
I really want to know where the term ZRAID came from. I've seen it in a few places, Reddit included.
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Converting a Define 7 to "storage layout" is a bit of a pain, compounded by screwing each drive to its tray and then fixing the trays…
I've found the Nanoxia Deep Silence 8 Pro, which is essentially a cheaper knock-off of the Define, to be much easier to work with for hosting drives. (There may be better cases around but I happen to own these two.)
Hmm, I've been toying with the idea of getting a Define 7 XL. I don't mind screwing in each drive and fixing the trays since it's mostly a one-time thing, but is there another issue that makes the conversion a pain?

Raidz1/2/3 vdevs cannot be modified after creation: One can replace drives with larger ones, but it is currently not possible to widen a vdev and go from, say, 6 drives to 7 or 8. So if you have 8 SATA ports, it would be best to have 8 HDDs from the start.
This is really the reason why I'm a striped mirrors fan. It's easy to add vdevs and easy to upgrade (2 drives at a time instead of whatever your vdev width is). Much better performance, drastically faster resilvers, and very minimal load on the pool while resilvering. This is especially significant when you have very large drives (double-digit TB capacity), where upgrading a 6-drive vdev could potentially take weeks even if you're doing NOTHING else on the pool.
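As a rough sketch of why growing striped mirrors is so painless (placeholder device names; the same operations exist in the TrueNAS UI):
Code:
# existing pool of 3x 2-way mirrors
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5
# growing it later is just one more pair
zpool add tank mirror ada6 ada7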
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
@mariosk9gr I have been in IT for... 25 years now directly, and I will admit I have never used an anti-static wrist strap in my life on any build. But I also do not shuffle around on carpets, and I always touch a ground source first. RAM has come a long way. Just buy it from someone who appears to sell mostly server/enterprise equipment and has actual pictures of said RAM, not some Google screen cap, and do not buy it from someone who is selling RAM alongside some summer dresses, car tires, and a used lawn mower either... Most good sellers on eBay do offer returns; if they list "no returns", avoid them.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Hmm, I've been toying with the idea of getting a Define 7 XL. I don't mind screwing in each drive and fixing the trays since it's mostly a one-time thing, but is there another issue that makes the conversion a pain?
In my opinion you should go with the Meshify 2. It's basically the same but with a mesh front panel, giving you overall better temperatures (especially with all the bays full).
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Hmm, I've been toying with the idea of getting a Define 7 XL. I don't mind screwing in each drive and fixing the trays since it's mostly a one-time thing, but is there another issue that makes the conversion a pain?
Four screws, plus four screws, plus four screws, plus… adds up to quite a pain. Then fiddling to secure the trays in the tiny, tiny notches.
Compared with the tool-less trays of the Nanoxia, it's night and day.
And to fill up a Define case with drives one has to purchase extra trays, which may end up costing more than the case itself.

A Define 7 XL can take up to 18 drives—if you buy 12 extra trays. At that point, you should consider a rack mount storage case. It will be noisy but there's no way 18 properly cooled (an extra hurdle to consider) drives in a big ATX tower are going to be quiet.

This is really the reason why I'm a striped mirrors fan. It's easy to add devices, easy to upgrade (2 drives instead of whatever your vdev size is). Much better performance and drastically faster resilvers and very minimal load on the pool while resilvering. This is especially significant when you have very large drives (double-digit TB capacity) where upgrading a 6-drive vdev could potentially take weeks even if you're doing NOTHING else on the pool.
Fair point, but with "double digit TB capacity" you're at a point where the maths of "RAID 5 is dead" applies to a 2-way mirror: upon loss of one drive, there's a significant probability of encountering a URE while resilvering.
3-way mirrors are possible for a flexible and secure design, but come at a high cost.
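For reference, a 3-way mirror layout is just one extra disk per vdev on the command line (placeholder device names; the capacity cost is what makes it expensive):
Code:
# two 3-way mirror vdevs: each vdev survives two failed drives
zpool create tank mirror ada0 ada1 ada2 mirror ada3 ada4 ada5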
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
In my opinion you should go with the Meshify 2. It's basically the same but with a mesh front panel, giving you overall better temperatures (especially with all the bays full).
Oh, I didn't know they are basically the same. I don't understand why they don't just keep one design and offer the mesh front as an option. Are the acoustics the same as well?
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
Four screws, plus four screws, plus four screws, plus… adds up to quite a pain. Then fiddling to secure the trays in the tiny, tiny notches.
Compared with the tool-less trays of the Nanoxia, it's night and day.
And to fill up a Define case with drives one has to purchase extra trays, which may end up costing more than the case itself.
Interesting, I'd never heard of Nanoxia. Also, when I check out their site, only the world site works; the US site doesn't work for some reason.

A Define 7 XL can take up to 18 drives—if you buy 12 extra trays. At that point, you should consider a rack mount storage case. It will be noisy but there's no way 18 properly cooled (an extra hurdle to consider) drives in a big ATX tower are going to be quiet.
Probably not going to fill them up with 18. At most, I'm probably only going to do 12x 3.5" + 2x 2.5" and I'm going to try to use 140 mm fans everywhere I can.

Fair point, but with "double digit TB capacity" you're at a point where the maths of "RAID 5 is dead" applies to a 2-way mirror: upon loss of one drive, there's a significant probability of encountering a URE while resilvering.
3-way mirrors are possible for a flexible and secure design, but come at a high cost.
This is not true. The math for RAID5 is in no way the same as for mirrors. In a RAID5/RAIDZ1, your chance of surviving a second dead drive is 0%, whereas in 6-drive striped mirrors your chance of surviving a second dead drive is still 80%, and the numbers get better with more drives, not to mention your surviving drives are not under an insane amount of load while resilvering. In a 6-drive RAIDZ1 configuration, you have FIVE times the I/O load on the pool vs striped mirrors even if you are doing NOTHING with the pool.
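(Rough sketch of where that 80% comes from, assuming the second failure hits a random remaining drive: in a 3x 2-way striped mirror, once one drive dies, only its single mirror partner is a fatal second failure, so the pool survives a random second failure with probability 4/5 = 80%. With 4x 2-way mirrors it becomes 6/7 ≈ 86%.)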
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Are the acoustics the same as well?
It's hard to say since it depends on how you configure the system, but generally I would say worse, since there is less physical obstruction to the noise. However, you can draw in more air, so you can lower your fan RPMs and potentially get less noise at the same temperatures.

Anyway, @mariosk9gr do you have any further questions?
 

mariosk9gr

Dabbler
Joined
Nov 13, 2022
Messages
12
Interesting, I'd never heard of Nanoxia. Also, when I check out their site, only the world site works; the US site doesn't work for some reason.


Probably not going to fill them up with 18. At most, I'm probably only going to do 12x 3.5" + 2x 2.5" and I'm going to try to use 140 mm fans everywhere I can.


This is not true. The math for RAID5 is in no way the same as for mirrors. In a RAID5/RAIDZ1, your chance of surviving a second dead drive is 0%, whereas in 6-drive striped mirrors your chance of surviving a second dead drive is still 80%, and the numbers get better with more drives, not to mention your surviving drives are not under an insane amount of load while resilvering. In a 6-drive RAIDZ1 configuration, you have FIVE times the I/O load on the pool vs striped mirrors even if you are doing NOTHING with the pool.
So, for example, with striped mirrors of 8 HDDs, how is the performance? Also, as you said the hard drives have much less work to do, does that mean a lower chance of failure? And striped mirrors are like RAID10, I imagine?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
So, for example, with striped mirrors of 8 HDDs, how is the performance?
Performance is very good. The more you can stripe, the better it gets. Striped mirrors have the best IOPS out of all the possible configurations. Here's what one of the ZFS core developers had to say about it. Pay attention to the last bolded part.

Also, as you said the hard drives have much less work to do, does that mean a lower chance of failure?
While resilvering, yes. The big reason why people run RAIDZ2 over RAIDZ1 is that when one of your disks fails, the risk of another drive in the pool going bad is increased due to the increased overall load on the pool while resilvering. Depending on how full your pool is and how big your drives are, this process of resilvering can take anywhere from a few hours (for small drives) to days (for large drives) even if you're doing NOTHING else on the pool. It'd be even slower if you're actively using the pool while it's resilvering, and in that long period of time your remaining drives could start failing too, due to the increased load placed on them. So imagine you have a RAIDZ pool of 6 drives, each at 10 TB. I'd imagine resilvering will probably take a whole day at least, possibly more. The whole time, you're sweating bullets because the remaining 5 HDDs are constantly chugging away resilvering the newly replaced drive. Now imagine you're trying to upgrade the vdev: now you're looking at resilvering SIX 10 TB drives... constant resilvering for possibly a week or more. Now you see why people run RAIDZ2, for that extra assurance while you're resilvering. You don't have this problem with striped mirrors, and in an upgrade, the most you ever have to resilver is 2 drives, that's it.

Furthermore, for each block written to the disk being resilvered, a block of data must be read from each of the surviving drives in a RAIDZ1. By contrast, for a pool of striped mirrors no such thing is required: you're just reading data from the one surviving drive in the vdev you're repairing. The rest of the pool has no added load, and your resilvering time is much, much shorter because of this. In a 6-drive RAIDZ1, resilvering a drive would require FIVE reads (one from each surviving drive) vs 1 for striped mirrors; that is FIVE times the amount of I/O load. Performance is also barely impacted with mirrors and you can probably still use the pool while resilvering, whereas in RAIDZ1 performance would degrade quite significantly, and it is even worse for RAIDZ2 or 3 since the parity calculations get even hairier.
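To put the mechanics in concrete terms, replacing a failed disk is a single operation in either layout (placeholder device names; TrueNAS exposes the same thing in the UI) and the load difference comes from what the pool has to read behind the scenes:
Code:
zpool replace tank ada3 ada8   # swap the dead disk for the new one
zpool status tank              # watch resilver progress and the estimated time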

And striped mirrors are like RAID10, I imagine?
ZFS striped mirrors are not the same as RAID10. If you make a RAID10 of 2x 1 TB + 2x 2 TB disks, you're going to waste the upper 1 TB of the larger disks and end up with only 2 TB of usable space. Conversely, in ZFS striped mirrors it doesn't go to waste, and you end up with 3 TB of usable space.
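A minimal sketch of that mixed-size layout (device names are placeholders):
Code:
# 2x 1 TB as one mirror vdev, 2x 2 TB as another
zpool create tank mirror ada0 ada1 mirror ada2 ada3
# usable space is roughly 1 TB + 2 TB = 3 TB, and each vdev can later
# be upgraded independently by replacing just its own two drives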

Hope that clears it up.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
The thermal problem with Define R series cases is the sleds into which you have to first mount the disks. Those sleds cover the bottom of the disks completely, thus reducing the surface available for cooling by half. I have 8 drives in a Define R6 and had to resort to high-pressure fans (in my case from Noctua's industrial series), which make a ton of noise. In fact, my old 1U Supermicro servers are quieter, except when under full load.

If you only populate every second slot, it is probably fine, though.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
Coming back to saturating two 10 Gbps lines:

It is critical to understand one thing. There is a huge difference between sequential and random access. A conventional/mechanical disk (aka HDD) has a relatively huge seek time penalty, because the read/write head needs to be moved. Therefore, random access transfer speeds are orders of magnitude slower than sequential ones.

Whenever a speed of 100-200 MB/s (today probably more towards 200) is mentioned as a drive's maximum speed, it was measured under optimal conditions. In other words: You will not see those speeds for most real-life workloads. (The main exception will be copying of large files.)

Editing videos, running a database server, etc. are not sequential workloads for the most part. This means that instead of 200 MB/s you might get 20 MB/s or even 1 MB/s, depending on the details. And that is assuming a single client/user working against the drive. Multiple concurrent users will make things worse.

Of course I am painting a worst-case scenario here. But the important point is that sequential speeds are not relevant for most situations. Instead, the number of I/O operations per second (IOPS) is key. And the latter is also the reason that SSDs (even SATA) feel so much faster, not their maximum sequential transfer speed.

Back to the use-case: Video editing (4k/6k) by two people simultaneously over 10 Gbps will likely require an all-SSD pool; or a huge number (probably 15+) mirror vdevs, which is more expensive today than going for SSDs (apart from the case size, noise, power consumption).
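If you want to see the gap on your own hardware, a quick sketch with fio (device name is a placeholder; both runs are read-only, so they won't touch your data):
Code:
# large sequential reads -- close to the drive's headline number
fio --name=seq --filename=/dev/ada0 --readonly --rw=read --bs=1M --runtime=60 --time_based
# 4k random reads -- expect a small fraction of the above on a HDD
fio --name=rand --filename=/dev/ada0 --readonly --rw=randread --bs=4k --runtime=60 --time_based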
 

mariosk9gr

Dabbler
Joined
Nov 13, 2022
Messages
12
Coming back to saturating two 10 Gbps lines:

It is critical to understand one thing. There is a huge difference between sequential and random access. A conventional/mechanical disk (aka HDD) has a relatively huge seek time penalty, because the read/write head needs to be moved. Therefore, random access transfer speeds are orders of magnitude slower than sequential ones.

Whenever a speed of 100-200 MB/s (today probably more towards 200) is mentioned as a drive's maximum speed, it was measured under optimal conditions. In other words: You will not see those speeds for most real-life workloads. (The main exception will be copying of large files.)

Editing videos, running a database server, etc. are not sequential workloads for the most part. This means that instead of 200 MB/s you might get 20 MB/s or even 1 MB/s, depending on the details. And that is assuming a single client/user working against the drive. Multiple concurrent users will make things worse.

Of course I am painting a worst-case scenario here. But the important point is that sequential speeds are not relevant for most situations. Instead, the number of I/O operations per second (IOPS) is key. And the latter is also the reason that SSDs (even SATA) feel so much faster, not their maximum sequential transfer speed.

Back to the use-case: Video editing (4k/6k) by two people simultaneously over 10 Gbps will likely require an all-SSD pool; or a huge number (probably 15+) mirror vdevs, which is more expensive today than going for SSDs (apart from the case size, noise, power consumption).
ChrisRJ, thank you so much for the detailed explanation! I understand that I won't reach these numbers with only HDDs and their slow random access, BUT wouldn't a 512 GB NVMe (for SSD caching, with 64 GB RAM installed) take all this effort off the HDDs and provide much faster access while editing? And how much faster (just an estimate from your experience) could it be? I cannot build such a huge pool of HDDs, my budget doesn't allow it, but you gave me the idea that I may later build a smaller NAS with only SSDs to do the editing from, and when finished, send the result over to my main NAS, which I want to build now.
 

mariosk9gr

Dabbler
Joined
Nov 13, 2022
Messages
12
Thank you for this document! I finally think I can understand how ZFS works and what configuration is best for me.
So, I'm between 6 and 8 disks for my vdev.
If we say that one disk can provide 150 MB/s sequential read/write (maybe an Exos could provide a little more than 200 MB/s?) then I'm thinking that the best configuration, as all of you have suggested, is with mirrored vdevs!

That translates, with 6 disks, into a 3x 2-way mirror for sequential read/writes:
READ: 150x3 = 450 MB/s
WRITE: 150x1 = 150 MB/s

and with 8 disks, into a 4x 2-way mirror:
READ: 150x4 = 600 MB/s
WRITE: 150x1 = 150 MB/s

I don't want to use raidz1 or raidz2 and push the HDDs too hard, especially with the huge amount of time a resilvering takes and the danger of losing another drive in the meantime. BUT on the other hand, I have to sacrifice half the disks and get nothing back regarding write speeds.
Crucial at this point is how much 1 or 2 NVMe drives can provide regarding random access in reads and writes! And also: can I use one NVMe for L2ARC and another one for SLOG so I can have high IOPS for both reads and writes?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Issue is: as far as I understand, cache can do very little for writes, and SLOG is only useful if you always use sync writes. You can read more about it in the following resource.
I have no experience with L2ARC and metadata vdevs, so I can't give you actual data to understand the impact they might have, but generally they are used in big systems with a lot of RAM.

Also, your numbers are incorrect.
If you take a single drive with a theoretical sequential read of 150 MB/s and you put it in a 2-way mirror, the vdev's sequential read speed will be 300 MB/s; now if you stripe that mirror with 2 others, for a total of 3 vdevs, you get 900 MB/s of sequential reads for your pool.
Similarly, with 4x2 mirrors you would get 4x300 MB/s = 1200 MB/s of sequential read performance for your pool.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The big reason why people run RAIDZ2 over RAIDZ1 is that when one of your disks fails, the risk of another drive in the pool going bad is increased due to the increased overall load on the pool while resilvering.
For some, perhaps. Many of us are, to say the least, skeptical of the "oh noez the extra load" argument (given that the "extra load" is identical to the scrub that runs every few weeks on the pool), but extra load or no, any read error on resilver in a RAIDZ1 configuration means near-certain data loss.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I don't want to use raidz1 or raidz2 and push the HDDs too hard, especially with the huge amount of time a resilvering takes and the danger of losing another drive in the meantime. BUT on the other hand, I have to sacrifice half the disks and get nothing back regarding write speeds.
You do get improvements in reads, writes and IOPS as you add further vdevs (raidz# or mirrors).
So with 3 or 4 mirrors you'd get (putatively) 450 MB/s read/write and the IOPS of 3 drives, or 600 MB/s and the IOPS of 4 drives.
With raidz2 you'd get (putatively) 600 MB/s read/write with 6 drives or 900 MB/s with 8 drives, but the IOPS of a single drive in either case because there's one single vdev. RAIDZ2 is arguably safer than 2-way mirrors for long-term archival.
In all cases, with @ChrisRJ's caveat that you won't achieve these rates in real life. You may get closer to them for writes because ZFS combines writes into "transaction groups", which makes writes more sequential.
Do not hope that enterprise drives will do better than others. If they come cheaper because of a different pricing model, that's already one benefit; asking for more would be greedy.

Crucial at this point is how much 1 or 2 NVMe drives can provide regarding random access in reads and writes! And also: can I use one NVMe for L2ARC and another one for SLOG so I can have high IOPS for both reads and writes?
L2ARC provides caching for reads. Once the data has been identified as "hot" and loaded in L2ARC, it will be served as fast as possible—meaning that the 10 GbE network will likely be the bottleneck rather than a NVMe L2ARC drive.
A SLOG is NOT a write cache. The sole write cache in ZFS is RAM, and ZFS will only keep a limited amount of "dirty" data in cache because it is designed to give priority to data safety. As pointed out by @Davvo, a SLOG is only used for sync writes, and sync writes are a performance killer compared with async writes. For write performance, you want async writes, no SLOG (it would serve no purpose anyway), and many drives—and much preferably an all-SSD pool.
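If it helps, a rough sketch of how you'd check whether sync writes are even in play, and how a SLOG device would be attached if they are (pool/dataset/device names are placeholders):
Code:
zfs get sync tank/dataset     # "standard" means clients decide; "always" forces sync writes
zpool add tank log nvd1       # dedicated SLOG device -- only helps sync writes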
 