Help needed validating hardware configurations

tbw

Dabbler
Joined
May 6, 2022
Messages
12
Hello everyone!

As mentioned in my introduction post, I am building new Data Center (DC) and Backup (BK) servers, for which I have the following configurations (not yet purchased):

Data Center (DC)
CPU: Intel Xeon Gold 6326 16C/32T 2.9GHz 24MB
RAM: DIMM 64GB DDR4-3200 ECC (2x 32GB)
MB: Supermicro UP LGA4189 DDR4 ATX M.2 2X1GBE (8x DIMM slots)
HBA controller: Supermicro S3808-8I PCI-E 4.0 SATA/SAS3 / AOC-S3808L-L8iT
CHASSIS: Supermicro 3U 16XHSWAP SAS3

BOOT (mirror vdev):
Samsung PM893 240GB SATA3 2.5’’ 1.3DWPD
Samsung SM883 240GB SATA3 2.5" 3.6DWPD

STORAGE (2x dRAID2: 3d+2p+1s):
6x Seagate Exos 7e8 6TB SATA3 7200rpm 256MB 512E 3.5" 24/7 4KN
6x Seagate Exos 7e10 6TB SATA3 7200rpm 256MB 512E 3.5" 24/7 4KN

Backup (BK):
CPU: Intel Xeon E-2314 4C4T 2.8-4.5GHz
RAM: DIMM 32GB DDR4-3200 ECC (1x 32GB)
MB: Supermicro C252 UP LGA1200 m-ATX M.2 2XLAN (2x DIMM slots)
HBA controller: Broadcom 9341-8i SATA/SAS3 - ZERO MEM
CHASSIS: Supermicro 2U 8xHSWAP SAS3

BOOT:
SSD Kioxia XG6 256GB NVME PCI-E 3.0 M.2 1DWPD

STORAGE (1x dRAID2: 3d+2p+1s):
3x Seagate Exos 7e8 6TB SATA3 7200rpm 256MB 512E 3.5" 24/7 4KN
3x Seagate Exos 7e10 6TB SATA3 7200rpm 256MB 512E 3.5" 24/7 4KN

Both systems are intended to run TrueNAS SCALE, which provides a Linux environment and, most importantly, the possibility to scale the systems to more storage (HDDs, JBODs and RAM; possibly other DCs), which is already taken into account in the configuration (not shown here). With this configuration, I am planning to have two OpenZFS dRAID pools on the DC with 6x HDDs per pool, giving about 18TB of usable space each (3 data, 2 parity and 1 hot spare). The reason for the independent pools is the types of data: one pool for read-only data and one pool for user account data. Only the latter is targeted for backup, since the read-only pool already has backups on offline external drives. This way I think I can make the DC more resilient, without one pool affecting the other. The BK is configured to mimic the DC storage, with a single pool for the user account data only.
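
For reference, this is roughly what I have in mind for the two DC pools, expressed as a command-line sketch (pool names and device paths are placeholders; I understand TrueNAS SCALE would normally build this through its web UI):

    # One dRAID2 vdev per pool: 2 parity + 3 data per redundancy group,
    # 6 children in total, 1 distributed (integrated) spare.
    zpool create pool_readonly \
        draid2:3d:6c:1s /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

    zpool create pool_accounts \
        draid2:3d:6c:1s /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl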

I am looking for help with the following questions:
1) Are these configurations valid, considering the OS and the described (dRAID) functionality? If not, what changes would be recommended? More RAM :)?
2) Assuming I want to use hot spare drives, if I add another vdev to one of the pools, do I need another hot spare drive, or is the one in the initial vdev enough? Put another way, is it necessary to have a hot spare drive for every new vdev?
3) The data stored in both pools will in principle also contain large files (~200MB to ~3-5GB or more); does it make sense, or is it required, to have all the HDDs formatted with 4KiB blocks? Should it be done in all pools in both DC and BK?
3.1) I read somewhere that for smaller files an additional configuration or hardware would be necessary, and the word "mirroring" was mentioned. What does this mean? Would this be necessary? (Sorry, I am currently unable to cite this information.)
4) If I understand correctly, OpenZFS enables compression by default (LZ4?); what are the pros and cons? Would I get more space available? Would I also have more difficulty recovering the data or disks in a worst-case scenario?
5) When establishing the user accounts in the DC, will users also have access to the read-only pool (with differing permissions and quotas)?
6) What would be the best RAID scheme for the BK: is this (dRAID2) OK, or is it too much? Would it be required (or beneficial?) to use the same scheme as in the DC?
7) I am still not sure, but I am planning to have the BK shut down most of the time and perform the backups only at specific times or dates (like 1-2 times a month), possibly using automated WoL and maybe cron jobs (from the DC); is this possible and acceptable?
8) With TrueNAS I came to know that it is possible to perform replication and/or synchronization; how exactly would these work? Do I need both operations? The idea here, most importantly, is to have the latest copies of the DC pool, and secondly, if possible, recovery in case files are accidentally deleted or corrupted in the DC pool.

If you have any other suggestions regarding the configurations, please let me know.

Thanks in advance for all your time and help.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,949
I thought draid was for big installs, lots and lots of disks. Not just a few. Not that I know anything about draid
BK has a MegaRAID card - not an HBA
 

tbw

Dabbler
Joined
May 6, 2022
Messages
12
I thought draid was for big installs, lots and lots of disks. Not just a few. Not that I know anything about draid
BK has a MegaRAID card - not an HBA

Thank you for your reply.
I have not seen this distinction about dRAID mentioned anywhere. It seems a bit odd, but if there is any limitation please let me know (and any URL or document where it is described). In any case, it is very likely that the pools will grow in number of disks; currently the maximum is 4 more drives in total (4 drive slots still available), though more is possible with added JBODs. The main advantage of dRAID is the read/write performance during resilvering, which spares the disks from the heavy IO of a traditional RAID rebuild.

The MegaRAID card is mentioned in forum documents here, and from what I understand it can be configured/flashed to "IT mode", which makes it behave as an HBA.
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
I can't speak to all of them, but here's what I can help you with.

1: The config looks fairly good. For what it appears you are doing, especially with SATA drives, the DC server is likely overbuilt from a CPU perspective. However, it's better to have too much CPU than not enough. I would add additional RAM to both servers, likely 2x. I can't remember where I read it but a good config was supposed to be 1GB of RAM for every 1TB of usable disk capacity. The HBA is usually what kills a good build. As near as I can tell, yours is based on a Broadcom chip which is usually fine for most uses.
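
To put a rough number on that rule of thumb for your layout: two pools of about 18TB usable each is roughly 36TB, so around 36GB of RAM as a baseline before any of your planned expansion is counted.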

2: Hot spares are done at the pool level. Adding a hot spare when adding another vdev is an engineering choice more than anything else. In my case, I manage a considerable number of high density 60 and 102 drive JBODs. Depending on the pool config, I typically reserve 2 bays in each enclosure for hot spares but my pool designs do not usually span multiple enclosures.
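
For conventional (non-dRAID) vdevs, a pool-wide spare is simply added as its own entry, along these lines (pool and device names are placeholders):

    # Add one shared hot spare to the pool; any vdev in the pool can claim it.
    zpool add tank spare /dev/sdm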

4: There is a performance impact for using compression. If you are going for max performance, you should disable it. The other consideration is that different types of data compress differently. I have databases that live on TrueNAS storage with compression ratios of nearly 4 to 1. On the other end, I have virtual machine storage that only sees a compression ratio of 1.39 to 1. I don't know what the exact performance impact is, but I doubt it's very high. We use lz4 compression on a pool with 24 x 15.36TB Micron 9300 NVMe drives and, in testing, still saw more than enough throughput to saturate 25GbE interfaces without the CPU breaking a sweat.
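
If you want to see what your own data does, the setting and the achieved ratio can be read per dataset after the fact (dataset name is a placeholder):

    # Show the compression setting and the achieved ratio for a dataset.
    zfs get compression,compressratio tank/dataset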

6: This is another engineering decision that depends on your use case. In my case, I usually advise people to use at least RAIDZ2. As your drives age and you get past the initial drive failures that might be caused by simple manufacturing oddities, you get to the point where your drives begin to fail from just old age and general use. When you reach that state, your chance of multiple drive failures increases. This is because the drives are likely identical and pulled from the assembly line at the same time, so their usage and fatigue lifetimes are likely similar. Additionally, most systems experience their highest sustained activity during a resilver, which means you are most likely to see additional drive failures during a resilver. For that reason, I don't recommend RAIDZ1 and haven't for quite some time.

7: I don't have any servers that are set this way but, assuming that the shutdown is performed properly, I don't see any reason why this couldn't work long term.
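
As a very rough sketch of what the DC-side job could look like, assuming a WoL utility such as wakeonlan is available. TrueNAS has its own replication tasks and cron UI, so treat this purely as an illustration; the MAC address, host, pool, dataset and snapshot names are all placeholders:

    #!/bin/sh
    # Illustration only: wake the backup server, wait for it to boot,
    # replicate the latest snapshot, then power it off again.
    wakeonlan aa:bb:cc:dd:ee:ff        # MAC of the BK server's NIC
    sleep 300                          # allow the BK server time to boot

    zfs snapshot -r accounts@backup-new
    # Incremental send of everything since the previous backup snapshot
    # (tracking/renaming of the previous snapshot is omitted here).
    zfs send -R -i accounts@backup-prev accounts@backup-new | \
        ssh backup-host zfs receive -F backuppool/accounts

    ssh backup-host poweroff           # shut the BK server down again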
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I don't know many of the answers, but;

2) Assuming I want to use hot spare drives, if I add another vdev to one of the pools, do I need another hot spare drive, or is the one in the initial vdev enough? Put another way, is it necessary to have a hot spare drive for every new vdev?
dRAID has dedicated hot spares, PER vDev. Meaning if you add another vDev of dRAID, it would need its own dedicated hot spare.

This is the REASON for dRAID: integrated hot spare(s) for reduced rebuild time after a disk failure.

Unless you have ZFS experience AND clearly understand dRAID, you probably want to stick with more conventional RAID-Z2.
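
In other words, when the pool grows, the spare comes along inside the new vDev's own layout, something like this (pool, layout and device names are only an illustration):

    # The distributed spare is part of the vdev definition itself,
    # so a second dRAID vdev brings its own spare with it.
    zpool add pool_accounts draid2:3d:6c:1s \
        /dev/sdm /dev/sdn /dev/sdo /dev/sdp /dev/sdq /dev/sdr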

4) If I understand correctly, OpenZFS enables compression by default (LZ4?); what are the pros and cons? Would I get more space available? Would I also have more difficulty recovering the data or disks in a worst-case scenario?
Unless you have a reason not to, compression is always a good feature to enable on ZFS datasets. A little slower on writes, but sometimes faster reads, as you would be reading less data from slow disks, and using fast CPUs to uncompress the data.

Yes, you can potentially get more space available, depending on the compression ratio of the data.

No, recoverability is not impacted unless you use a newer compression algorithm, like ZSTD. Then you have to be careful to use a ZFS version that includes ZSTD support for this pool. LZ4 has been around for so long that it's more or less a standard feature of OpenZFS.
 
  • Like
Reactions: tbw

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I can't speak to all of them, but here's what I can help you with.
...

2: Hot spares are done at the pool level. Adding a hot spare when adding another vdev is an engineering choice more than anything else. In my case, I manage a considerable number of high density 60 and 102 drive JBODs. Depending on the pool config, I typically reserve 2 bays in each enclosure for hot spares but my pool designs do not usually span multiple enclosures.

...
dRAID handles hot spares differently. They are integrated into the dRAID vDev, and thus are not pool wide.
 

tbw

Dabbler
Joined
May 6, 2022
Messages
12
I can't speak to all of them, but here's what I can help you with.

1: The config looks fairly good. For what it appears you are doing, especially with SATA drives, the DC server is likely overbuilt from a CPU perspective. However, it's better to have too much CPU than not enough. I would add additional RAM to both servers, likely 2x. I can't remember where I read it but a good config was supposed to be 1GB of RAM for every 1TB of usable disk capacity. The HBA is usually what kills a good build. As near as I can tell, yours is based on a Broadcom chip which is usually fine for most uses.

2: Hot spares are done at the pool level. Adding a hot spare when adding another vdev is an engineering choice more than anything else. In my case, I manage a considerable number of high density 60 and 102 drive JBODs. Depending on the pool config, I typically reserve 2 bays in each enclosure for hot spares but my pool designs do not usually span multiple enclosures.

4: There is a performance impact for using compression. If you are going for max performance, you should disable it. The other consideration is that different types of data compress differently. I have databases that live on TrueNAS storage with compression ratios of nearly 4 to 1. On the other end, I have virtual machine storage that only sees a compression ratio of 1.39 to 1. I don't know what the exact performance impact is, but I doubt it's very high. We use lz4 compression on a pool with 24 x 15.36TB Micron 9300 NVMe drives and, in testing, still saw more than enough throughput to saturate 25GbE interfaces without the CPU breaking a sweat.

6: This is another engineering decision that depends on your use case. In my case, I usually advise people to use at least RAIDZ2. As your drives age and you get past the initial drive failures that might be caused by simple manufacturing oddities, you get to the point where your drives begin to fail from just old age and general use. When you reach that state, your chance of multiple drive failures increases. This is because the drives are likely identical and pulled from the assembly line at the same time, so their usage and fatigue lifetimes are likely similar. Additionally, most systems experience their highest sustained activity during a resilver, which means you are most likely to see additional drive failures during a resilver. For that reason, I don't recommend RAIDZ1 and haven't for quite some time.

7: I don't have any servers that are set this way but, assuming that the shutdown is performed properly, I don't see any reason why this couldn't work long term.

Thanks for your reply.
Your answers confirm my choices.
Regarding the CPU and RAM, that is what I thought. For now the capacity will not be completely filled, so I figure that RAM could be increased later on. Following that rule (which I have also seen somewhere), I would need 36GB of RAM (18TB per pool); is this correct?
Regarding the compression, I read that, on the contrary, compression could increase performance (slides), although possibly at the cost of more processing. Does this apply here to dRAID, or is this information out of date?
Regarding the drives, yes, at least RAIDZ2. That was the reason for opting for different Exos drives, but I am still not sure if it will be enough. Do you have any idea if these drives are sufficiently reliable, or do people prefer other, more reliable brands and/or models? If I may ask, what drives are you using in your servers? Are they all SSDs?
 

tbw

Dabbler
Joined
May 6, 2022
Messages
12
I don't know many of the answers, but;


dRAID has dedicated hot spares, PER vDev. Meaning if you add another vDev of dRAID, it would need its own dedicated hot spare.

This is the REASON for dRAID: integrated hot spare(s) for reduced rebuild time after a disk failure.

Unless you have ZFS experience AND clearly understand dRAID, you probably want to stick with more conventional RAID-Z2.


Unless you have a reason not to, compression is always a good feature to enable on ZFS datasets. A little slower on writes, but sometimes faster reads, as you would be reading less data from slow disks, and using fast CPUs to uncompress the data.

Yes, you can potentially get more space available, depending on the compression ratio of the data.

No, recoverability is not impacted unless you use a newer compression algorithm, like ZSTD. Then you have to be careful to use a ZFS version that includes ZSTD support for this pool. LZ4 has been around for so long that it's more or less a standard feature of OpenZFS.

Thanks for your reply.
I agree with all said, thanks.
Is this CPU sufficiently fast? And is LZ4 still a good option?
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
Thanks for your reply.
Your answers confirm my choices.
Regarding the CPU and RAM, that is what I thought. For now the capacity will not be completely filled, so I figure that RAM could be increased later on. Following that rule (which I have also seen somewhere), I would need 36GB of RAM (18TB per pool); is this correct?
Regarding the compression, I read that, on the contrary, compression could increase performance (slides), although possibly at the cost of more processing. Does this apply here to dRAID, or is this information out of date?
Regarding the drives, yes, at least RAIDZ2. That was the reason for opting for different Exos drives, but I am still not sure if it will be enough. Do you have any idea if these drives are sufficiently reliable, or do people prefer other, more reliable brands and/or models? If I may ask, what drives are you using in your servers? Are they all SSDs?
We had serious reliability issues with the X12 model of Exos drives, something that appears to be quite widespread, unfortunately. We do have a considerable number of X16 Exos drives and have not seen any issues with those.
 
  • Like
Reactions: tbw

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Thanks for your reply.
I agree with all said, thanks.
Is this CPU sufficiently fast? And is LZ4 still a good option?
I have no comment for the Data Center CPU.

But, the backup one, Intel Xeon E-2314 4C4T 2.8-4.5GHz, is a bit light on power. I'd suggest a hyper-threaded one, or one with 6 cores. This is not because backups take up a lot of CPU power. But ZFS scrubs that are run every 2/3 weeks, (depending on your choice), do take up CPU power to verify checksums. Plus, you want enough CPU power to run the scrub AND the backup, since a scrub can take many hours, (even days on larger disks in fullish pools).
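
For reference, a scrub is just a pool-level command, and you can watch how long it is taking while the backup runs (pool name is a placeholder; TrueNAS normally schedules scrubs itself):

    zpool scrub backuppool       # start a scrub
    zpool status backuppool      # shows scrub progress, speed and estimated time left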


Yes, LZ4 is a good default choice. If you know your data better, there is GZip, which has levels 1-9 for various efforts at compression. And ZSTD has various levels too.

In general, no one can make the final decision about which compression algorithm to use except you. It's data specific. Plus, you can vary it per ZFS Dataset.

It is even possible to change algorithm in an existing ZFS Dataset for any new writes. For example, let us say you use GZip level 5 and find that gives you okay level of compression, with writes being fast enough. And reads are just fine. You can then try out GZip level 9 for new data.

Note that changing the compression algorithm on a ZFS Dataset does not change any previously written data. That data remains with whatever compression algorithm was used, (and any associated level).
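
As a small illustration, with placeholder dataset names:

    # Each dataset can carry its own compression setting.
    zfs set compression=lz4 pool/documents
    zfs set compression=gzip-9 pool/archive

    # Changing it later only affects newly written blocks; existing blocks
    # keep whatever algorithm (and level) they were written with.
    zfs set compression=zstd pool/archive
    zfs get compression,compressratio pool/archive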
 
  • Like
Reactions: tbw

tbw

Dabbler
Joined
May 6, 2022
Messages
12
I have no comment for the Data Center CPU.

But, the backup one, Intel Xeon E-2314 4C4T 2.8-4.5GHz, is a bit light on power. I'd suggest a hyper-threaded one, or one with 6 cores. This is not because backups take up a lot of CPU power. But ZFS scrubs that are run every 2/3 weeks, (depending on your choice), do take up CPU power to verify checksums. Plus, you want enough CPU power to run the scrub AND the backup, since a scrub can take many hours, (even days on larger disks in fullish pools).


Yes, LZ4 is a good default choice. If you know your data better, there is GZip, which has levels 1-9 for various efforts at compression. And ZSTD has various levels too.

In general, no one can make the final decision about which compression algorithm to use except you. It's data specific. Plus, you can vary it per ZFS Dataset.

It is even possible to change algorithm in an existing ZFS Dataset for any new writes. For example, let us say you use GZip level 5 and find that gives you okay level of compression, with writes being fast enough. And reads are just fine. You can then try out GZip level 9 for new data.

Note that changing the compression algorithm on a ZFS Dataset does not change any previously written data. That data remains with whatever compression algorithm was used, (and any associated level).

Thank you for your suggestions.
I was not really thinking of scrubs on the BK, but yes, it makes sense. So I will go for the Xeon E-2336 with 6C12T. So with this CPU, the scrub and backup can run at the same time?
 

tbw

Dabbler
Joined
May 6, 2022
Messages
12
I have two more questions:

1) Since it is a single drive, how should the BK BOOT drive be formatted? Should it use ZFS and/or even a RAID scheme?

2) If in the future we need to expand one or both pools and acquire a chassis like a JBOD with similar slots, is it possible to move the HDDs from one of the pools (on the existing chassis) to the new JBOD and thus enable expansion of both pools? Put another way, the drives are not fixed or tied to the initial chassis/configuration, are they?
This question arose from the point made by @firesyde424 regarding pools spanning multiple enclosures. Could this really be worth doing?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Thank you for your suggestions.
I was not really thinking of scrubs on the BK, but yes, it makes sense. So I will go for the Xeon E-2336 with 6C12T. So with this CPU, the scrub and backup can run at the same time?
Even the old Intel Xeon E-2314 4C4T 2.8-4.5GHz could run ZFS scrubs and backups at the same time. It is a matter of how fast. Note that at some point a NAS can be network I/O bound, disk I/O bound or even memory bound.

However, a little more horse power with something like the Xeon E-2336 with 6C12T is warranted in this case, in my opinion.

I have two more questions:

1) Since it is a single drive, how should the BK BOOT drive be formatted? Should it use ZFS and/or even a RAID scheme?

2) If in the future we need to expand one or both pools and acquire a chassis like a JBOD with similar slots, is it possible to move the HDDs from one of the pools (on the existing chassis) to the new JBOD and thus enable expansion of both pools? Put another way, the drives are not fixed or tied to the initial chassis/configuration, are they?
This question arose from the point made by @firesyde424 regarding pools spanning multiple enclosures. Could this really be worth doing?
1) Current versions of TrueNAS, (Core or SCALE), always use ZFS for the boot drive. Partly because this gives the option to mirror the boot drive at install, or later, easily. Then because you can scrub / sanity check the boot drive files. Last, we get alternate boot environments so that patching can be backed out with a simple reboot.

It is generally fine to have a single boot device. Just make sure you backup your configuration file after any changes. (Plus, an NVMe M.2 should be pretty reliable compared to other boot devices, like USB.)


2) Correct. You can move any of the drives around to different HBAs or builtin SATA ports, (while the server is off). ZFS does not care WHERE the drive is, internal, external, external through a SAS Expander.
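
The move itself is just an export and an import; ZFS finds the member disks again by their labels no matter which ports or enclosure they end up on. TrueNAS exposes this in its UI, but underneath it amounts to (pool name is a placeholder):

    zpool export accounts     # on the old layout, with shares/services stopped
    # ...physically move the disks to the JBOD or other ports...
    zpool import accounts     # ZFS re-discovers the member disks by label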

As for whether it is worthwhile to move the disks to an external JBOD, (possibly with a SAS Expander), connected by a SAS HBA, only you can make that decision. Without details, we can't really advise now. External JBODs can have direct connections or SAS Expander(s), (which do support SATA disks just fine).
 

tbw

Dabbler
Joined
May 6, 2022
Messages
12
Even the old Intel Xeon E-2314 4C4T 2.8-4.5GHz could run ZFS scrubs and backups at the same time. It is a matter of how fast. Note that at some point a NAS can be network I/O bound, disk I/O bound or even memory bound.

However, a little more horse power with something like the Xeon E-2336 with 6C12T is warranted in this case, in my opinion.

Yes, I agree. My point with this post is exactly in that direction: the forum users have experience with the hardware requirements for the OS, ZFS and its functionality, which I currently don't. Hence, if the hardware is well prepared and established from the start, I will avoid some problems.

1) Current versions of TrueNAS, (Core or SCALE), always use ZFS for the boot drive. Partly because this gives the option to mirror the boot drive at install, or later, easily. Then because you can scrub / sanity check the boot drive files. Last, we get alternate boot environments so that patching can be backed out with a simple reboot.

It is generally fine to have a single boot device. Just make sure you backup your configuration file after any changes. (Plus, an NVMe M.2 should be pretty reliable compared to other boot devices, like USB.)


2) Correct. You can move any of the drives around to different HBAs or builtin SATA ports, (while the server is off). ZFS does not care WHERE the drive is, internal, external, external through a SAS Expander.

As for whether it is worthwhile to move the disks to an external JBOD, (possibly with a SAS Expander), connected by a SAS HBA, only you can make that decision. Without details, we can't really advise now. External JBODs can have direct connections or SAS Expander(s), (which do support SATA disks just fine).

I have this adapter AOM-SAS3-8I8E-LP included which I believe is meant to enable connection to an additional JBOD. Is this suitable?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
I have this adapter AOM-SAS3-8I8E-LP included which I believe is meant to enable connection to an additional JBOD. Is this suitable?
If you have extra SAS ports internally and want to expose them externally, that adapter may work. Just make sure you use the correct cables.

Please note that if you attach a JBOD with SATA disks, cable length MATTERS!

SAS controllers can automatically change a SAS lane into a SATA-compatible disk port. The caveat is that cable distance is then limited. So if you have an external JBOD and intend to use SATA disks, either keep the cables as short as possible, (less than 1m total, including any internal cable runs and taking extra connectors into consideration), or use a SAS Expander in the JBOD enclosure.

SAS expanders allow additional disks to be attached to a SAS controller. What this means is that the SAS connection between the SAS controller and the expander can be as long as 2m, if I remember correctly. Plus, you can still use SATA disks attached to the expander. The SAS expander changes its disk ports into SATA disk ports as needed, then tunnels the SATA commands and data over the SAS protocol, which is part of the SAS standard's backward compatibility.
 

tbw

Dabbler
Joined
May 6, 2022
Messages
12
If you have extra SAS ports internally and want to expose them externally, that adapter may work. Just make sure you use the correct cables.

Please note that if you attach a JBOD with SATA disks, cable length MATTERS!

SAS controllers can automatically change a SAS lane into a SATA-compatible disk port. The caveat is that cable distance is then limited. So if you have an external JBOD and intend to use SATA disks, either keep the cables as short as possible, (less than 1m total, including any internal cable runs and taking extra connectors into consideration), or use a SAS Expander in the JBOD enclosure.

SAS expanders allow additional disks to be attached to a SAS controller. What this means is that the SAS connection between the SAS controller and the expander can be as long as 2m, if I remember correctly. Plus, you can still use SATA disks attached to the expander. The SAS expander changes its disk ports into SATA disk ports as needed, then tunnels the SATA commands and data over the SAS protocol, which is part of the SAS standard's backward compatibility.

Ok, thank you, good to know about these details.
I have cables of 80cm (should be something like this); it appears they are suitable, right?
However, a SAS expander seems to be a better option. I will check if I can have this option.
 

tbw

Dabbler
Joined
May 6, 2022
Messages
12
Regarding the DC CPU, could anyone tell me how many HDDs, or how much total capacity, it can handle?
Following the previous reply from @Arwen regarding the BK CPU being upgraded to the E-2336, I would say that this DC CPU could handle around 20 HDDs at most.
If I need to handle approx. 40 drives, how many CPU cores would be necessary: double the Xeon Gold 6326?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
For the cable, I can't say. Too much research & reading for a Friday before a holiday weekend :smile:.

As for the DC CPU and 20 HDDs, it's not about the number of disks, but how they are connected, configured and used. ZFS writes to vDevs as a group. So 2 x 6-disk RAID-Z2 would write to 6 disks for most writes, alternating between the 2 vDevs so as to keep the amount of used space even between the vDevs.

A simple dual core slow CPU can handle 50 disks, just slowly.

Everything is a trade-off. If you NEED fast performance, then you get fast CPUs with higher single-threaded clock speed. The SMB protocol tends to be single-threaded, so people get CPUs with faster clock speeds over more cores / threads at slower speeds.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello,
I've not come across discussions where TN has been implemented 'stand-alone' in a DC. I'd expect some sort of FW shenanigans to be in your plans, and/or a VPN solution for clients?
Or how do you plan on providing access securely?
 

tbw

Dabbler
Joined
May 6, 2022
Messages
12
For the cable, I can't say. Too much research & reading for a Friday before a holiday weekend :smile:.

As for the DC CPU and 20 HDDs, it's not about the number of disks, but how they are connected, configured and used. ZFS writes to vDevs as a group. So 2 x 6-disk RAID-Z2 would write to 6 disks for most writes, alternating between the 2 vDevs so as to keep the amount of used space even between the vDevs.

A simple dual core slow CPU can handle 50 disks, just slowly.

Everything is a trade-off. If you NEED fast performance, then you get fast CPUs with higher single-threaded clock speed. The SMB protocol tends to be single-threaded, so people get CPUs with faster clock speeds over more cores / threads at slower speeds.
Well, thank you, no need for that. :)
In terms of speed, I (re)searched for 3rd Gen Xeon alternatives, but it seems this one is already among the best (3rd-4th), without going for fewer cores and while keeping things at 10nm.
There is not much improvement in speed available; they are all between 2.8GHz (base) and 3.7GHz (max), except those with 8 cores, which can reach 4.4GHz at steep prices.
Based on your (helpful) input, I am assuming that the Xeon 6326 is OK for that number of disks and that usage, considering the available alternatives.
 