Architectural limitations ~> New box

DurrltoneGuy

Cadet
Joined
Dec 20, 2021
Messages
5
I seem to remember, once upon a time, someone saying that on a per-pool basis it wasn't advisable to have more than 12 spindles...?

As I'm hmm'ing & haw'ing over building my next (presumably last, at least for QUITE some time) box, capacities & limitations come to mind.

With that being said, is a dozen spindles still the high-water mark on a per-pool basis? What reasoning & logic goes into this?

Along the same line, are there any other capacities/limitations to be aware of when building what someone would call their last box? What reasoning & logic goes into these assertions?

Are there any deltas in these figures between CORE & SCALE?


FWIW, I'm thinking of a 2S board with a couple of E5-2683s, ~256 GB RAM to start with, undecided on the HBA at the moment, using SAS3 spindles, 10 GbE to start & maybe 100 GbE in the not-too-distant future.
Workload will be mainly containers local to the FreeNAS box, along with CIFS & a dash of NFS or iSCSI for virtualization.

Thanks
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
It's more a dozen spindles per vdev, not per pool, that's considered a sensible limit.
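
In other words, you grow a pool by striping together more RAID-Z vdevs, each kept to roughly a dozen disks. A minimal sketch of a 24-disk pool built as two 12-wide RAID-Z2 vdevs (device names are hypothetical, the bash brace expansion just saves typing, and on TrueNAS you'd normally build this in the GUI):

Code:
# Two 12-wide RAID-Z2 vdevs striped into one pool, rather than one 24-wide vdev.
zpool create tank raidz2 da{0..11} raidz2 da{12..23}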
 

DurrltoneGuy

Cadet
Joined
Dec 20, 2021
Messages
5
Ooooooh, nice....

I've never been in a situation where I added a new vdev onto an existing pool. I guess, with that being said:

Are there any upper limits on:

Pools?
Vdevs/pool?
TB/spindle?
Datasets/pool?

Does that dozen count change if you go flash as opposed to spindles? What made it arbitrarily land at a dozen?

Thanks
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
What made it arbitrarily land at a dozen?
There is a practical architectural limit on the number of storage devices in a RAID-Zx vDev (virtual device). This has been determined to be about 12. Going to 13 won't be too bad, but going to, say, 36 can end up a performance disaster. (By the way, we just had someone with a 36-drive RAID-Zx vDev who wondered why his performance went down the toilet after a while...)

The reasons to avoid too-wide RAID-Zx vDevs are several. First, small files still use up parity space. For example, a 16 KByte file may use only 1 data disk's worth of blocks, BUT with RAID-Z3 it would still have 3 PARITY blocks / disks. Using a narrower RAID-Zx stripe means you may be able to drop a parity level: a 10-wide RAID-Z2 would be more efficient for smaller files because only 2 parity blocks / disks would be used.
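
To put rough numbers on that parity overhead, here is a back-of-the-envelope sketch, assuming 4 KiB sectors (ashift=12) and ignoring RAID-Z allocation padding, so the exact figures on a real pool will differ a bit:

Code:
data=4        # a 16 KiB block split into 4 KiB sectors
for parity in 2 3; do
  total=$((data + parity))
  echo "RAID-Z${parity}: ${total} sectors written, $((100 * parity / total))% parity overhead"
done
# Prints ~33% overhead for RAID-Z2 and ~42% for RAID-Z3 on this small block,
# regardless of how wide the vDev is.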

Next, on disk loss, a too-wide RAID-Zx that holds large files would likely have to read all the other disks (except extra parity) to re-create the failed disk. For example, a 20-disk RAID-Z2 would have 18 data blocks per stripe, so on a loss you would have to read 18 disks to populate a replacement disk. That's more time-consuming than a 10-disk-wide RAID-Z2, which only has 8 data blocks per stripe.
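
As a sketch of the two layouts being compared (hypothetical device names; these are alternatives for the same 20 bays, not commands to run together):

Code:
# (a) one 20-wide RAID-Z2: rebuilding a lost disk reads on the order of 18 other disks per stripe
zpool create tank raidz2 da{0..19}
# (b) two 10-wide RAID-Z2: a resilver stays inside the affected vdev, ~8 data disks per stripe
zpool create tank raidz2 da{0..9} raidz2 da{10..19}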

Last, little is known about too-wide RAID-Zx stripes and which usage patterns affect their performance. For example:
  • Do small files make a too-wide RAID-Zx perform worse than large files do?
  • Does fragmentation matter?
  • Would the 80%-full rule be the same? Or even lower? Or could it be higher?
  • Why does an extra-wide RAID-Zx vDev start to suck, performance-wise, after a while, when it worked well in the beginning?
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
FWIW, I'm thinking of a 2S board with a couple of E5-2683s, ~256 GB RAM to start with, undecided on the HBA at the moment, using SAS3 spindles, 10 GbE to start & maybe 100 GbE in the not-too-distant future.
Workload will be mainly containers local to the FreeNAS box, along with CIFS & a dash of NFS or iSCSI for virtualization.
The workload as well as the NIC speeds strongly suggest that you want to use SSDs. And you also want to use mirrors only, probably even for SSDs at 100 Gbps network speeds.
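
For reference, a pool of two-way SSD mirrors might look like the sketch below (hypothetical device names; on TrueNAS you'd build it in the GUI). Each additional mirror vdev adds IOPS, which is what iSCSI / VM block storage cares about most:

Code:
# Three two-way mirrors striped into one pool; each mirror can lose one disk safely.
zpool create fast mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5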
 