24 Disks Supermicro-based 4U Monster

LimeCrusher

Explorer
Joined
Nov 25, 2018
Messages
87
A relative of mine recently became aware of the advantages of ZFS over the RAID arrays he has been using for the last twenty years. After reading about ZFS and checking out the user interface and features, he decided to build a 24-disk array in one of those massive 4U Supermicro chassis. While I am not sure of the exact application he has in mind (backing up some VMs through iSCSI?), here is the hardware list he is considering:
Any thoughts at this point, apart from the fact that it's a hell of a machine? Any mistakes?
I find it weird that the chassis is being advertised for X10 mobos while he goes for an X11, but I guess it does not matter. The capacity of the mirrored boot drives seems overkill to me, but what do I know? I understand he does not want to buy a terabyte of RAM as this is pricey, and he would prefer to use L2ARC (yes, I know you still need RAM, just a little less).

Without knowing the exact application, I can't really think of a pool layout that would make sense. Should he mirror his SLOG or his L2ARC, though?
I believe I remember that mirroring one of the two is recommended. If so, he could probably go with a single pool made of three 6-wide RAIDZ2 vdevs with a spare each ( (6+1)*3 + 2 + 1 = 24)?
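For what it's worth, the bay arithmetic can be sanity-checked in a few lines. The layout below is just my reading of the proposal (three 6-wide RAIDZ2 vdevs, one spare per vdev, a mirrored SLOG pair, and a single L2ARC device), not a recommendation:

```python
# Sanity check of the proposed bay budget for a 24-bay chassis.
# Assumed layout (my interpretation of the proposal above):
#   3 x (6-wide RAIDZ2 + 1 hot spare) data vdevs
#   2 bays for a mirrored SLOG
#   1 bay for L2ARC
data_vdevs = 3
raidz2_width = 6
spares_per_vdev = 1
slog_mirror = 2
l2arc = 1

total_bays = data_vdevs * (raidz2_width + spares_per_vdev) + slog_mirror + l2arc
# RAIDZ2 dedicates 2 disks per vdev to parity
usable_data_disks = data_vdevs * (raidz2_width - 2)

print(total_bays, usable_data_disks)  # -> 24 12
```

So the 24 bays are fully budgeted, with twelve disks' worth of raw usable capacity before compression and overhead.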

What are your thoughts?
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Might I ask why you are making such enquiries for a relative?
I don't think it's appropriate to discuss the hardware other people want to use, nor do I think it's of use to anyone if you're not the owner.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Might I ask why you are making such enquiries for a relative?
I don't think it's appropriate to discuss the hardware other people want to use, nor do I think it's of use to anyone if you're not the owner.
This is one of the things we do here. We post our build plans to get other people to look them over and offer advice. It sometimes influences the build for the better or catches a mistake that might have been costly.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
While I am not sure of the exact application he has in mind (backing up some VMs through iSCSI?), here is the hardware list he is considering:
Won't you invite your relative to come and join the fray here?
a bunch of ugly Mini-SAS SFF8643 to Mini-SAS SFF8643 HD and SFF-8643 to 4x SATA cables...
That chassis has an expander backplane, so you won't need a lot of cables, just one... The four lanes in the SAS cable will be multiplexed by the expander chip on the backplane. You just need to make sure you have the correct cables.
The capacity of the mirrored boot drives seems overkill to me but what do I know?
It might be the smallest drive they can get new. The important factor is that it is a quality drive. I have servers at work that have 1TB boot drives (mirrored) because that was the smallest drive I could order from the hardware vendor. Sure don't need that much space but it doesn't hurt anything.
Should he mirror his SLOG or his L2ARC though?
Not knowing the application, it is impossible to say that he even needs SLOG or L2ARC. Those things are not generic, they depend entirely on the use the system will be put to. Maybe you should get them to come talk about it?
What are your thoughts?
Depends on what it will be used for. If it is being used to do iSCSI for storing VMs, it should be all mirrored pairs to give the most vdevs as that gives the highest IOPS. We need to know more to give good advice.
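As a rough illustration of that trade-off (identical drives assumed, spares and real-world overheads ignored):

```python
# Compare vdev count (a proxy for pool IOPS) and raw usable capacity
# for 24 identical drives laid out as mirror pairs vs 6-wide RAIDZ2.
DRIVES = 24

mirror_vdevs = DRIVES // 2                 # 12 two-way mirrors
mirror_usable = mirror_vdevs               # 1 data disk per pair

raidz2_width = 6
raidz2_vdevs = DRIVES // raidz2_width      # 4 vdevs
raidz2_usable = raidz2_vdevs * (raidz2_width - 2)  # 4 data disks each

print(f"mirrors: {mirror_vdevs} vdevs, {mirror_usable} disks of usable space")
print(f"raidz2:  {raidz2_vdevs} vdevs, {raidz2_usable} disks of usable space")
```

Mirrors give three times the vdevs (and so roughly three times the random IOPS) at the cost of a quarter of the usable capacity, which is why block storage for VMs usually ends up on mirrors.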
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
This is one of the things we do here. We post our build plans to get other people to look them over and offer advice. It sometimes influences the build for the better or catches a mistake that might have been costly.
That wasn't the point at all.
I know we do this; I don't think it's appropriate to give feedback on anonymous third parties (or post that hardware, for that matter) that might or might not consent to this. I find it very intrusive.

It's also very impractical, because we are limited by an extra translation layer and a possible lack of know-how about the practical implications introduced by said translation layer.

It isn't against the rules, but it isn't good form imho.
 

LimeCrusher

Explorer
Joined
Nov 25, 2018
Messages
87
Might I ask why you are making such enquiries for a relative?
Because I'm too kind and too happy to help when I have an intellectual interest in the matter. I know, it's a bad habit I have a hard time breaking.

I don't think it's appropriate to discuss the hardware other people want to use, nor do I think it's of use to anyone if you're not the owner.
[...] I don't think it's appropriate to give feedback on anonymous third parties (or post that hardware for that matter), that might or might not consent to this. I find it very intrusive.
I did get the third party's approval for this disclosure, and he understands its risks and benefits.

It's also very impractical, because we are limited by an extra translation layer and a possible lack of know-how about the practical implications introduced by said translation layer.
Yes, I agree with you. On top of that, the fact that I am not aware of the exact application targeted is very limiting. I moved quickly with what I had.

Won't you invite your relative to come and join the fray here?
Thanks for your time and interest, Chris. I am having the third party create an account and come here to explain to you exactly what he has in mind. I understand how limiting it is not to know the specifics of the application. Will hit you up as soon as he gets here.
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
I did get the third party's approval for this disclosure, and he understands its risks and benefits.
In that case I rest my case. You know what the risks and benefits are, and it seems both you and the third party made an educated decision in the matter. Great! :)
 

benwar

Cadet
Joined
Oct 15, 2020
Messages
1
Hello,

I am extremely happy to benefit from your help. I particularly thank LimeCrusher for working so long to convince me of the qualities of ZFS and the possibilities of FreeNAS.

For some time now, he has been feeding me a very enriching set of documentation, which convinced me that FreeNAS (TrueNAS) can be used in a professional environment.
I need multisite backups, VMs, file shares, snapshots (Proxmox and VMware), etc. I cannot stand the Synology and QNAP interfaces, NetApp products are too "complex" for me, and, particularly, free software is fundamental to me.

At the beginning, my need was a reliable storage area of around 150 TB to host file-type backups via SMB, NFS, or even iSCSI, and to use rsync, Veeam, and Bacula. Now I think that I could also use this NAS to host the disks of my production VMs. I like the idea of dedicating servers.

My first question: in your opinion, is it sensible to mix our small production workload (3 undemanding VMs, 30 TB) with 150 TB of backups?
My second question is about the hardware configuration: do you think it is correct?
My third question: what RAM, pool, and HDD optimization rules would you suggest?

THX.

Have a good day.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Now I think that I could also use this NAS to host the disks of my production VMs.
If that is the goal of this server, there are some build considerations to keep in mind. I am going to throw some terms at you that you may not be familiar with, as some are unique to FreeNAS / ZFS, so here is a link to a reference:

Terminology and Abbreviations Primer
https://www.ixsystems.com/community/threads/terminology-and-abbreviations-primer.28174/

Generally speaking, more vdevs in a storage pool allow the pool to provide higher IOPS, because each vdev is I/O-constrained by the slowest single drive in the vdev. For this reason, mirror vdevs give the best performance for the least number of drives. The more vdevs you can supply to the pool, the more IOPS the pool can support, and with a 24-bay chassis, you could have 12 vdevs. I would suggest using a PCIe card or two for any SLOG/L2ARC to keep the drive bays available for drives. Here are some links to guidance previously provided on the forum that would be useful:


Why iSCSI often requires more resources for the same result (block storage)
https://www.ixsystems.com/community...res-more-resources-for-the-same-result.28178/

Some differences between RAIDZ and mirrors, and why we use mirrors for block storage (iSCSI)
https://www.ixsystems.com/community...and-why-we-use-mirrors-for-block-storage.112/

The ZFS ZIL and SLOG Demystified
https://www.ixsystems.com/blog/zfs-zil-and-slog-demystified/

Some insights into SLOG/ZIL with ZFS on FreeNAS
https://www.ixsystems.com/community/threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

Testing the benefits of SLOG using a RAM disk!
https://www.ixsystems.com/community/threads/testing-the-benefits-of-slog-using-a-ram-disk.56561/

SLOG benchmarking and finding the best SLOG
https://www.ixsystems.com/community/threads/slog-benchmarking-and-finding-the-best-slog.63521/
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My third question: what RAM, pool, and HDD optimization rules would you suggest?
More RAM is always good, because ZFS uses RAM as cache.
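If a rough starting point helps, the old community rule of thumb (8 GB baseline plus about 1 GB of RAM per TB of raw storage) can be sketched like this; treat it as folklore to be tuned against the actual workload, not a hard rule:

```python
# Rough RAM sizing per the common FreeNAS rule of thumb:
# 8 GB baseline + ~1 GB per TB of raw pool storage.
# This is a community heuristic, not an official requirement.
def suggested_ram_gb(raw_storage_tb, baseline_gb=8):
    return baseline_gb + raw_storage_tb

# e.g. a 24-bay chassis full of 8 TB drives (192 TB raw)
print(suggested_ram_gb(24 * 8))  # -> 200
```

In practice nobody fits 200 GB into every build; the point is simply that ARC scales with pool size, and the more of the working set that fits in RAM, the less an L2ARC has to cover.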
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Take a look at this 60-drive system:

[attached screenshot: 1602777803324.png, a 60-drive system]


24 drives is just not enough...
 
