Disclaimer: I don't work for iX. ;)
There's no magic - to build the right solution you'll want to understand not just how much storage you need, but also the bandwidth/IOPS/latency performance you want. If you have an existing environment you're looking to upgrade, there are tools that can give you a good perspective; or if it's a home setup where the only person accountable is you, you can make estimates a little more safely. ;)
Right, I am trying to wrap my brain around this. In reading about ZFS, there are a few things that I think really make it difficult for me when you put them together: (let me know if any of these are not correct)
- A RAID-Z2 VDEV will have roughly the random-I/O (IOPS) performance of its slowest single disk, yes? (file-based, not iSCSI)
- This being said, a few months ago I did a test using 10x of the 4TB 7.2K SAS disks in RAID-Z2
- Anecdotally, performance seemed OK, it did not take that long to copy 20TB, and read speed over 1Gig was decent
- You want to restrict the number of disks in a VDEV; it seems once you get much past 12 or so you need to start thinking about it
- Of course, if you add another VDEV, you double the number of parity disks when you add the 2nd RAID-Z2 VDEV
- But, the two VDEVs would be striped, so you would get the performance of the two slowest disks?
- The size of a VDEV cannot later be increased by adding disks
- You want an abundance of free space in a VDEV to maintain performance due to COW, etc.
- I do have to be mindful of power, heat, and noise as much as I can
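If I've understood those rules of thumb right (random IOPS per RAID-Z2 vdev like a single disk, streaming throughput scaling with the data disks, and striping across vdevs multiplying both), they can be sketched out like this. The per-disk numbers are purely illustrative assumptions, not measurements:

```python
# Rough rule-of-thumb model of RAID-Z2 pool scaling; the per-disk
# figures are illustrative assumptions, not measured values.
DISK_IOPS = 150        # assumed random IOPS for one 7.2K drive
DISK_STREAM_MBS = 180  # assumed sequential MB/s for one 7.2K drive

def raidz2_vdev(disks):
    """One RAID-Z2 vdev: random IOPS of a single disk,
    streaming throughput scaling with the (disks - 2) data disks."""
    return {"iops": DISK_IOPS, "stream_mbs": (disks - 2) * DISK_STREAM_MBS}

def pool(vdev_count, disks_per_vdev):
    """Striping across vdevs multiplies both figures by the vdev count."""
    per_vdev = raidz2_vdev(disks_per_vdev)
    return {k: v * vdev_count for k, v in per_vdev.items()}

print(pool(1, 10))  # like the 10x 4TB 7.2K SAS test pool
print(pool(2, 8))   # two 8-wide Z2 vdevs: twice the IOPS of one
```

Which matches the earlier experience: a single wide Z2 streams fine over 1Gig, but only a second vdev (or mirrors) improves random I/O.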
Today, the size of all my data is around 20TB or so, and the growth has slowed down to less than 1TB a year.
One of my solutions to all of this is to have the three servers. When I run out of space and need to expand, or performance has eroded, I can use the three servers to maintain a safe number of copies of the data while one of them is re-done with newer / more disks, then rinse and repeat. Only one of the servers would normally be on; the 2nd would back up the first, say weekly or monthly (malware or cryptoware can't get to it if it is off - yes, I know, snapshots), and the third would be a spare, or maybe hold a 3rd copy of the data that is only updated every 6 months or more.
All three servers follow as many of the recommendations as possible - ECC RAM, eSSDs, eHDD or NAS drives, hardware that is on the compatibility list, etc.
Sixteen drives gives you the option to do one or two pools with decent results.
Single pool, sixteen drives, pure capacity storage with absolutely no block storage (and you guarantee it will never get involved): I would be tempted to do it as 2x 8-drive Z2 if you need the 12 drives of usable capacity. Otherwise, I would say do mirrors anyway for the better performance (8 vdevs vs. 2) and 8 drives of usable space. Use a pair of (non)mirrored SSDs for an L2ARC if it's only data - I don't think SLOG would be needed in this use case.
- Easy to be sure there will be no block storage, because you would have to reformat to do that anyway
- So, 2x8x Z2 would be approx 24TB each / 48TB total with four parity drives
- So a mirror would be 8x 2-way mirrors, 32TB usable? I imagine that would be quite fast
- One advantage of this is that it could do block or traditional storage, or maybe both?
- If they are not mirrored, why do you need two drives for L2ARC?
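As a sanity check on those capacity figures (assuming 4TB drives, nominal TB, and ignoring TiB-vs-TB and ZFS overhead):

```python
DRIVE_TB = 4  # assumed drive size; nominal TB, ZFS overhead ignored

def raidz2_usable(vdevs, disks_per_vdev, drive_tb=DRIVE_TB):
    # RAID-Z2 loses 2 disks per vdev to parity
    return vdevs * (disks_per_vdev - 2) * drive_tb

def mirrors_usable(pairs, drive_tb=DRIVE_TB):
    # Each 2-way mirror yields one disk's worth of capacity
    return pairs * drive_tb

print(raidz2_usable(2, 8))  # 2x 8-drive Z2: 24 TB per vdev, 48 TB total
print(mirrors_usable(8))    # 8x 2-way mirrors: 32 TB usable
```

So the trade is roughly 48TB usable with 4 parity drives versus 32TB usable with much better IOPS.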
I have constraints to consider unless I make some changes: (still thinking about it)
- Feel free to suggest alternate configurations
- For the first (primary) server, I was planning to use the 7x 10TB SATA drives (low power and heat)
- I could put these in the regular tower case that can hold 11x drives, because they don't put off as much heat
- I don't see how else you could configure these other than RAID-Z2
- I was thinking of filling it up with 4TB drives to see how bad the heat problem is or if it is OK
- For the 2nd server, this would use the Supermicro 16x chassis (seems a shame to not use it as primary, still thinking)
- This is where I would put the 4TB drives because it is designed for that
- So this would be to backup the first
- I was going to buy a 2nd one of these before CV19 hit, still considering that
- The 3rd server is the other tower case with a capacity of 7x drives
- This is a much lower spec server but still good
- I do have the JBOD which can handle 10x enterprise or NAS drives, not sure how I am going to use this yet
- Not sure I want an array that consists of internal and external drives
- Worried what would happen if I accidentally didn't power on the JBOD, or it lost power at some point
- Thinking whatever I do with it, should be restricted to what can fully fit
- I can also fit 2x 2.5" drives, the expander can handle way more than that if I need to do something else someday
- I have two nice LSI RAID controllers on the way to handle VMFS duty, which will be all flash
iSCSI config is absolutely mirrors. SSDs for L2ARC if you don't have an all-flash pool, and an NVMe/Optane or NVDIMM SLOG, depending on how far your dollar stretches. If I recall though, you've got some HGST SSDs that would do a pretty decent job of SLOG duties, although they won't handle 10Gbps.
- I do have some 400Gig NVMe, but they are mixed-use, so not sure they can take that many writes
- Yes, the 400Gig and 200Gig disks are write-intensive, so they are a good choice for sure
- The flash drives could mainly be direct-connect vs. on the expander so that they get the best bandwidth
- Right now I am leaning towards not doing iSCSI for data, if I do maybe I take a couple of disks and just do it as a test
- I have 10Gig cards, but I was planning to only use them to connect servers to each other for backups
- It does increase the speed by 2-4x over 1Gig, limited by the disk speed
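Back-of-envelope on that, assuming roughly 90% wire efficiency and disks that can keep up (at 10Gig they often can't, which is where the 2-4x rather than 10x comes from):

```python
# Nominal copy times at wire speed; ignores protocol overhead beyond a
# flat efficiency factor and assumes the disks aren't the bottleneck.
def copy_hours(data_tb, link_gbps, efficiency=0.9):
    bytes_total = data_tb * 1e12
    bytes_per_sec = link_gbps * 1e9 / 8 * efficiency
    return bytes_total / bytes_per_sec / 3600

print(f"20 TB over 1Gig:  {copy_hours(20, 1):.0f} h")
print(f"20 TB over 10Gig: {copy_hours(20, 10):.0f} h")
```

Call it roughly two days for a full 20TB copy at 1Gig, which fits "did not take that long" for an initial seed but makes 10Gig attractive for server-to-server backups.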
If you're considering having one system pull double duty, then you lean towards which you need more of. 6-drive Z2 for file storage, 5x 2-way mirrors for iSCSI, and a few SSDs for L2ARC/SLOG duties is absolutely valid.
- This is an interesting setup
- In that case, I could just put the 7x 10TB SATA drives in RAID-Z2, and use the remaining 9 slots for mirrors
- This does not include internal non-hot-swap bays, I think I can fit around 8x 2.5" disks this way
- This would give me 50TB of data and 4x mirrors would give 16TB
- I could use the external jbod to test a mirror setup with iSCSI
- 2nd server with enough capacity to back this up; buy a new case if it can't handle the heat
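The same simple arithmetic for that 16-bay split (nominal TB, overhead ignored; presumably the Z2 and the mirrors would be two separate pools, one for file storage and one for iSCSI testing):

```python
# Hybrid layout in one 16-bay chassis: one 7-wide RAID-Z2 of 10TB
# drives, plus four 2-way mirrors of 4TB drives (one bay left over).
raidz2_tb = (7 - 2) * 10  # 5 data disks x 10 TB
mirror_tb = 4 * 4         # 4 mirror pairs x 4 TB each

print(raidz2_tb, mirror_tb)
```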
Another side note is that TrueNAS 12 might throw some unexpected wrenches into the works, so a "future-proof design" could make sense. I can say that special vdevs for metadata are going to be very popular for certain workloads; "block storage on spindles" will almost certainly be one of them.
Ah, I don't know anything about this. I know that a new version is coming, but I don't know when, and I don't know much about the features; I thought they were mostly merging FreeNAS with TrueNAS. But this again is why I will have three servers: so that I can afford to rebuild from time to time when I need to blow something away to take advantage of something new.
Interesting stuff
-JCL