I misunderstood your specifications! I thought you were going to set up 2 RAIDZ1 vdevs, each comprising 6 drives, for a total of ~10TB of usable space. But you were planning on using mirrors instead. Oops!
I apologize for the confusion. Mirrors are the preferred topology for iSCSI block storage, and you designed your system to use 6 of them, for a total capacity of ~6TB. Still, everything I said about space utilization remains true. When I read your original post, I also assumed that you need ~5-6TB of storage capacity for your VM images. Is this true? Or did I misunderstand that as well? If you do need that much space, then your design won't work as it stands, because you would be using 100% of your storage capacity. However, if your VM images only occupy 2-3TB, then your design will be fine. The important thing is to design your system so that you only use roughly half (or less) of the total available storage capacity.
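As a quick sanity check on the capacity math above, here's the mirror layout worked out in code (a sketch assuming 6 two-way mirrors of 1TB SSDs; the drive size is my assumption):

```python
# Usable capacity of a pool of 2-way mirrors, and the ~50% utilization guideline.
# Assumes 6 mirror vdevs built from 1 TB SSDs, as in the design discussed above.
DRIVE_TB = 1
MIRRORS = 6

usable_tb = MIRRORS * DRIVE_TB   # each 2-way mirror contributes one drive's worth of space
safe_tb = usable_tb / 2          # rule of thumb: keep block storage at roughly half full

print(usable_tb, "TB usable,", safe_tb, "TB safe to allocate")
```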
It used to be true that you would lose your pool if the SLOG device failed, but that is no longer the case. Because of this, you can use a single P3700 if you prefer. The only reason to mirror your SLOG devices is if you're concerned about the drop in performance that would result if the SLOG failed.
You're welcome!
Okay, good. Just to be clear, here's what we have in our current 'SAN':
530GB 15k SAS - 606GB provisioned - 30GB free
1730GB 7.2 NL-SAS - 1670GB provisioned - 127GB free
440GB SATA - 482GB provisioned - 202GB free
Total: 2700GB - 2758GB provisioned - 359GB free
Most VMs (like Exchange and SQL) are thick-provisioned, but as you can see there are some thin-provisioned VMs as well (I'm 70GB short on the fast pool, and with just 30GB free, that's not good, to say the least).
So with maximum utilization and no compression, I'm still well under the 3TB best-practice/rule-of-thumb mark.
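To double-check that claim, here are the provisioned figures above summed against half of a 6TB pool (the ~50% guideline; the 6TB figure assumes the planned 6 x 1TB mirror pool):

```python
# Current provisioning (GB) from the 'SAN' figures above.
provisioned_gb = {"15k SAS": 606, "7.2k NL-SAS": 1670, "SATA": 482}
total_gb = sum(provisioned_gb.values())

pool_gb = 6 * 1000        # planned pool: 6 x 1 TB mirrors (assumption)
limit_gb = pool_gb // 2   # ~50% rule of thumb

print(total_gb, "GB provisioned; limit", limit_gb, "GB ->",
      "OK" if total_gb < limit_gb else "over")
```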
But of course I'm going to keep an eye on performance and just go 'oh, you want another VM? Then we'll have to buy 2 more SSDs so we don't compromise performance', giving me more IOPS for the entire array and more space for VMs.
Also, if management decides the initial price is too high, I can easily reduce the number of SSDs to 11 or 9, giving 5TB or 4TB of storage. Not ideal, but still within safety limits.
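The reduced-drive-count options work out like this (a sketch assuming 1TB SSDs in 2-way mirrors, with any odd leftover drive counted as a hot spare):

```python
# Usable capacity for N 1 TB SSDs arranged as 2-way mirrors;
# an odd leftover drive is counted as a hot spare (my assumption).
def mirror_capacity_tb(drives, drive_tb=1):
    mirrors = drives // 2
    spares = drives % 2
    return mirrors * drive_tb, spares

for n in (13, 11, 9):
    usable, spares = mirror_capacity_tb(n)
    print(f"{n} drives -> {usable} TB usable, {spares} spare")
```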
The ESXi hosts have some local disks that are just used for booting ESXi; I might repurpose them into a small 'slow storage' RAIDZ1 pool for temporary backup locations or unimportant VMs.
I'm not sure, which is why I hedged and said "these rules-of-thumb may not apply". But, given the same model of SSD, it's a certainty that mirroring them will provide more IOPS than putting them into any RAIDZn configuration.
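A rough back-of-the-envelope model of why: random IOPS scales with the number of vdevs, not the number of drives, and mirrors can serve reads from either side. The per-drive IOPS figure below is an assumed placeholder, not a measurement:

```python
# Crude IOPS model: for random I/O, each vdev performs roughly like one
# member drive; mirrors additionally serve reads from both sides.
PER_DRIVE_IOPS = 50_000   # assumed placeholder figure
DRIVES = 12

# 6 x 2-way mirrors -> 6 vdevs
mirror_write_iops = (DRIVES // 2) * PER_DRIVE_IOPS
mirror_read_iops = DRIVES * PER_DRIVE_IOPS

# 2 x 6-drive RAIDZ1 -> 2 vdevs
raidz_write_iops = 2 * PER_DRIVE_IOPS
raidz_read_iops = 2 * PER_DRIVE_IOPS

print("mirrors:", mirror_write_iops, "write /", mirror_read_iops, "read")
print("raidz1: ", raidz_write_iops, "write /", raidz_read_iops, "read")
```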
Right. And you may be correct about a higher utilization rate being okay for SSD-based block storage.
Understood. We are in fairly new territory here; there are very few all-flash systems to compare against.
It may be that @JeroenvdBerg would be better served using HDDs instead of SSDs for his pool and maxing out his memory instead. After all, the board he's chosen can support up to 500GB or 2000GB of RAM, depending on CPU & RAM type. But I can't recommend this in good conscience, because I don't know for certain whether it's true.
That's the main problem: all-flash doesn't make much sense in a homelab / mass-storage array (yet), so nobody is really doing it except businesses like TrueNAS (and I was not impressed with their prices) and EMC (we got a quote for about 40k all-flash, while an array with 24 x 10k SAS would cost us 19k). We also had a quote from DELL, but they are out of their fucking minds (50k for hybrid).
I have been searching online for quite some time, but all-flash ZFS is not something that is very easy to find.
There have been some other users who tried to build a high-performance homelab SAN:
https://forums.freenas.org/index.php?threads/high-performance-freenas-build.28820
https://forums.freenas.org/index.ph...-choice-and-the-rest-of-the-components.28671/
https://blog.pivotal.io/labs/labs/high-performing-mid-range-nas-server
http://www.hyperionepm.com/category/hyperion-home-lab/
All of them weigh the price of SSDs against 7.2k spinning drives; that is not a compromise I want or have to make, especially since SSDs are a lot cheaper than the 10k/15k SAS drives I would use otherwise. So I can just buy 2 SSDs for the price of 1 SAS drive. They might fail faster, but hey: I have 2, so I just replace one and send the failed one back to the supplier. That's why I want redundancy in the system, as much as possible, but without compromising performance.
But for our purposes, EMC/TrueNAS/DELL/Nutanix/PureStorage/HP cost way too much and are overdesigned. Let me explain: we are just a small IT shop, and most of us know our stuff, but we have to deal with decisions made by our no-longer-employed predecessors:
- the storage LAN is on the same network/switch as the rest of the machines (no VLAN)
- the 'SAN' has just 1 power supply, which has failed 2 times in the last 4 years
- the switch that connected the ESXi hosts, the 'SAN' and the data network was 100Mbit (thank god it failed)
- the OS of the 'SAN' (Openfiler) purple-screens at least once a year and is no longer updated
- the NICs on the SAN's motherboard don't work with the OS, so a PCI card with 2 x 1000Mbit has been installed, but no multipathing is configured and jumbo frames are off
- all LUNs are 100% allocated to VMware
- warranty is 1 year (including drives)
- etc.
So just to fix a few of these issues would be a relief for me and colleagues. /rant
But look at the differences:
Superserver 6028R-E1CR12L 12x3,5" + 2x2,5" (Super X10DRH-iT) - €1,715.56
Intel Xeon E5-2637 v4 - €869.83
Samsung M393A4K40BB0-CPB 32GB 4x - €492.56
Intel DC S3710 200GB - 1x - €188.99
Intel DC P3700 PCIe 3.0 x4 400GB - 1x - €455.95
WD Black WD1003FZEX 2TB 7.2k SATA 12x - €1,391.26
MCP-220-82611-0N 1x - €30.00
Total: €5,037.13 / 12TB usable (with no spare) / 10TB (2 spare)
And this:
Superserver 2028R-E1CR24L - 24x2,5" + 2x2,5" (Super X10DRH-iT) - €1,865.74
Intel Xeon E5-2637 v4 - €869.83
Samsung M393A4K40BB0-CPB 32GB 4x - €492.56
Intel DC S3710 200GB - 1x - €188.99
Crucial MX300 2,5" 1TB - 9x/13x - €2,043.00
Total (9 SSDs): €5,460.12 / 4TB usable
Total (13 SSDs): €6,368.12 / 6TB usable
The price difference is between roughly €425 and €1,330; of course, with the first option utilizing 3,5" drives at 2TB you get double the capacity, but no room for expansion and far less performance.
If using proper enterprise-grade SAS drives: 13 x 10k 2,5" SAS (HP 718162-B21 SAS2 10k 1.2TB), total €6,296.14 (7.2TB usable); or 13 x 15k 2,5" SAS (Seagate Enterprise Turbo SSHD ST600MX0052 600GB), total €7,053.26 (3.6TB usable).
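Putting those quotes on a per-TB basis makes the comparison easier (prices and usable capacities taken straight from the figures in this thread):

```python
# Price per usable TB for the configurations quoted in this thread.
options = {
    "12x 2TB 7.2k SATA (no spare)": (5037.13, 12.0),
    "9x 1TB SSD":                   (5460.12, 4.0),
    "13x 1TB SSD":                  (6368.12, 6.0),
    "13x 10k SAS 1.2TB":            (6296.14, 7.2),
    "13x 15k SAS 600GB":            (7053.26, 3.6),
}

for name, (price_eur, usable_tb) in options.items():
    print(f"{name}: EUR {price_eur / usable_tb:,.0f} per usable TB")
```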
So a RAID (Redundant Array of Inexpensive Disks) of SSDs starts making more sense.
The prices above are just what I could find online; I expect the system itself + CPU and memory to be cheaper from a vendor we recently contacted for a quote (including assembly and testing).