Okay, so I have tested all the logical scenarios for how I could set up my pool, and I am looking for some input. Here is the list of reasonable options.
Speed tests were done using the dd write/read test outlined in the benchmarking thread.
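For anyone who wants to reproduce the numbers: the test is essentially a big sequential write of zeros followed by a read-back of the same file, something like the commands below. The pool name and file path are placeholders for my actual setup, and compression has to be off on the dataset or the zeros get compressed away and the numbers are meaningless.

    # sequential write test, roughly 100 GB of zeros
    dd if=/dev/zero of=/mnt/tank/ddfile bs=2048k count=50000
    # sequential read test, read the same file back
    dd if=/mnt/tank/ddfile of=/dev/null bs=2048k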
Use of this NAS will be for ESXi VMs and Storage Pools, all file serving for the company, and backup of other data. VMs will include two or more Windows servers acting as domain controllers, a terminal server, etc., a Zimbra email server, and various other Windows and Linux machines for whatever I feel a hankering for.
Setup 1: 5 vdevs of 3 drives in 3-way mirror configuration, striped. Capacity: 8.9 TB
Write = 161 seconds, 633 MB/s, 18-30% CPU | Read = 97-82 seconds, 952 MB/s-1.21 GB/s, 15-35% CPU
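For reference, this layout would be built with something like the following (tank and the da0-da14 device names are just placeholders for my actual pool and drives):

    zpool create tank \
        mirror da0 da1 da2 \
        mirror da3 da4 da5 \
        mirror da6 da7 da8 \
        mirror da9 da10 da11 \
        mirror da12 da13 da14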
Setup 2: 2 vdevs of 8 drives in RAIDZ2, striped. Capacity: 21.4 TB
Write = 112 seconds, 911 MB/s, 30-60% CPU | Read = 105 seconds, 970 MB/s, 30-60% CPU
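Again for reference, roughly this (placeholder names again, 16 drives this time):

    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
        raidz2 da8 da9 da10 da11 da12 da13 da14 da15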
I would like to use only 14 drives total so that I have a couple of slots left for a spare and a cache device, but that leaves me breaking the "power of 2" rule. How absolute is that rule anyway?
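As I understand it, the rule is really about the number of data disks (total width minus parity) dividing a 128 KiB record evenly into sector-sized chunks; when it doesn't, each record gets padded out and some space is wasted. Rough back-of-the-envelope math for a 14-wide RAIDZ2 on 4 KiB-sector drives:

    12 data disks: 128 KiB / 12 ≈ 10.67 KiB per disk, rounds up to 12 KiB (~12% padding)
     8 data disks: 128 KiB / 8  = 16 KiB per disk, no padding

(By that math, the 8-wide RAIDZ2 vdevs in Setup 2 have 6 data disks, which isn't a power of two either, so maybe I am already breaking the rule there too.)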
Setup 3: 1 vdev of 14 drives in RAIDZ2. Capacity: 21.2 TB
Write = 133 seconds, 768 MB/s, 25-60% CPU | Read = 165 seconds, 620 MB/s, 25-50% CPU
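This one would look something like the following, with the two free slots used for the spare and cache (device names are placeholders once more):

    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13 \
        spare da14 \
        cache da15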
Setup #2 is the most balanced on performance, and its read and write speeds are certainly more than adequate. Setup #1 has the highest data-safety margin, but it is probably a bit overkill, and total capacity takes a big hit; not that 8.9 TB isn't enough for my needs. Its advantage is that it requires the fewest processor cycles. All of them will saturate my network link with plenty of Mbit/s to spare. I guess I am checking to see whether any experts spot a flaw in any of them. I thought about scoring all the parameters and seeing which setup came out highest for my situation, but that sounds like too much work. :)
Stephen