Now I have to decide on my pool design...

Status
Not open for further replies.

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
Okay, so I have tested all the logical scenarios of how I could set up my pool and I am looking for some input. Here is the list of reasonable options.
Speed tests were done using the dd write/read test outlined in the benchmarking thread.
This NAS will be used for ESXi VMs and storage pools, all file serving for the company, and backups of other data. VMs will include 2+ Windows servers acting as domain controllers, a terminal server, etc., a Zimbra email server, and various other Windows and Linux machines for whatever I feel a hankering for.
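For reference, the dd test I ran is roughly the following (the exact block size, count, and path come from the benchmarking thread; the ones here are placeholders, and compression has to be off on the test dataset or the zero-fill numbers are meaningless):

# write test: stream zeros to a file on the pool
dd if=/dev/zero of=/mnt/tank/ddfile bs=2048k count=50000
# read test: stream the same file back out
dd if=/mnt/tank/ddfile of=/dev/null bs=2048k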

Setup 1: 5 vdevs of 3 drives in 3-way mirror configuration, striped. Capacity 8.9TB
Write = 161 seconds, 633 MB/s, 18-30% CPU | Read = 97-82 seconds, 952 MB/s - 1.21 GB/s, 15-35% CPU
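(For clarity, that layout would be created with something like the command below; "tank" and the da* device names are just placeholders for my actual pool name and disks.)

zpool create tank \
  mirror da0 da1 da2 \
  mirror da3 da4 da5 \
  mirror da6 da7 da8 \
  mirror da9 da10 da11 \
  mirror da12 da13 da14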

Setup 2: 2 vdevs of 8 drives each in RAIDZ2, striped. Capacity 21.4TB
Write = 112 seconds, 911 MB/s, 30-60% CPU | Read = 105 seconds, 970 MB/s, 30-60% CPU
I would like to use only 14 drives in total so that I have a couple of slots left over for a spare and a cache device, but that leaves me breaking the "power of 2" rule. How absolute is that rule anyway?
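(The layout as tested, i.e. 16 drives in two 8-disk RAIDZ2 vdevs, same placeholder naming as above:)

zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15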

Setup 3: 1 vdev of 14 drives in RAIDZ2. Capacity 21.2TB
Write = 133 seconds, 768 MB/s, 25-60% CPU | Read = 165 seconds, 620 MB/s, 25-50% CPU
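(And the single 14-disk RAIDZ2 vdev, again with placeholder names:)

zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 da11 da12 da13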

Setup #2 is the most balanced on performance, and the read and write speeds are certainly more than adequate. Setup #1 has the highest data safety margin, but it is probably a bit overkill and total capacity takes a big hit (not that 8.9TB isn't enough for my needs); its advantage is that it requires the fewest processor cycles. All of them will saturate my network capability with plenty of Mbits to spare. I guess I am checking to see if any experts see a flaw in any of them. I thought about scoring each option on all the parameters to see which one came out highest for my situation, but that sounds like too much work. :)

Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
It's not recommended to use more than 8 spindles per vdev. It's also recommended that the number of spindles in a vdev be a power of 2 (2, 4, 8).

I'm not sure I understand your idea of a 3-way mirror. What is the advantage over a traditional mirror? If you use 8 vdevs of 2 drives, what are you not getting that you want?

I would consider doing 2 different pools: one with SAS storage for your VMs, and one for file storage. SATA does not provide good results for running VMs.
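Roughly along these lines; the vdev layout and device names here are only placeholders to show the idea, not a sizing recommendation:

# SAS pool for the VMs (mirrored for random-I/O performance)
zpool create vmpool mirror da0 da1 mirror da2 da3
# SATA pool for general file storage and backups
zpool create filepool raidz2 da4 da5 da6 da7 da8 da9 da10 da11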
 

Stephen J

Dabbler
Joined
Feb 3, 2012
Messages
49
louisk said:
It's not recommended to use more than 8 spindles per vdev. It's also recommended that the number of spindles in a vdev be a power of 2 (2, 4, 8).

I'm not sure I understand your idea of a 3-way mirror. What is the advantage over a traditional mirror? If you use 8 vdevs of 2 drives, what are you not getting that you want?

I would consider doing 2 different pools: one with SAS storage for your VMs, and one for file storage. SATA does not provide good results for running VMs.

"If you need better data protection, a 3-way mirror has a significantly greater MTTDL than a 2-way mirror." (ZFS Best Practices Guide) My only reason is data protection.

So, what about using SSDs for the VM pool? I am not opposed to using SAS drives, just curious.

Stephen
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
OK. I suspect that the difference in protection (in practice) will not be noticeable between the two, and cost will increase.

A couple of things about SSDs: 1) cost, and 2) they work great and they are fast, but with that speed come bottlenecks in different places, such as the network. To provide an example, at work I have a flash array with 24x256G SSDs in it, attached to 8G fibre. Even with 8G I can't bottleneck the flash (in terms of IOPS), even running 400 VMs.

I don't think that (from what I've heard so far) flash is justified; I think you'd be fine with 10k or 15k SAS. However, if this is for work, perhaps you could work with some vendors and do a PoC so you could compare your options. I frequently do that, and sometimes the results are surprising.
 