Design for max performance

aaronlalonde

Cadet
Joined
Sep 4, 2012
Messages
2
I am looking to design an array to serve two ESXi hosts with about 4-6TB usable space. So far I am looking at a Supermicro 16-bay chassis, a Supermicro Xeon motherboard, at least 16GB of RAM, two LSI 9211-8i controllers, and sixteen 600GB 15K SAS drives. I will add some SSDs for cache and log devices if necessary.

I am unsure what the throughput possibilities of this system may be, as it is still on paper. Can someone help with drive groupings or any other tips to get maximum read/write speeds out of this array?

Thanks
 

ben

FreeNAS GUI Developer
Joined
May 24, 2011
Messages
373
ZFS RAID groups should always be two to an integer power plus the RAID level, in terms of number of drives (so for RAID-Z: 3, 5, or 9 drives; for RAID-Z2: 4, 6, or 10 drives). Higher RAID levels incur an increasing performance and capacity penalty but offer better data security. Consider striping RAID groups of ideal sizes and using leftover drives as spares.
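For example, a 6-drive RAID-Z2 vdev is 4 data drives (2^2) plus 2 parity, and a 10-drive RAID-Z2 vdev is 8 data drives (2^3) plus 2 parity; 7 or 8 drives in a single RAID-Z2 vdev would fall between the ideal sizes.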

On 8.2, ZIL devices (a) only benefit synchronous writes (so primarily NFS) and (b) should ONLY be used mirrored, because otherwise the ZIL device represents a single point of failure for the entire pool. This advice still holds on v28 pools (coming in FreeNAS 8.3), although a v28 pool will at least theoretically be able to survive a log device failure. As for cache devices, the best benefit comes (of course) when the working set fits within the cache device and ARC (RAM) combined.
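
To make that concrete, here is a rough command-line sketch; the pool name "tank" and the da* device names are placeholders, and FreeNAS would normally set this up through the GUI rather than the shell:

    # Attach a mirrored SLOG so a single log-device failure can't take out the pool
    zpool add tank log mirror da14 da15
    # Add L2ARC cache devices; these need no redundancy, since losing one
    # only loses cached copies of data, never the data itself
    zpool add tank cache da12 da13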
 

aaronlalonde

Cadet
Joined
Sep 4, 2012
Messages
2
So it looks like I should have four 4-drive RAID-Z2 vdevs striped, plus two 128GB mirrored cache drives and two mirrored laptop drives for the OS installation?

1. Does this sound reasonable, and
2. How can I determine the throughput (read/write) of this configuration before actually buying the hardware?
 

ben

FreeNAS GUI Developer
Joined
May 24, 2011
Messages
373
That setup will lose a full 50% of its capacity to parity and doesn't sound right for your resources. Instead, why not use two 6-drive RAID-Z2 vdevs, each with a spare drive attached in case of failures? Remember you need two drive bays for the flash cache devices. If you plan to use laptop drives instead of flash drives for the OS installation, you won't even have space for the spares. (Laptop drives offer no performance benefit anyway; the OS loads into RAM and rarely touches the boot disk. An internal USB stick is a viable choice if you don't want the boot drive hanging outside the box.)
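
In command-line terms, that layout would look roughly like the sketch below (placeholder pool and device names; the FreeNAS volume manager builds the equivalent):

    # Two 6-drive RAID-Z2 vdevs striped together, plus two spares
    zpool create tank \
        raidz2 da0 da1 da2 da3 da4 da5 \
        raidz2 da6 da7 da8 da9 da10 da11 \
        spare da12 da13

Each 6-drive RAID-Z2 vdev stores four drives' worth of data, so with 600GB disks that is roughly 8 x 600GB = 4.8TB usable before filesystem overhead, within the 4-6TB target.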

As for throughput prediction, it depends very heavily on the workload, in your case particularly on the ratio of cache size to working set size. Even absent that, I don't know how to reliably predict throughput from theoretical specifications, so I won't make unfounded guesses.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,402
ben said: "That setup will lose a full 50% of its capacity to parity and doesn't sound right for your resources."
He did say max performance, which would probably mean two or so separate pools set up as striped mirrors.
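
For illustration, a striped-mirror pool across twelve of the drives might look something like this (placeholder names again; splitting into separate pools is a separate decision):

    # Six 2-way mirrors striped together: best random I/O,
    # at the cost of losing 50% of raw capacity to redundancy
    zpool create tank \
        mirror da0 da1 mirror da2 da3 mirror da4 da5 \
        mirror da6 da7 mirror da8 da9 mirror da10 da11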

Since spares aren't hot anyway, I wouldn't worry about them not being in the case.
 

ben

FreeNAS GUI Developer
Joined
May 24, 2011
Messages
373
paleoN is correct: striped mirrors are (probably) better for performance than RAID-Z2. I don't have any particular recommendations for how to set up such a pool, though.
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
If you go to 8.3 you will get the added performance benefit of ZFS pool version 28. This also allows you to use a single drive as a ZIL, because on v28 the ZIL can fail without failing the pool.
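
For example (placeholder pool and device names), an unmirrored log device on a v28 pool can simply be added like this:

    zpool add tank log da14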

I will hopefully get an Intel 313 one of these days and see the impact of a ZIL on NFS.
 