12 Disk Config

Status
Not open for further replies.

Todd Hayward

Dabbler
Joined
Jan 10, 2014
Messages
22
Experts, how would you set up your RAIDZ with 12 disks? More than enough RAM (RAM > 2 GB/TB).

SAS HBA > SAS expander > backplane > disks

Thanks
 

Krazypoloc

Cadet
Joined
Jan 14, 2014
Messages
6
What's the intended purpose of the pool? If you don't need lots of I/O, I'd do two RAIDZ2 vdevs of six disks each.
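For illustration, that layout might be created like this. This is a hedged sketch, not a tested command: the pool name tank and the FreeBSD-style device names da0 through da11 are assumptions.

```shell
# Hypothetical sketch: 12 disks as two 6-disk RAIDZ2 vdevs.
# Pool name and device names (da0..da11) are assumptions.
zpool create tank \
    raidz2 da0 da1 da2  da3  da4  da5 \
    raidz2 da6 da7 da8  da9  da10 da11
```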
 

Todd Hayward

Dabbler
Joined
Jan 10, 2014
Messages
22
Ah, good question. I intend to use it as the back end for a medium-load VMware environment. I'd prefer to present iSCSI targets rather than NFS, but whichever provides the lower-latency I/O is preferred.

Disks are 1 TB midline SATA disks. I want a balance between capacity and I/O, with a preference for I/O (but not at a cost of 50% of the capacity).

Thanks!
 

Krazypoloc

Cadet
Joined
Jan 14, 2014
Messages
6
If it's strictly for VMFS, then I'd do striped mirrors. Your alternatives would be four RAIDZ1 vdevs of three disks each, or three RAIDZ2 vdevs of four disks each. If it's a production environment, I'd stay away from RAIDZ1.

Each additional vdev adds roughly the random I/O of one of your single disks, so more vdevs means more IOPS.
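That rule of thumb can be roughed out in a few lines of shell. This is a back-of-the-envelope sketch for 12 x 1 TB disks, assuming random IOPS scales with the number of vdevs and usable space with the number of data disks; real numbers will vary.

```shell
#!/bin/sh
# Rough usable capacity and vdev count for 12 x 1 TB disks under the
# layouts discussed above. Random IOPS scales roughly with vdev count.

layout() {
    # $1 = layout name, $2 = number of vdevs, $3 = data disks (1 TB each) per vdev
    echo "$1: vdevs=$2 usable_TB=$(( $2 * $3 ))"
}

layout "6x 2-disk mirrors" 6 1   # most vdevs (best IOPS), half the raw capacity
layout "4x RAIDZ1 of 3"    4 2
layout "3x RAIDZ2 of 4"    3 2
layout "2x RAIDZ2 of 6"    2 4   # fewest vdevs, most redundancy per vdev
```

The trade-off is visible at a glance: mirrors maximize vdev count (IOPS) at 6 TB usable, while the RAIDZ layouts trade vdevs for capacity or redundancy.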
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
There are also two RAIDZ2s of six disks each.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
RAIDZ is, for a variety of reasons, not recommended as a VM backing store.

This includes the discussion on volblocksize, specifically as it relates to zvols, here: http://forums.freenas.org/threads/raidz-with-4k-blocks.16339/

But in general, VM backing store presents a particularly nasty set of problems that all coincide. Fragmentation on these pools is a problem that could ultimately doom a poorly designed pool; the best fix for that particular problem appears to be to drastically oversize the pool (4x-10x) in order to reduce the eventual performance hit.

The best option for VM backing store appears to be:

1) The use of striped mirror vdevs,
2) Possibly (area for research) a smaller pool blocksize,
3) Massive free space overprovisioning to allow ZFS flexibility in space allocation (absolute minimum pool should be sized 2x what you plan to use),
4) Large amounts of RAM, minimum of 2GB per TB of hard drive in the pool, maybe more like 4GB,
5) Sufficient ARC or L2ARC to hold the entire working set (blocks read in a given period of time, like 1 hour or 1 day),
6) Something for SLOG.

Turns into a kind-of large project.
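Points 2 and 6 above might look something like the following. This is a hedged sketch only: the pool name tank, the zvol size, the 16K block-size value, and the SSD partition names are all assumptions, not recommendations from the post.

```shell
# Hypothetical sketch of a tuned zvol plus SLOG (all names/sizes are assumptions).

# A zvol to back an iSCSI extent, with an explicit (illustrative) volblocksize:
zfs create -V 500G -o volblocksize=16K tank/vmstore

# A mirrored SLOG on two SSD partitions, so sync writes survive a device loss:
zpool add tank log mirror ada0p1 ada1p1
```

Note that volblocksize is fixed at zvol creation time, so it's worth deciding before filling the pool.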
 

Todd Hayward

Dabbler
Joined
Jan 10, 2014
Messages
22
Plan B:

Use an NFS export, mounted as a Datastore, to hold folders and vmdk files.

Would this simplify things?

The more I read about iSCSI and ZFS on this product, the less I like (and feel like I can depend on) FreeNAS in general.
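For reference, Plan B might be wired up roughly like this. A hedged sketch: the dataset name, the export options, the IP addresses, and the datastore name are all assumptions.

```shell
# Hypothetical sketch: share a dataset over NFS, then mount it from ESXi.
# All names, options, and addresses are assumptions.

# On the FreeNAS side:
zfs create tank/vmware
zfs set sharenfs="-maproot=root -network 10.0.0.0/24" tank/vmware

# On the ESXi host:
esxcli storage nfs add -H 10.0.0.10 -s /mnt/tank/vmware -v freenas-ds
```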
 

Krazypoloc

Cadet
Joined
Jan 14, 2014
Messages
6
Todd Hayward said:

Plan B:

Use an NFS export, mounted as a Datastore, to hold folders and vmdk files.

Would this simplify things?

The more I read about iSCSI and ZFS on this product, the less I like (and feel like I can depend on) FreeNAS in general.
NFS will also work, but it's not as fast and you can't do MPIO. It is quite a bit simpler, though.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Todd Hayward said: The more I read about iSCSI and ZFS on this product, the less I like (and feel like I can depend on) FreeNAS in general.

It has nothing to do with FreeNAS. If you want to use iSCSI, use UFS or devices and call it a day... or give ZFS what it needs to do the job right. If you want to use ZFS, these problems exist not only on every platform that offers ZFS, but also on most other CoW filesystem variants. A lot of people think they're going to build a Ford Escort-style fileserver: cheap and... cheap. But if you want to do heavy hauling off-road, you need a nice pickup or maybe an SUV... and it is going to cost a lot more. FreeNAS is absolutely capable of spectacular storage feats. The problem is that you're not going to do it on a recycled 2005-era Pentium 4 with 1GB of RAM.
 

DJABE

Contributor
Joined
Jan 28, 2014
Messages
154
jgreco, would you then recommend UFS over ZFS for VMs (ESXi connected via iSCSI to the NAS box)? One could dedicate just one HDD and instead do a backup (rsync, etc.) onto a ZFS pool on the same NAS box, and use the zpool for other stuff too.

What about ZFS mirrors? In my case, vdev0 (2 x 3.0 TB RED) and vdev1 (same, 2 x 3.0 TB RED). That would maximize IOPS, but I wonder whether that even matters if you're using exactly the same HDDs: same model, brand, etc.
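That two-mirror layout could be sketched as follows. A hedged example only: the pool name and device names are assumptions.

```shell
# Hypothetical sketch of two mirror vdevs of 2 x 3 TB each.
# Pool name and device names are assumptions.
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3
# Random IOPS scales with the two vdevs regardless of drive model;
# identical drives don't change that, though they may share failure modes.
```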
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
DJABE: Keep to one thread. If you want to have 2 threads, I'll delete one of them. There's no point in having 2 different discussions at the same time on the same topic.
 