All flash zpool - unsure about vdev config

Status
Not open for further replies.

Dutchie30

Cadet
Joined
Dec 13, 2015
Messages
4
Hi all,

I'm looking to migrate an existing config away from Open-E to FreeNAS. The full config is:

- Intel Xeon E3-1270v3
- Supermicro X10SLM-F
- 32GB DDR3 ECC
- LSI 9271-4i
-- 24x 256GB Samsung 840 Pro (storage)
- Areca ARC-1200
-- 2x 128GB Samsung 840 Pro (boot)
- 2x 1Gbps multipath network connection

For starters, because of ZFS/FreeNAS, I'm looking to replace the 9271-4i with a 9207-8i HBA.

The current Open-E config is set up in RAID60 with two hot spares, giving a total usable space of approximately 4.5TB. The config is used exclusively as an iSCSI storage node for virtual machines in a XenServer pool. Its workload will remain the same after the migration to FreeNAS.

My requirements are:

1) IOPS are more important than throughput
2) Approximately 2TB used space, but room to grow up to the entire pool size (temporarily or permanently) if required.
3) Good redundancy

I'm left with one big question, and that is how to configure the 24 drives. After some reading, I've come up with the following two options:

1) 12x mirrored vdevs - approximately 3TB usable space
2) 4x RAIDZ2 of 6 drives - approximately 4TB usable space
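For reference, here is a quick back-of-the-envelope sketch of where those usable-space figures come from. It assumes roughly 238 GiB of formatted capacity per 256 GB SSD and ignores ZFS metadata and slop-space overhead, so real numbers will land a little lower:

```python
# Rough usable-capacity estimate for 24x 256 GB SSDs under the two layouts.
# DRIVE_GIB is an assumed formatted capacity, not a measured one.

DRIVE_GIB = 238  # approx GiB per 256 GB SSD

# Option 1: 12 two-way mirror vdevs -> one drive of usable space per mirror
mirrors_gib = 12 * DRIVE_GIB

# Option 2: 4 six-drive RAIDZ2 vdevs -> 4 data drives per vdev (2 parity)
raidz2_gib = 4 * (6 - 2) * DRIVE_GIB

print(f"12x mirrors : {mirrors_gib / 1024:.1f} TiB usable")
print(f"4x RAIDZ2(6): {raidz2_gib / 1024:.1f} TiB usable")
```

Both figures line up with the approximate 3TB and 4TB quoted above.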

I'm aware that FreeNAS suggests a maximum of 50% used (I believe in an iSCSI configuration only, correct?) for performance reasons. Is this suggestion a general guideline, or is real performance degradation seen after using more than 50% in an iSCSI pool? And does this suggestion also apply to all-flash pools?
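Against the ~2TB target, the occupancy works out differently under each layout (simple arithmetic using the approximate usable-space figures from the two options; the 50% figure is the guideline commonly cited for iSCSI/block workloads on ZFS):

```python
# Occupancy under each layout for ~2 TB of stored data, compared to the
# ~50% guideline often cited for iSCSI/block workloads on ZFS.
USED_TB = 2.0

for name, usable_tb in [("12x mirrors", 3.0), ("4x RAIDZ2(6)", 4.0)]:
    pct = 100 * USED_TB / usable_tb
    flag = "over" if pct > 50 else "at/under"
    print(f"{name:13s}: {pct:.0f}% full ({flag} the 50% guideline)")
```

So at 2TB in use, the mirror layout would already sit around 67% full, while the RAIDZ2 layout sits right at 50%.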

If the 50% suggestion is valid even for my all-flash pool, I'm inclined to go with disk option 2. In that case, are the IOPS comparable to 4 striped SSDs?
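The usual rule of thumb here: for random I/O, each vdev contributes roughly one drive's worth of IOPS, and mirrors can additionally serve random reads from either side. A hedged sketch of what that implies for the two layouts (the per-SSD IOPS figure is purely illustrative, not a measurement):

```python
# Rule-of-thumb random-IOPS comparison. PER_SSD_IOPS is a hypothetical
# figure for illustration only; real results depend on workload and drive.
PER_SSD_IOPS = 40_000  # assumed 4K random IOPS per SSD

# Striped mirrors: 12 vdevs; reads can be served by both sides of a mirror.
mirror_write = 12 * PER_SSD_IOPS
mirror_read = 24 * PER_SSD_IOPS

# 4x RAIDZ2: each vdev performs like roughly one drive for random I/O.
raidz2_write = 4 * PER_SSD_IOPS
raidz2_read = 4 * PER_SSD_IOPS

print(f"mirrors: ~{mirror_write:,} write / ~{mirror_read:,} read IOPS")
print(f"raidz2 : ~{raidz2_write:,} write / ~{raidz2_read:,} read IOPS")
```

By this rule of thumb, yes: 4x RAIDZ2 would behave roughly like a 4-drive stripe for random I/O, while the mirror layout would be about 3x faster for random writes and more for reads.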

Hope this all makes sense. Looking forward to your response!
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
My requirements are:

1) IOPS are more important than throughput
2) Approximately 2TB used space, but room to grow up to the entire pool size (temporarily or permanently) if required.
3) Good redundancy
  1. This indicates striped mirrors.
  2. You would be able to grow your storage by adding more mirror vdevs, or by replacing each drive in one or more mirrors with a larger drive.
  3. 4x RAIDZ2 would give you higher reliability and lower performance. With striped mirrors, resilvering is faster.
I have no personal experience of iSCSI, but with IOPS as the top priority, I would expect the 50% guideline to be important.
Areca ARC-1200
What's this for?
 

Dutchie30

Cadet
Joined
Dec 13, 2015
Messages
4
Thanks for the feedback Robert. I've gone with option #1 for disk layout and am very happy with the performance.

The Areca is used as a hardware RAID1 controller for the boot disk(s). It currently holds the FreeNAS installation.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
The Areca is used as a hardware RAID1 controller for the boot disk(s)
Interesting choice, without the potential for catastrophe that using it for a data storage pool would bring.

Is there a reason you don't want to use it in JBOD mode and let ZFS take care of mirroring?
 

Dutchie30

Cadet
Joined
Dec 13, 2015
Messages
4
Interesting choice, without the potential for catastrophe that using it for a data storage pool would bring.

Is there a reason you don't want to use it in JBOD mode and let ZFS take care of mirroring?

I've had it in RAID1 during its Open-E days and thought it wouldn't make a difference using FreeNAS since it's only used as a boot device. Removing/recabling it inside the chassis would have been more work.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
thought it wouldn't make a difference using FreeNAS since it's only used as a boot device
It will certainly make a difference. Just like with a data pool, FreeNAS will be unable to monitor the health of the drives, nor will it be able to fix corruption by scrubbing. However, as long as you have an up-to-date backup of your configuration, recovering from a lost boot pool is straightforward.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
It will certainly make a difference. Just like with a data pool, FreeNAS will be unable to monitor the health of the drives, nor will it be able to fix corruption by scrubbing. However, as long as you have an up-to-date backup of your configuration, recovering from a lost boot pool is straightforward.
However, it seems like this build is something relatively enterprisey... so knowing that the boot pool is degraded before it totally dies might be helpful and prevent unintended downtime.
 