Hi all,
I'm looking to migrate an existing config away from Open-E to FreeNAS. The full config is:
- Intel Xeon E3-1270v3
- Supermicro X10SLM-F
- 32GB DDR3 ECC
- LSI 9271-4i
-- 24x 256GB Samsung 840 Pro (storage)
- Areca ARC-1200
-- 2x 128GB Samsung 840 Pro (boot)
- 2x 1Gbps multipath network connection
For starters, since ZFS/FreeNAS wants direct access to the disks, I'm looking to replace the 9271-4i RAID controller with a 9207-8i HBA.
The current Open-E config is set up in RAID60 with two hot spares, giving a total usable space of approximately 4.5TB. The box is used exclusively as an iSCSI storage node for virtual machines in a XenServer pool. Its workload will remain the same after the migration to FreeNAS.
My requirements are:
1) IOPS are more important than throughput
2) Approximately 2TB used space, but room to grow up to entire pool size (either temporary or permanent) if required.
3) Good redundancy
I'm left with one big question: how to configure the 24 drives. After some reading I've come up with the following two options:
1) 12x mirrored vdevs - approximately 3TB usable space
2) 4x raidz2 of 6 drives - approximately 4TB usable space
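To sanity-check my usable-space figures for the two options, here's a quick back-of-the-envelope calculation (assuming the raw 256GB per drive and ignoring ZFS metadata/slop overhead):

```python
DRIVE_GB = 256   # Samsung 840 Pro raw capacity
TOTAL_DRIVES = 24

# Option 1: 12 two-way mirrors striped together.
# Each mirror vdev contributes one drive's worth of space.
mirrors = TOTAL_DRIVES // 2
mirror_usable = mirrors * DRIVE_GB
print(f"Option 1 (12x mirror):  {mirror_usable / 1000:.1f} TB")  # ~3.1 TB

# Option 2: 4 raidz2 vdevs of 6 drives each.
# Each vdev loses 2 drives to parity, leaving 4 data drives.
vdevs = 4
raidz2_usable = vdevs * (6 - 2) * DRIVE_GB
print(f"Option 2 (4x raidz2-6): {raidz2_usable / 1000:.1f} TB")  # ~4.1 TB
```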
I'm aware that FreeNAS suggests keeping pool usage below 50% (I believe that applies to iSCSI configurations only, correct?) for performance reasons. Is this a general guideline, or is real performance degradation seen after exceeding 50% usage on an iSCSI pool? And does the suggestion also apply to all-flash pools?
If the 50% suggestion holds even for my all-flash pool, I'm inclined to go with disk option 2. In that case, are the IOPS comparable to 4 striped SSDs?
Hope this all makes sense. Looking forward to your responses!