Pools and Ethernet

Miguel Nunes

Explorer
Joined
May 6, 2016
Messages
52
Hi everyone,

This is my first post of the year, so I would like to wish everyone a happy and fulfilling 2020.

Now I have a few questions to ask about Pools, HBAs and Ethernet cards.

I have a Xeon-based FreeNAS box that I would like to convert into an iSCSI and regular file server for our internal network.
We have a secondary FreeNAS running on an HP MicroServer NEO-34.
We also have 2 Synologies that we want to use as rsync target servers. We like diversity.

Pools for the Primary NAS:
- First option would be 24 drives assembled in one pool of 3 striped vdevs of 8 drives each, with each vdev set up as RAID-Z2/Z3.
The idea comes from an iXsystems recommendation I found the other day: for iSCSI, use RAID-Z1 vdevs striped together to increase data throughput.

- Second option would be 24 drives assembled in one pool of 2 striped vdevs, each with 12 drives in RAID-Z3.

What would be the most secure option with good, balanced throughput? I currently have a pool of 8 drives in RAID-Z2 and I get 300 MB/s+ throughput.
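For comparing the two layouts on raw capacity, here is a minimal sketch. It assumes the usual rule of thumb that an N-wide RAID-Z(p) vdev yields roughly N − p drives of usable space, and it ignores ZFS metadata, padding, and the recommended free-space headroom:

```python
def usable_drives(vdevs: int, width: int, parity: int) -> int:
    """Approximate usable capacity, in 'whole drive' units, for a pool
    of striped RAID-Z vdevs: each vdev contributes (width - parity)."""
    return vdevs * (width - parity)

# Option 1: three 8-wide RAID-Z2 vdevs (24 drives, 6 lost to parity)
option1 = usable_drives(vdevs=3, width=8, parity=2)

# Option 2: two 12-wide RAID-Z3 vdevs (24 drives, 6 lost to parity)
option2 = usable_drives(vdevs=2, width=12, parity=3)

print(option1, option2)  # → 18 18
```

By this rough measure the two options cost the same 6 drives of parity; they differ mainly in failure tolerance per vdev and in vdev count (which matters for IOPS).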

The use case is basically volumes formatted with VMware's VMFS. So far we have a direct 10 Gbit connection from a compute machine running ESXi (no driver problems) to the NAS. We plan to add 2 to 4 more machines in a mix of compute/workstation and desktop.

Because we have a lot of Macs, we decided to abandon the dual-role Mellanox cards and focus on a switch with 10 Gbit uplinks to connect the NASes to.


Pools and Ethernet:
On our secondary NAS we have a RAID-Z2 pool with 4 x 500GB drives. We need to expand to 4x3TB, 4x4TB, 4x6TB or 4x8TB; we think 10TB to 16TB drives are way too risky. The choice will depend on the cost and on how reliably and securely RAID-Z2 can operate with only 4 drives.

Is it possible to replace the drives one by one with larger ones and, at the end, expand the pool to the new size?

The network card is a Mellanox ConnectX card running in dual-port 10 Gbit/s Ethernet mode. The InfiniBand speed is 20 Gbit/s per port, but that isn't supported on FreeNAS.
We like InfiniBand, but we have driver issues with ESXi, so our plan is to move to pure 10 Gbit/s Ethernet cards. The first idea was to install a dual-port 10 Gbit/s Ethernet card so that we could later connect both ports to a 10 Gbit switch. There would be a direct connection between the 2 FreeNAS servers for replication, with data access from the internal 1 Gbit card.

We want to use it to store Time Machine data, an iSCSI volume shared between Linux machines with an OCFS2 filesystem (experimental), and some files (some rsynced from the primary NAS). This machine is low-load, though.

Do you recommend this approach?

Thanks in advance for your help!

MAN
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You will not get good throughput for iSCSI with RAIDZanything, unless you have dozens of drives and many vdevs.
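A rough way to see why: random IOPS in a ZFS pool scale with the number of vdevs, since each RAID-Z vdev delivers roughly the random IOPS of a single member drive. The sketch below assumes a nominal 100 random IOPS per spinning disk, which is an illustrative figure, not a benchmark:

```python
DRIVE_IOPS = 100  # assumed random IOPS of a single 7200 rpm disk (illustrative)

def pool_random_iops(vdevs: int) -> int:
    """Rule-of-thumb pool random IOPS: each vdev performs roughly
    like one drive for random I/O, regardless of vdev width."""
    return vdevs * DRIVE_IOPS

# 24 drives as 3 x 8-wide RAID-Z2 -> only 3 vdevs
# 24 drives as 12 x 2-way mirrors -> 12 vdevs
print(pool_random_iops(3), pool_random_iops(12))  # → 300 1200
```

Same drive count, roughly 4x the random IOPS with mirrors, which is what iSCSI/VM workloads actually stress.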


 

Miguel Nunes
Thank you jgreco.

That was the most complete information I've gotten so far.
My plan is to create mirror vdevs of 3 or 4 drives each and stripe the vdevs. I have space for 24 drives.
I am also going to expand from 64GB RAM to 128GB.
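The trade-off between 3-way and 4-way mirrors across the 24 bays can be sketched like this (assumption: an n-way mirror vdev gives one drive of usable space and survives n − 1 failures within that vdev; ZFS overhead ignored):

```python
def mirror_layout(bays: int, mirror_width: int) -> tuple[int, int, int]:
    """For a pool of identical n-way mirror vdevs, return
    (vdev count, usable drives, tolerated failures per vdev)."""
    vdevs = bays // mirror_width
    usable = vdevs                      # one drive of space per mirror vdev
    failures_per_vdev = mirror_width - 1
    return vdevs, usable, failures_per_vdev

print(mirror_layout(24, 3))  # → (8, 8, 2)
print(mirror_layout(24, 4))  # → (6, 6, 3)
```

Either way you end up with more vdevs (8 or 6) than the 2-3 vdevs of the RAID-Z layouts, at the cost of much lower usable capacity.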

Thank you again,
 