Initial Replication

Status
Not open for further replies.

Daniel-A

Dabbler
Joined
Jan 17, 2017
Messages
22
Greetings! My next question...

I am working on replicating some data (about 5 TB) from FreeNAS server A to FreeNAS server B as an interim step so I can reclaim some disks for use in FreeNAS server C. I am having trouble getting reasonable throughput between the systems. The servers are directly connected over fiber on 10 Gbps NICs, and iperf reports 9.38 Gbps. I have tried rsync (about 150 Mbps throughput) and ZFS replication (about 500 Mbps throughput). I know the disks and controllers are likely to bottleneck before the NICs, but I feel I should be able to get much closer to the 6 Gbps controller speed. When mounting iSCSI shares from the pools I see reads and writes in the 3+ Gbps range (450 MB/s in CrystalDiskMark). Does anything stand out that I am missing? I have a few other options that would work but are slower, and I wanted to leverage the 10 Gbps NICs for the initial replication.
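For a sense of scale, the throughput figures above translate into very different total transfer times for 5 TB. A quick back-of-envelope calculation (decimal TB, ignoring protocol overhead):

```shell
#!/bin/sh
# Rough transfer-time estimates for 5 TB at the throughputs mentioned
# in this thread (rsync, ZFS replication, and the raw iperf link rate).
BITS=$((5 * 1000000000000 * 8))   # 5 TB expressed in bits

for RATE_MBPS in 150 500 9380; do
    SECS=$(( BITS / (RATE_MBPS * 1000000) ))
    HOURS=$(( SECS / 3600 ))
    echo "${RATE_MBPS} Mbps -> ~${HOURS} h"
done
# 150 Mbps is ~74 hours; 500 Mbps is ~22 hours; the raw link could do it in ~1 hour.
```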

Server A:
Build: FreeNAS-9.10.2-U4 (27ae72978)
Platform: Intel(R) Xeon(R) CPU L5630 @ 2.13GHz x 2
Memory: 16345 MB
Disks: 12 x 1 TB in RAID-Z3
Controller: 8 disks on LSI 2008 (IT mode), 4 disks on onboard SATA

Server B:
Build: FreeNAS-9.10.2-U3 (e1497f269)
Platform: Intel(R) Xeon(R) CPU E5-2450L 0 @ 1.80GHz
Memory: 49088 MB
Disks: 14 x 1 TB in RAID 10 (striped mirrors)
Controller: 14 disks on LSI 2008 (IT mode)
 

Daniel-A

Dabbler
Joined
Jan 17, 2017
Messages
22
Not from a performance standpoint, no. I ended up using robocopy from the original source, getting about 1,200 Mbps throughput.
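For reference, a robocopy run along these lines is a plausible way to get that kind of throughput over CIFS; the share paths and thread count below are assumptions, since the post does not give the exact command used:

```shell
# Hypothetical robocopy invocation (Windows cmd), run from a client that
# can reach both shares. /E copies subdirectories including empty ones,
# /MT:16 copies with 16 threads, /R:1 /W:1 keeps retries short.
#
#   robocopy \\serverA\data \\serverB\data /E /MT:16 /R:1 /W:1
```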
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
What you're seeing is, unfortunately, normal. On a direct link connection such as yours, you can turn off replication stream compression and disable the encryption cipher, but the snapshot replication will still be sent via ssh, which will most likely be your new-found bottleneck. When you kick off the replication task, open "Display System Processes" and watch the CPU column: you should see the sshd and dd processes spike in CPU use. It should still be faster than CIFS via robocopy, but there are some unfortunate performance bottlenecks in FreeNAS with regard to ZFS replication. I get about 2.5 Gbps on our direct 10G link with the current ZFS replication.

There is a fix incoming to address the performance issues. Here's hoping that replication will move closer to line rate, but we will have to see. https://bugs.freenas.org/issues/24405
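For a one-off bulk transfer like this, a common workaround for the ssh bottleneck is to stream the snapshot outside of ssh entirely, for example through mbuffer, assuming it is installed on both ends (plain nc works similarly). This is a sketch only; the pool, dataset, snapshot names, receiver IP, and port below are all hypothetical:

```shell
#!/bin/sh
# Sketch of a manual, unencrypted zfs send over the direct link,
# bypassing ssh. All names here are placeholders -- substitute your own.
SNAP="tank/data@migrate"    # snapshot to send (hypothetical)
DEST_POOL="tank/backup"     # receiving dataset (hypothetical)
RECEIVER="10.0.0.2"         # server B's direct-link IP (hypothetical)
PORT=9090

# Run on server B first: mbuffer listens on the port and feeds zfs receive.
RECV_CMD="mbuffer -s 128k -m 1G -I ${PORT} | zfs receive -F ${DEST_POOL}"

# Then on server A: stream the snapshot straight into mbuffer.
SEND_CMD="zfs send ${SNAP} | mbuffer -s 128k -m 1G -O ${RECEIVER}:${PORT}"

echo "on B: ${RECV_CMD}"
echo "on A: ${SEND_CMD}"
```

Note this sends in the clear, which is only sensible on a private, directly connected link like the one described in this thread.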
 

Daniel-A

Dabbler
Joined
Jan 17, 2017
Messages
22
Thanks for the feedback! That gives me a better understanding. Looking forward to testing after the fix!
 