ZFS Send : FreeNAS 9.2.1.8 with 4 gig NICs > 9.2.1.8 with 4 gig NICs

Status
Not open for further replies.

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
I want to know how to get 4x the speed between two FreeNAS boxes when doing a ZFS send.

Do I make a LAGG and round-robin load balance? Do I need a static IP plus three IP aliases, in the same or different subnets? I am fairly sure I cannot use EtherChannel. I am trying to send over an NFS file system that has a LOT of small files (2.8 TB), and I want to do this as fast as possible.

Thanks,
Joe
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
You can't. LAGG doesn't work that way. You only get the speed of a single network link with LAGG, even if you run LAGG in between and on both ends. LAGG is not multipath, so it cannot boost performance the way you want. Its benefits are failover and spreading a large number of simultaneous clients across the links.

If you want "moar speed" you will need to get 10Gb hardware. I suspect you are just going to wait out the transfer though. ;)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Round robin will NOT improve your throughput in the way you want, OP.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
It heavily depends on how your storage is structured, but if the stars are aligned you could engineer multiple simultaneous ZFS send processes, as in two or three or four, each using its own wire.

Now... what does NFS have to do with a transfer performed using ZFS send / receive?
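
As a rough sketch only (dataset names, the snapshot name, and the receive-side IPs are placeholders, with one IP living on its own dedicated interface on the receiver):

# snapshot each dataset you want to move
zfs snapshot -r tank/ds1@migrate
zfs snapshot -r tank/ds2@migrate
# run one send per dataset, each pointed at a different receive-side IP/interface
zfs send -R tank/ds1@migrate | ssh root@10.0.1.2 "zfs recv -F backup/ds1" &
zfs send -R tank/ds2@migrate | ssh root@10.0.2.2 "zfs recv -F backup/ds2" &
wait

Whether the streams really stay on separate wires depends on how the routing and interfaces are set up, and you need a dataset layout that splits the data roughly evenly between the sends.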
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
It heavily depends on how your storage is structured, but if stars are aligned you could engineer multiple, as in two or three or four, simultaneous ZFS send processes each using its own wire.

Actually, that's not entirely correct. Most devices go by MAC address. Since the source and destination MACs will always be the same, you'll always end up on the same cable. I learned this lesson the hard way. ;)
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Actually, that's not entirely correct. Most devices go by MAC address. Since the source and destination will always use the same, you'll always end up on the same cable. I learned this lesson the hard way. ;)
That is entirely true. I had envisioned each interface having its own unique IP, MAC, and name. One could even try hard-wiring the extra three pairs of interfaces back to back.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
You could ditch the LAGG completely and try to do some bastardized manual multi-ZFS send. But that's going to take significant work to set up and maintain properly. ;)

Bigger picture, you're better off with KISS (keep it simple, stupid).
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
The real question is how do we devise a plan to steal some 10gig equipment.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
What is your current transfer speed? Are you maxing out your Gbit link?
 

Norleif

Dabbler
Joined
Apr 13, 2012
Messages
20
I used eBay to obtain some 10Gbit hardware: Chelsio S320E-CR with optics.
ssh is now the bottleneck in zfs send, consuming 99% CPU even with the NONE cipher, peaking at about 2.5Gbit/s.
Using the GUI migration tools is slightly slower, since it pulls in dd for buffering; ssh uses 80% CPU and each instance of dd uses another 10%, peaking at 2.1Gbit/s.
Perhaps netcat or mbuffer would be faster, but they aren't implemented on FreeNAS.
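
If they were available, I'd expect the unencrypted version to look roughly like this (pool, dataset, snapshot name, and IP are placeholders; both tools would have to be installed on both boxes):

# on the receiving box: listen on a port and feed the stream into zfs recv
nc -l 8023 | zfs recv -F tank/copy
# on the sending box: pipe the send stream to that port
zfs send -R tank/data@migrate | nc 10.0.0.2 8023

mbuffer works the same way (-I to listen, -O host:port to send) but adds a RAM buffer on each end, which helps smooth out bursts from the disks.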
 

RegularJoe

Patron
Joined
Aug 19, 2013
Messages
330
Norleif,

So a faster way to get a dump of a ZFS file system would be to connect to the remote host as an iSCSI client and dump the file system to the target. It would be nice to be able to get NFS files from host A to host B using ZFS, but it sounds like that has some bottlenecks. What are your CPU and RAM? The two hosts I have are HP DL385 G5's with two AMD 4-core processors and 12 gig of DDR2 RAM, so they are older hosts. And the backplane on that host is only SATA1, so I have another bottleneck.

Thanks,
Joe
 

Norleif

Dabbler
Joined
Apr 13, 2012
Messages
20
In my case, I wanted to copy all my data from my old low-budget FreeNAS (6x WD Green, Intel G620 CPU) to my new and shiny mid-range FreeNAS (10x WD Red, Intel E3-1225V2).
I ended up plugging the WD Greens into a 2nd M1015 board and doing the zfs send | zfs recv locally in the new FreeNAS box.
I'm just wondering if I should have done a cp -a instead, to get rid of any fragmentation that might have accumulated on the old zpool...
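
The whole copy boiled down to something like this (pool names here are placeholders, not my real ones; the old pool gets imported first through the GUI or zpool import):

# snapshot everything on the old pool, then replicate it into the new pool
zfs snapshot -r greenpool@migrate
zfs send -R greenpool@migrate | zfs recv -F redpool/migrated

With no ssh or network in the path, the disks themselves become the limit.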
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
In my case, I wanted to copy all my data from my old low-budget FreeNAS (6x WD Green, Intel G620 CPU) to my new and shiny mid-range FreeNAS (10x WD Red, Intel E3-1225V2).
I ended up plugging the WD Greens into a 2nd M1015 board and doing the zfs send | zfs recv locally in the new FreeNAS box.
I'm just wondering if I should have done a cp -a instead, to get rid of any fragmentation that might have accumulated on the old zpool...
I am under the impression that if you do a single snapshot and zfs send / receive, then your ZFS filesystem on the receiving end will not have any fragmentation. Of course, only until you start using it :)
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
In my case, I wanted to copy all my data from my old low-budget FreeNAS (6x WD Green, Intel G620 CPU) to my new and shiny mid-range FreeNAS (10x WD Red, Intel E3-1225V2).
I ended up plugging the WD Greens into a 2nd M1015 board and doing the zfs send | zfs recv locally in the new FreeNAS box.
I'm just wondering if I should have done a cp -a instead, to get rid of any fragmentation that might have accumulated on the old zpool...

I agree with solarisguy; I think on the receive end ZFS realigns all the blocks and may also redo compression and block sizing to match the incoming data. I think this is one of the reasons why multiple zfs send/receive streams into the same volume are not implemented, as that would most likely fragment the data.

For your replication, are you going from the 6 WD Greens to an intermediate pool and then on to the 10 WD Reds, or from the 6 Greens to the 10 Reds directly?
I fail to grasp the entire process here.
If you are doing the latter, you should get good performance as long as it is done on the Xeon platform.
 

Norleif

Dabbler
Joined
Apr 13, 2012
Messages
20
It was a one-off copy from the 6 WD Greens to the 10 WD Reds, so I just plugged the 6 WD Greens into the Xeon box already containing the 10 WD Reds, imported the Green pool through the GUI, and then ran zfs send -R | zfs recv from the terminal on said Xeon box.

The idea is to use the new Xeon box with the Reds as the primary NAS and replicate the most important data to the older G620 box with the Greens for backup.
Before I built the Xeon box, my primary NAS was the G620 box with the 6 Greens in it, so I was looking for the best way to copy 8+ TB of data from the Greens to the Reds.
Before putting all the drives in a single box, I tried sending over a 10Gbit crossover fiber between the boxes, but found ssh, even with the NONE cipher, hit 99% CPU and limited the transfer rate to 2.5Gbit/s, so I abandoned that attempt in favour of moving the drives.
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
I would have just let it go. 2.5Gb/sec is hella fast.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
2.5Gb/sec isn't too bad, but it's only 25% efficient on a 10Gbit link.
How fast do you think your transfer was from one array to the next?
On my system, it seems I would max out around 190MB/s, and I am fairly confident it was not HDD limited.
 