
Howto: migrate data from one pool to a bigger pool

nojohnny101

Neophyte Sage
Joined
Dec 3, 2015
Messages
1,474
Oh, I didn't say you wrote it wrong. I'm just suggesting it could be a little clearer: since you are renaming two different pools at the same time, it was a little confusing which pool the first command was referring to and which one the second command was referring to. The tutorial is great, though! Just my 2 cents.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,867
Ah, I see. I'll look into it. Thx!
 

nojohnny101

Neophyte Sage
Joined
Dec 3, 2015
Messages
1,474
Well, back again with my tail between my legs. I realized that I set up the wrong RAIDZ type on my backup pool, so I wiped everything and tried replication again, and now it is not working. Both boxes are running the latest 9.10 stable version, and both are on the internal network.

Creating it through the GUI works fine: it creates the replication task, and I have confirmed it is pulling the correct SSH key. I have even checked the SSH connection from the CLI as per the guide.

I have tried running this command manually, and it never gives me an error:

Code:
zfs send local/data@auto-20110922.1753-2h | ssh -i /data/ssh/replication 192.168.2.6 zfs receive -F local/data@auto-20110922.1753-2h


but I see no traffic on either server and no spike in CPU usage.

Any additional troubleshooting ideas?
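A few generic checks from the CLI might help narrow it down. This is just a sketch: the pool, snapshot, and key paths below are taken from the command above, and -n/-v are the standard dry-run and verbose flags for zfs send:

Code:
# Confirm the snapshot actually exists on the sender:
zfs list -t snapshot -r local/data

# Dry-run the send (-n) with verbose output (-v) to see what would be sent:
zfs send -nv local/data@auto-20110922.1753-2h

# Verify the key-based SSH connection works non-interactively:
ssh -i /data/ssh/replication 192.168.2.6 echo ok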
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,867
Is there an error when using the GUI? And unless you really need some features in 9.10, I'd stick with 9.3 for a while.
 

nojohnny101

Neophyte Sage
Joined
Dec 3, 2015
Messages
1,474
Yes, the GUI says "waiting", but I can't find documentation on what this encompasses or why it displays that.

I had this same problem on stable 9.3 and I upgraded hoping for a miracle and surprise surprise, I didn't get one.
 

nojohnny101

Neophyte Sage
Joined
Dec 3, 2015
Messages
1,474
Well, I just restored a config file from 2 nights ago and it is running right now. I can't say I ever found out what was really causing it to hang. I did see in the logs that it never got past:
"performing sanity check on sshd"

backing up config files saves me again!
 

itw

Member
Joined
Aug 31, 2011
Messages
48
I have been waiting for something like the FreeNAS Mini XL for a long time.

I will be using this guide over the next week or so to migrate my data off my 4-drive RAIDZ2 pool onto a 6-drive RAIDZ2 pool.

Is there a technique to leverage LACP for the transfer? Maybe split it into two snapshot sync processes to two addresses on the receiving NAS?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Is there a technique to leverage LACP for the transfer? Maybe split it into two snapshot sync processes to two addresses on the receiving NAS?

Nope. That's not how LACP works. :(
 

itw

Member
Joined
Aug 31, 2011
Messages
48
I guess I don't get it:

"LACP balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port. The hash includes the Ethernet source and destination address and, if available, the VLAN tag, and the IPv4 or IPv6 source and destination address."

Seems like different destination IP addresses would yield different hashes.
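As a toy illustration of that reasoning (NOT the real lagg(4) or switch hash, which also mixes in MAC addresses, VLAN tags, and more), a simplified layer-3 transmit hash could look like this, with all IPs and the function name made up:

```shell
#!/bin/sh
# Toy model of a layer-3 transmit hash: XOR the last octets of the
# source and destination IPs, modulo the number of member ports.
hash_port() {
    src_octet=${1##*.}    # last octet of the source IP
    dst_octet=${2##*.}    # last octet of the destination IP
    nports=$3             # number of physical links in the LAGG
    echo $(( (src_octet ^ dst_octet) % nports ))
}

# One source, two destination IPs on the receiving NAS, 2-port LAGG:
hash_port 192.168.2.5 192.168.2.6 2    # -> 1 (second member port)
hash_port 192.168.2.5 192.168.2.7 2    # -> 0 (first member port)
```

Under this model the two destination addresses do land on different member ports, which is the intuition behind the idea, even though a real hash implementation is free to behave differently.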
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I guess I don't get it:

"LACP balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port. The hash includes the Ethernet source and destination address and, if available, the VLAN tag, and the IPv4 or IPv6 source and destination address."

Seems like different destination IP addresses would yield different hashes.

How would you have different IP addresses for a situation like this:

I have been waiting for something like the FreeNAS Mini XL for a long time.

I will be using this guide over the next week or so to migrate my data off my 4-drive RAIDZ2 pool onto a 6-drive RAIDZ2 pool.

Is there a technique to leverage LACP for the transfer? Maybe split it into two snapshot sync processes to two addresses on the receiving NAS?

One server to one server. If you're putting 2 IPs on the same subnet at the same time on the same computer, you've broken other networking etiquette.

So with one source server, one destination server, and only 1 IP on each of those servers, there is nothing to leverage. You get the total throughput of a single link, regardless of whether you have 2 or 1000 links in the LACP on both sides. *That* is why we continually tell people that LACP is pointless. ;)

For situations where you have one source server, one destination server, and only 1 IP for each, your only option to increase throughput is to go to 10Gb. Hence I bought 10Gb LAN for my house. :P
 

itw

Member
Joined
Aug 31, 2011
Messages
48
How would you have different IP addresses for a situation like this:
One server to one server. If you're putting 2 IPs on the same subnet at the same time on the same computer, you've broken other networking etiquette.
This is done literally every minute of every day on firewalls that have multiple IP addresses available on WAN. I don't see a problem with it. Same with web servers running virtual hosts. FreeBSD's been doing it for years (decades?)

Interfaces that have multiple subnets on them are wonky.
So with one source server, one destination server, only 1 IP on each of those servers, there is nothing to leverage. You get the total throughput of a single link, regardless of if you have 2 or 1000 links in the LACP on both sides. *that* is why we continually tell people that LACP is pointless. ;)

For situations where you have one source server, one destination server, and only 1 IP for each, your only option to increase throughput is to go to 10Gb. Hence I bought 10Gb LAN for my house. :p

If what you're saying is really a problem couldn't I temporarily route a subnet to the other NAS and use IP aliases on lo0 on the routed subnet? Destination MACs would be the same but destination IP addresses would be different.

I don't see a way to do it in the gui (on lo0) but for something temporary I'd deal with it.
 

titan_rw

Neophyte Sage
Joined
Sep 1, 2012
Messages
584
This is done literally every minute of every day on firewalls that have multiple IP addresses available on WAN. I don't see a problem with it. Same with web servers running virtual hosts. FreeBSD's been doing it for years (decades?)

Interfaces that have multiple subnets on them are wonky.


If what you're saying is really a problem couldn't I temporarily route a subnet to the other NAS and use IP aliases on lo0 on the routed subnet? Destination MACs would be the same but destination IP addresses would be different.

I don't see a way to do it in the gui (on lo0) but for something temporary I'd deal with it.

What cyberjock meant, is that you can't have multiple interfaces with ip's all on the same subnet. This is broken networking.

You can absolutely have multiple IP's on the same subnet on ONE interface. This is ip aliasing.

I'm pretty sure that's how LACP with NFS works. You have one LAGG interface (say 4 physical ports) with say 4 IP's bound to the single LAGG interface. You have 4 NFS exports available on these different IP's. Traffic to an individual NFS mount will only be the speed of a single interface, but each NFS mount can do interface speed. With enough distributed traffic, you can potentially see 4x interface speed.

This is basically the same thing as having many CIFS clients accessing one server. Instead, you could have one NFS client (esxi host) accessing 4 NFS server IPs all on one LAGG on one server.

@jgreco is our resident networking guru. Maybe I've got this wrong.
 

itw

Member
Joined
Aug 31, 2011
Messages
48
Right. It's not multiple interfaces. It's multiple IP addresses on a single interface: lagg0.

I have to wait until I get the new Mini XL to test it. I do hope I can speed it up.

Anyway, this is only a tangentially-related subject so this is probably enough of a hijack. Sorry.
 

titan_rw

Neophyte Sage
Joined
Sep 1, 2012
Messages
584
Replication is a single IP stream. And even if it were multiple streams, each stream would have to be from or to a different IP address in order to utilize a different physical link in LACP.

Being that you can't even resume an interrupted replication currently, I don't know how you would manage this.

If you need faster initial replication, I can see only two options: 1) 10GbE, or 2) temporarily have both pools imported on a single server, then do a local "zfs send | zfs receive".
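For option 2, the local copy could be sketched like this. The pool names are placeholders, and -R is the standard flag that sends the whole dataset tree along with its snapshots:

Code:
# Snapshot everything on the old pool, then replicate locally:
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool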
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,453
Nope. That's not how LACP works. :(

Incorrect. Sending to two destination addresses on the remote NAS could certainly result in this behaviour, but the switch may be using a different hashing algorithm so it may take things like the port number into account, which can result in some infuriating difficulties for an admin trying to precompute what behaviour will result.

I guess I don't get it:

Also incorrect!

"LACP balances outgoing traffic across the active ports based on hashed protocol header information and accepts incoming traffic from any active port. The hash includes the Ethernet source and destination address and, if available, the VLAN tag, and the IPv4 or IPv6 source and destination address."

Seems like different destination IP addresses would yield different hashes.

Because you do get it. This part is correct. The hash used by the switch, though, may be different, so you have a complex multivariable problem here, both "will the FreeBSD hash do the right thing" and then "will the switch hash also do the right thing."

Is there a technique to leverage LACP for the transfer? Maybe split it into two snapshot sync processes to two addresses on the receiving NAS?

But this, though, this is the problem I see. (Assuming you get the hash thing working. Which is a workable problem.)

Your idea isn't really possible without some sort of multiple snapshot scenario. If you have multiple datasets, and you snap each one, then yes, this is theoretically possible, but I hope it is obvious that this is something you'd have to be doing by hand.

Do you have existing snapshots or other ZFS-y things you're trying to preserve on the pool? Or are you just looking to move files from pool A to pool B?

If you're just looking to move files from A to B, you could also say "screw snapshots" and just do it the conventional way. Use tar and netcat. Do some legwork to determine, using du or whatever, a list of top level directories that's approximately half of all your data. Then you do:

# cd /mnt/mynewpool
# nc -d -l 5000 | tar xf - &
# nc -d -l 5001 | tar xf - &

on the receiving host, then

# cd /mnt/sadoldpool
# tar cf - Documents Pictures Downloads Music | nc 10.0.0.101 5000 &
# tar cf - Video | nc 10.0.0.102 5001 &

You can even do this a little bit at a time if you're not exactly sure about the balance, unlike the snapshot method, where once you've started you are committed. Desperate admins have been doing things like this for many years. :smile:
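The "legwork with du" step can be automated a bit. Here is a hypothetical helper (not from the post): it reads "size dir" lines, largest first, and assigns each directory to whichever of the two groups is currently lighter, so the two tar|nc streams end up carrying roughly equal amounts of data:

```shell
#!/bin/sh
# Greedy split of directories into two roughly equal-size groups.
partition() {
    sort -rn | awk '{
        if (a <= b) { a += $1; print "A", $2 }
        else        { b += $1; print "B", $2 }
    }'
}

# In practice you would feed it real sizes:  du -sk -- */ | partition
# Example with made-up sizes (KiB):
printf '100 Video\n60 Pictures\n50 Music\n30 Documents\n' | partition
# -> A Video
#    B Pictures
#    B Music
#    A Documents
```

Group A (Video + Documents, 130) and group B (Pictures + Music, 110) come out close enough that the two streams finish at roughly the same time.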
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,453
Interfaces that have multiple subnets on them are wonky.

You've misspelled "a horribly bad design that makes you want to claw your eyeballs out", because they do work, correctly even, and we usually use the word "wonky" to refer to something that doesn't work correctly.

Currently looking at a client's FreeBSD box in the other window that has no less than nine(!) inet* entries for em0. Bleh! Heh.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
14,453
What cyberjock meant, is that you can't have multiple interfaces with ip's all on the same subnet. This is broken networking.

Absolutely correct; it'll "work" but not the way you think. Further discussion: https://forums.freenas.org/index.php?threads/multiple-network-interfaces-on-a-single-subnet.20204/

You can absolutely have multiple IP's on the same subnet on ONE interface. This is ip aliasing.

Yup!

I'm pretty sure that's how LACP with NFS works.

(eyebrows raise a bit)

You have one LAGG interface (say 4 physical ports) with say 4 IP's bound to the single LAGG interface. You have 4 NFS exports available on these different IP's. Traffic to an individual NFS mount will only be the speed of a single interface, but each NFS mount can do interface speed. With enough distributed traffic, you can potentially see 4x interface speed.

That's certainly possible. It buys you the possibility for a client to fish around for a less-busy LAGG port by trying different server IP addresses, but that's only going to affect data flowing from the client to the server. You also need multiple IP addresses on the client in order to affect the port the server selects to send traffic to the client. And you have to remember that traffic with that client is still very likely to ingress on one physical interface while egressing on another.

In general, it's like trying to nail jello to a wall. You can do it with enough nails, sure, but it seems to lack the ease of use NFS was supposed to offer.

This is basically the same thing as having many CIFS clients accessing one server. Instead, you could have one NFS client (esxi host) accessing 4 NFS server IPs all on one LAGG on one server.

I guess perhaps if you had several hypervisors (as in not enough to get a nice balance just through LACP alone) you could probably double your chances of the right thing happening, but I think you're going to have to have multiple datastore entries in ESXi to make that "work" (fsvo "work") and it just sounds like a clevering-your-way-to-disaster train wreck.

@jgreco is our resident networking guru. Maybe I've got this wrong.

You've probably got that wrong. ;-)
 

-fun-

Member
Joined
Oct 27, 2015
Messages
164
Hi depasseg, I have read your description with great interest and plan on using this when I build a new pool. I have a question regarding step 1 in your description from the first post in this thread:

1. the system dataset needs to be moved off of TANK. Use the GUI to select a new location other than tank or temp-tank.

I use a mirrored USB boot device with about 13GiB free space. I assume a USB boot device is not a good place for the system dataset on a permanent basis. But would you consider it viable to move the system dataset to the boot pool temporarily until the migration has been finished?

Where exactly can I locate the system dataset to determine its size? Is this whatever gets mounted in /var/db/?

-flo-
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,867
I use a mirrored USB boot device with about 13GiB free space. I assume a USB boot device is not a good place for the system dataset on a permanent basis. But would you consider it viable to move the system dataset to the boot pool temporarily until the migration has been finished?
That's what I did, and I believe lots of folks run it on the boot pool.
Where exactly can I locate the system dataset to determine its size? Is this whatever gets mounted in /var/db/?
At the CLI, run zfs list and look for the tank/.system line.
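As a command-line sketch (assuming the system dataset lives at tank/.system, as above):

Code:
# Show the system dataset and its children with their sizes:
zfs list -r -o name,used,mountpoint tank/.system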
 

-fun-

Member
Joined
Oct 27, 2015
Messages
164
Ok, thank you! tank/.system is indeed what is mapped to /var/db, so my guess was right. This fits easily into the boot pool.
 