Migrating to completely new hardware

Joined
Feb 8, 2018
Messages
3
I've had a FreeNAS box running for a few years now on some old consumer-grade hardware and finally upgraded to something actually server class. I have about 7TB of data on my old box that I want to move to my new box across a segregated VLAN on a 10GbE network. My storage looks like this:

Old box:
Pool_1 (raw ~11TB)
...Pool_1 (ZFS ~7.5TB)
......FreeNAS (7TB used)
......jails (7GB, just plex)

New box:
V1 (raw ~22TB)
...V1 (ZFS ~18TB)
......D1 (empty, can be deleted)

Networking looks like this:
Old box:
VLAN0001: 192.168.1.160/24 (webui, ssh)
VLAN0002: 192.168.2.160/24 (just for this transfer)

New box:
VLAN0001: 192.168.1.161/24 (webui, ssh)
VLAN0002: 192.168.2.161/24 (just for this transfer)

I get that I can take a recursive snapshot of Pool_1 and zfs send the individual snaps, but the syntax/permissions seem to be eluding me. I've been trying for a few days now and just can't get it working. The one time I did get something going, the throughput was so bad I'd have been waiting the better part of a year.

When I had a 10GbE direct connection from the old box to my desktop, I could easily read at 250-300MB/s. iperf3 between the old box and the new box on the .2.0/24 network is 3-5Gb/s, which I understand isn't 10Gb/s, but even 3Gb/s is faster than the storage on the old box can read. I've tried mounting an NFS share and using cp -a, but I get a bunch of "failed to set acl entries" errors and it's also slow. rsync maxed out around 80MB/s and was bursty. I tried zfs send with mbuffer and got nowhere, because nothing I've found online seems to actually work. So here I am. I want to make a one-time transfer, then reconfigure networking and decommission the old box.
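(For what it's worth, the iperf3 numbers came from the plain server/client pair over the transfer VLAN, something like the following; the exact flags are approximate.)

Code:
# On the new box (192.168.2.161): run the listener
iperf3 -s

# On the old box: test toward the new box over the transfer VLAN for 30 seconds
iperf3 -c 192.168.2.161 -t 30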
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
If you can mount the old volume on the new system, you will be better off.
You need to create a recursive snapshot of the volume (the top-level dataset); you might call it For_replication, or give it whatever name you like.
That should give you a top-level dataset snapshot like:

Pool_1@For_replication
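If you haven't taken it yet, it amounts to something like this on the old box (the snapshot name is just an example):

Code:
# Create a recursive snapshot of the top-level dataset and everything under it
zfs snapshot -r Pool_1@For_replication

# Confirm the snapshot now exists on every child dataset
zfs list -t snapshot -r Pool_1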

Over the network, you need to create an SSH connection between the two computers and use screen or tmux so the process isn't interrupted if the SSH session gets closed or disconnected.

Then you would run the following command from the old computer, sending to the new one. You want to have root permission:

zfs send -v -R Pool_1@For_replication | ssh -i /data/ssh/replication root@192.168.2.161 zfs receive -v -F V1

Make sure you use the proper case in the command; dataset names are case-sensitive.

If you were to do it all on one server (with both pools imported locally), I would just do the following:

zfs send -v -R Pool_1@For_replication | zfs receive -v -F V1
 
Joined
Feb 8, 2018
Messages
3
Thanks for all that. I forgot to mention... I do have local access to both machines, and do have direct console access, so I could run commands directly on the hardware to avoid complications from a closed ssh cli session.

I did try mounting the old storage to the new box with nfs, but could only see the folders in the root of the share, even though “All directories” is checked in the webui. I can successfully mount the new storage to the old box with nfs, but I could only think to use cp and that’s when I got all the acl errors. If I figure out how to mount the old storage to the new box successfully, can I then use your last command and run zfs send from the receiving server because the file system is mounted to it? Sorry, I still have a hard time wrapping my head around some of the concepts.

As for using ssh as a transport... why encrypt when it’s on a local, private network on a segregated VLAN? That’s why I was trying to use mbuffer, but I could never seem to get it to connect.
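The pattern I was trying to follow was roughly this, with the listener started on the receiving side first (the port and buffer sizes were arbitrary), but the two ends never seemed to connect:

Code:
# On the new box: listen on a TCP port and feed the stream into zfs receive
mbuffer -s 128k -m 1G -I 9090 | zfs receive -v -F V1

# On the old box: send the recursive snapshot stream to the new box's listener
zfs send -v -R Pool_1@For_replication | mbuffer -s 128k -m 1G -O 192.168.2.161:9090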
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Seems the simple answer would be to use the replication feature in the GUI. Is there a reason you aren't doing that?
 
Joined
Feb 2, 2016
Messages
574
My typical migration path...

1. Build new server.
2. Start replicating from old server to new server using the built-in snapshot and replication functions available from the web interface.
3. Replicate for a few days/weeks to prove the new configuration is robust. (Against common sense and formal recommendations, I do this instead of dedicated burn-in testing.)
4. Turn everything off.
5. Turn on new server, change IP on new server to match old server, leave old server off.
6. Celebrate.
7. Hold on to the old server until you're confident the new server is fully functional with all necessary configuration (you created all the user accounts at step three, right?) and data.
8. Wipe the old server and turn it into a backup replication host for the new server.

is 3-5Gb/s, which I understand isn't 10Gb/s,

Sounds about right. We top out around 6G on our 10G cards. I've read a lot about tuning but am not knowledgeable enough to know if I'd make it better or worse by flipping switches. So, since the 6G we're getting with the untweaked stock configuration is far better than the 4x1G we used to have, we let it ride.

Cheers,
Matt
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Thanks for all that. I forgot to mention... I do have local access to both machines, and do have direct console access, so I could run commands directly on the hardware to avoid complications from a closed ssh cli session.

I did try mounting the old storage to the new box with nfs, but could only see the folders in the root of the share, even though “All directories” is checked in the webui. I can successfully mount the new storage to the old box with nfs, but I could only think to use cp and that’s when I got all the acl errors. If I figure out how to mount the old storage to the new box successfully, can I then use your last command and run zfs send from the receiving server because the file system is mounted to it? Sorry, I still have a hard time wrapping my head around some of the concepts.

As for using ssh as a transport... why encrypt when it’s on a local, private network on a segregated VLAN? That’s why I was trying to use mbuffer, but I could never seem to get it to connect.
What I meant by mounting the volume on your new server is to physically install the drives in the new server, import the pool/volume, and run a local replication.
It can be hit or miss, and for people without experience it can lead to data loss. Even for the more experienced, it is not without risk.
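Roughly, that path would look like this (just a sketch; the GUI's volume import can stand in for the manual zpool import):

Code:
# On the old box, before pulling the drives: cleanly export the pool
zpool export Pool_1

# On the new box, after installing the old drives: import the old pool
zpool import Pool_1

# Then replicate locally into the new pool
zfs snapshot -r Pool_1@For_replication
zfs send -v -R Pool_1@For_replication | zfs receive -v -F V1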

Your best bet is to initiate SSH from the old server to the new server. Once you have the SSH connection working, you will have done the most difficult part.

I should also add the following point:

When I do replication, exactly as described above, I am actually doing it from my Windows desktop.
I use Bitvise, but you can use PuTTY. You create an SSH connection to the old server and proceed as follows:
sudo screen
Then:
zfs send -v -R Pool_1@For_replication | ssh -i /data/ssh/replication root@192.168.2.161 zfs receive -v -F V1
At that point, you can close the SSH connection on the Windows side; the transfer keeps running inside screen.

Of course, as mentioned by Matt, you need to configure your desktop (or laptop) to SSH into the old server, and also set up the accounts on the old and new servers and put the proper SSH keys in place.
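As a rough sketch of the server-to-server key setup (the key path and type here are only examples; use whatever path you pass to -i in the send command):

Code:
# On the old box: generate a dedicated key pair (path and type are examples)
ssh-keygen -t rsa -b 2048 -N "" -f /root/migration_key

# Print the public half; paste it into the new box's root user
# "SSH Public Key" field (Account => Users => root) in the web UI
cat /root/migration_key.pub

# Test the connection before kicking off the transfer
ssh -i /root/migration_key root@192.168.2.161 uname -a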

I have a step-through to help with the SSH configuration for Bitvise, which I use to connect from my PC to the server:
Code:
Creating an SSH key in order to access FreeNAS
remotely with Bitvise:
- Open Bitvise and click on "Client key manager"
- Select "Generate New": If profile exists, it will add it to the list of profiles.
- Create "Passphrase" and add relevant comment:
- Press "Generate".
- To set up FreeNAS with the key:
• Export key as OpenSSH and save it on the Client PC.
• Open the saved key with a text editor, copy its contents, and paste it into the FreeNAS user's "SSH Public
key" field, under FreeNAS => Account => Users => name of user.
- Make sure SSH service is enabled.
- Back in Bitvise on the client machine, set up the corresponding fields:
1. Server => Host: IP address of the remote machine (FreeNAS: 192.168.1.XXX)
2. Authentication =>
• Username: The name of the User for which the SSH key was assigned.
• Initial Method: Publickey
• Client Key: Select from the list of profiles during SSH key generation.
• Passphrase: Enter the Passphrase used during creation of the key.
3. Press Login: CLI should appear
 
Joined
Feb 8, 2018
Messages
3
Seems the simple answer would be to use the replication feature in the GUI. Is there a reason you aren't doing that?

So, after my old box's motherboard decided to stop working, this endeavor went on hiatus. Now it's back up and I'm using this strategy. My transfer is topping out at about 400Mb/s though, so a 7.5TB snap is taking about 50 hrs to replicate. Why hasn't mbuffer been incorporated into the GUI replication task? It seems that many people have had very good luck with it drastically improving their replication throughput.
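(Doing the math: ~400Mb/s is roughly 50MB/s sustained, so the ballpark checks out once overhead is factored in.)

Code:
# ~7.5 TB at roughly 400 Mb/s (about 50 MB/s) sustained
echo "7.5*10^12 / (400*10^6/8) / 3600" | bc -l    # ≈ 41.7 hours, before overhead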
 