Howto: migrate data from one pool to a bigger pool

itw

Dabbler
Joined
Aug 31, 2011
Messages
48
Because you do get it. This part is correct. The hash used by the switch, though, may be different, so you have a complex multivariable problem here, both "will the FreeBSD hash do the right thing" and then "will the switch hash also do the right thing."

But this, though, this is the problem I see. (Assuming you get the hash thing working. Which is a workable problem.)

Your idea isn't really possible without some sort of multiple snapshot scenario. If you have multiple datasets, and you snap each one, then yes, this is theoretically possible, but I hope it is obvious that this is something you'd have to be doing by hand.

I realize the switch might do something different. I have a Cisco SG300 and a Brocade ICX6450 at my disposal. Haven't really looked into that since I don't have the destination NAS yet, but...
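
For reference, the FreeBSD side of the hash can be tuned per lagg interface. A quick sketch, not from this thread, assuming the lagg is named lagg0:

Code:
# Choose which header fields feed the outbound LACP transmit hash;
# l3,l4 spreads flows by IP address and TCP/UDP port, so multiple
# streams to different IP/port pairs can land on different member NICs
ifconfig lagg0 lagghash l3,l4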

Finally got word my FreeNAS Mini XL shipped. I should have it tomorrow.

Yes, I think I can get the bulk of my pool into two or three large snapshots. I was going to send them all over and wanted to leverage LACP to trim the number of hours or days required to ship my data across.

I'll try two or three different snapshot sync destination IP addresses and see what happens.

Do you have existing snapshots or other ZFS-y things you're trying to preserve on the pool? Or are you just looking to move files from pool A to pool B?

More of an exercise.
 

itw

Dabbler
Joined
Aug 31, 2011
Messages
48
It does work, but my sending system can't send more than about a gigabit. I have four zfs sends running, two to each IP address; both NICs in the LACP lagg are sending, but not doing much more than a gigabit in total. The sending system's disks look like they're running a scrub. :) It is, however, faster than just one send.
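
For anyone trying the same thing, a rough sketch of what "four zfs sends, two to each IP address" might look like; the dataset names, snapshot name, and destination addresses are all placeholders:

Code:
# Four parallel sends, two per destination IP, so the lagg hash has
# several flows to spread across its member NICs
zfs send tank/ds1@migrate | ssh 10.0.0.10 "zfs receive temp-tank/ds1" &
zfs send tank/ds2@migrate | ssh 10.0.0.10 "zfs receive temp-tank/ds2" &
zfs send tank/ds3@migrate | ssh 10.0.0.11 "zfs receive temp-tank/ds3" &
zfs send tank/ds4@migrate | ssh 10.0.0.11 "zfs receive temp-tank/ds4" &
wait   # block until all four transfers finish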
 

BBarker

Contributor
Joined
Aug 7, 2015
Messages
120
I am upgrading my system this week and would like some input on the best way to transfer the data on my current system (specs in my signature). I am running FreeNAS-9.10-STABLE-201605021851 (35c85f7) and everything is working well.

I will be changing enough of the system that, if I want, I can bench-build the new system with the new drives and do a completely fresh install of FreeNAS 9.10, with two 6TB drives in a mirrored pool and an SSD for my jails and .system files. I saw an online tutorial about setting up FreeNAS 9.10 this way; if anyone has the link handy, I'd appreciate it if you could post it here, as I'm having some trouble locating it again.

Once it is up and running, is it possible to install the current pool of two mirrored 3TB drives from my current system in the new system and then transfer just the data? Also, once that is done, I'd like to wipe the old drives and create a new pool with them for additional storage. My original plan was to swap the old drives for the new ones one at a time and then replace the motherboard with the new components when the drive swap was complete, but the idea of a fresh install seems appealing and maybe less problematic.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
You might get better answers by starting separate threads.
 

SebbaG

Dabbler
Joined
Oct 12, 2014
Messages
25
Hi depasseg,

Does your method also work with an encrypted zpool?

Thanks,
SebbaG
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I don't have one, but I can't think of a reason it would work any differently.
 

lismail

Cadet
Joined
Dec 9, 2016
Messages
2
Great write-up,

I was wondering if this is still applicable to FreeNAS 11.1, or if there are any changes or recommendations for performing such a move on 11.1?

Thanks,
Lawrence
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
The same procedure works on 11.1-U4 (I just did it).
 

Antichristal

Cadet
Joined
May 25, 2018
Messages
1
Hello, I am very new here; I've barely grasped the FreeNAS system, but have successfully set up Plex, Sonarr, CouchPotato, PlexPy, Nextcloud, and OpenVPN. I would like to ask a question. I have a 3TB server at the moment (a 1TB drive and a 2TB drive in RAID0, I know - bad) and have only 900GB used. I would like to back up my jails (and plugins) and my data to a 1TB USB drive, and later restore the backup. I have a Windows computer on the LAN if need be. The thing I don't understand in the OP's post is that he seems to be creating the temp-tank on another volume in the same computer, which I cannot do, as I am upgrading to 4x3TB (RAID5) using all the SATA ports on my motherboard. I'd really appreciate and be grateful for some help. Thank you.
 

Jayos

Dabbler
Joined
Feb 27, 2013
Messages
11
Very useful thread, thanks @depasseg.
Set up my FreeNAS box an age ago with 3x2TB HDDs, the system booting from an 8GB USB stick. About to purchase 5x4TB HDDs, go RAID-Z2, and follow your steps.
Before I take the plunge: you mention the system dataset; is there any advantage to not having it on the USB drive?
 

alexr

Explorer
Joined
Apr 14, 2016
Messages
59
I ran through this procedure to migrate my pool onto different drives.

A very important missing step: the FreeNAS GUI and middleware layer will not be able to import the old-tank pool to let you detach/wipe it. This is because the middleware has a database entry pairing the name "tank" with the GUID of the old-tank pool, so it refuses to show the old-tank pool, thinking it is already imported.

The ugly and dangerous fix for this is:

Code:
$ zpool get guid tank
$ zpool get guid old-tank

$ sudo sqlite3 /data/freenas-v1.db
SQLite version 3.23.0 2018-04-02 11:04:16
Enter ".help" for usage hints.
sqlite> .mode column
sqlite> .headers on
sqlite> SELECT * FROM storage_volume;

Determine from the output which row id belongs to your 'tank' pool and what its new GUID should be, substituting both below:

Code:
sqlite> UPDATE storage_volume SET vol_guid = 12345678 WHERE id = 1;
sqlite> .quit
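
As a sanity check before .quit, re-running the SELECT should show vol_guid now matching the new pool's GUID:

Code:
sqlite> SELECT * FROM storage_volume;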
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
Had a problem with step three.
3. Use the GUI to create a snapshot of the dataset you want to move. If you want to move everything, select the root dataset. For flexibility in the future, I'd suggest checking the "recursive" option. Also, minimize use of tank: pick a time when nothing is changing, ensure you have a snapshot, and then wait for replication to finish. How long this takes depends on how much storage you have and the speed of your machine; it took ~36 hours to move 20TB locally for me. [Alternatively, you can use the CLI to create the snapshot and then replicate manually: "zfs snapshot -r tank@migrate" and then "zfs send -R tank@migrate | zfs receive temp-tank".]
Code:
zfs send -R ZFSRZ@migrate | zfs receive ZFSRZ2
zfs send -R ZFSRZ@migrate | zfs receive -F ZFSRZ2

It would not allow me to do the 1st command; it forced me to do the 2nd command. Should this be fine?
It seems to be copying everything over to the new pool. Is it OK to close PuTTY in this state?
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
Code:
root@freenas:~ # zfs snapshot -r ZFSRZ@migrate1
root@freenas:~ # zfs send -R ZFSRZ@migrate1 | zfs receive ZFSRZ2
cannot receive new filesystem stream: destination 'ZFSRZ2' exists
must specify -F to overwrite it
warning: cannot send 'ZFSRZ@ZFSRZ2move': signal received
warning: cannot send 'ZFSRZ@zfsrzdata19': Broken pipe
warning: cannot send 'ZFSRZ@migrate': Broken pipe
warning: cannot send 'ZFSRZ@auto-20190211.0900-2m': Broken pipe
warning: cannot send 'ZFSRZ@migrate1': Broken pipe

What am I doing wrong here?
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
Well, this guide did work for moving the files over. I ended up using slightly different commands.
Code:
 zfs send -Rv Pool1@SnapshotName | pv | zfs receive -Fdu Pool2

This worked for me. If the command did not have -Rv or -Fdu, it would not work; it would fail without an error.
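
For anyone wondering what those flags do (standard zfs(8) behavior, annotated here for reference):

Code:
#  -R  send a replication stream: the dataset plus all of its
#      descendants, snapshots, and properties
#  -v  print verbose information about the stream while sending
#  pv  pipe viewer; shows throughput between the send and receive
#  -F  force a rollback/overwrite of the destination if it already exists
#  -d  discard the source pool name, grafting the remaining dataset
#      path under Pool2
#  -u  do not mount the received filesystems
zfs send -Rv Pool1@SnapshotName | pv | zfs receive -Fdu Pool2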
 

CPLeyden1282

Cadet
Joined
Dec 28, 2018
Messages
4
@Snow, if you ever have to do this again on 11.2, setting up a replication task is very straightforward; I highly recommend it.

For anyone else using these instructions to move your stuff into a larger pool, there is an important extra step if you have existing iocage jails. When I brought my pool back online with the same name and tried to start the jails, they were unable to see the default /mnt/iocage path. The fix was simply to set the mountpoint for /mnt/iocage with the following command:

Code:
 zfs set mountpoint="/iocage" tank/iocage
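
A quick way to confirm the change took effect (not from the original post):

Code:
zfs get mountpoint tank/iocage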


Thank you @depasseg for this guide, it was a huge help!
 

Snow

Patron
Joined
Aug 1, 2014
Messages
309
Thanks! No, I pretty much got it to load, but it kept wanting to mount the old pool. How I fixed this was to detach all pools, then remount my new main pool and save a config. After I got all my jails up, I loaded the same saved config into FreeNAS and rebooted. This fixed my problem of the jails and the temp-old-pool loading at boot. I did try to get replication to work, but it kept giving me remote host mismatched key errors; I never did figure out why. Also, ZFS send and receive would stop working if I closed PuTTY, so I added | pv | to the command to see where I was at. I had to transfer around 30TB of stuff, which took around 3 1/2 days. Yes, thank you for the guide. Not to pick, but it would be nice to have a side note/guide on how to set up a local-to-local replication task; every guide I found was for remote-to-remote systems or was very outdated.
 

CPLeyden1282

Cadet
Joined
Dec 28, 2018
Messages
4
Thanks! No, I pretty much got it to load, but it kept wanting to mount the old pool. How I fixed this was to detach all pools, then remount my new main pool and save a config. After I got all my jails up, I loaded the same saved config into FreeNAS and rebooted. This fixed my problem of the jails and the temp-old-pool loading at boot. I did try to get replication to work, but it kept giving me remote host mismatched key errors; I never did figure out why.

You actually set up local replication the same exact way you would set up remote replication; the "remote host" in this case is simply the name or IP address of your local host. Here is a screenshot of how I'm configured. I've left out my SSH key for obvious reasons, but when you get to that field, you simply click the "scan SSH key" button (assuming you are running 11.1 or 11.2; this button may exist in previous versions, but I built my system just before 11.1).

[Screenshot: local-to-local replication task configuration]


Also, ZFS send and receive would stop working if I closed PuTTY, so I added | pv | to the command to see where I was at. I had to transfer around 30TB of stuff, which took around 3 1/2 days. Yes, thank you for the guide. Not to pick, but it would be nice to have a side note/guide on how to set up a local-to-local replication task; every guide I found was for remote-to-remote systems or was very outdated.
Are you running a tmux session before starting the zfs send/receive? PuTTY will only keep your SSH session alive as long as you leave it open, unless you use an app like tmux or screen. Next time you open PuTTY, type tmux. This will create a tmux socket on your FreeNAS server, and as long as the server stays up and running, anything going on inside the tmux session will keep chugging along. This is especially useful if you're doing anything that's going to take several hours or days. If you need to reboot your laptop, or if you lose your network connection and PuTTY closes your session, simply SSH back into your FreeNAS server and type tmux attach to reattach to your most recent tmux session. That should get you started, but tmux is extremely powerful and allows you to do much more than preserve your sessions. Have a look here when you get some time.
https://www.hamvocke.com/blog/a-quick-and-easy-guide-to-tmux/
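
A minimal example of that workflow (the session name "migrate" is arbitrary):

Code:
tmux new -s migrate             # start a named session
zfs send -Rv tank@migrate | pv | zfs receive -Fdu temp-tank
# detach with Ctrl-b then d; the transfer keeps running server-side
tmux attach -t migrate          # reattach later from any new SSH session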
 

Lighthouse

Dabbler
Joined
Nov 15, 2018
Messages
15
Hello folks. I am in the process of upgrading my storage very soon, and I found this very helpful guide along with some additional information provided by other people in this thread. I was slightly confused combining depasseg's, Snow's, and CPLeyden1282's posts, so here is the complete procedure as I understand it (a consolidated command sketch follows the list).

Edit (3-13-2019): With this procedure I successfully migrated my pool into a bigger pool.


<old pool> = current pool's name
<new pool> = any temporary name for the added pool.


1) Move off the system dataset.

2) Create a system config backup. May not be needed, but just in case.

3) Create a snapshot of the dataset I'd like to move, which will be <name of snapshot>. If creating the snapshot from the command line, make sure to use "zfs snapshot -r" so recursive snapshots of all child datasets are created.

4) Replicate the dataset from the old pool to the new one using CPLeyden1282's instructions, via the GUI on 11.2. If the GUI turns out to be bugged and not working, use the command "zfs send -R <old pool>@<name of snapshot> | zfs receive -F <new pool>".

***If you use a remote SSH connection, the replication will stop when you close it, so run it inside tmux. Snow's variant with a progress display: "zfs send -Rv <old pool>@<name of snapshot> | pv | zfs receive -Fdu <new pool>".***

5) Detach both the old and new volumes. When detaching, do not select "destroy", and choose to save share-related configs.

6) Import and swap the names of the pools using the CLI, then export them:
"zpool import <old pool> <new pool>" and "zpool import <new pool> <old pool>"
Then
"zpool export <old pool>" and "zpool export <new pool>"

7) Use the GUI to import the renamed pools so FreeNAS can track them.

8) If the shares stop working, restore the system config backup made in step 2.

9) Wipe the old volume if desired.
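
A consolidated sketch of steps 3, 4, and 6 at the CLI, using tank as a placeholder for <old pool> and temp-tank for <new pool>. Treat it as a summary of the posts above, not a tested script:

Code:
# Step 3: recursive snapshot of everything on the old pool
zfs snapshot -r tank@migrate

# Step 4: replicate to the new pool; run inside tmux so a dropped
# SSH session does not kill the transfer
zfs send -Rv tank@migrate | pv | zfs receive -Fdu temp-tank

# Step 5 happens in the GUI: detach both pools without destroying.

# Step 6: swap the names, then export so the GUI can re-import them
zpool import tank temp-tank   # old pool takes the temporary name
zpool import temp-tank tank   # new pool takes over the old name
zpool export temp-tank        # old pool, now under the temporary name
zpool export tank             # new pool, now carrying the old name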
 