migrate to new zfs pool

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
Currently, I have 6*4T with zfs raidz2, and I am planning to upgrade my storage with 4*10T with zfs raidz1, is there a simply way to migrate data to new pool?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Fastest to slowest: cp/mv, zfs send/recv, rsync
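For reference, the three approaches look roughly like this (a sketch; the pool names "old" and "new" and the target dataset name are placeholders, not from the thread):

```shell
# 1. Plain copy between two mounted pools (simple, but loses snapshots/properties):
cp -a /mnt/old/. /mnt/new/

# 2. ZFS-native replication (preserves snapshots, properties, and history):
zfs snapshot -r old@migrate
zfs send -R old@migrate | zfs receive -F new/old-copy

# 3. rsync (restartable, works across any filesystems):
rsync -aHAX /mnt/old/ /mnt/new/
```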
 

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
Thanks for your short but very useful reply.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
zfs send/recv, as pointed out by SweetAndLow, would be my preferred solution. It is the most reliable, and it will retain the entire snapshot history of your pool, something cp/mv can't give you.
Even then, I believe replication is the fastest of all.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
zfs send/recv, as pointed out by SweetAndLow, would be my preferred solution. It is the most reliable, and it will retain the entire snapshot history of your pool, something cp/mv can't give you.
Even then, I believe replication is the fastest of all.
Incorrect, replication is not the fastest; mv/cp is. zfs send/recv would probably be my choice, though.

EDIT: actually, I wonder how compressed ARC and replication have changed things recently. I need to retest performance; zfs send/recv might be faster now that it can transfer compressed data.
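On ZFS versions that support it, send can forward blocks in their on-disk compressed form instead of decompressing and recompressing them, which can speed replication up considerably for compressible data. A minimal sketch (flag availability depends on your ZFS version; pool names are placeholders):

```shell
# -c: send blocks compressed exactly as stored on disk
# -L: permit large (>128K) record blocks in the stream
zfs send -c -L old@now | zfs receive -F new
```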
 
Last edited:

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
One more question: if I have many datasets in pool "old" (like old/d1, old/d2, old/d3), is there a way to migrate them all with
Code:
zfs send old@now | ssh root@IPADDRESS "zfs receive new"

so everything ends up in pool "new" and I don't need to send them one by one?
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Probably by using -R with the send command; that should transfer all child datasets. Check the documentation for zfs send.
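Worth noting: for zfs send the recursive flag is the capital -R (lowercase -r is the recursive flag for zfs snapshot), and it requires a recursive snapshot to exist first. A minimal sketch, assuming both pools are on the same machine:

```shell
zfs snapshot -r old@now                     # -r: also snapshot every child dataset
zfs send -R old@now | zfs receive -F new    # -R: replicate the whole dataset tree
```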
 

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
I tried, but the system told me the device is busy. I also had a look at the zfs docs; no clue.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Umm, "device busy"? That is an error I would not have expected. Also, you don't need to use ssh if your two pools are in the same system. What's the command you executed, and the full error message?
 

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
I will run another test tomorrow and post a screenshot. Thanks for your reply.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
RAIDZ1 is not recommended with drives larger than 2TB.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
One more question: if I have many datasets in pool "old" (like old/d1, old/d2, old/d3), is there a way to migrate them all with
Code:
zfs send old@now | ssh root@IPADDRESS "zfs receive new"

so everything ends up in pool "new" and I don't need to send them one by one?
Assuming "old@now" is a recursive snapshot of your entire pool, you can use the following command to replicate the pool up to the "@now" snapshots:
Code:
zfs send -vv -R old@now | ssh -i /data/ssh/replication root@IPADDRESS zfs receive -vv -F new


You have to be careful with the casing of the options, as it changes their meaning.
-R stands for "recursive" (a full replication stream) and will preserve dataset properties, i.e. user permissions, quotas...

Later, if you need to perform an incremental replication, you would use the following command:
Code:
zfs send -vv -I old@now old@today | ssh -i /data/ssh/replication root@IPADDRESS zfs receive -vv -F new

-I stands for "incremental"; the capital letter means all intermediate snapshots between the two named ones are included.

-vv stands for extra verbosity, so you will get feedback on the amount of data to be sent and see the various operations, and any errors, as they happen.

For this to work properly, you want your "new" pool to be empty, without a single dataset.
If you want to store the "old" pool inside a dataset, you can do that, but you will have to create the dataset manually on the "new" pool and change the previous commands to include the path on the remote, like so:

Code:
zfs send -vv -R old@now | ssh -i /data/ssh/replication root@IPADDRESS zfs receive -vv -F new/my_new_dataset

Code:
zfs send -vv -I old@now old@today | ssh -i /data/ssh/replication root@IPADDRESS zfs receive -vv -F new/my_new_dataset
 

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
I use "zfsrz2" as the pool name on both the old and new systems.
I ran the command below; it seems it never ends, and I didn't see the dataset get created.
Code:
root@freenas[~]# zfs send -vv -R zfsrz2@a | ssh -i /data/ssh/replication root@192.168.10.179 zfs receive -vv -F zfsrz2
full send of zfsrz2@a estimated size is 12.1K
full send of zfsrz2/share@a estimated size is 12.1K
full send of zfsrz2/.system@a estimated size is 13.1K
full send of zfsrz2/.system/configs-1e6cbdfb415748b98d868947a6e14a88@a estimated size is 12.1K
full send of zfsrz2/.system/rrd-1e6cbdfb415748b98d868947a6e14a88@a estimated size is 97.1M
full send of zfsrz2/.system/syslog-1e6cbdfb415748b98d868947a6e14a88@a estimated size is 123K
full send of zfsrz2/.system/webui@a estimated size is 12.1K
full send of zfsrz2/.system/cores@a estimated size is 12.1K
full send of zfsrz2/.system/samba4@a estimated size is 1.18M
full send of zfsrz2/bak@a estimated size is 12.1K
full send of zfsrz2/pdata@a estimated size is 12.1K
full send of zfsrz2/pdata/seafile@a estimated size is 12.1K
total estimated size is 98.5M
TIME        SENT   SNAPSHOT
TIME        SENT   SNAPSHOT
root@192.168.10.179's password: 09:18:21   57.4K   zfsrz2/share@a
09:18:22   57.4K   zfsrz2/share@a
09:18:23   57.4K   zfsrz2/share@a
09:18:24   57.4K   zfsrz2/share@a
09:18:25   57.4K   zfsrz2/share@a
09:18:26   57.4K   zfsrz2/share@a
09:18:27   57.4K   zfsrz2/share@a
09:18:28   57.4K   zfsrz2/share@a
09:18:29   57.4K   zfsrz2/share@a


If I remove the -vv flag, I get a device busy error:
Code:
root@freenas[~]# zfs send  -R zfsrz2@a | ssh -i /data/ssh/replication root@192.168.10.179 zfs receive  -F zfsrz2
root@192.168.10.179's password:
cannot unmount '/var/db/system': Device busy
warning: cannot send 'zfsrz2/.system@a': signal received
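The "Device busy" most likely comes from receive -F trying to unmount the active FreeNAS system dataset (mounted at /var/db/system) while replicating the .system tree. One way around it, sketched below under that assumption, is to skip the pool-level send and replicate only the data datasets (names taken from the output above):

```shell
# Send each user dataset separately so the live .system dataset is never touched:
zfs send -R zfsrz2/share@a | ssh -i /data/ssh/replication root@192.168.10.179 zfs receive -F zfsrz2/share
zfs send -R zfsrz2/bak@a   | ssh -i /data/ssh/replication root@192.168.10.179 zfs receive -F zfsrz2/bak
zfs send -R zfsrz2/pdata@a | ssh -i /data/ssh/replication root@192.168.10.179 zfs receive -F zfsrz2/pdata
```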
 

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
RAIDZ1 is not recommended with drives larger than 2TB.
I regularly back up my data outside FreeNAS, so 4*10T is fine for me; 6 disks add heat and noise.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I think I know what the problem is; I seem to recall something similar.
When it gets stuck at this early stage of the transfer, the amount of data sent should be incrementing, and it isn't.
Also, a "password" prompt is showing up, and it shouldn't.

The issue is, I believe, related to your ssh key: the remote doesn't trust the source yet. Run the following command:
ssh -i /data/ssh/replication root@192.168.10.179
I suspect you will be asked for a password, and then it will complain about the key fingerprint.
Either accept it when prompted, or you will have to edit the known_hosts file yourself.

You should get something like this:
The authenticity of host '192.168.10.179 (192.168.10.179)' can't be established.
ECDSA key fingerprint is SHA256:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
Are you sure you want to continue connecting (yes/no)?
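If the remote's fingerprint changed (e.g. after a reinstall at the same address), the stale entry can be dropped from the source's known_hosts before reconnecting. A sketch, using the IP from this thread:

```shell
# Remove any stored host key for the target, then connect and accept the new one:
ssh-keygen -R 192.168.10.179
ssh -i /data/ssh/replication root@192.168.10.179
```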
 
Last edited:

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
If I run
ssh -i /data/ssh/replication root@192.168.10.179, it asks me for the password. If I run the command below, it doesn't ask for a password, but the send/receive doesn't work (it never stops, and no data shows up in the new pool):
Code:
zfs send -vv -R zfsrz2@a | ssh -i /data/ssh/replication root@192.168.10.179 zfs receive -vv -F zfsrz2


You mentioned adding it automatically; do you mean I should copy /data/ssh/replication from the old system to the new system?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
If I run
ssh -i /data/ssh/replication root@192.168.10.179, it asks me for the password. If I run the command below, it doesn't ask for a password, but the send/receive doesn't work (it never stops, and no data shows up in the new pool):
Code:
zfs send -vv -R zfsrz2@a | ssh -i /data/ssh/replication root@192.168.10.179 zfs receive -vv -F zfsrz2


You mentioned adding it automatically; do you mean I should copy /data/ssh/replication from the old system to the new system?
Just enter the password when prompted; it will then ask about the fingerprint, and you need to answer yes to proceed.
This tells ssh to accept the remote's key and fingerprint, after which it will carry out your normal ssh operation.
When you do a fresh install of FreeNAS but the remote address stays the same, the remote will present a different fingerprint from the one the source has stored, and it will not be accepted automatically. I think this is to prevent man-in-the-middle attacks.
In that case you will have to edit the ssh known_hosts file on the source and comment out the line with the remote's IP address.
Run ssh again, accept the fingerprint, and all should come back to normal.
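Separately, a password prompt in the middle of the transfer means the key at /data/ssh/replication isn't authorized on the remote yet. A sketch of setting that up, assuming the public half of the key lives next to the private one as replication.pub (that filename is an assumption):

```shell
# On the source: append the public key to the remote root's authorized_keys
cat /data/ssh/replication.pub | ssh root@192.168.10.179 'cat >> /root/.ssh/authorized_keys'

# Verify: this should now log in without any password prompt
ssh -i /data/ssh/replication root@192.168.10.179 true
```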
 

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
In fact, it asked me about the fingerprint the first time I ran the command, but the command still never ends:
Code:
root@freenas[~]# zfs send -vv -R zfsrz2@a | ssh -i /data/ssh/replication root@192.168.10.179 zfs receive -vv -F zfsrz2
full send of zfsrz2@a estimated size is 12.1K
full send of zfsrz2/share@a estimated size is 12.1K
full send of zfsrz2/.system@a estimated size is 13.1K
full send of zfsrz2/.system/configs-1e6cbdfb415748b98d868947a6e14a88@a estimated size is 12.1K
full send of zfsrz2/.system/rrd-1e6cbdfb415748b98d868947a6e14a88@a estimated size is 97.1M
full send of zfsrz2/.system/syslog-1e6cbdfb415748b98d868947a6e14a88@a estimated size is 123K
full send of zfsrz2/.system/webui@a estimated size is 12.1K
full send of zfsrz2/.system/cores@a estimated size is 12.1K
full send of zfsrz2/.system/samba4@a estimated size is 1.18M
full send of zfsrz2/bak@a estimated size is 12.1K
full send of zfsrz2/pdata@a estimated size is 12.1K
full send of zfsrz2/pdata/seafile@a estimated size is 12.1K
total estimated size is 98.5M
The authenticity of host '192.168.10.179 (192.168.10.179)' can't be established.
ECDSA key fingerprint is SHA256:b7m0xo26eqsIAEnCbw79B/eLjInmlU1DFmg5eAamL3Q.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)?
TIME        SENT   SNAPSHOT
TIME        SENT   SNAPSHOT
09:16:50   57.4K   zfsrz2/share@a
09:16:51   57.4K   zfsrz2/share@a
09:16:52   57.4K   zfsrz2/share@a
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Just run this command on its own, without anything else:
ssh -i /data/ssh/replication root@192.168.10.179
 

lilarcor

Dabbler
Joined
May 19, 2019
Messages
24
It sshs into my new system:
Code:
root@freenas[~]# ssh -i /data/ssh/replication root@192.168.10.179
root@192.168.10.179's password:
Last login: Tue May 21 09:43:50 2019 from 192.168.10.180
FreeBSD 11.2-STABLE (FreeNAS.amd64) #0 r325575+9a3c7d8b53f(HEAD): Wed Mar 27 12:41:58 EDT 2019

FreeNAS (c) 2009-2019, The FreeNAS Development Team
All rights reserved.
FreeNAS is released under the modified BSD license.

For more information, documentation, help or support, go here:
http://freenas.org
Welcome to FreeNAS

Warning: settings changed through the CLI are not written to the
configuration database and will be reset on reboot.

root@freenas[~]#
 