zfs send/recv from FreeNAS 9.10.2-U6 to TrueNAS-13.0-U1.1 - is that a safe operation?

phier

Patron
Joined
Dec 4, 2012
Messages
400
Hello,
I would like to migrate my data from my old NAS running FreeNAS 9.10.2-U6 to TrueNAS-13.0-U1.1. Is it safe to create a recursive snapshot of the FreeNAS dataset and send it over SSH to the TrueNAS one?

Code:
# zfs set readonly=on old-pool/root-dataset
# zfs snapshot -r old-pool/root-dataset@22082022
# zfs send -R old-pool/root-dataset@22082022 | pv | ssh root@truenas-ip "zfs receive -Fv truenas-pool/freenas-dataset_init"


I'm not sure about this step - is it required/safe?
Promote truenas-pool/freenas-dataset_init's new snapshot to become the dataset's data: zfs rollback truenas-pool/freenas-dataset_init@22082022


I just want to be 100% sure that I won't damage data on the target pool; i.e., if I type an existing dataset name on the target, will the job fail, or could the data on the target pool be replaced? Do I need to create a new dataset on the target (TrueNAS) pool, or will zfs send/recv create it?
as per documentation:
"The target dataset on the receiving system is automatically created in read-only mode to protect the data. To mount or browse the data on the receiving system, create a clone of the snapshot and use the clone. Clones are created in read/write mode, making it possible to browse or mount them. See Snapshots for more information on creating clones."

So before executing the send/recv, does the dataset on the target system have to exist? And once the send/recv job is done, do I have to set the target dataset to read/write and clone the snapshot? Confusing ;/
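As far as I understand, zfs receive creates the target dataset itself, and a dry run can confirm there is no name collision before any data moves. A sketch using the pool/dataset names from the commands above:

```shell
# Dry run: -n makes zfs receive validate the stream and report what it
# would create, without writing anything; -v prints the details.
# Pool/dataset names are the ones from the earlier command.
zfs send -R old-pool/root-dataset@22082022 | \
  ssh root@truenas-ip "zfs receive -n -v truenas-pool/freenas-dataset_init"
```

If the dry run reports a conflict with an existing dataset, nothing has been touched yet.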


Is it good practice to create a backup snapshot of all datasets on TrueNAS in my case (see attached screenshot), or maybe a snapshot of all datasets at the pool level (if that's possible)?

Do I have to snapshot each dataset separately?
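Pool-level snapshots are possible via a recursive snapshot of the pool's root dataset, which covers every child dataset in one command. A sketch, where truenas-pool is a stand-in for the real pool name:

```shell
# One recursive snapshot at the top of the pool snapshots every child dataset.
zfs snapshot -r truenas-pool@pre-migration-22082022

# Verify what was created:
zfs list -t snapshot -r truenas-pool
```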


1660861158449.png




Thank you!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
It's probably not safe. The Release Notes for 12.0-RELEASE already indicated 9.10 systems could no longer replicate with 12.0 systems. Likewise the Release Notes for 13.0-RELEASE. You may need to first upgrade to 11.3-U5, then 12.0-U8.1, upgrading your ZFS pool features along the way to be able to do what you propose.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
@Samuel Tai :( I see, thank you.

The question would be what's safer:
1)
Upgrade FreeNAS to a higher version and do zfs send/recv. At the moment I also use GELI encryption on FreeNAS, and I think that won't work after upgrading to a higher version.
2)
Or just transfer the data from the old FreeNAS to the new TrueNAS via rsync.


thanks!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
I would go with option 2 as the least risky means to transfer the data.
 

Bonnie Follweiler

QA Technician
iXsystems
Joined
May 23, 2016
Messages
35
1) Please read https://www.truenas.com/docs/core/coretutorials/storage/pools/storageencryption/ for information about GELI pool migration.

2) Please read https://www.truenas.com/docs/core/coretutorials/updatingtruenas/updatingsoftwareforamajorversion/ which, in part, states "The upgrade path for major versions of FreeNAS/TrueNAS is 9.3 > 9.10 > 11.1 > 11.3 > 12.0. We always recommend upgrading to a supported version of the software."
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
@Bonnie Follweiler I just want to get rid of FreeNAS, so that's why I think an upgrade makes no sense and maybe the safer way is using rsync?
The question would be what parameters to pass to rsync to get a 100% identical copy of the files, hard/symlinks, etc.

rsync -avPpzxH --exclude="/.zfs/" --checksum Source Target

any idea? thanks!
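For reference, a breakdown of what those flags do (note that -a already implies -p, and -z only pays off on slow links, so a leaner form might look like this; the paths are stand-ins):

```shell
# -a  archive mode: recursive; preserves permissions, owners, times,
#     symlinks, devices (implies -p, so a separate -p is redundant)
# -v  verbose    -P  show progress and keep partially transferred files
# -x  don't cross filesystem boundaries (stays out of nested datasets)
# -H  preserve hard links    -c  compare files by checksum, not size+mtime
# --exclude=".zfs/"  skip the hidden snapshot directory
rsync -avPxHc --exclude=".zfs/" /mnt/source-pool/dataset/ root@target-ip:/mnt/target-pool/dataset/
```

The trailing slash on the source matters: source/ copies the directory's contents, while source would copy the directory itself into the target.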
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
I'm not sure... but I think this command should be 100% safe?

rsync -avPxHc Source Target
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
@phier, the .zfs directory is hidden by ZFS and contains the dataset snapshots. You should continue excluding it if you don't want to carry over the snapshots. The path doesn't show up in directory listings, but it exists and you can cd into it.

Also, before you run the rsync, you may want to set a SYSCTL tunable for vfs.zfs.arc_min to 25% of your RAM on both the sender and the receiver, so the rsync doesn't stall from ARC metadata thrashing. See https://www.truenas.com/community/threads/how-to-tweak-zfs-parameters-zfs_arc_min.102801/.
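The 25%-of-RAM figure can be worked out like this (a sketch; the 32 GiB value is a hypothetical example - on the live box you would read hw.physmem instead of hard-coding it):

```shell
# Hypothetical machine with 32 GiB of RAM; on a real system:
#   RAM_BYTES=$(sysctl -n hw.physmem)
RAM_BYTES=$((32 * 1024 * 1024 * 1024))
ARC_MIN=$((RAM_BYTES / 4))          # 25% of RAM
echo "vfs.zfs.arc_min=${ARC_MIN}"   # value to set as the tunable
```

On TrueNAS the tunable is best added under System > Tunables rather than with a one-off sysctl call, so it survives reboots.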

Also, you can set this up via the UI, instead of trying to CLI it.

 

phier

Patron
Joined
Dec 4, 2012
Messages
400
@Samuel Tai
Is that vfs.zfs.arc_min tunable just to prevent rsync from stalling, or does it also affect performance?

That's what's currently happening on my TrueNAS (receiver) machine:

1660996778041.png

1660996519605.png


Is there any difference between the UI task and running rsync -avPxHc?

thanks
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
Hello, so I copied the data from the first 8 TB FreeNAS drive to TrueNAS.

I was using: rsync -avPxHc source 10.0.x:/target

I'm doing some double checks / sanity checks, and here's what I found out...

The number of transferred files is the same on source and target; the issue is the size:

Size on source (freenas):
Code:
du -s storage4/
6549846896 storage4/

Size on target (truenas):
Code:
du -s storage4
6571370730 storage4
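For scale, the delta between those two figures works out as follows (assuming du's FreeBSD default unit of 512-byte blocks, i.e. BLOCKSIZE is not overridden):

```shell
# The two figures reported by du above, in 512-byte blocks:
SRC_BLOCKS=6549846896
DST_BLOCKS=6571370730
DELTA_BYTES=$(( (DST_BLOCKS - SRC_BLOCKS) * 512 ))
echo "delta: $(( DELTA_BYTES / 1024 / 1024 / 1024 )) GiB"
```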

Any idea what is causing that difference?

thanks!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Any idea what is causing that difference?

What's the ZFS record size on the source and destination datasets? Please report zfs get recordsize <root dataset> on both the source and destination pools.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
That's strange; it seems to be the same?

Code:
[root@freenas] /mnt# zfs get recordsize storage4
NAME      PROPERTY    VALUE  SOURCE
storage4  recordsize  128K   default

Code:
root@truenas[/mnt/universalsoldier/freenas]# zfs get recordsize universalsoldier/freenas
NAME                      PROPERTY    VALUE  SOURCE
universalsoldier/freenas  recordsize  128K   default
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Is your old pool using 512n (native) or 512e (emulated) disks? Is your new pool using 4K-native disks? This has got to be due to some sort of allocation difference.
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
Well, I can't remember how to find that out; it was somewhere in some pool settings ;/
Could you advise here? Thanks.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
smartctl -i /dev/ada0 will show the sector size for drive ada0.
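To check every disk at once rather than one at a time, a small loop over the device nodes works (ada* covers SATA disks; adjust the glob for da* or nvd* devices):

```shell
# Print the sector-size line from smartctl for each SATA disk.
for disk in /dev/ada?; do
  echo "== ${disk} =="
  smartctl -i "${disk}" | grep "Sector Size"
done
```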
 

phier

Patron
Joined
Dec 4, 2012
Messages
400
In that case I don't get it :(
FreeNAS drive: Sector Sizes: 512 bytes logical, 4096 bytes physical

TrueNAS drive: Sector Sizes: 512 bytes logical, 4096 bytes physical

Everything looks identical :(
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
OK, so that's a red herring. What are the compression settings on both pools?
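Those can be read with one command per side, using the dataset names from the earlier posts; compressratio shows the ratio actually achieved, not just the configured algorithm:

```shell
# On the FreeNAS box:
zfs get compression,compressratio storage4

# On the TrueNAS box:
zfs get compression,compressratio universalsoldier/freenas
```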
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
The only thing I can think of at this point for the roughly 10-20 GB difference is pool features and how they affect metadata storage. Anyway, do the source and destination files hash to the same value? If so, I wouldn't worry about it.
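A sketch of that hash comparison, run once on each machine (sha256 is the FreeBSD tool; the paths are the ones from earlier in the thread, and the manifest from one box gets copied to the other before diffing):

```shell
# On FreeNAS: build a sorted checksum manifest of every file.
# (-r puts the hash first, sha256sum-style; sort by the file path)
cd /mnt/storage4 && find . -type f -exec sha256 -r {} + | sort -k 2 > /tmp/src.sha256

# On TrueNAS: run the same in the target dataset, writing /tmp/dst.sha256;
# then copy one manifest over and compare:
diff /tmp/src.sha256 /tmp/dst.sha256 && echo "all files match"
```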
 