SOLVED How to "clone" old volume to new one (so I can eliminate old one)


Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
My FreeNAS box has had a 4x 4TB RAID-Z2 volume since I built it, and I finally upgraded it with a new 5x 10TB RAID-Z2 volume. In addition to your typical storage-type files (photos, videos, music, etc.), the old volume has a bhyve virtual machine and my user's home directory with scripts and other things.

The drives in the old volume are a motley assortment of what I had lying around, and I want to retire them now. So my question is, how do I best "clone" the contents of the old volume to the new one? I want to preserve file permissions, I want scripts in my user directory to keep running after the "cloning", and of course I want my VM to keep working. Also, note that both volumes are currently in the box and running side-by-side.

I come from the Linux world, and I'm pretty sure there's a BSD-style solution here that's eluding me. Thanks in advance!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
so @danb35, I think the issue here is that home boy has taken the old drives out of the system, in which case he's not going to be able to replicate so easily.

Stranded Camel, sir, the easiest way is for you to have both the old and new pools mounted in a single FreeNAS box, and then use replication, as dan says.

Another option is to have two different FreeNAS boxes, one with the old drives and one with the new, and use remote replication, but that's slightly more complex.

If you cannot mount the old drives at the same time you have the new drives mounted, then there will be considerable need to think creatively.
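
Before you kick off the replication, it's worth confirming that both pools really are imported and healthy side by side. Something like this is all it takes (a minimal sketch; tank and dozer here stand in for whatever your old and new pools are called):

Code:
# zpool list                 # both pools should appear here
# zpool status tank dozer    # both should report ONLINE with no errors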
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79

Thanks so much -- that seems like it's just what I needed! A couple questions:

1. Where is digger coming from in the code (see below)? When the whole process is done, can I safely delete it?

2. Do I have to do anything in order to get FreeNAS to look for things like cron scripts (which are currently located on my old tank volume) in my new dozer volume? Something involving changing the System Dataset perhaps?

Thanks again for your help!

Code:
# zfs snapshot -r tank@01
# zfs send -R tank@01 | zfs receive -Fdvu dozer

# zfs snapshot -r tank@02
# zfs send -R -i tank@01 tank@02 | zfs receive -dvu dozer

# zfs set readonly=on tank
# zpool export tank
# zpool export dozer
# zpool import tank digger
# zpool import dozer tank
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
use remote replication, but that's slightly more complex.
OTOH, you can do that entirely through the GUI--so in some sense, it's simpler.
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
so @danb35, I think the issue here is that home boy has taken the old drives out of the system, in which case he's not going to be able to replicate so easily.

Stranded Camel, sir, the easiest way is for you to have both the old and new pools mounted in a single FreeNAS box, and then use replication, as dan says.

I still have all the drives in the system, fortunately! I'm currently running danb35's solution, which will take some time to finish, of course.

Thanks for your response!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Where is digger coming from in the code (see below)?
It's simply a randomly-chosen third pool name.
Do I have to do anything in order to get FreeNAS to look for things like cron scripts (which are currently located on my old tank volume) in my new dozer volume? Something involving changing the System Dataset perhaps?
You might need to adjust the location of the .system dataset (and you'd certainly want to check it), but nothing else should need to be changed--any other paths should remain the same. The last step renames the new pool to the same name as the old pool (which is why it first renamed the old pool to digger), which should mean your paths won't be changed.
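
If you want to sanity-check the result after the final two imports, something along these lines should do it (just a sketch, assuming the pools end up named tank and digger as in your code above):

Code:
# zpool list                                    # the new pool should now show up as tank, the old one as digger
# zfs list -r tank -o name,mountpoint | head    # datasets should be mounted under /mnt/tank, same paths as before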
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
You might need to adjust the location of the .system dataset (and you'd certainly want to check it), but nothing else should need to be changed--any other paths should remain the same.

Okay -- looks like this is going to actually be easy with your advice! One more question... How do I check and adjust the .system dataset's location? In other words, where should it be, where should I look for it if it isn't there, and how do I move it if that turns out to be necessary?

Thanks!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
How do I check and adjust the .system dataset's location?
Once the process is done, just check it in the web GUI (System -> System Dataset) and make sure it's set to your pool.
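
If you want to double-check from the shell as well, FreeNAS keeps the system dataset in a .system dataset on whichever pool it's pointed at, so something like this should list it (a sketch; it assumes the system dataset ends up on tank, and the exact child datasets vary by version):

Code:
# zfs list -r tank/.system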
 

Stranded Camel

Explorer
Joined
May 25, 2017
Messages
79
I ran into a few hitches, but in the end everything worked out. I had to force the export of what was my system volume (zpool export -f <VOLUME>) because FreeNAS was constantly writing a log file to it. I also had to do a few reboots, export some volumes again, and then re-import the new volume. But now I've got everything as it should be. Thanks again!
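
For anyone following along later, the cleanup at the end looked roughly like this (reconstructed from memory, so treat it as a sketch rather than an exact transcript; tank is the old pool, dozer the new one, digger the parking name from the code above):

Code:
# zpool export -f tank       # had to force it; FreeNAS kept writing a log file to the old pool
# zpool export dozer
# zpool import tank digger   # old pool parked under a third name
# zpool import dozer tank    # new pool re-imported under the old pool's name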
 

iguaan

Cadet
Joined
Oct 9, 2017
Messages
2
Had the same problem with my system dataset and jails volume, and used the same method.

I also ran into some issues, which I managed to figure out myself; I don't have any Unix background from school, it's all from Google. :)

In case another newbie like me runs into the same problems, here are the steps I had to go through, with an explanation of what each one does as I understand it:
Code:
replace tank, dozer, and digger with the pool names used on your system

# zfs snapshot -r tank@01                                     // take a fresh recursive snapshot of the old (broken) pool, tank
# zfs send -R tank@01 | zfs receive -Fdvu dozer               // send a full copy to the zpool on the new drive (dozer); for my 500 GB drive it took about 1.5 h

# zfs snapshot -r tank@02
# zfs send -R -i tank@01 tank@02 | zfs receive -dvu dozer     // incremental send of everything that changed since @01; took only a few minutes

// time to "remove" the zpools from the system, so you can later put them in their proper places
# zfs set readonly=on tank
# zpool export -f tank        // had to force it; the jails were still using it even though I had stopped them all and disabled the services
# zpool export dozer          // no forcing needed, since it's just the copy on the new drive
# zpool import tank digger    // import and rename the broken drive's zpool under a third name, in case you still need the copy
# zpool import dozer tank     // import and rename the new drive's zpool to take the broken one's place

Here I had mount problems; I didn't see any of my jails in the FreeNAS tab, so here's what I figured out:

Code:
# zfs umount -f tank                     // again it was in use by some jails and had to be forced, even though the jails list in FreeNAS was empty
# zfs set mountpoint=/mnt/tank tank      // set the mountpoint back to where the broken pool used to be
# zfs mount tank                         // mount the volume back online

Rebooted the system and everything was back to normal again; the jails list was back and they were up and running. :)
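
A quick way to double-check that the mountpoint change took is to ask ZFS directly; zfs get should report /mnt/tank, and zfs mount (with no arguments) lists what is actually mounted (a minimal sketch, same pool name as above):

Code:
# zfs get mountpoint tank
# zfs mount | grep tank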



EDIT:
Because I had used the "new drive" in this FreeNAS before, it had already been assigned a GUID, which got me into another problem when I removed the bad drive: FreeNAS spammed a "vdev state changed" alert every second.
I don't know for sure that all of these steps were necessary, but what I did was:
Code:
# zpool reguid tank
# zpool reguid digger
# zpool export digger
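
To make sure the reguid actually gave the pools new GUIDs, you can read the guid property back before that last export of digger (a minimal sketch, same pool names as above):

Code:
# zpool get guid tank
# zpool get guid digger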
 