Moving data from old pool to new pool on the same machine

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
So here is where I'm at: my original pool consists of six 10TB drives in a RAIDZ1 configuration. I had asked on here what was the best way to expand my storage and was advised that larger drives should be configured in RAIDZ2 at minimum. It was suggested that it was best to make a new pool using RAIDZ2, transfer the data from the old pool to the new pool, then destroy the old pool and add its disks to the new pool as a second vdev.
I have purchased 6 14TB drives and configured them in a RAIDZ2 for a new pool.
I have automatic snapshots setup.
I have made a new snapshot of the old pool and sent it to the new pool using zfs send | recv.
The data copied to the new pool, but the transfer took about 36 hours, so there have since been changes to the original data. When I made a new snapshot and attempted to send it incrementally, it would not send:
Code:
zfs send -RI Media@migrate02-26 Media@migrate02-28new | zfs recv -F Tank
cannot receive new filesystem stream: destination has snapshots (eg. Tank@auto-2022-02-25_00-00)
must destroy them to overwrite it
warning: cannot send 'Media@auto-2022-02-12_00-00': signal received
warning: cannot send 'Media@auto-2022-02-13_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-14_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-15_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-16_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-17_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-18_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-19_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-20_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-21_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-22_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-23_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-24_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-25_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-26_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-27_00-00': Broken pipe
warning: cannot send 'Media@auto-2022-02-28_00-00': Broken pipe
warning: cannot send 'Media@migrate02-28new': Broken pipe
Any help with this would be appreciated. I have been trying to find a guide on this but can't seem to find what I'm looking for.

Edit: What I'm looking for is advice on the correct way to move the data from the old pool to the new pool and then make the new pool take the place of the old pool so that everything works, all jails and plugins included. For example, can you change the name of the new pool to the name of the old pool after everything is transferred and the old pool is exported/destroyed?
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Why use the command-line zfs send instead of the GUI, as is intended and almost universally recommended?
cannot receive new filesystem stream: destination has snapshots (eg. Tank@auto-2022-02-25_00-00) must destroy them to overwrite it
It tells you exactly what is wrong: your destination has snapshots, and it won't destroy data by default. You have to give it the switch to destroy the destination.
I don't know the command-line switch offhand, usually something like "-force", and I would recommend using the webUI anyway.

Do not enable snapshots on a replication destination; they will prevent default replications, or just be destroyed by the options that clear the destination.
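For reference, the receive-side force switch is zfs recv's -F. A minimal CLI sketch of clearing the conflict, using the pool and snapshot names from the error above (illustrative only, not a recommendation over the webUI):
Code:
# See which snapshots exist on the destination
zfs list -t snapshot -r Tank

# Destroy the destination snapshot named in the error (and disable the
# periodic snapshot task on Tank so new ones don't reappear mid-send)
zfs destroy Tank@auto-2022-02-25_00-00

# Retry the incremental send; -F on the receive side rolls the destination
# back to the most recent common snapshot before applying the stream
zfs send -R -I Media@migrate02-26 Media@migrate02-28new | zfs recv -F Tank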
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
Why use the command-line zfs send instead of the GUI, as is intended and almost universally recommended?
I was unaware it could be done this way, as I was told to use zfs send|recv.
That is why I posted a separate question regarding how to move the data from the old pool to the new pool.
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Fair enough.
(I'm not positive this would work exactly as is, I don't really have anything to replicate, but here are some general screenshots.)
In the GUI, under Tasks > Replication Tasks, you want to set up something like this:
The basic Add Replication (no option to destroy data):

[screenshot: basic Add Replication dialog]

The advanced Add Replication (option to replace the destination):

[screenshot: advanced Add Replication dialog]
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
Or. Um. Hmm. Looking at your system, you are on 11.2-RELEASE-U1?
If so, the GUI might be a bit different, but I think the general options are there?
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
Or. Um. Hmm. Looking at your system, you are on 11.2-RELEASE-U1?
If so, the GUI might be a bit different, but I think the general options are there?
No, I'm running TrueNAS-12.0-U8.

I was looking at the replication area in the GUI, but I wasn't sure if Basic would do what I needed, and I also didn't want to screw anything up in Advanced, like where you have the "DANGER" marked.
 
Joined
Oct 22, 2019
Messages
3,641
Try to do this, but not from top-level root dataset to top-level root dataset. Go one child level down, and replicate those individually.

Are you saying I can't copy the whole pool at once?

You can, but it would nest it underneath the target pool's top-level root dataset.

So you would have something like this on your target pool:
  • Tank
    • Media
      • child1
      • child2
      • child3
      • etc
Otherwise, you would need to replicate one level down from the original pool's top-level root dataset (child by child by child...), so that you end up with the following (see the sketch after the list):
  • Tank
    • child1
    • child2
    • child3
    • etc
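A minimal zfs send|recv sketch of that child-by-child layout, using this thread's pool names and the placeholder child names from the list (the GUI replication tasks do the equivalent):
Code:
# One fresh recursive snapshot of the source pool
zfs snapshot -r Media@migrate

# Send each child into the target pool's root, so it lands at
# Tank/child1 rather than nesting under Tank/Media/child1
zfs send -R Media/child1@migrate | zfs recv -F Tank/child1
zfs send -R Media/child2@migrate | zfs recv -F Tank/child2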
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
So I tried one child-level dataset as a test, and it looks like, because I initially sent the top-level root dataset to the top-level root dataset using zfs send|recv, the replication task failed because the destination already has data and will not destroy it.

So do I destroy the new pool and start over or use the advanced replication?
 
Joined
Oct 22, 2019
Messages
3,641
So I tried one child-level dataset as a test, and it looks like, because I initially sent the top-level root dataset to the top-level root dataset using zfs send|recv, the replication task failed because the destination already has data and will not destroy it.

So do I destroy the new pool and start over or use the advanced replication?
What does the hierarchy look like now? (Source and target? Snapshots?)

How did you initially do the first (full) replication?
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
What does the hierarchy look like now? (Source and target? Snapshots?)

How did you initially do the first (full) replication?

[screenshot: dataset hierarchy]

[screenshot: dataset hierarchy]

I have automatic snapshots set up for the original pool, set to expire after 2 weeks; it takes snapshots of the entire pool and is set to recursive.

I initially did the first (full) replication by creating a fresh snapshot and piping zfs send to zfs recv:
Code:
zfs snapshot -r Media@migrate02-26
zfs send -R Media@migrate02-26 | zfs recv -F Tank

This then caused problems because it copied all the original snapshots to the new pool as well. I have removed those in an attempt to bring the data up to current, but it didn't help.
 
Joined
Oct 22, 2019
Messages
3,641
What if you try with just one of the children, such as "downloads"?

Something like:
zfs send -R -I -v Media/downloads@migrate02-26 Media/downloads@migrate02-28new | zfs recv -v -d -F Tank
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
If you check the Synchronize option in the webUI, it will overwrite snapshots and data on the destination and make them match the source. It's never on by default, because destroying by default is a bad idea, but if you are sure there is nothing there you care about, check the box, and it's supposed to make the destination the same as the source.
You can replicate the whole pool to a dataset (I replicate from main pools to a backup pool, because I have multiple replications from multiple servers), e.g.
mainpool -> backuppool/mainpool
otherpool -> backuppool/otherpool
I think I had issues with cloning the whole pool to the whole pool, but it's been a while and I don't remember exactly what. It should work that way, realistically, but, for example, the 2nd server will try to use the replicated iocage dataset as its iocage, and you might not want that.
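A rough sketch of that pool-into-dataset layout on the CLI, assuming a recursive snapshot named @backup (names illustrative):
Code:
# The whole of mainpool, children included, lands nested at
# backuppool/mainpool, so several source pools can share one backup pool
zfs snapshot -r mainpool@backup
zfs send -R mainpool@backup | zfs recv -F backuppool/mainpool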
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
I think I had issues with cloning the whole pool to the whole pool, but it's been a while and I don't remember exactly what. It should work that way, realistically, but, for example, the 2nd server will try to use the replicated iocage dataset as its iocage, and you might not want that.
Yes, I was getting error notifications because the system detected 2 pools marked active for iocage usage.
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
What if you try with just one of the children, such as "downloads"?

Something like:
zfs send -R -I -v Media/downloads@migrate02-26 Media/downloads@migrate02-28new | zfs recv -v -d -F Tank
That gives a "too many arguments" error.
 
Joined
Oct 22, 2019
Messages
3,641
That gives a "too many arguments" error.
Try typing it in manually. Copying + pasting from these forums might introduce similar-looking characters that cause issues in the terminal.
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
Try typing it in manually. Copying + pasting from these forums might introduce similar-looking characters that cause issues in the terminal.
Yup, tried that, still "too many arguments".
 
Joined
Oct 22, 2019
Messages
3,641
Try without the -v on either side?
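Come to think of it, the flag order may be the culprit (an assumption from standard zfs option parsing, not verified on this system): -I takes the starting snapshot as its own argument, so in -R -I -v the -I swallows the -v, leaving an extra positional argument and a "too many arguments" error. With -v moved ahead of -I it should parse:
Code:
zfs send -R -v -I Media/downloads@migrate02-26 Media/downloads@migrate02-28new | zfs recv -v -d -F Tank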
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
Try without the -v on either side?
Just keep getting the same error.

I tried using the GUI replication task, just for the TimeMachine dataset, but I messed it up at first and chose Media,Media/TimeMachine to Tank/TimeMachine, so it started copying everything into the Tank/TimeMachine location. I couldn't get it to stop, so I restarted the system and fixed the mistake, making the task just Media/TimeMachine to Tank/TimeMachine. That ran, but now the destination doesn't match: the compression is different, the max volume is different, and it didn't remove the extra data left over from the first mistake. So I went in and edited the task, checked (Almost) Full Filesystem Replication, and Synchronize Destination Snapshots With Source is also checked, but the task would not run again.
Code:
[2022/03/02 11:07:43] INFO     [replication_task__task_1] [zettarepl.replication.run] No snapshots to send for replication task 'task_1' on dataset 'Media/TimeMachine'
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
Well, I have accomplished my goal. I ended up using the GUI to do it. I went to the Snapshots section, created a fresh snapshot, and then set up a replication task for each child dataset. A few days later I made a new snapshot and ran the replication tasks again to update the data. It worked as expected, though there are some differences in data size; most were minimal, and only one of the datasets differed by several gigabytes. A few days later I made one more snapshot, ran the replication tasks, and then switched the old pool and the new pool using the suggested procedure (a CLI sketch follows the list):
  • Export the pool using the GUI
  • From the CLI, zpool import newpool oldpool, substituting newpool and oldpool with the new and old pool names, respectively.
  • From the CLI, zpool export oldpool
  • From the GUI, import the pool
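With this thread's actual pool names (old pool Media, new pool Tank), a minimal sketch of that swap, assuming the old pool is already gone and the new pool has been exported from the GUI:
Code:
# Import the new pool under the old pool's name (rename happens on import)
zpool import Tank Media

# Export it again so the GUI can re-import it cleanly
zpool export Media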
I had to assign a pool for iocage again since I had removed the pools. I restarted the system for good measure, and I'm happy to say everything is working without issue. I'm going to wait some time to make sure everything is 100% good, and then I will add the disks from the original pool to the new pool as another vdev.

Thank you to those who helped
 

Amsoil_Jim

Contributor
Joined
Feb 22, 2016
Messages
175
It was a good thing I waited, because I soon realized that the user folders I had SMB shares set up for were only folders, not datasets, and therefore did not get copied over in the replication process; I manually copied this data to the new pool.
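A quick way to check which share paths are real datasets (and so travel with zfs send) versus plain directories that replication skips; the users path here is hypothetical:
Code:
# Datasets appear in zfs list; plain folders only appear on disk
zfs list -r Media
ls /mnt/Media

# Plain directories have to be copied by hand, e.g.
rsync -a /mnt/Media/users/ /mnt/Tank/users/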

Then, after all this, I found this thread with pretty much exactly what I was looking for; notably, it mentions replicating the pool-level dataset, which would have copied those directories and files that I missed.
 