Migrate pool+data to new pool

dalnew

Dabbler
Joined
Dec 9, 2020
Messages
26
So I am running TrueNAS Scale as my primary NAS system, and I have an old Synology system that I use to back up all the TrueNAS data. I currently have a pool, let's call it "tank1", composed of 4 16TB drives in a RAIDZ1 configuration, but I recently purchased 6 new 18TB drives that I would like to replace it with, using a RAIDZ2 configuration.

What I would like to do is take all the datasets/configuration/etc. on tank1 and move it to the new tank2 composed of the 6 18TB drives. Essentially the new tank2 should be an exact clone of what tank1 is today, i.e. all the configuration, datasets, files, timestamps, etc. should be the same... a "clone". Once that's done, and rigorously tested to make sure everything is still there :), I would like to wipe the original 4 drives and move them over to my Synology to use for additional backup storage. Then I'd like to rename tank2 to tank1 so that nothing will appear to be different (other than the fact that it's now a 6 drive RAIDZ2 configuration).

My question: what's the best way to do this? I looked around the Scale GUI and didn't see anything that explicitly looked like it would handle this, but I found this discussion (https://serverfault.com/questions/88638/moving-a-zfs-filesystem-from-one-pool-to-another) that seems to indicate the process would essentially be what I want:
# Do the initial send
zfs snapshot -r tank1@golden
zfs send -R tank1@golden | zfs receive -F tank2

# Shut everything down ???? and rerun
zfs snapshot -r tank1@golden2
zfs send -Ri tank1@golden tank1@golden2 | zfs receive -F tank2

A couple questions:
  1. Ideally I'd like to keep this system up and available for use during the initial send as there is ~36TB of data to send and this will probably take a while. Would this accomplish what I want?
  2. For the second part what does "shut everything down" entail? Do I just shutdown all the docker services I have and then go to the TrueNAS GUI System->Services and turn all those off? Is that it or is there some "safe mode" I should be in to really accomplish this correctly?
  3. Once the receive is done how do I just rename tank1->tankOld and tank2->tank1? Someone mentioned changing mount points but I'm not familiar with that in zfs/TrueNAS.
  4. Also, according to the TrueNAS Storage GUI, tank1 also contains my System Dataset Pool. I have a set of mirrored SSDs that I think I'd rather have this living on though. Is there an easy way to move that over in the current GUI before I do this move?
  5. Also there was mention of using "pv" to get an estimated ETA for the initial transfer... eg something like "zfs send -R tank1@golden | pv -s [total-estimated-size] | zfs receive -F tank2". I guess that's the best way to get an ETA? It won't slow down the transfer I assume?
Thoughts? Better ways of doing this? Thanks for the advice!
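To make question 5 concrete, here's roughly what I had in mind (just a sketch with my pool names; I believe `zfs send` can estimate the stream size with a dry run via `-nvP`, which would feed pv's `-s` option, but correct me if I'm wrong):

```shell
# Dry run (-n) with verbose (-v), parsable (-P) output prints an
# estimated stream size on a line beginning with "size".
SIZE=$(zfs send -RnvP tank1@golden | awk '/^size/ {print $2}')

# Feed the estimate to pv so it can show progress (-p), ETA (-e),
# elapsed time (-t), against the estimated size (-s).
zfs send -R tank1@golden | pv -pets "$SIZE" | zfs receive -F tank2
```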
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,945
Do you have enough space to put the new disks in as well as the old ones? If so, then this is actually quite easy.
I use the following script to do something very similar:
zfs snap BigPool/SMB/Archive@migrate
zfs send -R BigPool/SMB/Archive@migrate | zfs recv -F BigPool/SMB/Archive_New
zfs snap BigPool/SMB/Archive@migrate2
zfs send -i @migrate BigPool/SMB/Archive@migrate2 | zfs recv -F BigPool/SMB/Archive_New
zfs destroy -rf BigPool/SMB/Archive
zfs rename -f BigPool/SMB/Archive_New BigPool/SMB/Archive
It takes a snapshot and sends it to a new location. Then it takes a second snapshot and sends that incrementally, which should take care of any deltas; it then deletes the original dataset and renames the temp dataset to the original name. Don't run this as it stands on an encrypted dataset.
This is not quite how you want to do it - but the principle is the same

1. Sending the primary base snapshot and then sending a delta snapshot should enable the main dataset to remain up all the time. Just be careful
2. I would just shut docker down completely
3. Don't know
4. You should be able to just move this, unless you are AD connected, in which case you will need to disconnect AD, move the dataset via the GUI, and then reconnect AD
5. Don't know
 

dalnew

Dabbler
Joined
Dec 9, 2020
Messages
26
Thanks for all the great info!

Yes, it's a 12 bay NAS system, so there's space for both sets of drives at the same time.

So based on the commands you're using, are you putting the new drives in the same BigPool rather than a new pool? Are they added as a second vdev in BigPool and then transferred? Wouldn't that just stripe the data across both vdevs? Don't new drives that are intended to be independent of the original drives need to be in their own pool?

Also I figured out how to move the system dataset. Apparently there's a GUI option under System/Advanced/System Dataset Pool. I guess I would have expected that option to be present on the Storage tab or an option on the pool that contains it like "Move system dataset pool", but whatever :)
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,945
No - the script I posted is not what you want, but should guide you
Put the new disks in a new pool and snapshot across
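As a rough sketch of what I mean (the device names below are placeholders; on SCALE you would normally create the pool through the Storage GUI rather than the command line):

```shell
# Hypothetical device names -- substitute your actual 6 x 18TB disks,
# or just create the RAIDZ2 pool in the TrueNAS Storage GUI.
zpool create tank2 raidz2 sda sdb sdc sdd sde sdf

# Then snapshot the old pool and send it across, as discussed above.
zfs snapshot -r tank1@golden
zfs send -R tank1@golden | zfs receive -F tank2
```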
 

dalnew

Dabbler
Joined
Dec 9, 2020
Messages
26
Thanks, it's in progress. Quick question: do you have any idea how long this should take? It's been about 36 hours, and I seem to have two processes related to the command:
sudo zfs send -R tank1@golden_moving_12.5.21 | pv -pets 36600000000000 | sudo zfs recv -F tank2
The pv is really just there to give an ETA on the job, but it hasn't transferred ANY data yet. At least not that I can tell. These are the two processes that were created from the command:

4 0 381972 54145 20 0 9276 4256 - S+ pts/0 0:00 sudo zfs recv -F tank2
4 0 381973 381972 20 0 10752 5184 - R+ pts/0 2125:44 zfs recv -F tank2
The second one has been running for ~36 hours and the other doesn't seem to have done ANYTHING.

If I look at top, I see the CPU has been pegged at ~100% on one thread for the entire time for the zfs send, but when I look at the I/O stats or the storage usage of the destination pool, I see no data has been transferred.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
381973 root 20 0 10752 5184 4204 R 99.7 0.0 2129:42 zfs

Is this normal? I know it's a ~37TB snapshot I'm trying to send, but I would have expected some data to have been moved at this point no? Should I kill it and start over? It's obviously doing something given that one thread is sitting at 100%, but I just can't tell if it's doing something *productive* or if it's stuck in some useless loop.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,945
Data should be moving almost immediately, and quite quickly. Looks like summat has got hung up. If you look at the pool usage on the destination pool, I assume that's still showing nothing.

I would do a simple send | recv and use the rate at which the destination is filling up to work out how long it's going to take.
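Something like this (a sketch; substitute your actual pool name) will show whether anything is actually landing on the new pool:

```shell
# Live throughput on the destination pool, refreshed every 5 seconds --
# if this stays at zero for hours, the send/recv is stuck.
zpool iostat -v tank2 5

# Allocated space on the new pool; watch "used" grow over time.
zfs list -o name,used,avail tank2
```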
 