Howto: migrate data from one pool to a bigger pool

Lighthouse

Dabbler
Joined
Nov 15, 2018
Messages
15
I have updated my instructions regarding snapshot creation in point 3. I ran into this while upgrading HDDs: TrueNAS refuses to move files when recursive snapshots are missing.
 

Lighthouse

Dabbler
Joined
Nov 15, 2018
Messages
15
So, time has passed since then and now I face another problem.


Since 2019, after two years, the data I store is finally outgrowing my main storage and I need to upgrade again.

The problem is that I encrypted my main storage (legacy encryption) and I am not sure how this is going to work. I fully upgraded my pool when TrueNAS arrived.

So far the information I have found is all over the place.
 

TooMuchData

Contributor
Joined
Jan 4, 2015
Messages
188
There have been a lot of posts recently about how to move data from one pool to another. Usually it's because folks want to upgrade to a much larger pool and the drive-by-drive resilvering process will take too long. I've recently gone through this as well. Here are the steps that I followed.

This assumes that you have both pools set up and connected to the same system, but the replication steps can be done between two systems and the disks moved after replication (this would apply where the primary machine doesn't have enough ports to handle the additional drives for the second pool).

The assumption is that "tank" is the existing pool (and root dataset) name. "temp-tank" is the name for the new pool prior to data migration.

The steps in a nutshell: replicate from tank to temp-tank, remove (or rename) tank, and then rename temp-tank to tank. (A consolidated CLI sketch follows the numbered steps below.)

1. The system dataset needs to be moved off of tank. Use the GUI to select a new location other than tank or temp-tank.

2. Create a system config backup using the GUI. This will be needed later, because when you detach tank, you will lose your share, snapshot and replication settings.

3. Use the GUI to create a snapshot of the dataset you want to move. If you want to move everything, select the root dataset. For flexibility in the future, I'd suggest checking the "recursive" option. Also, minimize use of tank: pick a time when nothing is changing, make sure you have a snapshot, and then wait for replication to finish. How long this takes depends on how much data you have and the speed of your machine; it took ~36 hours to move 20TB locally for me. [Alternatively, you can use the CLI to create the snapshot and then replicate manually: "zfs snapshot -r tank@migrate" and then "zfs send -R tank@migrate | zfs receive temp-tank".]

4. Once replication is complete and you are satisfied that all data is on temp-tank it's time to detach both tank and temp-tank. Use the GUI to "detach volume" for tank and then repeat for temp-tank. When the confirmation window pops up DO NOT CHOOSE THE OPTION TO DESTROY.

5. Using the CLI (or SSH), run the following to import and rename: "zpool import tank old-tank" and then "zpool import temp-tank tank". (For reference: zpool import [old-pool-name] [new-pool-name].)

6. Once the pools are renamed, export them at the CLI: "zpool export old-tank" and "zpool export tank".

7. Using the GUI, go to the storage tab, select the import volume tab and import tank. This step is what enables FreeNAS to understand and control the pool.

8. Once the pool is imported, you can either manually recreate your shares, or you can restore from the configuration backup we made in step 2.

9. I would verify that everything is working to your liking before doing anything with old-tank. For safety, I'd leave it un-imported until you decide you need it or want to get rid of it. If you want to wipe the data on the disks, import old-tank and then, once it is in FreeNAS, select the detach volume option for old-tank and this time select the destroy data option to blank out the drives. This is the point of no return, so know what you are doing before confirming.
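For reference, here is the CLI sequence from steps 3, 5 and 6 in one place. This is only a sketch using the example names above (tank, temp-tank, old-tank and a snapshot called "migrate"); the -F on the receive is my addition and may be needed because temp-tank already exists as a pool with its own root dataset:

zfs snapshot -r tank@migrate                          # step 3: recursive snapshot
zfs send -R tank@migrate | zfs receive -F temp-tank   # step 3: full replication, carries child datasets and snapshots
zpool import tank old-tank                            # step 5: after detaching both pools in the GUI
zpool import temp-tank tank
zpool export old-tank                                 # step 6: export so the GUI import in step 7 works
zpool export tank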

[edit: note to self - here's the link to a great post on how to move Jails: https://forums.freenas.org/index.ph...-volume-to-new-ssd-volume.42105/#post-271740]
Everything worked, but I ended up with the same available space even though "old-tank" was 6x8TB and "tank" is now 6x14TB. I'm running a scrub now in hopes I'll see more available space when it completes.

depasseg or anyone else, have you seen this before? Any other suggestions?

I cancelled the scrub in order to "expand pool". No change, still the old available size. Then I checked that autoexpand = on; it was. Finally I discovered that the root-level dataset had a quota set. Removed that and now all is in order.
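For anyone hitting the same "pool didn't grow" symptom, these are the CLI checks that correspond to the above (a sketch only, assuming the pool is named tank):

zpool list -v tank          # shows the size of the pool and of each vdev
zpool get autoexpand tank   # should be "on" for the pool to grow onto larger disks
zfs get quota tank          # a quota on the root dataset caps the visible space
zfs set quota=none tank     # removes the quota -- only if you are sure you don't want it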
 
Last edited:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Everything worked, but I ended up with the same available space even though "old-tank" was 6x8TB and "tank" is now 6x14TB. I'm running a scrub now in hopes I'll see more available space when it completes.

depasseg or anyone else, have you seen this before? Any other suggestions?

I cancelled the scrub in order to "expand pool". No change, still the old available size. Then I checked that autoexpand = on; it was. Finally I discovered that the root-level dataset had a quota set. Removed that and now all is in order.
If you use zpool list -v, it will show you the capacity of each vdev, and I don't think you would have had these issues.
 

mysticpete

Contributor
Joined
Nov 2, 2013
Messages
146
I would think that if you have spare SATA ports, you can just use the replication feature to move/copy the whole pool or datasets to the new pool. Then, once you have confirmed that all the data has been copied, you can take the old pool/dataset offline and swap them over.
 

Bikerchris

Patron
Joined
Mar 22, 2020
Messages
208
ANSWERED MY OWN QUESTION, SEE A FEW POSTS BELOW

Hello @depasseg , I hope it's OK to respond to this post.

I've done as you suggested and all is well, thank you so much for taking the time those many years ago.

I just wonder, though: if this procedure is followed, are the original snapshots retained? I only ask because I tested this for the scenario where a second machine pulls replications, and it gave the "unable to start from scratch" warning. It's not a big deal to restart in most cases, but I would be grateful to know whether this is an expected side effect.

Many thanks, Chris
 
Last edited:

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
you can always check...

zfs list -t snap pool/dataset
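If the snapshot was taken recursively, you can also list everything under the pool in one go (substitute your own pool name; "tank" here is just the example from the guide above):

zfs list -t snapshot -r tank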
 

Bikerchris

Patron
Joined
Mar 22, 2020
Messages
208
I thought I would try this again. The simple answer is that the original snapshot schema should be used to replicate the data to the new drive arrangement; then any existing replications to other machines will keep working without having to start from scratch. It's probably common sense, but replications and snapshot tasks should be temporarily disabled on all related machines while the work is being done.
 

rafpigna

Dabbler
Joined
Dec 6, 2023
Messages
12
Hello, I found this post/guide while looking for a solution to my problem.

I have a single-disk pool on my SCALE installation, 7 TB, that is almost full.
I have another 12 TB drive ready for use.
My TrueNAS host has only ONE SATA port, so I can't hook the new disk up via SATA; I can only connect it via USB, or put another "temp" TrueNAS SCALE system with the 12 TB drive in it on the network.

Will this guide work for my situation?

But mainly I have another question. When I read:

1. The system dataset needs to be moved off of tank. Use the GUI to select a new location other than tank or temp-tank.

I already have the system installed on an internal NVMe SSD. This is the report of my installed storage:

sda 7HK6R7RF 7.28 TiB Pool1 Disk Type:HDD
sdb drive-scsi0 64 GiB boot-pool Disk Type:SSD

Do I have to move the boot-pool to a different location, or is it already OK as it is now?

Thanks for any suggestions and help
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
You can try a SATA-to-USB adapter. These are not a good idea for long term use, @Arwen has an excellent resource in the Resources section explaining why, but as a transitional aid, one USB adapter can probably get the job done just fine.
 

rafpigna

Dabbler
Joined
Dec 6, 2023
Messages
12
You can try a SATA-to-USB adapter. These are not a good idea for long term use, @Arwen has an excellent resource in the Resources section explaining why, but as a transitional aid, one USB adapter can probably get the job done just fine.
Thanks, I'm doing it this way right now.
Anyway, I have to figure out how to do the replication correctly; just adding the new pool in the GUI gives me a lot of options, and I don't understand 90% of them... it will be a long night though! :)
 

Bikerchris

Patron
Joined
Mar 22, 2020
Messages
208
Thanks, I'm doing it this way right now.
Anyway, I have to figure out how to do the replication correctly; just adding the new pool in the GUI gives me a lot of options, and I don't understand 90% of them... it will be a long night though! :)
This may not be available to you, but I have a spare machine that I test procedures on. That can be really helpful when changing drive arrangements.

Also, I wonder if your motherboard has a spare PCI slot, as you could get an HBA card to give you 4 extra SATA ports?

I did a video a while back when I did this; it's not very good and should only be used as a guide, but it may help you: https://www.youtube.com/watch?v=mibON3DRo14
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
The accumulated info and the lead post should be turned into a resource…
 

rafpigna

Dabbler
Joined
Dec 6, 2023
Messages
12
EDIT:
I partially fixed the issue.
It seems that to migrate apps to the new pool, you need to do the app migration in the GUI (Apps > Settings > Choose Pool) before exporting the pools.

The problem is that when you import the new (and renamed) pool back, most of the apps will not work anymore because they will try to mount datasets from the pool under its old name!
So basically, it seems that if you are on SCALE and want to keep your apps, you have to NOT change the pool names.
You can do the data replication, the app migration and the old-pool export, and stop there. You will be left with the new pool named "new pool", or whatever you chose for it.

I didn't find a solution for this. In my case it's not a big issue, since I just had a couple of shares that I can re-create by hand and nothing else pointing to the old pool name, so staying with the new name makes no difference. But if you are on SCALE and absolutely need to keep the old name for the new pool, moving apps will be a big problem.
If someone has a fix or solution for this, I would be happy to know.
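One diagnostic that may help anyone debugging this: the mountpoint property of the replicated datasets can show whether anything still points at a path under the old pool name. This is just a sketch; "NewPool" and "Pool1" stand in for whatever your new and old pool names are:

zfs list -r -o name,mountpoint NewPool | grep Pool1   # any hits are datasets whose mountpoints still reference the old pool path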


##################################################
Old post

I just finished the procedure and everything seems to have worked fine, but I'm having a problem with APPS.
As soon as I try to set the apps pool, I receive this error:

[EFAULT] Command mount -t zfs Pool1/ix-applications/k3s/kubelet /var/lib/kubelet failed (code 1): filesystem 'Pool1/ix-applications/k3s/kubelet' cannot be mounted using 'mount'. Use 'zfs set mountpoint=legacy' or 'zfs mount Pool1/ix-applications/k3s/kubelet'. See zfs(8) for more information.

I tried to reboot a couple of times and retry, with no success. I searched for a solution but didn't find much, only one old post with a lot of users reporting the same problem; the only fix someone found was to completely delete the ix-applications dataset, but I would like to avoid that if it's not absolutely necessary, since I would have to restore all the app backups and may run into other issues. Luckily I still have the old disk/pool and I can revert everything back, but that would leave me with the same problem: running out of space on a system with just one SATA port.
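For what it's worth, the error message itself points at the dataset's mountpoint handling. A sketch of the commands it suggests, using the dataset path from the error (not a tested fix, just where I would look first):

zfs get mountpoint Pool1/ix-applications/k3s/kubelet          # check what the property is currently set to
zfs set mountpoint=legacy Pool1/ix-applications/k3s/kubelet   # what the error text says it expects for a plain mount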

Also, I wonder if your motherboard has a spare PCI slot, as you could get an HBA card to give you 4 extra SATA ports?

Unfortunately my host is a mini PC (HP EliteDesk 800 Mini G3). It has 1 NVMe and 1 SATA port for a 2.5" drive. I used the NVMe for Proxmox/boot and bought an extension cable to connect the internal SATA to a 3.5" disk outside the PC (also drilling a hole in the chassis :D ). It's not a perfect and clean solution, but it works for learning the system while I wait for my new hardware arriving in one month, which will have 12 drive bays, 2 EPYC CPUs, 264 GB RAM, an NVIDIA Quadro GPU and so on...
 
Last edited:

Bikerchris

Patron
Joined
Mar 22, 2020
Messages
208
So basically, it seems that if you are on SCALE and want to keep your apps, you have to NOT change the pool names.
I found that to be the case with Core.

Unfortunately my host is a mini PC (HP EliteDesk 800 Mini G3). It has 1 NVMe and 1 SATA port for a 2.5" drive. I used the NVMe for Proxmox/boot and bought an extension cable to connect the internal SATA to a 3.5" disk outside the PC (also drilling a hole in the chassis :D ). It's not a perfect and clean solution, but it works for learning the system while I wait for my new hardware arriving in one month, which will have 12 drive bays, 2 EPYC CPUs, 264 GB RAM, an NVIDIA Quadro GPU and so on...
Ah, well it sounds like you'll have a far easier time when you go to the new system. If you have any other SCALE-related issues, it might be best to create a thread in the SCALE area of this forum: https://www.truenas.com/community/forums/truenas-scale-discussion/
 

rafpigna

Dabbler
Joined
Dec 6, 2023
Messages
12
I found that to be the case with Core.


Ah, well it sounds like you'll have a far easier time when you go to the new system. If you have any other SCALE-related issues, it might be best to create a thread in the SCALE area of this forum: https://www.truenas.com/community/forums/truenas-scale-discussion/

Sorry, I didn't realise that this was not the right forum; I just saw that this was quite similar to my situation and posted here :)

Anyway, I got it all fixed.
- Reverted back to the point where I had just completed the 24-hour replication task from the old pool (7 TB single disk connected via SATA) to the new pool (12 TB single disk connected via USB 3 with a SATA-to-USB adapter).
- Started a Heavy Script manual backup of the apps, just in case I needed to revert back to this stage.
- Migrated the apps via the GUI to the new pool. I got an error that the dataset "ix-applications" was already on the new pool, so I deleted it (it took 20 minutes, it was a 22 GB dataset!) and the migration then worked fine (another 30 minutes...).
- Exported both pools, then imported the new pool under its name (NewPoolNew) and the old pool as OldPool.
- Did the app migration again from NewPoolNew to OldPool.
- Exported both pools again, then imported the new pool as Pool1 (the old pool's original name) and the old pool as OldPool :)
- Migrated the apps one last time from OldPool to Pool1 (the new pool). At this point every mount point correctly points to the new disk/pool under the old name :)
- Unfortunately, I discovered that during these multiple imports/exports/migrations, the apps that were configured to save their configs on PVC volumes lost them and started up as if freshly installed. I restored from a HeavyScript backup I had made BEFORE starting all this, i.e. before the replication task itself; the restore went fine and now all apps are working again with no errors, and all the shares and configs are OK since "Pool1" is there, just on a new disk.
- Now I still have to move the new disk from USB to the SATA port, removing the old disk, but I think this should work: export the pool, shut down the host, remove the old disk from SATA and connect the new one, turn on the system, and import the pool. I hope :) (rough command sketch below)
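A sketch of that last swap, assuming the pool name Pool1 from above (in practice it's probably better to do the export and import through the GUI so TrueNAS keeps track of the pool, but the underlying operations look like this):

zpool export Pool1     # before shutting down and moving the disk to the SATA port
zpool import           # with no arguments, lists the pools available for import after the swap
zpool import Pool1     # or use Storage > Import Pool in the GUI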

I don't know how I got the idea to do the three exports/imports with a different pool name in the second cycle so that I'd end up with the right name, but I'm happy I had this stroke of genius! :D

Anyway, thanks for the guide and for the help. Maybe my experience will help someone in the same situation; maybe I should write a new thread in the SCALE forum as a guide and a warning to do things in the right order :)
 
Last edited: