[11.2] Migrating the iocage dataset to another pool?

seedz

Dabbler
Joined
May 2, 2018
Messages
39
Hi!

At first this was a question; now it's how you should do it:
(less scrolling for new people, thanks to everyone who participated)

First, you migrate your iocage dataset to the new pool like this:
Code:
iocage stop ALL
zfs unmount -f tank/iocage
zfs snapshot -r tank/iocage@migration
zfs send -R tank/iocage@migration | zfs receive -v dozer/iocage
iocage clean -a
zfs destroy -f tank/iocage
iocage activate dozer
zfs destroy -r dozer/iocage@migration

Here, use your own pool names:
in this example the old pool is named tank, the new one dozer.
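
If you want to double-check the copy before running the destroy steps, a plain zfs list is safe to run in between (unlike iocage commands, see the note further down); it should show the same datasets on the new pool:
Code:
zfs list -r dozer/iocage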

Then, you may need to edit your jail mount points.
To do this:
set nano as your editor (optional if you know how to search and replace with vi)
Code:
EDITOR=/usr/local/bin/nano; export EDITOR

then edit the fstab of your jails
Code:
iocage fstab -e yourjail

then do a search-and-replace-all with nano, replacing your old mountpoint /mnt/tank/iocage with /mnt/dozer/iocage
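
If you have many jails, a non-interactive alternative is a bulk search and replace with sed; a sketch assuming the default iocage layout, where each jail's fstab lives at /mnt/<pool>/iocage/jails/<jailname>/fstab, and the example pool names above (keep a backup of the fstab files first):
Code:
# FreeBSD's sed needs -i '' for in-place editing
sed -i '' 's|/mnt/tank/iocage|/mnt/dozer/iocage|g' /mnt/dozer/iocage/jails/*/fstab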

Tested on FreeNAS 11.2-U1


(old post contents)
I have lots of jails set up for my uses that I'd like to transfer from my HDD pool to a newly bought SSD.
I've found several posts talking about changing the iocage dataset location by doing a reset, but:
nothing I've read touches on how to effectively transfer an active jail dataset without having to remake every one of them.

What I already did:
- copied the whole dataset to a backup folder
- prepared the new pool

So, my questions are:
- is there another way than doing an "iocage clean -a" / "iocage reset" / "iocage activate MYPOOL" to move the dataset?
- doing that, what's the right way to go about it so as not to lose everything?

I'm running FreeNAS-11.2-BETA3
 
Joined
Jul 10, 2016
Messages
521
There's no iocage reset command. Anyway, assuming that:
  • your iocage dataset currently resides on a pool called "tank" and this pool is "active" for iocage use.
  • your iocage dataset tank/iocage is mounted as /mnt/iocage
  • you want to move it to a brand new pool called "dozer"
You can move your iocage dataset from pool "tank" to pool "dozer" as follows:
Code:
iocage stop ALL
zfs unmount -f tank/iocage
zfs snapshot -r tank/iocage@migration
zfs send -R tank/iocage@migration | zfs receive -v dozer/iocage
iocage clean -a
zfs destroy -rf tank/iocage
iocage activate dozer
zfs destroy -r dozer/iocage@migration

**NOTE ADDED** Use a real SSH client and log off from the WebUI while you do this; do not run any other iocage commands in between. Some commands that may seem innocent, e.g. iocage list, will automatically try to recreate or mount the old datasets before the new pool is activated.

Early versions of iocage mounted the first iocage dataset in /mnt/iocage but that was changed to /mnt/<activepool>/iocage in later versions. Be prepared to change the above instructions to fit your particular installation and change the fstab entries to match the new mountpoints.
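
To see which layout a given installation uses before starting, a quick read-only check of the current mountpoints works (substitute your own pool name for tank):
Code:
zfs get -r mountpoint tank/iocage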
 

seedz

Dabbler
Joined
May 2, 2018
Messages
39
Thanks, such a nice template!

I'll try that during the week and report back on how it went.
 

seedz

Dabbler
Joined
May 2, 2018
Messages
39
Worked like a charm, almost!

It moved nearly everything; just one jail was borked.
I could restore it from the backup I made before the whole zfs shebang, though.

The deletion of the old dataset didn't go as well, sadly: most of it is still there and stuck.
zfs destroy says it's doing its job, while the GUI reports the dataset still exists but fails to delete it.
Maybe a GUI glitch? I don't know zfs well enough through the CLI to check whether the old dataset is truly gone or not.
 

8-bit Yoda

Explorer
Joined
Jun 16, 2018
Messages
68
You can run
Code:
zfs list
from the command line.
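
For example, to check just the old dataset (assuming the old pool is still named tank), this read-only command either lists the leftovers or reports that the dataset does not exist:
Code:
zfs list -r tank/iocage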
 

seedz

Dabbler
Joined
May 2, 2018
Messages
39
Oh!
Well, bad news: the old dataset is still there.
zfs destroy doesn't work anymore: it tries to unmount the iocage dataset, which is now mounted from the new disk.

Resolved it when I changed boot disks (switched from a really old IDE HDD to an M.2 SSD).
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
Oh!
Well, bad news: the old dataset is still there.
zfs destroy doesn't work anymore: it tries to unmount the iocage dataset, which is now mounted from the new disk.

Resolved it when I changed boot disks (switched from a really old IDE HDD to an M.2 SSD).


So are Jurgen's instructions correct, or are there some slight issues? I'll be needing to do this in the upcoming weeks.
 

seedz

Dabbler
Joined
May 2, 2018
Messages
39
I'm not sure if the cleanup instructions failed or if it was PEBKAC.
As I said, in my case the move part worked really well; it's the cleanup that failed
(the cleanup commands on the old dataset failed repeatedly on my system, if my memory is correct).

Also, keep a backup of your jail configs in case anything goes bad
(one of my jails, the one I use as my reverse proxy, lost all its config files).

I got around it by reinstalling FreeNAS from the ground up and doing the cleanup then
(I reinstalled mainly to leave the nightlies and get back to stable).
I must add that some of the jails broke in the nightlies, not letting me run "top" or even "pkg" in them, so I had to reinstall them anyway; at least I could save my databases and configs.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
There's no iocage reset command. Anyway, assuming that:
  • your iocage dataset currently resides on a pool called "tank" and this pool is "active" for iocage use.
  • your iocage dataset tank/iocage is mounted as /mnt/iocage
  • you want to move it to a brand new pool called "dozer"
You can move your iocage dataset from pool "tank" to pool "dozer" as follows:
Code:
iocage stop ALL
zfs unmount -f tank/iocage
zfs snapshot -r tank/iocage@migration
zfs send -R tank/iocage@migration | zfs receive -v dozer/iocage
iocage clean -a
zfs destroy -f tank/iocage
iocage activate dozer
zfs destroy -r dozer/iocage@migration

**NOTE ADDED** Use a real SSH client and log off from the WebUI while you do this; do not run any other iocage commands in between. Some commands that may seem innocent, e.g. iocage list, will automatically try to recreate or mount the old datasets before the new pool is activated.

I followed these precise instructions to the letter and it did not work. I've ended up with 0 jails.
Luckily this is my test machine.

The only difference is I'm moving from my SSD to my TANK (again: for the sake of testing)


Code:
iocage stop ALL
zfs unmount -f SSD/iocage
zfs snapshot -r SSD/iocage@migration
zfs send -R SSD/iocage@migration | zfs receive -v TANK/iocage
iocage clean -a 
zfs destroy -f SSD/iocage
iocage activate SSD
zfs destroy -r TANK/iocage@migration


EDIT: 5 minutes later, you can clearly see my mistake in the second-to-last command: I re-activated the old pool, not the new one.
Embarrassing, but I hope this helps others. Thanks Jurgen, great work, again!



Here's the copied log; you can see my mistakes in there (which, however, should not have broken the process).
Have I missed something obvious?

Code:
root@freenasnew:/mnt/SSD/iocage/jails # iocage stop ALL
qbittorrent is not running!
TestJail01 is not running!
sonarr is not running!
subsonic is not running!
root@freenasnew:/mnt/SSD/iocage/jails # zfs unmount -f tank/io
tank/ not found

root@freenasnew:/mnt/SSD/iocage/jails # zfs unmount -f tank/iocage
cannot open 'tank/iocage': dataset does not exist
root@freenasnew:/mnt/SSD/iocage/jails # zfs unmount -f ssd/iocage
cannot open 'ssd/iocage': dataset does not exist
root@freenasnew:/mnt/SSD/iocage/jails # zfs unmount -f /mnt/ssd/iocage
cannot unmount '/mnt/ssd/iocage': No such file or directory
root@freenasnew:/mnt/SSD/iocage/jails # zfs unmount -f SSD/iocage
root@freenasnew:/mnt/SSD/iocage/jails # zfs snapshot -r SSD/iocage@migration
root@freenasnew:/mnt/SSD/iocage/jails # zfs send -R SSD/iocage@migration | zfs receive -v SSD/iocage
cannot receive new filesystem stream: destination 'SSD/iocage' exists
must specify -F to overwrite it
warning: cannot send 'SSD/iocage@migration': signal received
root@freenasnew:/mnt/SSD/iocage/jails # zfs send -R SSD/iocage@migration | zfs receive -v TANK/iocage
receiving full stream of SSD/iocage@migration into TANK/iocage@migration
received 3.31MB stream in 1 seconds (3.31MB/sec)
receiving full stream of SSD/iocage/templates@migration into TANK/iocage/templates@migration
received 46.6KB stream in 1 seconds (46.6KB/sec)
receiving full stream of SSD/iocage/log@migration into TANK/iocage/log@migration
received 59.1KB stream in 1 seconds (59.1KB/sec)
receiving full stream of SSD/iocage/images@migration into TANK/iocage/images@migration
received 46.6KB stream in 1 seconds (46.6KB/sec)
receiving full stream of SSD/iocage/download@migration into TANK/iocage/download@migration
received 48.2KB stream in 1 seconds (48.2KB/sec)
receiving full stream of SSD/iocage/download/11.2-RELEASE@migration into TANK/iocage/download/11.2-RELEASE@migration
received 126MB stream in 3 seconds (41.9MB/sec)
receiving full stream of SSD/iocage/jails@migration into TANK/iocage/jails@migration
received 53.2KB stream in 1 seconds (53.2KB/sec)
receiving full stream of SSD/iocage/jails/qbittorrent@migration into TANK/iocage/jails/qbittorrent@migration
received 159KB stream in 1 seconds (159KB/sec)
receiving full stream of SSD/iocage/jails/TestJail01@migration into TANK/iocage/jails/TestJail01@migration
received 53.8KB stream in 1 seconds (53.8KB/sec)
receiving full stream of SSD/iocage/jails/sonarr@migration into TANK/iocage/jails/sonarr@migration
received 133KB stream in 1 seconds (133KB/sec)
receiving full stream of SSD/iocage/jails/subsonic@migration into TANK/iocage/jails/subsonic@migration
received 133KB stream in 1 seconds (133KB/sec)
receiving full stream of SSD/iocage/releases@migration into TANK/iocage/releases@migration
received 48.2KB stream in 1 seconds (48.2KB/sec)
receiving full stream of SSD/iocage/releases/11.2-RELEASE@migration into TANK/iocage/releases/11.2-RELEASE@migration
received 48.2KB stream in 1 seconds (48.2KB/sec)
receiving full stream of SSD/iocage/releases/11.2-RELEASE/root@TestJail01 into TANK/iocage/releases/11.2-RELEASE/root@TestJail01
received 621MB stream in 7 seconds (88.7MB/sec)
receiving incremental stream of SSD/iocage/releases/11.2-RELEASE/root@qbittorrent into TANK/iocage/releases/11.2-RELEASE/root@qbittorrent
received 36.5KB stream in 1 seconds (36.5KB/sec)
receiving incremental stream of SSD/iocage/releases/11.2-RELEASE/root@subsonic into TANK/iocage/releases/11.2-RELEASE/root@subsonic
received 1.00MB stream in 1 seconds (1.00MB/sec)
receiving incremental stream of SSD/iocage/releases/11.2-RELEASE/root@sonarr into TANK/iocage/releases/11.2-RELEASE/root@sonarr
received 288KB stream in 1 seconds (288KB/sec)
receiving incremental stream of SSD/iocage/releases/11.2-RELEASE/root@migration into TANK/iocage/releases/11.2-RELEASE/root@migration
received 1.40MB stream in 1 seconds (1.40MB/sec)
found clone origin TANK/iocage/releases/11.2-RELEASE/root@qbittorrent
receiving incremental stream of SSD/iocage/jails/qbittorrent/root@migration into TANK/iocage/jails/qbittorrent/root@migration
received 585MB stream in 8 seconds (73.1MB/sec)
found clone origin TANK/iocage/releases/11.2-RELEASE/root@TestJail01
receiving incremental stream of SSD/iocage/jails/TestJail01/root@migration into TANK/iocage/jails/TestJail01/root@migration
received 63.2KB stream in 1 seconds (63.2KB/sec)
found clone origin TANK/iocage/releases/11.2-RELEASE/root@sonarr
receiving incremental stream of SSD/iocage/jails/sonarr/root@migration into TANK/iocage/jails/sonarr/root@migration
received 613MB stream in 10 seconds (61.3MB/sec)
found clone origin TANK/iocage/releases/11.2-RELEASE/root@subsonic
receiving incremental stream of SSD/iocage/jails/subsonic/root@migration into TANK/iocage/jails/subsonic/root@migration
received 654MB stream in 10 seconds (65.4MB/sec)
root@freenasnew:/mnt/SSD/iocage/jails # iocage clean -a

This will destroy ALL iocage data!

Are you sure? [y/N]: y
Cleaning iocage/templates
Cleaning iocage/releases
Cleaning iocage/log
Cleaning iocage/jails
Cleaning iocage/images
Cleaning iocage/download
Cleaning iocage
All iocage datasets have been destroyed.
root@freenasnew:/mnt/SSD/iocage/jails # zfs destroy -f SSD/iocage
root@freenasnew:/mnt/SSD/iocage/jails # iocage activate SSD
ZFS pool 'SSD' successfully activated.
root@freenasnew:/mnt/SSD/iocage/jails # zfs destroy -r TANK/iocage@migration
root@freenasnew:/mnt/SSD/iocage/jails # iocage list
Creating SSD/iocage
Creating SSD/iocage/download
Creating SSD/iocage/images
Creating SSD/iocage/jails
Creating SSD/iocage/log
Creating SSD/iocage/releases
Creating SSD/iocage/templates
+-----+------+-------+---------+-----+
| JID | NAME | STATE | RELEASE | IP4 |
+=====+======+=======+=========+=====+
+-----+------+-------+---------+-----+
root@freenasnew:/mnt/SSD/iocage/jails # iocage list
+-----+------+-------+---------+-----+
| JID | NAME | STATE | RELEASE | IP4 |
+=====+======+=======+=========+=====+
+-----+------+-------+---------+-----+
root@freenasnew:/mnt/SSD/iocage/jails # iocage list
+-----+------+-------+---------+-----+
| JID | NAME | STATE | RELEASE | IP4 |
+=====+======+=======+=========+=====+
+-----+------+-------+---------+-----+
root@freenasnew:/mnt/SSD/iocage/jails #
 

seedz

Dabbler
Joined
May 2, 2018
Messages
39
EDIT: 5 minutes later, you can clearly see my mistake in the second-to-last command: I re-activated the old pool, not the new one.
Embarrassing, but I hope this helps others. Thanks Jurgen, great work, again!

Exactly what I was going to point out from the email notification, but it seems you caught it in the end!

Seems you got lucky on your destruction, or I was unlucky on mine, since I had failures in the zfs destroy command happen to me :>
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
What happened? Did you lose data?
 

seedz

Dabbler
Joined
May 2, 2018
Messages
39
Nope, I lost nothing; I always save to another location before making switches like that ^^
 

TimJC

Cadet
Joined
Dec 21, 2012
Messages
5
I migrated all my data to a new pool yesterday, but I am unable to get my jails running. The jails are listed in the GUI, but they won't start. When I attempt to start them via SSH I get an error.

root@freenas:/mnt/storage/iocage # iocage start plex
* Starting plex
+ Start FAILED
mount: /mnt/iocage/jails/plex: No such file or directory
jail: /sbin/mount -t nullfs -o rw /mnt/storage/apps/plex /mnt/iocage/jails/plex/root/config: failed

I assume I need to mount /mnt/storage/iocage to /mnt/iocage, but how do I accomplish this in FreeNAS?
 
Joined
Jul 10, 2016
Messages
521
Correct, your mountpoints were likely changed and now you need to update the jail's fstab file.

For starters, execute zfs list -r `iocage get -p`/iocage from the command line and post the output in CODE tags. This will show the mountpoints for the iocage dataset on your active pool.

Then run iocage fstab -l plex, which will show you what is mounted in the jail. For plugins, there will be a whole bunch of them.
Compare the paths of the two outputs and make the needed corrections. Look at iocage fstab --help for options on how to do this, or navigate to the fstab file and edit it directly.
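
If several jails need the same path correction, a bulk edit is an option; a sketch assuming the fstab files sit at /mnt/storage/iocage/jails/<jailname>/fstab in this setup (stop the jails and back up the files first):
Code:
iocage stop ALL
# FreeBSD's sed needs -i '' for in-place editing
sed -i '' 's|/mnt/iocage/jails|/mnt/storage/iocage/jails|g' /mnt/storage/iocage/jails/*/fstab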

Ideally, avoid using the built-in shell; instead use PuTTY or something that allows you to copy/paste and wraps properly. When in doubt, post the requested outputs in CODE tags so we can see what's going on.
 

TimJC

Cadet
Joined
Dec 21, 2012
Messages
5
Jurgen,
Thanks for the reply.

Before the migration, iocage was listed as a dataset at the same level as my root storage pool. I thought this was odd, since it was a part of the storage pool. It looks like it was symlinked or something from its location at /mnt/storage/iocage to /mnt/iocage. Is there some way to rebuild this link without editing fstab for each of my jails?

root@freenas:~ # zfs list -r `iocage get -p`/iocage

NAME USED AVAIL REFER MOUNTPOINT
storage/iocage 6.16G 25.5T 3.92M /mnt/storage/iocage
storage/iocage/download 126M 25.5T 176K /mnt/storage/iocage/download
storage/iocage/download/11.2-RELEASE 125M 25.5T 125M /mnt/storage/iocage/download/11.2-RELEASE
storage/iocage/images 176K 25.5T 176K /mnt/storage/iocage/images
storage/iocage/jails 5.65G 25.5T 192K /mnt/storage/iocage/jails
storage/iocage/jails/jackett 497M 25.5T 192K /mnt/storage/iocage/jails/jackett
storage/iocage/jails/jackett/root 497M 25.5T 885M /mnt/storage/iocage/jails/jackett/root
storage/iocage/jails/lidarr 554M 25.5T 192K /mnt/storage/iocage/jails/lidarr
storage/iocage/jails/lidarr/root 554M 25.5T 942M /mnt/storage/iocage/jails/lidarr/root
storage/iocage/jails/ombi 496M 25.5T 192K /mnt/storage/iocage/jails/ombi
storage/iocage/jails/ombi/root 496M 25.5T 884M /mnt/storage/iocage/jails/ombi/root
storage/iocage/jails/organizr 367M 25.5T 192K /mnt/storage/iocage/jails/organizr
storage/iocage/jails/organizr/root 367M 25.5T 755M /mnt/storage/iocage/jails/organizr/root
storage/iocage/jails/plex 910M 25.5T 192K /mnt/storage/iocage/jails/plex
storage/iocage/jails/plex/root 909M 25.5T 1.27G /mnt/storage/iocage/jails/plex/root
storage/iocage/jails/radarr 550M 25.5T 192K /mnt/storage/iocage/jails/radarr
storage/iocage/jails/radarr/root 549M 25.5T 938M /mnt/storage/iocage/jails/radarr/root
storage/iocage/jails/sonarr 556M 25.5T 192K /mnt/storage/iocage/jails/sonarr
storage/iocage/jails/sonarr/root 555M 25.5T 944M /mnt/storage/iocage/jails/sonarr/root
storage/iocage/jails/subsonic 550M 25.5T 320K /mnt/storage/iocage/jails/subsonic
storage/iocage/jails/subsonic/root 549M 25.5T 560M /mnt/storage/iocage/jails/subsonic/root
storage/iocage/jails/tautulli 407M 25.5T 192K /mnt/storage/iocage/jails/tautulli
storage/iocage/jails/tautulli/root 406M 25.5T 795M /mnt/storage/iocage/jails/tautulli/root
storage/iocage/jails/transmission 895M 25.5T 192K /mnt/storage/iocage/jails/transmission
storage/iocage/jails/transmission/root 895M 25.5T 1.25G /mnt/storage/iocage/jails/transmission/root
storage/iocage/log 272K 25.5T 272K /mnt/storage/iocage/log
storage/iocage/releases 401M 25.5T 176K /mnt/storage/iocage/releases
storage/iocage/releases/11.2-RELEASE 401M 25.5T 176K /mnt/storage/iocage/releases/11.2-RELEASE
storage/iocage/releases/11.2-RELEASE/root 401M 25.5T 395M /mnt/storage/iocage/releases/11.2-RELEASE/root
storage/iocage/templates 176K 25.5T 176K /mnt/storage/iocage/templates


root@freenas:~ # iocage fstab -l plex

+-------+------------------------------------------------------------------------------------------+
| INDEX | FSTAB ENTRY |
+=======+==========================================================================================+
| 0 | /mnt/storage/apps/plex /mnt/iocage/jails/plex/root/config nullfs rw 0 0 |
+-------+------------------------------------------------------------------------------------------+
| 1 | /mnt/storage/media /mnt/iocage/jails/plex/root/mnt/media nullfs rw 0 0 |
+-------+------------------------------------------------------------------------------------------+
| 2 | /mnt/storage/media/tv_shows /mnt/iocage/jails/plex/root/mnt/media/tv_shows nullfs rw 0 0 |
+-------+------------------------------------------------------------------------------------------+
| 3 | /mnt/storage/media/movies /mnt/iocage/jails/plex/root/mnt/media/movies nullfs rw 0 0 |
+-------+------------------------------------------------------------------------------------------+
| 4 | /mnt/storage/media/music /mnt/iocage/jails/plex/root/mnt/media/music nullfs rw 0 0 |
+-------+------------------------------------------------------------------------------------------+
| 5 | /mnt/storage/timjc/Videos /mnt/iocage/jails/plex/root/mnt/timjc/videos nullfs rw 0 0 |
+-------+------------------------------------------------------------------------------------------+
| 6 | /mnt/storage/timjc/Pictures /mnt/iocage/jails/plex/root/mnt/timjc/pictures nullfs rw 0 0 |
+-------+------------------------------------------------------------------------------------------+
 

TimJC

Cadet
Joined
Dec 21, 2012
Messages
5
I think I figured it out. I needed to set a ZFS mountpoint. After doing this, iocage is listed in the GUI as a separate dataset from my root storage dataset. The jails are now starting correctly.

root@freenas:~ # zfs set mountpoint=/iocage storage/iocage

root@freenas:~ # zfs get mountpoint storage/iocage
NAME PROPERTY VALUE SOURCE
storage/iocage mountpoint /mnt/iocage local

Note: zfs set mountpoint= appears to be relative to the pool's location, which is /mnt. When I attempted zfs set mountpoint=mnt/iocage storage/iocage, the mountpoint was set to /mnt/mnt/iocage.
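
A read-only way to confirm the prefix that gets prepended (illustrative, using this thread's pool name) is to query the pool's altroot property:
Code:
zpool get altroot storage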

The question now is, will this setting persist across a reboot?
 
Joined
Jul 10, 2016
Messages
521
Yes, originally the iocage dataset was mounted in /mnt/iocage but later versions changed that to /mnt/<activepool>/iocage. I added a note to the post above.
Leave off the /mnt prefix; FreeNAS adds that automatically (determined by the altroot property on your pool).
For future reference, a safe way to change the mountpoint from /mnt/storage/iocage to /mnt/iocage is below.
Don't run any other commands in between, especially iocage commands, as those may try to fix/recreate the file structure.

Code:
cd /root
iocage stop ALL
zfs unmount -f storage/iocage
zfs set mountpoint=/iocage storage/iocage
zfs mount -a
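
Afterwards, a quick way to check that the dataset landed where expected before starting the jails again (optional, not part of the recipe above):
Code:
zfs get mountpoint storage/iocage
iocage list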
 

seedz

Dabbler
Joined
May 2, 2018
Messages
39
Oh, I didn't know you could set the /mnt/iocage mountpoint back.
I'm using the /mnt/<storagepool>/iocage folder now anyway, but it's nice to know!
 

MarcusJ

Cadet
Joined
Apr 4, 2018
Messages
8
Yes, originally the iocage dataset was mounted in /mnt/iocage but later versions changed that to /mnt/<activepool>/iocage. I added a note to the post above.
Leave off the /mnt prefix; FreeNAS adds that automatically (determined by the altroot property on your pool).
For future reference, a safe way to change the mountpoint from /mnt/storage/iocage to /mnt/iocage is below.
Don't run any other commands in between, especially iocage commands, as those may try to fix/recreate the file structure.

Code:
cd /root
iocage stop ALL
zfs unmount -f storage/iocage
zfs set mountpoint=/iocage storage/iocage
zfs mount -a

Just wanted to say thanks. I read through some guides and thought I had it from start to finish, but ended up with all my jails showing as CORRUPTED. After correcting the mount, all is well. Phew... I had backups but didn't want to go through the mess.
 

MarcusJ

Cadet
Joined
Apr 4, 2018
Messages
8
Wow... I always seem to speak too soon.

All was well until a reboot; then the jails appeared as corrupted again. I then had to go through and fix the mount point again.

Any ideas?

EDIT:

Well, figured it out. I noticed I had a small iocage dataset still on the old pool. I guess I had activated the wrong pool (in error). I ended up rebooting, then deleting the old iocage dataset, and then the jails showed up correctly. One little error caused a little stress :)
 