Migration process via train change: warnings

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Hello all.


I thought I'd just have a play with SCALE, so I went to the train change update area and noted that the warnings sound particularly scary.


I've seen a few posts online where people claim you can migrate to SCALE, boot up, import your old config and, if you don't like it, change the boot environment back to CORE without issue (assuming you make no significant changes or pool upgrades).


Is that actually how it works? Or is there a chance it will damage or alter the existing bhyve VMs, jails, etc.?


None of my stuff is encrypted by the way.

Anyone know?
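In case it's useful, this is roughly how I've been checking which boot environments exist before touching anything. A minimal sketch, assuming FreeBSD's beadm tool in a CORE shell (the same information is under System > Boot in the web UI):

# Minimal sketch: list TrueNAS CORE boot environments so you can confirm the
# pre-migration environment is still there before relying on a rollback.
# Assumes FreeBSD's `beadm` tool, available from a CORE shell.
import subprocess

def list_boot_environments():
    # `beadm list -H` prints one tab-separated line per boot environment
    # (typically: name, active flags, mountpoint, space used, created).
    out = subprocess.run(
        ["beadm", "list", "-H"], capture_output=True, text=True, check=True
    )
    for line in out.stdout.splitlines():
        fields = line.split("\t")
        print(f"{fields[0]}  (active flags: {fields[1]})")

if __name__ == "__main__":
    list_boot_environments()
    # Rolling back would mean activating the old environment and rebooting, e.g.
    #   subprocess.run(["beadm", "activate", "13.0-U6.1"], check=True)
    # (the environment name above is only an example)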
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Here's the exact notification.
My emphasis in bold

Perhaps nothing is actually written that will damage anything, and I can just try SCALE and go back, but this certainly READS as if it "tries to do stuff for you" during the migration process?

TrueNAS SCALE migrations are still in development and can risk configuration errors or even data loss. Please back up any critical data to an external system before attempting the migration. Migrating to SCALE is intended to be a one-time event. Reverting back to CORE after migration is unsupported.
These CORE configuration items cannot migrate to SCALE:

  • NIS Data
  • Jails/Plugins
  • Tunables
  • System Boot Environments
  • GELI encrypted pools
  • AFP Shares

For more details, please see the CORE migration documentation. Please ensure the system is prepared for the migration and review the system configuration post-migration.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Hello, I'm curious if anyone has any thoughts, please.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I did not test reverting as I paid heed to the warning, and successfully migrated one system at a time.

I would be concerned that the boot device is modified in a fairly fundamental way.

BUT I think if you had a config saved, you could re-upload that to a fresh install and you should be fine.

People say you can just switch trains back to CORE and re-update... I wouldn't be game to test that ;)

I also had a detached mirrored boot device ready when I did a rather iffy migration ;)
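A minimal sketch of the "config saved" part, assuming CORE keeps its configuration database at /data/freenas-v1.db (the supported route is System > General > Save Config in the UI; the backup destination below is just an example):

# Minimal sketch: copy the CORE configuration database somewhere safe before
# experimenting with the SCALE train. Assumes the config DB lives at
# /data/freenas-v1.db; the destination dataset is hypothetical.
import shutil
from datetime import datetime
from pathlib import Path

CONFIG_DB = Path("/data/freenas-v1.db")   # CORE configuration database
DEST_DIR = Path("/mnt/tank/backups")      # example backup dataset

def backup_config():
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = DEST_DIR / f"freenas-v1-{stamp}.db"
    shutil.copy2(CONFIG_DB, dest)         # copy, preserving metadata
    print(f"Config backed up to {dest}")

if __name__ == "__main__":
    backup_config()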
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377

I mean, I literally just want to have a play on my 'main' system to see if it can do it.

I agree regarding the boot device, though I could've sworn people on Reddit have basically said they just use the boot environment option to switch back to an old one.

I'm also curious about the whole issue of bhyve VMs being incompatible with KVM. I wonder if the data can be imported; I've got a VM in there and it's really just files, so I see no reason another hypervisor couldn't read them.
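For reference, this is how I'd check what the VM disks actually are; bhyve disks on TrueNAS are usually zvols rather than loose files, and from what I understand SCALE's KVM can attach a zvol directly. A minimal sketch, assuming the standard zfs command-line tool:

# Minimal sketch: list the zvols on the system, since bhyve VM disks on
# TrueNAS are normally zvols and SCALE's KVM can attach those directly.
# Assumes the standard `zfs` CLI is available.
import subprocess

def list_zvols():
    out = subprocess.run(
        ["zfs", "list", "-t", "volume", "-o", "name,volsize,used"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)

if __name__ == "__main__":
    list_zvols()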
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

All my bhyve VMs worked with minimal changes.

The only things were that I had to reselect any PCI devices (i.e. 5/5/5 becomes 5.0005:05 or something) and change "CPU mode" to "host passthrough" so that guests could see the host's CPU (to enable AES-NI etc.).

And then update the network interface name in the network config inside the VM.

And SCALE doesn't auto-bridge VMs.

This tutorial discusses manual bridging.
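If it helps, the VM device list can be dumped from the middleware to see which PCI passthrough and NIC devices need re-pointing. A minimal sketch, assuming the midclt client and the vm.device.query method (the Virtualization screen shows the same thing; exact field names can vary between releases, so it just prints the raw entries):

# Minimal sketch: dump VM devices via the TrueNAS middleware client so you
# can see which PCI passthrough and NIC entries need to be re-pointed after
# the migration. Assumes `midclt` and the `vm.device.query` method.
import json
import subprocess

def vm_devices():
    out = subprocess.run(
        ["midclt", "call", "vm.device.query"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

if __name__ == "__main__":
    for dev in vm_devices():
        # Field names may differ between releases, so print each entry raw.
        print(json.dumps(dev, indent=2))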

 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377

The bigger question is the ability to roll back easily using the boot environment option.

I only want to do this temporarily.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Should be able to test that in a VM.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
Appreciate the suggestion. Normally I'd listen to the others online who said it was fine, but I thought I'd test in a VM first...

Managed to break it quick smart: got it to install SCALE, booted back to CORE, nuked the SCALE boot environment for testing, and now I can't re-install SCALE.

"Error: 30 is not a valid PoolStatus"
I'll hold off fiddling!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Bug report?
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,377
I'll see if I can recreate it first.

OK, yes, I can recreate it.



Failed to check for alert BootPoolStatus: concurrent.futures.process._RemoteTraceback: """
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/concurrent/futures/process.py", line 246, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 111, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 45, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/worker.py", line 39, in _call
    return methodobj(*params)
  File "/usr/local/lib/python3.9/site-packages/middlewared/schema.py", line 985, in nf
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 87, in query
    pools = [i.__getstate__(**state_kwargs) for i in zfs.pools]
  File "libzfs.pyx", line 402, in libzfs.ZFS.__exit__
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/zfs.py", line 83, in query
    pools = [zfs.get(filters[0][2]).__getstate__(**state_kwargs)]
  File "libzfs.pyx", line 2489, in libzfs.ZFSPool.__getstate__
  File "libzfs.pyx", line 2693, in libzfs.ZFSPool.healthy.__get__
  File "libzfs.pyx", line 2675, in libzfs.ZFSPool.status_code.__get__
  File "/usr/local/lib/python3.9/enum.py", line 384, in __call__
    return cls.__new__(cls, value)
  File "/usr/local/lib/python3.9/enum.py", line 702, in __new__
    raise ve_exc
ValueError: 30 is not a valid PoolStatus
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/middlewared/plugins/alert.py", line 740, in __run_source
    alerts = (await alert_source.check()) or []
  File "/usr/local/lib/python3.9/site-packages/middlewared/alert/source/boot_pool.py", line 16, in check
    pool = await self.middleware.call("zfs.pool.query", [["id", "=", boot_pool]])
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1283, in call
    return await self._call(
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1248, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1254, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1173, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/middlewared/main.py", line 1156, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
ValueError: 30 is not a valid PoolStatus

2024-03-26 22:34:17 (America/Los_Angeles)

This error spits out when you switch back to CORE, simply by choosing it from the boot menu.
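For context on the message itself: the failing line in that traceback is an ordinary Python enum lookup, and after SCALE has touched it the boot pool apparently reports a status code (30) that CORE's PoolStatus enum simply doesn't define. A toy illustration (the real enum lives in CORE's libzfs bindings; the members and values below are made up):

# Toy illustration of the failure mode: looking up an unknown value in an
# enum raises exactly this kind of ValueError. The real PoolStatus enum is
# in CORE's libzfs bindings; the members and values here are invented.
import enum

class PoolStatus(enum.Enum):
    OK = 0
    DEGRADED = 1
    UNAVAILABLE = 2
    # ...no member with value 30, so the lookup below fails

try:
    PoolStatus(30)
except ValueError as exc:
    print(exc)  # -> 30 is not a valid PoolStatus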

When you go to the upgrade menu, you're now locked into the migration to 24 regardless; there's no way to stick with 13 from what I can see.
 