Migrating data to another pool on SCALE

fidgety

Cadet
Joined
Feb 25, 2021
Messages
6
Hi, looking for a bit of advice about migrating my data to another pool in SCALE. I have TrueNAS-SCALE-22.02.0.
I have replication set up and it is working fine - I can see the expected files at the destination.
I have moved the system dataset pool to boot-pool and rebooted. At this point I'd expect that all system processes are running on boot-pool.
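As a sanity check, the active system dataset pool can also be read from a shell with the middleware client (midclt ships with SCALE; this just reads the current config and changes nothing):

Code:
# Query the middleware for the current system dataset configuration.
# The JSON output includes a "pool" key, which should now say "boot-pool".
midclt call systemdataset.config | python3 -m json.tool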
I'm aiming to follow the process outlined here: Howto: migrate data from one pool to a bigger pool
However, when I try to export/disconnect the replication source, tank, I get the warning:

[screenshot: export-disconnect.JPG]


and there is a long list of processes using tank:

systemd ×5, dbus-daemon, blkmapd ×3, systemd-udevd, middlewared (wo ×2, python3 ×5, dhclient ×12, middlewared (wo ×6, mdadm, systemd-journal ×5, rpcbind, smartd ×3, nscd ×3, systemd-logind ×5, zed ×7, syslog-ng ×3, winbindd ×3, nginx ×8, cli ×7, cron, winbindd ×3, rrdcached ×3, winbindd ×3, systemd-machine, libvirtd, winbindd ×3, avahi-daemon ×2, wsdd.py, smbd ×3, smbd-notifyd ×3, cleanupd ×3, virtlogd, collectd, ntpd ×3, smbd ×5

Clearly, this is not going to work... if I go ahead and export/disconnect anyway, the GUI crashes and systemd keeps restarting on the console until I get fed up and reboot.

Here is confirmation that I've moved the system dataset to boot-pool:

[screenshot: System dataset pool.JPG]


but it doesn't look like this is working as expected.

Have I missed something, or is this a bug?

Thanks in advance!

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'm wondering if you missed a step when migrating the system dataset?

fidgety

Cadet
Joined
Feb 25, 2021
Messages
6
It's a single drop-down to select a pool, then "save"...

[screenshot: dropdown.JPG]

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
That's a headscratcher for sure.

Do you have any shares, jails, VMs, or similar running on the TrueNAS System?

I'd expect to see at least a few of those processes using the main pool (for example, I'm pretty sure TrueNAS uses systemd to mount the ZFS volumes), but things like Winbind and SMB indicate that there's some kind of share using the pool "tank".

In theory, the "Export/Disconnect" service should be able to handle these, so you may be looking at a bug of some sort.

fidgety

Cadet
Joined
Feb 25, 2021
Messages
6
Yes, I have a couple of VMs. However, I shut them down before starting the export/disconnect process, so any processes associated with them should have stopped. I could understand why SMB would use the pool for a share, but things like NTP, not so much.
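For what it's worth, libvirt's own view of the guests can be confirmed from a shell (SCALE runs VMs through libvirt, so virsh should be available as root; I'm assuming the default qemu:///system connection):

Code:
# List every defined guest and its state; all should report "shut off".
virsh list --all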

I think you are right, a bug report is the next step. I just wanted to be sure there isn't a control hidden somewhere that fixes this problem, that has to be used before starting export/disconnect. Thank you.


ropeguru

Dabbler
Joined
Jan 25, 2022
Messages
29
fidgety said:
Yes, I have a couple of VMs. However, I shut them down before starting the export/disconnect process, so any processes associated with them should have stopped. I could understand why SMB would use the pool for a share, but things like NTP, not so much.

I think you are right, a bug report is the next step. I just wanted to be sure there isn't a control hidden somewhere that fixes this problem, that has to be used before starting export/disconnect. Thank you.
Did you find a fix for this? I am in the same situation you were.

I have one VM, which is shut down, but I still get the long list of services when trying to export a pool. The initial move to boot-pool also ended with an error about being unable to unmount the syslog-ng dataset.

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You're running SCALE here, so you need to consider Apps too... Docker container processes will show up in things like lsof and top just as local processes on the host will.
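For example, something along these lines will show exactly what's holding files open under the pool (adjust /mnt/tank to your pool's mountpoint; lsof and fuser both ship with SCALE):

Code:
# One line per open file on the filesystem mounted at /mnt/tank:
lsof +f -- /mnt/tank
# Or the compact per-process view (open files, cwd, mmaps):
fuser -vm /mnt/tank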

ropeguru

Dabbler
Joined
Jan 25, 2022
Messages
29
sretalla said:
You're running SCALE here, so you need to consider Apps too... Docker container processes will show up in things like lsof and top just as local processes on the host will.

I have one VM configured, and as far as apps go, I do not even have a pool defined for those. It is a very simple setup...


This is what I get even when trying to move from boot-pool to a ZFS pool. It's the same error as when trying to move TO the boot-pool. What is interesting is that after the error occurs, the config shows the system dataset is now on the destination pool.

[EFAULT] Unable to umount boot-pool/.system/syslog-d762992650eb49729863cf3946f503d1: umount: /var/db/system/syslog-d762992650eb49729863cf3946f503d1: target is busy.



Code:
Error: Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/sysdataset.py", line 436, in __umount
    await run('umount', '-f', dataset)
  File "/usr/lib/python3/dist-packages/middlewared/utils/__init__.py", line 64, in run
    cp.check_returncode()
  File "/usr/lib/python3.9/subprocess.py", line 460, in check_returncode
    raise CalledProcessError(self.returncode, self.args, self.stdout,
subprocess.CalledProcessError: Command '('umount', '-f', 'boot-pool/.system/syslog-d762992650eb49729863cf3946f503d1')' returned non-zero exit status 32.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 423, in run
    await self.future
  File "/usr/lib/python3/dist-packages/middlewared/job.py", line 459, in __run_body
    rv = await self.method(*([self] + args))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1129, in nf
    res = await f(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/sysdataset.py", line 212, in do_update
    await self.setup(data.get('pool_exclude'))
  File "/usr/lib/python3/dist-packages/middlewared/schema.py", line 1261, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/sysdataset.py", line 305, in setup
    await self.__umount(mounted_pool, config['uuid'])
  File "/usr/lib/python3/dist-packages/middlewared/plugins/sysdataset.py", line 442, in __umount
    raise CallError(f'Unable to umount {dataset}: {stderr}')
middlewared.service_exception.CallError: [EFAULT] Unable to umount boot-pool/.system/syslog-d762992650eb49729863cf3946f503d1: umount: /var/db/system/syslog-d762992650eb49729863cf3946f503d1: target is busy.

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
You can log a bug report for it. It shouldn't happen like that.
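If you want to dig in before a fix lands, something like this should show what's pinning that mount (the path comes from your error; syslog-ng itself is the likely culprit, and I'm assuming the service name is syslog-ng on SCALE's Debian base):

Code:
# Which processes are keeping the syslog dataset mount busy?
fuser -vm /var/db/system/syslog-d762992650eb49729863cf3946f503d1
# If it is syslog-ng, stop it, retry the system dataset move, then restart it:
systemctl stop syslog-ng
systemctl start syslog-ng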

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
On my SCALE box I removed a pool recently and got a similar message; I ignored it and went ahead. The pool was deleted. I did not use the pool for the system dataset and had no apps/VMs on it.
 