Replication task fails with not enough values to unpack (expected 4, got 1).

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
Hi,

I'm trying to move a whole pool from one machine to another. Once it's moved I want to get rid of the old pool, so the pool in the new environment must be writeable.

When I set up the replication task in the GUI, I got this error:
not enough values to unpack (expected 4, got 1).

I already took a look at 1 and 2. I think I did what 2 says, but I don't fully understand 1.
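In command-line terms, the end state I'm after is roughly this (the pool and dataset names below are just placeholders, and I'd expect the replication task itself to do the copying):

# after the final replication run, the copy on the new machine must stop being read-only
zfs set readonly=off newpool/somedataset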

My replication task is this:

[screenshot of the replication task settings]


Any clue?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
Thread 2 gives us valuable leads: how did you set up the SSH connection?
Solved. I noticed that on the SSH connection the user's shell was set to the TrueNAS CLI instead of zsh. Changing it to zsh fixed the issue.
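For anyone who hits the same unpack error: a quick way to check that the SSH connection drops the replication user into a plain shell that can run zfs commands (the key path, user and hostname below are just placeholders) is something like:

# should print the login shell (e.g. /usr/bin/zsh) and a few dataset names,
# not the TrueNAS CLI menu or an error
ssh -i /root/.ssh/replication_key admin@old-nas 'echo $SHELL; zfs list -H -o name | head'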

Now I have another error:
Re-encrypting already encrypted source dataset 'Homepool' while preserving its properties is not supported.
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Last edited:

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
Try unticking the Inherit Encryption checkmark.
If I untick "Inherit Encryption", nothing changes.

If I untick "Encryption", this is the error:
Unable to send encrypted dataset 'Homepool' to existing unencrypted or unrelated dataset 'storagepool'.

I suspect this has something to do with incorrect dataset nesting?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Joined
Oct 22, 2019
Messages
3,641
I suspect this has something to do with incorrect dataset nesting?
Because you're trying to "overwrite" a non-encrypted top-level root dataset (storagepool).

This is the problem with how ZFS hierarchies were designed. I still disagree with it. (That's for another topic.)

You have two options:
  1. Replicate to storagepool/child as the target, which will essentially nest EVERYTHING one level down on your new pool. (This can look tacky.)
  2. Don't replicate Homepool. Replicate each child one level under Homepool, one by one, into their respective targets.
    • Homepool/downloads -> storagepool/downloads
    • Homepool/media -> storagepool/media
    • Etc...
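For what it's worth, option 2 corresponds roughly to a per-child raw send/receive like the sketch below (the snapshot name and SSH target are made up, and the GUI replication task does the equivalent for you):

# recursive snapshot on the old machine; the snapshot name is arbitrary
zfs snapshot -r Homepool@migrate
# raw (-w) sends keep encrypted data encrypted in transit
zfs send -R -w Homepool/downloads@migrate | ssh new-nas zfs recv -u storagepool/downloads
zfs send -R -w Homepool/media@migrate | ssh new-nas zfs recv -u storagepool/media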
 

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
Because you're trying to "overwrite" a non-encrypted top-level root dataset (storagepool).

This is the problem with how ZFS hierarchies were designed. I still disagree with it. (That's for another topic.)

You have two options:
  1. Replicate to storagepool/child as the target, which will essentially nest EVERYTHING one level down on your new pool. (This can look tacky.)
  2. Don't replicate Homepool. Replicate each child one level under Homepool, one by one, into their respective targets.
    • Homepool/downloads -> storagepool/downloads
    • Homepool/media -> storagepool/media
    • Etc...
Thanks.

Is the problem that the dataset is "non-encrypted", that it's "top level", or both?

If it's because it is non-encrypted, what if I simply encrypt it?

The main issue with approach 2 is that Homepool itself is not empty (there's data in the top-level dataset; when you're young you make mistakes), so I have to replicate the top level.

What are the downsides of approach 1?
 
Joined
Oct 22, 2019
Messages
3,641
It's because a top-level root dataset is automatically created upon pool creation (using the same name as the pool.)

Thus, it's impossible to "replace" a top-level root dataset "in-place". (You might as well create a brand new pool.)

In your case, you want to replicate the entirety of your old pool to a new pool. The problem is, replication does not create a new pool: it requires an existing pool to "point to" as the target. (And guess what? To point to a "pool" essentially means you are pointing to a "dataset".) Now you're back to the circular problem of an "unreplaceable" top-level root dataset.

So the only way around that is to nest one level under the new pool's root dataset (tacky, redundant); or to one by one replicate each child under the new pool's root dataset.

EDIT: For what it's worth, the TrueNAS GUI does not make it obvious that you can manually enter the destination dataset. (They make it seem like you need to "click" on your desired target.) But you can manually type in the name of the target dataset even if (and especially if) it doesn't exist!

For the source you might select: Homepool/multimedia
For the destination you can click "storagepool" and then manually type the rest: /multimedia

This will create the non-extant dataset "multimedia" the first time the replication is run.
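If it helps to see what I mean about the pool name really being a dataset, zfs itself lists the root dataset under the pool's name:

# the first entry is the root dataset that was created together with the pool
zfs list -r -o name,encryption,encryptionroot storagepool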
 
Last edited:

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
It's because a top-level root dataset is automatically created upon pool creation (using the same name as the pool.)

Thus, it's impossible to "replace" a top-level root dataset "in-place". (You might as well create a brand new pool.)

In your case, you want to replicate the entirety of your old pool to a new pool. The problem is, replication does not create a new pool: it requires an existing pool to "point to" as the target. (And guess what? To point to a "pool" essentially means you are pointing to a "dataset".) Now you're back to the circular problem of an "unreplaceable" top-level root dataset.

So the only way around that is to nest one level under the new pool's root dataset (tacky, redundant); or to one by one replicate each child under the new pool's root dataset.

EDIT: For what it's worth, the TrueNAS GUI does not make it obvious that you can manually enter the destination dataset. (They make it seem like you need to "click" on your desired target.) But you can manually type in the name of the target dataset even if (and especially if) it doesn't exist!

For the source you might select: Homepool/multimedia
For the destination you can click "storagepool" and then manually type the rest: /multimedia

This will create the non-extant dataset "multimedia" the first time the replication is run.
OK, I'm trying option 1 even though it's not the recommended one. I want to try it because it involves the least effort and risk, considering that I have data in the top-level Homepool dataset.

Anyway, the error persists:

Unable to send encrypted dataset 'Homepool' to existing unencrypted or unrelated dataset 'storagepool/dataset01'.
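In case it helps anyone else with the same message, the encryption state of an already-existing target can be checked with something like this ('dataset01' is the dataset I created by hand):

# an existing unencrypted (or unrelated) dataset can't receive the raw encrypted stream in place
zfs get -r encryption,encryptionroot,keystatus storagepool/dataset01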
 
Joined
Oct 22, 2019
Messages
3,641
Did you create "dataset01" on the target pool?
 

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
EDIT: For what it's worth, the TrueNAS GUI does not make it obvious that you can manually enter the destination dataset. (They make it seem like you need to "click" on your desired target.) But you can manually type in the name of the target dataset even if (and especially if) it doesn't exist!

For the source you might select: Homepool/multimedia
For the destination you can click "storagepool" and then manually type the rest: /multimedia

This will create the non-extant dataset "multimedia" the first time the replication is run.
Oh, OK! I created dataset01 manually before selecting it in the GUI. Is that a mistake?
 

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
OK, I deleted the dataset and let the replication task create it. Now it's syncing.
 

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
While it copies data, I sometimes get this error when clicking on storagepool in the GUI:

Error: concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 75, in get_quota
    with libzfs.ZFS() as zfs:
  File "libzfs.pyx", line 529, in libzfs.ZFS.__exit__
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 77, in get_quota
    quotas = resource.userspace(quota_props)
  File "libzfs.pyx", line 3642, in libzfs.ZFSResource.userspace
libzfs.ZFSException: cannot get used/quota for storagepool: dataset is busy

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
    r = call_item.fn(*call_item.args, **call_item.kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 112, in main_worker
    res = MIDDLEWARE._run(*call_args)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 46, in _run
    return self._call(name, serviceobj, methodobj, args, job=job)
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 34, in _call
    with Client(f'ws+unix://{MIDDLEWARE_RUN_DIR}/middlewared-internal.sock', py_exceptions=True) as c:
  File "/usr/lib/python3/dist-packages/middlewared/worker.py", line 40, in _call
    return methodobj(*params)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/zfs_/dataset_quota.py", line 79, in get_quota
    raise CallError(f'Failed retreiving {quota_type} quotas for {ds}')
middlewared.service_exception.CallError: [EFAULT] Failed retreiving GROUP quotas for storagepool
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 201, in call_method
    result = await self.middleware._call(message['method'], serviceobj, methodobj, params, app=self)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1342, in _call
    return await methodobj(*prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/schema/processor.py", line 177, in nf
    return await func(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/plugins/pool_/dataset_quota_and_perms.py", line 223, in get_quota
    quota_list = await self.middleware.call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1399, in call
    return await self._call(
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1350, in _call
    return await self._call_worker(name, *prepared_call.args)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1356, in _call_worker
    return await self.run_in_proc(main_worker, name, args, job)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1267, in run_in_proc
    return await self.run_in_executor(self.__procpool, method, *args, **kwargs)
  File "/usr/lib/python3/dist-packages/middlewared/main.py", line 1251, in run_in_executor
    return await loop.run_in_executor(pool, functools.partial(method, *args, **kwargs))
middlewared.service_exception.CallError: [EFAULT] Failed retreiving GROUP quotas for storagepool

What is this?
 
Joined
Oct 22, 2019
Messages
3,641
That's a different issue. I would start a new thread.

As a matter of fact, don't try to "do" anything with the new pool until it's finished replicating.
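If you want to keep an eye on progress without touching the datasets themselves, read-only pool statistics are safe enough (the 5 is just a refresh interval in seconds):

# watch write activity on the destination pool while the replication runs
zpool iostat -v storagepool 5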
 

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
That's a different issue. I would start a new thread.

As a matter of fact, don't try to "do" anything with the new pool until it's finished replicating.
OK, I will. The replication finished. Once it was done I unlocked the pool with my encryption key.

Anyway, now I have the apps folder 'ix-applications' in a nested location. How can I mount it in the apps service?
 

quasarlex

Dabbler
Joined
Dec 19, 2023
Messages
33
OK, I will. The replication finished. Once it was done I unlocked the pool with my encryption key.

Anyway, now I have the apps folder 'ix-applications' in a nested location. How can I mount it in the apps service?
Anyone?
 