Importing Pools Created in SCALE into CORE

c0re

Dabbler
Joined
Feb 11, 2013
Messages
26
Does anybody have experience with migrating pools created in SCALE back to CORE?

In my case, pools that were created in CORE work fine in SCALE: they can be accessed, edited, etc. without any issues.

The reverse, however, is not true. When a pool created in SCALE is exported and then imported into CORE, the import succeeds and the pool is visible. However, the pool shows 0 B free.

e.g. 16.11 TiB (100%) Used | 0 B Free

This is despite the fact that the pool has free space and works fine in SCALE.

I've checked all the ZFS settings with zfs get, and working pools appear identical to non-working pools in every respect. I also tried manually changing the permission type to NFSv4 via the CLI, but the pool still shows 0 B free.
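For what it's worth, a quick way to compare two pools' settings is to diff their full property lists (the pool names here are just placeholders, and the process substitution needs a bash shell rather than csh):

Code:
diff <(zfs get -H -o property,value all WorkingPool) <(zfs get -H -o property,value all BrokenPool)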

I can see the files, but I cannot perform any operations on the pool because everything reports 0 bytes free.

Any thoughts as to how this could be rectified?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
import works and it is visible. However, the pool shows 0 B free.
So it imported as read-only due to having a higher ZFS version than CORE knows about.
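You can confirm that from a shell; if the import fell back to read-only, the pool property will show it (substitute your pool name):

Code:
zpool get readonly <poolname>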

Any thoughts as to how this could be rectified?
Wait for CORE to know the newer ZFS version.
 

c0re

Dabbler
Joined
Feb 11, 2013
Messages
26
Interesting, thanks!

I was wondering about that, but assumed they would all be the same because SCALE isn't asking for a zpool upgrade. Perhaps that prompt just isn't in the GUI.

I also found this thread yesterday and was trying to check the version:

zpool get version <poolname>

But it returns a - (dash), which I understood to mean the pool is using feature flags, so I couldn't figure out how to tell which version was actually in use.
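The closest I could get was to look at the individual feature flags instead, along these lines (the pool name is a placeholder):

Code:
zpool upgrade -v                           # feature flags supported by the running ZFS
zpool get all <poolname> | grep feature@   # feature flags enabled/active on the pool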

Anyways, thanks for the insight!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194

c0re

Dabbler
Joined
Feb 11, 2013
Messages
26
OK, thanks for the link. It explains the concept of feature flags vs. versions well and is great to know!

I'm pretty certain I'm not using edonr. After failing to find a version number last night, I did compare the feature flags, and they are identical between pools that work and pools that don't, which left me even more perplexed.

I pasted the feature output below. Pool-6 is a pool that is fully working; Pool-7 is a pool that shows 100% utilization and 0 B available. As you say, it perhaps got marked as read-only, but there is no obvious explanation as to why.

Code:
zpool get all Pool-6 | grep feature

Pool-6  feature@async_destroy          enabled                        local
Pool-6  feature@empty_bpobj            active                         local
Pool-6  feature@lz4_compress           active                         local
Pool-6  feature@multi_vdev_crash_dump  enabled                        local
Pool-6  feature@spacemap_histogram     active                         local
Pool-6  feature@enabled_txg            active                         local
Pool-6  feature@hole_birth             active                         local
Pool-6  feature@extensible_dataset     active                         local
Pool-6  feature@embedded_data          active                         local
Pool-6  feature@bookmarks              enabled                        local
Pool-6  feature@filesystem_limits      enabled                        local
Pool-6  feature@large_blocks           enabled                        local
Pool-6  feature@large_dnode            enabled                        local
Pool-6  feature@sha512                 enabled                        local
Pool-6  feature@skein                  enabled                        local
Pool-6  feature@edonr                  enabled                        local
Pool-6  feature@userobj_accounting     active                         local
Pool-6  feature@encryption             enabled                        local
Pool-6  feature@project_quota          active                         local
Pool-6  feature@device_removal         enabled                        local
Pool-6  feature@obsolete_counts        enabled                        local
Pool-6  feature@zpool_checkpoint       enabled                        local
Pool-6  feature@spacemap_v2            active                         local
Pool-6  feature@allocation_classes     enabled                        local
Pool-6  feature@resilver_defer         enabled                        local
Pool-6  feature@bookmark_v2            enabled                        local
Pool-6  feature@redaction_bookmarks    enabled                        local
Pool-6  feature@redacted_datasets      enabled                        local
Pool-6  feature@bookmark_written       enabled                        local
Pool-6  feature@log_spacemap           active                         local
Pool-6  feature@livelist               enabled                        local
Pool-6  feature@device_rebuild         enabled                        local
Pool-6  feature@zstd_compress          enabled                        local
Pool-6  feature@draid                  enabled                        local

zpool get all Pool-7 | grep feature

Pool-7  feature@async_destroy          enabled                        local
Pool-7  feature@empty_bpobj            active                         local
Pool-7  feature@lz4_compress           active                         local
Pool-7  feature@multi_vdev_crash_dump  enabled                        local
Pool-7  feature@spacemap_histogram     active                         local
Pool-7  feature@enabled_txg            active                         local
Pool-7  feature@hole_birth             active                         local
Pool-7  feature@extensible_dataset     active                         local
Pool-7  feature@embedded_data          active                         local
Pool-7  feature@bookmarks              enabled                        local
Pool-7  feature@filesystem_limits      enabled                        local
Pool-7  feature@large_blocks           enabled                        local
Pool-7  feature@large_dnode            enabled                        local
Pool-7  feature@sha512                 enabled                        local
Pool-7  feature@skein                  enabled                        local
Pool-7  feature@edonr                  enabled                        local
Pool-7  feature@userobj_accounting     active                         local
Pool-7  feature@encryption             enabled                        local
Pool-7  feature@project_quota          active                         local
Pool-7  feature@device_removal         enabled                        local
Pool-7  feature@obsolete_counts        enabled                        local
Pool-7  feature@zpool_checkpoint       enabled                        local
Pool-7  feature@spacemap_v2            active                         local
Pool-7  feature@allocation_classes     enabled                        local
Pool-7  feature@resilver_defer         enabled                        local
Pool-7  feature@bookmark_v2            enabled                        local
Pool-7  feature@redaction_bookmarks    enabled                        local
Pool-7  feature@redacted_datasets      enabled                        local
Pool-7  feature@bookmark_written       enabled                        local
Pool-7  feature@log_spacemap           active                         local
Pool-7  feature@livelist               enabled                        local
Pool-7  feature@device_rebuild         enabled                        local
Pool-7  feature@zstd_compress          enabled                        local
Pool-7  feature@draid                  enabled                        local
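
# A more compact way to do the same comparison would be something like this
# (just a sketch; the process substitution needs a bash shell):
diff <(zpool get -H -o property,value all Pool-6) <(zpool get -H -o property,value all Pool-7)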
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
At this stage, we have only committed to pool imports from CORE to SCALE... since 99.99% of users are starting from CORE.
 

c0re

Dabbler
Joined
Feb 11, 2013
Messages
26
So I have an update on this: I finally figured out what the issue was.

The disks coming from TrueNAS SCALE were quite full. It seems that CORE by default allocates a larger amount of "slop space", so pools that had free space in SCALE showed as having 0 B free in CORE.
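One way to see where the space goes is to compare the pool-level and dataset-level numbers: zpool list reports the raw free space in the pool, while zfs list subtracts the reservation and is what the GUI reflects (Pool-7 is just my pool name):

Code:
zpool list Pool-7                    # FREE is raw pool space, before the slop reserve
zfs list -o name,used,avail Pool-7   # AVAIL subtracts the reserve; this is where 0 B shows up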

The solution was to add a tunable in CORE with the following setting:

Variable: vfs.zfs.spa.slop_shift
Value: 9
Type: SYSCTL
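(The GUI tunable makes the change persistent; to test it immediately you should also be able to set the same sysctl from a shell:)

Code:
sysctl vfs.zfs.spa.slop_shift=9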

This reduced the slop space and recovered the available free space (and then some). The default setting in CORE was 5. I'm not sure exactly how slop space works in ZFS, but higher values leave less slop space.
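To put rough numbers on it, the reserve appears to be about the pool size divided by 2^slop_shift, so for a pool around the size of mine:

Code:
# slop_shift = 5 (CORE default):  16 TiB / 2^5 = 512 GiB held back
# slop_shift = 9:                 16 TiB / 2^9 =  32 GiB held back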

Apparently slop space is important for ZFS to operate properly (it keeps a reserve so the pool can still carry out internal operations and deletions when it is nearly full), so it's probably not a good idea to reduce it too far. In my case, these disks were filled once with archived data and are never written to again, so I don't expect any negative impact.

EDIT: You can also find the current slop setting with the following console command to verify that your change has taken effect:

Code:
sysctl -a | grep "slop"
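
# Or query the specific OID directly instead of grepping everything:
sysctl vfs.zfs.spa.slop_shift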

So if anybody runs into this again, this was the full solution and explanation. Hopefully it helps somebody in the future!
 

eduardooliveira

Dabbler
Joined
Feb 10, 2023
Messages
13
I was looking for this exact information! Thanks for posting and providing the solution / workaround!
I'm not sure whether CORE now (in 2023) fully supports pools that were originally created on SCALE, but at least this workaround is an alternative.
 