Migration of CORE to SCALE - pool disks screwed up in UI only

holeydood3

Cadet
Joined
Dec 24, 2018
Messages
3
I had an uneventful migration from the latest version of CORE to SCALE 23.10.0.1 using the CORE UI manual update route, but the one issue I'm running into is that the UI isn't recognizing/displaying the pool disks correctly. I've tried exporting and re-importing the pool via the UI, but the same issue persists. zpool status -v shows everything the way I would expect it to look, but the UI seems to be looking for disk serial numbers rather than their newly assigned names in SCALE.

All shares and rebuilt apps can use the pool datasets just fine, so I think my issue is limited to some kind of configuration issue in the UI.

Code:
root@freenas[~]# zpool status -v
  pool: Citadel
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 05:00:16 with 0 errors on Sun Nov  5 04:00:16 2023
config:

        NAME        STATE     READ WRITE CKSUM
        Citadel     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdc2    ONLINE       0     0     0
            sdh2    ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdf2    ONLINE       0     0     0
            sdd2    ONLINE       0     0     0
            sde2    ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 01:40:50 with 0 errors on Fri Nov 17 05:25:50 2023
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            sdb2      ONLINE       0     0     0
            sdg2      ONLINE       0     0     0

errors: No known data errors


Here's what I see in the UI:
Storage > Citadel (my pool) > Manage Devices
Code:
Data VDEVs

    RAIDZ2 - online - no errors
        14368839540223023382 - online - no errors
            Details > Disk Info: Disk is unavailable
        9769702306787169184 - online - no errors
            Details > Disk Info: Disk is unavailable
        ...
        ...
        ...
        ...


Additionally under Storage > Disks, four of the six pool disks show "N/A" for the pool they're associated with.
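
In case it helps, this is roughly what I've been looking at from the SCALE shell to compare the kernel's view of the member disks with what the UI reports (just a sketch, nothing conclusive yet; the sdX names are the ones from the zpool output above):
Code:
# Show each disk/partition with its serial number and GPT partition UUID
lsblk -o NAME,SERIAL,PARTUUID

# List every by-partuuid symlink and the sdX partition it points at
ls -l /dev/disk/by-partuuid/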

Thanks in advance for any thoughts on the topic. I'm probably doing something dumb, but hopefully it's simple to fix.
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Did you create the pool manually instead of using the UI?
The CORE output is actually not normal: one should see something like "gptid/2abcdef0-1234-5678-9abc-0123456789ab" rather than "sda2".
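
For reference, a quick way to see those gptid labels from a CORE/FreeBSD shell is something like this (just a sketch):
Code:
# FreeBSD/CORE: list the gptid label attached to each GPT partition
glabel status | grep gptid

On SCALE, the equivalent stable identifiers are the GPT partition UUIDs under /dev/disk/by-partuuid.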
 

holeydood3

Cadet
Joined
Dec 24, 2018
Messages
3
Well shoot. No, it was created via the UI back in FreeNAS 11.1.

The only change to the pool since creation was swapping out hard drives one by one over time with new ones, following the documentation here. I'm wondering if that might be it. I'll do some digging, but the hint that it should be showing gptids in zpool status might be the missing link I'm looking for!
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
Replacing from the UI should have kept the pool referencing its disks by GPTID. That's certainly the missing link, but I have no idea where it went wrong.
 

holeydood3

Cadet
Joined
Dec 24, 2018
Messages
3
And I can confirm that I can still do everything I need to do via the UI: I can still offline and replace disks, and it works as expected. So while what the UI shows me isn't entirely consistent with what I see in the shell, the functionality is still there. I'll plug away at this when I get more free time to see if there's an easily fixable underlying issue that doesn't require replacing each disk and resilvering again, but as it stands now, it doesn't look like it'll get in the way of the actual functionality of the appliance. I'll update if I find a solution. Thank you for your help!
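
If I do end up trying it from the shell, the idea I'm considering (untested, so treat it as a sketch rather than a recommendation, since the TrueNAS middleware normally expects imports to go through the UI) is to re-import the pool from the by-partuuid directory so ZFS records stable partition UUIDs instead of the sdX names:
Code:
# Untested sketch: export, then re-import pointing ZFS at the by-partuuid symlinks
zpool export Citadel
zpool import -d /dev/disk/by-partuuid Citadel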
 