UI Not Reporting HDD Pool Correctly

Ruff.Hi

Patron
Joined
Apr 21, 2015
Messages
271
I get various email status updates every night. I have an HDD (da2 / WD-WCC7K1ER5K03) that is reported to be in zpool DuffleBag ... but the UI has it in zpool N/A.

root@NASProd[~]# glabel status
Code:
                                      Name  Status  Components
gptid/bb7e2890-fbcc-11e9-abbe-0cc47aac270a     N/A  ada0p1
gptid/bb806365-fbcc-11e9-abbe-0cc47aac270a     N/A  ada0p2
gptid/bb84c298-fbcc-11e9-abbe-0cc47aac270a     N/A  ada1p1
gptid/bb873858-fbcc-11e9-abbe-0cc47aac270a     N/A  ada1p2
gptid/fa407967-57af-11eb-a9c6-ac1f6ba054d6     N/A  da0p2
gptid/d4a62654-50c0-11ec-9568-ac1f6ba054d6     N/A  da1p2
gptid/91b2181f-ca1c-11eb-a78b-ac1f6ba054d6     N/A  da3p2
gptid/70c0ff97-fc59-11e9-ad4a-0cc47aac270a     N/A  da4p2
gptid/6a92d558-fc59-11e9-ad4a-0cc47aac270a     N/A  da5p2
gptid/ee9cd55e-d926-11eb-9786-ac1f6ba054d6     N/A  da6p2
gptid/72e3ca68-fc59-11e9-ad4a-0cc47aac270a     N/A  da7p2
gptid/28db87eb-0e11-11ea-8458-ac1f6ba054d6     N/A  da8p2
gptid/66b03e04-fc59-11e9-ad4a-0cc47aac270a     N/A  da9p2
gptid/eeb4eec5-d926-11eb-9786-ac1f6ba054d6     N/A  da10p2
gptid/4c0c5e61-de73-11ec-9c54-ac1f6ba054d6     N/A  da2p2    <-- this is the HDD I am looking at
gptid/4be8ac64-de73-11ec-9c54-ac1f6ba054d6     N/A  da2p1
gptid/d48b038b-50c0-11ec-9568-ac1f6ba054d6     N/A  da1p1

root@NASProd[~]# zdb -l /dev/da2p2
Code:
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'DuffleBag'
    state: 0
    txg: 16273664
    pool_guid: 438870739903369260
    errata: 0
    hostid: 2981848535
    hostname: 'NASProd.local'
    top_guid: 13011214539249804843
    guid: 9016634064566564743
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 13011214539249804843
        nparity: 2
        metaslab_array: 45
        metaslab_shift: 38
        ashift: 12
        asize: 31989077901312
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 9016634064566564743
            path: '/dev/gptid/4c0c5e61-de73-11ec-9c54-ac1f6ba054d6'          <-- this is the HDD I am looking at
            DTL: 504
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 10704762979084935942
            path: '/dev/gptid/66b03e04-fc59-11e9-ad4a-0cc47aac270a'
            whole_disk: 1
            DTL: 442
            create_txg: 4
        children[2]:
            type: 'disk'
            id: 2
            guid: 6456522218879920214
            path: '/dev/gptid/fa407967-57af-11eb-a9c6-ac1f6ba054d6'
            phys_path: 'id1,enc@n3061686369656d30/type@0/slot@3/elmdesc@Slot_02/p2'
            DTL: 137
            create_txg: 4
        children[3]:
            type: 'disk'
            id: 3
            guid: 9143478264579565473
            path: '/dev/gptid/6a92d558-fc59-11e9-ad4a-0cc47aac270a'
            whole_disk: 1
            DTL: 440
            create_txg: 4
        children[4]:
            type: 'disk'
            id: 4
            guid: 9189548570320806041
            path: '/dev/gptid/91b2181f-ca1c-11eb-a78b-ac1f6ba054d6'
            DTL: 522
            create_txg: 4
        children[5]:
            type: 'disk'
            id: 5
            guid: 12857757802064212906
            path: '/dev/gptid/28db87eb-0e11-11ea-8458-ac1f6ba054d6'
            whole_disk: 1
            DTL: 438
            create_txg: 4
        children[6]:
            type: 'disk'
            id: 6
            guid: 9394497215113233676
            path: '/dev/gptid/70c0ff97-fc59-11e9-ad4a-0cc47aac270a'
            whole_disk: 1
            DTL: 437
            create_txg: 4
        children[7]:
            type: 'disk'
            id: 7
            guid: 10106660962108376847
            path: '/dev/gptid/72e3ca68-fc59-11e9-ad4a-0cc47aac270a'
            whole_disk: 1
            DTL: 436
            create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3
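
(For reference, this is roughly how I tie the serial number from the nightly email back to the da2 device name; a rough sketch, and device names will obviously differ on other systems.)
Code:
# print the drive's identity info, including the serial number quoted in the email alert
smartctl -i /dev/da2 | grep -i 'serial'

# confirm which gptid labels sit on that device
glabel status | grep da2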

Here is the output from the UI ...

[Screenshot of the UI Disks view, showing da2 with Pool listed as N/A]


Background: Don't worry about HDD da1 ... it is failing a Long SMART test, and da2 (the missing HDD from above) was resilvered in yesterday as a replacement.
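
In case it is useful, this is roughly how I am keeping an eye on both drives while the replacement settles in (just a sketch; the device and pool names are from my system):
Code:
# self-test log on the suspect drive (da1), to see how the Long test ended
smartctl -l selftest /dev/da1

# resilver / scrub progress and overall health of the pool da2 was replaced into
zpool status DuffleBag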

Should I be concerned about this?
Time to worry? Reboot? Other?
 

Ruff.Hi

Patron
Joined
Apr 21, 2015
Messages
271
hansenc said:
Hi. I'm noticing the same thing, did you figure out a resolution?

No change. DA2 still reports as N/A for pool but is included in the HDD list under DuffleBag Pool Status. This is my prod system running TrueNAS-12.0-U8. My dev is running TrueNAS-13.0-U1 and everything is looking ok there. I might move Prod to 13.0 U1 this month. I will check if the HDD situation changes.
 

hansenc

Cadet
Joined
Jan 17, 2022
Messages
5
Ruff.Hi said:
No change. DA2 still reports as N/A for pool but is included in the HDD list under DuffleBag Pool Status. This is my prod system running TrueNAS-12.0-U8. My dev is running TrueNAS-13.0-U1 and everything is looking ok there. I might move Prod to 13.0 U1 this month. I will check if the HDD situation changes.
thanks for the reply!
 

Ruff.Hi

Patron
Joined
Apr 21, 2015
Messages
271
I tried to upgrade production to 13.0 but lost access to a pool, so I reverted back to TrueNAS-12.0-U8. I also removed that 612N disk (da1 from above - it seems to be ok, but I am not using it).

After rebooting back to 12.0-U8 and removing that HDD, the UI is now showing the correct HDDs in the correct pools.

The UI issue might be due to an HDD that is pool-less (i.e. one that isn't a member of any pool). I will put in a random HDD and check again.
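
In the meantime, this is the kind of quick cross-check I plan to run from the shell to compare what the OS sees against what the pools claim (a sketch only, and not the same view the middleware itself uses):
Code:
# every disk the OS can see
camcontrol devlist

# partition table per disk; a blank / pool-less disk will show no freebsd-zfs partition
gpart show

# every gptid currently claimed by a pool
zpool status | grep gptid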
 

hansenc

Cadet
Joined
Jan 17, 2022
Messages
5
Thanks for the update. I don't have any unallocated disks and I still have the issue, but I'm wondering what your results will be.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
@Ruff.Hi What's the output of zpool status?
 

Ruff.Hi

Patron
Joined
Apr 21, 2015
Messages
271
I am increasing the size of one of my pools (2 disks to 3), so I have an extra pool at the moment, and I have put HDDs in each of my open slots. I have 2 x SSDs (boot) and 16 HDDs. The UI is reporting it all accurately.

DuffleBag - 8 HDDs
BankOld - 2 HDDs
Bank - 3 HDDs
N/A - 3 HDDs

DuffleBag looks like this ...
Code:
pool: DuffleBag
 state: ONLINE
  scan: scrub repaired 0B in 04:41:31 with 0 errors on Thu Jun 23 05:41:33 2022
config:

        NAME                                            STATE     READ WRITE CKSUM
        DuffleBag                                       ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/4c0c5e61-de73-11ec-9c54-ac1f6ba054d6  ONLINE       0     0     0
            gptid/66b03e04-fc59-11e9-ad4a-0cc47aac270a  ONLINE       0     0     0
            gptid/fa407967-57af-11eb-a9c6-ac1f6ba054d6  ONLINE       0     0     0
            gptid/6a92d558-fc59-11e9-ad4a-0cc47aac270a  ONLINE       0     0     0
            gptid/91b2181f-ca1c-11eb-a78b-ac1f6ba054d6  ONLINE       0     0     0
            gptid/28db87eb-0e11-11ea-8458-ac1f6ba054d6  ONLINE       0     0     0
            gptid/70c0ff97-fc59-11e9-ad4a-0cc47aac270a  ONLINE       0     0     0
            gptid/72e3ca68-fc59-11e9-ad4a-0cc47aac270a  ONLINE       0     0     0

errors: No known data errors
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I thought you said the UI did not? Anyway, this pool looks good. A common cause for the UI to get out of sync is when people replace disks on the CLI but use devices like ada0 instead of the mandatory gptid/<uuid>. Keep that in mind, folks. You must create a proper partition table and use the gptid devices when working on the command line. All the TrueNAS middleware and consequently the UI depend on these IDs.
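
For illustration only (the UI remains the supported path, and the partition sizes and alignment below are assumptions based on typical defaults, not a prescription), a CLI replacement that the middleware can follow looks roughly like this:
Code:
# partition the new disk the way the middleware expects (GPT, swap + ZFS partitions)
gpart create -s gpt da2
gpart add -a 4k -i 1 -s 2g -t freebsd-swap da2
gpart add -a 4k -i 2 -t freebsd-zfs da2

# look up the gptid that was generated for the new ZFS partition
glabel status | grep da2p2

# replace the failed member by gptid, never by the raw device name
zpool replace DuffleBag <old-gptid-or-guid> gptid/<uuid-of-da2p2>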
 

Ruff.Hi

Patron
Joined
Apr 21, 2015
Messages
271
Thx for the info. I do some stuff on the CLI, but I try to limit my involvement to the UI. I definitely use the UI for swapping drives into / out of a pool.

That said, I do have a bunch of cron tasks that use bash ... I suppose that is just automated CLI stuff.
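
To be clear about what I mean, they are read-only checks along these lines (an illustrative sketch, not my exact scripts); commands like this only read state, so they should not be able to cause the gptid mismatch Patrick described.
Code:
# read-only health checks; nothing here modifies pools or partition tables
zpool status -x
smartctl -H /dev/da2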
 

hansenc

Cadet
Joined
Jan 17, 2022
Messages
5
I installed 13.0 a couple of days ago and almost all my drives now report the pool they are in, with the exception of 3. Very odd. I've done no work in the CLI for these drives/pools.
 