Zpool status not using gptid for 1 disk

anaxagorasbc

Dabbler
Joined
Dec 3, 2021
Messages
15
I migrated from Solaris: I created a pool in Solaris using ZFS v28, moved all the data over from my old Solaris pool, then imported the disks into TrueNAS. For some reason the first disk is not being listed by its gptid. How do I fix this?
Code:
root@truenas[~]# zpool status Archive2
  pool: Archive2
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        Archive2                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gpt/zfs                                     ONLINE       0     0     0
            gptid/7320ab18-040e-2444-b84b-90bbe0c45d5d  ONLINE       0     0     0
            gptid/481750df-43ea-5f43-80f0-bc78e278ded1  ONLINE       0     0     0
            gptid/9ee258ee-8e08-ce40-a7f8-ea48c1b42712  ONLINE       0     0     0
            gptid/6c13d703-4eba-4d4d-b19c-fb8383a3fa20  ONLINE       0     0     0
            gptid/4c446531-1235-744f-88e6-ddbe94bdc9c2  ONLINE       0     0     0

errors: No known data errors


Code:
root@truenas[~]# glabel status
                                      Name  Status  Components
gptid/c59da722-4afd-11ec-bc09-00505699cff7     N/A  da0p1
                                   gpt/zfs     N/A  da1p1
gptid/9f85c3bc-7216-074b-9faf-da0fc47135f1     N/A  da1p9
gptid/7320ab18-040e-2444-b84b-90bbe0c45d5d     N/A  da2p1
gptid/24ca6d8b-81ba-8244-9db1-f408622504f6     N/A  da2p9
gptid/9ee258ee-8e08-ce40-a7f8-ea48c1b42712     N/A  da3p1
gptid/0652cfda-b059-fa49-9a1c-c77ebaf6b41c     N/A  da3p9
gptid/6c13d703-4eba-4d4d-b19c-fb8383a3fa20     N/A  da4p1
gptid/8f67e9d8-7625-4149-aa3b-923d1509dada     N/A  da4p9
gptid/4c446531-1235-744f-88e6-ddbe94bdc9c2     N/A  da5p1
gptid/cf09a274-b51d-3b40-8a3c-acb3cd00dab3     N/A  da5p9
gptid/481750df-43ea-5f43-80f0-bc78e278ded1     N/A  da6p1
gptid/41751224-502c-0e4a-bb5c-d28f8558bf39     N/A  da6p9



EDITED: to use code tags instead of quote tags
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Please post the output of gpart list da1.
If you use code tags instead of quote tags, the readability of your command output improves greatly. Thank you.
 

anaxagorasbc

Dabbler
Joined
Dec 3, 2021
Messages
15
Code:
Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 35156656094
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 18000199400960 (16T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(1,GPT,b25497f0-ab5e-2a41-971d-b2ace23b1a3b,0x100,0x82f7fbedf)
   rawuuid: b25497f0-ab5e-2a41-971d-b2ace23b1a3b
   rawtype: 6a898cc3-1dd2-11b2-99a6-080020736631
   label: zfs
   length: 18000199400960
   offset: 131072
   type: !6a898cc3-1dd2-11b2-99a6-080020736631
   index: 1
   end: 35156639710
   start: 256
2. Name: da1p9
   Mediasize: 8388608 (8.0M)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 3584
   Mode: r0w0e0
   efimedia: HD(9,GPT,9f85c3bc-7216-074b-9faf-da0fc47135f1,0x82f7fbfdf,0x4000)
   rawuuid: 9f85c3bc-7216-074b-9faf-da0fc47135f1
   rawtype: 6a945a3b-1dd2-11b2-99a6-080020736631
   label: (null)
   length: 8388608
   offset: 18000199532032
   type: !6a945a3b-1dd2-11b2-99a6-080020736631
   index: 9
   end: 35156656094
   start: 35156639711
Consumers:
1. Name: da1
   Mediasize: 18000207937536 (16T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3




Also, here is da2. It looks like Solaris sets the GPT label for p1 to "zfs" on all devices? (A way to check every disk at once is sketched after the output below.)

Code:
Geom name: da2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 35156656094
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da2p1
   Mediasize: 18000199400960 (16T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(1,GPT,7320ab18-040e-2444-b84b-90bbe0c45d5d,0x100,0x82f7fbedf)
   rawuuid: 7320ab18-040e-2444-b84b-90bbe0c45d5d
   rawtype: 6a898cc3-1dd2-11b2-99a6-080020736631
   label: zfs
   length: 18000199400960
   offset: 131072
   type: !6a898cc3-1dd2-11b2-99a6-080020736631
   index: 1
   end: 35156639710
   start: 256
2. Name: da2p9
   Mediasize: 8388608 (8.0M)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 3584
   Mode: r0w0e0
   efimedia: HD(9,GPT,24ca6d8b-81ba-8244-9db1-f408622504f6,0x82f7fbfdf,0x4000)
   rawuuid: 24ca6d8b-81ba-8244-9db1-f408622504f6
   rawtype: 6a945a3b-1dd2-11b2-99a6-080020736631
   label: (null)
   length: 8388608
   offset: 18000199532032
   type: !6a945a3b-1dd2-11b2-99a6-080020736631
   index: 9
   end: 35156656094
   start: 35156639711
Consumers:
1. Name: da2
   Mediasize: 18000207937536 (16T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e3
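
To check the other disks in one go, something like this should print the partition labels for every data disk (a sketch, untested; gpart show -l displays labels instead of partition types, and the disk names are the ones from this system):

Code:
# Print the GPT partition labels for each data disk in the pool.
# If Solaris labelled them all, every p1 entry should show "zfs".
for d in da1 da2 da3 da4 da5 da6; do
    gpart show -l $d
done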
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Check if /dev/gptid/b25497f0-ab5e-2a41-971d-b2ace23b1a3b exists. If yes, try a zpool replace Archive2 gpt/zfs gptid/b25497f0-ab5e-2a41-971d-b2ace23b1a3b.

You might need to zpool offline Archive2 gpt/zfs first, and of course the pool runs with one less disk of redundancy while that member is offline. So probably do a zpool scrub Archive2 first, and after that check the SMART data for all disks. Only if everything is OK do the replace operation.
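
Roughly this sequence, untested and with the device and pool names taken from your output, so double-check before running anything:

Code:
zpool scrub Archive2                      # 1. scrub and wait for it to finish
zpool status Archive2                     #    check the scrub result
smartctl -a /dev/da1                      # 2. review SMART data; repeat for da2 ... da6
zpool offline Archive2 gpt/zfs            # 3. only if everything is clean
zpool replace Archive2 gpt/zfs gptid/b25497f0-ab5e-2a41-971d-b2ace23b1a3b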

HTH,
Patrick
 

anaxagorasbc

Dabbler
Joined
Dec 3, 2021
Messages
15
It does not exist.


Code:
root@truenas[~]# ls /dev/gptid/b25497f0-ab5e-2a41-971d-b2ace23b1a3b
ls: /dev/gptid/b25497f0-ab5e-2a41-971d-b2ace23b1a3b: No such file or directory

root@truenas[~]# ls /dev/gptid
0652cfda-b059-fa49-9a1c-c77ebaf6b41c    4c446531-1235-744f-88e6-ddbe94bdc9c2    9ee258ee-8e08-ce40-a7f8-ea48c1b42712
24ca6d8b-81ba-8244-9db1-f408622504f6    6c13d703-4eba-4d4d-b19c-fb8383a3fa20    9f85c3bc-7216-074b-9faf-da0fc47135f1
41751224-502c-0e4a-bb5c-d28f8558bf39    7320ab18-040e-2444-b84b-90bbe0c45d5d    c59da722-4afd-11ec-bc09-00505699cff7
481750df-43ea-5f43-80f0-bc78e278ded1    8f67e9d8-7625-4149-aa3b-923d1509dada    cf09a274-b51d-3b40-8a3c-acb3cd00dab3
root@truenas[~]# 
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
That's probably because the device is open by that other path. So you will have to offline it from the pool. I'd do the scrub and SMART check before that.
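
As a quick sanity check (a sketch, untested): the gptid is still recorded in the partition table even while the /dev/gptid node is hidden, and the node should show up once the partition is no longer opened via gpt/zfs.

Code:
gpart list da1 | grep rawuuid             # the gptid is still in the GPT
zpool offline Archive2 gpt/zfs            # release the gpt/zfs path (redundancy drops by one disk)
ls /dev/gptid | grep b25497f0             # the node for da1p1 should now appear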
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
gpt/zfs? I don't think I'd ever seen that one. Something to do with an unpartitioned disk used for ZFS?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
That's clearly a GPT label that Solaris put in there, not an unpartitioned disk. See the "label" field in his gpart list output.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Indeed, nothing quite as exotic as I thought.

A day's worth of struggling with Java messes with my concentration. And I don't even have to mop up the log4j mess.
 

anaxagorasbc

Dabbler
Joined
Dec 3, 2021
Messages
15
Patrick M. Hausen said:
That's clearly a GPT label that Solaris put in there, not an unpartitioned disk. See the "label" field in his gpart list output.
Running the scrub now; it'll take a day or so. Could I just export the pool, manually clear the GPT label on all the drives, then import the pool?
 

anaxagorasbc

Dabbler
Joined
Dec 3, 2021
Messages
15
Well, that seemed to work. I exported the pool, ran
Code:
gpart modify -i 1 -l "" da1
for all six drives, then imported the pool.
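
Spelled out, the steps were roughly this (a sketch of the shell-level commands; the export/import can also be done from the TrueNAS web UI, and the disk names are the ones from this thread):

Code:
zpool export Archive2                     # release all the label paths
for d in da1 da2 da3 da4 da5 da6; do
    gpart modify -i 1 -l "" $d            # clear the Solaris "zfs" label on partition 1
done
zpool import Archive2                     # re-import; the vdevs now attach by gptid
zpool status Archive2                     # confirm every member shows as gptid/...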
 