How to set 'missing' GPTID for disk in volume that shows device id (da0) only (TrueNAS-12.0-U1)

jafin

Explorer
Joined
May 30, 2011
Messages
51
I replaced a failed disk in a raidz2 volume.
After the replacement, the drive no longer shows as using a gptid and instead appears as ada0 in zpool status.

This seems to prevent the TrueNAS GUI from acting on the drive (Edit/Offline/Replace do nothing when applied to it).

In the Chrome debugger, clicking an option against /dev/ada0 logs a console error:
Code:
ERROR TypeError: Cannot read property 'identifier' of undefined


In the TrueNAS GUI it shows as /dev/ada0

Is there a way to apply a gptid to this device so that the GUI can operate on it?

Code:
        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/6838364e-5018-11eb-9efa-3ca82a4ba544  ONLINE       0     0     0
            gptid/a987dc98-5204-11eb-ab4d-3ca82a4ba544  ONLINE       0     0     0
            gptid/4a1fd222-b266-11e7-8c66-3ca82a4ba544  ONLINE       0     0     0
            ada0                                        ONLINE       0     0     0
            gptid/962a2678-4e36-11eb-9efa-3ca82a4ba544  ONLINE       0     0     0
            gptid/5448fa00-241f-11e7-8724-3ca82a4ba544  ONLINE       0     0     0


[screenshot attachment]
 

G8One2

Patron
Joined
Jan 2, 2017
Messages
248
Online the disk, then use "Replace" to add it to the pool.
 

G8One2

Patron
Joined
Jan 2, 2017
Messages
248
Oops, I misread your post. I see that it's online. Haven't finished my coffee yet lol
 

G8One2

Patron
Joined
Jan 2, 2017
Messages
248
I don't think it will get an ID until it's assigned to a pool. Right now the system just sees it as an unused disk.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You need to offline the disk, create a partition table, then add a partition of type freebsd-zfs ...
You added the whole device to the pool instead of partition 2. Don't do that.
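For the original poster's layout, a minimal sketch of that procedure (pool name vol1 and device ada0 come from the zpool status above; the 2 GiB swap partition mirrors the usual TrueNAS layout and is an assumption, so match whatever the other member disks use):
Code:
zpool offline vol1 ada0                      # take the whole-disk vdev out of service
gpart create -s gpt ada0                     # write a fresh GPT partition table
gpart add -t freebsd-swap -a 4k -s 2G ada0   # p1: swap, sized like the other disks (assumed)
gpart add -t freebsd-zfs -a 4k ada0          # p2: remainder of the disk for ZFS
gpart list ada0                              # note the rawuuid of ada0p2
zpool replace vol1 ada0 gptid/<rawuuid-of-ada0p2>

After the resilver finishes, zpool status should show the gptid instead of the bare device name.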
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
Hmmm - I have 3 such disks in a four-disk pool. The four disks were successfully transplanted from my irrigated FreeNAS Mini after our fire and their pool imported; one disk was subsequently replaced after it failed. That one has the gptid.

[pool status screenshot attachments]


While I have a full backup FreeNAS box, I'd prefer to keep this data pool. What process might I follow to have the three outliers assigned gptids with no data loss?

Advice much appreciated.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
@Redcoat Output of gpart list, please. You can trim it to contain only the partitions of da0, da1, da2 and da3.
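For example, gpart list accepts geom names, so the following should limit the output to those four disks (assuming they are indeed named da0 through da3):
Code:
gpart list da0 da1 da2 da3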
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
Thank you @Patrick M. Hausen. Here goes (untrimmed, as I was unsure precisely what is pertinent):

Code:
root@NAS3:~ # gpart list
Geom name: ada0
modified: false
state: OK
fwheads: 16
fwsectors: 63
last: 78165319
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   efimedia: HD(1,GPT,69c2d68b-3e91-11ea-bff0-002590aab191,0x28,0x400)
   rawuuid: 69c2d68b-3e91-11ea-bff0-002590aab191
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: (null)
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: ada0p2
   Mediasize: 40013660160 (37G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 544768
   Mode: r1w1e1
   efimedia: HD(2,GPT,69c775c2-3e91-11ea-bff0-002590aab191,0x428,0x4a88000)
   rawuuid: 69c775c2-3e91-11ea-bff0-002590aab191
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 40013660160
   offset: 544768
   type: freebsd-zfs
   index: 2
   end: 78152743
   start: 1064
Consumers:
1. Name: ada0
   Mediasize: 40020664320 (37G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r0w0e0
   efimedia: HD(1,GPT,9bd6d6a1-bd6a-11e4-8c34-d0509946c5e6,0x80,0x400000)
   rawuuid: 9bd6d6a1-bd6a-11e4-8c34-d0509946c5e6
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da0p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(2,GPT,9be4e735-bd6a-11e4-8c34-d0509946c5e6,0x400080,0x1d180be08)
   rawuuid: 9be4e735-bd6a-11e4-8c34-d0509946c5e6
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da0
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2

Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(1,GPT,9cc82260-bd6a-11e4-8c34-d0509946c5e6,0x80,0x400000)
   rawuuid: 9cc82260-bd6a-11e4-8c34-d0509946c5e6
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da1p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   efimedia: HD(2,GPT,9cd608c6-bd6a-11e4-8c34-d0509946c5e6,0x400080,0x1d180be08)
   rawuuid: 9cd608c6-bd6a-11e4-8c34-d0509946c5e6
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da1
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e5

Geom name: da2
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037134
first: 34
entries: 128
scheme: GPT
Providers:
1. Name: da2p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(1,GPT,9c4fe8b9-bd6a-11e4-8c34-d0509946c5e6,0x80,0x400000)
   rawuuid: 9c4fe8b9-bd6a-11e4-8c34-d0509946c5e6
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da2p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(2,GPT,9c5d498b-bd6a-11e4-8c34-d0509946c5e6,0x400080,0x1d180be08)
   rawuuid: 9c5d498b-bd6a-11e4-8c34-d0509946c5e6
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da2
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e4

Geom name: da3
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 7814037127
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: da3p1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(1,GPT,36c8c193-009a-11e9-b4e4-d0509964219e,0x80,0x400000)
   rawuuid: 36c8c193-009a-11e9-b4e4-d0509964219e
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147483648
   offset: 65536
   type: freebsd-swap
   index: 1
   end: 4194431
   start: 128
2. Name: da3p2
   Mediasize: 3998639460352 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e1
   efimedia: HD(2,GPT,36decf7b-009a-11e9-b4e4-d0509964219e,0x400080,0x1d180be08)
   rawuuid: 36decf7b-009a-11e9-b4e4-d0509964219e
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 3998639460352
   offset: 2147549184
   type: freebsd-zfs
   index: 2
   end: 7814037127
   start: 4194432
Consumers:
1. Name: da3
   Mediasize: 4000787030016 (3.6T)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r2w2e4
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
So I broke it down to what is essentially relevant:
Code:
Geom name: da0
1. Name: da0p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: 9bd6d6a1-bd6a-11e4-8c34-d0509946c5e6
2. Name: da0p2
   Mediasize: 3998639460352 (3.6T)
   rawuuid: 9be4e735-bd6a-11e4-8c34-d0509946c5e6

Geom name: da1
1. Name: da1p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: 9cc82260-bd6a-11e4-8c34-d0509946c5e6
2. Name: da1p2
   Mediasize: 3998639460352 (3.6T)
   rawuuid: 9cd608c6-bd6a-11e4-8c34-d0509946c5e6

Geom name: da2
1. Name: da2p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: 9c4fe8b9-bd6a-11e4-8c34-d0509946c5e6
2. Name: da2p2
   Mediasize: 3998639460352 (3.6T)
   rawuuid: 9c5d498b-bd6a-11e4-8c34-d0509946c5e6

Geom name: da3
1. Name: da3p1
   Mediasize: 2147483648 (2.0G)
   rawuuid: 36c8c193-009a-11e9-b4e4-d0509964219e
2. Name: da3p2
   Mediasize: 3998639460352 (3.6T)
   rawuuid: 36decf7b-009a-11e9-b4e4-d0509964219e


So all four of your disks are partitioned alike (good!) and you have RAIDZ2, so the procedure is not too risky.

First disk:
Code:
zpool offline Volume1 da0p2
zpool replace Volume1 da0p2 gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6

If that does not work, look at zpool status and replace da0p2 with the numeric ID shown in the output.
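One way to see the numeric vdev GUIDs directly is zpool status -g (a sketch; the -g flag is in the OpenZFS that ships with TrueNAS 12, and the placeholder below stands in for whatever GUID it prints for da0p2):
Code:
zpool status -g Volume1                 # prints vdev GUIDs instead of device names
zpool offline Volume1 <guid-of-da0p2>
zpool replace Volume1 <guid-of-da0p2> gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6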

Wait for the resilver to finish!

Then repeat for the remaining disks:

Code:
zpool offline Volume1 da2p2
zpool replace Volume1 da2p2 gptid/9c5d498b-bd6a-11e4-8c34-d0509946c5e6

# Wait!

zpool offline Volume1 da3p2
zpool replace Volume1 da3p2 gptid/36decf7b-009a-11e9-b4e4-d0509964219e
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
@Patrick M. Hausen , thanks so much for the tutorial with the abstraction plus the procedure - now clear on the process and the issue being addressed. I'll work it tomorrow morning and report.
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
Well @Patrick M. Hausen, I haven't been successful. I tried the commands as written and also the alternative with the numeric identifier. The first gave me "invalid vdev specification"; the second gave the same, plus "use -f to override the following errors: /dev/gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6 is part of active pool 'Volume1'".

Any ideas?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
Yes, thanks, I had already tried that; here's the result:

Code:
root@NAS3:~ # zpool replace -f Volume1 10196974891462840790 gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6
invalid vdev specification
the following errors must be manually repaired:
/dev/gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6 is part of active pool 'Volume1'
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
And the GPTID you are using belongs to the disk you offlined? Please verify. If that is the case (correct disk), try zpool labelclear gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6 and then replace again.
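Spelled out as a sketch (same pool and gptid as above; the replace assumes da0p2 is still how the vdev shows up in zpool status):
Code:
zpool labelclear gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6
zpool replace Volume1 da0p2 gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6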
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
And the GPTID you are using is the disk you offlined? Please verify.
Thanks, I believe I had already made those checks, but I have now gone through them again:

Offlined with the GUI:
[screenshot attachment]


Double check with CLI:
[screenshot attachment]


Try again (just in case...), then move on to your last suggestion:

Code:
root@NAS3:~ # zpool replace Volume1 da0p2 gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6
invalid vdev specification
use '-f' to override the following errors:
/dev/gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6 is part of active pool 'Volume1'
root@NAS3:~ # zpool replace -f Volume1 da0p2 gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6
invalid vdev specification
the following errors must be manually repaired:
/dev/gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6 is part of active pool 'Volume1'
root@NAS3:~ # zpool labelclear gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6
/dev/gptid/9be4e735-bd6a-11e4-8c34-d0509946c5e6 is a member (ACTIVE) of pool "Volume1"


Hmmm - what now?

And thank you so much for your help, Patrick!

EDIT: Also, I could not find a "numeric identifier" in the output of "zpool status" earlier; the one I used in the earlier attempts came from the GUI (but it didn't seem to produce any different result than using da0p2):

[screenshot attachment]
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
CLI is truth. UI tries to display "user friendly" IDs and might do some nonsense in your inconsistent state.

Try zpool labelclear -f /dev/da0p2
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
dd if=/dev/zero of=/dev/da0p2 count=1; dd if=/dev/zero of=/dev/da0p2 oseek=7809842695
 

Redcoat

MVP
Joined
Feb 18, 2014
Messages
2,925
dd if=/dev/zero of=/dev/da0p2 count=1; dd if=/dev/zero of=/dev/da0p2 oseek=7809842695
Code:
root@NAS3:~ # dd if=/dev/zero of=/dev/da0p2 count=1; dd if=/dev/zero of=/dev/da0p2 oseek=7809842695
1+0 records in
1+0 records out
512 bytes transferred in 0.934779 secs (548 bytes/sec)
dd: /dev/da0p2: Operation not permitted
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Try oseek=7809842694 - possibly I miscalculated. The intention is to clear the first and last sector of the partition.
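For reference, the arithmetic behind those offsets, taken from the gpart list output above (a rough sanity check, not a definitive answer):
Code:
# da0p2 spans disk sectors 4194432..7814037127 (512-byte sectors, from gpart list)
echo $(( 7814037127 - 4194432 + 1 ))   # 7809842696 = number of sectors in the partition
# dd's default block size is 512, so oseek=N starts writing at sector N of /dev/da0p2;
# the partition's last sector therefore sits at 0-based offset 7809842696 - 1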
 