[SOLVED] GPT rejected -- may not be recoverable.

Joined
Feb 2, 2016
Messages
574
We have twelve 3TB SAS drives in an external HP SAS array that we just plugged into our newish FreeNAS-9.10 server (via a SAS 9207-4i4e). They were previously attached to a FreeNAS-9.3 server as a single ZFS volume. They hold security video, but that data is rolling and more or less obsolete, so we're not especially concerned about preserving the volume when moving the drives to the new server.

FreeNAS sees all the drives from the command line but not in the GUI. The errors we see are 'GEOM: multipath/diskNN: corrupt or invalid GPT detected' and 'GEOM: multipath/diskNN: GPT rejected -- may not be recoverable.' (Full log below if you care.)

Neither 'zpool status' nor 'zpool import' shows any evidence of the previous volume.

I suspect we can zero each of the drives with dd if=/dev/zero (against the raw drive or the multipath device?) and then start from scratch. Aside from being time-consuming and inelegant, we're wondering if there is a more correct or efficient way to prepare the disks for reuse in FreeNAS.
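For reference, the brute-force version we had in mind is roughly the following, per drive (da14 is just an example; zeroing a 3TB disk this way takes many hours):

Code:
dd if=/dev/zero of=/dev/da14 bs=1m     # or should it be of=/dev/multipath/disk2 ?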

Cheers,
Matt




Code:
ses0 at mps0 bus 0 scbus0 target 29 lun 0
ses0: <HP D2600 SAS AJ940A 0103> Fixed Enclosure Services SPC-3 SCSI device
ses0: Serial Number CN8035P2L3     
ses0: 600.000MB/s transfers
ses0: Command Queueing enabled
ses0: SCSI-3 ENC Device
da17 at mps0 bus 0 scbus0 target 21 lun 0
da17: <HITACHI HUS723030ALS640 A120> Fixed Direct Access SPC-4 SCSI device
da17: Serial Number         YHJ7M2JD
da17: 600.000MB/s transfers
da17: Command Queueing enabled
da17: 2861588MB (5860533168 512 byte sectors)
da18 at mps0 bus 0 scbus0 target 20 lun 0
da18: <HITACHI HUS723030ALS640 A120> Fixed Direct Access SPC-4 SCSI device
da18: Serial Number         YHJ8WYGD
da18: 600.000MB/s transfers
da18: Command Queueing enabled
da18: 2861588MB (5860533168 512 byte sectors)
da15 at mps0 bus 0 scbus0 target 18 lun 0
da15: <HITACHI HUS723030ALS640 A222> Fixed Direct Access SPC-4 SCSI device
da15: Serial Number         YHKA5RHG
da15: 600.000MB/s transfers
da15: Command Queueing enabled
da15: 2861588MB (5860533168 512 byte sectors)
da20 at mps0 bus 0 scbus0 target 22 lun 0
da20: <HITACHI HUS723030ALS640 A120> Fixed Direct Access SPC-4 SCSI device
da20: Serial Number         YHJ6XEJD
da20: 600.000MB/s transfers
da20: Command Queueing enabled
da20: 2861588MB (5860533168 512 byte sectors)
da16 at mps0 bus 0 scbus0 target 19 lun 0
da16: <HITACHI HUS723030ALS640 A120> Fixed Direct Access SPC-4 SCSI device
da16: Serial Number         YHJ7DJND
da16: 600.000MB/s transfers
da16: Command Queueing enabled
da16: 2861588MB (5860533168 512 byte sectors)
da14 at mps0 bus 0 scbus0 target 17 lun 0
da14: <HITACHI HUS723030ALS640 A222> Fixed Direct Access SPC-4 SCSI device
da14: Serial Number         YHKAMUDG
da14: 600.000MB/s transfers
da14: Command Queueing enabled
da14: 2861588MB (5860533168 512 byte sectors)
da24 at mps0 bus 0 scbus0 target 26 lun 0
da24: <HITACHI HUS723030ALS640 A120> Fixed Direct Access SPC-4 SCSI device
da24: Serial Number         YHJ7TAUD
da24: 600.000MB/s transfers
da24: Command Queueing enabled
da24: 2861588MB (5860533168 512 byte sectors)
da19 at mps0 bus 0 scbus0 target 23 lun 0
da19: <HITACHI HUS723030ALS640 A120> Fixed Direct Access SPC-4 SCSI device
da19: Serial Number         YHJ8XNXD
da19: 600.000MB/s transfers
da19: Command Queueing enabled
da19: 2861588MB (5860533168 512 byte sectors)
GEOM_MULTIPATH: disk2 created
da25 at mps0 bus 0 scbus0 target 28 lun 0
da25: <HITACHI HUS723030ALS640 A120> Fixed Direct Access SPC-4 SCSI device
da25: Serial Number         YHJ6YL9D
da25: 600.000MB/s transfers
da25: Command Queueing enabled
da25: 2861588MB (5860533168 512 byte sectors)
GEOM_MULTIPATH: da14 added to disk2
GEOM_MULTIPATH: da14 is now active path in disk2
da21 at mps0 bus 0 scbus0 target 25 lun 0
da21: <HITACHI HUS723030ALS640 A120> Fixed Direct Access SPC-4 SCSI device
da21: Serial Number         YHJ7RYGD
da21: 600.000MB/s transfers
da21: Command Queueing enabled
da21: 2861588MB (5860533168 512 byte sectors)
GEOM: multipath/disk2: corrupt or invalid GPT detected.
GEOM: multipath/disk2: GPT rejected -- may not be recoverable.
GEOM_MULTIPATH: disk5 created
GEOM_MULTIPATH: da15 added to disk5
GEOM_MULTIPATH: da15 is now active path in disk5
GEOM_MULTIPATH: disk8 created
GEOM_MULTIPATH: da16 added to disk8
GEOM_MULTIPATH: da16 is now active path in disk8
GEOM_MULTIPATH: disk14 created
GEOM_MULTIPATH: da17 added to disk14
GEOM_MULTIPATH: da17 is now active path in disk14
GEOM_MULTIPATH: disk10 created
GEOM_MULTIPATH: da18 added to disk10
GEOM_MULTIPATH: da18 is now active path in disk10
GEOM_MULTIPATH: disk20 created
GEOM_MULTIPATH: da19 added to disk20
GEOM_MULTIPATH: da19 is now active path in disk20
GEOM_MULTIPATH: disk19 created
GEOM_MULTIPATH: da20 added to disk19
GEOM_MULTIPATH: da20 is now active path in disk19
GEOM_MULTIPATH: disk22 created
GEOM_MULTIPATH: da21 added to disk22
GEOM_MULTIPATH: da21 is now active path in disk22
da22 at mps0 bus 0 scbus0 target 24 lun 0
da22: <HITACHI HUS723030ALS640 A222> Fixed Direct Access SPC-4 SCSI device
da22: Serial Number         YHKAG8DG
da22: 600.000MB/s transfers
da22: Command Queueing enabled
da22: 2861588MB (5860533168 512 byte sectors)
GEOM_MULTIPATH: disk21 created
da23 at mps0 bus 0 scbus0 target 27 lun 0
da23: <HITACHI HUS723030ALS640 A222> Fixed Direct Access SPC-4 SCSI device
da23: Serial Number         YHKD6SEG
da23: 600.000MB/s transfers
da23: Command Queueing enabled
da23: 2861588MB (5860533168 512 byte sectors)
GEOM_MULTIPATH: da22 added to disk21
GEOM_MULTIPATH: da22 is now active path in disk21
GEOM_MULTIPATH: disk24 created
GEOM_MULTIPATH: da23 added to disk24
GEOM_MULTIPATH: da23 is now active path in disk24
GEOM_MULTIPATH: disk23 created
GEOM_MULTIPATH: da24 added to disk23
GEOM_MULTIPATH: da24 is now active path in disk23
GEOM_MULTIPATH: disk25 created
GEOM_MULTIPATH: da25 added to disk25
GEOM_MULTIPATH: da25 is now active path in disk25
GEOM: multipath/disk5: corrupt or invalid GPT detected.
GEOM: multipath/disk5: GPT rejected -- may not be recoverable.
GEOM: multipath/disk8: corrupt or invalid GPT detected.
GEOM: multipath/disk8: GPT rejected -- may not be recoverable.
GEOM: multipath/disk14: corrupt or invalid GPT detected.
GEOM: multipath/disk14: GPT rejected -- may not be recoverable.
GEOM: multipath/disk10: corrupt or invalid GPT detected.
GEOM: multipath/disk10: GPT rejected -- may not be recoverable.
GEOM: multipath/disk20: corrupt or invalid GPT detected.
GEOM: multipath/disk20: GPT rejected -- may not be recoverable.
GEOM: multipath/disk19: corrupt or invalid GPT detected.
GEOM: multipath/disk19: GPT rejected -- may not be recoverable.
GEOM: multipath/disk22: corrupt or invalid GPT detected.
GEOM: multipath/disk22: GPT rejected -- may not be recoverable.
GEOM: multipath/disk21: corrupt or invalid GPT detected.
GEOM: multipath/disk21: GPT rejected -- may not be recoverable.
GEOM: multipath/disk24: corrupt or invalid GPT detected.
GEOM: multipath/disk24: GPT rejected -- may not be recoverable.
GEOM: multipath/disk23: corrupt or invalid GPT detected.
GEOM: multipath/disk23: GPT rejected -- may not be recoverable.
GEOM: multipath/disk25: corrupt or invalid GPT detected.
GEOM: multipath/disk25: GPT rejected -- may not be recoverable.
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
If you're not concerned about the data, then you may wipe the MBR or GPT on the drive before creating a new pool. Zeroing the first few KB on the drive using dd should be sufficient to wipe it.
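A minimal example for one drive (da14 is just a placeholder; this zeroes the first 1MB, which covers the protective MBR and the primary GPT):

Code:
dd if=/dev/zero of=/dev/da14 bs=1m count=1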
 
Joined
Feb 2, 2016
Messages
574
you may wipe the MBR or GPT on the drive before creating a new pool. Zeroing the first few KB on the drive using dd should be sufficient to wipe it.

I've read in a few places that GPT writes metadata to both the start and the end of the disk, and that ZFS does the same with its labels. Is simply zeroing out the first megabyte of each drive enough? Some sites recommend 'gpart destroy' or other tools.

Cheers,
Matt
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So your issue isn't what you think it is.

gpart destroy is the "most proper" way to resolve bad/corrupt partition tables.

The cheap-and-dirty alternative is to overwrite the first and last 1MB of disk space. That's often not easy to do, so you may just want to do gpart destroy.
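For example, against one of the multipath providers from the log above (-F forces the destroy; if gpart no longer sees any table on the device it will simply complain, in which case the dd approach below applies instead):

Code:
gpart destroy -F multipath/disk2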

If the multipath itself is broken, you'll need to break it down as well. To do that, wipe the raw disks (daNN) rather than the multipath device, and wipe the first and last 8MB of disk space on each.
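A sketch of that wipe for one raw path device (da14 as an example, repeated for each daNN; the sector count comes from the dmesg above):

Code:
# first 8MB
dd if=/dev/zero of=/dev/da14 bs=1m count=8
# last 8MB: 8MB = 16384 sectors of 512 bytes, and the disk has 5860533168 sectors
dd if=/dev/zero of=/dev/da14 bs=512 seek=5860516784 count=16384
# if GEOM refuses the write because the device is still in use:
# sysctl kern.geom.debugflags=0x10  (set it back to 0 when done)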

In your case, it looks like your disks are multipath (if they aren't supposed to be, then you have hardware to reconfigure) and gpart is simply finding random data in the locations where the partition table is now expected. Multipath puts a "container" over the disk and uses a few MB of it to store multipath data; the GPT is then stored after the multipath data.

In your case I would zero out the disks completely (you only need to write to one of the path disks), reboot (don't forget this step), and then try to create your zpool. It should work fine.
 
Joined
Feb 2, 2016
Messages
574
You may be right. On the previous FreeNAS system, the drives were hung on an SAS 9200-8e and looped with another enclosure. In the interim configuration, they are hung from a single SAS9207-4i4e.

I went ahead and wiped the first and last megabyte of each disk and was able to use them to create a new ZFS volume, but I'm still getting a multipath error...

"The following multipaths are not optimal: disk25, disk23, disk24, disk21, disk22, disk19, disk20, disk10, disk14, disk8, disk5, disk2"

It sounds like I need to go back and wipe at least eight megabytes at each end.

Thanks to you both. I'll see if that solves my multipath errors.

Cheers,
Matt
 
Joined
Feb 2, 2016
Messages
574
You were right, @cyberjock, there was a multipath component as well.

We went ahead and wiped the first and last 100MB of each drive and rebooted. Instead of showing up as broken multipath links, they just showed up in the disk list.
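For anyone finding this later, a sketch of that final wipe for one disk (da14 as an example; 100MB = 204800 sectors of 512 bytes, and each disk has 5860533168 sectors per the dmesg above):

Code:
# first 100MB
dd if=/dev/zero of=/dev/da14 bs=1m count=100
# last 100MB
dd if=/dev/zero of=/dev/da14 bs=512 seek=5860328368 count=204800
# then reboot so GEOM re-tastes the disks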

Cheers,
Matt
 