Multiple drives failed at the same time — how to "reattach" them to the ZFS pool?

Knogle

Ahoy friends.
I recently purchased 8 new hard disk drives to build a new ZFS pool. They sit behind an LSI 9211 HBA that is passed through with KVM, so FreeNAS is running in a VM.
Now I am experiencing an issue, and I don't know what happened. According to zpool status and the FreeNAS alerts, 3 drives failed at the same time; at least it says:
"* Pool data state is DEGRADED: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state."

Below is my zpool status output. What may have caused this issue? The SMART values of every HDD are OK. Is there a way to re-add the removed devices?


Code:
  pool: data
state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sun Mar  7 12:51:30 2021
    8.42T scanned at 1.30G/s, 198G issued at 408M/s, 9.86T total
    23.8G resilvered, 1.96% done, 0 days 06:54:14 to go
config:

    NAME                                                  STATE     READ WRITE CKSUM
    data                                                  DEGRADED     0     0     0
      raidz2-0                                            DEGRADED     0     0     0
        gptid/8d39104b-6596-11eb-a75c-8f89b0c061c4.eli    ONLINE       0     0     0
        gptid/9155711b-6596-11eb-a75c-8f89b0c061c4.eli    ONLINE       0     0     0
        gptid/86b9fa2f-6596-11eb-a75c-8f89b0c061c4.eli    ONLINE       0     0     0
        gptid/9285ff04-6596-11eb-a75c-8f89b0c061c4.eli    ONLINE       0     0     0
        38866582036668272                                 REMOVED      0     0     0  was /dev/gptid/9411f31f-6596-11eb-a75c-8f89b0c061c4.eli
        spare-5                                           REMOVED      0     0     0
          12219767935346166654                            REMOVED      0     0     0  was /dev/gptid/9b4a1344-6596-11eb-a75c-8f89b0c061c4.eli
          gptid/0cea51b3-6caf-11eb-aa19-19444082b40a.eli  ONLINE       0     0     0
        gptid/aab972e4-6596-11eb-a75c-8f89b0c061c4.eli    ONLINE       0     0     0
        gptid/ae626c03-6596-11eb-a75c-8f89b0c061c4.eli    ONLINE       0     0     0
    logs
      gptid/080a5406-7e88-11eb-bffa-b95a0a9e4218.eli      ONLINE       0     0     0
    cache
      gptid/9aaca0f4-7a0d-11eb-8c2d-9959dcb5e6a3.eli      ONLINE       0     0     0
    spares
      336223762456561297                                  INUSE     was /dev/gptid/0cea51b3-6caf-11eb-aa19-19444082b40a.eli

errors: No known data errors

  pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:05 with 0 errors on Wed Mar  3 03:45:05 2021
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      vtbd0p2   ONLINE       0     0     0

errors: No known data errors
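
From the dmesg output further down, both disks (da5 and da7) detached and then re-appeared, so my idea was to bring the REMOVED vdevs back online by their GUIDs once the current resilver has finished. This is only a rough sketch of what I have in mind, assuming the disks are actually healthy and their GELI providers get re-attached first (on an encrypted pool like this one, that normally happens through the FreeNAS GUI):

Code:
# bring the removed data disks back online, addressed by the GUIDs
# shown in zpool status above
zpool online data 38866582036668272
zpool online data 12219767935346166654

# once the resilver finishes and both disks show ONLINE, detach the
# hot spare from the spare-5 vdev so it returns to the spares list
zpool detach data gptid/0cea51b3-6caf-11eb-aa19-19444082b40a.eli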


dmesg

Code:
mps0: mpssas_prepare_remove: Sending reset for target ID 25
mps0: mpssas_prepare_remove: Sending reset for target ID 27
mps0: Unfreezing devq for target ID 25
mps0: Unfreezing devq for target ID 27
da5 at mps0 bus 0 scbus2 target 25 lun 0
da5: <ATA WDC WD40EFRX-68N 0A82> s/n WD-WCC7K0PY9JXR detached
da7 at mps0 bus 0 scbus2 target 27 lun 0
da7: <ATA WDC WD40EFRX-68N 0A82> s/n WD-WCC7K6SY1409 detached
GEOM_MIRROR: Device swap1: provider da5p1 disconnected.
GEOM_ELI: Device gptid/9b4a1344-6596-11eb-a75c-8f89b0c061c4.eli destroyed.
GEOM_ELI: Detached gptid/9b4a1344-6596-11eb-a75c-8f89b0c061c4.eli on last close.
GEOM_MIRROR: Device swap0: provider da7p1 disconnected.
(da5:mps0:0:25:0): Periph destroyed
GEOM_ELI: Device gptid/9411f31f-6596-11eb-a75c-8f89b0c061c4.eli destroyed.
GEOM_ELI: Detached gptid/9411f31f-6596-11eb-a75c-8f89b0c061c4.eli on last close.
(da7:mps0:0:27:0): Periph destroyed
mps0: SAS Address for SATA device = 4f626165bfaadf92
mps0: SAS Address for SATA device = 4f686465b794b779
mps0: SAS Address from SATA device = 4f626165bfaadf92
mps0: SAS Address from SATA device = 4f686465b794b779
ses0: da5,pass6 in 'SLOT 002', SAS Slot: 1+ phys
da5 at mps0 bus 0 scbus2 target 25 lun 0
ses0:  phy 0: SATA device
ses0:  phy 0: parent 500262d0cd87dd40 addr 500262d0cd87dd43
da7 at mps0 bus 0 scbus2 target 27 lun 0
da5: <ATA WDC WD40EFRX-68N 0A82> Fixed Direct Access SPC-4 SCSI device
da5: Serial Number WD-WCC7K0PY9JXR
da5: 600.000MB/s transfers
da5: Command Queueing enabled
da5: 3815447MB (7814037168 512 byte sectors)
da5: quirks=0x8<4K>
da7: <ATA WDC WD40EFRX-68N 0A82> Fixed Direct Access SPC-4 SCSI device
da7: Serial Number WD-WCC7K6SY1409
da7: 600.000MB/s transfers
da7: Command Queueing enabled
da7: 3815447MB (7814037168 512 byte sectors)
da7: quirks=0x8<4K>
ses0: da7,pass8 in 'SLOT 013', SAS Slot: 1+ phys
ses0:  phy 0: SATA device
ses0:  phy 0: parent 500262d0cd87dd40 addr 500262d0cd87dd4e
GEOM_ELI: Device mirror/swap1.eli destroyed.
GEOM_MIRROR: Device swap1: provider destroyed.
GEOM_MIRROR: Device swap1 destroyed.
GEOM_ELI: Device mirror/swap0.eli destroyed.
GEOM_MIRROR: Device swap0: provider destroyed.
GEOM_MIRROR: Device swap0 destroyed.
GEOM_MIRROR: Device mirror/swap0 launched (2/2).
GEOM_MIRROR: Device mirror/swap1 launched (2/2).
GEOM_ELI: Device mirror/swap0.eli created.
GEOM_ELI: Encryption: AES-XTS 128
GEOM_ELI:     Crypto: hardware
GEOM_ELI: Device mirror/swap1.eli created.
GEOM_ELI: Encryption: AES-XTS 128
GEOM_ELI:     Crypto: hardware
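
Since both disks re-attached as da5 and da7, I re-checked their SMART data before trying anything. This is just the standard check I used; the attributes in the comment are the usual suspects for cabling or power problems:

Code:
# full SMART report for the two disks that dropped out; in particular
# check Reallocated_Sector_Ct, Current_Pending_Sector and UDMA_CRC_Error_Count
smartctl -a /dev/da5
smartctl -a /dev/da7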


EDIT:
It seems I had a power problem at my location. HDDs on different physical systems were also reported as faulty at exactly the same time.
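
Given the suspected power event, my plan once everything is back ONLINE is to run a full scrub to verify the pool, something like:

Code:
# verify all pool data after the suspected power event
zpool scrub data
# watch the progress
zpool status data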


Thanks in advance!
 