FreeNAS 9.3 inaccessible during disk failure


AndersG
Hi,

Recently I have had some not-so-pleasant issues with one of our FreeNAS servers during disk failures. The system becomes almost completely inaccessible when one of the 6 disks in a RAIDZ2 pool fails. I've described our issues and hardware below. I would expect the pool to continue working in a degraded state. We previously ran NexentaStor on the same hardware without problems when disks failed. Any ideas what is causing FreeNAS to stall in this way? Yes, I know we have crappy disks; I will make another post under Hardware for suggestions. :smile:


During the first failure I got an alert email stating that "The volume pool1 (ZFS) state is ONLINE: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected." After that, NFS mounts were inaccessible, as were the web UI and SSH login. I managed to use the console over IPMI, but it was slow and flooded with error messages. After a reboot everything came up and the system worked properly with a degraded pool. After running "zpool clear" the pool was healthy again.
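
For reference, the recovery sequence after the reboot was roughly the following (a sketch; pool1 is our pool, and the affected device is identified from the status output):
Code:
# inspect pool health and see which member is throwing errors
zpool status -v pool1
# once the device looked stable again, reset the error counters
zpool clear pool1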

Logs from the disk failure are flooded with:
Code:
Sep 26 13:34:58 storage3 (da2:mps0:0:2:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Sep 26 13:34:58 storage3 (da2:mps0:0:2:0): CAM status: SCSI Status Error
Sep 26 13:34:58 storage3 (da2:mps0:0:2:0): SCSI status: Check Condition
Sep 26 13:34:58 storage3 (da2:mps0:0:2:0): SCSI sense: Deferred error: HARDWARE FAILURE asc:3,0 (Peripheral device write fault)
Sep 26 13:34:58 storage3 (da2:mps0:0:2:0): Info: 0x31599031
Sep 26 13:34:58 storage3 (da2:mps0:0:2:0): Field Replaceable Unit: 8
Sep 26 13:34:58 storage3 (da2:mps0:0:2:0): Actual Retry Count: 24
Sep 26 13:34:58 storage3 (da2:mps0:0:2:0): Retrying command (per sense data)
Sep 26 13:34:59 storage3 (da2:mps0:0:2:0): READ(10). CDB: 28 00 30 3d 73 20 00 00 40 00
Sep 26 13:34:59 storage3 (da2:mps0:0:2:0): CAM status: SCSI Status Error
Sep 26 13:34:59 storage3 (da2:mps0:0:2:0): SCSI status: Check Condition
Sep 26 13:34:59 storage3 (da2:mps0:0:2:0): SCSI sense: Deferred error: HARDWARE FAILURE asc:3,0 (Peripheral device write fault)
Sep 26 13:34:59 storage3 (da2:mps0:0:2:0): Info: 0x31599032
Sep 26 13:34:59 storage3 (da2:mps0:0:2:0): Field Replaceable Unit: 8
Sep 26 13:34:59 storage3 (da2:mps0:0:2:0): Actual Retry Count: 24
Sep 26 13:34:59 storage3 (da2:mps0:0:2:0): Retrying command (per sense data)
Sep 26 13:35:00 storage3 (da2:mps0:0:2:0): READ(10). CDB: 28 00 3e b2 c3 58 00 00 40 00
Sep 26 13:35:00 storage3 (da2:mps0:0:2:0): CAM status: SCSI Status Error
Sep 26 13:35:00 storage3 (da2:mps0:0:2:0): SCSI status: Check Condition
Sep 26 13:35:00 storage3 (da2:mps0:0:2:0): SCSI sense: Deferred error: HARDWARE FAILURE asc:3,0 (Peripheral device write fault)
Sep 26 13:35:00 storage3 (da2:mps0:0:2:0): Info: 0x31599033
Sep 26 13:35:00 storage3 (da2:mps0:0:2:0): Field Replaceable Unit: 8
Sep 26 13:35:00 storage3 (da2:mps0:0:2:0): Actual Retry Count: 24
Sep 26 13:35:00 storage3 (da2:mps0:0:2:0): Retrying command (per sense data)


After a while, these messages started to show up:
Code:
Sep 26 13:39:15 storage3 (da2:mps0:0:2:0): Retrying command (per sense data)
Sep 26 13:40:04 storage3     (da2:mps0:0:2:0): WRITE(10). CDB: 2a 00 00 40 04 50 00 00 08 00 length 4096 SMID 86 command timeout cm 0xffffff800110ee30 ccb 0xfffffe0078fa4000
Sep 26 13:40:04 storage3     (noperiph:mps0:0:4294967295:0): SMID 2 Aborting command 0xffffff800110ee30
Sep 26 13:40:04 storage3 (da2:mps0:0:2:0): WRITE(10). CDB: 2a 00 00 40 04 50 00 00 08 00
Sep 26 13:40:05 storage3 (da2:mps0:0:2:0): CAM status: Command timeout
Sep 26 13:40:05 storage3 (da2:mps0:0:2:0): Retrying command
Sep 26 13:40:05 storage3     (da2:mps0:0:2:0): WRITE(10). CDB: 2a 00 68 cb 9a 50 00 00 08 00 length 4096 SMID 628 command timeout cm 0xffffff800113a4a0 ccb 0xfffffe000f5fa800
Sep 26 13:40:05 storage3     (noperiph:mps0:0:4294967295:0): SMID 3 Aborting command 0xffffff800113a4a0
Sep 26 13:40:06 storage3 (da2:mps0:0:2:0): WRITE(10). CDB: 2a 00 68 cb 9a 50 00 00 08 00
Sep 26 13:40:06 storage3 (da2:mps0:0:2:0): CAM status: Command timeout
Sep 26 13:40:06 storage3 (da2:mps0:0:2:0): Retrying command
Sep 26 13:40:06 storage3     (da2:mps0:0:2:0): WRITE(10). CDB: 2a 00 68 cb 9c 50 00 00 08 00 length 4096 SMID 702 command timeout cm 0xffffff8001140370 ccb 0xfffffe007903b000
Sep 26 13:40:06 storage3     (noperiph:mps0:0:4294967295:0): SMID 4 Aborting command 0xffffff8001140370
Sep 26 13:40:06 storage3 (da2:mps0:0:2:0): WRITE(10). CDB: 2a 00 68 cb 9c 50 00 00 08 00
Sep 26 13:40:06 storage3 (da2:mps0:0:2:0): CAM status: Command timeout
Sep 26 13:40:06 storage3 (da2:mps0:0:2:0): Retrying command
Sep 26 13:40:07 storage3     (da2:mps0:0:2:0): READ(10). CDB: 28 00 28 5c 08 20 00 00 40 00 length 32768 SMID 904 command timeout cm 0xffffff8001150640 ccb 0xfffffe000f5f5800
Sep 26 13:40:07 storage3     (noperiph:mps0:0:4294967295:0): SMID 5 Aborting command 0xffffff8001150640
Sep 26 13:40:07 storage3 (da2:mps0:0:2:0): READ(10). CDB: 28 00 28 5c 08 20 00 00 40 00
Sep 26 13:40:07 storage3 (da2:mps0:0:2:0): CAM status: Command timeout
Sep 26 13:40:07 storage3 (da2:mps0:0:2:0): Retrying command
Sep 26 13:40:08 storage3     (da2:mps0:0:2:0): READ(10). CDB: 28 00 28 5c 0c 60 00 00 40 00 length 32768 SMID 480 command timeout cm 0xffffff800112e700 ccb 0xfffffe074c86a000
Sep 26 13:40:08 storage3     (noperiph:mps0:0:4294967295:0): SMID 6 Aborting command 0xffffff800112e700
Sep 26 13:40:08 storage3 (da2:mps0:0:2:0): READ(10). CDB: 28 00 28 5c 0c 60 00 00 40 00
Sep 26 13:40:08 storage3 (da2:mps0:0:2:0): CAM status: Command timeout
Sep 26 13:40:08 storage3 (da2:mps0:0:2:0): Retrying command



Yesterday morning the same disk failed again. The alert mail stated that "The volume pool1 (ZFS) state is DEGRADED: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue functioning in a degraded state." Everything worked properly with a degraded pool.

The same messages appeared in the logs, but only for 1 minute and 30 seconds; after that the system continued quietly:
Code:
Sep 30 03:02:03 storage3 (da2:mps0:0:2:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Sep 30 03:02:03 storage3 (da2:mps0:0:2:0): CAM status: SCSI Status Error
Sep 30 03:02:03 storage3 (da2:mps0:0:2:0): SCSI status: Check Condition
Sep 30 03:02:03 storage3 (da2:mps0:0:2:0): SCSI sense: Deferred error: HARDWARE FAILURE asc:3,0 (Peripheral device write fault)
Sep 30 03:02:03 storage3 (da2:mps0:0:2:0): Info: 0x31bee4fd
Sep 30 03:02:03 storage3 (da2:mps0:0:2:0): Field Replaceable Unit: 8
Sep 30 03:02:03 storage3 (da2:mps0:0:2:0): Actual Retry Count: 24
Sep 30 03:02:03 storage3 (da2:mps0:0:2:0): Retrying command (per sense data)
Sep 30 03:02:04 storage3 (da2:mps0:0:2:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Sep 30 03:02:04 storage3 (da2:mps0:0:2:0): CAM status: SCSI Status Error
Sep 30 03:02:04 storage3 (da2:mps0:0:2:0): SCSI status: Check Condition
Sep 30 03:02:04 storage3 (da2:mps0:0:2:0): SCSI sense: Deferred error: HARDWARE FAILURE asc:3,0 (Peripheral device write fault)
Sep 30 03:02:04 storage3 (da2:mps0:0:2:0): Info: 0x31bee4fe
Sep 30 03:02:04 storage3 (da2:mps0:0:2:0): Field Replaceable Unit: 8
Sep 30 03:02:04 storage3 (da2:mps0:0:2:0): Actual Retry Count: 24
Sep 30 03:02:05 storage3 (da2:mps0:0:2:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Sep 30 03:02:05 storage3 (da2:mps0:0:2:0): CAM status: SCSI Status Error
Sep 30 03:02:05 storage3 (da2:mps0:0:2:0): SCSI status: Check Condition
Sep 30 03:02:05 storage3 (da2:mps0:0:2:0): SCSI sense: Deferred error: HARDWARE FAILURE asc:3,0 (Peripheral device write fault)
Sep 30 03:02:05 storage3 (da2:mps0:0:2:0): Info: 0x31bee4ff
Sep 30 03:02:05 storage3 (da2:mps0:0:2:0): Field Replaceable Unit: 8
Sep 30 03:02:05 storage3 (da2:mps0:0:2:0): Actual Retry Count: 24
Sep 30 03:02:05 storage3 (da2:mps0:0:2:0): Retrying command (per sense data)
Sep 30 03:02:06 storage3 (da2:mps0:0:2:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00 



When I got to work I decided to switch to a spare disk. I accidentally replaced it with a disk that had previous issues, so after the resilver finished things went bad again. FreeNAS was inaccessible just like the first time. This time we got no email, and after a reboot it did not come up again; I guess it was busy retrying reads on the bad disk. After we pulled the bad disk the machine started as normal and we could replace it with the correct spare.
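
In case it helps anyone, the CLI equivalent of the replacement is roughly this (a sketch; I'm not claiming this is exactly what the GUI does, and the gptid names are placeholders for the values shown by "zpool status"):
Code:
# take the failing member offline before physically pulling it
zpool offline pool1 gptid/OLD-MEMBER
# after inserting and partitioning the spare, resilver onto it
zpool replace pool1 gptid/OLD-MEMBER gptid/NEW-SPARE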

Logs were filled with these messages, which continued on the console after the reboot:
Code:
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): READ(10). CDB: 28 00 00 40 02 a0 00 00 e0 00
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): CAM status: SCSI Status Error
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): SCSI status: Check Condition
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): SCSI sense: HARDWARE FAILURE asc:32,0 (No defect spare location available)
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): Info: 0x4002d3
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): Field Replaceable Unit: 157
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): Command Specific Info: 0xa1615169
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): Actual Retry Count: 255
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): Retrying command (per sense data)
Sep 30 15:43:24 storage3 (da10:mps2:0:0:0): READ(10). CDB: 28 00 00 40 02 a0 00 00 e0 00
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): CAM status: SCSI Status Error
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): SCSI status: Check Condition
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): SCSI sense: HARDWARE FAILURE asc:32,0 (No defect spare location available)
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Info: 0x4002d3
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Field Replaceable Unit: 157
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Command Specific Info: 0xa1615169
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Actual Retry Count: 255
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Retrying command (per sense data)
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): READ(10). CDB: 28 00 00 40 02 a0 00 00 e0 00
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): CAM status: SCSI Status Error
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): SCSI status: Check Condition
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): SCSI sense: HARDWARE FAILURE asc:32,0 (No defect spare location available)
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Info: 0x4002c1
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Field Replaceable Unit: 157
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Command Specific Info: 0xa1615169
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Actual Retry Count: 255
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Retrying command (per sense data)
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): READ(10). CDB: 28 00 00 40 02 a0 00 00 e0 00
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): CAM status: SCSI Status Error
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): SCSI status: Check Condition
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): SCSI sense: HARDWARE FAILURE asc:32,0 (No defect spare location available)
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Info: 0x4002c1
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Field Replaceable Unit: 157
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Command Specific Info: 0xa1615169
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Actual Retry Count: 255
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Retrying command (per sense data)
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): READ(10). CDB: 28 00 00 40 02 a0 00 00 e0 00
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): CAM status: SCSI Status Error
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): SCSI status: Check Condition
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): SCSI sense: HARDWARE FAILURE asc:32,0 (No defect spare location available)
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Info: 0x4002d0
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Field Replaceable Unit: 157
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Command Specific Info: 0xa1615169
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Actual Retry Count: 255
Sep 30 15:43:57 storage3 (da10:mps2:0:0:0): Error 5, Retries exhausted



Hardware
  • Supermicro X8DTH
  • 2x E5620 @ 2.40GHz
  • 48GB DDR3 RAM
  • 2x LSI SAS9211-8i (IT) + built-in SMC2008-IT
  • 4x Intel 311 20GB SSD (syspool + ZIL)
  • 8x Seagate Savvio 10K.5 64MB 900GB SAS (6 disk raid z2 + 2 spare)
  • 2x Intel 510 Series 2.5" SSD 250GB (cache)
We are running FreeNAS-9.3-STABLE-201506162331
 

cyberjock
Give me the output of:

Code:
sas2flash -listall
sas2flash -list -c 0
sas2flash -list -c 1
sas2flash -list -c 2
smartctl -a /dev/da2
smartctl -a /dev/da10
 

AndersG
sas2flash -listall
Code:
LSI Corporation SAS2 Flash Utility
Version 16.00.00.00 (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

    Adapter Selected is a LSI SAS: SAS2008(B2) 

Num   Ctlr            FW Ver        NVDATA        x86-BIOS         PCI Addr
----------------------------------------------------------------------------

0  SAS2008(B2)     16.00.00.00    10.00.00.06    07.31.00.00     00:03:00:00
1  SAS2008(B1)     16.00.01.00    10.00.00.04    07.31.00.00     00:05:00:00
2  SAS2008(B2)     16.00.00.00    10.00.00.06    07.31.00.00     00:86:00:00

    Finished Processing Commands Successfully.
    Exiting SAS2Flash.


sas2flash -list -c 0
Code:
LSI Corporation SAS2 Flash Utility
Version 16.00.00.00 (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

   Adapter Selected is a LSI SAS: SAS2008(B2)  

   Controller Number  : 0
   Controller  : SAS2008(B2)  
   PCI Address  : 00:03:00:00
   SAS Address  : 500605b-0-035b-5a70
   NVDATA Version (Default)  : 10.00.00.06
   NVDATA Version (Persistent)  : 10.00.00.06
   Firmware Product ID  : 0x2213 (IT)
   Firmware Version  : 16.00.00.00
   NVDATA Vendor  : LSI
   NVDATA Product ID  : SAS9211-8i
   BIOS Version  : 07.31.00.00
   UEFI BSD Version  : N/A
   FCODE Version  : N/A
   Board Name  : SAS9211-8i
   Board Assembly  : H3-25250-02B
   Board Tracer Number  : SP11407589

   Finished Processing Commands Successfully.
   Exiting SAS2Flash.


sas2flash -list -c 1
Code:
LSI Corporation SAS2 Flash Utility
Version 16.00.00.00 (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

   Adapter Selected is a LSI SAS: SAS2008(B1)  

   Controller Number  : 1
   Controller  : SAS2008(B1)  
   PCI Address  : 00:05:00:00
   SAS Address  : 5003048-0-0075-b700
   NVDATA Version (Default)  : 10.00.00.04
   NVDATA Version (Persistent)  : 10.00.00.04
   Firmware Product ID  : 0x2213 (IT)
   Firmware Version  : 16.00.01.00
   NVDATA Vendor  : LSI
   NVDATA Product ID  : SAS2008-IT
   BIOS Version  : 07.31.00.00
   UEFI BSD Version  : N/A
   FCODE Version  : N/A
   Board Name  : SMC2008-IT
   Board Assembly  : N/A
   Board Tracer Number  : N/A

   Finished Processing Commands Successfully.
   Exiting SAS2Flash.


sas2flash -list -c 2
Code:
LSI Corporation SAS2 Flash Utility
Version 16.00.00.00 (2013.03.01)
Copyright (c) 2008-2013 LSI Corporation. All rights reserved

   Adapter Selected is a LSI SAS: SAS2008(B2)  

   Controller Number  : 2
   Controller  : SAS2008(B2)  
   PCI Address  : 00:86:00:00
   SAS Address  : 500605b-0-035b-6fd0
   NVDATA Version (Default)  : 10.00.00.06
   NVDATA Version (Persistent)  : 10.00.00.06
   Firmware Product ID  : 0x2213 (IT)
   Firmware Version  : 16.00.00.00
   NVDATA Vendor  : LSI
   NVDATA Product ID  : SAS9211-8i
   BIOS Version  : 07.31.00.00
   UEFI BSD Version  : N/A
   FCODE Version  : N/A
   Board Name  : SAS9211-8i
   Board Assembly  : H3-25250-02B
   Board Tracer Number  : SP11407368

   Finished Processing Commands Successfully.
   Exiting SAS2Flash.


smartctl -a /dev/da2
Code:
smartctl 6.3 2014-07-26 r3976 [FreeBSD 9.3-RELEASE-p16 amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:  SEAGATE
Product:  ST9900805SS
Revision:  0002
Compliance:  SPC-4
User Capacity:  900,185,481,216 bytes [900 GB]
Logical block size:  512 bytes
Rotation Rate:  10000 rpm
Form Factor:  2.5 inches
Logical Unit id:  0x5000c5003c433847
Serial number:  6XS13XRQ0000M151E14S
Device type:  disk
Transport protocol:  SAS (SPL-3)
Local Time is:  Thu Oct  1 16:50:34 2015 CEST
SMART support is:  Available - device has SMART capability.
SMART support is:  Enabled
Temperature Warning:  Enabled

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK

Current Drive Temperature:  26 C
Drive Trip Temperature:  68 C

Manufactured in week 32 of year 2011
Specified cycle count over device lifetime:  10000
Accumulated start-stop cycles:  21
Specified load-unload count over device lifetime:  300000
Accumulated load-unload cycles:  64
Elements in grown defect list: 32

Vendor (Seagate) cache information
  Blocks sent to initiator = 13648
  Blocks received from initiator = 3434921
  Blocks read from cache and sent to initiator = 38653
  Number of read and write commands whose size <= segment size = 13
  Number of read and write commands whose size > segment size = 0

Vendor (Seagate/Hitachi) factory information
  number of hours powered up = 33775.22
  number of minutes until next internal SMART test = 10

Error counter log:
           Errors Corrected by           Total   Correction     Gigabytes    Total
               ECC          rereads/      errors   algorithm      processed    uncorrected
           fast | delayed   rewrites  corrected  invocations   [10^9 bytes]  errors
read:      24697        0         0     24697          0          0.007           0
write:         0        0         0         0          0          1.783           0

Non-medium error count:  5


[GLTSD (Global Logging Target Save Disable) set. Enable Save with '-S on']
No self-tests have been logged


/dev/da10 is no longer in the system, but I remember it had over 2000 elements in its grown defect list, and several uncorrected errors. It should have been removed a long time ago, but got some limelight again. :) It just struck me that there may still be some warranty on these drives.
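
A long SMART self-test is probably worth scheduling on the remaining disks as well (a sketch, using da2 as the example device):
Code:
# start a long surface self-test in the background
smartctl -t long /dev/da2
# later, review health status, the grown defect list and test results
smartctl -a /dev/da2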
 

cyberjock
"If I understand it correctly, that's what TLER avoids?"

Yes, it is supposed to help mitigate such problems.

@AndersG

Well, if you lose 2 disks in the zpool, then the whole thing comes crashing down. The .system dataset is used as scratch space as well as storage for config files for system services. Pull the rug out from under the system services and the system gets cranky. Totally expected, which is why you are supposed to replace failing disks when they start failing ;)

That being said, da10 and da2 should be replaced. I don't know the type of zpool you have, or if other disks are failing. I'm guessing you are running RAIDZ1?

Can you post the output of "zpool status"?
 

AndersG
Yep, we run RAIDZ2; we're not bold enough to do Z1. Also, note that only one disk at a time was in a bad state, and things did get better when the disk was removed so that only 5 disks were used for the pool.

I can see that we have the system dataset on pool1. Would it make sense to move it to the boot pool, or is more space needed? Boot consists of mirrored Intel 311 20GB SSDs.
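
(I spotted this in the GUI, but the CLI equivalent is something like the following; the grep pattern is just my guess at the dataset naming:)
Code:
# list datasets and see which pool hosts the .system dataset
zfs list -o name,used,mountpoint | grep -i '\.system'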

From what I can see in the logs, the system was quite happy apart from the bad disk. I'm not sure, but my guess is that the system was busy trying to use the bad disk.
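
One thing I'm considering (untested; the values are examples only) is shortening CAM's retry behavior so a dying disk gets faulted faster instead of stalling all I/O:
Code:
# untested idea: make CAM give up on a dying da device sooner
# (FreeBSD sysctls; the values below are examples, not recommendations)
sysctl kern.cam.da.retry_count=2
sysctl kern.cam.da.default_timeout=30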

zpool status
Code:
  pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Mon Sep 28 03:45:08 2015
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        da5p2   ONLINE       0     0     0
        da6p2   ONLINE       0     0     0

errors: No known data errors

  pool: pool1
state: ONLINE
  scan: resilvered 228G in 5h2m with 0 errors on Wed Sep 30 23:33:30 2015
config:

    NAME                                            STATE     READ WRITE CKSUM
    pool1                                           ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/9d2746e6-132d-11e5-b6b5-002590093c1c  ONLINE       0     0     0
        gptid/9d925045-132d-11e5-b6b5-002590093c1c  ONLINE       0     0     0
        gptid/af888062-6790-11e5-8458-002590093c1c  ONLINE       0     0     0
        gptid/9e6136e9-132d-11e5-b6b5-002590093c1c  ONLINE       0     0     0
        gptid/9eccc8d3-132d-11e5-b6b5-002590093c1c  ONLINE       0     0     0
        gptid/9f3cc82d-132d-11e5-b6b5-002590093c1c  ONLINE       0     0     0
    logs
      mirror-1                                      ONLINE       0     0     0
        gptid/a07c638e-132d-11e5-b6b5-002590093c1c  ONLINE       0     0     0
        gptid/a0bff56d-132d-11e5-b6b5-002590093c1c  ONLINE       0     0     0
    cache
      gptid/9f8e1a88-132d-11e5-b6b5-002590093c1c    ONLINE       0     0     0
      gptid/9fc496cd-132d-11e5-b6b5-002590093c1c    ONLINE       0     0     0

errors: No known data errors
 