Recurring persistent errors on disks

warswe
Hi All,

I keep having disks show up with read/write errors, which causes ZFS to fault them and take them offline automatically.

My setup: Supermicro chassis with 12 disks in RAID-Z2 (two 6-disk vdevs), running FreeNAS 9.3.

The first time, I assumed the disk was genuinely faulty and replaced it.
The next two times I was more sceptical, so I ran a full analysis on one of the two replaced "faulty" disks; it came back without a single error, so the disks appear to be perfectly healthy.
This morning I had yet another faulty disk, the fourth in slightly more than a month.
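
(For reference, the kind of full check I mean — a minimal sketch, assuming the pulled disk shows up as /dev/da2; adjust the device name to your system:)

# kick off a long (extended) SMART self-test; on a 4 TB disk this takes hours
smartctl -t long /dev/da2
# once it finishes, check the verdict and the full attribute dump
smartctl -l selftest /dev/da2
smartctl -a /dev/da2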

My next step is to swap the faulted disk for one of the disks I previously removed, and I'm fairly confident the pool will accept it and resilver properly.
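
(On plain ZFS the swap itself is a one-liner — a sketch only: the new disk's gptid below is a placeholder, and on FreeNAS you would normally do this through the GUI so the GPT partitioning is handled for you:)

# replace the faulted member (gptid taken from the zpool status below)
# with the re-inserted disk
zpool replace fnas_pool_01 gptid/ac126516-8a09-11e5-9c0e-002590fd33bc gptid/<new-disk-gptid>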

However, I'm still looking for a way to fix this recurring problem for good.

Any hint as to where this could be coming from? I suspect a hardware problem with the chassis. Since the disk itself seems perfectly healthy, is there any way to restore its status to "healthy" without physically pulling and replacing it?
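
(As the status output below hints, ZFS itself can do this, provided the underlying cause really was transient — a sketch using the gptid of my faulted disk:)

# bring the faulted device back and reset its error counters
zpool online fnas_pool_01 gptid/ac126516-8a09-11e5-9c0e-002590fd33bc
zpool clear fnas_pool_01
# then verify the pool with a scrub
zpool scrub fnas_pool_01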

The 12 disks are brand-new WD RE series drives.
The Supermicro chassis is a SuperStorage Server 5028R-E1CR12L with a Super X10SRH-CLN4F mainboard, 32 GB of ECC RAM, and a Xeon CPU.


zpool status -v
  pool: fnas_pool_01
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub in progress since Mon Nov 16 09:55:50 2015
        816G scanned out of 1.59T at 53.3M/s, 4h20m to go
        0 repaired, 50.08% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        fnas_pool_01                                    DEGRADED     0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/ba21daa9-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
            gptid/ba719d1a-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
            gptid/bac20780-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
            gptid/bb0fd6d3-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
            gptid/bb5debf6-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
            gptid/bbad54eb-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
          raidz2-1                                      DEGRADED     0     0     0
            gptid/43340ff5-6f7a-11e5-ab29-002590fd33bc  ONLINE       0     0     0
            gptid/bc5490c3-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
            gptid/bca6bd1d-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
            gptid/bcf82bda-5af7-11e5-b362-002590fd33bc  ONLINE       0     0     0
            gptid/ac126516-8a09-11e5-9c0e-002590fd33bc  FAULTED     68   344     0  too many errors
            gptid/67ebb8c6-8493-11e5-9c0e-002590fd33bc  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Tue Oct 20 03:45:31 2015
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            da12p2    ONLINE       0     0     0
            da13p2    ONLINE       0     0     0

errors: No known data errors





/var/log/messages (right before losing the disk):

Nov 16 12:59:38 nvfs004 (noperiph:mpr0:0:4294967295:0): SMID 6 Aborting command 0xffffff8000a16bf0
Nov 16 12:59:39 nvfs004 (da2:mpr0:0:12:0): READ(10). CDB: 28 00 00 90 2b 38 00 00 18 00 length 12288 SMID 401 terminated ioc 804b scsi 0 state c xfer 0
Nov 16 12:59:39 nvfs004 (da2:mpr0:0:12:0): READ(10). CDB: 28 00 00 4f ae 18 00 00 08 00 length 4096 SMID 708 terminated ioc 804b scsi 0 state c xfer 0
Nov 16 12:59:39 nvfs004 (da2:mpr0:0:12:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Nov 16 12:59:39 nvfs004 (da2:mpr0:0:12:0): CAM status: Command timeout
Nov 16 12:59:39 nvfs004 (da2:mpr0:0:12:0): Retrying command
Nov 16 12:59:40 nvfs004 (da2:mpr0:0:12:0): SYNCHRONIZE CACHE(10). CDB: 35 00 00 00 00 00 00 00 00 00
Nov 16 12:59:40 nvfs004 (da2:mpr0:0:12:0): CAM status: SCSI Status Error
Nov 16 12:59:40 nvfs004 (da2:mpr0:0:12:0): SCSI status: Check Condition
Nov 16 12:59:40 nvfs004 (da2:mpr0:0:12:0): SCSI sense: UNIT ATTENTION asc:29,0 (Power on, reset, or bus device reset occurred)
Nov 16 12:59:40 nvfs004 (da2:mpr0:0:12:0): Error 6, Retries exhausted
Nov 16 12:59:40 nvfs004 (da2:mpr0:0:12:0): Invalidating pack
Nov 16 12:59:41 nvfs004 GEOM_ELI: g_eli_read_done() failed da2p1.eli[READ(offset=57344, length=8192)]
Nov 16 12:59:41 nvfs004 swap_pager: I/O error - pagein failed; blkno 6815771,size 8192, error 6
Nov 16 12:59:41 nvfs004 vm_fault: pager read error, pid 3499 (python2.7)
Nov 16 12:59:41 nvfs004 kernel: pid 3499 (python2.7), uid 0: exited on signal 11
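
Worth noting: these are CAM command timeouts and aborts coming from mpr (the LSI SAS3 driver), not media errors reported by the disk itself. A quick way to check whether the timeouts hit one slot or several (a sketch; /var/log/messages is the default log location):

# count timeout events per da device
grep 'CAM status: Command timeout' /var/log/messages | grep -o 'da[0-9]*' | sort | uniq -c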




SMART check of the latest failed disk:
smartctl 6.3 2014-07-26 r3976 [FreeBSD 9.3-RELEASE-p16 amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital RE4 (SATA 6Gb/s)
Device Model: WDC WD4000FYYZ-01UL1B2
Serial Number: WD-WCC136NY96HL
LU WWN Device Id: 5 0014ee 2609aab85
Firmware Version: 01.01K03
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Size: 512 bytes logical/physical
Rotation Rate: 7200 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Mon Nov 16 16:06:55 2015 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART Status not supported: Incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.

General SMART Values:
Offline data collection status:  (0x80) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (  25) The self-test routine was aborted by
                                        the host.
Total time to complete Offline
data collection:                (46140) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 498) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x70bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   100   253   021    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       1
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       73
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       1
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       0
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       1
194 Temperature_Celsius     0x0022   127   117   000    Old_age   Always       -       25
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Aborted by host               90%         72        -


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
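
One thing worth checking, given the command timeouts above: the drive reports SCT Error Recovery Control support. WD RE drives should ship with TLER enabled from the factory, but it can be verified (and set) with smartctl — a sketch, again assuming /dev/da2:

# show the current SCT error recovery (TLER) timeouts
smartctl -l scterc /dev/da2
# set read/write recovery limits to 7.0 seconds (values are in units of 0.1 s)
smartctl -l scterc,70,70 /dev/da2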




# camcontrol devlist
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 10 lun 0 (pass0,da0)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 11 lun 0 (pass1,da1)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 12 lun 0 (da2,pass2)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 13 lun 0 (da3,pass3)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 14 lun 0 (pass4,da4)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 15 lun 0 (pass5,da5)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 16 lun 0 (pass6,da6)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 17 lun 0 (pass7,da7)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 18 lun 0 (pass8,da8)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 19 lun 0 (pass9,da9)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 20 lun 0 (pass10,da10)
<ATA WDC WD4000FYYZ-0 1K03> at scbus0 target 21 lun 0 (pass11,da11)
<LSI SAS3x28 0601> at scbus0 target 22 lun 0 (pass12,ses0)
<SanDisk Ultra Fit 1.00> at scbus12 target 0 lun 0 (pass13,da12)
<SanDisk Ultra Fit 1.00> at scbus13 target 0 lun 0 (pass14,da13)
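
To tie the daX names above back to the gptid labels in zpool status, glabel provides the mapping (a sketch):

# list each gptid label and the da partition it lives on
glabel status | grep gptid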
 

warswe
I hate answering my own post and doing things in the wrong order... but there is apparently a known issue with the LSI SAS3 (mpr) driver. Although I'm not using any SAS drives, the SATA disks still sit behind the SAS3 HBA, so I'll update to 9.3.1 ASAP to see if that improves reliability.
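
(Before and after the update it's worth recording the driver and HBA firmware versions, so any change in behaviour can be correlated — a sketch; sas3flash is LSI's flash utility and may need to be installed separately:)

# driver and controller firmware versions as reported at boot
dmesg | grep -i mpr
# FreeNAS release currently running
cat /etc/version
# HBA firmware/BIOS versions via LSI's utility, if available
sas3flash -list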
 