Unrecoverable Device Error

Status
Not open for further replies.

mjk79

Explorer
Joined
Nov 4, 2014
Messages
67
I've been running my FreeNAS box for about six months; the other day I received this.



Checking status of zfs pools:
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
freenas-boot 7.19G 5.63G 1.56G - - 78% 1.00x ONLINE -
vol1 21.8T 5.82T 15.9T - 5% 26% 1.00x ONLINE /mnt

pool: vol1
state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
using 'zpool clear' or replace the device with 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-9P
scan: resilvered 228K in 0h0m with 0 errors on Sat Jul 18 01:33:06 2015
config:
NAME STATE READ WRITE CKSUM
vol1 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
gptid/179f1b3d-fa50-11e4-958a-d05099374ad7 ONLINE 0 0 0
gptid/49f9306e-f9e9-11e4-958a-d05099374ad7 ONLINE 0 0 0
gptid/441b72af-c3df-11e4-9e96-d05099374ad7 ONLINE 0 0 0
gptid/dcea2496-c3ac-11e4-9e96-d05099374ad7 ONLINE 0 1 0
cache
gptid/27aa82a7-66f7-11e4-a386-d05099374ad6 ONLINE 0 0 0
errors: No known data errors
-- End of daily output --



My question is: at what point should I be looking at replacing this drive? Is one error OK? Should I replace it only if I get more, or just replace it now?

All my disks are under warranty so replacing it isn't a problem.
Thanks for any input you guys have.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Looks like this error was "fixed". The question remains, why did you have this error?

What does "smartctl -x /dev/da0" say? Let us have a look at that. If that looks OK, then we'll "zpool clear" the error and go about our day, never to solve the mystery. If not, then we can counsel you to begin replacing the drive.
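As a side note, the check being described here is easy to semi-automate: scan the "zpool status" columns for nonzero READ/WRITE/CKSUM counters. A minimal sketch (the vdev names are the ones from this thread; on a live system you would pipe in the output of "zpool status vol1" instead of the sample lines):

```shell
# flag_errors: print any vdev line from "zpool status" output whose
# READ/WRITE/CKSUM counters are nonzero.
flag_errors() {
  awk '$2 == "ONLINE" && ($3 + $4 + $5) > 0 {
         print $1 ": READ=" $3 " WRITE=" $4 " CKSUM=" $5 }'
}

# Demo on two vdev lines from the status block above:
printf '%s\n' \
  'gptid/49f9306e-f9e9-11e4-958a-d05099374ad7 ONLINE 0 0 0' \
  'gptid/dcea2496-c3ac-11e4-9e96-d05099374ad7 ONLINE 0 1 0' \
  | flag_errors
# prints: gptid/dcea2496-c3ac-11e4-9e96-d05099374ad7: READ=0 WRITE=1 CKSUM=0
```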
 

mjk79

Explorer
Joined
Nov 4, 2014
Messages
67
[root@freenas] ~# smartctl -x /dev/da0
smartctl 6.3 2014-07-26 r3976 [FreeBSD 9.3-RELEASE-p16 amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD60EFRX-68MYMN1
Serial Number: WD-WX11DC4FKAAH
LU WWN Device Id: 5 0014ee 20b917fce
Firmware Version: 82.00A82
User Capacity: 6,001,175,126,016 bytes [6.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5700 rpm
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Wed Jul 22 23:30:12 2015 MDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is: Unavailable
APM feature is: Unavailable
Rd look-ahead is: Enabled
Write cache is: Enabled
ATA Security is: Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 5984) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 713) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x303d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate POSR-K 200 200 051 - 0
3 Spin_Up_Time POS--K 100 253 021 - 0
4 Start_Stop_Count -O--CK 100 100 000 - 6
5 Reallocated_Sector_Ct PO--CK 200 200 140 - 0
7 Seek_Error_Rate -OSR-K 200 200 000 - 0
9 Power_On_Hours -O--CK 096 096 000 - 3355
10 Spin_Retry_Count -O--CK 100 253 000 - 0
11 Calibration_Retry_Count -O--CK 100 253 000 - 0
12 Power_Cycle_Count -O--CK 100 100 000 - 6
192 Power-Off_Retract_Count -O--CK 200 200 000 - 3
193 Load_Cycle_Count -O--CK 200 200 000 - 98
194 Temperature_Celsius -O---K 113 107 000 - 39
196 Reallocated_Event_Count -O--CK 200 200 000 - 0
197 Current_Pending_Sector -O--CK 200 200 000 - 0
198 Offline_Uncorrectable ----CK 100 253 000 - 0
199 UDMA_CRC_Error_Count -O--CK 200 200 000 - 0
200 Multi_Zone_Error_Rate ---R-- 100 253 000 - 0
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning

General Purpose Log Directory Version 1
SMART Log Directory Version 1 [multi-sector log support]
Address Access R/W Size Description
0x00 GPL,SL R/O 1 Log Directory
0x01 SL R/O 1 Summary SMART error log
0x02 SL R/O 5 Comprehensive SMART error log
0x03 GPL R/O 6 Ext. Comprehensive SMART error log
0x06 SL R/O 1 SMART self-test log
0x07 GPL R/O 1 Extended self-test log
0x09 SL R/W 1 Selective self-test log
0x10 GPL R/O 1 NCQ Command Error log
0x11 GPL R/O 1 SATA Phy Event Counters
0x21 GPL R/O 1 Write stream error log
0x22 GPL R/O 1 Read stream error log
0x30 GPL,SL R/O 9 IDENTIFY DEVICE data log
0x80-0x9f GPL,SL R/W 16 Host vendor specific log
0xa0-0xa7 GPL,SL VS 16 Device vendor specific log
0xa8-0xb6 GPL,SL VS 1 Device vendor specific log
0xb7 GPL,SL VS 40 Device vendor specific log
0xbd GPL,SL VS 1 Device vendor specific log
0xc0 GPL,SL VS 1 Device vendor specific log
0xc1 GPL VS 93 Device vendor specific log
0xe0 GPL,SL R/W 1 SCT Command/Status
0xe1 GPL,SL R/W 1 SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
Device Error Count: 1
CR = Command Register
FEATR = Features Register
COUNT = Count (was: Sector Count) Register
LBA_48 = Upper bytes of LBA High/Mid/Low Registers ] ATA-8
LH = LBA High (was: Cylinder High) Register ] LBA
LM = LBA Mid (was: Cylinder Low) Register ] Register
LL = LBA Low (was: Sector Number) Register ]
DV = Device (was: Device/Head) Register
DC = Device Control Register
ER = Error register
ST = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 1 [0] occurred at disk power-on lifetime: 3237 hours (134 days + 21 hours)
When the command that caused the error occurred, the device was active or idle.

After command completion occurred, registers were:
ER -- ST COUNT LBA_48 LH LM LL DV DC
-- -- -- == -- == == == -- -- -- -- --
10 -- 51 00 00 00 01 b2 e5 72 50 40 00 Error: IDNF at LBA = 0x1b2e57250 = 7296348752

Commands leading to the command that caused the error were:
CR FEATR COUNT LBA_48 LH LM LL DV DC Powered_Up_Time Command/Feature_Name
-- == -- == -- == == == -- -- -- -- -- --------------- --------------------
61 00 08 00 00 00 01 b2 e5 72 50 40 00 38d+11:56:33.985 WRITE FPDMA QUEUED
ea 00 00 00 00 00 00 00 00 00 00 40 00 38d+11:56:29.070 FLUSH CACHE EXT
61 00 08 00 00 00 02 ba a0 f4 70 40 00 38d+11:56:29.070 WRITE FPDMA QUEUED
61 00 08 00 10 00 02 ba a0 f2 70 40 00 38d+11:56:29.070 WRITE FPDMA QUEUED
61 00 08 00 08 00 00 00 40 04 70 40 00 38d+11:56:29.070 WRITE FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
No self-tests have been logged. [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version: 3
SCT Version (vendor specific): 258 (0x0102)
SCT Support Level: 1
Device State: Active (0)
Current Temperature: 39 Celsius
Power Cycle Min/Max Temperature: 28/44 Celsius
Lifetime Min/Max Temperature: 2/45 Celsius
Under/Over Temperature Limit Count: 0/0

SCT Temperature History Version: 2
Temperature Sampling Period: 1 minute
Temperature Logging Interval: 1 minute
Min/Max recommended Temperature: 0/60 Celsius
Min/Max Temperature Limit: -41/85 Celsius
Temperature History Size (Index): 478 (387)

Index Estimated Time Temperature Celsius
388 2015-07-22 15:33 40 *********************
... ..( 2 skipped). .. *********************
391 2015-07-22 15:36 40 *********************
392 2015-07-22 15:37 39 ********************
... ..( 37 skipped). .. ********************
430 2015-07-22 16:15 39 ********************
431 2015-07-22 16:16 38 *******************
... ..( 41 skipped). .. *******************
473 2015-07-22 16:58 38 *******************
474 2015-07-22 16:59 39 ********************
... ..(187 skipped). .. ********************
184 2015-07-22 20:07 39 ********************
185 2015-07-22 20:08 40 *********************
... ..(201 skipped). .. *********************
387 2015-07-22 23:30 40 *********************

SCT Error Recovery Control:
Read: 70 (7.0 seconds)
Write: 70 (7.0 seconds)

Device Statistics (GP Log 0x04) not supported

SATA Phy Event Counters (GP Log 0x11)
ID Size Value Description
0x0001 2 0 Command failed due to ICRC error
0x0002 2 0 R_ERR response for data FIS
0x0003 2 0 R_ERR response for device-to-host data FIS
0x0004 2 0 R_ERR response for host-to-device data FIS
0x0005 2 0 R_ERR response for non-data FIS
0x0006 2 0 R_ERR response for device-to-host non-data FIS
0x0007 2 0 R_ERR response for host-to-device non-data FIS
0x0008 2 0 Device-to-host non-data FIS retries
0x0009 2 9 Transition from drive PhyRdy to drive PhyNRdy
0x000a 2 24 Device-to-host register FISes sent due to a COMRESET
0x000b 2 0 CRC errors within host-to-device FIS
0x000f 2 0 R_ERR response for host-to-device data FIS, CRC
0x0012 2 0 R_ERR response for host-to-device non-data FIS, CRC
0x8000 4 3750405 Vendor specific
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
OK, so here is my analysis:

  1. No SMART test of any kind has ever been performed on this drive. Why not? You must not have read the documentation.
  2. You definitely show a hard-failed write event. That's not good.
  3. I would have you do a long SMART test on this drive, but it would take an interminable amount of time.
  4. Set up periodic "long" smart tests, say, every 2 weeks for these drives. These are in your "tasks" menu.
  5. Whatever is wrong with this drive (which we can't really tell, because the drive has not been properly tested and maintained) is bad. I would RMA it, personally.
  6. The temperatures of this drive are a bit toasty. This drive's history reports that it frequently gets above 40C. That's higher than any of us would recommend for a 6TB WD Red, and may be the reason for the premature failure, or at least a contributing factor. You should investigate and mitigate the higher-than-desired temperatures of your drives.
  7. This drive may fail a "conveyance" test, and that would be short to run. You can try it: "smartctl -t conveyance /dev/da0"
NOW, before you do this: I notice you only have a few drives here, and they're labeled "da0, da1, ..." etc., which means that you are NOT using SATA ports but some other controller. If you were doing THAT wrong, you could get these errors even though your hard drives were fine. Can you tell us the complete rundown of your hardware, sir, including the make/model of the motherboard and what disk controller is in use here?
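For what it's worth, the attributes the drive is being judged by can be pulled out of a smartctl dump mechanically. A rough sketch, assuming the usual "red flag" attribute IDs (5 reallocated, 197 pending, 198 offline-uncorrectable, 199 UDMA CRC); the sample lines are from the output posted above:

```shell
# watch_attrs: extract the SMART attributes most commonly watched for
# signs of a dying disk from "smartctl -A" style output.
watch_attrs() {
  awk '$1 == 5 || $1 == 197 || $1 == 198 || $1 == 199 {
         print $2 " = " $NF }'
}

# Demo on lines from the smartctl output above:
printf '%s\n' \
  '  5 Reallocated_Sector_Ct   PO--CK 200 200 140 - 0' \
  '197 Current_Pending_Sector  -O--CK 200 200 000 - 0' \
  | watch_attrs
# prints:
# Reallocated_Sector_Ct = 0
# Current_Pending_Sector = 0
```

Anything nonzero in those raw values (on top of the IDNF write error already logged) would strengthen the case for an RMA.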
 

mjk79

Explorer
Joined
Nov 4, 2014
Messages
67
Damn, OK, now I see where I screwed up: I had SMART tests turned on, but apparently only on one device (ada0).

Intel Xeon E3-1241 v3 at 3.5 GHz
32 GB Crucial 240-pin DDR3 ECC SDRAM
ASRock C226 WS micro-ATX motherboard
Crucial M550 M.2 2280 256 GB SATA SSD (caching)
IBM M1015 flashed to LSI 9211-8i IT mode
Fractal Design Node 804
Noctua NH-U125 CPU cooler
4x 6 TB WD Red hard drives
This case has a fan controller on the HDD side; it was set to low for noise, but I increased it to high. It's fully populated with fans, so hopefully that will help with any temp issues.

How often should I be running short tests on these?
Assuming you think all is good, I'll go ahead and run the long SMART test.
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Most of us schedule daily short tests and weekly long tests, with one scrub maybe once a month.
I don't know where you got this. Certainly none of the guides or guidance I or Cyberjock have written in this forum would suggest it, and I'm not aware of any other guides suggesting it. None of the most active guys in the forum do this, to my knowledge. We schedule short tests once every 4-7 days, and we schedule scrubs and long SMART tests in alternating weeks, for consumer-grade equipment. And we are pretty sure this is near the boundary of overkill. You, on the other hand, are suggesting a schedule which is far more rigorous than that.

Daily short tests? I mean, you won't hurt anything... but... yeah. Weekly long tests? Again, probably won't hurt anything (even though that does exercise every sector on the drive), but probably overkill. One scrub per month? That's probably on the outer edge of what is reasonable; I'd suggest something more frequent, in contrast to your other two, which seem too frequent.

I guess I just need to dispute the statement "most of us" implying that this is somehow de rigueur, and that, in fact, most of the most active posters in the forum would be doing this. To my knowledge, we are not. The schedule you suggest is overkill on the smart tests for certain, and borderline underkill for the scrubs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Daily is just too much in my opinion.

Short test every 3 days.
Long test on 14th and 28th of the month.
Scrubs on 1st and 15th.

No clue what you think "most of us" are doing, but you're wrong in every way in my opinion.
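To make that cadence concrete, here is a cron-style sketch of the schedule above. This is illustrative only: in FreeNAS these jobs are normally configured in the GUI (Tasks for S.M.A.R.T. tests, the volume settings for scrubs), not in a hand-edited crontab, and da0-da3 stand in for the four data disks in this thread.

```
# Short test every 3 days at 03:00:
0 3 */3 * *   for d in da0 da1 da2 da3; do smartctl -t short /dev/$d; done
# Long test on the 14th and 28th at 03:00:
0 3 14,28 * * for d in da0 da1 da2 da3; do smartctl -t long /dev/$d; done
# Scrub on the 1st and 15th at 04:00:
0 4 1,15 * *  zpool scrub vol1
```

Staggering the long tests (14th/28th) against the scrubs (1st/15th) keeps the two heaviest I/O workloads from ever landing on the same day.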
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'm a home user!
 

mjk79

Explorer
Joined
Nov 4, 2014
Messages
67
I went ahead and ordered a replacement drive and set the smart test/scrub schedule to match Cyberjock's guide.

One more question: is there a way to run a test now, or should I just set a new schedule if I want to run one immediately?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Oh is this like one of those meetings. Hi, I'm a home user and I don't perform daily SMART short tests.
Hi, Anodos.

I'm a home user and I do run daily SMART short tests. Why, you ask? Why not? Best case, they accomplish something. Worst case, nothing really happens, unless smartd decides that the config is too stupid and I end up wasting valuable developer time on fixing edge cases with the smartd config file. Oh, wait, the worst case did happen... :oops:
 
Joined
Oct 2, 2014
Messages
925
Daily is just too much in my opinion.

Short test every 3 days.
Long test on 14th and 28th of the month.
Scrubs on 1st and 15th.

No clue what you think "most of us" are doing, but you're wrong in every way in my opinion.
If a scrub should be run on the 1st and 15th, is doing it every 35 days wrong/bad?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If a scrub should be run on the 1st and 15th, is doing it every 35 days wrong/bad?

Depends on what your thought process is. The ZFS technical documents recommend weekly for consumer grade drives and monthly for server-grade. If you follow that advice then even my schedule is wrong.

It's about what you consider your threshold for pain to be. I consider my schedule to be a good happy-medium when considering all aspects of everything that is involved/affected by SMART tests and scrubs.

There are things you can do where I could argue that you are going over the top (such as hourly or daily SMART tests). Since the SMART log only keeps the previous 21 tests, it's nice to have some semblance of history that is more than a day old.

There are also things where you could argue that I'm not aggressive enough (such as not doing weekly scrubs because the ZFS technical docs say to).

There are plenty of ways to think about this, and there is no true "right" answer. The 35-day schedule exists because too many people weren't scheduling scrubs at all, and that is obviously very dangerous for your data. So the default 35-day scrub was meant as a CYA in case a user (or iXsystems customer) never actually sets up a scrub. If you have the knowledge/experience/wisdom to set up your own scrub schedule, it is trivially easy to simply delete the 35-day schedule if it conflicts with your other schedule. Overall, it's better to have a default 35-day scrub than no scrub at all.

I'm tired, but I hope this makes sense. :P
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
If a scrub should be run on the 1st and 15th, is doing it every 35 days wrong/bad?
Just be aware that if you schedule it twice a month but leave the 35-day threshold in place, it won't actually run twice a month.
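This point is easy to miss, so here is a toy simulation of it in awk (assuming 30-day months for simplicity): scrubs scheduled on the 1st and 15th, but gated by the default 35-day threshold.

```shell
# Simulate 120 days of "scrub scheduled on the 1st and 15th, but only
# run if the last scrub finished at least 35 days ago".
awk 'BEGIN {
  last = -999                      # day of the last scrub that ran
  for (day = 0; day < 120; day++) {
    dom = day % 30 + 1             # day-of-month, assuming 30-day months
    if ((dom == 1 || dom == 15) && day - last >= 35) {
      print "scrub runs on day " day
      last = day
    }
  }
}'
# prints:
# scrub runs on day 0
# scrub runs on day 44
# scrub runs on day 90
```

So with the 35-day threshold left in place, the effective cadence works out to roughly every 44-46 days, not twice a month.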
 