Disk failing SMART test

lewisxy

Cadet
Joined
Jul 12, 2023
Messages
5
I am fairly new to TrueNAS and have been using it for almost a year. I recently noticed that my server's hard drive was making strange noises, and I got the following alerts when I logged in.
Code:
CRITICAL Device: /dev/ada1, 13 Currently unreadable (pending) sectors.
CRITICAL Device: /dev/ada1, 13 Offline uncorrectable sectors.

I did some research online and ran both the SMART short test and the long/extended test. Here are the results after the tests finished.

Code:
root@truenas[~]# smartctl -a /dev/ada1
smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org


=== START OF INFORMATION SECTION ===
Device Model:     ST18000NM000J-2TV103
Serial Number:    ZRXXXXXX
LU WWN Device Id: 5 000c50 0e34b9bc2
Firmware Version: SN01
User Capacity:    18,000,207,937,536 bytes [18.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-4 (minor revision not indicated)
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Wed Jul 12 20:08:24 2023 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled


=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED


General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      ( 116) The previous self-test completed having
                                        the read element of the test failed.
Total time to complete Offline
data collection:                (  559) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (1543) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x70bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.


SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   084   064   044    Pre-fail  Always       -       231541083
  3 Spin_Up_Time            0x0003   090   090   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       85
  5 Reallocated_Sector_Ct   0x0033   076   076   010    Pre-fail  Always       -       4020
  7 Seek_Error_Rate         0x000f   077   060   045    Pre-fail  Always       -       50217697
  9 Power_On_Hours          0x0032   093   093   000    Old_age   Always       -       6621
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       80
 18 Unknown_Attribute       0x000b   100   100   050    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   087   087   000    Old_age   Always       -       13
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   047   043   000    Old_age   Always       -       53 (Min/Max 38/57)
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       55
193 Load_Cycle_Count        0x0032   084   084   000    Old_age   Always       -       32679
194 Temperature_Celsius     0x0022   053   057   000    Old_age   Always       -       53 (0 17 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       13
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       13
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0023   100   100   001    Pre-fail  Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       3984 (6 247 0)
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       10404755145
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       45079713739


SMART Error Log Version: 1
No Errors Logged


SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       40%      6617         -
# 2  Short offline       Completed without error       00%      6601         -


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing


I also got another alert once the test finished.
Code:
CRITICAL Device: /dev/ada1, Self-Test Log error count increased from 0 to 1.


In my setup, there are two Seagate Exos 18 TB drives running in a mirror. The hardware is an old Dell workstation (Precision E1650) running VMware ESXi 7 with HBA passthrough. The server has been running almost continuously for the past 9 months (I occasionally turn it off for maintenance). It is also plugged into a UPS, so there have not been many hard shutdowns.

Other than the Critical alerts, I don't think ZFS has reported any data errors so far. But the unusual noise of the machine makes me somewhat uncomfortable (it sounds like the disk is constantly seeking or performing a large amount of random I/O, but I am sure it isn't, since our workload hasn't changed, and the machine sounded normal during the SMART long test). What should I do next? Should I replace the disk? Thanks
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Based on that information, I would stop trusting that disk and replace it.
 

unseen

Contributor
Joined
Aug 25, 2017
Messages
103
Other than the Critical alerts, I don't think ZFS has reported any data errors so far. But the unusual noise of the machine makes me somewhat uncomfortable (it sounds like the disk is constantly seeking or performing a large amount of random I/O, but I am sure it isn't, since our workload hasn't changed, and the machine sounded normal during the SMART long test). What should I do next? Should I replace the disk? Thanks

The noises you are hearing from the drive are caused by it failing to find the track when it is asked to seek. If you look at the SMART report, there are many seek errors logged. The drive is damaged and you need to replace it.

While you are there, you should also look at why the drive is running so hot. It was at 53°C when you collected the log and has been as hot as 57°C, which is way too hot. The absolute maximum temperature allowed for the Exos X18 drives is 60°C. Given the high temperatures, if the drive has been running very hot for a long time, that could well explain why it has failed and if both drives have been running equally as hot, I would say there is a significant risk of you losing the other drive fairly soon as well. I hope you have a backup!
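If you want to keep an eye on just the worrying counters rather than reading the whole report each time, something like this works (a rough sketch; on the live system you would feed it the output of `smartctl -a /dev/ada1` instead of the sample lines saved here):

```shell
# Save a few sample lines of the SMART attribute table. On a real system,
# replace this heredoc with: smartctl -a /dev/ada1 > /tmp/smart_sample.txt
cat <<'EOF' > /tmp/smart_sample.txt
  5 Reallocated_Sector_Ct   0x0033   076   076   010    Pre-fail  Always       -       4020
190 Airflow_Temperature_Cel 0x0022   047   043   000    Old_age   Always       -       53 (Min/Max 38/57)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       13
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       13
EOF

# Attributes 5, 187, 197 and 198 should all have a raw value of 0 on a
# healthy drive; print the name and raw value of any that appear.
awk '$1 ~ /^(5|187|197|198)$/ { print $2, $10 }' /tmp/smart_sample.txt
```

On this drive that prints non-zero values for Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable, which is exactly the pattern that says "replace me".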
 

lewisxy

Cadet
Joined
Jul 12, 2023
Messages
5
The noises you are hearing from the drive are caused by it failing to find the track when it is asked to seek. If you look at the SMART report, there are many seek errors logged. The drive is damaged and you need to replace it.

While you are there, you should also look at why the drive is running so hot. It was at 53°C when you collected the log and has been as hot as 57°C, which is way too hot. The absolute maximum temperature allowed for the Exos X18 drives is 60°C. Given the high temperatures, if the drive has been running very hot for a long time, that could well explain why it has failed and if both drives have been running equally as hot, I would say there is a significant risk of you losing the other drive fairly soon as well. I hope you have a backup!
Thanks for the suggestion. The reported temperature is accurate; the drive's casing felt quite hot to the touch immediately after powering off the machine. Unfortunately, the other drive is at a similar temperature, as the two are physically close to each other. According to the temperature graph provided by the system, the drive ran cool (around 37°C) until about two months ago, when I moved the server to a different place (we were moving); we also cleaned the case fan at that time.
I will run a SMART test on the other drive later.
One thing I am not sure about is why ZFS does not report any issues while the hardware is failing. Is it because I have another drive in a mirror configuration, or is there some redundancy built into the file system that can tolerate a certain amount of hardware failure?
I will replace the failing drive, as it is still under warranty.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
One thing I am not sure about is why ZFS does not report any issues while the hardware is failing. Is it because I have another drive in a mirror configuration, or is there some redundancy built into the file system that can tolerate a certain amount of hardware failure?
ZFS will tell you about the integrity of the pool and the data you have stored on it.

If you have some early signs that a disk is failing (from SMART), but none of the issues with that disk have caused problems for ZFS to read or write data, then ZFS has nothing to say about it.

When a disk has failed completely (or has caused enough problems with the reads and writes that ZFS wants to do), ZFS will tell you that the pool is degraded and/or will FAULT the disk out of the pool.
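To see that side by side: on the live system, `zpool status -x` prints "all pools are healthy" as long as ZFS has had no read/write/checksum failures. A small sketch with sample output (the pool name `tank` and the layout here are assumptions, not taken from your system):

```shell
# Sample zpool status output for a two-way mirror that ZFS still considers
# healthy. Illustrative only; on a real system run: zpool status -v
cat <<'EOF' > /tmp/zpool_sample.txt
  pool: tank
 state: ONLINE
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada0p2  ONLINE       0     0     0
            ada1p2  ONLINE       0     0     0
EOF

# The pool state is what ZFS alerts on; it can stay ONLINE even while SMART
# is already reporting pending sectors on one of the member disks.
awk '$1 == "state:" { print $2 }' /tmp/zpool_sample.txt
```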
 

unseen

Contributor
Joined
Aug 25, 2017
Messages
103
Thanks for the suggestion. The reported temperature is accurate; the drive's casing felt quite hot to the touch immediately after powering off the machine. Unfortunately, the other drive is at a similar temperature, as the two are physically close to each other. According to the temperature graph provided by the system, the drive ran cool (around 37°C) until about two months ago, when I moved the server to a different place (we were moving); we also cleaned the case fan at that time.
I will run a SMART test on the other drive later.
One thing I am not sure about is why ZFS does not report any issues while the hardware is failing. Is it because I have another drive in a mirror configuration, or is there some redundancy built into the file system that can tolerate a certain amount of hardware failure?
I will replace the failing drive, as it is still under warranty.

As sretalla has already pointed out, ZFS doesn't know to fault the drive unless it has a problem reading from or writing to it. When you run an extended SMART test, the drive reads every track and sector, regardless of whether they contain data. Your extended test failed with 40% remaining (about 60% of the way through), and if you don't have any data stored on that part of the disk, there's nothing for ZFS to complain about (yet).
If you want to test the pool from ZFS's point of view, you should run a scrub. That will make ZFS read all of the data stored in the pool, and if it hits problems during the scrub, it will fault out the drive.
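From the shell that's `zpool scrub <poolname>` (or the scrub button in the TrueNAS UI), and afterwards `zpool status` shows the result on its `scan:` line. A quick sketch of reading that line from saved output (the pool name and figures here are made up for illustration):

```shell
# Sample of the scan line zpool status prints after a completed scrub.
# Illustrative only; on a real system run: zpool status <poolname>
cat <<'EOF' > /tmp/scrub_sample.txt
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 05:12:33 with 0 errors on Wed Jul 12 20:00:00 2023
EOF

# "with 0 errors" is the result you want to see; repaired bytes or errors
# here mean ZFS actually hit the bad sectors while reading your data.
grep 'scan:' /tmp/scrub_sample.txt
```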

If things have become much hotter since you moved the machine, you need to check things like whether there is now a wall behind the machine that is preventing the fan from working properly. There needs to be enough room for the fan to expel hot air from the case without obstruction. If there's a wall just inches from the rear of the machine, it can cause the air flow to stall and not actually move air through the case.

An increase from 37°C to 57°C is a very large difference - my machine runs a little hotter during the summer as the ambient air temperature is higher, but that's a difference of a few degrees, not 20.
 