Hard Drive Testing

NASbox

Guru
Joined
May 8, 2012
Messages
644
I had one of my drives show 4 pending sectors. I replaced the drive and am now running a test on it with badblocks. After the first pass (writing AA), the pending sectors cleared, and they have not come back after the second pass (55). I was more than 99% of the way through the first write pass before the pending sectors changed from 4 to 0, so I know the bad spots are in the last 1%. The self-test stopped very quickly with the errors shown in the log below.

Since a complete pass takes about 23 hours, I would like to "spot test" the area around the previous bad sectors. The badblocks statement I used and the relevant parts of the output from smartctl are shown below. I tried calculating an address based on 512 bytes and on 4096 bytes, but neither value seems to generate an offset consistent with ~99% of the total disk space. Can someone tell me how to calculate the parameters to modify the badblocks command so it only tests the last 1% of the disk?

badblocks statement used:
Code:
badblocks -b 4096 -c 65536 -wsv /dev/$devid

Relevant sections of smartctl output:
Code:
Model Family:     Western Digital Red
Device Model:     WDC WD60EFRX-68MYMN1
User Capacity:    6,001,175,126,016 bytes [6.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical

Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   195   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   206   193   021    Pre-fail  Always       -       8700
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       271
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   042   042   000    Old_age   Always       -       42694
10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       98
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       70
193 Load_Cycle_Count        0x0032   187   187   000    Old_age   Always       -       39240
194 Temperature_Celsius     0x0022   109   101   000    Old_age   Always       -       43
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%     42658         3088571888
# 2  Extended offline    Completed: read failure       90%     39369         3088575408
# 3  Extended offline    Completed without error       00%     17065         -
# 4  Extended offline    Aborted by host               90%     17051         -
# 5  Extended offline    Aborted by host               90%     17051         -
# 6  Extended offline    Completed without error       00%     16961         -
# 7  Short offline       Completed without error       00%     16918         -


Is this type of error likely to be caused by a power failure? FreeNAS experienced an unexpected power outage due to a simultaneous power failure and UPS battery failure. Assuming the drive passes a few tests, how likely is the drive to be safe?

Any input much appreciated.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
Is this type of error likely to be caused by a power failure?

This message indicates that a long SMART test was aborted due to a reboot, but that was over 20,000 hours ago in the disk's lifetime.
# 4 Extended offline Aborted by host

There are thousands of hours between your SMART tests on that disk. Are you running the regular SMART tests?
That disk is approaching 5 years of spinning... probably not a good idea to trust it from here on in. Any error on it now should be considered a warning of imminent failure.
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
This message indicates that a long SMART test was aborted due to a reboot, but that was over 20,000 hours ago in the disk's lifetime.
Yes, I know... those tests had nothing to do with the situation I am talking about. I experienced the power failure, noticed errors during a scrub, and then ran a couple of tests. The tests in question are here:
Code:
# 1  Extended offline    Completed: read failure       90%     42658         3088571888
# 2  Extended offline    Completed: read failure       90%     39369         3088575408


How would I calculate the address for spot testing? The number 3088575408 is supposed to be the LBA of first error - how do I relate this to something I can use with badblocks?

I had another disk fail, and when I tested it, there were a ton of errors, so I junked it. If this drive passes, I'll likely keep it for some application where I'm prepared to have it fail. I'm wondering if a glitch would cause a few pending sectors that then cleared, or if it is definitely a surface defect.

Back in the day (when disks were <2TB) I used a utility on Windows called SpinRite, which detected and (often) corrected bit rot and refreshed an entire drive by rewriting the data in place. Now that drives are bigger, the program doesn't work anymore.

There are thousands of hours between your SMART tests on that disk. Are you running the regular SMART tests?
That disk is approaching 5 years of spinning... probably not a good idea to trust it from here on in. Any error on it now should be considered a warning of imminent failure.
I don't run regular SMART tests; I tend to be guided by scrubs, and I look at the info from time to time. Given the light workload on these drives, I feel that the extra wear and tear of testing is likely to create problems, and the tests take so damn long that they may interfere with normal usage. I agree, though, that 5 years is likely near end of life. I've had WD Blacks do 7 and 8 years, but Reds are not as good. I think having them constantly running at an even temperature with very few power cycles really helps with lifespan.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
To be clear here, you do not have any errors of concern in your log other than the last extended test, which occurred 36 hours earlier. My advice: run another SMART Extended test and ensure it goes to completion with zero errors. If it passes, then your drive is in good condition.

Now some information for you... SMART tests are read-only. When you had Pending Sector errors, that means the drive had an issue reading the data; it's not always a write operation. When you ran badblocks you were writing data to the drive and thus refreshed the data in the problematic area; once the data can be read faithfully again, that will clear those Pending Sector errors in most cases. Since ID 5 (Reallocated_Sector_Ct) is still zero, this also supports what I'm saying. If it had incremented, that would indicate platter surface damage.

Could this be caused by a power failure? Yes, if a power failure occurs while you are writing data, then you "could" have a problem, but I would not hang my hat on that. Possible does not mean likely.

I would recommend that you run daily SMART Short tests and weekly SMART Extended/Long tests on all your drives; that is how I treat my system, and I recommend it to everyone.
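For reference, starting tests by hand from the shell looks something like this (ada0 is just an example device; on FreeNAS you would normally schedule these in the GUI instead):

Code:
smartctl -t short /dev/ada0      # start a short self-test
smartctl -t long /dev/ada0       # start an extended (long) self-test
smartctl -l selftest /dev/ada0   # review the self-test log once it finishes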

Now let's say you want to do some specific testing of the LBAs indicated in your posting: you can use badblocks to test out a section. I've done that myself many times, but my drives were failing and I just wanted to see if I could force a drive to map out all the bad areas. It works, but once you have surface damage it will only grow over time. When you run it, I'd run it for several hours. If IDs 5, 196, and 197 remain at zero, then all is good.
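A quick way to spot-check those three attributes after a run might be something like this (again, ada0 is just an example device name):

Code:
smartctl -A /dev/ada0 | egrep "Reallocated_Sector_Ct|Reallocated_Event_Count|Current_Pending_Sector"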

But keep in mind that the drive is almost 5 years of age, so that's not too bad. The drive will eventually go bad, and if now is a good time to replace it, then I would, but only due to age, not due to failure.

Good Luck.
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
To be clear here, you do not have any errors of concern in your log other than the last extended test, which occurred 36 hours earlier. My advice: run another SMART Extended test and ensure it goes to completion with zero errors. If it passes, then your drive is in good condition.

Now some information for you... SMART tests are read-only. When you had Pending Sector errors, that means the drive had an issue reading the data; it's not always a write operation. When you ran badblocks you were writing data to the drive and thus refreshed the data in the problematic area; once the data can be read faithfully again, that will clear those Pending Sector errors in most cases. Since ID 5 (Reallocated_Sector_Ct) is still zero, this also supports what I'm saying. If it had incremented, that would indicate platter surface damage.

Could this be caused by a power failure? Yes, if a power failure occurs while you are writing data, then you "could" have a problem, but I would not hang my hat on that. Possible does not mean likely.

I would recommend that you run daily SMART Short tests and weekly SMART Extended/Long tests on all your drives; that is how I treat my system, and I recommend it to everyone.

Now let's say you want to do some specific testing of the LBAs indicated in your posting: you can use badblocks to test out a section. I've done that myself many times, but my drives were failing and I just wanted to see if I could force a drive to map out all the bad areas. It works, but once you have surface damage it will only grow over time. When you run it, I'd run it for several hours. If IDs 5, 196, and 197 remain at zero, then all is good.

But keep in mind that the drive is almost 5 years of age, so that's not too bad. The drive will eventually go bad, and if now is a good time to replace it, then I would, but only due to age, not due to failure.

Good Luck.
Thanks... that's pretty close to what I was thinking. Any idea how I would calculate LastBlock/FirstBlock for badblocks to test that last 1%?
I'd like to test that area 10-15 times, so I need to only test that area since each test pass is almost a day.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
Thanks... that's pretty close to what I was thinking. Any idea how I would calculate LastBlock/FirstBlock for badblocks to test that last 1%?
I'd like to test that area 10-15 times, so I need to only test that area since each test pass is almost a day.
Sure, that is easy. The Hard Drive Troubleshooting Guide has a section on how to do this; take a look at it. But the main command would be something like this, for example for drive ada0 (change as desired). Note that the first listed LBA value is actually the end LBA and the second value is the starting LBA; it's backwards, but I didn't write the badblocks program. I also start testing well before the problem area and continue well past it, just because in reality that is physically such a small surface area.

badblocks -b 4096 -wsv -c 64 -p 15 /dev/ada0 3088590000 3088550000

EDIT: Sorry, that is not the last 1%, it's just the area around the failure. To do the last 1% you would need to know how many LBAs are on the drive and then do some simple math. Right now you have no idea if the failure is in the last 1% or the last 10%.
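For what it's worth, a rough version of that math using the capacity reported by smartctl above (just a sketch, and ada0 is an example device):

Code:
# 6,001,175,126,016 bytes / 4096 bytes per block = 1,465,130,646 blocks with -b 4096
# 99% of 1,465,130,646 is about 1,450,479,340, so the last 1% is roughly blocks 1,450,479,340 through 1,465,130,645
badblocks -b 4096 -wsv -c 65536 -p 15 /dev/ada0 1465130645 1450479340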

Also, your drive failed in the first 10% of the drive space, not the last 10%.
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
Sure, that is easy. The Hard Drive Troubleshooting Guide has a section on how to do this; take a look at it. But the main command would be something like this, for example for drive ada0 (change as desired). Note that the first listed LBA value is actually the end LBA and the second value is the starting LBA; it's backwards, but I didn't write the badblocks program. I also start testing well before the problem area and continue well past it, just because in reality that is physically such a small surface area.

Thanks @joeschmuck - I looked but I couldn't find The Hard Drive Troubleshooting Guide - please point me in the right direction.

badblocks -b 4096 -wsv -c 64 -p 15 /dev/ada0 3088590000 3088550000

EDIT: Sorry, that is not the last 1%, it's just the area around the failure. To do the last 1% you would need to know how many LBAs are on the drive and then do some simple math. Right now you have no idea if the failure is in the last 1% or the last 10%.

Also, your drive failed in the first 10% of the drive space, not the last 10%.
Thanks for the reply. I find it hard to correlate the different addressing between badblocks and smartctl. I was actually watching (using cmdwatch with smartctl) as badblocks was doing the first pass on the drive. When badblocks was showing completion at 99.xx% (I don't remember the exact xx, but I believe it was something like .8x), I actually saw the 4 pending sectors flip to 0. My best guess is that badblocks starts at the inner edge of the platter and the SMART test starts at the outer edge, or vice versa.
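Part of the trouble correlating them may just be units: the self-test log's LBA_of_first_error should be in 512-byte logical sectors (per the "Sector Sizes" line), while badblocks with -b 4096 counts 4096-byte blocks, so dividing the LBA by 8 should put the two on the same scale. A rough sketch of what I mean (ada0 as an example device, bracketing the area by a few thousand blocks on each side):

Code:
LBA=3088575408              # LBA_of_first_error from the self-test log (512-byte sectors)
BLOCK=$((LBA / 8))          # 4096 / 512 = 8, so this is badblocks block 386071926 with -b 4096
badblocks -b 4096 -wsv -c 64 -p 15 /dev/ada0 $((BLOCK + 5000)) $((BLOCK - 5000))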
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,702
It's in the resources section:

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,972
My best guess is that badblocks starts at the inner edge of the platter and the SMART test starts at the outer edge, or vice versa.
As far as I know, they both start from the center and work their way out. The lower sectors/LBAs are near the center and the higher ones are on the outer edge. But you could be correct about badblocks starting with the higher LBA and working towards the lower LBA. Now I'm going to research this; I'm curious. SMART does start at the center.
 

NASbox

Guru
Joined
May 8, 2012
Messages
644
It's in the resources section:
Thanks @sretalla, that helps a lot.
As far as I know, they both start from the center and work their way out. The lower sectors/LBAs are near the center and the higher ones are on the outer edge. But you could be correct about badblocks starting with the higher LBA and working towards the lower LBA. Now I'm going to research this; I'm curious. SMART does start at the center.
Thanks @joeschmuck, please let me know what you find.

I had another drive go bad (really bad... after I replaced it I ran badblocks and the bad sectors started pouring out, so it was obviously the drive), but I'm wondering if there could have been some sort of spill-over. So far the second drive is doing well on tests, which makes me wonder if there might have been another issue... either caused by software in response to the other failed drive, or possibly oxidation on the contacts. (Although this drive was stable for a couple of months before I was able to swap it out.)

I also did a bit more digging with the -x option. Does anyone know what these mean?

Code:
=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Red
Device Model:     WDC WD60EFRX-68MYMN1
Firmware Version: 82.00A82
User Capacity:    6,001,175,126,016 bytes [6.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5700 rpm
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Fri Dec  4 13:19:45 2020 EST

--------------------------------------------------------------------------------

Current Power on Hours:
  9 Power_On_Hours          -O--CK   042   042   000    -    42727

--------------------------------------------------------------------------------

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
Device Error Count: 193 (device log contains only the most recent 24 errors)
        CR     = Command Register
        FEATR  = Features Register
        COUNT  = Count (was: Sector Count) Register
        LBA_48 = Upper bytes of LBA High/Mid/Low Registers ]  ATA-8
        LH     = LBA High (was: Cylinder High) Register    ]   LBA
        LM     = LBA Mid (was: Cylinder Low) Register      ] Register
        LL     = LBA Low (was: Sector Number) Register     ]
        DV     = Device (was: Device/Head) Register
        DC     = Device Control Register
        ER     = Error register
        ST     = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 193 [0] occurred at disk power-on lifetime: 41917 hours (1746 days + 13 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 02 b8 2e dc f8 40 00  Error: UNC at LBA = 0x2b82edcf8 = 11680013560

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 e8 00 68 00 02 b8 2e e6 f8 40 00 31d+12:22:32.628  READ FPDMA QUEUED
  60 01 00 00 10 00 02 b8 2e e5 f8 40 00 31d+12:22:32.628  READ FPDMA QUEUED
  60 01 00 00 60 00 02 b8 2e e4 f8 40 00 31d+12:22:32.626  READ FPDMA QUEUED
  60 01 00 00 40 00 02 b8 2e e3 f8 40 00 31d+12:22:32.626  READ FPDMA QUEUED
  60 01 00 00 38 00 02 b8 2e e2 f8 40 00 31d+12:22:32.626  READ FPDMA QUEUED

Error 192 [23] occurred at disk power-on lifetime: 41917 hours (1746 days + 13 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 02 b8 2e a5 b8 40 00  Error: UNC at LBA = 0x2b82ea5b8 = 11679999416

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 e8 00 40 00 02 b8 2e a7 00 40 00 31d+12:22:15.538  READ FPDMA QUEUED
  60 01 00 00 38 00 02 b8 2e a6 00 40 00 31d+12:22:15.538  READ FPDMA QUEUED
  60 01 00 00 30 00 02 b8 2e a5 00 40 00 31d+12:22:15.537  READ FPDMA QUEUED
  60 01 00 00 28 00 02 b8 2e a4 00 40 00 31d+12:22:15.537  READ FPDMA QUEUED
  60 01 00 00 20 00 02 b8 2e a3 00 40 00 31d+12:22:15.537  READ FPDMA QUEUED

Error 191 [22] occurred at disk power-on lifetime: 41917 hours (1746 days + 13 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 02 b8 2d 1b a8 40 00  Error: UNC at LBA = 0x2b82d1ba8 = 11679898536

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 01 00 00 58 00 02 b8 2d 1f e0 40 00 31d+12:20:32.190  READ FPDMA QUEUED
  60 01 00 00 50 00 02 b8 2d 1e e0 40 00 31d+12:20:32.189  READ FPDMA QUEUED
  60 01 00 00 48 00 02 b8 2d 1d e0 40 00 31d+12:20:32.189  READ FPDMA QUEUED
  60 01 00 00 40 00 02 b8 2d 1c e0 40 00 31d+12:20:32.189  READ FPDMA QUEUED
  60 01 00 00 00 00 02 b8 2d 1b e0 40 00 31d+12:20:32.189  READ FPDMA QUEUED

Error 190 [21] occurred at disk power-on lifetime: 41916 hours (1746 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 02 b8 18 4b 40 40 00  Error: UNC at LBA = 0x2b8184b40 = 11678534464

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 e8 00 68 00 02 b8 18 57 60 40 00 31d+10:48:49.310  READ FPDMA QUEUED
  60 01 00 00 18 00 02 b8 18 56 60 40 00 31d+10:48:49.310  READ FPDMA QUEUED
  60 01 00 00 60 00 02 b8 18 55 60 40 00 31d+10:48:49.309  READ FPDMA QUEUED
  60 01 00 00 58 00 02 b8 18 54 60 40 00 31d+10:48:49.309  READ FPDMA QUEUED
  60 01 00 00 50 00 02 b8 18 53 60 40 00 31d+10:48:49.309  READ FPDMA QUEUED
Error 189 [20] occurred at disk power-on lifetime: 41916 hours (1746 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 02 b8 18 14 68 40 00  Error: UNC at LBA = 0x2b8181468 = 11678520424

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 e8 00 40 00 02 b8 18 20 00 40 00 31d+10:48:44.648  READ FPDMA QUEUED
  60 01 00 00 38 00 02 b8 18 1f 00 40 00 31d+10:48:44.648  READ FPDMA QUEUED
  60 01 00 00 30 00 02 b8 18 1e 00 40 00 31d+10:48:44.648  READ FPDMA QUEUED
  60 01 00 00 28 00 02 b8 18 1d 00 40 00 31d+10:48:44.648  READ FPDMA QUEUED
  60 01 00 00 20 00 02 b8 18 1c 00 40 00 31d+10:48:44.647  READ FPDMA QUEUED

Error 188 [19] occurred at disk power-on lifetime: 41916 hours (1746 days + 12 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 02 b8 17 dd f0 40 00  Error: UNC at LBA = 0x2b817ddf0 = 11678506480

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 e8 00 68 00 02 b8 17 ea 68 40 00 31d+10:47:45.801  READ FPDMA QUEUED
  60 01 00 00 60 00 02 b8 17 e9 68 40 00 31d+10:47:45.801  READ FPDMA QUEUED
  60 01 00 00 48 00 02 b8 17 e8 68 40 00 31d+10:47:45.800  READ FPDMA QUEUED
  60 01 00 00 20 00 02 b8 17 e7 68 40 00 31d+10:47:45.800  READ FPDMA QUEUED
  60 01 00 00 38 00 02 b8 17 e6 68 40 00 31d+10:47:45.799  READ FPDMA QUEUED

Error 187 [18] occurred at disk power-on lifetime: 41078 hours (1711 days + 14 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 02 b8 2d d6 a8 40 00  Error: UNC at LBA = 0x2b82dd6a8 = 11679946408

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 e8 00 38 00 02 b8 2d d7 38 40 00 46d+06:15:41.260  READ FPDMA QUEUED
  60 01 00 00 30 00 02 b8 2d d6 38 40 00 46d+06:15:41.260  READ FPDMA QUEUED
  60 01 00 00 28 00 02 b8 2d d5 38 40 00 46d+06:15:41.260  READ FPDMA QUEUED
  60 01 00 00 20 00 02 b8 2d d4 38 40 00 46d+06:15:41.260  READ FPDMA QUEUED
  60 01 00 00 18 00 02 b8 2d d3 38 40 00 46d+06:15:41.260  READ FPDMA QUEUED

Error 186 [17] occurred at disk power-on lifetime: 41078 hours (1711 days + 14 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  40 -- 51 00 00 00 02 b8 2d 6e e0 40 00  Error: UNC at LBA = 0x2b82d6ee0 = 11679919840

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  60 00 e8 00 48 00 02 b8 2d 79 58 40 00 46d+06:15:08.635  READ FPDMA QUEUED
  60 01 00 00 78 00 02 b8 2d 78 58 40 00 46d+06:15:08.101  READ FPDMA QUEUED
  60 01 00 00 70 00 02 b8 2d 77 58 40 00 46d+06:15:08.101  READ FPDMA QUEUED
  60 01 00 00 68 00 02 b8 2d 76 58 40 00 46d+06:15:08.101  READ FPDMA QUEUED
  60 01 00 00 40 00 02 b8 2d 75 58 40 00 46d+06:15:08.101  READ FPDMA QUEUED


SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed: read failure       90%     42658         11678506480
# 2  Extended offline    Completed: read failure       90%     39369         11678510000
# 3  Extended offline    Completed without error       00%     17065         -
# 4  Extended offline    Aborted by host               90%     17051         -
# 5  Extended offline    Aborted by host               90%     17051         -
# 6  Extended offline    Completed without error       00%     16961         -
# 7  Short offline       Completed without error       00%     16918         -
# 8  Extended offline    Completed without error       00%        85         -
# 9  Extended offline    Completed without error       00%        51         -
#10  Extended offline    Completed without error       00%        12         -
#11  Short offline       Completed without error       00%         0         -
#12  Short offline       Completed without error       00%         0         -
#13  Conveyance offline  Completed without error       00%         0         -

--------------------------------------------------------------------------------

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

Device Statistics (GP/SMART Log 0x04) not supported

Pending Defects log (GP Log 0x0c) not supported

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001  2            0  Command failed due to ICRC error
0x0002  2            0  R_ERR response for data FIS
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0005  2            0  R_ERR response for non-data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS
0x0008  2            0  Device-to-host non-data FIS retries
0x0009  2            0  Transition from drive PhyRdy to drive PhyNRdy
0x000a  2            1  Device-to-host register FISes sent due to a COMRESET
0x000b  2            0  CRC errors within host-to-device FIS
0x000f  2            0  R_ERR response for host-to-device data FIS, CRC
0x0012  2            0  R_ERR response for host-to-device non-data FIS, CRC
0x8000  4       249447  Vendor specific

--------------------------------------------------------------------------------
 