New Disk but with Device Error Count

Darwin@PH

Cadet
Joined
Apr 4, 2023
Messages
3
Greetings everyone,

Long-time lurker but first time posting.

We have a FREENAS-MINI-3.0-XL+ with 8 drives running FreeNAS-11.2-U8. We recently had a DEGRADED pool and replaced the problem disk with a fresh one. The pool is now HEALTHY, but running smartctl on the new disk produced the following:

smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.2-STABLE amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD60EFAX-68JH4N1
Serial Number:    WD-WXN2AA2KWCDF
LU WWN Device Id: 5 0014ee 26ae38497
Firmware Version: 83.00A83
User Capacity:    6,001,175,126,016 bytes [6.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue Apr 25 08:01:25 2023 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
AAM feature is:   Unavailable
APM feature is:   Unavailable
Rd look-ahead is: Enabled
Write cache is:   Enabled
DSN feature is:   Unavailable
ATA Security is:  Disabled, NOT FROZEN [SEC1]
Wt Cache Reorder: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00) Offline data collection activity
                                        was never started.
                                        Auto Offline Data Collection: Disabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                 ( 464) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 776) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x3039) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAGS   VALUE WORST THRESH FAIL RAW_VALUE
  1 Raw_Read_Error_Rate     POSR-K  200   200   051    -    0
  3 Spin_Up_Time            POS--K  100   253   021    -    0
  4 Start_Stop_Count        -O--CK  100   100   000    -    2
  5 Reallocated_Sector_Ct   PO--CK  200   200   140    -    0
  7 Seek_Error_Rate         -OSR-K  200   200   000    -    0
  9 Power_On_Hours          -O--CK  100   100   000    -    277
 10 Spin_Retry_Count        -O--CK  100   253   000    -    0
 11 Calibration_Retry_Count -O--CK  100   253   000    -    0
 12 Power_Cycle_Count       -O--CK  100   100   000    -    2
192 Power-Off_Retract_Count -O--CK  200   200   000    -    0
193 Load_Cycle_Count        -O--CK  200   200   000    -    1
194 Temperature_Celsius     -O---K  114   110   000    -    36
196 Reallocated_Event_Count -O--CK  200   200   000    -    0
197 Current_Pending_Sector  -O--CK  200   200   000    -    0
198 Offline_Uncorrectable   ----CK  100   253   000    -    0
199 UDMA_CRC_Error_Count    -O--CK  200   200   000    -    0
200 Multi_Zone_Error_Rate   ---R--  200   200   000    -    0
                            ||||||_ K auto-keep
                            |||||__ C event count
                            ||||___ R error rate
                            |||____ S speed/performance
                            ||_____ O updated online
                            |______ P prefailure warning

General Purpose Log Directory Version 1
SMART           Log Directory Version 1 [multi-sector log support]
Address    Access  R/W   Size  Description
0x00       GPL,SL  R/O      1  Log Directory
0x01           SL  R/O      1  Summary SMART error log
0x02           SL  R/O      5  Comprehensive SMART error log
0x03       GPL     R/O      6  Ext. Comprehensive SMART error log
0x04       GPL     R/O    256  Device Statistics log
0x04           SL  R/O      8  Device Statistics log
0x06           SL  R/O      1  SMART self-test log
0x07       GPL     R/O      1  Extended self-test log
0x09           SL  R/W      1  Selective self-test log
0x0c       GPL     R/O   2048  Pending Defects log
0x10       GPL     R/O      1  NCQ Command Error log
0x11       GPL     R/O      1  SATA Phy Event Counters log
0x24       GPL     R/O    294  Current Device Internal Status Data log
0x30       GPL,SL  R/O      9  IDENTIFY DEVICE data log
0x80-0x9f  GPL,SL  R/W     16  Host vendor specific log
0xa0-0xa7  GPL,SL  VS      16  Device vendor specific log
0xa8-0xb6  GPL,SL  VS       1  Device vendor specific log
0xb7       GPL,SL  VS      78  Device vendor specific log
0xb9       GPL,SL  VS       4  Device vendor specific log
0xbd       GPL,SL  VS       1  Device vendor specific log
0xc0       GPL,SL  VS       1  Device vendor specific log
0xc1       GPL     VS      93  Device vendor specific log
0xe0       GPL,SL  R/W      1  SCT Command/Status
0xe1       GPL,SL  R/W      1  SCT Data Transfer

SMART Extended Comprehensive Error Log Version: 1 (6 sectors)
Device Error Count: 38908 (device log contains only the most recent 24 errors)
    CR     = Command Register
    FEATR  = Features Register
    COUNT  = Count (was: Sector Count) Register
    LBA_48 = Upper bytes of LBA High/Mid/Low Registers ]  ATA-8
    LH     = LBA High (was: Cylinder High) Register    ]   LBA
    LM     = LBA Mid (was: Cylinder Low) Register      ] Register
    LL     = LBA Low (was: Sector Number) Register     ]
    DV     = Device (was: Device/Head) Register
    DC     = Device Control Register
    ER     = Error register
    ST     = Status register
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 38908 [3] occurred at disk power-on lifetime: 172 hours (7 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 02 ba a0 f4 18 40 00  Error: IDNF at LBA = 0x2baa0f418 = 11721045016

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 08 00 d0 00 02 ba a0 f4 18 40 08  7d+04:34:57.925  WRITE FPDMA QUEUED
  61 00 08 00 c8 00 02 ba a0 f2 18 40 08  7d+04:34:57.925  WRITE FPDMA QUEUED
  2f 00 00 00 01 00 00 00 00 00 10 00 08  7d+04:34:57.922  READ LOG EXT
  61 00 08 00 b8 00 02 ba a0 f4 18 40 08  7d+04:34:50.843  WRITE FPDMA QUEUED
  61 00 08 00 b0 00 02 ba a0 f2 18 40 08  7d+04:34:50.843  WRITE FPDMA QUEUED

Error 38907 [2] occurred at disk power-on lifetime: 172 hours (7 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 02 ba a0 f2 18 40 00  Error: IDNF at LBA = 0x2baa0f218 = 11721044504

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 08 00 b8 00 02 ba a0 f4 18 40 08  7d+04:34:50.843  WRITE FPDMA QUEUED
  61 00 08 00 b0 00 02 ba a0 f2 18 40 08  7d+04:34:50.843  WRITE FPDMA QUEUED
  61 00 08 00 a8 00 00 00 40 04 18 40 08  7d+04:34:50.843  WRITE FPDMA QUEUED
  2f 00 00 00 01 00 00 00 00 00 10 00 08  7d+04:34:50.841  READ LOG EXT
  61 00 08 00 98 00 02 ba a0 f4 18 40 08  7d+04:34:43.546  WRITE FPDMA QUEUED

Error 38906 [1] occurred at disk power-on lifetime: 172 hours (7 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 00 00 40 04 18 40 00  Error: IDNF at LBA = 0x00400418 = 4195352

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 08 00 98 00 02 ba a0 f4 18 40 08  7d+04:34:43.546  WRITE FPDMA QUEUED
  61 00 08 00 90 00 02 ba a0 f2 18 40 08  7d+04:34:43.546  WRITE FPDMA QUEUED
  61 00 08 00 88 00 00 00 40 04 18 40 08  7d+04:34:43.546  WRITE FPDMA QUEUED
  61 00 08 00 80 00 00 00 40 02 18 40 08  7d+04:34:43.546  WRITE FPDMA QUEUED
  ea 00 00 00 00 00 00 00 00 00 00 40 08  7d+04:34:41.706  FLUSH CACHE EXT

Error 38905 [0] occurred at disk power-on lifetime: 172 hours (7 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 02 32 89 5e 28 40 00  Error: IDNF at LBA = 0x232895e28 = 9437797928

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 38 00 a0 00 02 32 89 5f 28 40 08  7d+04:34:25.398  WRITE FPDMA QUEUED
  61 01 00 00 98 00 02 32 89 5e 28 40 08  7d+04:34:25.398  WRITE FPDMA QUEUED
  61 00 58 00 90 00 02 32 89 5d c8 40 08  7d+04:34:25.397  WRITE FPDMA QUEUED
  61 00 58 00 88 00 02 32 89 65 50 40 08  7d+04:34:25.397  WRITE FPDMA QUEUED
  61 00 30 00 80 00 02 32 89 65 20 40 08  7d+04:34:25.396  WRITE FPDMA QUEUED

Error 38904 [23] occurred at disk power-on lifetime: 172 hours (7 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 02 ba a0 f4 08 40 00  Error: IDNF at LBA = 0x2baa0f408 = 11721045000

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 08 00 00 00 02 ba a0 f4 08 40 08  7d+04:34:13.570  WRITE FPDMA QUEUED
  61 00 08 00 f8 00 02 ba a0 f2 08 40 08  7d+04:34:13.570  WRITE FPDMA QUEUED
  2f 00 00 00 01 00 00 00 00 00 10 00 08  7d+04:34:13.568  READ LOG EXT
  61 00 08 00 e8 00 02 ba a0 f4 08 40 08  7d+04:34:06.412  WRITE FPDMA QUEUED
  61 00 08 00 e0 00 02 ba a0 f2 08 40 08  7d+04:34:06.412  WRITE FPDMA QUEUED

Error 38903 [22] occurred at disk power-on lifetime: 172 hours (7 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 02 ba a0 f2 08 40 00  Error: IDNF at LBA = 0x2baa0f208 = 11721044488

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 08 00 e8 00 02 ba a0 f4 08 40 08  7d+04:34:06.412  WRITE FPDMA QUEUED
  61 00 08 00 e0 00 02 ba a0 f2 08 40 08  7d+04:34:06.412  WRITE FPDMA QUEUED
  61 00 08 00 d8 00 00 00 40 04 08 40 08  7d+04:34:06.412  WRITE FPDMA QUEUED
  2f 00 00 00 01 00 00 00 00 00 10 00 08  7d+04:34:06.409  READ LOG EXT
  61 00 08 00 c8 00 02 ba a0 f4 08 40 08  7d+04:33:58.302  WRITE FPDMA QUEUED

Error 38902 [21] occurred at disk power-on lifetime: 172 hours (7 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 00 00 00 00 00 40 04 08 40 00  Error: IDNF at LBA = 0x00400408 = 4195336

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 08 00 c8 00 02 ba a0 f4 08 40 08  7d+04:33:58.302  WRITE FPDMA QUEUED
  61 00 08 00 c0 00 02 ba a0 f2 08 40 08  7d+04:33:58.302  WRITE FPDMA QUEUED
  61 00 08 00 b8 00 00 00 40 04 08 40 08  7d+04:33:58.302  WRITE FPDMA QUEUED
  61 00 08 00 b0 00 00 00 40 02 08 40 08  7d+04:33:58.302  WRITE FPDMA QUEUED
  ea 00 00 00 00 00 00 00 00 00 00 40 08  7d+04:33:56.091  FLUSH CACHE EXT

Error 38901 [20] occurred at disk power-on lifetime: 172 hours (7 days + 4 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER -- ST COUNT  LBA_48  LH LM LL DV DC
  -- -- -- == -- == == == -- -- -- -- --
  10 -- 51 01 00 00 01 1b 9d 8f 30 40 00  Error: IDNF at LBA = 0x11b9d8f30 = 4758277936

  Commands leading to the command that caused the error were:
  CR FEATR COUNT  LBA_48  LH LM LL DV DC  Powered_Up_Time  Command/Feature_Name
  -- == -- == -- == == == -- -- -- -- --  ---------------  --------------------
  61 00 08 00 38 00 01 1b 9d 8f 30 40 08  7d+04:33:40.271  WRITE FPDMA QUEUED
  35 00 00 01 00 00 01 1b 9d 8e 30 40 08  7d+04:33:40.101  WRITE DMA EXT
  35 00 00 01 00 00 01 1b 9d 8e 30 40 08  7d+04:33:33.151  WRITE DMA EXT
  06 00 01 00 01 00 00 00 00 00 00 40 08  7d+04:33:33.151  DATA SET MANAGEMENT
  61 00 10 00 18 00 01 14 9e b8 08 40 08  7d+04:33:33.055  WRITE FPDMA QUEUED

SMART Extended Self-test Log Version: 1 (1 sectors)
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%        266        -
# 2  Short offline       Completed without error       00%        190        -
# 3  Short offline       Completed without error       00%         13        -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

SCT Status Version:                  3
SCT Version (vendor specific):       258 (0x0102)
SCT Support Level:                   1
Device State:                        Active (0)
Current Temperature:                    36 Celsius
Power Cycle Min/Max Temperature:     34/39 Celsius
Lifetime    Min/Max Temperature:     26/40 Celsius
Under/Over Temperature Limit Count:   0/0
Vendor specific:
01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

SCT Temperature History Version:     2
Temperature Sampling Period:         1 minute
Temperature Logging Interval:        1 minute
Min/Max recommended Temperature:      0/65 Celsius
Min/Max Temperature Limit:          -41/85 Celsius
Temperature History Size (Index):    478 (294)

Index    Estimated Time   Temperature Celsius
 295    2023-04-25 00:04    35  ****************
 ...    ..(263 skipped).    ..  ****************
  81    2023-04-25 04:28    35  ****************
  82    2023-04-25 04:29    34  ***************
 ...    ..( 11 skipped).    ..  ***************
  94    2023-04-25 04:41    34  ***************
  95    2023-04-25 04:42    35  ****************
 ...    ..( 12 skipped).    ..  ****************
 108    2023-04-25 04:55    35  ****************
 109    2023-04-25 04:56    34  ***************
 110    2023-04-25 04:57    35  ****************
 ...    ..(  2 skipped).    ..  ****************
 113    2023-04-25 05:00    35  ****************
 114    2023-04-25 05:01    34  ***************
 ...    ..(  3 skipped).    ..  ***************
 118    2023-04-25 05:05    34  ***************
 119    2023-04-25 05:06    35  ****************
 ...    ..(166 skipped).    ..  ****************
 286    2023-04-25 07:53    35  ****************
 287    2023-04-25 07:54    36  *****************
 ...    ..(  6 skipped).    ..  *****************
 294    2023-04-25 08:01    36  *****************

SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)

Device Statistics (GP/SMART Log 0x04) not supported

Pending Defects log (GP Log 0x0c) supported [please try: '-l defects']

SATA Phy Event Counters (GP Log 0x11)
ID      Size     Value  Description
0x0001  2            0  Command failed due to ICRC error
0x0002  2            0  R_ERR response for data FIS
0x0003  2            0  R_ERR response for device-to-host data FIS
0x0004  2            0  R_ERR response for host-to-device data FIS
0x0005  2            0  R_ERR response for non-data FIS
0x0006  2            0  R_ERR response for device-to-host non-data FIS
0x0007  2            0  R_ERR response for host-to-device non-data FIS
0x0008  2            0  Device-to-host non-data FIS retries
0x0009  2            2  Transition from drive PhyRdy to drive PhyNRdy
0x000a  2            3  Device-to-host register FISes sent due to a COMRESET
0x000b  2            0  CRC errors within host-to-device FIS
0x000d  2            0  Non-CRC errors within host-to-device FIS
0x000f  2            0  R_ERR response for host-to-device data FIS, CRC
0x0012  2            0  R_ERR response for host-to-device non-data FIS, CRC
0x8000  4        89766  Vendor specific

The thing that strikes me is the Device Error Count. I checked the remaining disks and they don't show this error. This is the first time we have had to replace a disk, and I am not sure how to troubleshoot the cause.
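
For reference, this is roughly how I checked the other disks (the da0 through da7 device names are just how the drives show up on our box; adjust for your layout):

for d in da0 da1 da2 da3 da4 da5 da6 da7; do
    echo "=== ${d} ==="
    # pull the extended error log and keep only the error counter line
    smartctl -l xerror /dev/${d} | grep 'Device Error Count'
done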

Looking for any guidance from the community. Very much appreciate any help.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I did not get far into your post. The Western Digital drive:
Device Model: WDC WD60EFAX
is not compatible with ZFS. It uses SMR technology, which has proven time and time again not to be suitable for use with ZFS.
Western Digital pulled a fast one, and more than 3 years later we are still paying the price.

If you intend to use Western Digital NAS drives, please order from the WD Red Plus line. (Not all WD Red drives seem to be SMR, but there is no guarantee that won't change.)
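
You can check which model sits in each bay straight from smartctl. On WD Reds, the EFAX suffix marks the SMR line, while EFRX/EFZX are CMR. A minimal sketch, assuming the disks appear as da0 through da7 on FreeNAS:

for d in /dev/da[0-7]; do
    # print just the model string for each drive
    smartctl -i ${d} | grep 'Device Model'
done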
 

Darwin@PH

Cadet
Joined
Apr 4, 2023
Messages
3
I did not get far into your post. The Western Digital drive:
Device Model: WDC WD60EFAX
is not compatible with ZFS. It uses SMR technology, which has proven time and time again not to be suitable for use with ZFS.
Western Digital pulled a fast one, and more than 3 years later we are still paying the price.

If you intend to use Western Digital NAS drives, please order from the WD Red Plus line. (Not all WD Red drives seem to be SMR, but there is no guarantee that won't change.)
Thank you for the insight. Is there any reason why the other 7 drives, which are also WDC WD60EFAX units, are not showing the same errors?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Yes, they have gone past the problem. Until the entire drive has been written once, any new WD SMR drive may experience it. Many people perform a comprehensive disk test before deployment, which writes the full drive (perhaps multiple times); see the sketch below.
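
The usual way people do that full write is a destructive badblocks pass before the disk joins the pool. A minimal sketch, assuming the new drive is /dev/da7 and holds nothing you need (on FreeBSD, badblocks comes from the e2fsprogs port):

# four write/verify passes over the whole drive: -w is the destructive
# write test, -b 4096 matches the physical sector size, -s shows progress
badblocks -b 4096 -ws /dev/da7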

There is a known firmware bug (a bug in my opinion, anyway) where new drives can return "sector not found" errors when ZFS does a burst read. If ZFS needs 2 blocks that are close together, like blocks 8 & 10, it may submit a single read for blocks 8 through 10. If block 9 has never been written, the WD Red SMR drives return "sector not found" instead of something like zeros, because nothing has ever been written to block 9.
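
To picture the aggregation, this sketch is what that combined request looks like from the host side (the device name is a placeholder, and the read itself is harmless on a drive that behaves):

# read blocks 8 through 10 in a single request; on an affected SMR Red,
# a never-written block 9 can make the whole request fail with IDNF
dd if=/dev/da7 of=/dev/null bs=4096 skip=8 count=3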

This is utterly stupid firmware behavior, because it differs from that of normal non-SMR drives. Those don't need all their blocks written once by the user either; it is just that if you read a brand-new CMR drive without having written to it first, the results are essentially garbage rather than an error.


This is not the only problem. Drive replacement can take 9 times longer than with a conventional CMR drive. Plus, with less well-tested firmware, there could be other bugs hiding, just waiting for a corner case to expose them.


Now, I don't know for certain that this is your problem. But we have been fighting this issue for 3 years, so when we see a WD Red SMR drive, we immediately point it out and stop suggesting anything else until someone can prove it is not the cause. Sorry, but I have a bit of burnout on WD Red SMR drives.
 

Darwin@PH

Cadet
Joined
Apr 4, 2023
Messages
3
Yes, they have gone past the problem. Until the entire drive has been written once, any new WD SMR drive may experience it. Many people perform a comprehensive disk test before deployment, which writes the full drive (perhaps multiple times); see the sketch below.

There is a known firmware bug (a bug in my opinion, anyway) where new drives can return "sector not found" errors when ZFS does a burst read. If ZFS needs 2 blocks that are close together, like blocks 8 & 10, it may submit a single read for blocks 8 through 10. If block 9 has never been written, the WD Red SMR drives return "sector not found" instead of something like zeros, because nothing has ever been written to block 9.

This is utterly stupid firmware behavior, because it differs from that of normal non-SMR drives. Those don't need all their blocks written once by the user either; it is just that if you read a brand-new CMR drive without having written to it first, the results are essentially garbage rather than an error.

This is not the only problem. Drive replacement can take 9 times longer than with a conventional CMR drive. Plus, with less well-tested firmware, there could be other bugs hiding, just waiting for a corner case to expose them.

Now, I don't know for certain that this is your problem. But we have been fighting this issue for 3 years, so when we see a WD Red SMR drive, we immediately point it out and stop suggesting anything else until someone can prove it is not the cause. Sorry, but I have a bit of burnout on WD Red SMR drives.
Thank you, Arwen, for the wisdom. Not immediately, but we will surely move away from our current SMR setup. I am guessing that any errors we see going forward are not entirely reliable because of the type of HDD we have. Or is there a way to know if the drives are really failing despite them being SMR?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I have no special knowledge of what future errors on SMR drives will look like.

Part of the problem is that once a DM-SMR drive's non-shingled cache tracks are filled, the drive may become busy. If it remains busy too long, ZFS may declare the drive degraded, offline, failed, or whatever. Or it may simply record read or write errors. Thus, maybe not real pool data errors, but recorded errors nevertheless.

It is this unpredictable behavior that is the problem. Western Digital Red SMR drives do not act normally. ZFS developers have spent 15 years building on stable hard drive interface characteristics, and then along comes Western Digital with a Red drive change that breaks that normal behavior.

So, in essence, unless someone else can help if you have future problems, I and some others here will simply say "Replace the WD Red SMR drives with CMR. If the problem still exists, then we can be certain they are not the problem."
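
That said, the indicators of genuine media trouble are the same whether the drive is SMR or not: the raw values of Reallocated_Sector_Ct (5), Current_Pending_Sector (197), Offline_Uncorrectable (198), and UDMA_CRC_Error_Count (199). All four are still zero in the output you posted. A periodic check might look like this sketch (daX device names assumed):

for d in /dev/da[0-7]; do
    echo "=== ${d} ==="
    # non-zero raw values on any of these attributes point at real trouble
    smartctl -A ${d} | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count'
done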

As I said, some of us have burnout on the subject...
 

artlessknave

Wizard
Joined
Oct 29, 2016
Messages
1,506
I will pay the $10 extra to get a Seagate IronWolf over giving WD any money.
Their drives aren't bad, but that utterly ludicrous decision to mix SMR and CMR in a drive series that would specifically be hurt by SMR is just insane.
 