"zpool clear pool" forcing resilvering new to Truenas

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Hi there,

One of my disks caused the pool to become degraded due to READ errors reported under "zpool status".
I have enough redundancy to be on the safe side; however, as a good measure, I decided to clear the pool using "zpool clear poolname", after which I would initiate a scrub.
However, running "zpool clear poolname" caused the pool to start resilvering.
This is something I wasn't prepared for (based on my previous experience).
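For clarity, the whole sequence I ran over ssh was simply this ("poolname" stands for my actual pool name):

zpool clear poolname   # clear the READ error counters on the pool
zpool scrub poolname   # the scrub I intended to start next; I never got that far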

Running TrueNAS CORE, version TrueNAS-13.0-U5.1.

Obviously, the resilvering process should tell me more; until then, is there anything other TrueNAS CORE users could add to this post?

PS. The pool was scrubbed 2 days ago (after the pool became degraded).
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
I don't know why your pool started to resilver on its own after the zpool clear command; that sounds odd to me, but I don't have a lot of personal experience with failing hard drives in this respect. Are you sure these are the steps you performed? Nothing else? Why was the pool degraded? I suspect you tried to clear the failing hard drive's errors and the system started resilvering a failing hard drive. That is all that makes sense to me.

My advice: provide the output of "zpool status poolname" and then the SMART output for each drive in that pool ("smartctl -a /dev/adaX") so we can see if there are any drive failures. If you are knowledgeable about the SMART data, you can examine it yourself.
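For example, something like this (ada0/ada1 are placeholders; substitute whatever device names your system actually uses):

zpool status poolname
smartctl -a /dev/ada0
smartctl -a /dev/ada1

or, to cover every ada disk in one pass:

for d in /dev/ada?; do echo "=== $d ==="; smartctl -a $d; done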
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
A read error is different from a checksum error and, in a way, more serious, since it is a stronger sign of a drive failure.
On which one of your systems is this happening?

I'd also look into zpool events -v poolname in order to find out exactly what happened.
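Something along these lines, assuming your OpenZFS build supports these flags:

zpool events -v poolname   # verbose details of every event still in the buffer
zpool events -f poolname   # leave it running and watch new events as they arrive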
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
The pool is degraded because one of the drives is no longer healthy:

smartctl 7.2 2021-09-14 r5236 [FreeBSD 13.1-RELEASE-p7 amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red
Device Model: WDC WD40EFRX-68N32N0
Serial Number: WD-XXXXXXXXXXXXXX
LU WWN Device Id: 5 0014ee 2b915e7d6
Firmware Version: 82.00A82
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sat Jul 29 11:58:30 2023 PDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 119) The previous self-test completed having
the read element of the test failed.
Total time to complete Offline
data collection: (44700) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 474) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x303d) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 199 051 Pre-fail Always - 39
3 Spin_Up_Time 0x0027 178 161 021 Pre-fail Always - 6066
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 230
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 053 053 000 Old_age Always - 34808
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 230
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 183
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 706
194 Temperature_Celsius 0x0022 115 107 000 Old_age Always - 35
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 1

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed: read failure 70% 34796 3519062560
# 2 Short offline Completed: read failure 70% 34795 3519062560
# 3 Short offline Completed: read failure 70% 34794 3519062560
# 4 Short offline Completed: read failure 70% 34793 3519062560
# 5 Short offline Completed: read failure 70% 34792 3519062560
# 6 Short offline Completed: read failure 70% 34791 3519062560
# 7 Short offline Completed: read failure 70% 34790 3519062560
# 8 Short offline Completed: read failure 70% 34789 3519062560
# 9 Short offline Completed: read failure 70% 34788 3519062560
#10 Short offline Completed: read failure 70% 34787 3519062560
#11 Short offline Completed: read failure 70% 34786 3519062560
#12 Short offline Completed: read failure 70% 34785 3519062560
#13 Short offline Completed: read failure 70% 34784 3519062560
#14 Short offline Completed: read failure 70% 34783 3519062560
#15 Short offline Completed: read failure 70% 34782 3519062560
#16 Short offline Completed: read failure 70% 34781 3519062560
#17 Short offline Completed: read failure 70% 34780 3519062560
#18 Short offline Completed: read failure 70% 34779 3519062560
#19 Short offline Completed: read failure 70% 34778 3519062560
#20 Short offline Completed: read failure 70% 34777 3519062560
#21 Short offline Completed: read failure 70% 34776 3519062560

SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

As @joeschmuck suspected, I think the resilvering started right after I cleared the pool; ZFS may have tried accessing the disk and ended up in a section affected by the raw read errors. I did initiate the clear command as I usually do, over ssh, and I found the prompt hanging unresponsive for a few seconds rather than returning nearly instantaneously. At that point I decided to run "zpool status", which showed the pool being resilvered.

Resilvering has now completed and the pool is back to being healthy:
pool: WD-RAIDZ2
state: ONLINE
scan: resilvered 14.2G in 00:24:13 with 0 errors on Sat Jul 29 00:25:57 2023
config:

NAME STATE READ WRITE CKSUM
WD-RAIDZ2 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
gptid/802a65df-0c52-11e7-9af1-002590f4b7f8.eli ONLINE 0 0 0
gptid/407b9403-5dd6-11ea-b00b-7085c28f99a9.eli ONLINE 0 0 0
gptid/a9aad811-069b-11e7-a972-002590f4b7f8.eli ONLINE 0 0 0
gptid/aa6d4b0f-069b-11e7-a972-002590f4b7f8.eli ONLINE 0 0 0
gptid/7459f27b-1697-11e7-9554-002590f4b7f8.eli ONLINE 0 0 0
raidz2-1 ONLINE 0 0 0
gptid/ce323000-069b-11e7-a972-002590f4b7f8.eli ONLINE 0 0 0
gptid/cefaffeb-069b-11e7-a972-002590f4b7f8.eli ONLINE 0 0 0
gptid/cfc694f6-069b-11e7-a972-002590f4b7f8.eli ONLINE 0 0 0
gptid/6ff01cf3-5b4c-11ea-8dad-7085c28f99a9.eli ONLINE 0 0 0
gptid/be79b056-3606-11ec-bc38-7085c28f99a9.eli ONLINE 0 0 0
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
If the drive you listed above is still in the system, your data will have errors again. Also, it is very odd that you are running a Short test every hour. Was that by accident or intentional? Either way, you should have received a lot of emails about the failures. Check all your drives for errors.
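A quick way to do that from the shell, assuming FreeBSD-style ada device names (adjust to match your controller):

for d in /dev/ada?; do echo "=== $d ==="; smartctl -H -l selftest $d; done

-H prints the overall health verdict and -l selftest prints the self-test log like the one you posted above.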
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
If the drive you listed above is still in the system, your data will have errors again. Also, it is very odd that you are running a Short test every hour. Was that by accident or intentional? Either way, you should have received a lot of emails about the failures. Check all your drives for errors.
I see what you mean about the SMART schedule running every hour. I don't recall if this was intentional for testing purposes or a mistake on my part.
Regardless I will reconfigure the schedules accordingly.

I am getting daily emails on server status and the disk did show as faulted just a few days ago. I didn't want to mess with the disk replacement yet as I wanted to get a few things sorted out first.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
My "guess" as to why the pool was re-silvered after a clear, is that the affected drive was offlined due to excessive errors. Thus, when it was re-instated with the clear, ZFS went to bring that drive up to date with any changes since it was offlined.

As to why your drive is now available again, it is possible that a bunch of read errors occurred due to a cluster of defects on the drive, all detected at once or within a short time frame. But the drive still had spare sectors available for sparing out the bad blocks, so it is working now.

ZFS took a conservative approach to a bunch of defects, and probably offlined the disk. If fewer bad blocks had been detected, ZFS would have automatically spared them out using the SATA write-to-bad-block method. (Any write to a bad block causes a SATA disk to spare the block out, making the write succeed and the block appear good to the host server.)
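If you want to check whether that sparing actually happened on this drive, comparing a few SMART attributes before and after a scrub should show it (the device name below is a placeholder):

smartctl -A /dev/ada0 | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'

If pending sectors get rewritten, Current_Pending_Sector should drop back toward zero, and Reallocated_Sector_Ct will climb for any block the drive had to spare out.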


All that said, it is a "guess". Take it as you will, but don't hold me responsible for any of my "guesses".
 
Last edited:

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
The disk was reported as FAULTED, not OFFLINED.
I don't know the distinction between the two when caused by read errors.
 