Hello, I'm new to TrueNAS SCALE and have encountered an issue that I am not sure how to deal with. I've done some research but have not found a solution or a clear idea of what to do.
I have a pool with two physical drives running in RAID 1 (a mirror).
I noticed a lot of notifications stating that an increasing number of sectors could not be read. One drive was marked as "Degraded" and the second one as "Faulted". When running zpool status -v, three files were listed as corrupted. To check whether they really were corrupted, I tried navigating to them using Windows File Explorer, and it was horrifically slow. I decided to reboot the system, and after the reboot one drive's state changed to "Online" and the other's changed to "Degraded". zpool status -v indicated that a resilver had run and repaired ~125 MB of data with 0 errors, and that no files were corrupted anymore. Just to make sure everything was OK, I ran a scrub task, and now the previously "Degraded" drive is "Faulted" again. I do not understand what is going on or what the implications of this behaviour are.
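For what it's worth, the scrub was just the standard scrub task from the pool options; I believe the shell equivalent is simply the following (pool name is mine):
Code:
# start a manual scrub of the MasterYoda pool
zpool scrub MasterYoda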
What do Faulted and Degraded actually mean? If there is an Online drive, why did it not correct the data on the degraded drive? Why did it claim that three files were unrecoverable when I could later access them? The server spends most of its time idle, with no reads or writes from me, yet it started randomly failing 10 days ago. Is this likely a hardware issue, or have I not configured the pool correctly to store data safely?
TrueNAS SCALE version: TrueNAS-SCALE-22.02.4
Errors in the dashboard:
Code:
Pool MasterYoda state is DEGRADED: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue functioning in a degraded state. The following devices are not healthy: Disk ST4000VN008-2DR166 ZGYA106E is FAULTED 2023-12-17 07:28:26 (America/Los_Angeles)
Code:
CRITICAL Device: /dev/sdc [SAT], 88 Currently unreadable (pending) sectors. 2023-07-15 19:59:56 (America/Los_Angeles)
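Since that alert the pending-sector count has kept climbing (it is 2512 in the full smartctl output further down). If I want to spot-check just that attribute, I believe something like this works (assuming Current_Pending_Sector is the right attribute to watch):
Code:
# print the SMART attribute table and keep only the pending-sector lines
smartctl -A /dev/sdc | grep -i pending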
Result of zpool status -v:
Code:
root@truenas:~# zpool status -v
pool: MasterKenobi
state: ONLINE
scan: scrub repaired 0B in 00:27:01 with 0 errors on Sun Dec 3 00:27:03 2023
config:
NAME                                      STATE     READ WRITE CKSUM
MasterKenobi                              ONLINE       0     0     0
  f8b22328-72b1-45e6-8cf8-ed4cb8007e5e    ONLINE       0     0     0
errors: No known data errors
pool: MasterYoda
state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
repaired.
scan: scrub repaired 4M in 01:54:13 with 0 errors on Sun Dec 17 07:49:51 2023
config:
NAME                                        STATE     READ WRITE CKSUM
MasterYoda                                  DEGRADED     0     0     0
  mirror-0                                  DEGRADED     0     0     0
    b72f7dc9-d952-4878-ba38-822a597684ef    ONLINE       0     0     3
    509cfa77-39f7-46ac-8610-1cbefea6b8ac    FAULTED     22     0    16  too many errors
errors: No known data errors
pool: boot-pool
state: ONLINE
status: Some supported and requested features are not enabled on the pool.
The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
the pool may no longer be accessible by software that does not support
the features. See zpool-features(7) for details.
scan: scrub repaired 0B in 00:00:46 with 0 errors on Thu Dec 14 03:45:48 2023
config:
NAME         STATE     READ WRITE CKSUM
boot-pool    ONLINE       0     0     0
  sdb3       ONLINE       0     0     0
errors: No known data errors
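Going by the 'action' line for MasterYoda above, I think the eventual fix (once I have a replacement disk) would look roughly like one of the following, whether from the shell or via the Replace option in the pool status page of the web UI. The new-disk path is just a placeholder, so please correct me if this is the wrong approach:
Code:
# swap the faulted member of the mirror for a new disk (placeholder device path)
zpool replace MasterYoda 509cfa77-39f7-46ac-8610-1cbefea6b8ac /dev/disk/by-id/<new-disk>

# or, if the disk somehow turns out to be healthy, clear the error counters and rescrub
zpool clear MasterYoda
zpool scrub MasterYoda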
Results of smartctl -a /dev/sdc:
Code:
=== START OF INFORMATION SECTION ===
Model Family: Seagate IronWolf
Device Model: ST4000VN008-2DR166
Serial Number: ZGYA106E
LU WWN Device Id: 5 000c50 0e36a82be
Firmware Version: SC60
User Capacity: 4,000,787,030,016 bytes [4.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5980 rpm
Form Factor: 3.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-3 T13/2161-D revision 5
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Sun Dec 17 08:50:21 2023 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 581) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 612) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x50bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 075 052 044 Pre-fail Always - 157924656
3 Spin_Up_Time 0x0003 094 093 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 46
5 Reallocated_Sector_Ct 0x0033 098 098 010 Pre-fail Always - 1256
7 Seek_Error_Rate 0x000f 079 060 045 Pre-fail Always - 39398134161
9 Power_On_Hours 0x0032 085 085 000 Old_age Always - 13621 (47 88 0)
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 46
184 End-to-End_Error 0x0032 100 100 099 Old_age Always - 0
187 Reported_Uncorrect 0x0032 001 001 000 Old_age Always - 281
188 Command_Timeout 0x0032 100 083 000 Old_age Always - 618484859026
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 086 053 040 Old_age Always - 14 (Min/Max 5/33)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 54
193 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 113
194 Temperature_Celsius 0x0022 014 047 000 Old_age Always - 14 (0 3 0 0 0)
197 Current_Pending_Sector 0x0012 070 069 000 Old_age Always - 2512
198 Offline_Uncorrectable 0x0010 070 069 000 Old_age Offline - 2512
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
240 Head_Flying_Hours 0x0000 100 253 000 Old_age Offline - 13618h+39m+44.408s
241 Total_LBAs_Written 0x0000 100 253 000 Old_age Offline - 21977262824
242 Total_LBAs_Read 0x0000 100 253 000 Old_age Offline - 20110909383
SMART Error Log Version: 1
ATA Error Count: 281 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 281 occurred at disk power-on lifetime: 13619 hours (567 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 ff ff ff 4f 00 21d+09:14:58.669 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:55.430 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:55.347 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:55.315 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:55.312 READ FPDMA QUEUED
Error 280 occurred at disk power-on lifetime: 13619 hours (567 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 ff ff ff 4f 00 21d+09:14:41.556 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:38.315 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:38.236 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:38.227 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:38.227 READ FPDMA QUEUED
Error 279 occurred at disk power-on lifetime: 13619 hours (567 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 ff ff ff 4f 00 21d+09:14:25.420 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:22.176 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:22.169 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:22.162 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:14:22.155 READ FPDMA QUEUED
Error 278 occurred at disk power-on lifetime: 13619 hours (567 days + 11 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 00 ff ff ff 4f 00 21d+09:01:45.718 READ FPDMA QUEUED
60 00 60 ff ff ff 4f 00 21d+09:01:42.227 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:01:42.227 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:01:42.176 READ FPDMA QUEUED
60 00 00 ff ff ff 4f 00 21d+09:01:42.049 READ FPDMA QUEUED
Error 277 occurred at disk power-on lifetime: 13617 hours (567 days + 9 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 53 00 ff ff ff 0f Error: WP at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
61 00 08 ff ff ff 4f 00 21d+07:25:33.718 WRITE FPDMA QUEUED
61 00 20 ff ff ff 4f 00 21d+07:25:33.718 WRITE FPDMA QUEUED
61 00 08 ff ff ff 4f 00 21d+07:25:33.717 WRITE FPDMA QUEUED
61 00 10 ff ff ff 4f 00 21d+07:25:33.717 WRITE FPDMA QUEUED
61 00 18 ff ff ff 4f 00 21d+07:25:33.717 WRITE FPDMA QUEUED
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 13620 -
# 2 Short offline Completed without error 00% 7609 -
# 3 Extended offline Completed without error 00% 6953 -
# 4 Short offline Completed without error 00% 6932 -
# 5 Extended offline Completed without error 00% 6769 -
My apologies if this is a bit of a waffle; I am just very confused and concerned. Do I just need to replace the drive, or is this fixable?
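In the meantime I was planning to kick off another extended SMART test on the drive to see whether it still passes (and check back after the ~10 hours it estimates), along these lines:
Code:
# start a long/extended self-test in the background
smartctl -t long /dev/sdc
# check the result later
smartctl -a /dev/sdc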
Thanks in advance,
Eugene