Lots of errors under heavy load

Status
Not open for further replies.

prodigi

Cadet
Joined
Jun 26, 2014
Messages
8
About a year ago, I built a new FreeNAS server for backups. All of the components in it were picked from the HCL.
FreeNAS-9.10.1-U4 (ec9a7d3)
Intel Xeon CPU E3-1220 v3 @ 3.10GHz
32GB RAM Crucial 240-pin DIMM, DDR3 PC3-12800
SUPERMICRO Supermicro X10SLL-F-B
SUPERMICRO 4U 24-BAY 846E1-R900B
24X WD Red 2TB NAS WD20EFRX
Lenovo IBM Intel I350-T2 2XGBE BaseT Ethernet Adapter
LSI Logic Controller Card MegaRAID SAS 9211-8i
Pool configured as RAIDZ2.

I use this as a multipathed iSCSI disk presented to VMware 5.5. I have various VMDKs attached to virtual machines in a farm of 5 ESX servers.
During normal operation, with daily backups running and whatnot, I have no problems. It's only when I turn up the heat that the box starts freaking out. As an example, right now I'm backing up 7TB from one ESX host and about 1.5TB from another. I get checksum errors spread across the disks. These climb into the thousands before ZFS eventually kicks a disk offline. It happened last weekend while I was copying over a server for archiving: I lost that server when the disk went offline and the copy was corrupted.

I'm at a bit of a loss here. I've noticed this behavior since I first installed it.
I've swapped out a few disks, and swapped the controller for an identical one.
Changed the controller-to-backplane cable.
Interface utilization isn't even close to maxing out either GbE interface: about 150Mbps on one NIC.
CPU is maybe 5%.
System load is less than 1.
Disk I/O is about 25M.
RAM is full, which I understand is expected with ZFS.
No swap utilization.
iSCSI: Read 9M, Write 15M.
ARC size = 28 GB
ARC hit ratio = 88%
ARC demand_data = 667
ARC demand_metadata = 593
ARC prefetch_data = 66
ARC prefetch_metadata = 187

I'm not sure where else I need to look.

Thanks in advance.
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
Run a memtest first. Also consider that the backplane could be the problem; perhaps contact Supermicro. You could also try moving your cards to different slots, or changing/removing the network card.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
About a year ago, I built a new FreeNAS server for backups. All of the components in it were picked from the HCL.
[...]
I'm not sure where else I need to look.
What is your sync setting on the iSCSI zvol(s)? Also, how many ESXi servers are connecting to the FreeNAS system?
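If you're not sure, running something like this on the FreeNAS console will show the current value for everything in the pool ("tank" is a placeholder for your pool name):

```shell
# Recursively show the sync property for the pool, all datasets, and zvols.
# "standard" honors application sync requests; "always"/"disabled" override them.
zfs get -r sync tank
```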
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
... Also, how many ESXi servers are connecting to the FreeNAS system?
Never mind: I see that you told us how many ESXi servers you have (5).

I suspect you're experiencing two problems:
  • Not enough memory
  • Wrong topology
According to the User Guide: "For iSCSI, install at least 16 GB of RAM if performance is not critical, or at least 32 GB of RAM if good performance is a requirement." 32GB of RAM is the minimum required for good performance. With 5 ESXi servers hitting the FreeNAS machine, 32GB of RAM may simply be inadequate for the task. Your system board only supports 32GB, so you're stuck with respect to memory.

RAIDZ is not a good choice for providing block storage: mirrors are the optimal topology.

IOPS scale by vdev, so mirrors will always provide the most IOPS for a given number of disks. Other than telling us you used RAIDZ2 for your pool, you didn't give any details about how your 24 disks are configured. I'm guessing you have them set up in 3 x 8-disk RAIDZ2 vdevs or 4 x 6-disk RAIDZ2 vdevs. The optimal iSCSI configuration for 24 disks would be 12 mirrored vdevs, which would give you 4 times the IOPS of the 3-vdev RAIDZ2 array and 3 times the IOPS of the 4-vdev RAIDZ2 array designs mentioned above.
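For illustration only, a 12-mirror layout would be created something like this. Device names are placeholders, and this builds a brand-new pool; you can't restructure an existing RAIDZ2 pool in place:

```shell
# 12 striped 2-way mirrors from 24 disks -- one vdev per pair of drives.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5
# ...and so on, pairing through da22/da23, for 12 mirror vdevs total.
```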

Best practice is to never exceed 50% utilization of your block storage zvol, so 12 mirrors in your case would provide 24TB of total capacity (less overhead) for roughly 12TB of usable capacity.

If redundancy is a concern, ZFS offers 3-way mirrors, but these come at a cost in capacity. Your 24 disks would allow for 8 3-way mirrored vdevs with a total capacity of 16TB, allowing for 8TB of usable capacity.

You may also want to consider disabling synchronous writes on your zvol, or adding a SLOG device for the ZIL.
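Either change is a one-liner. The dataset and device names below are placeholders, and note that sync=disabled trades crash safety for speed, so weigh that carefully:

```shell
# Disable synchronous writes on the iSCSI zvol (risk: lost writes on power failure)
zfs set sync=disabled tank/iscsi-zvol

# Or keep sync on and absorb it with a fast, power-loss-protected SSD as a SLOG:
zpool add tank log ada0
```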

Read forum member @jgreco's posts about configuring FreeNAS for optimal iSCSI performance; he's quite knowledgeable about the subject. Here is one thread in which he addresses the same sort of problem you are experiencing:

"Losing connection sporadically to iSCSI"

Good luck! (Oh... and 'Welcome to the forums!')
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
But checksum errors? All other things aside, that suggests some sort of hardware issue.

You have both supplies in the R900B? Are you using the correct firmware on the HBA?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
But checksum errors? All other things aside, that suggests some sort of hardware issue.

You have both supplies in the R900B? Are you using the correct firmware on the HBA?
Good point!

@prodigi: have you scheduled the system to perform regular short and extended SMART tests?
 

prodigi

Cadet
Joined
Jun 26, 2014
Messages
8
Good point!

@prodigi: have you scheduled the system to perform regular short and extended SMART tests?


There are 24x 2TB drives. When going through the config wizard I chose "backup". I needed space over performance, as this is just a repository for VM backups. At some point all 5 servers hit it, but at any given time only one is hitting it. If I kept the volume of data transferred to it low, it held up. Now I've switched backup software, which copies 5TB in 8hrs, and that's just making the errors go nuts. I did disable sync for fun once, and that didn't turn out well.
I believe the short SMART tests are running; it's the default. It's been a while since I looked at those, but I don't recall anything glaring.
At the time of the original build, I matched the correct firmware to the version of FreeNAS, but I'll check for updates.

This is my output from last night. It's not always enough to degrade the pool, but this time it did.
Code:
Checking status of zfs pools:
NAME  SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
PRODIGI-SAN17  43.5T  37.2T  6.33T  -  44%  85%  1.00x  DEGRADED  /mnt
freenas-boot  37.2G  709M  36.6G  -  -  1%  1.00x  ONLINE  -

pool: PRODIGI-SAN17
state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
   attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
   using 'zpool clear' or replace the device with 'zpool replace'.
see: http://illumos.org/msg/ZFS-8000-9P
scan: scrub in progress since Mon Jul 17 09:12:55 2017
23.2T scanned out of 37.2T at 162M/s, 25h6m to go
234M repaired, 62.47% done
config:

   NAME  STATE  READ WRITE CKSUM
   PRODIGI-SAN17  DEGRADED  0  0  0
	 raidz2-0  DEGRADED  0  0  0
	 gptid/b1da98af-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/b2dae43f-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/b3ea1209-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/b4f2f7ba-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/b6076165-bef5-11e6-aeb3-002590b51e7b  DEGRADED  0  0 8.21K  too many errors  (repairing)
	 gptid/b712eabf-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/b8218070-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/b9425077-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 raidz2-1  DEGRADED  0  0  0
	 gptid/ba4a4911-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0  (repairing)
	 gptid/bb51678d-bef5-11e6-aeb3-002590b51e7b  DEGRADED  0  0 5.47K  too many errors  (repairing)
	 gptid/bc52b23c-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/bd546653-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/be63dabd-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/bf64858a-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/c06ab487-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/c12272a2-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 raidz2-2  ONLINE  0  0  0
	 gptid/c2363bba-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/c3396f05-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/c440c899-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/c54523ae-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/c64d9c46-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/c75611de-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/c85ccb22-bef5-11e6-aeb3-002590b51e7b  ONLINE  0  0  0
	 gptid/f95ccad7-cf77-11e6-9e25-002590b51e7b  ONLINE  0  0  0

errors: No known data errors

-- End of daily output --
 

prodigi

Cadet
Joined
Jun 26, 2014
Messages
8
Something I noticed while updating the firmware is that I have the IR firmware rather than the IT firmware. Could that be the problem?
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
Something I noticed while updating the firmware is that I have the IR firmware rather than the IT firmware. Could that be the problem?
It's possible. IT firmware is recommended, although unconfigured disks in IR mode are presented as normal drives.

I'm also wondering about the PSU and if there is enough power to go round. What size PSU do you have in this system?
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
Conspicuously absent from the hardware list is the power supply. When things get funny under load, power supply is the first thing I suspect.
 

prodigi

Cadet
Joined
Jun 26, 2014
Messages
8
They are 1+1 900W power supplies. The IPMI is showing a peak of 127W, but I don't think that's entirely accurate. These are running on 208V circuits.
I'm going to update the IPMI firmware to see if I get an accurate reading.

So I updated the IR firmware to P20. I tried IT, but it said it can't flash IT over IR. I did this within FreeNAS, not from a boot disk.
Then for grins, I updated to FreeNAS 11.
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
So I updated the IR firmware to P20. I tried IT, but it said it can't flash IT over IR. I did this within FreeNAS, not from a boot disk.
Then for grins, I updated to FreeNAS 11.
You need to erase the IR firmware before you can flash IT.
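For reference, the usual sequence from a DOS/EFI boot disk for an SAS2008 card like the 9211-8i looks roughly like this. File names vary by firmware package, so check the package README before erasing anything; a failed flash can brick the card:

```shell
sas2flash -listall                            # note the adapter index and SAS address
sas2flash -o -e 6                             # erase the flash (card is blank after this)
sas2flash -o -f 2118it.bin -b mptsas2.rom     # flash IT firmware plus the boot ROM
sas2flash -o -sasadd 500605bxxxxxxxxx         # restore the SAS address you noted
```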
 

prodigi

Cadet
Joined
Jun 26, 2014
Messages
8
Does erasing it break my FreeNAS server?
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
You may have disk problems. If so, you need to address them immediately. There's no point in backing up your servers to this machine if it's soon to fail. You'll end up empty-handed...
I believe the short smart tests are running. It's the default. It's been a while since I looked at those but I didn't recall anything glaring.
For future reference, go to Tasks->S.M.A.R.T. Tests in the WebGUI and see if you have any tests scheduled. If you do, edit the test(s) and make sure all 24 of your drives are being tested. The edit form only shows 4 disks at a time, so be sure to scroll through it, ensuring all of your drives are highlighted. I've scheduled daily short and weekly extended tests on my system, as shown here:
[screenshot: freenas-smart-tasks.jpg -- scheduled S.M.A.R.T. tests]
[screenshot: freenas-smart-tasks-drive-selection.jpg -- drive selection in the test edit form]


In the meantime, you have bigger fish to fry:
Checking status of zfs pools:
Code:
NAME  SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
PRODIGI-SAN17  43.5T  37.2T  6.33T  -  44%  85%  1.00x  DEGRADED  /mnt
[...]
	 gptid/b6076165-bef5-11e6-aeb3-002590b51e7b  DEGRADED  0  0 8.21K  too many errors  (repairing)
[...]
	 gptid/bb51678d-bef5-11e6-aeb3-002590b51e7b  DEGRADED  0  0 5.47K  too many errors  (repairing)
[...]
errors: No known data errors
You've got two problem drives, one each in your raidz2-0 and raidz2-1 vdevs. These need immediate attention to determine whether or not they need to be replaced.

You can determine which disks they are with a little spelunking in the WebGUI. Go to Storage, highlight your pool, and click the Volume Status button at the bottom of the form (it's third from the left). This will show your pool layout, including the specific drives in each vdev. Make a note of the degraded drives -- they'll be named something like da14p1, which means 'Drive da14, partition p1'.

For each of these drives, run this command: smartctl -t short /dev/da14, substituting the device name (da14 in my example) with the names of the degraded drives you found. The test takes a few minutes to complete; you'll see an estimate of the duration when you run the command. Wait long enough for the test to finish, then run smartctl -a /dev/da14 and examine the output for reallocated sectors or other indications of problems.

The next step might be to run an extended test on the drives, like this: smartctl -t long /dev/da14. Note that these tests will take hours to complete, but they do more in-depth testing that may reveal problems.
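Putting that together for one drive (da14 is still just an example device name):

```shell
smartctl -t short /dev/da14    # kicks off a ~2-minute short self-test
sleep 180                      # give the test time to finish
# Look for reallocated/pending/uncorrectable sectors and the self-test log:
smartctl -a /dev/da14 | grep -iE 'realloc|pending|uncorrect|self-test'
smartctl -t long /dev/da14     # extended test; expect several hours
```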

For further investigation, refer to @joeschmuck's "Hard Drive Troubleshooting Guide (All Versions of FreeNAS)".

The point is that you need to address the hardware problems you seem to have before relying on this system to back up your data.

Good luck!
 

prodigi

Cadet
Joined
Jun 26, 2014
Messages
8
Seems to point to bad drives.
Below is the smartctl output. These seem OK, unless I'm missing something.
But here's what's weirder: it's these two drives now, but it's not always the same drives. So it makes me think it's not actually the drives.


Code:
root@prodigi-san17:/mnt # smartctl -a /dev/da22
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.0-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:	 Western Digital Red
Device Model:	 WDC WD20EFRX-68EUZN0
Serial Number:	WD-WCC4M7LX27DF
LU WWN Device Id: 5 0014ee 2b877ea9d
Firmware Version: 82.00A82
User Capacity:	2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:	 512 bytes logical, 4096 bytes physical
Rotation Rate:	5400 rpm
Device is:		In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:	Wed Jul 19 09:25:46 2017 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)	Offline data collection activity
					was never started.
					Auto Offline Data Collection: Disabled.
Self-test execution status:	  (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection:		 (27360) seconds.
Offline data collection
capabilities:			 (0x7b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:			(0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:		(0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time:	 (   2) minutes.
Extended self-test routine
recommended polling time:	 ( 276) minutes.
Conveyance self-test routine
recommended polling time:	 (   5) minutes.
SCT capabilities:			(0x703d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME		  FLAG	 VALUE WORST THRESH TYPE	  UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate	 0x002f   200   200   051	Pre-fail  Always	   -	   35
  3 Spin_Up_Time			0x0027   100   253   021	Pre-fail  Always	   -	   0
  4 Start_Stop_Count		0x0032   100   100   000	Old_age   Always	   -	   5
  5 Reallocated_Sector_Ct   0x0033   200   200   140	Pre-fail  Always	   -	   0
  7 Seek_Error_Rate		 0x002e   200   200   000	Old_age   Always	   -	   0
  9 Power_On_Hours		  0x0032   093   093   000	Old_age   Always	   -	   5267
 10 Spin_Retry_Count		0x0032   100   253   000	Old_age   Always	   -	   0
 11 Calibration_Retry_Count 0x0032   100   253   000	Old_age   Always	   -	   0
 12 Power_Cycle_Count	   0x0032   100   100   000	Old_age   Always	   -	   5
192 Power-Off_Retract_Count 0x0032   200   200   000	Old_age   Always	   -	   3
193 Load_Cycle_Count		0x0032   200   200   000	Old_age   Always	   -	   237
194 Temperature_Celsius	 0x0022   130   110   000	Old_age   Always	   -	   17
196 Reallocated_Event_Count 0x0032   200   200   000	Old_age   Always	   -	   0
197 Current_Pending_Sector  0x0032   200   200   000	Old_age   Always	   -	   0
198 Offline_Uncorrectable   0x0030   100   253   000	Old_age   Offline	  -	   0
199 UDMA_CRC_Error_Count	0x0032   200   200   000	Old_age   Always	   -	   2
200 Multi_Zone_Error_Rate   0x0008   100   253   000	Old_age   Offline	  -	   0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
	1		0		0  Not_testing
	2		0		0  Not_testing
	3		0		0  Not_testing
	4		0		0  Not_testing
	5		0		0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.


root@prodigi-san17:/mnt # smartctl -a /dev/da12
smartctl 6.5 2016-05-07 r4318 [FreeBSD 11.0-STABLE amd64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:	 Western Digital Red
Device Model:	 WDC WD20EFRX-68EUZN0
Serial Number:	WD-WCC4M6HZTNKV
LU WWN Device Id: 5 0014ee 2b7345e67
Firmware Version: 82.00A82
User Capacity:	2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:	 512 bytes logical, 4096 bytes physical
Rotation Rate:	5400 rpm
Device is:		In smartctl database [for details use: -P show]
ATA Version is:   ACS-2 (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:	Wed Jul 19 09:27:20 2017 CDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x00)	Offline data collection activity
					was never started.
					Auto Offline Data Collection: Disabled.
Self-test execution status:	  (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection:		 (26640) seconds.
Offline data collection
capabilities:			 (0x7b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:			(0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:		(0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time:	 (   2) minutes.
Extended self-test routine
recommended polling time:	 ( 269) minutes.
Conveyance self-test routine
recommended polling time:	 (   5) minutes.
SCT capabilities:			(0x703d)	SCT Status supported.
					SCT Error Recovery Control supported.
					SCT Feature Control supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME		  FLAG	 VALUE WORST THRESH TYPE	  UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate	 0x002f   200   200   051	Pre-fail  Always	   -	   0
  3 Spin_Up_Time			0x0027   173   169   021	Pre-fail  Always	   -	   4333
  4 Start_Stop_Count		0x0032   100   100   000	Old_age   Always	   -	   47
  5 Reallocated_Sector_Ct   0x0033   200   200   140	Pre-fail  Always	   -	   0
  7 Seek_Error_Rate		 0x002e   200   200   000	Old_age   Always	   -	   0
  9 Power_On_Hours		  0x0032   093   093   000	Old_age   Always	   -	   5363
 10 Spin_Retry_Count		0x0032   100   253   000	Old_age   Always	   -	   0
 11 Calibration_Retry_Count 0x0032   100   253   000	Old_age   Always	   -	   0
 12 Power_Cycle_Count	   0x0032   100   100   000	Old_age   Always	   -	   27
192 Power-Off_Retract_Count 0x0032   200   200   000	Old_age   Always	   -	   25
193 Load_Cycle_Count		0x0032   200   200   000	Old_age   Always	   -	   320
194 Temperature_Celsius	 0x0022   129   107   000	Old_age   Always	   -	   18
196 Reallocated_Event_Count 0x0032   200   200   000	Old_age   Always	   -	   0
197 Current_Pending_Sector  0x0032   200   200   000	Old_age   Always	   -	   0
198 Offline_Uncorrectable   0x0030   100   253   000	Old_age   Offline	  -	   0
199 UDMA_CRC_Error_Count	0x0032   200   200   000	Old_age   Always	   -	   2
200 Multi_Zone_Error_Rate   0x0008   100   253   000	Old_age   Offline	  -	   0

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description	Status				  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Conveyance offline  Completed without error	   00%		90		 -
# 2  Short offline	   Completed without error	   00%		 0		 -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
	1		0		0  Not_testing
	2		0		0  Not_testing
	3		0		0  Not_testing
	4		0		0  Not_testing
	5		0		0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

root@prodigi-san17:/mnt # 

 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
For the drive results...

You are not running SMART tests on these regularly, and you should be. You have not completed a single SMART long test, and I recommend you do that first for both drives.

Both drives have errors noted on ID 199. This is typically a SATA cable issue; keep an eye on this value and make sure it does not increase. It never clears back to zero and stays with the drive for life. If the value increases at all, replace the SATA cable(s) first.

But in general, without a SMART long test being run to completion, the drives appear physically fine.
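If you want to keep an eye on it, the raw value is simply the last column of the ID 199 row in the attribute table. Here's the filter demonstrated on a line captured from your output above; in practice you'd pipe smartctl -a /dev/daN through the same awk:

```shell
# Attribute 199 (UDMA_CRC_Error_Count): print the raw value (last field).
echo '199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 2' |
  awk '$1 == 199 { print $NF }'
# prints: 2
```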
 