Does "unreadable (pending) sectors" reduce file system read/write speed ?

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Hello forum,

I have been getting the dreaded "unreadable (pending) sectors" on two of my hard drives (10 disks, RAIDZ2), so I am in the process of making sure my data is backed up by sending ZFS snapshots to a second server. The second server runs Proxmox but has 3x3TB drives using ZFS, and I back up to these drives regularly from my FreeNAS system. Here is the (possibly?) strange thing: file access has become significantly slower. I recently did a scrub on the FreeNAS box and it took 113 hours!! Normally it finishes in single digits, 8 hours or so. On top of that, the incremental "zfs send -i" snapshots over to my backup machine are taking much more time than normal.
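For anyone reading along, the pending sector counts come from SMART; a check along these lines will show them (da0 is just a placeholder, substitute your own device names):

Code:
# attribute 197 is Current_Pending_Sector, 198 is Offline_Uncorrectable
smartctl -A /dev/da0 | grep -E "Pending_Sector|Offline_Uncorrectable"
# repeat for each disk in the pool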

Does "unreadable (pending) sectors" reduce file system read/write speed ?

I have attached a couple of "zpool iostat 2" output screen grabs which show how dismal the read speed is (FREENAS_SOURCE) and how bad the write speed is (DESTINATION).

Also, I forgot to mention that these two systems are connected via a D-Link switch and, according to the link lights on the switch, are connected at 1Gb.
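If it helps rule the network in or out, a quick iperf3 run between the two boxes would show the raw link throughput (assuming iperf3 is installed on both ends; the IP below is just a placeholder):

Code:
# on the Proxmox backup server
iperf3 -s
# on the FreeNAS box
iperf3 -c 192.168.1.20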
 

Attachments

  • FREENAS_SOURCE.png (98.5 KB)
  • DESTINATION.png (96.8 KB)

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Does "unreadable (pending) sectors" reduce file system read/write speed ?
Let me preface what I am about to say with this: from what you describe, your pool is made of a single vdev, so when I talk about your pool performance, it is with the understanding that it is really vdev performance. If you have two vdevs, each vdev can have a different performance level, and the pool performance is the aggregate of the vdev performance. It can get quite complicated.

It isn't the unreadable pending sectors exactly; it is likely that the drives with those sectors are repeatedly trying to read and failing, making the rest of the pool wait. You didn't say what model of drive you are using (it would be nice to know), but even with drives that honor TLER there is still a delay when a drive hits an error, every error adds another delay, and the pool can only run as fast as its slowest drive. The best course of action is to go ahead and replace the one drive that appears to be in the worst condition. Once that resilver is done, replace the second drive. Once you have no bad drives, the pool should run as fast as it ever did.
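For reference, the replacement itself is the standard procedure, roughly like this from the command line (pool and device names below are placeholders, not taken from your system; on FreeNAS you would normally do the same thing from the GUI's pool status page):

Code:
zpool status tank              # identify the failing disk
zpool offline tank da3         # take the bad disk offline (optional if the new disk goes in a spare bay)
zpool replace tank da3 da10    # old device, new device; the resilver starts automatically
zpool status tank              # watch the resilver progress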
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
If the disks are failing, then yes, it would affect performance, since reads from those disks fail and the data has to be reconstructed from elsewhere in the pool.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
and how bad the write speed is (DESTINATION)
Write speed here depends entirely on read speed, so I wouldn't worry about that. It is the read side that is killing performance.
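One way to see that for yourself, assuming pv is installed on the sending side, is to drop it into the replication pipe and watch the throughput (the dataset, snapshot, and host names below are just placeholders):

Code:
zfs send -i tank/data@snap1 tank/data@snap2 | pv | ssh backuphost zfs recv -F backup/data

If the rate pv shows matches the dismal read numbers on the source pool, the writes on the destination are simply waiting on the reads.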
 

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Gents,

Thanks for your input. I will be ordering new drives soon, but I have a further query after thinking about the hard drives I have installed...

I am using a mixture of drives in the pool (single vdev). The two that are failing are both Western Digitals. One is a WD20EARS-00S, the other is a WD20EARS-00M. Both are "Greens"; however, I don't think the last few characters of the model number indicate any meaningful difference between the drives, though I could be wrong.

Thinking about it now, looking at the "camcontrol devlist" output, I am wondering if the "Greens" (5400rpm) could actually be slowing my system down anyway. Will the slowest drive on the system drag the other 7200rpm drives down as well? Probably not optimal to have a mixture of 5400/7200?
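For what it's worth, this is the sort of check I am using to see which drives are which (device names are just examples; some older drives don't report a rotation rate at all):

Code:
camcontrol devlist                         # map device names to drive models
smartctl -i /dev/da2 | grep -i rotation    # reports the rotation rate, if the drive exposes it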

Thanks

L
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Ideally you have drives of the same speed in each vdev. A slower drive in a vdev will slow everything down.
 