multipath/disk1 degraded

Hello everyone,

I've run into an error on a FreeNAS setup: one of the multipath paths has failed.

Hardware specs:

2 x Xeon E5 2620
Supermicro X9DRI-LN4F
128GB ECC DDR3 RAM
LSI 9207-8e

Supermicro SC847 JBOD Chassis 4U

24 x HGST 7200RPM NL-SAS 2TB HDD

The pool is set up as 4 RAIDZ2 vdevs of 6 disks each.

zpool status reports everything online with no errors. Is there something I can do to reset the path so it's back to the "OPTIMAL" state like the other drives, or is this a potential hardware issue that should be addressed?
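For reference, a minimal sketch of the two checks involved, with `tank` as a placeholder pool name:

```sh
# Pool-level view: ZFS addresses the multipath device rather than the
# individual paths, so the pool can be ONLINE while one path has failed.
zpool status tank

# Path-level view: lists each multipath geom, its state
# (OPTIMAL/DEGRADED), and its component providers.
gmultipath status
```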
 

Attachments

  • Screen Shot 2014-09-02 at 2.35.29 PM.png

mav@ (iXsystems)
It may be a hardware issue: the FAIL status is set when an I/O request returns an error. You can clear the failure status by running a command like `gmultipath restore disk1 da0` from the command line. If there are still problems, the FAIL status will reappear; if it doesn't, the problem was probably transient.
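A minimal sketch of the sequence, assuming the failed path is provider da0 under the multipath device disk1 (substitute the names from your own `gmultipath status` output):

```sh
# Inspect the current multipath state; the failed provider is marked FAIL.
gmultipath status

# Clear the FAIL flag so the path is returned to service.
gmultipath restore disk1 da0

# If the FAIL state comes back, look for transport or drive errors.
dmesg | grep da0
smartctl -a /dev/da0   # smartmontools ships with FreeNAS
```

If the same path keeps failing, the usual suspects are the cable, the enclosure slot, or the drive itself.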
 
Thanks for the reply, mav. I ran the command and it cleared the alert for now. I'll continue to monitor it in the meantime. Cheers!
 