Critical error on one of the drives of a new NAS

Marc Locchi

Dabbler
Joined
May 15, 2014
Messages
12
Hi all,
I am a new FreeNAS user; I've had it installed for two weeks now (everything is two weeks old). My specs are:

Intel Xeon E5-2630V3
32GB ECC RAM
Intel Server Board DBS2600COE - latest BIOS revision
4x Gigabit NICs with Link Aggregation
10x WD Re HDDs 4TB
10x onboard SATA ports (Intel controller), set up as JBOD
FreeNAS-9.2.1.5-RELEASE-x64 (80c1d35)
1 ZFS RAIDZ2 volume using all 10 drives
FreeNAS on a 16GB USB stick on the motherboard
This morning, on turning on the NAS, one of the drives (ad5) reported that it could not be started, and the zpool was marked as DEGRADED. I shut down and rebooted, and everything came back fine, except that the reported error is still showing; the zpool is no longer degraded (HEALTHY now).
The command zpool status -v shows:
[root@LSERVER ~]# zpool status -v
  pool: LDATA
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub in progress since Sat Jun 28 07:56:47 2014
        380G scanned out of 12.4T at 210M/s, 16h40m to go
        0 repaired, 3.00% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        LDATA                                           ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/11ccd0dd-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0
            gptid/126b4c32-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0
            gptid/130837b9-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0
            gptid/13a26155-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     1
            gptid/143ab38e-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0
            gptid/14d4f692-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0
            gptid/157080f3-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0
            gptid/160ad906-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0
            gptid/16a67084-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0
            gptid/17443365-dd68-11e3-9ec8-6cf049707adf  ONLINE       0     0     0

errors: No known data errors
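
For reference, this is how the flagged gptid can be tied back to a physical disk and the error counters reset once the drive checks out. LDATA and the gptid come from the output above; the ada4p2 shown in the comment is just an illustrative result, not my actual output:

glabel status | grep 13a26155    # maps the gptid to a device node, e.g. ada4p2
zpool clear LDATA                # resets the READ/WRITE/CKSUM counters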

All drives show as healthy in the Volumes panel, and after clearing the error (since the system restarted without an issue) everything seems fine.
I am currently running a ZFS scrub, but there are 18 hours to go before I get the report.
Any suggestions on what else to check, please? All the hardware is new and the drives are enterprise-grade WD Re units, so I am surprised to see this error on a two-week-old system.
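
(In the meantime, the scrub's progress can be polled from the shell; LDATA is the pool name from above:

zpool status LDATA    # the 'scan:' line shows amount scanned, rate, and time remaining
)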
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Run a SMART long test and then post the SMART values (for the affected drive at least, ideally for all drives).

Did you test the drives before using them?
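
For reference, with smartmontools that looks something like this; /dev/ada4 here is only a placeholder for whichever device node the affected drive actually has:

smartctl -t long /dev/ada4    # start the extended self-test (runs for several hours)
smartctl -a /dev/ada4         # once it finishes, dump the SMART attributes and self-test log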
 