Would appreciate some expert advice if it isn't too late.
My RAIDz2 array was running a scrub this morning. UK temps are unusually high today, and for the first time since building this system I received alarms that a number of drives had reached the 37°C limit:
Code:
Device: /dev/da6 [SAT], Temperature 37 Celsius reached critical limit of 37 Celsius (Min/Max 25/37!)
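In case it helps, this is roughly how I've been spot-checking temperatures from the shell (a minimal sketch, assuming smartmontools is present as it is on FreeNAS; the da0–da9 device names are just how my drives enumerate and may differ elsewhere):

Code:
# Current temperature of the drive that alarmed (SMART attribute 194 on most drives):
smartctl -a /dev/da6 | grep -i temperature

# Repeat for the other pool members, e.g. da0 through da9 on my box:
smartctl -a /dev/da0 | grep -i temperature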
At 11:00am I got a more serious warning.
Code:
The volume RAID (ZFS) state is DEGRADED: One or more devices are faulted in response to persistent errors. Sufficient replicas exist for the pool to continue functioning in a degraded state.
At 11:10am, it appears the array has crapped itself.
Code:
The volume RAID (ZFS) state is UNAVAIL: One or more devices are faulted in response to IO failures.
I checked the array status and it looks bad...
Code:
[admin@freenas] /% zpool status
  pool: RAID
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: http://illumos.org/msg/ZFS-8000-JQ
  scan: scrub in progress since Wed Jul 1 04:00:03 2015
        10.6T scanned out of 20.4T at 422M/s, 6h47m to go
        0 repaired, 51.84% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        RAID                                            UNAVAIL      0     0     0
          raidz2-0                                      UNAVAIL      0     3     0
            gptid/4087080c-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
            gptid/415efb3e-3a74-11e4-b9cb-90e2ba382e3c  FAULTED      7   142     0  too many errors
            3981687536073005009                         REMOVED      0     0     0  was /dev/gptid/42391ba3-3a74-11e4-b9cb-90e2ba382e3c
            gptid/43109c05-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
            6808129795312123271                          REMOVED      0     0     0  was /dev/gptid/43f12866-3a74-11e4-b9cb-90e2ba382e3c
            gptid/44ceabd7-3a74-11e4-b9cb-90e2ba382e3c  FAULTED      6     8     0  too many errors
            gptid/45aecb5f-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
            gptid/4693dff1-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0
            16041530080660729023                         REMOVED      0     0     0  was /dev/gptid/477508a3-3a74-11e4-b9cb-90e2ba382e3c
            gptid/48592025-3a74-11e4-b9cb-90e2ba382e3c  ONLINE       0     0     0

errors: 96 data errors, use '-v' for a list

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Wed Jul 1 03:46:06 2015
config:

        NAME                                          STATE     READ WRITE CKSUM
        freenas-boot                                  ONLINE       0     0     0
          gptid/464811d2-cb24-11e4-9959-90e2ba382e3c  ONLINE       0     0     0

errors: No known data errors
[admin@freenas] /%
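Once the drives are cool and the REMOVED disks are reattached, my rough plan was the following, taken from the 'action' line above (happy to be corrected before I touch anything). As I understand it, 'zpool clear' only resets the error counters and lets the pool retry; it doesn't repair anything by itself:

Code:
# List the files hit by the 96 data errors, as the status output suggests:
zpool status -v RAID

# Check whether the REMOVED disks are still visible to FreeBSD at all:
camcontrol devlist

# Only after the missing disks are back: clear the error states and let ZFS retry.
zpool clear RAID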
I'm due to move country on Friday, so I don't have time to do much troubleshooting.