FreeNAS 9.3 STABLE on a 'backblaze' v5.0 storage pod from Backuppods
I had been receiving the following 'critical alerts' email for a 15-drive raidz2:
---
Code:
Device: /dev/ada3, 40 Currently unreadable (pending) sectors
Device: /dev/ada3, 9 Offline uncorrectable sectors
---
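Those look like the standard smartd warnings, so the raw SMART attributes behind them (197 Current_Pending_Sector and 198 Offline_Uncorrectable) can be pulled directly; a minimal sketch, assuming the smartmontools that FreeNAS ships:
---
Code:
smartctl -A /dev/ada3 | egrep 'Current_Pending_Sector|Offline_Uncorrectable'
---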
Then a couple of days ago I got:
---
Code:
Device: /dev/ada3, 40 Currently unreadable (pending) sectors
Device: /dev/ada3, 9 Offline uncorrectable sectors
The volume csrpd1 (ZFS) state is ONLINE: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
The capacity for the volume 'csrpd1' is currently at 92%, while the recommended value is below 80%.
---
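(Side note: the 92% capacity figure can be sanity-checked against the pool with the commands below; the FreeNAS alert may compute its percentage slightly differently than zpool's CAP column, so this is just a rough cross-check.)
---
Code:
zpool list csrpd1
zfs list -o space csrpd1
---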
zpool status -v csrpd1 showed
---
Code:
  pool: csrpd1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 35h33m with 2 errors on Mon Aug 1 11:33:58 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        csrpd1                                          ONLINE       0     0     2
          raidz2-0                                      ONLINE       0     0     4
            gptid/652599a0-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/6597aff4-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/6610b29c-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/6688829c-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/6702e919-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/677d928e-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/67fbdb3c-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/687b22bd-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/68ec3f39-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/69699a43-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/69e1fe49-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/6a553de4-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/6acc9261-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/6b4adf15-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
            gptid/6bb9e2ea-d68b-11e5-9dde-0cc47a5eccf4  ONLINE       0     0     0
        spares
          gptid/6c348586-d68b-11e5-9dde-0cc47a5eccf4    AVAIL

errors: Permanent errors have been detected in the following files:

        /mnt/csrpd1/NEXRAD/level2/2008/200809/20080928/KTWX/KTWX20080928_210731_V03.bz2
---
BTW, /dev/ada3 is gptid/6688829c-d68b-11e5-9dde-0cc47a5eccf4
Note that, at that point, zpool status was not indicating a degraded state.
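(That device-to-gptid mapping can be cross-checked with glabel; roughly:)
---
Code:
glabel status | grep -i gptid    # lists each gptid label and the adaXpY partition behind it
---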
Then today I received:
---
Code:
Device: /dev/ada3, 40 Currently unreadable (pending) sectors
Device: /dev/ada3, 9 Offline uncorrectable sectors
The volume csrpd1 (ZFS) state is DEGRADED: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
The capacity for the volume 'csrpd1' is currently at 92%, while the recommended value is below 80%.
---
and zpool status -v csrpd1 now shows
---
Code:
  pool: csrpd1
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://illumos.org/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 35h33m with 2 errors on Mon Aug 1 11:33:58 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        csrpd1                                          DEGRADED     0     0    38
          raidz2-0                                      DEGRADED     0     0    76
            gptid/652599a0-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/6597aff4-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/6610b29c-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/6688829c-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/6702e919-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/677d928e-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/67fbdb3c-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/687b22bd-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/68ec3f39-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/69699a43-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/69e1fe49-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/6a553de4-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/6acc9261-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/6b4adf15-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
            gptid/6bb9e2ea-d68b-11e5-9dde-0cc47a5eccf4  DEGRADED     0     0     0  too many errors
        spares
          gptid/6c348586-d68b-11e5-9dde-0cc47a5eccf4    AVAIL

errors: Permanent errors have been detected in the following files:

        /mnt/csrpd1/NEXRAD/level2/2008/200809/20080928/KTWX/KTWX20080928_210731_V03.bz2
---
Do I believe my eyes? Have ALL of the drives failed?
What is this zpool status telling me?
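In case it helps with diagnosis, here is roughly how I plan to spot-check SMART health across every disk in the pod; a sh-style sketch that pulls the disk names from kern.disks:
---
Code:
# print overall SMART health plus pending/uncorrectable sector counts for each ada disk
for d in $(sysctl -n kern.disks | tr ' ' '\n' | grep '^ada' | sort); do
  echo "== /dev/$d =="
  smartctl -H -A /dev/$d | egrep 'overall-health|Current_Pending_Sector|Offline_Uncorrectable'
done
---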