Bhoot
Patron
Joined: Mar 28, 2015
Messages: 241
I got an email from my FreeNAS box.
Code:
Checking status of zfs pools:

NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH    ALTROOT
bhoot           29T  21.6T  7.35T         -    39%    74%  1.00x  DEGRADED  /mnt
freenas-boot  14.2G  1.05G  13.2G         -      -     7%  1.00x  ONLINE    -

  pool: bhoot
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 132h59m with 0 errors on Fri Jan  6 16:59:14 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        bhoot                                           DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/5663b940-bdde-11e5-9e00-f07959376c84  ONLINE       0     0     0
            10479856730608632472                        UNAVAIL      0     0     0  was /dev/gptid/cd427285-e4d8-11e4-b39d-f07959376c84
            gptid/ec0f7827-2d2c-11e6-b1de-f07959376c84  ONLINE       0     0     0
            gptid/ce06b19f-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0
            gptid/ce69a75d-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0
            gptid/b1f3389f-5382-11e6-885d-f07959376c84  ONLINE       0     0     0
            gptid/cf2dd08e-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0
            gptid/cf91d6e8-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0

errors: No known data errors

-- End of daily output --
I woke up the next morning and checked the basics on the FreeNAS box. One of the disks now shows up as a really long number (about 20 digits) with the status UNAVAIL. I also tried reseating the connections, to no avail. Not sure why, but the disk just dropped out of the array. I have a cold spare ready, so that's not an issue; I'm just wondering whether I should have gotten some alerts before this happened.
I also pulled a few outputs over SSH:
Code:
[root@freenas] ~# zpool status -v
  pool: bhoot
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub repaired 0 in 132h59m with 0 errors on Fri Jan  6 16:59:14 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        bhoot                                           DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/5663b940-bdde-11e5-9e00-f07959376c84  ONLINE       0     0     0
            10479856730608632472                        UNAVAIL      0     0     0  was /dev/gptid/cd427285-e4d8-11e4-b39d-f07959376c84
            gptid/ec0f7827-2d2c-11e6-b1de-f07959376c84  ONLINE       0     0     0
            gptid/ce06b19f-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0
            gptid/ce69a75d-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0
            gptid/b1f3389f-5382-11e6-885d-f07959376c84  ONLINE       0     0     0
            gptid/cf2dd08e-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0
            gptid/cf91d6e8-e4d8-11e4-b39d-f07959376c84  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Wed Jan  4 03:46:18 2017
config:

        NAME                                            STATE     READ WRITE CKSUM
        freenas-boot                                    ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/40460acb-cf27-11e5-b12b-f07959376c84  ONLINE       0     0     0
            da1p2                                       ONLINE       0     0     0

errors: No known data errors
Code:
[root@freenas] ~# camcontrol devlist
<WDC WD40EFRX-68WT0N0 82.00A82>    at scbus1 target 0 lun 0 (ada0,pass0)
<WDC WD40EFRX-68WT0N0 82.00A82>    at scbus4 target 0 lun 0 (ada1,pass1)
<WDC WD40EFRX-68WT0N0 82.00A82>    at scbus5 target 0 lun 0 (ada2,pass2)
<WDC WD40EFRX-68WT0N0 82.00A82>    at scbus6 target 0 lun 0 (ada3,pass3)
<WDC WD40EFRX-68WT0N0 82.00A82>    at scbus7 target 0 lun 0 (ada4,pass4)
<WDC WD40EFRX-68WT0N0 82.00A82>    at scbus8 target 0 lun 0 (ada5,pass5)
<WDC WD40EFRX-68WT0N0 82.00A82>    at scbus9 target 0 lun 0 (ada6,pass6)
<SanDisk Ultra Fit 1.00>           at scbus11 target 0 lun 0 (pass7,da0)
<SanDisk Ultra Fit 1.00>           at scbus12 target 0 lun 0 (pass8,da1)
Code:
[root@freenas] ~# gpart show
=>      34  30031183  da0  GPT  (14G)
        34      1024    1  bios-boot  (512k)
      1058         6       - free -  (3.0k)
      1064  30030152    2  freebsd-zfs  (14G)
  30031216         1       - free -  (512B)

=>      34  30031183  da1  GPT  (14G)
        34      1024    1  bios-boot  (512k)
      1058         6       - free -  (3.0k)
      1064  30030152    2  freebsd-zfs  (14G)
  30031216         1       - free -  (512B)

=>        34  7814037101  ada0  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>        34  7814037101  ada1  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>        34  7814037101  ada2  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>        34  7814037101  ada3  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>        34  7814037101  ada4  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>        34  7814037101  ada5  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)

=>        34  7814037101  ada6  GPT  (3.7T)
          34          94        - free -  (47k)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  7809842696     2  freebsd-zfs  (3.7T)
  7814037128           7        - free -  (3.5k)
Code:
[root@freenas] ~# glabel status
                                      Name  Status  Components
gptid/403b7529-cf27-11e5-b12b-f07959376c84     N/A  da0p1
gptid/40460acb-cf27-11e5-b12b-f07959376c84     N/A  da0p2
gptid/9623e4df-cf29-11e5-a539-f07959376c84     N/A  da1p1
gptid/cf91d6e8-e4d8-11e4-b39d-f07959376c84     N/A  ada0p2
gptid/b1f3389f-5382-11e6-885d-f07959376c84     N/A  ada1p2
gptid/5663b940-bdde-11e5-9e00-f07959376c84     N/A  ada2p2
gptid/ec0f7827-2d2c-11e6-b1de-f07959376c84     N/A  ada3p2
gptid/ce06b19f-e4d8-11e4-b39d-f07959376c84     N/A  ada4p2
gptid/ce69a75d-e4d8-11e4-b39d-f07959376c84     N/A  ada5p2
gptid/cf2dd08e-e4d8-11e4-b39d-f07959376c84     N/A  ada6p2
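To double-check that the dropped member is really gone from the system (and not just relabeled), I diffed the eight raidz2 member gptids from zpool status against the seven pool-disk gptids glabel still sees. The lists are pasted from the outputs above; this runs anywhere, it doesn't touch the pool:

```shell
# The eight members of raidz2-0 per 'zpool status' (UNAVAIL one included).
pool_members="5663b940-bdde-11e5-9e00-f07959376c84
cd427285-e4d8-11e4-b39d-f07959376c84
ec0f7827-2d2c-11e6-b1de-f07959376c84
ce06b19f-e4d8-11e4-b39d-f07959376c84
ce69a75d-e4d8-11e4-b39d-f07959376c84
b1f3389f-5382-11e6-885d-f07959376c84
cf2dd08e-e4d8-11e4-b39d-f07959376c84
cf91d6e8-e4d8-11e4-b39d-f07959376c84"

# The seven adaXp2 gptids that 'glabel status' still reports.
labeled="cf91d6e8-e4d8-11e4-b39d-f07959376c84
b1f3389f-5382-11e6-885d-f07959376c84
5663b940-bdde-11e5-9e00-f07959376c84
ec0f7827-2d2c-11e6-b1de-f07959376c84
ce06b19f-e4d8-11e4-b39d-f07959376c84
ce69a75d-e4d8-11e4-b39d-f07959376c84
cf2dd08e-e4d8-11e4-b39d-f07959376c84"

# Print every pool member that glabel no longer knows about.
missing=$(for id in $pool_members; do
  case "$labeled" in *"$id"*) ;; *) echo "$id" ;; esac
done)
echo "$missing"   # -> cd427285-e4d8-11e4-b39d-f07959376c84
```

Only the old cd427285 gptid comes back, and camcontrol only shows seven WD40EFRX drives, so the disk has dropped off the bus entirely rather than just losing its label.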
//edit: the GUI only shows the Replace option when the failed disk itself is selected.
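In case I end up doing the swap from the CLI instead of the GUI (which I gather is the supported path), this is roughly my plan, as a dry-run sketch only: the device name ada7 is an assumption for where the cold spare will show up, and the partition layout just mirrors the existing disks (2G swap + rest ZFS). Please correct me if the steps are off.

```shell
# Dry-run wrapper: with DRY_RUN=1 the commands are printed, not executed.
DRY_RUN=1
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "+ $*"    # show what would run
  else
    "$@"
  fi
}

# Partition the spare like the other pool disks (assumed device: ada7).
run gpart create -s gpt ada7
run gpart add -a 4k -s 2g -t freebsd-swap ada7
run gpart add -a 4k -t freebsd-zfs ada7

# Swap the new ZFS partition in for the UNAVAIL member, by its GUID
# from 'zpool status' above.
run zpool replace bhoot 10479856730608632472 ada7p2
```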