Hi,
I upgraded from 9.3 to 9.10 by creating a new USB ISO and then going through the installation/upgrade process. When I get into the GUI, it shows that my storage is degraded. I took a look at it from the CLI, and here's what I see:
Code:
[root@elysium] ~# zpool status -v
  pool: Lyceum_Volume
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: scrub in progress since Fri Mar 31 10:03:30 2017
        21.9G scanned out of 8.42T at 121M/s, 20h13m to go
        0 repaired, 0.25% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Lyceum_Volume                                   DEGRADED     0     0     0
          raidz2-0                                      DEGRADED     0     0     0
            gptid/662623c3-b553-11e3-9b2b-d850e64ec83b  ONLINE       0     0     0
            gptid/66c6668f-b553-11e3-9b2b-d850e64ec83b  ONLINE       0     0     0
            7655017483856963116                         UNAVAIL      0     0     0  was /dev/gptid/ab7c0ab6-f2d4-11e4-b2ee-d850e64ec83b
            gptid/6860bb9c-b553-11e3-9b2b-d850e64ec83b  ONLINE       0     0     0
            gptid/6900d3e8-b553-11e3-9b2b-d850e64ec83b  ONLINE       0     0     0
            5236701102348180652                         UNAVAIL      0     0     0  was /dev/gptid/699d0a61-b553-11e3-9b2b-d850e64ec83b

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          da6p2       ONLINE       0     0     0

errors: No known data errors

[root@elysium] ~# camcontrol devlist
<ATA WDC WD30EFRX-68A 0A80>  at scbus0 target 0 lun 0 (pass0,da0)
<ATA WDC WD30EFRX-68A 0A80>  at scbus0 target 1 lun 0 (pass1,da1)
<ATA WDC WD30EFRX-68A 0A80>  at scbus0 target 4 lun 0 (pass2,da2)
<ATA WDC WD30EFRX-68A 0A80>  at scbus0 target 5 lun 0 (pass3,da3)
<ATA WDC WD30EFRX-68A 0A80>  at scbus0 target 6 lun 0 (pass4,da4)
<ATA WDC WD30EFRX-68E 0A82>  at scbus0 target 7 lun 0 (pass5,da5)
<SanDisk Cruzer 1.27>        at scbus8 target 0 lun 0 (pass6,da6)

[root@elysium] ~# gpart show
=>        34  5860533101  da0  GPT  (2.7T)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128           7       - free -  (3.5K)

=>        34  5860533101  da1  GPT  (2.7T)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128           7       - free -  (3.5K)

=>        34  5860533101  da2  GPT  (2.7T)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128           7       - free -  (3.5K)

=>        34  5860533101  da3  GPT  (2.7T)
          34          94       - free -  (47K)
         128     4194304    1  freebsd-swap  (2.0G)
     4194432  5856338696    2  freebsd-zfs  (2.7T)
  5860533128           7       - free -  (3.5K)

=>        34  5860533101  da5  GPT  (2.7T)
          34           6       - free -  (3.0K)
          40        1024    1  bios-boot  (512K)
        1064  5860532064    2  freebsd-zfs  (2.7T)
  5860533128           7       - free -  (3.5K)

=>       34  15633341  da6  GPT  (7.5G)
         34      1024    1  bios-boot  (512K)
       1058         6       - free -  (3.0K)
       1064  15632304    2  freebsd-zfs  (7.5G)
   15633368         7       - free -  (3.5K)

[root@elysium] ~# glabel status
                                      Name  Status  Components
gptid/662623c3-b553-11e3-9b2b-d850e64ec83b     N/A  da0p2
gptid/66c6668f-b553-11e3-9b2b-d850e64ec83b     N/A  da1p2
gptid/6860bb9c-b553-11e3-9b2b-d850e64ec83b     N/A  da2p2
gptid/6900d3e8-b553-11e3-9b2b-d850e64ec83b     N/A  da3p2
gptid/a8a756e5-1504-11e7-962d-d850e64ec83b     N/A  da5p1
gptid/a8b41233-1504-11e7-962d-d850e64ec83b     N/A  da5p2
gptid/1056f1ff-161f-11e7-9a38-d850e64ec83b     N/A  da6p1

[root@elysium] ~# zpool online Lyceum_Volume /dev/gptid/ab7c0ab6-f2d4-11e4-b2ee-d850e64ec83b
warning: device '/dev/gptid/ab7c0ab6-f2d4-11e4-b2ee-d850e64ec83b' onlined, but remains in faulted state
use 'zpool replace' to replace devices that are no longer present
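One way I tried to narrow this down: cross-check the two gptids that zpool reports as missing against the labels glabel currently sees. This is just a minimal sketch — it embeds the glabel output above as sample data (on the live box you'd use `glabel status` directly), and the pool name and ids come straight from the output in this post. If a label is gone while the disk still shows up in `camcontrol devlist`, the disk may have been repartitioned rather than dead.

```shell
#!/bin/sh
# Sketch: check whether the gptids zpool reports as missing still exist
# as GEOM labels. The sample data below is the `glabel status` output
# captured above; on the live system, substitute $(glabel status).

# gptids from the "was /dev/gptid/..." lines in `zpool status -v`
missing_ids="ab7c0ab6-f2d4-11e4-b2ee-d850e64ec83b
699d0a61-b553-11e3-9b2b-d850e64ec83b"

# labels glabel currently sees (live: current_labels=$(glabel status))
current_labels="gptid/662623c3-b553-11e3-9b2b-d850e64ec83b N/A da0p2
gptid/66c6668f-b553-11e3-9b2b-d850e64ec83b N/A da1p2
gptid/6860bb9c-b553-11e3-9b2b-d850e64ec83b N/A da2p2
gptid/6900d3e8-b553-11e3-9b2b-d850e64ec83b N/A da3p2
gptid/a8a756e5-1504-11e7-962d-d850e64ec83b N/A da5p1
gptid/a8b41233-1504-11e7-962d-d850e64ec83b N/A da5p2
gptid/1056f1ff-161f-11e7-9a38-d850e64ec83b N/A da6p1"

check_labels() {
  for id in $missing_ids; do
    if printf '%s\n' "$current_labels" | grep -q "$id"; then
      # label still exists -> zpool online has a chance of working
      echo "$id: label present -> try 'zpool online Lyceum_Volume gptid/$id'"
    else
      # label no longer exists -> matches the 'zpool replace' warning above
      echo "$id: label gone -> disk absent or repartitioned"
    fi
  done
}

check_labels
```

For both ids this reports the label as gone, which lines up with the `zpool online` warning at the end of the output above.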
Seems unlikely that I lost 2 drives after an upgrade. Any ideas on how or if I can bring them online?
thanks.