xtracold
Dabbler
Hi there,
So I updated via the GUI to v8.2 last night. It all appeared to go well, but then I noticed the alert indicator was yellow, so I clicked on it and the message is:
"WARNING: The volume NAS_1_2 (ZFS) status is UNKNOWN: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state.Attach the missing device and online it using 'zpool online'."
My system is two 1TB drives configured to mirror each other, running inside a VM on a Linux Mint workstation. They have run for months quite happily until now...
Rather than panic (which I am on the edge of doing), I browsed around the GUI to see what information I could gather on the problem.
The volume status is listed as follows:
[screenshot of the GUI Volume Status page: ada2p2 shows ONLINE, its mirror partner shows UNAVAIL]
This would say to me that ada2p2 is working fine and the problem is with the other disk. I struggle to believe the coincidence of it failing during the upgrade, but I'll run with that theory for now.
From searching around the web, it seems "zpool status" and "gpart show" would help guide me to the problem disk and a path to resolution.
Here is the output of zpool status
Code:
[root@freenas ~]# zpool status
  pool: NAS_1_2
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-2Q
 scrub: scrub in progress for 0h33m, 40.98% done, 0h48m to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        NAS_1_2                                         DEGRADED     0     0     0
          mirror                                        DEGRADED     0     0     0
            5644546525308100890                         UNAVAIL      0     0     0  was /dev/gptid/5f444b11-32c0-11e1-abe8-080027430ca7
            gptid/5fc15fbc-32c0-11e1-abe8-080027430ca7  ONLINE       0     0     0
This seems consistent with the GUI information.
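From what I've read, the gptid labels that zpool reports can be mapped back to device names with glabel. If I've understood the FreeBSD handbook correctly, this should show which adaX disk carries the surviving member, and whether the missing gptid is visible at all:
Code:
# list label-to-device mappings; gptid/5fc15fbc-... should appear
# against one of the adaX partitions if that disk is healthy
[root@freenas ~]# glabel status
Is that the right way to tie a gptid back to a physical disk?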
Here is the output from gpart show
Code:
[root@freenas ~]# gpart show
=>      63  6291369  ada0  MBR  (3.0G)
        63  1930257     1  freebsd  (943M)
   1930320       63        - free -  (32K)
   1930383  1930257     2  freebsd  [active]  (943M)
   3860640     3024     3  freebsd  (1.5M)
   3863664    41328     4  freebsd  (20M)
   3904992  2386440        - free -  (1.1G)

=>        34  1953519935  ada2  GPT  (932G) [CORRUPT]
          34          94        - free -  (47K)
         128     4194304     1  freebsd-swap  (2.0G)
     4194432  1949325537     2  freebsd-zfs  (930G)

=>      0  1930257  ada0s1  BSD  (943M)
        0       16          - free -  (8.0K)
       16  1930241       1  !0  (943M)

=>      0  1930257  ada0s2  BSD  (943M)
        0       16          - free -  (8.0K)
       16  1930241       1  !0  (943M)
I am not fully understanding what this is telling me. Is it saying that ada2 is corrupt and therefore the problem disk? Is that not in conflict with the zpool information?
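Searching on that [CORRUPT] flag, it seems to mean one of the two copies of the GPT (usually the backup copy at the end of the disk) is damaged or stale, and that gpart can rebuild it from the intact copy. Assuming I've read the gpart(8) man page correctly, the fix would be something like:
Code:
# rebuild the damaged copy of the partition table on ada2
# from the intact copy (my reading of gpart(8), untested here)
[root@freenas ~]# gpart recover ada2
Though I'd want to be certain ada2 really is the healthy member before touching its partition table.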
Any help or guidance you can give me is much appreciated. If I can locate the real problem disk, I would be open to removing it from the pool and trying to reinstate it. I assume that if it were recognised, I would be able to format it and start cleanly, with the mirror rebuilding against it.
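From the zpool man page, my (possibly naive) plan would be something like the below, where NEW-GPTID is just a placeholder for whatever label the replacement disk ends up with, not a real label on my system:
Code:
# take the missing member out of service, using the numeric guid
# that zpool status printed in place of its device name
[root@freenas ~]# zpool offline NAS_1_2 5644546525308100890
# then, once a working disk is attached and partitioned, resilver
# the mirror onto it (NEW-GPTID is a placeholder)
[root@freenas ~]# zpool replace NAS_1_2 5644546525308100890 /dev/gptid/NEW-GPTID
Does that sequence sound right, or is there a safer way to do it via the GUI?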
Thanks again for any help; as you can likely tell, ZFS pools etc. are all quite new to me.
Jamie