jyavenard | Patron | Joined: Oct 16, 2013 | Messages: 361
This is the second time this has happened now.
This morning I found my pool degraded.
# zpool list
NAME    SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH    ALTROOT
pool    21.8T  12.1T  9.68T  55%  1.00x  DEGRADED  /mnt
pool2   21.8T  8.23T  13.5T  37%  1.00x  ONLINE    /mnt
a closer look:
# zpool status
  pool: pool
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-2Q
  scan: none requested
config:

        NAME                                                STATE     READ WRITE CKSUM
        pool                                                DEGRADED     0     0     0
          raidz2-0                                          DEGRADED     0     0     0
            gptid/8a8895c4-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            gptid/8b7350eb-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            gptid/8c589b7d-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            gptid/8d3e06ab-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            317232663093945064                              UNAVAIL      0     0     0  was /dev/gptid/8e26c1d2-5f18-11e3-a69e-002590875a70.eli
            gptid/8f0c4073-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
I had been playing with creating a pool at the time, so this is likely related to my report in https://bugs.freenas.org/issues/3628 (i.e. don't try to select a disk that is already in use when using auto-import).
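Until that bug is fixed, one defensive check before touching a device is simply to see whether it already appears in the imported pool configuration. A minimal sketch, using the status text from this post as sample input (on a live system you would use `status=$(zpool status)` instead):

```shell
# Sample input: one line of the config tree pasted above. On a live box:
#   status=$(zpool status)
status='            gptid/8e26c1d2-5f18-11e3-a69e-002590875a70.eli  ONLINE'

# Candidate device we are thinking of reusing.
dev='gptid/8e26c1d2-5f18-11e3-a69e-002590875a70.eli'

# -F matches the name literally (no regex interpretation of the dashes/dots).
if printf '%s\n' "$status" | grep -qF "$dev"; then
    echo "$dev is part of an imported pool; do not reuse it"
fi
```

This is only a sanity check against pools that are currently imported; exported pools still leave labels on disk that it won't see.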
The weird thing however, is that I just made the disk online once again:
# zpool online pool /dev/gptid/8e26c1d2-5f18-11e3-a69e-002590875a70.eli
# zpool status
  pool: pool
 state: ONLINE
  scan: none requested
config:

        NAME                                                STATE     READ WRITE CKSUM
        pool                                                ONLINE       0     0     0
          raidz2-0                                          ONLINE       0     0     0
            gptid/8a8895c4-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            gptid/8b7350eb-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            gptid/8c589b7d-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            gptid/8d3e06ab-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            gptid/8e26c1d2-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0
            gptid/8f0c4073-5f18-11e3-a69e-002590875a70.eli  ONLINE       0     0     0

errors: No known data errors
And that was it... There was no extensive disk activity and no visible resilvering; the pool went back to ONLINE status just like that.
I know ZFS is very efficient at resilvering, but *that* efficient?
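It can be: ZFS keeps a dirty time log (DTL) for a device that drops out, so a resilver after a short outage copies only the blocks written while the device was missing, not the whole disk. What the last resilver actually did is recorded in the scan line of `zpool status`. A minimal parsing sketch, using status text from this post as sample input (on a live system: `status=$(zpool status pool)`):

```shell
# Sample input: abbreviated `zpool status` output. On a live box:
#   status=$(zpool status pool)
status='  pool: pool
 state: ONLINE
  scan: resilvered 614M in 0h0m with 0 errors on Mon Dec 9 09:01:23 2013'

# Strip everything up to and including "scan:" and print the remainder,
# i.e. the summary of the last scrub/resilver.
scan=$(printf '%s\n' "$status" | sed -n 's/^[[:space:]]*scan:[[:space:]]*//p')
echo "$scan"
```

A scan line reading "none requested" means no scrub or resilver has run since the pool was imported, which is why a completed resilver can seem to appear out of nowhere later.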
Edit: ohhh... I did resilver:
resilvered 614M in 0h0m with 0 errors on Mon Dec 9 09:01:23 2013
and it only showed up after a reboot.
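Since a degraded pool is easy to miss until you happen to look, a scripted health check catches this sooner. A minimal cron-friendly sketch, assuming `zpool list -H -o name,health` (the `-H` flag prints script-friendly output with no header); the sample data below mirrors the output pasted above, since no live pool is available here:

```shell
# Sample input mirroring the `zpool list` output above. The real -H output
# is tab-separated, but awk splits on any whitespace, so spaces work here.
# On a live box: status=$(zpool list -H -o name,health)
status='pool DEGRADED
pool2 ONLINE'

# Report every pool whose health column is anything other than ONLINE.
unhealthy=$(printf '%s\n' "$status" | awk '$2 != "ONLINE" {print $1}')
if [ -n "$unhealthy" ]; then
    echo "WARNING: unhealthy pool(s): $unhealthy"
fi
```

Run from cron (or a FreeNAS cron job) and mail the output, and a device dropping out at night shows up in your inbox rather than in a morning `zpool list`.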