MrHands
Dabbler
Joined: Jan 7, 2016
Messages: 18
I just received the daily report from my server, and it contains a weird error. The drives are brand new; I tested them and they came back fine, and everything seems to be working, but there's a strange warning in the output below, even though it says at the end that there are no known data errors.
I have three 4 TB HGST drives, and the pool shows a size of 10.9 TB with 3.65 TB used and 7.23 TB free, so that all looks right.
I just wanted some advice on what this is trying to tell me. Thanks!
Checking status of zfs pools:

NAME          SIZE   ALLOC  FREE   EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  29G    1.66G  27.3G  -         -     5%   1.00x  ONLINE  -
sithvol       10.9T  3.65T  7.23T  -         18%   33%  1.00x  ONLINE  /mnt

  pool: sithvol
 state: ONLINE
status: One or more devices has experienced an unrecoverable error. An
        attempt was made to correct the error. Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 112K in 2h52m with 0 errors on Sat Oct 15 08:53:12 2016
config:

        NAME                                            STATE     READ WRITE CKSUM
        sithvol                                         ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/5f495055-aa2e-11e5-aefa-d05099c06c38  ONLINE       0     0    21
            gptid/60480d62-aa2e-11e5-aefa-d05099c06c38  ONLINE       0     0     0
            gptid/614281bc-aa2e-11e5-aefa-d05099c06c38  ONLINE       0     0     0

errors: No known data errors

-- End of daily output --