zpool status error...


mark1st (Cadet, joined May 15, 2012, messages: 4)
Hi. I need some help; maybe someone can help...
I use FreeNAS 8.02 beta3 with 8 WD RE4 drives (8x2TB) and a UFS filesystem.
Yesterday morning I found an alert:

"WARNING: The volume Volume1 (ZFS) status is UNKNOWN: One or more devices has experienced an error resulting in data corruption. Applications may be affected.Restore the file in question if possible. Otherwise restore the entire pool from backup."

Looking closer, trying to see what happened, I tried "zpool status":

[root@freenas] ~# zpool status -v
  pool: Volume1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

        NAME       STATE     READ WRITE CKSUM
        Volume1    ONLINE       0     0     0
          da0p2    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        Volume1:<0x5da8>


I don't know what to do (and of course I don't have any backup to restore from :( ). Any help will be appreciated.

best regards,
mark
 

ProtoSD (MVP, joined Jul 1, 2011, messages: 3,348)
Hi Mark,

I'm a little confused: you say you are using UFS, but the error is ZFS. It sounds like a disk could be failing. You could start looking with "smartctl -a /dev/ada***", replacing ada*** with the correct device name for EACH disk, one at a time.

You can do "camcontrol devlist" to get the device names for your disks.

It's strange that zpool only lists one disk. I can't tell you exactly what to look for with smartctl, but if you compare the results across all of your disks, you should notice something different on whichever disk(s) are failing.
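
A quick loop like this would do it (untested sketch; substitute whatever device names camcontrol reports on your box, and the grep is just to pull out the usual suspect attributes):

camcontrol devlist                        # find your disk device names (ada0, da0, ...)
for d in /dev/ada0 /dev/ada1 /dev/ada2 /dev/ada3; do
    echo "=== $d ==="                     # label each disk's output
    smartctl -a $d | grep -iE 'reallocated|pending|uncorrect|crc|overall-health'
done

Non-zero Reallocated/Pending/Uncorrectable counts, or a CRC error count that keeps climbing, are the classic signs of a dying disk or a bad cable.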

-- Proto
 

mark1st

Cadet
Joined
May 15, 2012
Messages
4
Hi. Thanks for the answer.
I already checked all HDDs. Nothing wrong (according to the SMART info).
I don't know why FreeNAS tells me about a ZFS volume, since I initialized it as UFS!
Maybe something went wrong after updating to beta3 from beta2.
Is there any way to check/correct the volume with some utility like fsck or something else?
In the meantime I replaced HDDs and cables, but the error remains.

best regards,
mark
 

ProtoSD (MVP, joined Jul 1, 2011, messages: 3,348)
Well, for UFS you would use fsck, but for ZFS you would do a "scrub" with "zpool scrub Volume1".
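
For example (the scrub runs in the background, so you check back on it with zpool status; the "scrub:" line shows progress and, later, the result):

zpool scrub Volume1        # walks every allocated block and verifies its checksum
zpool status -v Volume1    # watch progress, then see whether errors remain

If the damaged object has since been deleted (an entry like Volume1:<0x5da8> with no filename often means exactly that), a completed scrub will usually clear it from the error list.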

Could you post the output of "gpart show"?
 

mark1st (Cadet, joined May 15, 2012, messages: 4)
Hi. Thanks for the answer.
Here is the gpart result.
I don't remember converting the UFS partition to ZFS!
:(

[root@freenas] ~# gpart show
=>          34  46874990525  da0  GPT  (22T)
            34           94       - free -  (47K)
           128      4194304    1  freebsd-swap  (2.0G)
       4194432  46870796127    2  freebsd-zfs  (22T)

=>       63  7863345  ada0  MBR  (3.7G)
         63  1930257     1  freebsd  [active]  (943M)
    1930320       63        - free -  (32K)
    1930383  1930257     2  freebsd  (943M)
    3860640     3024     3  freebsd  (1.5M)
    3863664    41328     4  freebsd  (20M)
    3904992  3958416        - free -  (1.9G)

=>       0  1930257  ada0s1  BSD  (943M)
         0       16          - free -  (8.0K)
        16  1930241       1  !0  (943M)

=>       0  1930257  ada0s2  BSD  (943M)
         0       16          - free -  (8.0K)
        16  1930241       1  !0  (943M)


best regards,
mark
 

ProtoSD (MVP, joined Jul 1, 2011, messages: 3,348)
Are you using hardware RAID? It looks like your entire pool is one large 22TB disk. There isn't any way that I know of to convert UFS to ZFS.
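
If you're not sure, the controller usually gives itself away in the device listing; a hardware RAID volume shows up as a single device whose vendor/model string names the controller rather than a bare WD drive:

camcontrol devlist                          # 8 real disks would show 8 entries here
gpart list da0 | grep -E 'Name|Mediasize'   # confirms the single 22T provider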

Maybe someone else has some ideas while I sleep ;)
 

mark1st (Cadet, joined May 15, 2012, messages: 4)
Hi. I've checked the volume with "zpool scrub...".
After 4 hours no errors were found, and that annoying error message disappeared.
Before the scrub, the checksum counters on Volume1 were also OK. Strange.
Everything seems to be OK now.


[root@freenas] ~# zpool status -v
  pool: Volume1
 state: ONLINE
 scrub: scrub completed after 4h20m with 0 errors on Wed May 16 17:02:58 2012
config:

        NAME       STATE     READ WRITE CKSUM
        Volume1    ONLINE       0     0     0
          da0p2    ONLINE       0     0     0

errors: No known data errors


Thanks for your help!
best regards,
mark
 