hi,
I am using FreeNAS-8.3.0-RELEASE-p1-x64 (r12825). My pool started showing SMART errors/warnings on two disks at the same time. I bought two new disks to replace them and followed the procedure in the manual.
After resilvering the first disk I got 'Permanent errors have been detected' and couldn't offline the second disk, so I powered down, removed it, added a new disk in its place, and resilvered again.
My original errors have not disappeared despite running zpool scrub twice.
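For reference, this is roughly the command-line equivalent of the GUI steps I followed (the gptid values below are placeholders, not my real device ids):

```shell
# Rough sketch of the disk-replacement procedure (placeholder device ids).
POOL=asgard

if command -v zpool >/dev/null 2>&1; then
    # Take the failing disk offline before pulling it.
    zpool offline "$POOL" gptid/OLD-DISK-ID

    # ...physically swap the disk, then tell ZFS to resilver onto it...
    zpool replace "$POOL" gptid/OLD-DISK-ID gptid/NEW-DISK-ID

    # Watch the resilver progress.
    zpool status "$POOL"
fi
```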
Code:
zpool status -v asgard
  pool: asgard
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
  scan: scrub repaired 0 in 6h56m with 1 errors on Sat Nov 29 02:49:02 2014
config:

        NAME                                              STATE     READ WRITE CKSUM
        asgard                                            DEGRADED     0     0     1
          raidz1-0                                        DEGRADED     0     0     2
            gptid/5ded79bf-74e1-11e4-ac61-441ea13caae6    ONLINE       0     0     0
            replacing-1                                   DEGRADED     0     0     0
              7511900252865497515                         UNAVAIL      0     0     0  was /dev/gptid/d803cfcd-cb8b-11e1-958d-441ea13caae6
              gptid/77e005b2-7672-11e4-a8d8-441ea13caae6  ONLINE       0     0     0
            gptid/42edf68f-7d53-11e3-a71d-441ea13caae6    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        asgard/data:<0xcf34e>
As I understand it, I need to delete the corrupted data, but the block has become detached from its original reference, so the error no longer shows a full path, only the hex object id.
I googled around and found the following on using zdb to get more information on that hex reference.
Code:
zdb -ddddd asgard/data 0xcf34e
Dataset asgard/data [ZPL], ID 32, cr_txg 22, 3.78T, 1340863 objects, rootbp DVA[0]=<0:6d47a9ea000:2000> DVA[1]=<0:38ad762000:2000> [L0 DMU objset] fletcher4 lzjb LE contiguous unique double size=800L/200P birth=12080704L/12080704P fill=1340863 cksum=1b74ef303c:8defe7e5b0f:18d38ef8da187:3167b0ed2a5fc5

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
    848718    1    16K    512  5.50K    512  100.00  ZFS plain file
                                        264   bonus  ZFS znode
        dnode flags: USED_BYTES USERUSED_ACCOUNTED
        dnode maxblkid: 0
        path    ???<object#848718>
        uid     1001
        gid     20
        atime   Sun Jul  6 14:01:12 2014
        mtime   Sun Jul  6 13:00:53 2014
        ctime   Sun Jul  6 13:00:53 2014
        crtime  Sun Jul  6 13:00:52 2014
        gen     9665424
        mode    100600
        size    56
        parent  848717
        links   1
        pflags  40800000005
        xattr   0
        rdev    0x0000000000000000
Indirect blocks:
               0 L0 0:38cfc3d6000:2000 200L/200P F=1 B=9665424/9665424

                segment [0000000000000000, 0000000000000200) size   512
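From what I can tell, the <0xcf34e> in the error report and object 848718 in the zdb output are the same number in hex and decimal, which is how I matched them up. ZFS object numbers double as inode numbers, so this is the sketch I used to try to map the object back to a path (the mountpoint /mnt/asgard/data is an assumption about my layout):

```shell
# The <0xcf34e> in the zpool error report is a dataset object number in hex.
# In decimal it matches the object id that zdb printed:
printf '%d\n' 0xcf34e    # prints 848718

# If the object still had a directory entry, find(1) could map the inode
# number back to a path; -x stops find from crossing into other filesystems.
# /mnt/asgard/data is an assumption about my mountpoint.
if [ -d /mnt/asgard/data ]; then
    find -x /mnt/asgard/data -inum 848718
fi
```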
I managed to dump the block's contents using:
Code:
zdb -R asgard/data 0:38cfc3d6000:2000:r
Found vdev type: raidz
??????????????????????????????????
It looks like a plain binary file of some sort.
How can I get rid of this single error and restore the pool to health?
I don't mind losing a few files, since I can restore them from backup, but ideally I don't want to recreate the pool and restore over 3 TB. Running zpool scrub twice hasn't got rid of it, and I'm very new to ZFS. Can anyone help?
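In case it helps anyone answering: the approach I'm considering is to delete the orphaned object's file if it can be found, then clear and re-scrub. Whether zpool clear plus a clean scrub actually drops the permanent-error entry once the object is gone is my assumption, so please correct me:

```shell
# Hypothetical recovery sketch; mountpoint and outcome are assumptions.
POOL=asgard
MNT=/mnt/asgard/data

if command -v zpool >/dev/null 2>&1; then
    # Delete the orphaned object's file if it still resolves to a path
    # (848718 is the decimal form of 0xcf34e from the error report).
    find -x "$MNT" -inum 848718 -delete

    # Reset the error counters, then scrub so ZFS re-verifies every block.
    zpool clear "$POOL"
    zpool scrub "$POOL"

    # Afterwards, check whether the permanent-error entry is gone.
    zpool status -v "$POOL"
fi
```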
thanks
Jag