Unhealthy Pool - How to fix

vh202

Cadet
Joined
Oct 12, 2022
Messages
2
Hi all,

I have a TrueNAS SCALE system as per below:
  • Asrock Z77 Pro4-M
  • Intel(R) Core(TM) i5-3550
  • 16GB RAM
  • 2 ZFS Arrays
  • 1 at 4x2TB and 1 mirrored 2X16TB
The mirrored array is currently showing as unhealthy. zpool status shows the following, and I am unsure what to do next:

  pool: Mirror1
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 0B in 00:07:05 with 15 errors on Thu Oct 13 16:44:14 2022
config:

        NAME                                      STATE     READ WRITE CKSUM
        Mirror1                                   ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            4bd33165-4513-4166-a5ac-dd3efddc946d  ONLINE       4     0  128K
            8cb65a36-33ef-4fd8-8919-1feb12a6a022  ONLINE       1     2  128K

errors: Permanent errors have been detected in the following files:

        Mirror1/ix-applications/docker/2590635008c388fcc443cf2535076bfc8206baf27fd5e2dbadc2847eb6a477b1:<0x0>
        Mirror1/ix-applications/docker:<0x633>
        Mirror1/ix-applications/docker:<0x634>
        /mnt/Mirror1/ix-applications/docker/containers/e879fa17d1b13ffaca155733709516793b95fda5f9ba3089805400dce6b89d9f
        /mnt/Mirror1/ix-applications/docker/containerd/daemon/io.containerd.runtime.v2.task/moby
        /var/lib/kubelet/pods/1a1e36d1-61c4-4b72-be06-933bc64d43c3/containers/csi-snapshotter
        /mnt/Mirror1/ix-applications/catalogs/github_com_truecharts_catalog_main/.git/objects/pack
        <0xffffffffffffffff>:<0x596>

  pool: Pool1
 state: ONLINE
  scan: scrub repaired 0B in 13:59:14 with 0 errors on Sun Sep 18 14:00:49 2022
config:

        NAME                                      STATE     READ WRITE CKSUM
        Pool1                                     ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            25c4e5a2-a13e-4fe3-bf61-6c5cf1ee0eb8  ONLINE       0     0     0
            8ba03677-7cd3-43b8-9664-2b8db64e697a  ONLINE       0     0     0
            a9f61bf1-6e53-4628-a696-0953019ceb97  ONLINE       0     0     0

errors: No known data errors

  pool: boot-pool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:01:25 with 0 errors on Wed Oct 12 03:46:26 2022
config:

        NAME        STATE     READ WRITE CKSUM
        boot-pool   ONLINE       0     0     0
          sdf3      ONLINE       0     0     0

errors: No known data errors
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
2 ZFS Arrays

ZFS doesn't have "arrays". You presumably mean "pools". While this might seem a petty correction, please note that there's a wide range of skill levels present on these forums, and also a large number of non-native English speakers. Communicating clearly is a necessary part of making sure that people understand what you're saying.


I am unsure what to do next

In one of those weird things where someone actually thought about this problem when designing a complicated thing, ZFS is actually telling you what to do.

Restore the file in question if possible. Otherwise restore the entire pool from backup.

It also tells you what's been impacted, which in this case appears to be Docker-related stuff. You could delete and then reinstall the Docker stuff you've installed, which would count as "Restore the file in question". But you might also be able to find a way to manually recover just those individual files from wherever they came from.
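Roughly, that recovery flow looks something like the sketch below (run from a root shell; this assumes you've decided the flagged ix-applications/Docker files are disposable and will be recreated when you reinstall the apps, which is your call to make, not something the output guarantees):

        # 1. List the files flagged with permanent errors
        zpool status -v Mirror1

        # 2. Delete, or restore from backup, each flagged path
        #    (for the Docker/ix-applications entries here that generally
        #    means removing and reinstalling the affected apps)

        # 3. Clear the error counters on the pool
        zpool clear Mirror1

        # 4. Re-run a scrub and confirm the error list is gone
        zpool scrub Mirror1
        zpool status -v Mirror1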
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You should also list the storage drives' vendor & model, when they were bought, and which are used in which pools.

There are known gotchas with certain drives.
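If it helps, a quick way to gather that (a generic Linux/SCALE sketch, not TrueNAS-specific tooling; device names will differ on your system) is to match the partition UUIDs shown in zpool status against lsblk, and pull the full identity with smartctl:

        # Show drive model/serial alongside the partition UUIDs zpool status uses
        lsblk -o NAME,MODEL,SERIAL,SIZE,PARTUUID

        # Full SMART identity for a given drive (requires smartmontools)
        smartctl -i /dev/sda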
 

vh202

Cadet
Joined
Oct 12, 2022
Messages
2
You should also list the storage drives' vendor & model, when they were bought, and which are used in which pools.

There are known gotchas with certain drives.
Thanks.

The 3-drive pool has 3 x 4TB Seagate NAS drives: 2 x ST4000VN000 and 1 x ST4000VN008. I can't recall when they were bought.

The mirrored pool has 2 x 16TB shucked WD drives (WD160EDGZ). These were bought about 6 months ago.
 