Fix permanent errors in files

Status
Not open for further replies.

twillkickers

Dabbler
Joined
Jun 3, 2017
Messages
19
I recently had an issue where my parity RAID array was degraded. After I received the degraded warning, one of my RAID drives would not show up. After opening the case and re-seating the SATA cables, the second drive showed up again and FreeNAS automatically began resilvering the array.

However, when I run zpool status -v, my terminal prints out the following:
Code:
errors: Permanent errors have been detected in the following files:

		/var/db/system/cores/devd.core
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-1/cpu-nice.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-2/cpu-nice.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-3/cpu-nice.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-1/cpu-interrupt.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-3/cpu-interrupt.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-0/cpu-idle.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/interface-em0/if_octets.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-1/cpu-idle.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-2/cpu-idle.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-3/cpu-idle.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/aggregation-cpu-sum/cpu-system.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/aggregation-cpu-sum/cpu-user.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/df-mnt-FileServer-Storage/df_complex-free.r
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/disk-ada2/disk_octets.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/df-mnt-FileServer-jails/df_complex-free.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/df-mnt-FileServer-jails-.warden-template-pl
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/interface-bridge0/if_octets.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/zfs_arc/cache_operation-allocated.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/df-mnt-FileServer-jails-mineos_1/df_complex
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/zfs_arc/cache_result-prefetch_metadata-hit.
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/df-mnt-FileServer-jails-plexmediaserver_1/d
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_ops_rwd-ada1.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_bw-ada1.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_latency-ada1.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_ops_rwd-ada2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/interface-epair1a/if_packets.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_bw-ada2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_latency-ada2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/df-mnt-FileServer-jails-.warden-template-st
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_ops_rwd-ada2p2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_bw-ada2p2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_latency-ada2p2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_ops_rwd-ada1p2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_bw-ada1p2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/geom_stat/geom_latency-ada1p2.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/df-mnt-FileServer-jails-.warden-template-pl
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/zfs_arc_v2/gauge_arcstats_raw_prefetch-pref
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/zfs_arc_v2/gauge_arcstats_raw_mru-mfu_hits.
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/zfs_arc/cache_result-mfu-hit.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/interface-epair3a/if_octets.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/zfs_arc_v2/arcstat_ratio_metadata-demand_me
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/zfs_arc_v2/arcstat_ratio_metadata-demand_me
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/df-mnt-FileServer/df_complex-free.rrd
		/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/zfs_arc/cache_ratio-arc.rrd

Is there any way to repair some or most of these files? Or do I need to back up my files and rebuild the array from scratch? Thanks for your help!

FYI, my setup is as follows:

CPU: Intel Xeon E3-1225 v5 3.3G Processor
RAM: 8 GB
Raid Setup: 2 x 5TB WD Black Hard Disks in Parity
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Raid Setup: 2 x 5TB WD Black Hard Disks in Parity
If this is ZFS, it is a mirror. Parity RAID would be RAID-z2 with 4 or more drives.
This output looks like it is from the boot pool, not your storage.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
This output looks like it is from the boot pool, not your storage.
Why would you say that? All those files are on the .system dataset, which by default is stored on the first data pool.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
All those files are on the .system dataset, which by default is stored on the first data pool.
That explains that, I moved my system dataset to the boot drive a while back and I didn't think about it being a non-standard configuration.
 

twillkickers

Dabbler
Joined
Jun 3, 2017
Messages
19
If this is ZFS, it is a mirror. Parity RAID would be RAID-z2 with 4 or more drives.
This output looks like it is from the boot pool, not your storage.
Sorry, my terminology is probably a bit off. I'm a bit new to the whole NAS thing!

Is there any way to repair these system dataset files?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
If you go to the 'System' tab and then to the 'Update' tab under it, you could try clicking the button labeled 'Verify Install'; it might detect and correct the corrupted files. Here is an illustration of the page you are looking for:
[Screenshot: the System → Update page]
This is the link to the manual:
http://doc.freenas.org/11/system.html#update
 

twillkickers

Dabbler
Joined
Jun 3, 2017
Messages
19
If you go to the 'System' tab and then to the 'Update' tab under it, you could try clicking the button labeled 'Verify Install'; it might detect and correct the corrupted files. Here is an illustration of the page you are looking for:
This is the link to the manual:
http://doc.freenas.org/11/system.html#update
I've run Verify Install multiple times, but it only ever detects one file; each run flags that same file without repairing it. Is there any way to do an install repair?
[Screenshot: Verify Install output reporting an inconsistency in /boot/loader.conf]
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
There are two very different things going on here. The first are corrupted files identified by ZFS. These are all RRD log files for reporting, and can be ignored. Or deleted, or whatever. They will be rewritten by new versions as new performance data is collected.
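If you want to clear them out by hand, here is a minimal dry-run sketch. It echoes each command instead of running it (remove the DRYRUN indirection on the real system); the paths are the ones from the zpool status -v output earlier in the thread, and your rrd directory name will differ:

```shell
# Dry-run sketch of the cleanup described above: DRYRUN="echo" prints each
# command instead of executing it. Core dumps and .rrd reporting files are
# regenerated automatically, so deleting them is safe.
DRYRUN="echo"

$DRYRUN rm /var/db/system/cores/devd.core
# ...repeat for each corrupted .rrd file listed by zpool status -v, e.g.:
$DRYRUN rm /var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-1/cpu-nice.rrd
```

Note that zpool status will keep reporting the errors (by object number) until a scrub confirms the damaged blocks are gone.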

The "inconsistency" detected in /boot/loader.conf could be due to a user-set tunable. This does not mean the file is corrupt, just that it is different from the standard install. Has a user-set tunable been added?
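One quick way to answer that is to list the active (non-comment, non-blank) lines of the file. Since /boot/loader.conf only exists on the NAS itself, this sketch demonstrates the check on a temporary sample file; on FreeNAS you would point CONF at /boot/loader.conf, and the tunable shown here is purely illustrative:

```shell
# Build a sample config file standing in for /boot/loader.conf.
CONF=$(mktemp)
printf '# comment line\nkern.geom.label.disk_ident.enable="0"\n\n' > "$CONF"

# List the active settings; anything here that the stock install did not
# ship would explain the Verify Install mismatch.
TUNABLES=$(grep -Ev '^[[:space:]]*(#|$)' "$CONF")
echo "$TUNABLES"
rm -f "$CONF"
```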
 

twillkickers

Dabbler
Joined
Jun 3, 2017
Messages
19
There are two very different things going on here. The first are corrupted files identified by ZFS. These are all RRD log files for reporting, and can be ignored. Or deleted, or whatever. They will be rewritten by new versions as new performance data is collected.

The "inconsistency" detected in /boot/loader.conf could be due to a user-set tunable. This does not mean the file is corrupt, just that it is different from the standard install. Has a user-set tunable been added?

wblock, thanks for your reply!

As to your first comment, I started deleting files from the corrupted-files list using the rm command in the terminal. As I delete the files, they do not disappear from the list. Instead, this happens...

Previously the first two lines showed up as follows:
Code:
 
/var/db/system/cores/devd.core
/var/db/system/rrd-5d784362f850449eb03d556b65045a45/freenas.local/cpu-1/cpu-nice.rrd


After using rm, the lines show up like this:
Code:
FileServer/.system/cores:<0xb>
FileServer/.system/rrd-5d784362f850449eb03d556b65045a45:<0x106>


Am I deleting the old files correctly? What does the change in the output mean?

As to your second question, I am not sure what user-set tunables are, so I do not believe I have set one. In addition, my web interface claims there are no user-set tunables, see below:
[Screenshot: the Tunables page showing no user-set tunables]
 

rs225

Guru
Joined
Jun 28, 2014
Messages
878
That looks right for the removed files. Once a corrupted file is deleted, ZFS can no longer resolve a path for the damaged blocks, so zpool status falls back to showing the dataset name and the file's internal object number (the hex value). The entries should disappear from the list entirely after the next scrub.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Is there a way to do an unscheduled scrub in the command line?
You don't need to do it at the command line; you can do it in the GUI. But from the command line, it's just zpool scrub poolname.
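As a sketch, assuming the pool from this thread is named FileServer (shown dry-run with echo; clear DRYRUN on the real system to actually run the commands):

```shell
# Start an immediate scrub, then check its progress and the error list.
POOL="FileServer"   # pool name assumed from this thread; substitute your own
DRYRUN="echo"       # prints the commands; set to "" to execute them

$DRYRUN zpool scrub "$POOL"
$DRYRUN zpool status -v "$POOL"
```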
 

twillkickers

Dabbler
Joined
Jun 3, 2017
Messages
19
Thanks for the help everyone! When I removed the files with errors and performed a scrub, everything cleared up and I stopped seeing errors. I appreciate everyone's help!
 