scrub job - error in crontab entry?


tingo (Contributor)
In FreeNAS 8.2.0, we got scheduled scrub jobs. (Yes! Thanks!)

My FreeNAS box hasn't been up for 35 days since I upgraded, but I couldn't help noticing that the crontab entry for the default scrub job looks wrong:
Code:
[root@kg-f3] ~# cat /etc/crontab | grep scrub
00	00	*	1,2,3,4,5,6,7,8,9,a,b,c	7	root	PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" /usr/local/sbin/scrub -t 35 zstore

I don't think specifying months in hexadecimal is going to work.
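For reference, crontab(5) only accepts 1-12 (or the names jan-dec) in the month field, and this job is clearly meant to run every month anyway, so I'd expect the entry to just use '*' there, something like:
Code:
00	00	*	*	7	root	PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" /usr/local/sbin/scrub -t 35 zstore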
This is on
Code:
[root@kg-f3] ~# cat /etc/version
FreeNAS-8.2.0-RELEASE-p1-x64 (r11950)
[root@kg-f3] ~# uname -a
FreeBSD kg-f3.kg4.no 8.2-RELEASE-p9 FreeBSD 8.2-RELEASE-p9 #0: Thu Jul 19 12:39:10 PDT 2012
     root@build.ixsystems.com:/build/home/jpaetzel/8.2.0/os-base/amd64/build/home/jpaetzel/8.2.0/FreeBSD/src/sys/FREENAS.amd64  amd64

I searched the bug report page, but didn't find anything about this problem there.
I hope this helps.
 

tingo (Contributor)
Hmm, I updated the job from the GUI (not changing anything, just pressing "OK" in the edit dialog), and now it looks like this:
Code:
tingo@kg-f3$ cat /etc/crontab | grep scrub
00	00	*	*	7	root	PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" /usr/local/sbin/scrub -t 35 zstore

which looks better.
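Side note: cron re-reads /etc/crontab on its own when the file's modification time changes (it rechecks about once a minute), so no restart is needed. With stock Vixie-cron logging, the reload should show up in the cron log:
Code:
[root@kg-f3] ~# grep RELOAD /var/log/cron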
 

tingo (Contributor)
Hi,
I still don't know how to reproduce the bug, but it looks like it is present in 8.3.0 too.
Details:
Code:
[root@kg-f5] ~# cat /etc/version
FreeNAS-8.3.0-RELEASE-x64 (r12701M)

[root@kg-f5] ~# cat /etc/crontab | grep scrub
00	00	*	1,2,3,4,5,6,7,8,9,a,b,c	7	root	PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" /usr/local/sbin/scrub -t 35 z5

[root@kg-f5] ~# zpool status
  pool: z5
 state: ONLINE
  scan: scrub repaired 0 in 7h39m with 0 errors on Tue Jan 29 01:20:21 2013
config:

	NAME                                            STATE     READ WRITE CKSUM
	z5                                              ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/202e9138-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/20e9bc37-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/21a0e079-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/225d33a6-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/2319281f-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/23d54afc-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0

errors: No known data errors

[root@kg-f5] ~# scrub -t 365 z5
   skipping scrubbing of pool 'z5':
	  last scrubbing is 43 days ago, threshold is set to 365 days

[root@kg-f5] ~# uptime
 8:58PM  up 54 days,  2:37, 2 users, load averages: 0.01, 0.04, 0.00

As you can see, the 35-day threshold has been passed (and had been by Sunday, too), but the job hasn't been run. Further "evidence":
Code:
[root@kg-f5] ~# ll /var/log/c*
-rw-------  1 root  wheel  38319 Mar 12 20:59 /var/log/cron
-rw-------  1 root  wheel   3577 Mar 12 18:00 /var/log/cron.0.bz2
-rw-------  1 root  wheel   3760 Mar 12 10:00 /var/log/cron.1.bz2
-rw-------  1 root  wheel   3679 Mar 12 02:00 /var/log/cron.2.bz2
-rw-------  1 root  wheel   3585 Mar 11 18:00 /var/log/cron.3.bz2
[root@kg-f5] ~# grep scrub /var/log/cron
[root@kg-f5] ~# sh
# for i in 0 1 2 3
> {
> bzcat /var/log/cron.${i}.bz2 | grep scrub
> }
# exit
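
For what it's worth, the skip message from scrub -t 365 above suggests that the wrapper simply compares the days since the last completed scrub against the -t threshold. Here is my guess at that logic as a minimal sh sketch (parsing the date out of zpool status; the real /usr/local/sbin/scrub may well do it differently):
Code:
#!/bin/sh
# Hypothetical sketch of the -t threshold check; not the actual FreeNAS script.
pool=$1
threshold=$2    # days, as given with -t

# Completion date from the "scan:" line, e.g.
# "scrub repaired 0 in 7h39m with 0 errors on Tue Jan 29 01:20:21 2013"
lastdate=$(zpool status "$pool" | sed -n 's/.*scrub repaired.*errors on //p')

now=$(date +%s)
last=$(date -j -f "%a %b %e %T %Y" "$lastdate" +%s)	# FreeBSD date(1)
days=$(( (now - last) / 86400 ))

if [ "$days" -lt "$threshold" ]; then
	echo "skipping scrubbing of pool '$pool':"
	echo "	last scrubbing is $days days ago, threshold is set to $threshold days"
else
	zpool scrub "$pool"
fi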

Is there anything else I can do to try to pinpoint the problem before I open a ticket?
 

William Grzybowski (Wizard, iXsystems)
This is a known bug; you have an invalid entry in the crontab. Try deleting the scrub job and adding it again.

Should be fixed in 8.3.1, I think.
 

tingo (Contributor)
Ah, OK - it wasn't known the last time I reported it (or at least nobody said so in this thread), and there were no new posts in the thread, so I just followed up on it. No harm done.
I just saved the job again from the GUI; it now looks like this:
Code:
tingo@kg-f5$ cat /etc/crontab | grep scrub
00	00	*	*	7	root	PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" /usr/local/sbin/scrub -t 35 z5

Next week I'll know if it works properly. :smile:
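In the meantime, a scrub can always be kicked off by hand, bypassing the threshold wrapper entirely:
Code:
[root@kg-f5] ~# zpool scrub z5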
 

tingo (Contributor)
Final report: it works.
zpool status:
Code:
tingo@kg-f5$ zpool status
  pool: z5
 state: ONLINE
  scan: scrub repaired 0 in 7h35m with 0 errors on Sun Mar 17 07:35:12 2013
config:

	NAME                                            STATE     READ WRITE CKSUM
	z5                                              ONLINE       0     0     0
	  raidz1-0                                      ONLINE       0     0     0
	    gptid/202e9138-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/20e9bc37-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/21a0e079-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/225d33a6-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/2319281f-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0
	    gptid/23d54afc-4124-11e2-a433-3085a9ebf2a2  ONLINE       0     0     0

errors: No known data errors

cron logfiles:
Code:
[root@kg-f5] ~# grep scrub /var/log/cron
[root@kg-f5] ~# ll /var/log/cron*
-rw-------  1 root  wheel  3219 Mar 17 18:15 /var/log/cron
-rw-------  1 root  wheel  3539 Mar 17 18:00 /var/log/cron.0.bz2
-rw-------  1 root  wheel  3802 Mar 17 10:00 /var/log/cron.1.bz2
-rw-------  1 root  wheel  3669 Mar 17 02:00 /var/log/cron.2.bz2
-rw-------  1 root  wheel  3484 Mar 16 18:00 /var/log/cron.3.bz2
[root@kg-f5] ~# sh
# for i in 0 1 2 3
> {
> bzcat /var/log/cron.${i}.bz2 | grep scrub
> }
Mar 17 00:00:00 kg-f5 /usr/sbin/cron[8752]: (root) CMD (PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" /usr/local/sbin/scrub -t 35 z5)
# exit

That is all.
 