Scheduled scrub doesn't work after upgrade 8.0.4 --> 8.3.0

Status
Not open for further replies.

Pixeltje

Dabbler
Joined
Feb 20, 2012
Messages
32
I've been using FreeNAS for a while now, and a few weeks ago I decided to upgrade from 8.0.4 to 8.3.0 because of problems 8.0.4 had with the UPS service, among other things. Ever since the upgrade from 8.0.4 Media Edition, the weekly scheduled scrub hasn't worked. I've scheduled a scrub of the entire volume every week in the night from Saturday to Sunday. Since the upgrade, FreeNAS sends me over 60 emails during that night telling me that no scrub could be started because a scrub is already in progress. Apparently it checks for that every minute or so, because the mails arrive roughly one minute apart.

The sad thing is that I can find no error messages, and it seems no scrub was actually in progress at the time of the scheduled scrub, so what could cause the error mails? I've already tried deleting the scheduled scrub, rebooting FreeNAS, and creating a new scheduled scrub, to no avail it seems: on Sunday night 50+ emails were sent with the same message. Is this a known bug (Google doesn't say much about it)?

The planned scrub is the only one scheduled for the volume; no other scrubs have been scheduled or initiated by me.

The exact message in the mail:
[q]cannot scrub tank: currently scrubbing; use 'zpool scrub -s' to cancel current scrub[/q]
The mail has the following subject:
[q]Cron <root@freenas> PATH="/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/root/bin" zpool scrub tank > /dev/null[/q]

I don't really know what to make of this; I'd like to have a weekly scrub to check for any corrupt data. Most of all, though, I want to know what goes wrong and why.

Thanks for helping.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Have you tried cancelling the scrub, even though there doesn't seem to be one in progress? Have you tried a reboot? Aside from those two things I don't have any recommendations.
 

Pixeltje

Dabbler
Joined
Feb 20, 2012
Messages
32
Thanks for your reply :)

I haven't yet tried to cancel the scrub that, according to FreeNAS, is already in progress, mostly because the scrub is scheduled at night while I'm asleep. However, I've found no evidence of a scrub ever completing, so I think there is no scrub at all and the error has another cause.

I've also deleted the scheduled scrub, rebooted, and then added a new scheduled scrub for the volume, which doesn't seem to work either.

I'm looking through the status mails that FreeNAS sends me every night, and I can see that before the upgrade to 8.3.0 the daily-run mail was different from the one I'm now receiving:
FreeNAS 8.0.4 media:
Code:
Removing stale files from /var/preserve:

Cleaning out old system announcements:

Backup passwd and group files:

Verifying group file syntax:
/etc/group is fine

Disk status:
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs1a    927M    421M    431M    49%    /
devfs                  1.0K    1.0K      0B   100%    /dev
/dev/md0               4.6M    1.9M    2.3M    45%    /etc
/dev/md1               824K    2.0K    756K     0%    /mnt
/dev/md2               149M    9.0M    128M     7%    /var
/dev/ufs/FreeNASs4      20M    915K     17M     5%    /data
tank                   2.3T     38K    2.3T     0%    /mnt/tank
tank/Backup            700G    152G    548G    22%    /mnt/tank/Backup
tank/Data              2.8T    529G    2.3T    19%    /mnt/tank/Data
tank/Home              2.4T    105G    2.3T     4%    /mnt/tank/Home

Last dump(s) done (Dump '>' file systems):

Checking status of zfs pools:
all pools are healthy

Checking status of ATA raid partitions:

Checking status of gmirror(8) devices:

Checking status of graid3(8) devices:

Checking status of gstripe(8) devices:

Network interface status:
Name    Mtu Network       Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
em0    1500 <Link#1>      38:60:77:9c:fe:f2   875247     0     0   473531     0     0
em0    1500 192.168.1.0   192.168.1.244       721960     -     -   524562     -     -
lo0   16384 <Link#2>                            4262     0     0     4262     0     0
lo0   16384 fe80:2::1     fe80:2::1                0     -     -        0     -     -
lo0   16384 localhost     ::1                      0     -     -        0     -     -
lo0   16384 your-net      localhost             4235     -     -     4262     -     -

Security check:
    (output mailed separately)

Scrubbing of zfs pools:
   skipping scrubbing of pool 'tank':
      last scrubbing is 0 days ago, threshold is set to 35 days

Checking status of 3ware RAID controllers:
Alarms (most recent first):
  No new alarms.

-- End of daily output --


FreeNAS 8.3.0:
Code:
Removing stale files from /var/preserve:

Cleaning out old system announcements:

Backup passwd and group files:

Verifying group file syntax:
/etc/group is fine

Backing up package db directory:

Disk status:
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs2a    926M    378M    473M    44%    /
devfs                  1.0k    1.0k      0B   100%    /dev
/dev/md0               4.6M    3.2M    977k    77%    /etc
/dev/md1               823k    2.0k    756k     0%    /mnt
/dev/md2               149M     12M    125M     9%    /var
/dev/ufs/FreeNASs4      19M    1.3M     16M     7%    /data
tank                   2.3T     39k    2.3T     0%    /mnt/tank
tank/Backup            700G    154G    545G    22%    /mnt/tank/Backup
tank/Data              2.8T    529G    2.3T    19%    /mnt/tank/Data
tank/Home              2.4T    108G    2.3T     4%    /mnt/tank/Home

Last dump(s) done (Dump '>' file systems):

Checking status of zfs pools:
NAME   SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
tank  7.25T  1.55T  5.70T    21%  1.00x  ONLINE  /mnt

all pools are healthy

Checking status of ATA raid partitions:

Checking status of gmirror(8) devices:

Checking status of graid3(8) devices:

Checking status of gstripe(8) devices:

Network interface status:
Name    Mtu Network       Address              Ipkts Ierrs Idrop    Opkts Oerrs  Coll
em0    1500 <Link#1>      38:60:77:9c:fe:f2  5664180     0     0  4462841     0     0
em0    1500 192.168.1.0   192.168.1.244      4915383     -     -  5640007     -     -
usbus     0 <Link#2>                               0     0     0        0     0     0
usbus     0 <Link#3>                               0     0     0        0     0     0
lo0   16384 <Link#4>                          887166     0     0   887160     0     0
lo0   16384 fe80::1%lo0   fe80::1                  0     -     -        0     -     -
lo0   16384 localhost     ::1                      0     -     -        0     -     -
lo0   16384 your-net      localhost           887170     -     -   887167     -     -

Security check:
    (output mailed separately)

Checking status of 3ware RAID controllers:
Alarms (most recent first):
  No new alarms.

-- End of daily output --


Please notice that the section about scrubs has been removed entirely since the upgrade to 8.3.0, while the section about pool health has become more detailed.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
What's the output of zpool status?

Just something that popped into my head: if it's trying to start a scrub every minute for an hour, maybe at minute 00 it actually starts the scrub, and then from minute 01 to 59 you get the email because the scrub already started at 00.
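That hypothesis can be sketched in a few lines of shell. This is only an illustration of how cron's minute field behaves (the pool name and schedule are assumptions, not read from this system): a minute field of '*' matches all 60 minutes of the scheduled hour, so the scrub command runs 60 times, while a literal '0' matches only minute 00.

```shell
# Count how many minutes (0-59) of an hour match a given cron minute field.
# Only '*' and a literal minute are handled; real cron also supports
# ranges, lists, and steps.
fires_per_hour() {
  field=$1
  count=0
  for m in $(seq 0 59); do
    if [ "$field" = "*" ] || [ "$field" -eq "$m" ] 2>/dev/null; then
      count=$((count + 1))
    fi
  done
  echo "$count"
}

fires_per_hour "*"   # 60 start attempts: the first succeeds, the other 59 mail an error
fires_per_hour "0"   # 1 start attempt, at minute 00
```

With a minute field of '*', the first `zpool scrub tank` at 00:00 succeeds, and each of the remaining 59 invocations fails with "currently scrubbing" and triggers one of the error mails described above.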
 

Pixeltje

Dabbler
Joined
Feb 20, 2012
Messages
32
What's the output of zpool status?

Just something that popped into my head: if it's trying to start a scrub every minute for an hour, maybe at minute 00 it actually starts the scrub, and then from minute 01 to 59 you get the email because the scrub already started at 00.

That could very well be the case. I'm currently unable to reach the NAS due to a change of ISP, but I'll check tonight when I get home.
 

Pixeltje

Dabbler
Joined
Feb 20, 2012
Messages
32
What's the output of zpool status?

Just something that popped into my head: if it's trying to start a scrub every minute for an hour, maybe at minute 00 it actually starts the scrub, and then from minute 01 to 59 you get the email because the scrub already started at 00.

You were right! I changed the scrub schedule last night, adjusting the settings for minutes and hours, and this morning: no mails, and the log shows that a scrub was indeed executed.

Thanks a lot for your help! :D
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Weird how the most bizarre stuff pops into my head sometimes and turns out to be right. At least it's fixed!
 

Pixeltje

Dabbler
Joined
Feb 20, 2012
Messages
32
Weird how the most bizarre stuff pops into my head sometimes and turns out to be right. At least it's fixed!

Haha yeah, I'm glad this issue is solved now, although I find the settings menu for the scrub somewhat confusing. The slider has a minimum of 1 minute, so I thought that would be the same as "0" or "none", but in the other tab I can select the "0" button. Hope this fixes it. Strange, though, since I didn't change the settings after upgrading to 8.3.0.

Nevertheless, the scrub works and no more mails through the night. Thanks again!
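For anyone hitting the same confusion: the GUI's minute selector maps onto the minute field of the generated cron entry. As a sketch (the schedule values here are illustrative only; FreeNAS generates the real entry itself, with the full PATH shown in the mail subject earlier in this thread):

```
# min hour mday mon wday  command
  *   0    *    *   6     zpool scrub tank > /dev/null  # "every 1 minute": fires 00:00-00:59
  0   0    *    *   6     zpool scrub tank > /dev/null  # minute "0": fires once, at 00:00
```

So selecting "0" in the minutes tab, rather than the 1-minute slider, is what limits the job to a single run.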
 