Disk use in scrub

Status
Not open for further replies.
Joined
Jul 13, 2013
Messages
286
I've got a scrub running on one of the servers, which has been down for maintenance last night and today; there are no other users, just the scrub.

I'm seeing disk traffic like this (identical on all 7 drives):

[attached screenshot: per-disk throughput graph]
That's all during the scrub, and nothing else is going on.

Pool state is:

Code:
[ddb@zzbackup ~]$ zpool list
NAME  SIZE  ALLOC  FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
zzback  38T  24.5T  13.5T  -  28%  64%  1.00x  ONLINE  /mnt

[ddb@zzbackup ~]$ zpool status zzback
  pool: zzback
state: ONLINE
  scan: scrub in progress since Tue May 10 04:00:04 2016
        8.36T scanned out of 24.5T at 190M/s, 24h44m to go
        0 repaired, 34.10% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        zzback                                          ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/e2f6c2c1-af82-11e5-9d15-20cf306269eb  ONLINE       0     0     0
            gptid/e41790b8-af82-11e5-9d15-20cf306269eb  ONLINE       0     0     0
            gptid/e4fa168f-af82-11e5-9d15-20cf306269eb  ONLINE       0     0     0
            gptid/e5d2abce-af82-11e5-9d15-20cf306269eb  ONLINE       0     0     0
            gptid/e6b31a5f-af82-11e5-9d15-20cf306269eb  ONLINE       0     0     0
            gptid/e78826c7-af82-11e5-9d15-20cf306269eb  ONLINE       0     0     0
            gptid/ec3dad61-af82-11e5-9d15-20cf306269eb  ONLINE       0     0     0

errors: No known data errors


So why do I see such big swings in disk throughput during the scrub? It feels odd, and of course it makes the completion estimate go crazy (it showed less than 8 hours to go last I looked; now it's showing 24!).

(I don't really expect there's anything I can do about it; I'm just curious, and I like to understand how the stuff I'm using works.)
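(For what it's worth, the "to go" figure appears to be nothing more than remaining data divided by the reported rate. A quick sketch in Python, assuming ZFS's binary units, i.e. 1 TiB = 2^20 MiB, reproduces the number above:)

```python
# Reproduce the zpool status ETA: remaining data / reported scan rate.
# Assumes the T/M figures are binary units (1 TiB = 2**20 MiB).

def scrub_eta_seconds(scanned_tib, total_tib, rate_mib_s):
    remaining_mib = (total_tib - scanned_tib) * 2**20
    return remaining_mib / rate_mib_s

secs = int(scrub_eta_seconds(8.36, 24.5, 190))
print(f"{secs // 3600}h{(secs % 3600) // 60}m to go")  # prints "24h44m to go"
```

So any wobble in the reported rate feeds straight into the estimate.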
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
One thing that jumps out is that your pool has 64% fragmentation :eek:
That would slow things down a lot.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
I think it has 28% fragmentation and 64% usage. Still, 28% is quite high.
Guess 1: there's a bunch of data that was written once and never modified, then a bunch of live data that changes frequently.
Guess 2: some undetected hardware issue.
 

Joined
Jul 13, 2013
Messages
286
Small files should be nearly unknown: camera-raw files for still photos, plus lots of video in even bigger files, so two-digit megabytes and up. (Yes, there are smaller derivative JPEGs of some of the photos, but maybe 1/8 of them or so, not that many.) I haven't actually done a full analysis of the file-size distribution, though, and a big chunk of this isn't my data, so I might not know it as well as I think.

I'm not really complaining about the total time taken; it's the strange surges and dips in disk rates that have me puzzled.

Code:
scrub in progress since Tue May 10 04:00:04 2016
        10.4T scanned out of 24.5T at 121M/s, 33h53m to go
        0 repaired, 42.53% done
 
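(If the M/s figure zpool status prints is the average rate since the scrub began, which it appears to be, then the two snapshots above let you back out the recent rate, and that explains the ballooning estimate. A rough sketch, same binary-unit assumption as before:)

```python
# Back out the recent scan rate from two zpool status snapshots,
# assuming the reported M/s figure is the average since the scrub started.

def elapsed_s(scanned_tib, avg_rate_mib_s):
    # elapsed time implied by an average rate: scanned / rate
    return scanned_tib * 2**20 / avg_rate_mib_s

t1 = elapsed_s(8.36, 190)   # first snapshot: roughly 12.8 h into the scrub
t2 = elapsed_s(10.4, 121)   # second snapshot: roughly 25.0 h in
recent = (10.4 - 8.36) * 2**20 / (t2 - t1)
print(f"recent rate ~{recent:.0f} MiB/s")  # far below the 190 M/s seen earlier
```

A recent rate of under 50 MiB/s, projected over the whole remainder, is what drags the estimate out by a day.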

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479
I think it has 28% fragmentation and 64% usage. Still, 28% is quite high.
Guess 1: there's a bunch of data that was written once and never modified, then a bunch of live data that changes frequently.
Guess 2: some undetected hardware issue.
Yep, you're right, I missed the > - < column and read that wrong (damn formatting) lol
 

RichTJ99

Patron
Joined
Sep 12, 2013
Messages
384
Just curious: I see my frag rate is 23%. Is there some sort of ZFS defrag? Is this worth worrying about? I do like to worry.
 
