dd if=testfile of=/dev/null -> idle disks in iostat


ayc
Cadet · Joined: Jan 5, 2014 · Messages: 4
I'm trying to debug some performance quirks I've been seeing, and this one I can't figure out.
Setup: Supermicro motherboard, LSI SAS2008 controller, RAIDZ2 over 6 drives, 32 GB of memory.

I have a large test file (650 GB), big enough that I can let a read run for a while and watch what's happening without cache effects.
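For reference, I created the test file with something roughly like this (the path and count are just illustrative; random data keeps compression from skewing the numbers):
Code:
    # ~650 GB of incompressible data (FreeBSD's /dev/random does not block)
    dd if=/dev/random of=/mnt/vol1/testfile2 bs=1M count=665600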

If I run:
Code:
    dd if=testfile2 of=/dev/null bs=1M


and watch iostat, I see 2 of the 6 drives sitting idle for well over a minute. Then for a while all drives are busy, and then 2 drives go idle again. A simple dd from the raw devices runs solid and steady, so I don't think it's a hardware problem.
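For reference, I'm watching the individual disks with something like this (da0 through da5 stand in for whatever the actual device names are):
Code:
    # extended per-device stats, refreshed every second
    iostat -x -w 1 da0 da1 da2 da3 da4 da5

    # raw sequential read from a single member disk as a sanity check
    dd if=/dev/da0 of=/dev/null bs=1M count=20000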

I can validate this against zpool iostat; it's telling me the same thing (the exact command is shown after the second snapshot below).
Code:
                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
vol1                                    4.23T  6.64T  3.96K      0   504M      0
  raidz2                                4.23T  6.64T  3.96K      0   504M      0
    gptid/d520c183-75dd-11e3-8a9a-002590ad61e3      -      -      0      0      0      0
    gptid/d5d8410a-75dd-11e3-8a9a-002590ad61e3      -      -      0      0      0      0
    gptid/d67c381f-75dd-11e3-8a9a-002590ad61e3      -      -   1023      0   126M      0
    gptid/67520e84-7640-11e3-84ef-002590ad61e3      -      -   1016      0   127M      0
    gptid/d7c9fec2-75dd-11e3-8a9a-002590ad61e3      -      -    968      0   118M      0
    gptid/d8797913-75dd-11e3-8a9a-002590ad61e3      -      -    998      0   119M      0


Then, a little later, I see behavior like this:
Code:

                                           capacity     operations    bandwidth
pool                                    alloc   free   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
freenas-boot                             960M  28.8G      0      0      0      0
  da8p2                                  960M  28.8G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----
vol1                                    4.23T  6.64T  3.46K      0   440M      0
  raidz2                                4.23T  6.64T  3.46K      0   440M      0
    gptid/d520c183-75dd-11e3-8a9a-002590ad61e3      -      -    903      0   109M      0
    gptid/d5d8410a-75dd-11e3-8a9a-002590ad61e3      -      -    473      0  54.4M      0
    gptid/d67c381f-75dd-11e3-8a9a-002590ad61e3      -      -    474      0  54.5M      0
    gptid/67520e84-7640-11e3-84ef-002590ad61e3      -      -    901      0   110M      0
    gptid/d7c9fec2-75dd-11e3-8a9a-002590ad61e3      -      -    451      0  51.8M      0
    gptid/d8797913-75dd-11e3-8a9a-002590ad61e3      -      -    451      0  51.8M      0
--------------------------------------  -----  -----  -----  -----  -----  -----


Or the two idle drives move around to a different pair of disks.
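For the record, these snapshots come from something along the lines of:
Code:
    # per-vdev statistics for the pool, sampled every 5 seconds
    zpool iostat -v vol1 5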

So I think I have some misunderstanding of how ZFS reads. I expected that for a long sequential read it would notice the sequential requests and schedule read-aheads that pull data from all of the drives. What should I be seeing in this case? Also, where's a good place to dig into the ZFS algorithms, besides just reading the source?
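In the meantime I've started poking at the prefetch tunables with something like this (I'm not sure these are even the right knobs, and the names may vary between versions):
Code:
    # whether file-level prefetch (zfetch) is disabled; 0 means enabled
    sysctl vfs.zfs.prefetch_disable

    # dump whatever prefetch-related counters/tunables this build exposes
    sysctl -a | grep -i zfetch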

Thanks,

...alan
 