Low iostat read performance on RAIDZ2...?

Status
Not open for further replies.

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
I'm sorry if this question has been asked before, but I can't find a clear answer. On my new RAIDZ2 FreeNAS box (6x 3TB disks), I've run the performance test for the first time (the "Performance Test" button in the GUI, which runs iozone). I got the following results:

Code:
    Record Size 128 KB
    File size set to 41943040 KB
    Command line used: /usr/local/bin/iozone -r 128 -s 41943040k -i 0 -i 1
    Output is in Kbytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 Kbytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride                                  
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
        41943040     128  474979  468843   138072   143652      


I'm quite surprised by how much lower the read speeds are than the write speeds. Granted, the speeds are still fast enough to saturate a single GbE link (which is all I've got), but why so low? Is this normal? I've run countless burn-in tests and everything has come up clean, so I don't anticipate any hardware issues.
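For reference, the GUI sized the test file well past RAM, which is what you want so reads can't just be served from ARC. A sketch of the sizing arithmetic (the 16 GB RAM figure is hardcoded as an example for this box; the iozone command line is the one from the output above):

```shell
# Size the iozone test file well past RAM so reads can't be
# satisfied from ARC. RAM size hardcoded as an example (16 GB here):
ram_kb=$((16 * 1024 * 1024))
size_kb=$((ram_kb * 2))   # at least 2x RAM is a common rule of thumb
echo "test file size: ${size_kb}k"

# Then, from a dataset on the pool under test (same flags as the GUI used):
# /usr/local/bin/iozone -r 128 -s ${size_kb}k -i 0 -i 1
```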
 

enemy85

Guru
Joined
Jun 10, 2011
Messages
757
Listing your hardware specs would have been kind of useful...
 

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
Sorry - as per signature:

FreeNAS-9.3-STABLE
ASRock C2750D4I in Node304 case
16 GB ECC Crucial CT2KIT102472BD160B
6x WD Green 3TB (WDIDLE'd) in RAIDZ2
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Signatures don't show for those of us using tapatalk, etc.

It could be a problem. ZFS writes the data out rapidly because the transaction group mechanism acts as a very good write cache, but the reads have to be fulfilled largely from the pool, since your ARC isn't going to hold 40 GB. For comparison, here's the same test on a machine whose ARC is bigger:

Code:
        Record Size 128 KB
        File size set to 41943040 KB
        Command line used: /usr/local/bin/iozone -r 128 -s 41943040k -i 0 -i 1
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
        41943040     128  247060  415391   733888   909637



32 GB of memory nearly holds the data, so the reads are massively better than the writes. On a much smaller machine (8 GB RAM):

Code:
        Record Size 128 KB
        File size set to 41943040 KB
        Command line used: /usr/local/bin/iozone -r 128 -s 41943040k -i 0 -i 1
        Output is in Kbytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 Kbytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.
                                                            random  random    bkwd   record   stride
              KB  reclen   write rewrite    read    reread    read   write    read  rewrite     read   fwrite frewrite   fread  freread
        41943040     128   49593   54223   200764   219931



The writes are slowish because the pool's older and somewhat fuller, but reads are still pretty zippy.

Both of those have been optimized towards their intended workload and both are RAIDZn arrays.

I guess generally speaking I'd expect your reads to be faster.
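The back-of-envelope arithmetic behind those three results is just file size versus RAM. As a rough sketch (the ARC is actually somewhat smaller than total RAM, so these are upper bounds):

```shell
# Upper bound on how much of the ~40 GB iozone file each machine's
# RAM could cache. The ARC is somewhat smaller than total RAM, so
# the real cached fraction is lower still.
file_gb=40
for ram_gb in 8 16 32; do
    echo "${ram_gb} GB RAM: at most $((ram_gb * 100 / file_gb))% of the file cached"
done
```

Which is consistent with the pattern above: the 32 GB box re-reads mostly from memory, while the 8 GB and 16 GB boxes have to go to the disks.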
 

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
Thanks (and good to know about the signatures).

Does seem like a bit of a concern, then. I'm not expecting ARC to save me - my use cases typically won't benefit much from ARC, so it's good that the test parameters don't attempt to fit within it.

One thing I don't understand is why the raw per-disk read rates reported by iostat during the test are so much lower than the write rates (sorry the image is small; I haven't quite figured out how to drive the forum).

Screen Shot 2015-03-27 at 10.56.02 am.png

This contrasts with my badblocks testing (prior to building the RAIDZ2 zpool), where read and write performance was absolutely symmetrical.

I'm really not sure where to start with this. There seems to be no indication of hardware issues at all. It would be great if anyone could suggest where I should look.
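In case it helps, here's a hypothetical set of checks I could run (pool and device names below are examples, not necessarily from this system):

```shell
# Hypothetical diagnostics; pool/device names are examples only.

# 1. Per-vdev throughput while the benchmark runs:
# zpool iostat -v tank 1

# 2. Raw sequential read from one member disk, bypassing ZFS entirely;
#    a healthy 3 TB WD Green should manage very roughly 100-150 MB/s
#    at the outer tracks:
# dd if=/dev/ada0 of=/dev/null bs=1m count=10240

# For scale, the write and read figures from my first post, in MB/s:
for kbps in 474979 138072; do
    echo "$((kbps / 1024)) MB/s"
done
```

The read figure works out to roughly what a single disk can stream, which is presumably why the per-disk numbers in the screenshot look so lopsided.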

Many thanks.
 
