Tool for testing ZVOL performance

xyzzy

Explorer
Joined
Jan 26, 2016
Messages
76
I'm setting up a new TrueNAS Core box for use as a VMware datastore using iSCSI.

I'd like to benchmark what the disk subsystem is capable of doing before I worry about the other components of iSCSI (i.e., networking).

What tool can I use to benchmark the ZVOL from the TrueNAS shell? I'm familiar with fio and iozone but those only seem to test file I/O and not block I/O.

I've searched this forum and the general Internet but haven't found an answer, so I suspect either it's not possible or I'm missing some important detail.

Thanks!
 

chruk

Dabbler
Joined
Sep 4, 2021
Messages
27
I have only tested ZVOL performance inside a VM (using Proxmox), and it massively outperformed a raw image on top of a dataset.

The only other method I can think of is using dd.
 

xyzzy

Explorer
Joined
Jan 26, 2016
Messages
76
I have only tested ZVOL performance inside a VM (using Proxmox), and it massively outperformed a raw image on top of a dataset.

The only other method I can think of is using dd.
How does one use dd with a block device like a ZVOL? I think the "of" parameter has to be a file.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
How does one use dd with a block device like a ZVOL? I think the "of" parameter has to be a file.
Can be a device all the same. Make sure to switch off compression.
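For example, something along these lines (assuming a zvol named vol1/zvol1; substitute your own pool and zvol names):

zfs set compression=off vol1/zvol1

# sequential write test: overwrites whatever is on the zvol
dd if=/dev/zero of=/dev/zvol/vol1/zvol1 bs=1m count=10240

# sequential read test
dd if=/dev/zvol/vol1/zvol1 of=/dev/null bs=1m count=10240

With compression left on, a stream of zeros compresses to almost nothing and the numbers become meaningless. Also keep in mind that the read pass may be served partly from ARC rather than the disks.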
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Got it. Thanks! Any other tools I can use on a ZVOL to mimic the way iSCSI would use the disk subsystem?
I'm reasonably certain that fio can target block devices, with the obvious caveat that write testing is destructive because it can't be contained to a file.
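Something along these lines ought to work for a read-only test (rough sketch; the device path, block size, and runtime are just placeholders):

fio --name=zvol-randread --filename=/dev/zvol/vol1/zvol1 \
    --rw=randread --bs=16k --runtime=30 --time_based --group_reporting

Swap in --rw=randwrite only if you're fine with the zvol's contents being overwritten.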
 

xyzzy

Explorer
Joined
Jan 26, 2016
Messages
76
I'm reasonably certain that fio can target block devices, with the obvious caveat that write testing is destructive because it can't be contained to a file.
It turns out you're totally right about fio working on ZVOLs. The trick is that I had to change from the "posixaio" IO engine to "psync" and change the "filename" parameter from a file like "/mnt/vol1/test.tmp" to the ZVOL device (i.e., "/dev/zvol/vol1/zvol1"). The bummer is that the "psync" IO engine seems to be quite inferior to "posixaio" (when I compared them head-to-head on a VOL).
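In job-file form, the two changes look roughly like this (a sketch rather than the exact job I ran; block size and job count are just examples):

[zvol-randread]
# changed from ioengine=posixaio
ioengine=psync
# changed from a file such as /mnt/vol1/test.tmp
filename=/dev/zvol/vol1/zvol1
rw=randread
bs=16k
numjobs=4
runtime=60
time_based
group_reporting

Saved as something like zvol-randread.fio and run with "fio zvol-randread.fio".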

One really strange thing I noticed is that right after I create the ZVOL, when I run my fio tests, the read numbers are super high, as if it's using the ARC. However, I know it isn't, because I'm watching the ARC with "arcstat -f read,hits,miss,hit%,arcsz 1" and seeing no ARC hits or misses. At the same time, I'm monitoring the device activity with "iostat -d -w1" and seeing no reads. After a minute or two of read tests, I start seeing more reasonable numbers from fio, and my arcstat and iostat commands start showing activity. The write tests are not affected (they're consistently normal). I didn't see anything like this when testing the VOL.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I recently saw an article where @LawrenceSystems did an interesting benchmark between SCALE and CORE using this tool (I note that it really uses fio underneath):

Their results are reported here:
 