How to test iSCSI Performance between FreeNAS and ESXi

Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Dear Community,

I am happy to set up a new pair of servers:
- FreeNAS latest version
- ESXi 7.0, which will use a zvol of FreeNAS as storage via iSCSI

I would like to ask how best to test the performance of the iSCSI link between these two servers. Is there any software recommended for that purpose? Are there any tools available which specifically test IOPS?
I would prefer to run the tests from a VM inside ESXi 7.0. During the tests, I would like to try the following:

- sync writes: standard / always
- with / without jumbo frames
- with / without Intel Optane SLOG
The link is 10 Gbit/s fiber.

Any recommendations or tips are appreciated! Thanks for all your help! :)
 

rvassar

Guru
Joined
May 2, 2018
Messages
972
The problem here is there is very little in the way of native apps that can perform this testing directly on ESXi. You end up with an OS running in a VM, with a unit of storage presented via virtualization, possibly riding on top of the VMFS layer. The VMFS filesystem makes a very intentional trade-off of safety over performance. All these things combine to give you disappointing and, more importantly, rather inconsistent numbers.

ESXi actually has a kind of Linux distro built into it. You can enable SSH and log in, and there's a somewhat functional *nix-like environment under the hood. It can even run statically linked Linux binaries under some circumstances. I've never tried to build fio in a way that would run on ESXi. Most of the binaries provided are derived from BusyBox. (I was surprised to find Python 3 hiding in there as well.) This includes a version of /bin/dd that can be used to do I/O smoke testing.
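If building fio for ESXi isn't practical, the more conventional route is to run fio inside a guest VM whose virtual disk lives on the iSCSI datastore. A minimal sketch, assuming a Linux guest with fio installed (the file path, size and runtime are placeholders to adjust):

# 4K random writes with direct I/O for 60 seconds; reports IOPS and latency
fio --name=randwrite-4k --filename=/mnt/test/fiotest --size=4G \
    --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=60 --time_based --group_reporting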

Your iSCSI datastore will mount under /vmfs/volumes/<datastorename> as a symlink to an naa-style UUID. You should be able to cd there and run something like:

dd if=/dev/zero of=<somefile> bs=1M count=1024

And get some rudimentary timing info. Keep in mind the zeros are very compressible at the ZFS layer. You can pre-stage a more realistic file and use that as the input as well.
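For example, something along these lines (file names are placeholders, and the availability of /dev/urandom to the BusyBox dd on your ESXi build is an assumption worth verifying):

# Pre-stage roughly 1 GiB of incompressible data once...
dd if=/dev/urandom of=/vmfs/volumes/<datastorename>/seedfile bs=1M count=1024
# ...then replay it as the input for the timed write test
dd if=/vmfs/volumes/<datastorename>/seedfile of=<somefile> bs=1M count=1024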
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
And get some rudimentary timing info. Keep in mind the zeros are very compressible at the ZFS layer. You can pre-stage a more realistic file and use that as the input as well.
Or disable compression for the iSCSI ZVOL temporarily.
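On the FreeNAS side that is a one-liner per direction; a sketch, with tank/vmstore standing in for your actual zvol:

zfs get compression tank/vmstore       # note the current setting first
zfs set compression=off tank/vmstore   # run the tests...
zfs set compression=lz4 tank/vmstore   # ...then restore the previous value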
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Keeping jumbo frames out of the picture, I can make a rudimentary summary of your results in the table below, with only two corners being of real value:

                 Without Optane SLOG     With Optane SLOG
sync=standard    Very fast but unsafe    Same speed as without the SLOG, but still unsafe
sync=always      Slow but safe           Fast and safe

The two corners of real value are "top left" and "bottom right". Top right means the Optane device isn't being used at all, and bottom left is likely to be too slow (bordering on "totally unusable" with spinning disks).
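Switching between the two rows is a single property change on the zvol, roughly like this (tank/vmstore is a placeholder for your dataset):

zfs set sync=standard tank/vmstore   # only honors syncs the initiator asks for; fast but risky for VM data
zfs set sync=always tank/vmstore     # treats every write as synchronous, so the SLOG actually gets used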

As far as the dd testing goes, it will technically work, but you should also test with much smaller block sizes (4K/8K) to represent the kind of performance you'll see from running VMs. 64K would be valuable as a vMotion/provisioning kind of test.
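Reusing the placeholder file name from above, that could look like this (the counts just keep each run around 1 GiB):

dd if=/dev/zero of=<somefile> bs=4k count=262144    # small-block, VM-style I/O
dd if=/dev/zero of=<somefile> bs=8k count=131072
dd if=/dev/zero of=<somefile> bs=64k count=16384    # vMotion/provisioning-style I/O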
 

Scampicfx

Contributor
Joined
Jul 4, 2016
Messages
125
Thank you, guys! In the meantime I performed some tests and noticed nearly no difference between MTU 1500 and MTU 9000. The 10 Gbit/s link was saturated with MTU 1500 as well.

While googling, I read that high-quality NICs have offload techniques built in that handle MTU 1500 as efficiently as possible, so that there is very little difference left between 1500 and 9000. Do you know anything about that?
My goal is good throughput primarily with small files. I guess MTU 9000 wouldn't bring much benefit in that case, would it?
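(In case it helps anyone testing the same thing: one quick way to confirm jumbo frames really pass end to end is a non-fragmenting ping from the ESXi shell; the address below is a placeholder for the FreeNAS storage IP.)

vmkping -d -s 8972 <freenas-storage-ip>   # -d forbids fragmentation; 8972 bytes + 28 bytes of headers = 9000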
 