10Gb tunables on 9.10


DaveY

Contributor
Joined
Dec 1, 2014
Messages
141
Thought I'd revive this thread a bit and throw in some test results.

Recently I started testing 10GbE between FreeNAS and VMware ESXi for hosting VMs over iSCSI, and so far the default tunables are not as bad as I thought they might be, but not great either.

With iperf, I'm getting pretty much wire speed (9.85Gbps average) out of the box in both directions. Once I go across iSCSI, the speed does drop, as expected. Read/write tests were done using dd with different block sizes; reads were between 3-5Gbps and writes were around 8Gbps. I kept the VM small enough to ensure that reads are all ARC hits. I'm guessing the fast write speed is due to hardware acceleration on the VMware side and also compression, since I'm writing mostly zeros?
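For reference, the tests described above look roughly like the following (an illustrative sketch, not the exact commands from this thread; the IP address and file paths are placeholders):

Code:
# On the FreeNAS box: start an iperf server (iperf ships with FreeNAS 9.x)
iperf -s

# From a test VM (placeholder IP for the FreeNAS 10GbE interface): 30-second run
iperf -c 10.0.0.10 -t 30

# Inside the test VM: sequential write of zeros (compresses well on the ZFS side)
dd if=/dev/zero of=/mnt/test/zeros.bin bs=1M count=8192

# Inside the test VM: sequential read back; if the file fits in ARC on the filer,
# this should be served almost entirely from memory
dd if=/mnt/test/zeros.bin of=/dev/null bs=1M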

I should also note that there was a noticeable performance drop going from 9.3 to 9.10, with 9.3 being faster! Running the exact same tests, the ARC hit ratio is higher on 9.3 than on 9.10. I gave both a chance to "warm up" the cache before collecting stats.

Here are the ARC stats during the tests:

VMs over iSCSI on FreeNAS 9.3
  • 727.15MiB (MRU: 15.01GiB, MFU: 15.01GiB) / 32.00GiB
  • Hit ratio -> 94.82% (higher is better)
  • Prefetch -> 16.19% (higher is better)
  • Hit MFU:MRU -> 83.95%:14.88% (higher ratio is better)
  • Hit MRU Ghost -> 0.00% (lower is better)
  • Hit MFU Ghost -> 0.00% (lower is better)
VMs over iSCSI on FreeNAS 9.10
  • 3.73GiB (MRU: 15.07GiB, MFU: 15.07GiB) / 32.00GiB
  • Hit ratio -> 84.59% (higher is better)
  • Prefetch -> 18.80% (higher is better)
  • Hit MFU:MRU -> 77.34%:22.51% (higher ratio is better)
  • Hit MRU Ghost -> 0.00% (lower is better)
  • Hit MFU Ghost -> 0.00% (lower is better)
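(For reference, numbers like those above can be pulled on the FreeNAS console from the ZFS kstat counters; the commands below are an illustrative sketch, and the script location can vary by FreeNAS version:)

Code:
# Raw ARC counters: hits, misses, MRU/MFU sizes, ghost-list hits
sysctl kstat.zfs.misc.arcstats | egrep 'hits|misses|size'

# FreeNAS 9.x also bundles an arc_summary.py script that prints a formatted report
python /usr/local/www/freenasUI/tools/arc_summary.py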

A couple of questions and observations:
1. Why are reads peaking at around 4Gbps despite hitting mostly memory (ARC)?
2. There was a noticeable performance drop going from 9.3 to 9.10, with 9.3 being faster! Running the exact same tests, the ARC hit ratio is higher on 9.3 than on 9.10. It almost seems like the ARC on 9.10 is not as efficient at keeping hot data in memory as 9.3, despite plenty of free memory being available?

Hardware (both VMware server and FreeNAS)
Dell Precision R5500 with 32GB RAM
Intel X520-DA2 dual-port 10GbE NIC (1 port used for the test) on a PCIe x16 bus
Direct fiber connection (no switch in between)

Software
VMware ESXi 5.1 Update 2
FreeNAS-9.3-STABLE-201506292332
FreeNAS-9.10-STABLE-201606072003

At this point I'm not sure which tunables would help. The default network window sizes seem to be OK, since iperf is doing wire speed. Maybe some ARC tuning? If anyone has a recommended list of tunable values, I wouldn't mind testing them out and posting the results.
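(For what it's worth, the tunables most commonly suggested for 10GbE on FreeBSD-based systems are the socket-buffer and TCP auto-tuning sysctls below. These are generic community starting points, not values validated for this setup, and since iperf is already at wire speed here they may not change much:)

Code:
# sysctl tunables: larger socket buffers and TCP auto-tuning limits
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_inc=65536
net.inet.tcp.recvbuf_inc=65536

# loader tunable: ARC size cap (example value only; by default ZFS sizes the ARC itself)
vfs.zfs.arc_max=25769803776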
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, wait, you're trying to do I/O testing from within a single VM and you're getting 4Gbps? That's *very* *good*.

You might be able to do better, but you're not experiencing any problem, in my opinion.

You can twiddle around with the usual I/O performance enhancement stuff, and here are some starting points for you:

https://pubs.vmware.com/vsphere-51/...UID-0D774A67-F6AC-4D8A-9E5A-74140F036AD2.html
http://www.yellow-bricks.com/2011/06/23/disk-schednumreqoutstanding-the-story/
http://pubs.vmware.com/vsphere-4-es...rformance_statistics/c_troubleshoot_disk.html

but the real fix is most likely to move to multiple LUNs and multiple VMs. No one really expects a single VM to be able to max out a storage system in VMware; why would they? If you have something so I/O-intensive that it requires a full 10Gbps of I/O all on its own, you're probably going to want to set up a physical host. Virtualization, pretty much by definition, implicitly assumes multiple VMs.
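(A sketch of how one might look for a per-device queue bottleneck from the ESXi side; esxtop's disk-device view shows the relevant counters:)

Code:
# On the ESXi host (SSH shell):
esxtop            # press 'u' for the disk-device view
# DQLEN              - device queue depth for the LUN
# ACTV / QUED        - I/Os active at the device vs. waiting in the kernel queue
# DAVG / KAVG / GAVG - device, kernel, and guest-visible latency per command
# QUED consistently > 0 with low DAVG points at the queue depth /
# Disk.SchedNumReqOutstanding limits discussed in the links above.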
 

DaveY

Contributor
Joined
Dec 1, 2014
Messages
141
Thanks for the articles, @jgreco. Good info on queue-depth tuning!

I definitely don't expect to have a VM needing the full 10GbE, but I thought it would be a good test to see whether VMware itself can push that much data when the iSCSI reads don't involve the disks at all (all ARC).

I added a couple more LUNs and VMs and ran the same test, and I'm still maxing out at 5Gbps. Instead of 1 VM reading at 600MB/s, I now have 2 VMs reading at 300MB/s each. The VMs are on separate LUNs, but again disk I/O on FreeNAS is pretty much idle since all of the reads are coming from memory. There's definitely still a bottleneck somewhere. Hrm...
 

Attachments

  • arc-hits.png (18 KB)
  • gbe-speed.png (21.5 KB)
  • disk-act.png (26.4 KB)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
How's the CPU on the filer doing? Log into the console and run "top" to see if there's anything obvious.
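(On the FreeBSD side, showing kernel threads and per-CPU usage makes it easier to spot a single saturated core or iSCSI thread; an illustrative invocation:)

Code:
top -SHP    # -S show system processes, -H show threads, -P per-CPU statistics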
 

DaveY

Contributor
Joined
Dec 1, 2014
Messages
141
CPU usage and load were low. Nothing to write home about, and definitely nothing that should slow down the system. I've attached a screenshot of it.

Top shows around 60% I/O wait during heavy parallel reads. Really odd. Something is blocking it. Maybe the scheduler needs to be tuned?
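(One way to confirm whether the disks themselves are actually busy during these all-ARC reads is to watch the GEOM and pool I/O statistics on the FreeNAS console while a test runs; a sketch:)

Code:
gstat              # per-provider busy %, queue length, and latency
zpool iostat -v 1  # per-vdev operations and bandwidth, refreshed every second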

@diehard, it's both. The LUNs are iSCSI targets, but I had to add them as datastores on VMware to make them usable, so it's VMFS5 riding on iSCSI LUNs.
 

Attachments

  • cpu-load.png (42.3 KB)
Joined
Nov 11, 2014
Messages
1,174
The FreeNAS version of Godzilla vs. King Kong...
Popcorn, anyone?

There is always tension between those two. Don't say anything, so you don't get caught in the crossfire :smile:
 