Very slow CIFS / NFS performance to iohyve VM

Status
Not open for further replies.

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
Hi All,

I am really enjoying iohyve on my FreeNAS 9.10 system. I've now got 3 VMs, all running Debian Jessie, working like a charm. I'm slowly migrating the services that were once hosted in jails / plugins into Docker containers on my VMs. I'm impressed with how straightforward it has been and how well it works.

Where I've come unstuck, though, is that attaching storage external to the VM (but hosted on the same FreeNAS system) is incredibly slow over either CIFS or NFS. This hasn't been too much of a problem until I recently set up Nextcloud, which is now awfully slow because of it.

As an example, I can copy (duplicate with cp at the command line) a 1 GB file on FreeNAS natively in about 2 seconds. Doing the same thing from an iohyve VM, connecting to the storage over CIFS, takes about 30 seconds (roughly 30 MB/s), and over NFS about 45 seconds (roughly 22 MB/s). If I connect another machine over gigabit Ethernet, I can easily saturate the link when copying files to and from native FreeNAS CIFS shares (around 100 MB/s).
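To give a concrete idea of how I'm measuring this, the comparison boils down to a timed copy along these lines (file names are just placeholders):
Code:
# natively on FreeNAS
time cp /mnt/pond/Calypso/test/test.bin /mnt/pond/Calypso/test/test-copy.bin

# from inside the VM, against the CIFS-mounted share
time cp /mnt/Calypso/test/test.bin /mnt/Calypso/test/test-copy.bin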

I can provide configs and more details, but I'd first like to know whether anyone else has had performance issues when attaching storage to iohyve VMs. Is CIFS / NFS the way to go? Are there any tricks people have had to use to get it performing well?

Many thanks,

Nick
 

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
Contemplating the possibility that I might be fooling myself with some ZFS magic skewing the results, I tried something (slightly) more scientific. This time, I used dd to generate a 1 GB file, first natively, then from the VM to the same ZFS dataset via a CIFS-mounted FreeNAS share. I alternated one after the other, hoping to avoid any confounding results from the ZFS ARC. Results were consistent over multiple attempts.

Native on FreeNAS:
Code:
[root@saturn] /mnt/pond/Calypso/test# dd if=/dev/zero of=./test.dd bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes transferred in 1.763954 secs (608712996 bytes/sec)

And then in the VM over CIFS
Code:
nick@calypso:/mnt/Calypso/test$ dd if=/dev/zero of=./test.dd2 bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 36.693 s, 29.3 MB/s

A fairly humongous difference however you look at it.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,367
Are you using VirtIO for the virtual NIC?

Btw, your test is probably not valid if the dataset has compression enabled.
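For example, you could check the dataset (name guessed from your paths) and rerun with data that won't compress away:
Code:
zfs get compression pond/Calypso
# /dev/zero compresses to almost nothing; /dev/urandom does not
dd if=/dev/urandom of=./test.dd bs=1M count=1024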

But even with that, 30 MB/s is slow.
 

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
Hi @bass_rock - that's a very interesting post. I don't know much about Rancher (I'm running Debian Jessie on my iohyve VMs and deploying Docker containers from there), but it's interesting to see the comments about using an NFS Docker container to manage NFS mounts. I'm slightly dubious that it could make any difference - my assumption is that the poor NFS performance is related to iohyve itself rather than anything sitting inside the VM - but I should try it to see whether it makes any difference.

@Stux - excellent question. I don't know. How would I find out? Other than setting a FreeNAS tuneable to configure iohyve_flags to kmod=1 net=igb0, I haven't touched the network config at all. Good point on the effect of compression (which I do have enabled), but I would expect consistent behaviour whether I was testing directly on FreeNAS or via CIFS / NFS (at least until the point where CIFS / NFS itself throttles performance, remembering that this is all happening on the same physical machine).
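For completeness, the only iohyve-related settings I have are the usual rc.conf-style tunables (reproduced from memory, so treat them as approximate):
Code:
iohyve_enable="YES"
iohyve_flags="kmod=1 net=igb0"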
 

danjng

Explorer
Joined
Mar 20, 2017
Messages
51
Hi @bass_rock - that's a very interesting post. I don't know much about Rancher (I'm running Debian Jessie on my iohyve VMs and deploying Docker containers from there), but it's interesting to see the comments about using an NFS Docker container to manage NFS mounts. I'm slightly dubious that it could make any difference - my assumption is that the poor NFS performance is related to iohyve itself rather than anything sitting inside the VM - but I should try it to see whether it makes any difference.

@Stux - excellent question. I don't know. How would I find out? Other than setting a FreeNAS tuneable to configure iohyve_flags to kmod=1 net=igb0, I haven't touched the network config at all. Good point on the effect of compression (which I do have enabled), but I would expect consistent behaviour whether I was testing directly on FreeNAS or via CIFS / NFS (at least until the point where CIFS / NFS itself throttles performance, remembering that this is all happening on the same physical machine).
You should be able to see your NIC properties in the VM tab. Select your VM and then press the Devices button. This should bring up the window of attached devices, and if you click on the NIC you should be able to tell whether it's VirtIO.


 

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
Ah - I would, but I am doing all of this on FreeNAS 9.10, so no nice GUIs to help me. Anything I can do at the command line?
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Ah - I would, but I am doing all of this on FreeNAS 9.10, so no nice GUIs to help me. Anything I can do at the command line?

Just check which NIC driver is being used in your Debian VM(s), e.g.:

Code:
 lspci -v | grep Ethernet
and/or
Code:
lsmod | grep net



AFAIK iohyve defaults to using virtio-net.
 

nickt

Contributor
Joined
Feb 27, 2015
Messages
131
Took me a while to come back to this...

It certainly looks like I am using virtio-net:
Code:
nick@calypso:/mnt/Calypso/test$  lspci -v | grep Ethernet
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
nick@calypso:/mnt/Calypso/test$ lsmod | grep net
virtio_net			 26553  0
virtio_ring			17513  2 virtio_net,virtio_pci
virtio				 13058  2 virtio_net,virtio_pci

In the meantime, I discovered that my /etc/fstab mount entry was defaulting to CIFS protocol version 1.0. I updated it to negotiate version 2.1, but that hasn't made any difference.
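For reference, the fstab entry now looks roughly like this (server name, share and credentials file are placeholders):
Code:
//saturn/Calypso  /mnt/Calypso  cifs  vers=2.1,credentials=/home/nick/.smbcred,uid=nick,gid=nick  0  0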

Any other thoughts?
 