NFS Slow In VMs

Status
Not open for further replies.

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
IIRC the 50% cap is only for iSCSI, and my iSCSI zvol is only about 20% used; the whole pool itself is at 78%, and the cap for that is 80% (FreeNAS warns you if usage is above 80%).

Your recollection is incorrect. That 80% pool fill threshold becomes roughly 50% if you're using the pool for iSCSI, for about the same reasons. There's a hardwired tripwire at 80% because things get really bad beyond that point for average file storage use; for iSCSI, databases, or other block storage, it gets worse a lot faster:

[Attached image: delphix-small.png]
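The fill percentages above are easy to watch from the CLI. A minimal sketch — the pool names and the `warn_pool_fill` helper are made up for illustration, not from this thread:

```shell
# Hypothetical helper: flag pools whose capacity exceeds a threshold
# (~50 for iSCSI/block storage, 80 for general file storage, per the above).
# Real use: zpool list -H -o name,cap | warn_pool_fill 50
warn_pool_fill() {
  awk -v t="$1" '{ gsub(/%/, "", $2); if ($2 + 0 > t) print $1 " over " t "%" }'
}

# Canned example using the numbers from this thread (pool at 78% full):
printf 'tank\t78%%\nssdpool\t20%%\n' | warn_pool_fill 50
# -> tank over 50%
```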
 

brando56894

Wizard
Joined
Feb 15, 2014
Messages
1,537
Ah, thanks for the correction! In that case I have two 74 GB WD Raptors that I could make use of for my VM, which will actually give me the chance to get rid of one of my VMs.

This still doesn't explain why NFS is slow, since the VM that resides on my SSD is just as slow as the others.

Sent from my Pixel C using Tapatalk
 

jgreco

This still doesn't explain why NFS is slow, since the VM that resides on my SSD is just as slow as the others.

Right, but we tend to fix the obvious stuff first and address the low-hanging fruit. It isn't clear to me what's going on with your NFS, other than that perhaps there's some NFS tuning that could or should be done.
 

brando56894

I did some quick research and read that there's not really much of an advantage to iSCSI over a virtual disk image, so I may just create the image on my SSD and be done with it. Also, I'm assuming you're referring to client-side tweaks in the VM, considering all seems well outside of the VM. What else would you recommend?

 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'd probably start by looking at the IP performance between the VM and the filer with iperf, then move on to experimenting with larger window sizes and making sure it isn't something like TCP window scaling that's killing you.
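A sweep like that can be scripted. A sketch, assuming `freenas` is the filer's hostname (as used later in this thread) and `iperf -s` is already running on it — the `iperf_sweep` helper is hypothetical:

```shell
# Hypothetical helper: print the client-side commands for a window-size sweep;
# pipe the output to sh to actually run them against the server.
iperf_sweep() {
  host="$1"; shift
  for w in "$@"; do
    echo "iperf -c $host -f M -w $w"
  done
}

iperf_sweep freenas 64k 128k 256k
# -> iperf -c freenas -f M -w 64k
#    iperf -c freenas -f M -w 128k
#    iperf -c freenas -f M -w 256k

# On a Linux client, confirm TCP window scaling is enabled (should print 1):
# sysctl -n net.ipv4.tcp_window_scaling
```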
 

brando56894

Looks like there probably is a problem...

From Arch to FreeNAS
Code:
[bran@ra ~]$ sudo iperf -c freenas -f M
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.17 port 45954 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1006 MBytes   101 MBytes/sec


From Ubuntu VM to FreeNAS
Code:
 [bran@ubuntu ~]$ sudo iperf -c freenas -f M
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.16 port 55176 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   319 MBytes  31.9 MBytes/sec

 [bran@ubuntu ~]$ sudo iperf -c freenas -f M -w 16k
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.03 MByte (WARNING: requested 0.02 MByte)
------------------------------------------------------------
[  3] local 192.168.1.16 port 55234 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   262 MBytes  26.2 MBytes/sec
 [bran@ubuntu ~]$ sudo iperf -c freenas -f M -w 32k
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.06 MByte (WARNING: requested 0.03 MByte)
------------------------------------------------------------
[  3] local 192.168.1.16 port 55240 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   291 MBytes  29.0 MBytes/sec
 [bran@ubuntu ~]$ sudo iperf -c freenas -f M -w 64k
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.12 MByte (WARNING: requested 0.06 MByte)
------------------------------------------------------------
[  3] local 192.168.1.16 port 55252 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   326 MBytes  32.6 MBytes/sec
 [bran@ubuntu ~]$ sudo iperf -c freenas -f M -w 128k
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.25 MByte (WARNING: requested 0.12 MByte)
------------------------------------------------------------
[  3] local 192.168.1.16 port 55256 connected with 192.168.1.6 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   336 MBytes  33.6 MBytes/sec


I just noticed a bunch of these in the VM's dmesg, so something's up...
Code:
[93752.420302] nfs: server 192.168.1.6 not responding, still trying
[93752.477329] nfs: server 192.168.1.6 not responding, still trying
[93752.883265] nfs: server 192.168.1.6 OK
[93752.883316] nfs: server 192.168.1.6 OK
[124253.044253] nfs: server 192.168.1.6 not responding, still trying
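
Those "not responding, still trying" messages mean the client's NFS RPCs are timing out against the server. One client-side thing to experiment with is the Linux mount options; the line below is a sketch with guessed values to try, not known-good settings, and the export path is a placeholder:

```
# /etc/fstab sketch -- force TCP, hard mounts, longer timeout, bigger I/O sizes:
192.168.1.6:/mnt/tank/share  /mnt/nfs  nfs  tcp,hard,timeo=600,retrans=2,rsize=65536,wsize=65536  0 0
```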


 

brando56894

So after some more research, this seems like it is definitely a VirtualBox issue: https://forums.virtualbox.org/viewtopic.php?f=7&t=26783

Code:
 [bran@ubuntu ~]$ sudo hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads: 110 MB in  3.02 seconds =  36.41 MB/sec


edit:

Or maybe iSCSI just sucks in my setup as the OS disk (this is from the VM on the SSD with the vmdk):
Code:
bran@ubuntu-test:~$ sudo hdparm -t /dev/sda
/dev/sda:  
Timing buffered disk reads: 528 MB in  3.01 seconds = 175.61 MB/sec
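
hdparm -t only measures buffered reads; a rough write-side check is possible with dd. A sketch — the target path is a placeholder (point it at the mount or disk under test), and conv=fdatasync makes dd wait for the data to hit the device before reporting its rate:

```shell
# Sequential write test: 256 MB of zeros, synced before dd reports a rate.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```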


edit 2: After more research, it looks like the best network driver is VirtIO rather than the Intel PRO/1000 drivers. The user manual doesn't mention anything about BSD support, but it exists in the program itself. The only problem is that when I select it, my network connection fails to work at all.

virtio-kmod exists in the package database but doesn't seem to exist on the server. It also exists in ports, but it isn't supported in 9.3, so it looks like I'm screwed :( I guess I could put SABnzbd and/or SickRage and CouchPotato back in a jail.

 

jgreco

Okay, well, you definitely get points for running with the ball to what appears to be a reasonable conclusion. I'm sorry the answer isn't happier.

We've always known that VirtualBox isn't entirely awesome. But the good news is that there'll be a different virtualization option in FreeNAS 10, called bhyve, which will likely work "better" because it was designed from the ground up for BSD.
 

brando56894

Thanks for the help.

Just for the hell of it, I think I'm going to give ZFS on Linux a try on my NAS (installed on a separate, non-ZFS drive) until FreeNAS 10 comes out. FreeBSD has kind of annoyed me ever since I started using it last year, but I stuck with it because I liked the FreeNAS interface. The removal of Linux jails was a real downer for me, since most of the things I want to use are meant for Linux; they've been ported to BSD but sometimes don't work as well as their Linux counterparts. It will also give me a chance to get down and dirty with ZFS (since I've mostly been using the GUI) and to use KVM instead of VirtualBox, which is pretty much what a bunch of companies use anyway.

 

brando56894

I was looking for one of my old posts and stumbled upon this one, and just wanted to add my experiences with KVM and ZFS on Linux since I made the last post in February. I've been following the development of FreeNAS 10; it looks gorgeous and I can't wait to use the final product. Even though ZFS on Linux is treating me well, I do miss having a nice, easy-to-use interface for everything. I hope bhyve will be on par, performance-wise, with KVM, because I have absolutely zero performance issues with KVM/QEMU compared to the phpVirtualBox setup in FreeNAS 9.3.

I'm running Arch Linux on the NAS itself and within all my KVM guests. Just for the hell of it I ran iperf3 to compare against my numbers above, and all I can say is GOD DAMN!!

Code:
[root@usenet]: ~># iperf3 -c nas
Connecting to host nas, port 5201
[  4] local 192.168.1.17 port 48700 connected to 192.168.1.6 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  2.90 GBytes  24.9 Gbits/sec    0   3.08 MBytes
[  4]   1.00-2.00   sec  3.46 GBytes  29.8 Gbits/sec    0   3.08 MBytes
[  4]   2.00-3.00   sec  3.49 GBytes  29.9 Gbits/sec    0   3.08 MBytes
[  4]   3.00-4.00   sec  3.47 GBytes  29.8 Gbits/sec    0   3.08 MBytes
[  4]   4.00-5.00   sec  3.50 GBytes  30.0 Gbits/sec    0   3.08 MBytes
[  4]   5.00-6.00   sec  3.49 GBytes  30.0 Gbits/sec    0   3.08 MBytes
[  4]   6.00-7.00   sec  3.50 GBytes  30.1 Gbits/sec    0   3.08 MBytes
[  4]   7.00-8.00   sec  3.48 GBytes  29.9 Gbits/sec    0   3.08 MBytes
[  4]   8.00-9.00   sec  3.49 GBytes  30.0 Gbits/sec    0   3.08 MBytes
[  4]   9.00-10.00  sec  3.49 GBytes  29.9 Gbits/sec    0   3.08 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  34.3 GBytes  29.4 Gbits/sec    0             sender
[  4]   0.00-10.00  sec  34.3 GBytes  29.4 Gbits/sec                  receiver

iperf Done.


That was from the NAS itself to a KVM guest that lives on an XFS-formatted partition on my SSD.

This is from Arch running in VirtualBox on a Windows 7 host, the VDI resides on an SSD, to the NAS itself.

Code:
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.206, port 54642
[  5] local 192.168.1.6 port 5201 connected to 192.168.1.206 port 54644
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   107 MBytes   897 Mbits/sec
[  5]   1.00-2.00   sec   111 MBytes   933 Mbits/sec
[  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec
[  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec
[  5]   4.00-5.00   sec   111 MBytes   933 Mbits/sec
[  5]   5.00-6.00   sec   111 MBytes   933 Mbits/sec
[  5]   6.00-7.00   sec   111 MBytes   933 Mbits/sec
[  5]   7.00-8.00   sec   111 MBytes   933 Mbits/sec
[  5]   8.00-9.00   sec   111 MBytes   933 Mbits/sec
[  5]   9.00-9.99   sec   110 MBytes   933 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-9.99   sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-9.99   sec  1.08 GBytes   929 Mbits/sec                  receiver


And this is from the VirtualBox Arch VM to the Arch KVM

Code:
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.1.206, port 55654
[  5] local 192.168.1.17 port 5201 connected to 192.168.1.206 port 55656
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   107 MBytes   895 Mbits/sec
[  5]   1.00-2.00   sec   111 MBytes   933 Mbits/sec
[  5]   2.00-3.00   sec   111 MBytes   933 Mbits/sec
[  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec
[  5]   4.00-5.00   sec   111 MBytes   933 Mbits/sec
[  5]   5.00-6.00   sec   111 MBytes   933 Mbits/sec
[  5]   6.00-7.00   sec   111 MBytes   933 Mbits/sec
[  5]   7.00-8.00   sec   111 MBytes   933 Mbits/sec
[  5]   8.00-9.00   sec   111 MBytes   933 Mbits/sec
[  5]   9.00-10.00  sec   111 MBytes   933 Mbits/sec
[  5]  10.00-10.11  sec  11.9 MBytes   934 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.11  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.11  sec  1.09 GBytes   929 Mbits/sec                  receiver


So I think it's pretty safe to say that phpVirtualBox sucks more than the vacuum of space hahahaha
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
The last 2 both appear to be limited by a 1GbE link, no?
 

brando56894

The last 2 both appear to be limited by a 1GbE link, no?

You are correct, sir! My new NAS board had an option with dual 10G NICs plus dual 1G NICs, and I was contemplating it, but that version was $900 (the one I got just doesn't have the dual 10G NICs, and it was an open box, so I paid $650 instead of $800). Once I looked at the price and size of 10G switches (the cheapest one I found was about $500 refurbished, and it was huge), it definitely wasn't feasible, since I live in the greater NYC area where space is at a premium and I'm struggling to survive here financially as it is.

I was also thinking about doing a crossover connection between a 10G port on the NAS (if I had gotten it) and my desktop, but I'd still have to shell out about $200 for a 10G NIC for the desktop, which won't be pulling from the NAS a lot since it's a self-built Steam Machine running Windows that I'm just using as my desktop for the time being. It has space for 2 HDDs and an NVMe drive, so when I start to upgrade the mirrors in my NAS I'll just use 2x 4 TB drives in RAID1. Also, I've read that game loading times don't really improve that much when going from a fast HDD to an SSD, so using a zvol over iSCSI or sharing stuff out over Samba or NFS wouldn't be beneficial performance-wise.

Also, I don't feel like messing around with static routes and different subnets on the same desktop (eth0 goes to the NAS, eth1 handles everything else).
 

depasseg

Just curious, have you tried the new virtualization subsystem in FreeNAS 9.10? I thought it's close to what 10 will offer.
 

brando56894

Nope. I was considering it, but I don't really like the v9 GUI and I didn't think bhyve was fully working yet. I may give it a go just for the hell of it to see how it compares to QEMU.
 