Very Slow Network Performance on Virtual Machines

NASbox

Guru
Joined
May 8, 2012
Messages
650
I installed an Ubuntu 20.04 Server VM on TrueNAS 12.0-U5, and for some reason the network is VERY slow.

As a test, I downloaded the Ubuntu 20.04 ISO using wget from the shell on TrueNAS, and I got a speed of more than 40 MB/s.

When I do the same thing from within the VM, I get a speed of about 15-20 KB/s.
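
For reference, the test in both cases was just a plain wget of the ISO, something along these lines (URL from memory; any large file would do):

wget https://releases.ubuntu.com/20.04/ubuntu-20.04.3-live-server-amd64.iso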

When I run htop within the VM, it shows plenty of free memory and almost no CPU utilization. The CPU is an i7, and I've passed all 8 cores to the VM, so it's not a capacity problem. I've also tried changing the VM's NIC from VirtIO to E1000 and back again.

Is anyone else having this problem with networking on VMs?

Any assistance with troubleshooting this issue would be much appreciated.
 

IOSonic

Explorer
Joined
Apr 26, 2020
Messages
54
I ran into this once. The suggestions in this post fixed it for me. My issue was not with a jail, but with a VM just like yours.

https://www.truenas.com/community/t...ged-to-vlan-extremely-slow.80878/#post-575814

Someone in that thread recommended disabling all hardware acceleration features like so:

"ifconfig <interface> -rxcsum -rxcsum6 -txcsum -txcsum6 -tso -vlanhwtag -vlanhwtso

However, the only parameter that mattered for me was the

-vlanhwtag

After verifying this helped, I added it as a system tunable per the instructions in the thread:

  • Navigate to System > Tunables
  • Add the following as the variable: ifconfig_{ifname} (example: ifconfig_lagg0)
  • Add the following value: vlanhwtag
  • Add the following type: rc.conf
  • Repeat for every interface, including laggs and all interfaces that make them up.
  • Reboot
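
A quick way to check whether the change actually took effect is to look at the options line in the ifconfig output for the interface - the VLAN_HWTAGGING capability should be gone (lagg0 here is just an example):

ifconfig lagg0 | grep options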
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
I ran into this once. The suggestions in this post fixed it for me. My issue was not with a jail, but with a VM just like yours.

https://www.truenas.com/community/t...ged-to-vlan-extremely-slow.80878/#post-575814

Someone in that thread recommended disabling all hardware acceleration features like so:

"ifconfig <interface> -rxcsum -rxcsum6 -txcsum -txcsum6 -tso -vlanhwtag -vlanhwtso

However, the only parameter that mattered for me was the

-vlanhwtag

After verifying this helped, I added it as a system tunable per the instructions in the thread:

  • Navigate to System > Tunables
  • Add the following as the variable: ifconfig_{ifname} (example: ifconfig_lagg0)
  • Add the following value: vlanhwtag
  • Add the following type: rc.conf
  • Repeat for every interface, including laggs and all interfaces that make them up.
  • Reboot
@IOSonic Thanks for the reply...

I've been struggling to figure out exactly which interfaces I should apply the command to:
ifconfig <interface> -rxcsum -rxcsum6 -txcsum -txcsum6 -tso -vlanhwtag -vlanhwtso

In my context I have lagg0 on em0 and em1, and 3 VLANs (say 10, 20, 30) attached to lagg0. My VM's NIC is attached to VLAN20.
There is a VNET device that gets dynamically created/destroyed when the VM starts, and it appears to be attaching itself to bridge0.

IIUC, I should run the command above in the shell, substituting <interface> with em0, em1, VLAN10, VLAN20, VLAN30, and lagg0, and then start the VM for testing. If that works, I should then configure them as tunables as you described above.

Have I got that right? When you did this, did it have any noticeable impact on NAS performance?
 

IOSonic

Explorer
Joined
Apr 26, 2020
Messages
54
@NASbox per Patrick's post, this needs to be applied to the physical parent interface.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
@NASbox per Patrick's post, this needs to be applied to the physical parent interface.

Thanks @IOSonic, that helped. I executed the following in the shell:

ifconfig em0 -vlanhwtag
ifconfig em1 -vlanhwtag

and that seemed to solve the problem. Now, how do I translate those commands to tunables so that they survive a reboot?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Thanks @IOSonic, that helped. I executed the following in the shell:

ifconfig em0 -vlanhwtag
ifconfig em1 -vlanhwtag

and that seemed to solve the problem. Now, how do I translate those commands to tunables so that they survive a reboot?

You don't. You'd add -vlanhwtag in the Options field of both interfaces.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
You don't. You'd add -vlanhwtag in the Options field of both interfaces.
@Samuel Tai - Thanks for the reply. Just for clarity - Network / Interfaces / Edit, then add the string '-vlanhwtag' (including the -) to the Options field, and then click Apply on both em0/em1 as shown below. Am I correct? (Sorry for the over-caution, but if I stuff up the network interfaces I don't know how I would get control of TrueNAS back to fix it.)

View attachment 49799

 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
If you're paranoid about using the Options field, you could also create a post-init task with the explicit ifconfig commands, and then reboot. Then apply the Options after the reboot, and then delete the post-init task.
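
The post-init command would just be the two commands you already verified, chained together, e.g.:

ifconfig em0 -vlanhwtag && ifconfig em1 -vlanhwtag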
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
If you're paranoid about using the Options field, you could also create a post-init task with the explicit ifconfig commands, and then reboot. Then apply the Options after the reboot, and then delete the post-init task.

Thanks @Samuel Tai... A bit of a communication gap - for some reason it doesn't appear as if my screenshot came through. I've tried to attach it again.

I would prefer to use the Options field. I just want to make sure that I've got the right location, and to confirm whether the correct value to use is '-vlanhwtag' or 'vlanhwtag'. (Would a syntax error disable the interface? That's what I'm afraid of.)

I know that the ifconfig commands worked, and now I need to understand how to enter them into the GUI. If I can get that clarified, I'm good. Thanks in advance.


Screenshot from 2021-10-07 13-50-53.png
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Yes, include the preceding -; otherwise, you're telling ifconfig to enable vlanhwtag. If you click the ? on the right of the line, it explains you should enter the ifconfig options exactly as if typing them in on the command line. Syntax errors here shouldn't disable the interface, only be silently ignored.
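
In other words, the Options field for each of em0 and em1 would contain exactly:

-vlanhwtag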
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Why don't you just tick the box labelled "Disable Hardware Offloading"?
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Yes, include the preceding -; otherwise, you're telling ifconfig to enable vlanhwtag. If you click the ? on the right of the line, it explains you should enter the ifconfig options exactly as if typing them in on the command line. Syntax errors here shouldn't disable the interface, only be silently ignored.

@Samuel Tai... Thanks for the speedy reply. Change made... rebooted... works fine. Thanks for your help.

Why don't you just tick the box labelled "Disable Hardware Offloading"?

@Patrick M. Hausen ... thanks for the reply. What is the impact on system utilization/performance of selecting "Disable Hardware Offloading" vs just adding '-vlanhwtag' to the Options field?

IIUC, the poor speed issue has something to do with VLAN tagging (I know what VLAN tags are, but I don't know exactly what is going on in this context). I saw from your thread that there are a bunch of other options that could also be disabled, but based on other comments and my experience, '-vlanhwtag' appears to be all that is needed to solve the speed problem. Does turning off only this one option still retain some of the benefits of hardware offloading?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
@NASbox, as is frequently the case, the answer is "it depends". Hardware offloading lets the network stack on a host operating system delegate certain operations to the network interface hardware. Take, for example, TCP checksum calculation. If the host CPU does not have to do that, things are supposedly "faster" for some value of "faster".
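
You can see which of these offload features are currently active on an interface by looking at the options line in its ifconfig output (em0 just as an example):

ifconfig em0 | grep options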

But ... these functions only make sense in a context where the host OS network stack is actually processing the packets. If you use an interface for VMs or VNET jails, the packets are bridged into the VMs or jails and processed by the guest OS's network stack. And since the guest OS does not have access to the network hardware of the host, no offloading is taking place anyway. Same for jails (in the case of VNET/bridge) - they run their own virtualised network stack, too.

The only penalty you will experience is if you use one and the same interface to host VMs/jails and to provide sharing services like SMB. Then SMB will be slightly penalized with hardware offloading disabled. One way to get the best of both VM/jail hosting and sharing services is to use two different interfaces. That requires a bit of preparation and advance setup of the bridge interface instead of relying on automatic creation. And it can shut down your network if you accidentally create a bridge connecting both interfaces. But done correctly, it's a pretty good solution.

Summary: if VM/jail hosting is your primary application, disabling hardware offloading is the documented, reasonable thing to do, because the host should not mess with any packet destined for a VM or jail.

HTH,
Patrick
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
The only penalty you will experience is if you use one and the same interface to host VMs/jails and to provide sharing services like SMB. Then SMB will be slightly penalized with hardware offloading disabled. One way to get the best of both VM/jail hosting and sharing services is to use two different interfaces. That requires a bit of preparation and advance setup of the bridge interface instead of relying on automatic creation. And it can shut down your network if you accidentally create a bridge connecting both interfaces. But done correctly, it's a pretty good solution.

@Patrick M. Hausen thanks... I don't use SMB very much... Do the same issues apply to NFS/SCP? When you say two interfaces, do you mean two separate physical NICs, or do you mean creating some sort of virtual interface?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Two physical NICs. One with hardware offloading disabled for all your VMs and jails, one with hardware offloading enabled for all your other services. And yes, TCP checksum offloading benefits everything that uses TCP in the host network stack. And it's hardware offloading - so it applies to the chipset in a physical interface.

But seriously, unless you are in a high-performance situation, you won't notice. It's not that without offloading you get only half the throughput. It's more that your main CPU will have some single-digit percentage more work to do, and that's it. So in a private network without many concurrent connections you probably won't even notice. This is not your typical bottleneck.

Way more serious is the fact that the FreeBSD bridge utilises only a single core in 12.x. This is going to change in 13.x, but there's nothing you can do about it right now - except splitting the traffic by using one interface with a bridge for VMs/jails and a different one for sharing without a bridge. Same situation as above again.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
Two physical NICs. One with hardware offloading disabled for all your VMs and jails, one with hardware offloading enabled for all your other services. And yes, TCP checksum offloading benefits everything that uses TCP in the host network stack. And it's hardware offloading - so it applies to the chipset in a physical interface.

@Patrick M. Hausen - Thanks... Doesn't sound like it's an issue for me either way. It spends more time scrubbing than working. I'm hoping to maybe self-host some web apps for use inside the firewall.

A bit off topic, but do you happen to know if modern NICs can be directly connected without a switch/hub using a standard cable? I think I remember reading that gigabit NICs have that kind of smarts built in.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
All NICs that come with the current standard RJ45 plug can be connected directly. For 100 Mbit/s and lower you frequently will need a specially wired "crossover cable". For 1 Gbit/s and up automatic crossover is mandated by the standard. Crossover meaning the transmit wires of one port are connected to the receive wires of the other one and vice versa.

Apple were, to my knowledge, the first to deliver auto crossover for 100 Mbit/s in their PowerBooks. At least they were the first to deliver it in numbers. Crossover cables have almost vanished today.
 

NASbox

Guru
Joined
May 8, 2012
Messages
650
All NICs that come with the current standard RJ45 plug can be connected directly. For 100 Mbit/s and lower you frequently will need a specially wired "crossover cable". For 1 Gbit/s and up automatic crossover is mandated by the standard. Crossover meaning the transmit wires of one port are connected to the receive wires of the other one and vice versa.

@Patrick M. Hausen thanks for the confirmation - I thought I remembered reading that, but I don't do this stuff day to day, so I wasn't sure.
 