Internet Speed TrueNAS - VMs

Rickinfl

Contributor
Joined
Aug 7, 2017
Messages
165
Hi,

I run the speedtest CLI from the TrueNAS command line and get speeds of about 400 Mbit/s. When I run the speedtest from a VM on TrueNAS I get 2 Mbit/s down. The VMs are all Linux and I'm using the VirtIO adapter.

Any suggestions on where to start looking?

Thanks
 

Rickinfl

What's even stranger is this. I have 400 down and 25 up. When I run the speedtest on the Linux VM I get this:


Download: 1.19 Mbit/s
Upload: 23.29 Mbit/s

So the upload works fine; only the download is very slow.
 

Rickinfl

Did some tests... I really feel this has something to do with bhyve.

I SSH into TrueNAS and SSH into my Linux VM to run these tests (speedtest). This is crazy: ALL my VMs have this speed, but the TrueNAS box is at normal speed. I've tried the Intel NIC driver and the VirtIO NIC driver; no change. I have no clue where to look or what to change at this point.
Notice the upload speeds are correct. It's just the download speed on the VMs that is the issue. I have 5 VMs running and all do the same thing.



Testing download speed................................................................................
Download: 404.54 Mbit/s
Testing upload speed......................................................................................................
Upload: 23.40 Mbit/s
root@TruNAS#


Testing download speed................................................................................
Download: 2.32 Mbit/s
Testing upload speed......................................................................................................
Upload: 24.16 Mbit/s
root@main-server:~#
 

Rickinfl

This is sad. I used to be able to get support on here. It seems things have died down since everything switched to TrueNAS.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Have you disabled hardware acceleration on your physical network interfaces?
 

Rickinfl

Where do I check to see?
 

Patrick M. Hausen

Network -> Interfaces -> Disable Hardware Acceleration
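For reference, the same state can be inspected and toggled from the FreeBSD shell with ifconfig; a rough sketch, assuming a physical interface named igb0 (a placeholder; your NIC name may differ):

```shell
# Show the interface; active offload features appear as TXCSUM, RXCSUM,
# TSO4, LRO, etc. in the "options=" line
ifconfig igb0

# Disable the common hardware-offload features by hand
# (a leading "-" turns each capability off)
ifconfig igb0 -txcsum -rxcsum -tso -lro
```

Note that changes made with ifconfig alone don't persist across reboots; the checkbox in the UI is the supported way to make this stick on TrueNAS.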
 

Rickinfl

Thank You! That Worked!!!
 

Patrick M. Hausen

That's become sort of common lore in FreeBSD. I still have to sit down for a beer with e.g. George Neville-Neil and/or Kristof Provost to get a complete picture but whenever something goes awry with stacked layer 2 features like bridge on top of vlan on top of lagg on top of physical ... the first question is always "did you disable HW offloading?".

There's some logic to that because e.g. TCP checksum offloading to the physical card is a layer violation in the above example. We have multiple levels of layer 2 decapsulation before the packet should even hit ip_input() in the kernel.

OTOH there are mechanisms in e.g. the bridge code that remove IPv6 addresses from member interfaces as soon as they are added - because that would pose a scope violation and the address belongs on the bridge interface. So I could envision a similar mechanism for all this other stuff that needs to be disabled as soon as an interface becomes part of a bridge. Or a lagg. Or whatever.

I even had a regression like that with a virtual instance (DigitalOcean droplet) and PF NAT. What the ...? :wink: I disabled the HW features like you did and everything was fine. On non-existing "cloud" hardware ...

So glad it's working now. There is definitely work to do in this area in FreeBSD land. And though I cannot code I can write proper bug reports or hint at the Foundation about priorities (coupled with a donation that does work, folks!) ...
 

Rickinfl

I'm running 2 NICs in lagg mode. I just turned off HW offloading on the lagg, not the physical NICs, and everything seems to be working. I'm pretty sure this started after I upgraded from FreeNAS to TrueNAS; I didn't have speed issues in FreeNAS, so this only seemed to start after the upgrade. So why do we need to disable HW offloading in TrueNAS, or was it something with the upgrade? That would be a good question to ask. I believe bhyve might be the issue, as the TrueNAS server's speed was just fine; it was only the VMs that had a problem. As I posted above, you can see the differences in the CLI. The TrueNAS box also goes through the lagg.

Not sure, but I'm so glad this is working again; I really needed to update my Linux boxes.

I did create a ticket and pointed it to this thread, so maybe a dev will see it.

Again... Thanks for the help.

Rick
 

Patrick M. Hausen

That's how it is supposed to work at the moment. As soon as you use VMs or jails, a bridge interface is created to connect them to your physical interface (or the lagg, in your case). A bridge does not work with HW offloading ...

You can get more control over the entire process by manually creating an interface named "bridge0" with "lagg0" as its only member. Then move the IP address to that bridge, and disable HW offloading on the physical ports, not the lagg. This is how it is documented to be configured.

See https://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-bridging.html
If the bridge host needs an IP address, set it on the bridge interface, not on the member interfaces.

That's why I recommend creating the bridge in the UI if you are using VMs. You need to reboot with your VMs' autostart set to "off", so no bridge is created automatically. Then you can set up the interfaces manually, and afterwards set autostart back to "on".
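A rough sketch of that manual layout in FreeBSD ifconfig terms (bridge0 and lagg0 follow the post; igb0, igb1, and the address are placeholders for your hardware, and on TrueNAS the persistent configuration still belongs in the UI):

```shell
# Create the bridge and add the lagg as its only member
ifconfig bridge0 create
ifconfig bridge0 addm lagg0 up

# Per the Handbook, the host's IP belongs on the bridge,
# not on the member interface
ifconfig lagg0 inet 192.168.1.10 delete
ifconfig bridge0 inet 192.168.1.10/24

# Disable HW offloading on the physical ports, not the lagg
ifconfig igb0 -txcsum -rxcsum -tso -lro
ifconfig igb1 -txcsum -rxcsum -tso -lro
```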
 

Rickinfl

Makes sense. It's just strange that it worked in FreeNAS and stopped in TrueNAS; I didn't change anything but upgrade. Oh well, it's working, so I'm happy. Maybe a dev can figure it out.

I'll have to read up on network bridging. Thanks for the link.
 

Patrick M. Hausen

The networking subsystem got a major overhaul with the move of most hardware drivers to a common library called "iflib". Remember that there was a major FreeBSD version upgrade from 11.3 to 12.2 when you went from FreeNAS to TrueNAS CORE ...
This is all code outside the exclusive control of iXsystems; they have to rely on the FreeBSD base, although of course they work closely with the FreeBSD devs.
 