Performance drop when bridging networks

AlexDRL

Cadet
Joined
Feb 16, 2022
Messages
8
Hey, I had a nice time updating from TrueNAS Core to SCALE, and I am loving it now that it has a Linux base :)

The thing is that when I add the interface to a bridge, to access the host from the VMs, throughput drops from ~9 Gbps to almost 1.5 Gbps.

I ran into this while testing the performance of a new 10G NIC that I bought, a cheap NC522SFP; it is connected directly to another machine with the same type of NIC, running Proxmox. Curiously enough, Proxmox supports bridging that NIC, and I get consistent results testing both from a VM on the Proxmox host and from the host itself.

In the future I will surely move the VMs from SCALE to Proxmox, so I won't need the bridge, but this performance penalty is still a bummer; I had even planned on running TrueNAS on both machines.

These are the logs:

Code:
root@freenas[~]# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.2.12, port 33396
[  5] local 192.168.2.14 port 5201 connected to 192.168.2.12 port 33400
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   986 MBytes  8.27 Gbits/sec
[  5]   1.00-2.00   sec   997 MBytes  8.36 Gbits/sec
[  5]   2.00-3.00   sec   990 MBytes  8.31 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-3.00   sec  3.40 GBytes  9.74 Gbits/sec                  receiver
iperf3: the client has terminated
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 192.168.2.12, port 33402
[  5] local 192.168.2.14 port 5201 connected to 192.168.2.12 port 33404
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec   193 MBytes  1.62 Gbits/sec
[  5]   1.00-2.00   sec   195 MBytes  1.63 Gbits/sec
[  5]   2.00-3.00   sec   195 MBytes  1.64 Gbits/sec
[  5]   3.00-4.00   sec   195 MBytes  1.63 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-4.00   sec   844 MBytes  1.77 Gbits/sec                  receiver
iperf3: the client has terminated
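For reference, the client side was just a plain iperf3 run against the SCALE box; the extra flags below are suggestions for narrowing this down, not what produced the logs above:

```shell
# From the peer (the Proxmox box at 192.168.2.12) towards the SCALE host
iperf3 -c 192.168.2.14 -t 10

# Worth trying as well: reverse direction and parallel streams,
# since bridge-related penalties are often asymmetric
iperf3 -c 192.168.2.14 -t 10 -R
iperf3 -c 192.168.2.14 -t 10 -P 4
```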
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
I'd research how bridges work in plain Linux and see if you can get some tuning hints from that community. I know from CORE that proper setup is essential and that vast performance improvements can be expected for TrueNAS 13, but I cannot help you with SCALE, unfortunately.
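On plain Linux you can at least see what the bridge looks like with iproute2; something like this (the interface names are examples, of course):

```shell
# Show bridges and their member ports
ip -d link show type bridge
bridge link show

# Compare the bridge's details (MTU, STP state, etc.) with the physical port's
ip -d link show br0
ip -d link show enp4s0
```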
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
a cheap NC522SFP,

There's a reason those are super-cheap. You might have better luck with one of the recommended cards.


There's lots of hardware that has little performance caveats, such as needing to disable offload functionality or adjusting interrupt strategies, and this can have an outsized impact on performance.
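On Linux the usual place to start poking at offloads is ethtool; a rough sketch, with a placeholder interface name:

```shell
# See which offloads the driver currently has enabled
ethtool -k enp4s0 | grep -E 'offload|segmentation'

# TSO/GSO/GRO/LRO are the classic suspects once bridging is involved;
# toggle them one at a time and re-run iperf3 after each change
ethtool -K enp4s0 tso off
ethtool -K enp4s0 gro off
ethtool -K enp4s0 lro off

# Interrupt coalescing is the other common knob
ethtool -c enp4s0
```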
 

AlexDRL

Cadet
Joined
Feb 16, 2022
Messages
8
Yeah, that was true for Core, where the NIC was not even supported, but this one has OOB support on Linux, and HP sold these cards for Linux servers... so until I mess with bridges it works okay. I think that almost 9 Gbps without tuning is pretty good; I am only complaining that changing the network topology cripples the performance...

I don't know much about how SCALE handles those bridges, but Proxmox seems to handle it better on the same hardware.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
HP sold those cards for Linux servers

Yes, and they sell lots of RAID cards for Linux servers too. That doesn't mean that I'm not telling people, usually several times a day, why their RAID card isn't compatible with ZFS.

See https://www.truenas.com/community/r...bas-and-why-cant-i-use-a-raid-controller.139/

As I said, it's likely a matter of specific offload settings. One of the problems you can run into here is that a NAS is an endpoint device, and for maximum performance, you typically want offload features if available. However, once you enable bridging, that changes.

My point is that Proxmox, as a non-endpoint device, is going to be natively tuned for best bridging performance because it'd be stupid not to be, while TrueNAS has to muddle around in both worlds, because it is acting as both endpoint (all the time) and non-endpoint (when running bridges). Some of the tuning stuff for this on FreeBSD is already well-known (and our best resident expert on that, IMO, @Patrick M. Hausen has already poked his head in here). However, using Scale and ALSO an unusual network card, you may find yourself doing a bit of unexpected off-roading as you're one of the first people to experience this on your card.

Risks of being an early adopter without the recommended hardware, alas.
 

AlexDRL

Cadet
Joined
Feb 16, 2022
Messages
8
Yes, and they sell lots of RAID cards for Linux servers too. That doesn't mean that I'm not telling people, usually several times a day, why their RAID card isn't compatible with ZFS.

See https://www.truenas.com/community/r...bas-and-why-cant-i-use-a-raid-controller.139/

Yeah, I read that already before buying a non-LSI card for expanding my previous TrueNAS Core installation, and it works perfectly on both Core and Scale.

As I said, it's likely a matter of specific offload settings. One of the problems you can run into here is that a NAS is an endpoint device, and for maximum performance, you typically want offload features if available. However, once you enable bridging, that changes.

Seems like this is what is causing it. My intention with this post was to raise the issue out of curiosity, not to complain; I can investigate or provide additional logs if needed. I am pretty sure I am not the only one running 10G and bridging the interfaces, so probably someone has already hit this problem and solved it...

My point is that Proxmox, as a non-endpoint device, is going to be natively tuned for best bridging performance because it'd be stupid not to be, while TrueNAS has to muddle around in both worlds, because it is acting as both endpoint (all the time) and non-endpoint (when running bridges). Some of the tuning stuff for this on FreeBSD is already well-known (and our best resident expert on that, IMO, @Patrick M. Hausen has already poked his head in here). However, using Scale and ALSO an unusual network card, you may find yourself doing a bit of unexpected off-roading as you're one of the first people to experience this on your card.

Risks of being an early adopter without the recommended hardware, alas.

I knew about this when installing this RC. I even tested the beta on the same Core installation, but backed off because I had not planned it well and did not have a direct replacement, on another box, for the plugins I was running on TrueNAS Core.

Anyways, and without intending to be rude, IMHO that hardware recommendation guide could be updated for TrueNAS SCALE once ixSystems has finished and polished it a bit more, and most of the enthusiasts have migrated to SCALE.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
that hardware recommendation guide could be updated for TrueNAS SCALE once ixSystems has finished and polished it a bit more, and most of the enthusiasts have migrated to SCALE.

I'm not sure that's going to happen quite that way, in either dimension.

In general, the hardware that we've promoted here on the forums as awesome for Core also happens to be awesome for Scale (or Linux generally). That is because the same companies that write drivers and optimize their stuff for FreeBSD, which is a bit more of a niche player than Linux, are also the same companies that optimize their stuff for Linux. There are a LOT of "almost-work" and "sorta-work" cards for both FreeBSD and Linux, and while these can differ greatly in the specifics, and sometimes things work only on one OS and not the other, you really don't want to use them if you can avoid it. Some of this just comes down to group experience. It's a lot harder to say things about Qlogic or Broadcom cards because these companies didn't work as hard (possibly at all) on driver support. Dumping Intel-level amounts of effort into a driver just to make sure it can do both offload and non-offload efficiently, for example, can be a costly venture.

Also, it isn't really clear that all the enthusiasts are going to migrate to SCALE. Linux is a dumpster fire and there's no compelling reason I plan to switch, and I know others who are fine with FreeBSD as well. FreeBSD's integration with ZFS is better, and it is both stable and mature as well.

None of this is intended to discourage you, if you wish to put in the effort. I'm all for inexpensive hardware deals, but on the other hand, it is easier to know more of the answers when the hardware matrix is smaller.
 

AlexDRL

Cadet
Joined
Feb 16, 2022
Messages
8
I'm not sure that's going to happen quite that way, in either dimension.

I can't see the future, but I think that if Linux has better hardware support overall, and most server environments are based on it, most people will shift from BSD to Linux if they are able to. E.g. I can use that card on SCALE and not on Core because of the Linux base; it seems the BSD driver was flaky and did not even work OOB.

In general, the hardware that we've promoted here on the forums as awesome for Core also happens to be awesome for Scale (or Linux generally). That is because the same companies that write drivers and optimize their stuff for FreeBSD, which is a bit more of a niche player than Linux, are also the same companies that optimize their stuff for Linux. There are a LOT of "almost-work" and "sorta-work" cards for both FreeBSD and Linux, and while these can differ greatly in the specifics, and sometimes things work only on one OS and not the other, you really don't want to use them if you can avoid it. Some of this just comes down to group experience. It's a lot harder to say things about Qlogic or Broadcom cards because these companies didn't work as hard (possibly at all) on driver support. Dumping Intel-level amounts of effort into a driver just to make sure it can do both offload and non-offload efficiently, for example, can be a costly venture.

That makes a lot of sense, thanks. In the end, for some home usage, if Linux supports that card at ~9 Gbps, and this HP costs $30 second-hand while the Intel costs $100 second-hand, I prefer to just buy the HP, use it on Linux, and spend the rest on more storage or another card, for example.

Also, it isn't really clear that all the enthusiasts are going to migrate to SCALE. Linux is a dumpster fire and there's no compelling reason I plan to switch, and I know others who are fine with FreeBSD as well. FreeBSD's integration with ZFS is better, and it is both stable and mature as well.

None of this is intended to discourage you, if you wish to put in the effort. I'm all for inexpensive hardware deals, but on the other hand, it is easier to know more of the answers when the hardware matrix is smaller.

Isn't ZFS on FreeBSD using a shared codebase with the former ZoL project? I imagine the code running there is not that different nowadays.
Of course, as I said, we can't see the future. Anyway, it is really nice that there are now two options, SCALE and Core; let's hope it stays that way, because I was happy with both versions.
I changed to SCALE basically for the Linux kernel and hardware support. I could have waited or spent a little more on a better card, but since I had decided to fiddle with SCALE anyway, I ended up buying it too :D
And no, I did not take your comment as discouragement; it was very useful and informative.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I can't see the future, but I think that if Linux has better hardware support overall, and most server environments are based on it, most people will shift from BSD to Linux if they are able to. E.g. I can use that card on SCALE and not on Core because of the Linux base; it seems the BSD driver was flaky and did not even work OOB.

I've been hearing about the "Linux better hardware support going to doom BSD" idea since before the turn of the century, and I'm still not convinced (this sentence is intended to be self-explanatory).

There are server OS's with significantly more restrictive hardware compatibility, ESXi being the classic example, and for the most part this has kept server hardware manufacturers from going too far off the rails into PC hardware crazytown.

In the end for some home usage, if Linux supports that card at ~9 Gbps, and this HP costs 30$ second-hand, and the Intel costs 100$ second-hand, I prefer to just buy the HP, use that on Linux and spend the other money on more storage or another card, for example.

Your choice of a cheap card that doesn't happen to work well with BSD doesn't mean that pricey used Intel cards are the only choice. The SolarFlare cards are $30 a pop on eBay and work fine, for example. And I can find cards that work well on BSD but are flakier on Linux. So, still not a clear winner here.

Isn't ZFS on FreeBSD using a shared codebase with the former ZoL project? I imagine the code running there is not that different nowadays.

Yes, but FreeBSD spent almost a decade and a half before that adjusting to and evolving to fit ZFS. That's why ARC on FreeBSD just magically works and uses available memory, while Linux feels very much like someone just bolted ZFS on top of bog standard Linux. In fact, Linux is so cruddy that ZoL defaults to using only 50% of the system memory for ARC. That just screams "amateur hour" to me. That, along with lots of other stupidisms in Linux, like systemd, initrd/initramfs, gratuitous differences from every other UNIX flavor, etc., well, you know ST:TOS "The Menagerie"? That line where Vina says "Everything works, but they had never seen a human"? That's how Linux feels to me. Someone described UNIX to them and they invented something that sorta worked and sorta looked like it. I'm just not a huge Linux fan I guess.
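If you want to see that 50% default for yourself, it's exposed as an OpenZFS module parameter (these are the standard ZoL paths; whether you *should* change it on SCALE is a separate question):

```shell
# 0 means "use the built-in default", which on Linux is half of RAM
cat /sys/module/zfs/parameters/zfs_arc_max

# Half of physical RAM in bytes, for comparison
awk '/MemTotal/ {print int($2 * 1024 / 2)}' /proc/meminfo

# Override at runtime (example: cap ARC at 12 GiB)...
echo $((12 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# ...and persist it across reboots
echo "options zfs zfs_arc_max=$((12 * 1024 * 1024 * 1024))" > /etc/modprobe.d/zfs.conf
```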
 

AlexDRL

Cadet
Joined
Feb 16, 2022
Messages
8
I've been hearing about the "Linux better hardware support going to doom BSD" idea since before the turn of the century, and I'm still not convinced (this sentence is intended to be self-explanatory).

There are server OS's with significantly more restrictive hardware compatibility, ESXi being the classic example, and for the most part this has kept server hardware manufacturers from going too far off the rails into PC hardware crazytown.

Probably that quote is similar to the one that goes "2022 is the year of Linux on desktop haha".

Your choice of a cheap card that doesn't happen to work well with BSD doesn't mean that pricey used Intel cards are the only choice. The SolarFlare cards are $30 a pop on eBay and work fine, for example. And I can find cards that work well on BSD but are flakier on Linux. So, still not a clear winner here.

I live in Europe, and on my country's usual second-hand market I did not even see that kind of NIC; the options were very, very limited.

Yes, but FreeBSD spent almost a decade and a half before that adjusting to and evolving to fit ZFS. That's why ARC on FreeBSD just magically works and uses available memory, while Linux feels very much like someone just bolted ZFS on top of bog standard Linux. In fact, Linux is so cruddy that ZoL defaults to using only 50% of the system memory for ARC. That just screams "amateur hour" to me. That, along with lots of other stupidisms in Linux, like systemd, initrd/initramfs, gratuitous differences from every other UNIX flavor, etc., well, you know ST:TOS "The Menagerie"? That line where Vina says "Everything works, but they had never seen a human"? That's how Linux feels to me. Someone described UNIX to them and they invented something that sorta worked and sorta looked like it. I'm just not a huge Linux fan I guess.

I did not mean that Linux is perfect. You have a point on all that "a thousand options and complex things", and also every distro keeps re-inventing the wheel; I think the "UNIX" philosophy is pretty dead in the Linux ecosystem nowadays.
But in the end, most of these things are completely abstracted (if you want) by the TrueNAS UI, so I think the non-tech people who just want to build a performant and cheap NAS won't care whether it is BSD or Linux; they will care whether it works for its purpose or not.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Probably that quote is similar to that one that "2022 is the year of Linux on desktop haha".

They do that every year, don't they? And without the haha.

You know, the thing is, Google had it approximately right with the Chromebook, but they were about ten years too early. Now I've been having fun with this conversation and this is presented in that spirit, but it is also sort of a serious issue. Throughout the 1980's and 1990's, I was a SunOS/X11 and then FreeBSD/X11 desktop guy. I had a Digital HiNote Ultra 2000 laptop that ran FreeBSD (hell yea!), but in the 2000's as the work changed, I had to run Windows for all the stupid insipid Windows-only crap. But as things have evolved over the last 10 years, now with HTML5 for the vSphere Client and other major "apps" now being HTML-based, it becomes more practical to move back towards a platform-agnostic model. On the server side, we're also seeing more use of high level interpreted languages such as Node and Python that don't care what's underneath, and with ARM on cell phones and on Raspberry Pi's and high core count server chips and M1 Macs, Windows is not the driver it once was. I think we can see a Linux or FreeBSD desktop as more viable in the future, as the nature of the desktop itself becomes less relevant.

But in the end, most of these things are completely abstracted (if you want) by the TrueNAS UI, so I think the non-tech people who just want to build a performant and cheap NAS won't care whether it is BSD or Linux; they will care whether it works for its purpose or not.

And that's correct. The part that has impressed me is that apparently large bits of the codebase have carried over without a ton of difficulty. That's impressive for abstraction (though not necessarily what you meant).
 

AlexDRL

Cadet
Joined
Feb 16, 2022
Messages
8
They do that every year, don't they? And without the haha.

You know, the thing is, Google had it approximately right with the Chromebook, but they were about ten years too early. Now I've been having fun with this conversation and this is presented in that spirit, but it is also sort of a serious issue. Throughout the 1980's and 1990's, I was a SunOS/X11 and then FreeBSD/X11 desktop guy. I had a Digital HiNote Ultra 2000 laptop that ran FreeBSD (hell yea!), but in the 2000's as the work changed, I had to run Windows for all the stupid insipid Windows-only crap. But as things have evolved over the last 10 years, now with HTML5 for the vSphere Client and other major "apps" now being HTML-based, it becomes more practical to move back towards a platform-agnostic model. On the server side, we're also seeing more use of high level interpreted languages such as Node and Python that don't care what's underneath, and with ARM on cell phones and on Raspberry Pi's and high core count server chips and M1 Macs, Windows is not the driver it once was. I think we can see a Linux or FreeBSD desktop as more viable in the future, as the nature of the desktop itself becomes less relevant.

Actually, I wanted to write "2022 is the year of Linux on desktop" hahaha. I don't think Linux is bad on the desktop, but there are things that need to improve a lot, like simplifying the graphics stack and a nicer alternative to Windows RDP. Most of these things are not Linux problems per se; they are part of the ecosystem, or poor Linux support from external devs (e.g. that Zoom screen-capture Wayland mess). I am testing in my lab two VMs, a Manjaro KDE desktop and a Windows 11 desktop, as servers for a "thin-client" setup on a very old laptop, and the Windows desktop's graphics and RDP performance beats Manjaro with Xorg by a lot. Honestly, I end up using the Windows one as the experience is far better, but I would really love to be running the Linux one, tbh.

Completely agree with your server comments, and now with Docker and multi-arch images you could even end up running the app on a multi-arch k8s cluster; it seems like magic.

And that's correct. The part that has impressed me is that apparently large bits of the codebase have carried over without a ton of difficulty. That's impressive for abstraction (though not necessarily what you meant).

I was also amazed by the abstraction and by how flawless the upgrade process is; being a software developer, that made my head explode.
 