VMXNET3 vs E1000E performance issue

ssts100

Dabbler
Joined
Feb 15, 2017
Messages
19
I am getting odd performance differences between VMXNET3 and E1000E.
SMB:
VMXNET3: seems to run at only ~1Gbps for download.
E1000E: getting ~3Gbps.


iSCSI: a VM running on it; tested local (C:) drive speed.
E1000E: Write 1.4GB/s, Read 300MB/s
VMXNET3: Write 300MB/s, Read 300MB/s

TrueNAS Core.
Storage is 4×2TB RI (read-intensive) NVMe SSDs in RAIDZ1.
10G connections across the entire physical path.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
storage is 4*2TB RI NVMe SSD in z1.


odd performance between VMXNET and E1000E.

Yes, this has been observed by a number of virtualization administrators over the years and doesn't seem to have a solid answer. E1000/E1000E typically has somewhat more consistent performance. In this case, it may be related to the use of RAIDZ1 (which is optimized for sequential access to large files) and the lock-step nature of transactions over the network, which is a natural impediment to getting "max speed" out of your 10G net.
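To illustrate the lock-step effect: if each request must complete before the next is issued, per-stream throughput is capped by request size divided by round-trip time. A minimal sketch — the 64 KiB request size and 0.5 ms round trip below are illustrative assumptions, not measurements from this thread:

```python
# Throughput ceiling of lock-step (one-request-at-a-time) I/O.
# Numbers used in the example are hypothetical, chosen only to show
# how a sub-millisecond round trip can cap a 10G link near "1Gbps".

def lockstep_ceiling_mbps(request_bytes: int, rtt_seconds: float) -> float:
    """Max throughput (megabits/s) when each request waits for the previous one."""
    return request_bytes / rtt_seconds / 1e6 * 8

# Example: 64 KiB requests with a 0.5 ms effective round trip
print(lockstep_ceiling_mbps(64 * 1024, 0.0005))  # ~1049 Mbps -- roughly "1Gbps"
```

Under those assumptions, no amount of link bandwidth helps: only larger requests, more outstanding requests, or lower latency raise the ceiling.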
 

diogen

Explorer
Joined
Jul 21, 2022
Messages
72
VMXNET3: seems to be running at 1Gbps only for download.
E1000E getting 3Gbps.
With modern hardware I'd expect the opposite results: VMXNET3 doing better than E1000(E).
At least this was my experience when running Dell PE720/740 with Intel X540-T2 connected directly to a storage box...

Others have similar results

What NICs do you use? Any switch in between? Jumbo frames?
What OS on VMs?
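On the jumbo-frames question: a common way to verify them end to end is a do-not-fragment ping whose payload exactly fills the MTU. A small sketch of the arithmetic, assuming IPv4 with no IP options:

```python
# Ping payload size that exactly fills a given MTU (IPv4 ICMP echo).
# Assumes a 20-byte IPv4 header with no options and an 8-byte ICMP header.
IP_HEADER = 20
ICMP_HEADER = 8

def max_ping_payload(mtu: int) -> int:
    return mtu - IP_HEADER - ICMP_HEADER

print(max_ping_payload(9000))  # 8972 -- e.g. ping -M do -s 8972 <host> on Linux
print(max_ping_payload(1500))  # 1472 -- same test for a standard frame
```

If the 8972-byte ping fails while the 1472-byte one succeeds, some hop in the path isn't passing jumbo frames.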
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Others have similar results

Yeah, no, that knife cuts both ways, "modern hardware" or not. It's very dependent on lots of factors, because "high performance" networking has become so complicated. It is really a weakest-link-in-the-chain sort of problem. Trying to benchmark this in a lab environment and benchmarking in a busy production environment typically end up with very different results.
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
With modern hardware I'd expect the opposite results: VMXNET3 doing better than E1000(E).
At least this was my experience when running Dell PE720/740 with Intel X540-T2 connected directly to a storage box...

Others have similar results

What NICs do you use? Any switch in between? Jumbo frames?
What OS on VMs?
This is what we see. We only use the E1000/E1000E series cards for older legacy operating systems that either don't have VMware Tools available (SCO Unix) or don't have a VMXNET3 driver. Our hardware spans everything from Dell PowerEdge R720xd and R620 servers to PowerEdge R650s and R7525s.
 

diogen

Explorer
Joined
Jul 21, 2022
Messages
72
Trying to benchmark this in a lab environment and benchmarking in a busy production environment typically end up with very different results.
Good point.
Not sure if our environment would qualify as "busy production", but we never saw E1000E outperform VMXNET3 on our VMs.
They were interchangeable for a while, as long as we stayed at 1Gbps, but with the switch to 10Gbps VMXNET3 became the default.
With some RJ-45 transceivers, VMXNET3 had issues when jumbo frames were turned on (the link would drop after about an hour).
Switching transceivers fixed that (jumbo frames weren't really needed; a regular 1' Cat6 crossover cable was doing 9.5+Gbps)...

Dell PowerEdge R720xd and R620s to PowerEdge R650s and R7525s.
What are the physical 10G NICs? Intel? Broadcom? Mellanox?
What are the OSs running on the VMs? Windows? Linux? Other?
 

diogen

Explorer
Joined
Jul 21, 2022
Messages
72
VMXNET3: seems to be running at 1Gbps only for download.
E1000E getting 3Gbps.
If your networking is 10G and 3Gbps is the best you get (regardless of whether you use E1000E or VMXNET3), something is wrong...

How did you test: real-life performance or iperf? If real-life, what size files, and how many?
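For the "local (C:) speed" style of test, a crude sequential-write timing sketch can serve as a sanity check. This is only a sketch under stated assumptions — chunk and total sizes are arbitrary, and at this size the result is likely cache-bound, so treat it as a rough check rather than a benchmark:

```python
# Crude sequential-write throughput check. The 64 MiB total is a deliberate
# assumption for a quick run; for a meaningful number the file should be
# much larger than RAM so OS caching doesn't dominate.
import os
import tempfile
import time

CHUNK = 1024 * 1024       # 1 MiB per write
TOTAL = 64 * CHUNK        # 64 MiB total

def sequential_write_mbps(path: str) -> float:
    """Time a sequential write of TOTAL bytes, fsync included, return MB/s."""
    buf = os.urandom(CHUNK)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    return TOTAL / elapsed / 1e6

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name
try:
    print(f"write: {sequential_write_mbps(target):.0f} MB/s")
finally:
    os.remove(target)
```

For the network itself, `iperf3` between the VM and the TrueNAS box isolates the NIC/vNIC path from the storage stack, which is exactly the distinction the question is probing.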
 

firesyde424

Contributor
Joined
Mar 5, 2019
Messages
155
What are the physical 10G NICs? Intel? Broadcom? Mellanox?
What are the OSs running on the VMs? Windows? Linux? Other?
Our older systems are running 10GbE Intel X520-DA2 and Intel X710-DA2 cards. Our newer systems are running 25GbE Intel XXV710-DA2 and 25GbE Broadcom Dual Advanced cards. Our guest OSes run just about anything you can think of. Some of our older customer systems are running SCO Unix from the '90s. We also run a few Windows Server 2003, 2003R2, 2008, 2008R2, and 2012R2 servers. The vast majority of our VMs run either CentOS 7 or Windows Server 2016.
 