What I don't understand is the way the VMXNET3 network interface is meant to work. I'm a bit lost when I read about it because I think it's for an internal network interface within the ESXi host, not for external NIC interfaces. I'd like to understand this better. I too found that link you provided above and read through it. Maybe I'm ignorant but I'm not grasping VMXNET3. If VMXNET3 is just an internal network, then how do I connect to the outside world? I'm sure it works but I don't see it based on what I've read. Maybe more research is required. I can see how the faster interface would help local VMs access data faster.
You may not want to use the vmxnet3 driver; the benefits over the E1000 are not that great and sometimes there are strange issues with vmxnet3. I guess it is better these days.
With both the vmxnet3 and the E1000, you are dealing with emulated hardware, as @Mirfster points out. You are talking to a virtual network switch that's built into ESXi (possibly one of several virtual switches). Normally a VM has no idea what physical port might be involved in communications - or even if there IS a physical port!
So let's say I have a virtual E1000 on a machine named "freebsd-81r-amd64-lab1". It thinks it is a local untagged 1Gbps ethernet on an E1000. In fact, the ESXi vSwitch will put it on vlan 803, and it uplinks out of the host to one of our two core 10G/40G switches. (The sketch below shows how to pull that port group / VLAN mapping out of the API, if you're curious.)
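None of that tagging or uplink business is visible to the guest, but it's all sitting right there in the vSphere API if you want to poke at it. Here's a rough pyVmomi sketch - the hostname, credentials, and the assumption of a standalone ESXi host (connected to directly, not through vCenter) are just placeholders for the example:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Placeholder hostname/credentials; assumes a direct connection to a standalone
# ESXi host, where the inventory is just one datacenter -> one ComputeResource.
ctx = ssl._create_unverified_context()  # lab box with a self-signed cert
si = SmartConnect(host="esxi-lab.example.com", user="root", pwd="secret",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    dc = content.rootFolder.childEntity[0]        # the lone datacenter
    host = dc.hostFolder.childEntity[0].host[0]   # the HostSystem itself

    net = host.config.network
    for vsw in net.vswitch:
        # pnic entries are keys like "key-vim.host.PhysicalNic-vmnic5"
        print(f"{vsw.name}: uplinks = {list(vsw.pnic)}")
    for pg in net.portgroup:
        # The VLAN tag lives here, on the port group; the VM's E1000/vmxnet3
        # just sees plain untagged ethernet.
        print(f"  portgroup '{pg.spec.name}' on {pg.spec.vswitchName}: "
              f"vlan {pg.spec.vlanId}")
finally:
    Disconnect(si)
```

The guest-side adapter stays a boring untagged NIC either way; everything interesting happens at the port group and vSwitch level.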
Now, if there was another machine right next door to it on the same host that it was talking to, it might well get several Gbps (I've seen 4-5 pretty easily) out of that "1Gbps" E1000 interface. There's nothing in the hypervisor that deliberately limits the speed to 1Gbps. It will also happily exceed 1Gbps out those 10Gbps ethernet uplinks, and if the switch on vmnic5 were to crash while it was pumping out data, the switch on vmnic3 would take over very rapidly and there would be barely a blip in network throughput. The vmxnet3 and E1000 interfaces are just there because there has to be some sort of mechanism to get data into and out of a VM.
So you may also notice that it's "vSwitch3"; it is pretty normal for an ESXi box to have multiple vSwitches and multiple network interfaces in a more complex environment.
A vSwitch that has no physical adapters associated with it can only communicate with other VMs on the same host. This is also useful in some scenarios. For example, if you were creating a firewall environment, you might have a three-legged pfSense box with one interface connected to vSwitch0, which has a physical interface on it that attaches to your cable modem; a second interface connected to vSwitch1, which is where you attach VMs in a DMZ; and a third interface connected to vSwitch2, which has the other physical interface. You can then attach the VMs you want visible only internally to vSwitch2, and the VMs doing Internet services (OwnCloud, etc.) to the vSwitch1 DMZ net. (There's a rough API sketch of the internal-only part just below.)
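If you'd rather script that than click through the GUI, the same thing can be done through the API. Another rough pyVmomi sketch, reusing the host object found in the earlier snippet; the vSwitch and port group names here are made up for the example, and a vSwitch created with no uplinks is exactly the "internal only" case described:

```python
from pyVmomi import vim

# 'host' is the HostSystem object found the same way as in the earlier sketch.
netsys = host.configManager.networkSystem

# A vSwitch with no physical uplinks: VMs attached to it can only talk to
# each other on this host.
netsys.AddVirtualSwitch(
    vswitchName="vSwitch1",
    spec=vim.host.VirtualSwitch.Specification(numPorts=128))

# A port group on it for the pfSense DMZ leg and the DMZ guests.
netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="DMZ",
    vlanId=0,                      # untagged inside the host
    vswitchName="vSwitch1",
    policy=vim.host.NetworkPolicy()))
```

You'd then point the pfSense DMZ vNIC and the DMZ guests at the "DMZ" port group in each VM's settings.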
So you really need to imagine that your virtualization host actually represents another ethernet switch (or several) and that your VMs attach to those.
But the vSwitches won't actually do switching between the physical interfaces, so it isn't possible to get a quad port GbE card, stick it in your ESXi host, and attach all the physical ports to a single vSwitch to get rid of a physical ethernet switch. :-/
Now that you've hopefully got a better idea of how that works, I'll also mention that you can do PCI passthru of a physical network interface in many cases. This obviously avoids the whole vSwitch mechanism and has other pros and cons.
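If you want to see whether your hardware is even a candidate for that, the passthru capability of each PCI device is also exposed through the API. One last rough sketch, again reusing the host object from the first snippet; actually enabling passthru and attaching the device to a VM are separate steps and typically need a host reboot:

```python
# 'host' is the same HostSystem object as in the first sketch.
pci_by_id = {dev.id: dev for dev in host.hardware.pci}

for info in host.config.pciPassthruInfo:
    if info.passthruCapable:
        dev = pci_by_id.get(info.id)
        name = dev.deviceName if dev else "?"
        print(f"{info.id}: {name} (enabled = {info.passthruEnabled})")
```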