FreeNAS 9.3 doesn't see VMXNET3 NICs


ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
I want 10GBE for inter-VM file sharing, but somehow FreeNAS only sees the E1000 NIC and I'm stuck with 1GBE. All the other guest OSes (VMs) see the VMXNET3 NICs without any hiccups.
VMware Tools: Running (Guest managed)

Maybe dmesg gives a clue?

Code:
[root@freenas ~]# dmesg | grep net
pci11: <network, ethernet> at device 0.0 (no driver attached)
pci19: <network, ethernet> at device 0.0 (no driver attached)
bridge0: Ethernet address: 02:82:19:9a:9e:00
epair0a: Ethernet address: 02:d9:19:00:05:0a
epair0b: Ethernet address: 02:d9:19:00:06:0b
ng_ether_ifnet_arrival_event: can't re-name node epair0b
epair1a: Ethernet address: 02:b3:a6:00:06:0a
epair1b: Ethernet address: 02:b3:a6:00:07:0b
ng_ether_ifnet_arrival_event: can't re-name node epair1b
epair2a: Ethernet address: 02:d8:cf:00:07:0a
epair2b: Ethernet address: 02:d8:cf:00:08:0b
ng_ether_ifnet_arrival_event: can't re-name node epair2b
hhook_vnet_uninit: hhook_head type=1, id=1 cleanup required
hhook_vnet_uninit: hhook_head type=1, id=0 cleanup required
hhook_vnet_uninit: hhook_head type=1, id=1 cleanup required
hhook_vnet_uninit: hhook_head type=1, id=0 cleanup required
hhook_vnet_uninit: hhook_head type=1, id=1 cleanup required
hhook_vnet_uninit: hhook_head type=1, id=0 cleanup required
bridge0: Ethernet address: 02:82:19:9a:9e:00
epair0a: Ethernet address: 02:16:59:00:05:0a
epair0b: Ethernet address: 02:16:59:00:06:0b
ng_ether_ifnet_arrival_event: can't re-name node epair0b
epair1a: Ethernet address: 02:6b:84:00:06:0a
epair1b: Ethernet address: 02:6b:84:00:07:0b
ng_ether_ifnet_arrival_event: can't re-name node epair1b
hhook_vnet_uninit: hhook_head type=1, id=1 cleanup required
hhook_vnet_uninit: hhook_head type=1, id=0 cleanup required
hhook_vnet_uninit: hhook_head type=1, id=1 cleanup required
hhook_vnet_uninit: hhook_head type=1, id=0 cleanup required
WARNING: VIMAGE (virtualized network stack) is a highly experimental feature.                                                      
em0: Ethernet address: 00:0c:29:75:a9:ee
pci11: <network, ethernet> at device 0.0 (no driver attached)
pci19: <network, ethernet> at device 0.0 (no driver attached)
bridge0: Ethernet address: 02:82:19:9a:9e:00
epair0a: Ethernet address: 02:b7:b4:00:05:0a
epair0b: Ethernet address: 02:b7:b4:00:06:0b
ng_ether_ifnet_arrival_event: can't re-name node epair0b
epair1a: Ethernet address: 02:b2:16:00:06:0a
epair1b: Ethernet address: 02:b2:16:00:07:0b
ng_ether_ifnet_arrival_event: can't re-name node epair1b
[root@freenas ~]#
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I want 10GBE for inter-VM file sharing, but somehow FreeNAS only sees the E1000 NIC and I'm stuck with 1GBE.

Well, if you only have a 1GbE physical link and you're running traffic across it, vmxnet3 isn't going to magically make that go faster.

However, for the situation you describe (traffic within a single hypervisor), I have no idea what causes you to think that the E1000 limits you to "1GBE" (whatever the eff that is; do they even MAKE 8-gigabit Ethernet?).

Did you bother to install the vmxnet3 driver?
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
I did the following:

Code:
# mkdir /mnt/cdrom
# mount -t cd9660 /dev/iso9660/VMware\ Tools /mnt/cdrom/
# cp /mnt/cdrom/vmware-freebsd-tools.tar.gz /root/
# tar -zxmf vmware-freebsd-tools.tar.gz
# cd vmware-tools-distrib/lib/modules/binary/FreeBSD9.0-amd64
# cp vmxnet3.ko /boot/modules

Added these tunables (see the loader.conf sketch below):
vmxnet3_load
vmxnet_load
vmmemctl_load

All according to
https://b3n.org/freenas-9-3-on-vmware-esxi-6-0-guide/
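For reference, if those are entered as loader-type tunables, the resulting /boot/loader.conf entries should look roughly like this; a minimal sketch, assuming the .ko files actually sit in /boot/modules:

Code:
# /boot/loader.conf -- sketch of the three tunables as loader entries
vmxnet3_load="YES"
vmxnet_load="YES"
vmmemctl_load="YES"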

How can I check whether the drivers are installed?

Since all the guides dictate that you should use VMXNET3, I just assumed it's preferred, or required, for efficient network sharing between guest OSes.
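A quick way to check from the FreeNAS shell whether the modules were actually loaded into the running kernel is kldstat; a minimal sketch using standard FreeBSD commands (not from the original guide):

Code:
kldstat | grep -i vmxnet           # list loaded kernel modules, look for the vmxnet drivers
kldload /boot/modules/vmxnet3.ko   # try loading the module by hand...
dmesg | tail                       # ...and check for any errors it reports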
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
pciconf -lv:

Code:
none2@pci0:11:0:0: class=0x020000 card=0x07b015ad chip=0x07b015ad rev=0x01 hdr=0x00
    vendor     = 'VMware'
    device     = 'VMXNET3 Ethernet Controller'
    class      = network
    subclass   = ethernet
none3@pci0:19:0:0: class=0x020000 card=0x07b015ad chip=0x07b015ad rev=0x01 hdr=0x00
    vendor     = 'VMware'
    device     = 'VMXNET3 Ethernet Controller'
    class      = network
    subclass   = ethernet
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
Code:
[root@freenas /usr/local/lib/vmware-tools/modules/drivers]# ls
vmblock.ko  vmhgfs.ko  vmmemctl.ko  vmxnet.ko
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
Hm, so the drivers were not installed properly.

Code:
[root@freenas /boot/modules]# ls
arcsas.ko  geom_raid5.ko  vboxdrv.ko     vboxnetflt.ko  vmxnet.ko
fuse.ko    linker.hints   vboxnetadp.ko  vmmemctl.ko    vmxnet3.ko

They are located here, though.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I would think you'd actually need to compile them for the target, but maybe this is no longer needed by FreeBSD.
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
Well, the E1000 only links at 1GbE, not 10GbE.

Code:
[root@freenas ~]# ifconfig em1
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
        ether 00:0c:29:75:a9:da
        inet 192.168.2.50 netmask 0xffffff00 broadcast 192.168.2.255
        nd6 options=9<PERFORMNUD,IFDISABLED>
        media: Ethernet autoselect (1000baseT <full-duplex>)

Two other guest OSes connected to the same vSwitch link at 10GbE, but they are using VMXNET3.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So an imaginary Ethernet card connects to an imaginary Ethernet switch and, because the driver is for a physical card, shows *some* sort of link speed. Since it's a driver for a 1Gbps card, why would it even have the capability to report a faster speed?

Or do you think that VMware actually engineered it so that it was somehow rate-limited to 1Gbps? What would the point of that be? The amount of effort needed to rate limit a connection artificially isn't worth it.

The E1000 will easily go faster than 1Gbps on communications within the same hypervisor, or on a hypervisor with 10Gbps Ethernet.

Likewise, just because the vmxnet3 shows that it "links" at 10Gbps, that means jack. You usually cannot get anywhere near that speed out of it.

Here's an experiment for you. Take your hypervisor. Let's say it has 32GB of RAM. Create a VM with 64GB of RAM. Start the VM. Watch the system POST at more RAM than your hypervisor has. Heck, install an OS and watch it say that it has that much RAM available. Does this actually mean your hypervisor suddenly developed double the RAM? No. Of course not. It's virtualization.

Or, just try testing the E1000 with iperf.

Code:
root@freebsd9-play:~ # iperf -w 1024k -c 10.252.2.77
------------------------------------------------------------
Client connecting to 10.252.2.77, TCP port 5001
TCP window size: 1.00 MByte (WARNING: requested 1.00 MByte)
------------------------------------------------------------
[  3] local 10.252.2.75 port 35658 connected with 10.252.2.77 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  4.70 GBytes  4.00 Gbits/sec
root@freebsd9-play:~ # route -n get 10.252.2.77
   route to: 10.252.2.77
destination: 10.252.2.0
       mask: 255.255.255.0
        fib: 0
  interface: em0
      flags: <UP,DONE,PINNED>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      1500         1         0
root@freebsd9-play:~ # ifconfig em0
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
        ether 00:50:56:99:0a:2e
        inet 10.252.2.75 netmask 0xffffff00 broadcast 10.252.2.255
        inet6 fe80::250:56ff:fe99:a2e%em0 prefixlen 64 scopeid 0x1
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
root@freebsd9-play:~ #


Look at that, 4Gbps over a link that claims to be 1000baseT. It's virtual hardware on a virtual machine. What it "tells" you doesn't necessarily have any relationship at all to reality.
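For completeness, the receiving end of a test like this is just iperf running in server mode on the other VM; a minimal sketch (the window size simply mirrors the client's):

Code:
iperf -s -w 1024k    # run on the receiving VM, then point the iperf client at its IP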
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
Well, I'd prefer these numbers:
iperf server on Ubuntu (VMXNET3) and client on Windows 10 (VMXNET3):

Code:
C:\Program Files (x86)\jperf\bin>iperf -i 1 -w 1024K -M 9000 -c 192.168.2.60
------------------------------------------------------------
Client connecting to 192.168.2.60, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[240] local 192.168.2.100 port 50269 connected with 192.168.2.60 port 5001
[ ID] Interval       Transfer     Bandwidth
[240]  0.0- 1.0 sec  1.61 GBytes  13.8 Gbits/sec
[240]  1.0- 2.0 sec  2.14 GBytes  18.3 Gbits/sec
[240]  2.0- 3.0 sec  2.07 GBytes  17.8 Gbits/sec
[240]  3.0- 4.0 sec  2.02 GBytes  17.3 Gbits/sec
[240]  4.0- 5.0 sec  2.08 GBytes  17.9 Gbits/sec
[240]  5.0- 6.0 sec  2.06 GBytes  17.7 Gbits/sec
[240]  6.0- 7.0 sec  2.04 GBytes  17.5 Gbits/sec
[240]  7.0- 8.0 sec  1.97 GBytes  16.9 Gbits/sec
[240]  8.0- 9.0 sec  2.06 GBytes  17.7 Gbits/sec
[240]  9.0-10.0 sec  2.01 GBytes  17.3 Gbits/sec
[240]  0.0-10.0 sec  20.1 GBytes  17.2 Gbits/sec
[240] MSS and MTU size unknown (TCP_MAXSEG not supported by OS?)

iperf server on FreeNAS with E1000, and the same client with the same settings on Windows 10 (VMXNET3):

Code:
C:\Program Files (x86)\jperf\bin>iperf -i 1 -w 1024K -M 9000 -c 192.168.2.50
------------------------------------------------------------
Client connecting to 192.168.2.50, TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
[240] local 192.168.2.100 port 50281 connected with 192.168.2.50 port 5001
[ ID] Interval       Transfer     Bandwidth
[240]  0.0- 1.0 sec   352 MBytes  2.96 Gbits/sec
[240]  1.0- 2.0 sec   270 MBytes  2.27 Gbits/sec
[240]  2.0- 3.0 sec   283 MBytes  2.37 Gbits/sec
[240]  3.0- 4.0 sec   329 MBytes  2.76 Gbits/sec
[240]  4.0- 5.0 sec   350 MBytes  2.94 Gbits/sec
[240]  5.0- 6.0 sec   341 MBytes  2.86 Gbits/sec
[240]  6.0- 7.0 sec   355 MBytes  2.98 Gbits/sec
[240]  7.0- 8.0 sec   354 MBytes  2.97 Gbits/sec
[240]  8.0- 9.0 sec   356 MBytes  2.98 Gbits/sec
[240]  9.0-10.0 sec   330 MBytes  2.77 Gbits/sec
[240]  0.0-10.0 sec  3.24 GBytes  2.78 Gbits/sec
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
I ran a quick disk bench from the Windows VM.

Code:
-----------------------------------------------------------------------
CrystalDiskMark 5.1.0 x64 (C) 2007-2015 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) : 566.280 MB/s
  Sequential Write (Q= 32,T= 1) : 257.509 MB/s
  Random Read 4KiB (Q= 32,T= 1) : 152.573 MB/s [ 37249.3 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :  91.913 MB/s [ 22439.7 IOPS]
         Sequential Read (T= 1) : 403.672 MB/s
        Sequential Write (T= 1) : 286.476 MB/s
   Random Read 4KiB (Q= 1,T= 1) :  25.787 MB/s [  6295.7 IOPS]
  Random Write 4KiB (Q= 1,T= 1) :  15.040 MB/s [  3671.9 IOPS]

  Test : 4096 MiB [Y: 10.3% (1445.3/13992.0 GiB)] (x1) [Interval=5 sec]
  Date : 2015/12/08 1:14:10
    OS : Windows 10 Professional [10.0 Build 10586] (x64)

Currently, the E1000 performance (about 2.8 Gbit/s, roughly 350 MB/s) fits the FreeNAS write speed nicely.
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
I can live with 2 Gbps for a while. What I can't live with is the slow I/O performance.

I'm downloading torrents at 30MBps in a Windows VM, saving directly to a CIFS share. I've got a 600 Mbit link at home and can locally saturate that against an SSD.

So should I try async writes?
Get an S3700 as a ZIL log?

Can an old S3700 handle heavy torrent traffic?

Sent from my HTC One via Tapatalk
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I can live with 2 Gbps for a while. What I can't live with is the slow I/O performance.

I'm downloading torrents at 30MBps

Is that 30Mbps or 240Mbps? ("MBps" means megabytes per second; "Mbps" means megabits.)

in a Windows VM, saving directly to a CIFS share. I've got a 600 Mbit link at home and can locally saturate that against an SSD.

So should I try async writes?

Why wouldn't you be using async writes for cruft-grade traffic?

Get an S3700 as a ZIL log?

SLOG, you mean? Why would you have any significant level of sync writes?

Can an old S3700 handle heavy torrent traffic?

The S3700 and its follow-on, the S3710, are among the hardiest SSDs out there. If you can't store it on them, you probably can't store it at all.
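For reference, a minimal sketch of how the sync policy on the dataset behind the share could be checked and, for cruft-grade traffic, relaxed; the dataset name below is hypothetical:

Code:
zfs get sync tank/torrents           # shows standard | always | disabled
zfs set sync=disabled tank/torrents  # force async behavior for this dataset only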
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
The S3700 is specced at 100 MBps and 19,000 IOPS; that's why I wondered about the S3700.


Sent from my HTC One via Tapatalk
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, the 100GB S3700 is specced at 200 MBytes/sec write (see Table 4). At a 4KB block size, committing 19,000 blocks per second works out to about 77 MBytes/sec (19,000 × 4 KiB) even under a random workload; but SLOG writes are sequential in nature, so the real problem turns out to be latency in the whole sync-write-to-SSD process.
 

ondjultomte

Contributor
Joined
Aug 10, 2015
Messages
117
OK, latency is the limiting factor, so you really want an NVMe drive. I'll try an S3700 and see what it does to the benchmarks.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The S3700 is a SATA device. Try an NVMe device like the P3700 or the Intel 750 if you want to reduce latency. Adding a SLOG device always reduces performance; it's merely a question of "by how much".
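If a SLOG is added anyway, a minimal sketch of attaching one from the shell; the pool name and the NVMe device node are assumptions, and in FreeNAS this would normally be done through the GUI instead:

Code:
zpool add tank log nvd0    # attach a dedicated log (SLOG) vdev; "tank" and "nvd0" are hypothetical
zpool status tank          # confirm the log vdev shows up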
 