Failed to start BCM5720 network interface in TrueNAS CORE 12.0-U6.1

ldfandian

Cadet
Joined
Nov 23, 2021
Messages
3
I installed a TrueNAS CORE VM of the latest version (12.0-U6.1) on my HPE ML110 Gen10 server (running the HPE-customized VMware ESXi 7.0U2).

I was able to pass the embedded NIC (NetXtreme BCM5720 Gigabit Ethernet) through to the VM via PCI passthrough successfully, but it fails to start inside the TrueNAS system.
Do I need a special network driver for it?
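
I assume the bge(4) driver is already compiled into the kernel TrueNAS CORE ships, so a missing module seems unlikely; to double-check, something like this should list it among the kernel's compiled-in drivers (an untested guess on my side):

==
root@truenas[~]# kldstat -v | grep -i bge
==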

(To make sure it is not a cable issue, I also attached the same passed-through NIC to another Ubuntu VM on the same ESXi box, and it works like a charm. So it should not be an issue on the ESXi side.)
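
For reference, the usual checks on the Ubuntu side are ip(8) and ethtool(8); the interface name enp11s0 below is a placeholder for whatever name the passed-through NIC gets:

==
$ ip -br link show enp11s0      # placeholder name; state UP means link is good
$ ethtool -i enp11s0            # should report "driver: tg3", the Linux driver for the BCM5720
==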

Any feedback is appreciated.

==
root@truenas[~]# pciconf -lv | grep -B4 network
subclass = SATA
vmx0@pci0:3:0:0: class=0x020000 card=0x07b015ad chip=0x07b015ad rev=0x01 hdr=0x00
vendor = 'VMware'
device = 'VMXNET3 Ethernet Controller'
class = network
subclass = ethernet
none1@pci0:11:0:0: class=0x020000 card=0x22e8103c chip=0x165f14e4 rev=0x00 hdr=0x00
vendor = 'Broadcom Inc. and subsidiaries'
device = 'NetXtreme BCM5720 2-port Gigabit Ethernet PCIe'
class = network
root@truenas[~]#

==
root@truenas[~]# ifconfig
vmx0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=e403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
ether 00:0c:29:dc:29:8a
inet 192.168.9.20 netmask 0xffffff00 broadcast 192.168.9.255
media: Ethernet autoselect
status: active
nd6 options=1<PERFORMNUD>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
inet 127.0.0.1 netmask 0xff000000
groups: lo
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
pflog0: flags=0<> metric 0 mtu 33160
groups: pflog

==
root@truenas[~]# dmesg
bge0: <HPE Ethernet 1Gb 2-port 332i Adapter, ASIC rev. 0x5720000> mem 0xe7ad0000-0xe7adffff,0xe7ae0000-0xe7aeffff,0xe7af0000-0xe7afffff irq 16 at device 0.0 on pci4
bge0: APE FW version: NCSI v1.5.18.0
bge0: CHIP ID 0x05720000; ASIC REV 0x5720; CHIP REV 0x57200; PCI-E
bge0: APE lock 1 request failed! request = 0x8404[0x1000], status = 0x8424[0x0000]
bge0: APE lock 4 request failed! request = 0x8410[0x1000], status = 0x8430[0x0000]
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: Try again
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: Try again
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: Try again
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: Try again
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: APE lock 0 request failed! request = 0x8400[0x1000], status = 0x8420[0x0000]
bge0: attaching PHYs failed
device_attach: bge0 attach returned 6
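
The "APE lock ... request failed" lines look like bge(4) cannot grab the hardware locks it shares with the NIC's on-board management firmware (the APE; the "APE FW version: NCSI v1.5.18.0" line above is that firmware). Would it be worth trying to disable ASF via the loader tunable documented in bge(4)? Just a guess on my part, not something I have confirmed helps:

==
# /boot/loader.conf -- hw.bge.allow_asf is documented in bge(4);
# setting it to 0 disables the ASF/IPMI handshaking with the APE firmware
hw.bge.allow_asf="0"
==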
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
The Broadcom chips often have sorta flaky drivers. You are additionally trying to add another layer of complexity on top, in the form of PCI passthru. I am not particularly shocked it doesn't work. This would be much more likely to work with the Intel ethernet chips.
 

ldfandian

Cadet
Joined
Nov 23, 2021
Messages
3
Thanks, I chose to use the virtual network (vmx0) instead... Nevertheless, it should give 90+% of the performance of PCI passthrough, which is OK for me.
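
If anyone wants to sanity-check that number, a quick iperf3 run between the TrueNAS VM and another host on the LAN should show how close vmx0 gets to gigabit line rate; the server address below is a placeholder:

==
$ iperf3 -s                                     # on another LAN host, e.g. the Ubuntu VM
root@truenas[~]# iperf3 -c 192.168.9.21 -t 30   # on TrueNAS, over the vmx0 virtual NIC
==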
 