Virtual Intel NIC Kernel Panic

salty72

Cadet
Joined
May 25, 2021
Messages
3
Hi,

I'm trying to build a virtual TrueNAS instance on KVM, but I'm getting a kernel panic when booting the installer ISO.
The error appears to be related to the Intel virtual function driver. The physical hardware is an Intel I350: the physical function stays with the host, and 4 SR-IOV virtual function devices are exposed for the VMs.
The host hardware is an IBM x3650 M4: dual E5-2650 v2, quad-port I350 (onboard), Mellanox ConnectX-2, and LSI HBAs.
The host OS is Ubuntu 20.04, fully patched.
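For reference, this is roughly how the VFs are created on the Ubuntu host (the PF interface name enp4s0f0 is just an example; substitute your own):

```
# Enable 4 SR-IOV virtual functions on one I350 port via sysfs.
# The PF interface name (enp4s0f0) is an example; use your own.
echo 4 | sudo tee /sys/class/net/enp4s0f0/device/sriov_numvfs

# Confirm the VFs appeared as PCI devices and on the PF:
lspci | grep -i "I350.*Virtual Function"
ip link show enp4s0f0        # lists vf 0..3 with MAC/VLAN state
```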

I tried an identical configuration using:
- FreeBSD 13.0: I was able to boot the OS and register the NIC (igb0).
- FreeBSD 12.2: I got a kernel panic (though the panic output is less detailed).
- Arch Linux: I was able to boot the OS and register the Intel virtual function NIC.
- Ubuntu: I was able to boot the OS and register the Intel virtual function NIC.
- FreeBSD 12.2 without the Intel VF: I was able to boot (no NIC since it was removed; see the hostdev sketch below).
- TrueNAS 12 without the Intel VF: I was able to boot (no NIC since it was removed).
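For anyone reproducing these tests, a sketch of how a VF can be attached to and detached from the guest under libvirt (the PCI address and the domain name truenas12 are assumptions; take the real address from lspci):

```
# Hypothetical hostdev stanza for one I350 VF; adjust the PCI address.
cat > i350-vf.xml <<'EOF'
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x10' function='0x0'/>
  </source>
</hostdev>
EOF
virsh attach-device truenas12 i350-vf.xml --persistent   # add the VF
virsh detach-device truenas12 i350-vf.xml --persistent   # remove it for the no-VF tests
```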

Is this simply a case of waiting until TrueNAS is based on FreeBSD 13, or does anyone know of a patch/fix for TrueNAS 12 that would allow using the Intel VF devices?

Thanks in advance,
Sal
 

Attachments

  • truenas12_panic.png (39 KB · Views: 249)
  • freebsd12_panic.png (34.8 KB · Views: 236)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
FreeNAS and TrueNAS Core are only known to work reliably on ESXi, and perhaps Proxmox.

Unless you want your storage to be an adventure in the sharp pointy edges of virtualization and the risk of data loss when things don't go correctly, you may wish to follow the guide at

https://www.truenas.com/community/t...ide-to-not-completely-losing-your-data.12714/

Does it work without the virtual function device? Perhaps try setting up normal networking?
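By "normal networking" I mean a software bridge on the host with a paravirtual NIC in the guest, something along these lines (the names br0 and enp4s0f0 are examples):

```
# Host side: a plain Linux software bridge enslaving the physical NIC.
# Names (br0, enp4s0f0) are examples; substitute your own.
sudo ip link add br0 type bridge
sudo ip link set enp4s0f0 master br0
sudo ip link set br0 up
# Then give the guest a paravirtual NIC on that bridge, e.g. with
# virt-install: --network bridge=br0,model=virtio
# FreeBSD/TrueNAS sees a virtio NIC as vtnet0.
```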
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
I have had reasonable success with TrueNAS on KVM (under Proxmox) by using the VMXNET3 driver (it's VMware's paravirtual NIC, which QEMU can emulate). It's included as an option in Proxmox, but I suppose it can be configured separately on plain KVM as well.

The host runs an Intel NIC, and the VMware-style virtual NIC is bridged to it.
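Roughly what that looks like as a libvirt interface definition, if it helps (the bridge name br0 and domain name truenas12 are assumptions):

```
# Sketch: a VMware-style paravirtual NIC (vmxnet3), bridged to the host's
# Intel NIC via a Linux bridge. Names here are assumptions.
cat > vmxnet3-nic.xml <<'EOF'
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='vmxnet3'/>
</interface>
EOF
virsh attach-device truenas12 vmxnet3-nic.xml --persistent
# FreeBSD/TrueNAS attaches this with its vmx(4) driver.
```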

I don't know enough about that specific NIC to understand if what I'm suggesting is in line with what you're looking for.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't know enough about that specific NIC to understand if what I'm suggesting is in line with what you're looking for.

Please forgive the explanation here if you already have some idea:

A virtual function is a sort of PCI passthrough from an ethernet card with virtualization capabilities. Instead of passing the entire card through, the card presents a virtual clone (or clones) of itself, which can be handed to a virtual machine.

The typical hypervisor and VM use a virtual ethernet setup that relies on a software bridge in the hypervisor. This is fantastic for average servers and VM usage, but it loses a bit of performance, especially if you don't have a highly optimized vSwitch like VMware's. Typical NAS use at 1GbE is generally fine on a virtual ethernet, but in certain situations you can benefit from a virtual function: trunking VLANs through your virtual adapter (which can result in your VM being hit with *all* traffic on the vSwitch), or running at 10GbE or faster speeds.

In this case, the hypervisor presents a special PCIe device to the VM which is just a "stub" or "clone" of the ethernet card. This is a special virtualization feature of the card, so it happens alongside the normal hypervisor use of the card, and potentially alongside quite a few other virtual functions as well.

For example, the Intel X710 will let you set up as many as 64 virtual functions and assign them to VMs. On the guest side, these show up as ethernet devices attached to the FreeBSD iavf driver, instead of the ixl driver that would normally be used to drive an X710.
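Inside the guest, a VF is just another PCI NIC; on FreeBSD you can see which driver claimed it (the output lines below are illustrative, from an X710 VF):

```
# In a FreeBSD guest, list PCI devices and the drivers attached to them.
pciconf -lv
# Illustrative output for an X710 VF claimed by iavf:
#   iavf0@pci0:0:5:0:  class=0x020000 chip=0x154c8086 ...
#       device = 'Ethernet Virtual Function 700 Series'
```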
 

salty72

Cadet
Joined
May 25, 2021
Messages
3
Just to add a bit on using SR-IOV.
All the traffic is passed directly to the physical switch, which allows for centralized management of the VMs instead of splitting management across virtual switches. The performance gains are also significant, since the NIC is not emulated in software and the VM can offload work onto the NIC hardware (just as with a dedicated physical device).

In response to the previous posts, I understand that it's not ideal to virtualize TrueNAS. I am using LSI HBA passthrough with dedicated disks to minimize the risk. The preference to virtualize comes from being able to share the server's horsepower, since TrueNAS will not be busy all the time.

I think the issue is with FreeBSD 12 initializing an Intel VF through the igb module (rather than with KVM itself). I could not find any patch by searching the FreeBSD threads, so I was hoping that a TrueNAS user has encountered the same issue and could provide guidance on where and how to apply a fix.

For the short term, I will split the physical hardware between TrueNAS and the other VMs by stacking VLANs on the dedicated TrueNAS port. It's not ideal, since the bandwidth is shared and segregation control is transferred from the physical switch to the VM OS (e.g. a rogue bridge).
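On the TrueNAS side, that interim setup is just tagged VLAN interfaces stacked on the one port. It would normally be configured in the web UI, but the underlying FreeBSD commands are roughly (the parent interface igb0 and the VLAN IDs are examples):

```
# Create tagged VLAN interfaces on the single passed-through port.
# The parent interface (igb0) and VLAN IDs are examples.
ifconfig vlan10 create vlan 10 vlandev igb0    # e.g. storage VLAN
ifconfig vlan20 create vlan 20 vlandev igb0    # e.g. management VLAN
ifconfig vlan10 inet 192.168.10.2/24 up
```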

If anyone has experience successfully passing a NIC virtual function (via igbvf, the virtual-function counterpart of the igb module, on a Linux KVM host) through to a TrueNAS VM, please let me know; I would appreciate some guidance on how you got it working.
 