Problems using bonded interface in VM

dd9dr

Cadet
Joined
Dec 5, 2021
Messages
7
I have a problem using a bonded interface (bond0) in a VM running pfSense.
I've given the interface bond0 to the VM, and it appears there.
Now I need to set up some VLAN networks on top of this interface in the VM (the same works in TrueNAS on this interface), but I can't make a connection to the VLAN interfaces inside the VM.
Another independent native interface, eno3, works in the VM - but without VLANs.

Did I oversee something?
Any advice?

Harry
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
Create the VLANs in TrueNAS and give the VM n virtual interfaces, one for each VLAN? That's how it would have to be done in CORE.
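For reference, what the suggestion amounts to on a Linux-based system like SCALE is roughly the following iproute2 commands. This is an illustrative sketch only: the VLAN IDs 10 and 20 are made-up examples, and on TrueNAS you would normally create the VLAN interfaces through the web UI so they persist across reboots.

```shell
# Illustrative only -- VLAN IDs 10 and 20 are example values.
# On TrueNAS SCALE, use the web UI so the config survives reboots;
# the plain iproute2 equivalent is shown here for clarity.
ip link add link bond0 name bond0.10 type vlan id 10   # VLAN 10 on top of the bond
ip link add link bond0 name bond0.20 type vlan id 20   # VLAN 20 on top of the bond
ip link set bond0.10 up
ip link set bond0.20 up
# Each VLAN interface can then be attached (via a bridge) to the VM
# as its own untagged virtual NIC.
```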
 

dd9dr

Cadet
Joined
Dec 5, 2021
Messages
7
Create the VLANs in TrueNAS and give the VM n virtual interfaces, one for each VLAN? That's how it would have to be done in CORE.
Mmmhh... OK, that could be a solution if everything else fails, but it should also be possible to use the bond0 interface inside the VM.
That solution would give me some interfaces in TrueNAS that I don't need and don't want to see there.
 

dd9dr

Cadet
Joined
Dec 5, 2021
Messages
7
OK, I've tried that, but no luck.
I've created 2 VLAN interfaces on TrueNAS.
One of them has an IP configuration in TrueNAS, and TrueNAS is reachable on this interface.
The other one is an interface only, without an IP address in TrueNAS.
I've then passed both interfaces to the virtual machine, but I get the same behavior as with bond0.
I can't connect to the VM through these interfaces.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You don't get the bond0 interface inside the VM - if I am not completely mistaken, as I said, I don't run KVM (yet) - you get a virtual interface that is bound to the bond0 by a bridge or vswitch or whatever that is called in Linux. In FreeBSD/CORE it would be lagg0 for bond0 and bridge0 that ties the VM's interface to lagg0.

And - also in FreeBSD, sorry - you cannot pass tagged frames into a VM. You absolutely must create one VLAN interface per VLAN and pass that into the VM. ESXi is identical: although its networking is way more powerful than FreeBSD's and better suited to a hypervisor, you still create a vswitch, then one port group per VLAN, then one interface per VLAN for the VM.

For a firewall inside a hypervisor I would seriously consider PCIe passthrough, i.e. pass the physical interface hardware into the VM. Then you can do the bonding and VLANs inside the VM. Provided your host has got more than two network interfaces, that's what I would do. CORE, SCALE, ESXi, ... all the same.
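For what it's worth, the CORE/FreeBSD layout described above would look roughly like this. The interface names are placeholders (igb0/igb1 for the physical ports, tap0 for the VM's virtual interface), and VLAN 10 is an example ID.

```shell
# Illustrative FreeBSD/CORE sketch -- igb0, igb1 and tap0 are placeholders.
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1 up  # lagg0 = Linux bond0
ifconfig vlan10 create vlan 10 vlandev lagg0                  # one vlan(4) per VLAN
ifconfig bridge0 create
ifconfig bridge0 addm vlan10 addm tap0 up                     # tie the VM NIC to the VLAN
```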
 

dd9dr

Cadet
Joined
Dec 5, 2021
Messages
7
You don't get the bond0 interface inside the VM - if I am not completely mistaken, as I said, I don't run KVM (yet) - you get a virtual interface that is bound to the bond0 by a bridge or vswitch or whatever that is called in Linux. In FreeBSD/CORE it would be lagg0 for bond0 and bridge0 that ties the VM's interface to lagg0.
No, I've changed that and created the VLAN interfaces in TrueNAS.
(screenshot attachment: 1638729809985.png)

I've then given the 3 VLANs to the VM, so that only normal, untagged interfaces are visible in the VM.
I've double-checked the MAC addresses to assign the correct interfaces inside pfSense.
And - also in FreeBSD, sorry - you cannot pass tagged frames into a VM. You absolutely must create one VLAN interface per VLAN and pass that into the VM. ESXi is identical: although its networking is way more powerful than FreeBSD's and better suited to a hypervisor, you still create a vswitch, then one port group per VLAN, then one interface per VLAN for the VM.
As stated above, I've now done all the VLAN configuration inside TrueNAS.
For a firewall inside a hypervisor I would seriously consider PCIe passthrough, i.e. pass the physical interface hardware into the VM. Then you can do the bonding and VLANs inside the VM. Provided your host has got more than two network interfaces, that's what I would do. CORE, SCALE, ESXi, ... all the same.
That's what I wanted to avoid, because I actually don't have enough cables from my server to the switch.
But if this is the only way, I'll have to install some more cabling....
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
So it's working now? Then please mark the thread as "SOLVED". And I learned that networking on SCALE works essentially the same way as on CORE.
 

dd9dr

Cadet
Joined
Dec 5, 2021
Messages
7
So it's working now? Then please mark the thread as "SOLVED". And I learned that networking on SCALE works essentially the same way as on CORE.
No, not yet, but I will take your advice and try to pass through 2 still-unused Ethernet adapters via PCIe passthrough...
I'll have to install two more Ethernet cables to the server.
This seems to me the cleanest way to solve my problem.
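Before recabling, it may be worth checking that the two unused NICs sit in their own IOMMU groups, since clean PCIe passthrough requires that. Below is a hedged diagnostic sketch for a Linux host; it assumes the IOMMU is enabled in BIOS and on the kernel command line (e.g. `intel_iommu=on`) and that `lspci` is available.

```shell
# Host-dependent diagnostic: list IOMMU groups and the devices in each.
# A NIC that shares a group with unrelated devices cannot be passed
# through on its own.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        lspci -nns "${d##*/}"    # PCI address and device description
    done
done
```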

Thanks for your patience!
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
You are most welcome. Could someone who knows Linux networking better than I do at least confirm that I sent him on the right track?

73
DO5NSW :wink:
 

raidflex

Guru
Joined
Mar 14, 2012
Messages
531
How are VLANs in SCALE for use cases like Docker? For example, I have multiple jails on CORE with separate VLANs attached to certain jails. Is it possible to do something like this on SCALE with containers? I would really love to move to a Linux base and use .NET for apps like Sonarr and Radarr, since Mono has been a headache lately. But I also do not want my Plex server to be on the same subnet as my TrueNAS server or other jails, since it's open to allow streaming remotely.
 