Multiple VLANs and Asymmetric Routing (how to avoid this issue)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
While the "separate network" is obvious for VMs and/or jails - do you know if other integrated "hyperconverged" products like Proxmox also apply that concept to sharing services? That's the one thing that always puzzles me in this recurring discussion. Are we talking VMs? Then TrueNAS is - although a type 2 HV - no different from any other product out there. It's just that the sharing is lumped together with the management plane in a single stack.

Kind regards,
Patrick

Proxmox is not some magic thing. It is basically the same sort of product as TrueNAS; a prepackaged appliance OS whose purpose is primarily to run VM's, but includes ZFS as an afterthought. TrueNAS is a ZFS NAS that includes VM's/jails as an afterthought. You could turn those things around pretty easily.


[Image: Hyperviseur.svg - diagram of type 1 vs. type 2 hypervisor architectures]


From Wikipedia; see this article.

There's a lot of angst about "bare metal hypervisors" out there, but in practice, all of these things have weakened the boundaries between type 1 and type 2 hypervisors. A true type 1 hypervisor could be something akin to a system that used PCI virtual functions to let each guest directly interact with the hardware, and kept its fingers out of things other than to manage resource allocations. Since that's not desirable or practical, ESXi, KVM, and bhyve all present virtual devices and related abstractions such as switches/bridges, virtual disks (VMware has its VMFS, Proxmox and TrueNAS both have ZFS to power them), etc. This means that there's really almost always some OS layer in between the bare metal and the VM.

VMware seems to get away with the claim that "oh, that's not an OS, that's the ESXi hypervisor". This is vaguely true, but if you were to look at it through the lens of "it's just an (admittedly highly) specialized kernel" and "the management plane (i.e. userland) is clearly calling the shots", with a kernel that is intercepting all the networking and processing it through a software switch, and disk I/O through VMFS, ESXi starts to look a lot less like just a bare metal manager and a lot more like a ... type 2 hypervisor? Yup.

So, let me reset you on this: TrueNAS and Proxmox are both effectively Type 1 hypervisors by any reasonable modern analysis. They run their VM's with resources and scheduling managed within the kernel, as opposed to things like VirtualBox or FreeBSD jails which truly do operate in userland.

And if you're confused by this, let's not bring ESXi's new support for containers into the discussion.

Anyways, the Wikipedia article classifies KVM as a Type 1 hypervisor solution, so both Proxmox and TrueNAS are covered by that. Prickly folks will want to say "but Proxmox INTENDS to be a hypervisor" and "TrueNAS INTENDS to be a NAS" and try to argue things from that direction, but quite frankly what I see is "KVM offering up ZFS datastores with Linux networking and bridging" in both cases. I cannot see a real TECHNICAL case for calling TrueNAS a type 2 hypervisor if we call Proxmox and ESXi type 1.

And of course you are completely correct that the sharing bits of TrueNAS are effectively on the management plane, sharing the IP networking of the host system.
 
Joined
Apr 4, 2019
Messages
5
Hello,

And thanks to all of you. First, sorry for my limited English; it's not my native language.

I read all of your answers, and hope I understood.

OK, FreeBSD respects the way IP was designed. OK too about the multiple ESXi stacks on reasonably recent releases.

*But* on Linux, there is a way to work around this problem: we use "ip rule" to build routing tables that ensure traffic leaves through the same interface it arrived on. This makes sense from a security point of view: the design adds readability and configurability for the security team, and there is no need to add exceptions, since each subnet has the same entry and exit point on the network. In a production environment where security is a concern, this seems better, or at least easier to maintain.
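For concreteness, a minimal sketch of the "ip rule" approach on Linux (the interface names, addresses, gateways, and table numbers here are illustrative assumptions, not taken from this thread):

```shell
# Assume a host with two NICs: eth0 on 192.0.2.0/24 (gateway .1)
# and eth1 on 198.51.100.0/24 (gateway .1). The goal is symmetric
# routing: replies leave via the NIC whose subnet they arrived on.

# One routing table per interface, each with its own default gateway.
ip route add default via 192.0.2.1    dev eth0 table 100
ip route add default via 198.51.100.1 dev eth1 table 200

# Source-based policy rules: traffic sourced from an address on a
# given subnet is looked up in that subnet's table, so it always
# exits through the matching NIC.
ip rule add from 192.0.2.0/24    lookup 100
ip rule add from 198.51.100.0/24 lookup 200
```

The commands require root and real interfaces, so this is a configuration fragment rather than a runnable script; the general technique is usually called source-based (policy) routing.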

This is why I asked in another thread whether a BSD guru could point me to pf scripts that would allow me to build such tables (route all traffic from a specific subnet through a specific NIC). I understand that the product was not exactly designed to work like this; I am looking for a way to adapt it to my needs.

Thanks,
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There are all kinds of workarounds. They all come with caveats.

FreeNAS does not support pf. It does support ipfw, but it fails open: if a ruleset fails to apply, you aren't protected. This is generally considered unacceptable by security teams, so I strongly advise against trying to build security in this manner. You have been warned. That said, let's move on.

You can certainly do all the same tricks you can do on Linux with FreeBSD's ipfw, but you will find that they have the same sharp edges, and are not actually able to create the fully "separate" IP stacks that some people envision or wish for. We've had a bunch of posters with broken ideas about IP networking come here lately; see for example:
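As a rough illustration of what those ipfw tricks look like, the closest analogue to Linux source-based routing is ipfw's fwd action. A minimal sketch (the addresses, gateways, and rule numbers are assumptions for illustration, and the kernel must support ipfw fwd; this is not a tested FreeNAS configuration):

```shell
# Assume two subnets, 192.0.2.0/24 (gateway .1) and 198.51.100.0/24
# (gateway .1). Outbound packets sourced from each subnet and headed
# off-subnet are forced back through that subnet's own gateway, so
# replies exit the interface the request arrived on.
ipfw add 100 fwd 192.0.2.1    ip from 192.0.2.0/24    to not 192.0.2.0/24    out
ipfw add 200 fwd 198.51.100.1 ip from 198.51.100.0/24 to not 198.51.100.0/24 out
```

This carries the caveats above: it is a firewall-level patch over a single shared routing table, not a genuinely separate IP stack, and it fails open if the ruleset doesn't load.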



That second thread actually contains ipfw examples of the sorts of things you're probably interested in; see specifically post #14.
 