IMHO Very, Very strange TrueNas VLAN-handling. Not working as expected.

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I think you illustrated the point better than I did. It will respond to packets to/from its own IP addresses across interfaces, but it will not route packets to and from IP addresses which are not its own. That is the point that I was trying to make.

Yes, but you only discussed the return-path part of the problem. From a security point of view, NAS-originated traffic is less concerning than untrusted traffic aimed at the NAS, and specifically traffic that might be aimed at a NAS interface on a network thought to be protected by an upstream firewall.

Ideally you want the traffic in both directions to be filtered, but I think you and I have both shown this to be difficult. ;-) ;-)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Exactly, that is why the GUI is in my opinion so confusing and not OK!

You need a way for a layer 3 host to be able to attach to a VLAN, or VLANs are mostly useless. So the virtual interfaces end up being called "vlan" interfaces. These are analogous to "em" or "ix" physical ethernet interfaces attached to a switch bearing a conventional layer 2 network.
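
For illustration, this is roughly what such a virtual interface looks like when created by hand on plain FreeBSD; on TrueNAS the GUI does the equivalent for you, and the interface name, parent NIC, tag, and address below are just made-up examples:

# create a layer 3 "vlan" interface carrying tag 10 on top of the physical em0
ifconfig vlan10 create vlan 10 vlandev em0
# give the host an address on that VLAN, exactly as you would on a physical interface
ifconfig vlan10 inet 192.168.1.15/24 up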

To answer your SG-1100 question: my pfSense system, just like my TrueNAS system, is based on a relatively small PC.

Related to the functionality, I would like to have:
- TrueNAS should behave as a "set of endpoints" and NOT as "an IP transit point"
- and it should be guaranteed that traffic towards "the intended endpoint" cannot reach "another endpoint"
- and that an endpoint cannot inject traffic into a layer 2 VLAN other than the intended one, which would violate the security
(due to routing or intentionally by a hacker)

Yes, so, it's super-important for you to come to grips with the fact that that's not actually the way it works, and why that is.

This isn't a new problem, though, and it does have solutions, though none of them are perfect.

The strongest solution is to segment your clients from your NAS on separate layer 2 networks, with separate IP ranges, and then firewall between them. With modern switches that can do filtering in silicon, this can actually be quite practical, and is how we expose NAS backends to Internet-visible frontends here at SOL, just allowing valid NFS ports and packet types through. That means if someone manages to break into a frontend, they do not have a chance at breaking into the backend storage.
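
As a rough sketch of that idea, not our actual rule set, with made-up subnets and the assumption that only NFSv4 over TCP 2049 is needed, the filtering between the frontend and backend networks conceptually looks like this (these rules live on the firewall or switch between the two networks, not on the NAS):

# frontends on 10.0.1.0/24, NAS backend at 10.0.2.15 -- hypothetical addresses
ipfw add 300 pass tcp from 10.0.1.0/24 to 10.0.2.15 2049 setup keep-state
# everything else from the frontend network to the backend network is dropped
ipfw add 310 deny ip from 10.0.1.0/24 to 10.0.2.0/24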

You can also try to implement firewall rules on your FreeNAS, which is kind of a bad idea because FreeNAS isn't designed for it. A firewall should fail secure, but FreeNAS has the IPFIREWALL_DEFAULT_TO_ACCEPT kernel option set, which means that failure to load a firewall ruleset results in all traffic passing.
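
If you want to see this for yourself, the last, compiled-in rule shows which way the default goes; on a firewall built to fail secure you would expect a deny there, while on FreeNAS it is an allow. A quick check, assuming the ipfw code is present and loaded:

# rule 65535 is the immutable default rule; IPFIREWALL_DEFAULT_TO_ACCEPT
# makes it "allow ip from any to any" instead of "deny ip from any to any"
ipfw list 65535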

Were that not true, you could use simple BCP38-style ingress and egress firewall rules on the NAS. These would be somewhat more draconian than you might wish for, but you would end up with something like:

# vlan1: only allow traffic between the directly connected 192.168.1.0/24
# network and the NAS address on that network, and drop everything else
ipfw add 100 pass ip from 192.168.1.0/24 to 192.168.1.15 in via vlan1
ipfw add 110 pass ip from 192.168.1.15 to 192.168.1.0/24 out via vlan1
ipfw add 120 deny ip from any to any via vlan1

# vlan2: the same pattern for the 192.168.2.0/24 network
ipfw add 200 pass ip from 192.168.2.0/24 to 192.168.2.15 in via vlan2
ipfw add 210 pass ip from 192.168.2.15 to 192.168.2.0/24 out via vlan2
ipfw add 220 deny ip from any to any via vlan2

There are basically two downsides to this. One, it breaks any case where a host on 192.168.1.0/24 might legitimately be allowed to talk to 192.168.2.15; even though your pfSense might allow the traffic in both directions, the directly connected interface on the NAS would see rule #220 blocking the ingress and #120 blocking the return traffic. This embedding of policy throughout the network can be a major hazard. I do not recommend this solution, even though I suspect your eyes might have just lit up with "but that's what I want."

Two, you are particularly warned that this is NOT a solution that will fail secure, so it is bad to rely on it, and a quarter of a century's worth of experience has had me explaining the unintended side effects of such strategies to people many times.
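
To make the first downside concrete: if some host, say 192.168.1.50 (a hypothetical address), did legitimately need to reach 192.168.2.15, you would have to punch holes ahead of the deny rules on the NAS itself, duplicating policy that already lives in the pfSense rule set, and because 192.168.1.0/24 is directly connected, the two holes even land on different interfaces:

# the routed request arrives in via vlan2 ...
ipfw add 215 pass ip from 192.168.1.50 to 192.168.2.15 in via vlan2
# ... but the reply goes straight out via vlan1 to the directly connected network
ipfw add 115 pass ip from 192.168.2.15 to 192.168.1.50 out via vlan1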

Those requirements apply IMHO not only to jails and VMs but also to the TrueNAS application itself!
- You probably want to keep TrueNAS management (the GUI) separated from the storage function (e.g. blue zone vs. green zone) and
- if you use the TrueNAS system as a store for multiple applications (perhaps related to multiple security zones / layer 2 VLANs), you probably want to have those storage parts strictly separated from each other.

I am not sure that TrueNAS makes that possible. At least I still do not understand how to do that :(

TrueNAS doesn't make that possible. Neither does any of the other FreeBSD/Linux based NAS solutions I've seen, like Synology or QNAP or whatever. You are fighting a behaviour built into modern operating systems. You are much better off accepting the limitation, and then designing your network accordingly, rather than arguing the point here on this forum. Even if you were arguing directly with the FreeNAS developers, you would not be able to get this behaviour changed, because it is a fundamental aspect of modern networking, and they're not going to rewrite the entire IP stack to work the way you think it should.

For info, at this moment I am building a server intended to replace multiple physical PCs. One of the applications to run on the system will be TrueNAS, another a web server, etc. To achieve this I could install some VM system, but I was / am strongly considering implementing this using TrueNAS as both the central storage system and the host for other applications running as VMs and/or within jails.

Both VNET-based jails and VMs give you IP stacks which are independent of the host FreeNAS system.
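
As a hedged sketch of what that looks like with iocage on FreeNAS/TrueNAS CORE (the jail name, FreeBSD release, bridge, and addresses below are all assumptions to adjust for your own network):

# create a jail with its own independent IP stack (VNET), attached to bridge0,
# which in turn can sit on whatever VLAN interface the jail should live on
iocage create -n webjail -r 12.2-RELEASE vnet="on" \
    interfaces="vnet0:bridge0" ip4_addr="vnet0|192.168.3.10/24" \
    defaultrouter="192.168.3.1"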

In our secured deployment environments, I find that deploying small FreeBSD NFS server VMs is a practical way to scope access without lots of side effects. I'm not actually doing that on top of FreeNAS, but there's absolutely no reason I couldn't or wouldn't. One could argue that this is a lazy solution, but this thread has already explored some of the unintended effects and consequences of complicated host networking. I find that having several smaller things, each designed to address a specific set of requirements, tends to work better than one bigger thing designed to try to accommodate the union of all those requirements.

This basically brings us back around to what I wrote as the summary of my first post to you in this thread:

You will need to come to grips with this kind of thing as you correct your understanding of how VLANs interact with the NAS. If this is unacceptable, then you probably cannot build your filesharing operations directly on FreeNAS. However, you CAN create a separate IP stack for a jail, which is independent of the host FreeNAS system's IP processing, so you should be able to connect a VNET jail to a LAN (virtual or physical) and have it be independent of the host.

which is the direction I think you need to move in to get the sort of control that you wish to exercise.
 

Louis2

Contributor
Joined
Sep 7, 2019
Messages
177
Of course this thread is all about security.

Given the complexity, and as already written, I have decided to switch testing from my actual NAS to “a test system”, which is in fact the “not yet used” future server.

The central point in my network is the firewall/router (pfSense). That is where all the VLANs come together. The router/firewall is also the only point in the network where packets can go from one VLAN to another. What is allowed and what is not is determined by the pfSense firewall.

All equipment is connected using managed switches (Netgear/MikroTik).

The test setup I have in mind is not far from the final setup. As said, the complexity is related to security. For that reason I will explicitly mention the VLAN types, which correspond to layer 2 VLANs.

The server I will try to build has the following functionality:
  • TrueNAS GUI (Blue Zone); I will connect that to the management VLAN
  • General storage (Green Zone); I will connect that to my existing TrueNAS
  • SMB drive for my PC network (Amber Zone); there will be a connection to the PC LAN
  • Webserver (Red Zone); I will install FAMP in a jail. Jail management is “Blue Zone”
  • A VM (Amber Zone); I intend to install Fedora. VM management is “Blue Zone”
I realize that larger companies will probably split that functionality over multiple physical machines, which perhaps makes it a bit easier. However, for me, combining this in one server is more realistic.

This setup should be more than enough to find out what is possible and what is not (I hope nothing major).



Louis
 