So I'm going to kinda go through this and I want you to understand that I'm conveying what I think you need to know without knowing what you REALLY know, because I know people in your role. I work extensively with VMware too, but I'm a bit more of a "meta" person.
One of the things I've run into repeatedly is people who parrot the VMware guidelines for building networks but have basically little-to-no clue why some of the particulars exist. As an example, VMware likes to encourage a separate interface for vMotion traffic, and (perhaps obviously) the reason is that the traffic levels on this go from zero to SWAMPED to zero again as vMotions spin up. Likewise, segmenting iSCSI onto private storage networks ensures that the hypervisor has lots of I/O capacity that isn't being affected by vMotions or the traffic from virtual machines. This is what VMware teaches (/is supposed to teach) and hopefully THIS is all "obvious". But, if you actually look at the traffic on some of these dedicated interfaces, it turns out that swamping is actually rarer than you might expect.
There's another aspect here, though, and that is vSwitch performance. When you have a vSwitch with just a physical NIC and a vmkernel interface, this is very easy for the hypervisor to deal with. However, if you have a vSwitch with a physical NIC, several vmkernel interfaces, and five hundred virtual machines on dozens of VLANs, the host has to work much harder to cope with the network traffic. Because this is Ethernet, you are demultiplexing many streams off of what is essentially a serial connection. Latency tends to be higher and performance worse, because in addition to contention with other traffic, every received packet has to go through the vSwitch's hash tables to figure out what to do with it. And that is not trivial, no matter how much VMware likes to boast about their assembler-optimized vSwitch implementation.
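To make that per-packet cost concrete, here's a toy model in Python of the demultiplexing work. None of this is VMware's actual implementation; the class and names are made up purely to illustrate that every single frame off the uplink pays a parse-and-lookup cost before it can be delivered anywhere:

```python
# Toy model of per-frame vSwitch work: every frame arriving on the uplink must
# be demultiplexed (VLAN identified, destination MAC looked up) before
# delivery. Illustrative only; not how ESXi actually implements it.
from collections import defaultdict

class ToyVSwitch:
    def __init__(self):
        # Forwarding table: (vlan, dst_mac) -> port name
        self.fdb = {}

    def learn(self, vlan, mac, port):
        self.fdb[(vlan, mac)] = port

    def deliver(self, frames):
        """Demux a batch of (vlan, dst_mac) frames off the 'serial' uplink."""
        out = defaultdict(list)
        for vlan, dst in frames:
            # This lookup happens for EVERY received frame; with hundreds of
            # VMs on dozens of VLANs it is paid millions of times per second.
            port = self.fdb.get((vlan, dst), "flood")
            out[port].append((vlan, dst))
        return out

vsw = ToyVSwitch()
vsw.learn(10, "aa:bb:cc:00:00:01", "vm1")
vsw.learn(20, "aa:bb:cc:00:00:02", "vm2")
result = vsw.deliver([
    (10, "aa:bb:cc:00:00:01"),
    (20, "aa:bb:cc:00:00:02"),
    (30, "aa:bb:cc:00:00:03"),   # unknown: gets flooded
])
print(dict(result))
```

A single vmkernel interface on its own vSwitch is the degenerate case of this: a one-entry table and no contention, which is why it's so cheap for the hypervisor.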
Now, I would be very happy if every word I've said is entirely obvious to you, but a lot of VMware engineers don't quite get some of this until it is outlined for them. VMware encourages network segmentation by assuming the worst case for storage and vMotion, and publishes general rules that result in lots of 1G networks, because things can truly suck if you don't segment. However, by the time you get up to 40GbE, ESXi 6.0 limited you to only four interfaces (as an example), so you needed to understand the whys, rather than configuring by rote, to survive that limit. That usually worked out to putting VM traffic on one or two of them, and everything else (storage, vMotion, management) on the other two. (This assumes you do the usual dual ethernet failover thing for your networking; other designs are clearly possible.)
But FreeNAS/TrueNAS isn't a vSwitch implementation, so one of the major design considerations you have with vSphere simply does not exist. Of course, you can actually run VM's on TrueNAS, and if you do, then, yes, you get a bridge ("vSwitch") and that should probably be on a separate interface...
Other than that, you are dealing with a single host. You are perfectly welcome to design this as you see fit. You can connect one of the mainboard's gigabit interfaces to an airgapped management network, and some Intel X520 ethernets dedicated to your iSCSI SAN networks for hypervisors, and an Intel XXV710 to that new 2.5/25G switch that handles your office PC's, and you can make a private network between your filers for replication traffic if you like.
These are all going to the same IP stacks on each of the filers, so there is no vSwitch efficiency issue at play. There IS traffic contention at play, of course, but that is something you can choose to design for as you wish. Unlike on ESXi hosts, 99%+++ of traffic on a filer is storage traffic of some sort, NFS, SMB, iSCSI, replication, whatever, so there are no "by rote" default rules for segmenting it. You are expected to understand what your needs are, and you can even screw this up and evolve it later as needed.
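Just to make the "design it as you see fit" point concrete, here's what a layout like the one above might look like, written in FreeBSD rc.conf notation purely for illustration. The device names, addresses, and driver assignments are all assumptions on my part (on a real TrueNAS box you'd configure interfaces through the web UI or middleware, never by hand-editing rc.conf):

```shell
# Hypothetical interface layout for one filer; names/addresses are made up.
ifconfig_igb0="inet 10.0.0.5/24"              # onboard gigabit: airgapped management net
ifconfig_ix0="inet 172.16.10.5/24 mtu 9000"   # X520 port A: iSCSI SAN fabric A
ifconfig_ix1="inet 172.16.11.5/24 mtu 9000"   # X520 port B: iSCSI SAN fabric B
ifconfig_ixl0="inet 192.168.1.5/24"           # XXV710: office 2.5/25G switch (NFS/SMB)
ifconfig_ixl1="inet 10.255.255.1/30"          # point-to-point to the other filer: replication
```

Nothing forces this particular split; every one of those could be collapsed onto fewer interfaces, and the filer's IP stack wouldn't care.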
You can benefit from some of the concepts from ESXi. For example, dedicating two access mode 10G network interfaces to iSCSI for your SAN will guarantee 20G of bandwidth for the SAN, with mildly lower latency, and therefore probably somewhat better performance than if you shoved everything onto a single 40G VLAN trunk. But shoving everything onto a single 40G trunk will definitely work too.
I hope I've kinda given you some idea as to why you're not going to find anything that is forcing you into a particular architecture. There will certainly be things you can do that are more optimal for your specific needs, but without a pages-long description of your network, goals, etc., it isn't really possible to provide you with optimized guidance, and that's more into the realm of professional consulting anyways.