TrueNAS Core and Scale Networking

m3m (Dabbler) · Joined Dec 28, 2022 · Messages: 11
So I have been using FreeNAS since version 4 and decided to bring myself into the modern age with TrueNAS Core and/or Scale. But I have run into one problem: something I could do on FreeNAS but cannot do on either version of TrueNAS. And before I get scrutinized for being lazy, I have my reasons for doing what I am doing.

So my question is: can I not set a static IP on a subnet other than the default subnet and gateway? Every time I try to do this, it reverts the setting back to default, wiping out the configuration I set.

Example
enp5s0 192.168.1.x/24 GW 192.168.1.x (Mgmt) DHCP on
ens4f3 192.168.30.x/24 No GW set and DHCP off (10GbE Backbone) <= wipes out after testing

I separate the 1GbE network from the 10GbE network so there is limited internet access from the 10GbE network. I make minimal changes to the 10GbE switch, such as controlling VLANs and LAGs, since I am doing nothing more than pure performance for backbone connectivity (backups from the Proxmox cluster, NFS to the Proxmox cluster, iSCSI). All access to devices is done through the 1GbE network. So with all that being said, is TrueNAS forcing a gateway for EVERY NIC configured, which is causing the test to fail and revert the settings, or am I missing something? I really don't want to trunk the switches just to create a whole other network that my core has to manage, nor create static routes. Yes, lazy!
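My understanding is that a gateway should only matter for off-subnet traffic; the connected route that comes with the address should be all the 10GbE backbone needs. For reference, this is a quick way to check whether the setting survived (SCALE shell; the interface names and addresses are the examples above, substitute your own):

ip addr show ens4f3   # should still list the 192.168.30.x/24 address
ip route show         # expect a "192.168.30.0/24 dev ens4f3" connected route
                      # and only one default route via 192.168.1.x on enp5s0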

Thanks.
 
Joined Dec 29, 2014 · Messages: 1,135
How are you setting the IPs (console menu, GUI, or shell CLI)? I have a setup that is similar to yours, and it works for me in TrueNAS-12.0-U8.1.
[screenshot attachment]
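If the GUI keeps reverting it, it is also worth checking what the middleware actually persisted, since the GUI and the console menu both write through it. A rough sketch from the shell (midclt ships with both CORE and SCALE):

midclt call interface.query
# each interface's "aliases" list holds the static addresses the middleware
# accepted; if the 192.168.30.x entry never shows up there, it was never saved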
 

NugentS (MVP) · Joined Apr 16, 2020 · Messages: 2,947
It's easy:
[screenshot attachment]
 

m3m (Dabbler) · Joined Dec 28, 2022 · Messages: 11
Thanks for your responses! It looks like the GUI isn't working 100% correctly. I ended up using the console menu to make the configuration stick, even after a reboot. Now I am going to test bonded settings. The goal is to see if I can saturate all four 10GbE ports.
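The rough test plan, sketched out (192.168.30.10 is a placeholder address for the NAS; the port numbers are arbitrary unprivileged ones):

# On the TrueNAS box, one iperf3 listener per expected flow:
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# Then from two different client nodes, so the LAGG transmit hash sees
# distinct IP/port tuples and can spread the flows across members:
iperf3 -c 192.168.30.10 -p 5201 -t 30
iperf3 -c 192.168.30.10 -p 5202 -t 30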
 
Joined Dec 29, 2014 · Messages: 1,135
You may have a challenge with that. What kind of switch are you using? Any channeling method uses a load-balancing algorithm to select which link will be used, so any single conversation will never get more bandwidth than that of a single link. It also depends on the type of load balancing you select: if you have the switch use the destination MAC address (which is the most common), that won't work very well for traffic towards the NAS, since it will all be going to the same MAC address. I use LACP channels for any link where I want/need redundancy, but it is quite rare to be able to get full utilization on all the members of a channel.
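If you do build the LAGG on SCALE, the Linux kernel will tell you which mode and transmit hash it is actually using (a sketch, assuming the bond came up as bond0):

cat /proc/net/bonding/bond0
# look for lines like:
#   Bonding Mode: IEEE 802.3ad Dynamic link aggregation
#   Transmit Hash Policy: layer3+4 (1)
# layer3+4 hashes on IP addresses and ports, so separate TCP flows can land
# on separate members; a MAC-based policy pins them all to one link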
 

m3m (Dabbler) · Joined Dec 28, 2022 · Messages: 11
I was thinking the same thing. That is why I am doing the testing now and not just hoping it works. Currently using a MikroTik CRS317-1G-16S+ running SwOS. To me, SwOS is intuitive and simple compared to RouterOS mode. With a little tinkering I got two servers to saturate a 10GbE bond (using iperf3) on my TrueNAS Scale test machine. Kind of surprised how well this switch has done. I had to use the following settings on TrueNAS.

[screenshot: TrueNAS settings]


The switch was tested with the following settings.
LAG settings -
[screenshot: switch LAG settings]

2 VLANs, which are isolated from each other:
VLAN 10 - cluster communications for the Proxmox server
VLAN 20 - everything else, which the testing was conducted on


- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  34.6 GBytes  9.90 Gbits/sec    0             sender
[  5]   0.00-30.00  sec  34.6 GBytes  9.90 Gbits/sec                  receiver
iperf Done.
root@Node1:~#

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  34.6 GBytes  9.90 Gbits/sec   40             sender
[  5]   0.00-30.00  sec  34.6 GBytes  9.90 Gbits/sec                  receiver
iperf Done.
root@Node3:~#

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-30.00  sec  34.6 GBytes  9.90 Gbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-30.00  sec  34.6 GBytes  9.90 Gbits/sec                  receiver
-----------------------------------------------------------
Server listening on 5202
-----------------------------------------------------------

Ordered more SFPs and FC to maximize the Silicom PE310G4I71LBEU-XR-LP (Intel 710X) card in the test machine. Looking forward to seeing if I can push it all the way to 40GbE.

Very surprised that it worked. Was a little concerned that the switch couldn't handle it.
 