Networking Recommendations

Follow these best practices for a stable and performant network. These recommendations generally apply to both TrueNAS CORE and TrueNAS SCALE unless otherwise specified, but each version might place the related options in slightly different web interface locations.

Static IP Address

By default, TrueNAS SCALE configures the primary network interface for Dynamic Host Configuration Protocol (DHCP) IP address management. Consider assigning a static IP address for increased network stability and communication between devices.

One or More Aliases?

Static IP addresses set a fixed address for an interface that external devices or websites need to access or remember, such as for VPN access.

Use aliases to add multiple internal IP addresses, representing containers or applications hosted in a VM, to an existing network interface without having to define a separate network interface.

In the UI, you can add aliases when adding or editing an interface. Click the Add button to the right of Aliases to enter a static IP alias, then click Add again to enter an additional alias.

From the Console Setup menu, select option 1 to configure network settings and add alias IP addresses.

See Setting Up Static IPs for more information.
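
To confirm which addresses are active after saving the configuration, you can check from the system shell. This is an optional sanity check; the interface name below is an example, so substitute your own.

# On TrueNAS SCALE (Linux), list every interface with its assigned addresses
ip -brief addr show

# On TrueNAS CORE (FreeBSD), the equivalent overview is
ifconfig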

Interfaces

Use multiple network interfaces if possible for load balancing and redundancy. Configure link aggregation (LAGG) to combine multiple physical interfaces into a single logical interface.

Multiple interfaces connected to a single TrueNAS system cannot be members of the same subnet.

You can combine multiple interfaces with link aggregation (LAGG) or a network bridge. Alternatively, you can assign multiple static IP addresses to a single interface by configuring aliases.

When multiple network interface cards (NICs) connect to the same subnet, users might incorrectly assume that the interfaces automatically load balance. However, Ethernet network topology allows only one interface to communicate at a time. Additionally, both interfaces must handle broadcast messages since they are listening on the same network. This configuration adds complexity and significantly reduces network throughput.

If you require multiple NICs on a single network for performance optimization, you can use a link aggregation (LAGG) configured with Link Aggregation Control Protocol (LACP). A single LAGG interface with multiple NICs appears as a single connection to the network.

While LACP is beneficial for larger deployments with many active clients, it might not be practical for smaller setups. It provides additional bandwidth or redundancy for critical networking situations. However, LACP has limitations: it does not load balance individual packets, and because traffic is distributed per connection, a single client session cannot exceed the throughput of one physical link.

On the other hand, if you need multiple IP addresses on a single subnet, you can configure one or more static IP aliases for a single NIC.

In summary, we recommend using LACP if you need multiple interfaces on a network. If you need multiple IP addresses, define aliases. Deviation from these practices might result in unexpected behavior.

For a detailed explanation of Ethernet networking concepts and best practices for networking multiple NICs, refer to this discussion from National Instruments.

See also Setting Up a Link Aggregation.
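
To confirm that a LAGG formed correctly and that LACP negotiated with the switch, you can inspect the aggregation from a shell. The interface names below (bond0, lagg0) are examples; substitute the name of your LAGG.

# On TrueNAS SCALE (Linux), show the bonding mode, LACP partner details, and member link state
cat /proc/net/bonding/bond0

# On TrueNAS CORE (FreeBSD), show lagg status and active ports
ifconfig lagg0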

LACP

If supported by your network switch and other networking equipment in your environment, select Link Aggregation Control Protocol (LACP) to use the most common protocol for LAGG interfaces based on IEEE specification 802.3ad. LACP dynamically aggregates multiple network interfaces into a single logical link, providing increased bandwidth and fault tolerance.

LACP allows you to bundle multiple physical network links into a single logical link, increasing the overall bandwidth available. This is especially beneficial in environments where high data transfer rates are required, such as when multiple users are accessing storage concurrently. LACP does not provide a performance benefit for single-user TrueNAS systems.

We do not recommend using LACP/LAGG for iSCSI traffic. See below for more information.

SMB Multichannel

SMB Multichannel is supported in TrueNAS SCALE versions 22.12.3 (Bluefin) and later. It is not currently supported in TrueNAS CORE.

Depending on the requirements of your intended use case, SMB Multichannel (L3+) can offer certain advantages over LACP (L2).

SMB Multichannel allows for dynamic load balancing at the application layer, enabling multiple connections to be established between a client and a server. This results in more efficient utilization of available network paths, especially in scenarios where the workload consists of multiple parallel data streams. In contrast, LACP operates at the link layer and aggregates physical links into a single logical link. While it provides load balancing, the granularity is typically based on source and destination IP addresses or MAC addresses, which may not distribute traffic as dynamically.

It is not best practice to enable LACP and SMB Multichannel at the same time. Consider your network requirements and select the more appropriate aggregation strategy.

SMB Multichannel allows for flexibility in adapting to varying bandwidth requirements of different applications or workloads. It can dynamically adjust the number of connections based on the available network resources and the demands of the data being transferred.

SMB Multichannel does not require specific switch configurations or support for link aggregation, making it potentially easier to deploy in environments where switch compatibility or configuration limitations exist. LACP, on the other hand, relies on switch support for link aggregation. While it is a standardized protocol, ensuring consistent switch compatibility and configuration can sometimes be more complex in diverse network environments.

SMB Multichannel is application-aware and can optimize connections based on the requirements of the specific application. This can lead to more efficient use of available network resources for applications that can benefit from parallel connections. LACP, being a lower-layer protocol, is less aware of the applications running over the network and may not optimize connections at the application layer to the same extent.

These advantages can be more pronounced at higher network speeds, such as in 100G environments (see below).

See Setting Up SMB Multichannel for more information.
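
As an informal way to check that a client has actually negotiated multiple channels, you can list active SMB connections from a TrueNAS shell with the standard Samba smbstatus utility; with multichannel in use, a single client session typically shows more than one connection. Windows clients can also report this from PowerShell with Get-SmbMultichannelConnection.

# List current SMB sessions, shares, and connections (run from the TrueNAS shell)
smbstatus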

Network Traffic Segmentation

Segregate network traffic using both layer 2 and layer 3 methods for improved efficiency and security. This helps optimize bandwidth, reduce congestion, and enhance overall network resilience against potential security threats.

Layer 2 Isolation

Use VLANs (Virtual Local Area Networks) to segment your network and isolate traffic. VLANs allow you to create logical segments within a physical network switch, even if devices in those segments are physically connected to the same switch.

VLANs provide logical segmentation, allowing different groups of devices to be in separate broadcast domains regardless of their physical location in the network. Devices in one VLAN do not see the broadcast traffic of devices in other VLANs, reducing broadcast domain size and improving network efficiency. VLANs provide flexibility in network design, providing enhanced isolation, access control, and scalability. They are often used to group devices based on functional roles or department and optimize performance by prioritizing discrete traffic types on each VLAN.

See also Setting Up a Network VLAN.
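
TrueNAS creates and manages VLAN interfaces through the web UI, so no manual commands are required. Purely as an illustration of the underlying concept, on a generic Linux system a tagged interface for VLAN ID 10 could be created like this (interface names are examples):

# Create a VLAN interface with ID 10 on top of the physical NIC enp1s0, then bring it up (illustrative only)
ip link add link enp1s0 name vlan10 type vlan id 10
ip link set vlan10 up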

Layer 3 Isolation

Use network layer (subnet) isolation to separate storage traffic from other network traffic to avoid congestion. If you have multiple subnets in your network, you can run the management UI and IPMI (if included on your motherboard) on one subnet and data traffic on another.

Devices in different subnets cannot communicate directly without the use of a router or layer 3 switch. Each subnet has its own IP address range. Devices within the same subnet share the same network address and have unique host addresses.

Communication between devices in different subnets requires routing. Broadcast traffic is contained within the boundaries of a subnet. Devices in one subnet do not receive broadcasts from devices in other subnets without routing.

Separate subnets are commonly used for security and to manage IP address space efficiently. By placing the TrueNAS system on a separate subnet, you can isolate it from general internet traffic. This helps in reducing the attack surface and provides an additional layer of security.

Separating the TrueNAS system onto its own subnet allows for more granular control over bandwidth allocation. You can prioritize or limit the bandwidth for traffic to and from the NAS, ensuring that it gets the necessary resources without being adversely affected by other network activities. Isolating the NAS on a dedicated subnet also helps prevent scenarios where heavy internet traffic impacts NAS performance, which is particularly important if the NAS is used for critical or time-sensitive operations. Implement Quality of Service (QoS) policies to prioritize storage traffic over other, non-essential traffic so that data transfers to and from the NAS receive the necessary priority and resources.

As your network grows, isolating the NAS onto its own subnet provides a scalable solution. It allows you to maintain control and security even as you add more devices and services to the network.

See Managing Interfaces and Setting Up Static IPs for more information.

iSCSI

iSCSI shares require specific networking considerations. Place iSCSI traffic on its own dedicated VLAN to isolate it from other network traffic. This enhances security, reduces the risk of interference, and provides easier Quality of Service (QoS) management.

Prioritize iSCSI traffic over other types of traffic on the network. This ensures that storage-related activities receive the necessary network resources for optimal performance.

Ensure consistent network configurations across all components involved in the iSCSI setup. Consistency helps in maintaining predictability and stability in the network environment.

For more information, see Block Shares (iSCSI). See also Best Practices for Configuring Networking with Software iSCSI from VMware.

MPIO for iSCSI

Use multipath I/O (MPIO) to combine interfaces when using iSCSI. Do not use link aggregation (LAGG).

MPIO is designed to improve the reliability and performance of data transfers by using multiple physical paths between the iSCSI initiator (client) and the iSCSI target (storage). MPIO can distribute I/O traffic across multiple paths, balancing the load and improving overall performance. If one path fails, MPIO can automatically switch to an alternate path, ensuring continuous access to the storage.

Both the iSCSI initiator and target must support MPIO for it to be effective. MPIO requires configuration on both the initiator and target sides. Each path needs to be properly identified and managed.

To configure multipath for iSCSI, go to the iSCSI screen and configure additional Portal IP addresses.
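
On the client side, the exact steps depend on the initiator. As a sketch, a Linux initiator using open-iscsi and multipath-tools might discover and log in to the target through both portal addresses, then confirm that both paths to the same LUN are visible (the portal IP addresses are placeholders):

# Discover targets through each portal address (placeholder IPs)
iscsiadm -m discovery -t sendtargets -p 192.168.10.10
iscsiadm -m discovery -t sendtargets -p 192.168.20.10

# Log in to the discovered targets
iscsiadm -m node --login

# Verify that multipathd sees multiple paths to the same LUN
multipath -ll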

iSCSI and VMware

Refer to the VMware best practices guide for information on configuring multiple network adapters when using VMware with your TrueNAS system.

Jumbo Frames

Enable jumbo frames on an iSCSI share to enhance storage performance through the transmission of larger data packets, reducing overhead and improving overall efficiency.

All network hardware, including switches and client systems, must support and enable jumbo frames to prevent fragmentation and performance degradation.

To enable jumbo frames, go to the Network screen and edit the parent interface(s). Set the MTU (Maximum Transmission Unit) to 9000. Click SAVE.

Reboot the system and restart the iSCSI share so that it inherits the new MTU setting.
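
To confirm that jumbo frames pass end to end without fragmentation, send a large ping with the do-not-fragment flag from a client. The target address is a placeholder; an 8972-byte payload plus 28 bytes of ICMP and IP headers produces a 9000-byte packet:

# Linux client: disallow fragmentation and send a 9000-byte packet
ping -M do -s 8972 192.168.10.10

# FreeBSD or TrueNAS CORE shell: -D sets the do-not-fragment bit
ping -D -s 8972 192.168.10.10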

ALUA

TrueNAS Enterprise

TrueNAS Enterprise High Availability (HA) systems should enable Asymmetric Logical Unit Access (ALUA) for iSCSI shares. ALUA and MPIO can be used at the same time.

ALUA allows a client computer to discover the best path to the storage on a TrueNAS® system. HA storage clusters can provide multiple paths to the same storage. For example, the disks are directly connected to the primary computer and provide high speed and bandwidth when accessed through that primary computer. The same disks are also available through the secondary computer, but because they are not directly connected to it, speed and bandwidth are restricted. With ALUA, clients automatically ask for and use the best path to the storage. If one of the TrueNAS® HA computers becomes inaccessible, the clients automatically switch to the next best alternate path to the storage. When a better path becomes available, as when the primary host becomes available again, the clients automatically switch back to that better path to the storage.

To enable ALUA, select Enable iSCSI ALUA from the Target Global Configuration tab on the iSCSI screen.

Enterprise customers should contact iXsystems Support to validate network design changes.

Contacting Support

Customers who purchase iXsystems hardware or who want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.

Web: https://support.ixsystems.com
Email: support@ixsystems.com
Telephone (Monday - Friday, 6:00 AM to 6:00 PM Pacific Standard Time):
    US-only toll-free: 1-855-473-7449, option 2
    Local and international: 1-408-943-4100, option 2
Telephone, after hours (24x7 Gold Level Support only):
    US-only toll-free: 1-855-499-5131
    International: 1-408-878-3140 (international calling rates apply)

100 Gigabit Ethernet Design Considerations

In a 100G network environment, established best practices remain relevant; however, close attention to detail becomes critical to achieve expected performance in both single-client and multi-client deployments. Configuration and infrastructure issues that present minor inconveniences at lower speeds can cause significant disruptions at 100 gigabit speeds.

Capacity and Bandwidth

Network backhaul must have sufficient capacity and bandwidth to handle the aggregate traffic generated by one or more 100G connections. This includes accommodating normal data transfer rates and potential bursts of high-speed data. The network should provide low latency and high throughput to ensure efficient data transmission.

Network Architecture

The overall network architecture and design play a significant role in supporting 100G networking. This includes considerations such as deployment of appropriate routers, switches, and other networking equipment capable of handling high-speed data flows.

All components, such as optical transceivers, should be compatible. Consult your network switch manufacturer’s compatibility matrix to confirm.

The increased bandwidth capacity of a 100G network can result in enhanced performance. However, implementing Link Aggregation Control Protocol (LACP) on 100G switches, particularly with Multi-Chassis Link Aggregation (MLAG) and additional layers of abstraction, can introduce added complexity. This has the potential to cause performance issues and can require more meticulous configuration.

Implementing SMB Multichannel in a 100G environment can contribute to more efficient utilization of the available resources and improved overall performance for file sharing and data access.

The ability of SMB Multichannel to establish multiple connections in parallel aligns well with the capabilities of a high-speed network. This parallelism can result in efficient use of the available bandwidth and improved performance, especially when dealing with large data sets.

SMB Multichannel is also optimized for large workloads. In scenarios involving substantial data transfers, such as large file copies or data-intensive applications, SMB Multichannel in a 100G environment can help optimize connections and adapt to the high-speed networking capabilities, leading to improved responsiveness and reduced transfer times. While SMB Multichannel itself is not specific to a particular network speed, its benefits become more pronounced and impactful in environments with higher bandwidth capacities, such as those provided by 100G networks.

Record Size Tuning

The increased bandwidth of 100G networks has the potential to expose performance bottlenecks that have limited impact in 10G or even 40G environments. This means that optimization practices, such as record size tuning, can have a significant effect on network performance.

Ensure that the record size for each dataset aligns with its I/O workload. To adjust the record size in TrueNAS SCALE or CORE, go to Advanced/Other Options on the Add or Edit Dataset screen.
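
For reference, this UI setting corresponds to the ZFS recordsize property. A command-line equivalent, assuming a hypothetical dataset named tank/media that stores large sequential files, looks like the following. Note that changing recordsize only affects newly written data:

# Check the current record size for the dataset (example dataset name)
zfs get recordsize tank/media

# Use a 1M record size for large, sequential workloads such as media or backup files
zfs set recordsize=1M tank/media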

For a detailed discussion of record size tuning, see Tuning Recordsize in OpenZFS from Klara.


Performance Testing

After completing initial network configuration, you should obtain performance benchmarks and verify the configuration suits your use case.

Choose benchmark tests that closely resemble your real world workload. Synthetic benchmarks may not give a full understanding of the performance characteristics of your system.

Disk Testing

While not strictly a networking concern, storage system disk benchmarking via fio (Flexible I/O tester) can help you evaluate if the system is optimally tuned for your intended use. For instance, systems that deal primarily with large files, such as data backup or media storage, benefit from a larger block size, while systems dealing primarily with small files, like documents or logs, prefer a smaller block size. Confirm that your local storage performance is functioning as intended before moving on to test network bandwidth.

An example of a basic fio test is:

fio --ramp_time=5 --randrepeat=1 --direct=1 --name=test --bs=4M --size=4G --rw=IOtype

Where IOtype is the I/O operation to test. Options include:

randread - Random reads
randwrite - Random writes
readwrite - Sequential mixed reads and writes
randrw - Random mixed reads and writes
read - Sequential reads
write - Sequential writes

Do not run fio tests with write or trim workloads against an active storage device.

See the fio documentation for all parameters and options.
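
For example, a sequential read test with large blocks, run from a directory on the pool under test (the path is a placeholder), might look like this:

# Change to a dataset on the pool being tested so fio creates its test file there (placeholder path)
cd /mnt/tank/fio-test
fio --ramp_time=5 --randrepeat=1 --direct=1 --name=test --bs=4M --size=4G --rw=read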

Network Testing

Use iperf3 to test the maximum bandwidth between the TrueNAS system and a client computer. iperf3 comes installed on both TrueNAS CORE and SCALE. Before you begin, check the client computer and install iperf3 if needed.

Enter iperf3 -s on the TrueNAS system. This tells the TrueNAS system that it is acting as server/host and activates the iperf listener. The active port displays:

----------------------------------
Server listening on 5201 (test #1)
----------------------------------

If you want to specify a port to use, start the server with iperf3 -s -p 5101, where 5101 is the port to monitor. See Default Ports for a list of assigned port numbers.

Next, open a CLI on the client computer and enter iperf3 -c hostname -p 5201, where hostname is the IP address or hostname and domain for the host server and 5201 is the port the server is listening on. By default, the iperf3 test runs for 10 seconds and outputs transfer rates, for example:

Connecting to host 8.8.8.8, port 5201
[ 5] local 8.8.8.8 port 44706 connected to 8.8.8.8 port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  52.9 MBytes   444 Mbits/sec                  
[  5]   1.00-2.00   sec  55.5 MBytes   465 Mbits/sec                  
[  5]   2.00-3.00   sec  55.5 MBytes   465 Mbits/sec                  
[  5]   3.00-4.00   sec  53.4 MBytes   448 Mbits/sec                  
[  5]   4.00-5.00   sec  53.6 MBytes   450 Mbits/sec                  
[  5]   5.00-6.00   sec  58.5 MBytes   491 Mbits/sec                  
[  5]   6.00-7.00   sec  57.3 MBytes   481 Mbits/sec                  
[  5]   7.00-8.00   sec  58.5 MBytes   491 Mbits/sec                  
[  5]   8.00-9.00   sec  56.6 MBytes   475 Mbits/sec                  
[  5]   9.00-10.00  sec  59.1 MBytes   495 Mbits/sec                  
[  5]  10.00-10.00  sec   191 KBytes   538 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate            Retr
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   0.00-10.00  sec   562 MBytes   471 Mbits/sec     0            sender
[  5]   0.00-10.00  sec   561 MBytes   471 Mbits/sec                  receiver

iperf Done.

To establish multiple connections from the client system to the TrueNAS host, use the -P (parallel) flag. From the client computer, enter iperf3 -c hostname -p 5201 -P 4, where hostname is the IP address or hostname and domain for the host server, 5201 is the port the server is listening on, and 4 is the number of simultaneous connections to make.

Note that iperf3 is single threaded, which means that some hosts may become CPU-bound on faster networks such as 40G and 100G. See the iperf FAQ for more information.

To run parallel streams of iperf3 on multiple cores/ports, first initialize the TrueNAS system on multiple ports:

iperf3 -s -p 5101 & iperf3 -s -p 5102 & iperf3 -s -p 5103 &

Next, run multiple instances on the client system, using the -T flag to label the output:

iperf3 -c hostname -T s1 -p 5101 & iperf3 -c hostname -T s2 -p 5102 & iperf3 -c hostname -T s3 -p 5103 &