Very slow R/W speeds

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
that means that all your storage traffic is transiting your router at layer 3
Correct
What kind of router is it, and are you SURE it is up to doing that? Just because something has a gigabit interface doesn't mean it is capable of filling that interface on a consistent basis.
FortiGate 80F and a FortiSwitch 224E-POE. Block intra-VLAN traffic is not enabled, so all the routing will take place in the switch. The switch is rated at 56 Gbps of switching capacity, so I don't feel it is being overworked.

Would you mind sharing your topology/config so I can get another perspective on how to set up the networking for this TrueNAS?
 
Joined
Dec 29, 2014
Messages
1,135
Here is what I found for the specs on your firewall.
[attached screenshot: FortiGate 80F throughput specifications]

FortiGate 80F and a FortiSwitch 224E-POE. Block intra-VLAN traffic is not enabled, so all the routing will take place in the switch.
No, that is definitely not the case. Unless your switch has multiple layer 3 (VLAN) interfaces (which isn't what it sounds like to me from the way you describe it), the traffic is being routed at layer 3 by the firewall. Unless you have done some other configuration to bypass inspection, the firewall is still inspecting the traffic that transits it.
The switch is rated at 56 Gbps of switching capacity, so I don't feel it is being overworked.
No, I am sure the switch isn't being overworked. The firewall doing the layer 3 routing is almost certainly the bottleneck. Bear in mind that the total throughput numbers include any other traffic going through the firewall, like your general internet traffic.
Would you mind sharing your topology/config so I can get another perspective on how to set up the networking for this TrueNAS?
You can see more details if you expand the descriptions in my signature, but the key component is a Cisco Nexus 3048 with 48 10G ports and 4 40G ports. Both my FreeNAS units have a 40G connection to the VLAN that I have dedicated to the storage network, as do the ESXi hosts. The ESXi hosts also have a 10G connection to the vMotion VLAN. The client-facing side connects to a Cisco 3750E stack. The key thing that keeps the storage traffic at the highest possible speed is that the FreeNAS and ESXi hosts are on the same IP network, so there is no layer 3 routing for that traffic. You can get really high-speed L3 routing, but that costs way more than an L2 switch. Yes, the Nexus could do the L3 routing (probably at wire speed), but I was trying to keep things as simple as possible. This would also work for me if I ever ended up with a 40G/10G switch that wasn't L3 capable.

I guess the key point is that every component through which the traffic passes has some kind of delay associated with it. My recommendation would be to have dedicated NICs for storage in the ESXi hosts, put them on the same IP network as the FreeNAS, and connect that interface only to the switch. It could be a VLAN that goes nowhere else.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Try a direct connection, no switch
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
No, that is definitely not the case. Unless your switch has multiple layer 3 (VLAN) interfaces
Yes, it is definitely the case. There is already an inside-to-inside rule for the interfaces. The FortiGate and FortiSwitch are smart enough to understand there is a base rule in place, so there is no need to inspect the internal traffic. Ingress and egress traffic would still be inspected. If I were to enable Block intra-VLAN traffic, then yes, you would be correct.
[attached screenshot: FortiGate inside-to-inside policy]

The router is capable of up to 10 Gbps. It's running an SOC4, a 4-core ARMv8 CPU with 4 GB of RAM. It can easily handle the traffic even if the traffic were somehow making it back to the router. What you pulled is for ingress/egress traffic.
have dedicated NICs for storage in the ESXi hosts, put them on the same IP network as the FreeNAS, and connect that interface only to the switch.
I have toyed with the idea of repurposing one of the NICs I have dedicated to my SIEM that's currently placed in promiscuous mode.

I'll keep you posted, and I appreciate the input/help.
 
Joined
Dec 29, 2014
Messages
1,135
Yes, it is definitely the case.
I remain skeptical on that, but OK. Still, why bring additional processing/routing into the path when it doesn't bring you any value that I can see?
I'll keep you posted, and I appreciate the input/help.
Please do, and you are welcome.
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
Stage 1: Replaced the card and put it back in a LAGG like I originally wanted. 110 MiB/s - 123 MiB/s.
Stage 2 (not started): Remove the two NICs (one on each host) that are being used in promiscuous mode for the SIEM, and place them on the same VLAN as the TrueNAS box.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Stage 2 (not started): Remove the two NICs (one on each host) that are being used in promiscuous mode for the SIEM, and place them on the same VLAN as the TrueNAS box.
You should expect a major improvement from this. From my initial understanding of your post #39 here (https://www.truenas.com/community/threads/very-slow-r-w-speeds.90955/page-2#post-632907), your NFS traffic is crossing your router's SoC.

The setting of "intra-VLAN traffic bypass" is exactly for that - intra, not inter, as in "traffic within a VLAN going to another endpoint in the same VLAN" - so all of the traffic from the ESXi host initiator on 192.168.20.0/24 is being hairpinned through the router to reach the 192.168.22.22/28 and 192.168.22.42/28 targets.
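
To put the intra vs. inter distinction in concrete terms, here is a minimal sketch using Python's ipaddress module with the subnets from post #39; any NFS target that falls outside the host's own subnet has to be handed to the layer 3 gateway, which in this topology is the FortiGate:

Code:
import ipaddress

# Subnets/addresses taken from post #39 in this thread; nothing else is assumed.
esxi_subnet = ipaddress.ip_network("192.168.20.0/24")   # ESXi host-facing network
nfs_targets = [ipaddress.ip_address("192.168.22.22"),
               ipaddress.ip_address("192.168.22.42")]   # TrueNAS NFS export addresses

for target in nfs_targets:
    if target in esxi_subnet:
        print(f"{target}: same subnet -> switched at layer 2, never leaves the FortiSwitch")
    else:
        print(f"{target}: different subnet -> routed at layer 3, hairpins through the FortiGate")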

Get the router out of the chain. Give your ESXi host(s) an IP on the same subnets/VLANs as your NFS exports from the TrueNAS machine. Temporarily setting up a direct-connect as @SweetAndLow suggests will be an excellent way to test this.
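
If iperf3 isn't handy on both ends of that direct connection, a throwaway raw-TCP pusher like the sketch below will give you a rough number for what the link itself can do (plain Python 3; the IP in the usage comment is just the NFS export address mentioned earlier, so substitute whatever addresses the direct-connect interfaces end up with):

Code:
#!/usr/bin/env python3
# Minimal raw-TCP throughput check, a rough stand-in for iperf3.
# Usage: python3 tcp_perf.py server                    (receiving end)
#        python3 tcp_perf.py client 192.168.22.22      (sending end)
import socket
import sys
import time

PORT = 5001
CHUNK = 1 << 20      # 1 MiB per send/recv
TOTAL = 1 << 30      # client pushes ~1 GiB

def server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    received, start = 0, time.monotonic()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.monotonic() - start
    print(f"{received / 2**20 / elapsed:.1f} MiB/s from {addr[0]}")

def client(host):
    payload = bytes(CHUNK)
    sock = socket.create_connection((host, PORT))
    sent, start = 0, time.monotonic()
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()
    elapsed = time.monotonic() - start
    print(f"{sent / 2**20 / elapsed:.1f} MiB/s to {host}")

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])

If that hits gigabit line rate back-to-back but drops once the switch and VLANs are back in the path, you know exactly where to look.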

How many physical NICs are in your ESXi hosts?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Three in total. Two for the virtual switch (management/VMs) and one dedicated for the SIEM as stated.

I apologize; that was in your first post.

I would look at adding two separate VLANs for your NFS traffic as @Elliot Dierksen suggested:

My recommendation would be to have dedicated NICs for storage in the ESXi hosts, put them on the same IP network as the FreeNAS, and connect that interface only to the switch. It could be a VLAN that goes nowhere else.

With only two NICs, and assuming a requirement for both data and storage redundancy, you'll have to do it by running multiple VLANs on each interface. Do you have access to VMware vDS / Network I/O Control with your current vSphere license? Three hosts makes me think it might be a VMUG subscription, which gets you Enterprise Plus, but correct me if I'm wrong. Without NIOC you could also use a single NFS portgroup with active/standby overrides on a per-portgroup basis (e.g., network traffic active on NIC1 with standby on NIC2, NFS traffic active on NIC2 with standby on NIC1), but this limits you to the equivalent of a single 1 Gbps link.
 

TheUsD

Contributor
Joined
May 17, 2013
Messages
116
With only two NICs, and assuming a requirement for both data and storage redundancy, you'll have to do it by running multiple VLANs on each interface. Do you have access to VMware vDS / Network I/O Control with your current vSphere license? Three hosts makes me think it might be a VMUG subscription, which gets you Enterprise Plus, but correct me if I'm wrong. Without NIOC you could also use a single NFS portgroup with active/standby overrides on a per-portgroup basis (e.g., network traffic active on NIC1 with standby on NIC2, NFS traffic active on NIC2 with standby on NIC1), but this limits you to the equivalent of a single 1 Gbps link.

I'm not using VMUG, although I can understand where you got that from. I do not have access to vDS / Network I/O Control. I did some deeper digging and found that the traffic between VLANs I thought was only going east-west actually wasn't. I made some adjustments in FortiOS so the traffic between the two VLANs now stays east-west. This allows my LAGG to keep a steady stream of 120 MiB/s with peaks of 133 MiB/s. That maxes out the SFP connections, which makes me happy.
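
For context, those numbers sit right at the ceiling of a single gigabit path: 1 Gbps works out to roughly 119 MiB/s before protocol overhead and about 112 MiB/s of usable payload with full-size frames, so anything much past that is traffic being hashed across both LAGG members. A rough back-of-envelope check (the 1500-byte MTU and ~6% header overhead are assumptions, not measurements):

Code:
# Back-of-envelope ceiling for one 1 Gbps link, assuming a standard 1500-byte MTU.
link_bps = 1_000_000_000
raw_mib_s = link_bps / 8 / 2**20          # ~119.2 MiB/s before any overhead
# Ethernet framing + IP + TCP headers eat roughly 6% with full-size frames.
payload_mib_s = raw_mib_s * 0.94          # ~112 MiB/s of actual payload
print(f"raw line rate: {raw_mib_s:.1f} MiB/s, usable payload: ~{payload_mib_s:.0f} MiB/s")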

At this point all that's left to decide is whether I want to drop more money on new switches that support 10G plus SFP+ cards for the TrueNAS and ESXi hosts; use the two Dell N3048P switches I have collecting dust and just buy the SFP+ cards; or leave it as is and accept that the speed increase isn't worth the $2,100+ (even after my 50% discount on Fortinet products) for new switches, $540 for new cards, plus whatever the DAC cables will cost.

1st World problems ¯\_(ツ)_/¯

Thanks for all the help.
 