slow read on client side

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
Hello everyone, and thank you in advance for your time and assistance.

I'm currently facing challenges in optimizing file read/write performance on a high-spec setup and suspect that my TrueNAS Core configuration might be the bottleneck. Below, I've outlined the specifications and the issues encountered.

Server-Side Configuration:

  • Network Interface: rated at 40Gbps, with ~35Gbps achievable in practice.
  • Storage: 11 x 8TB NVMe drives in a striped (RAID 0) pool, each rated at ~7GB/s, for a theoretical aggregate of ~77GB/s.
  • CPU: 192 threads.
  • Memory: 2TB ECC RAM.
  • Storage Pool Configuration: I have experimented with both LZ4 compression and no compression, with a 1MB record size.
Client-Side Setup:

  • The client has a configuration similar to the server, including a 40Gbps network interface, ensuring a high-capacity link between the two.
Issue: Despite the high-end hardware, benchmarking tools report similar performance metrics across different tests: approximately 700MB/s for write operations and 1GB/s for read operations. These results fall short of the expected throughput, considering the hardware capabilities.
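For reference, the pool settings I've been switching between look like this from the shell (tank/data is just a placeholder for my actual dataset):

# check current values
zfs get compression,recordsize tank/data

# the two variants I've tried
zfs set compression=lz4 recordsize=1M tank/data
zfs set compression=off recordsize=1M tank/data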

I am relatively new to TrueNAS Core and networking at this scale. Thus, I suspect my setup or configuration might not be fully optimized. Could the TrueNAS Core system be the limiting factor here? Or perhaps there's an oversight in my configuration approach?

Any insights, suggestions, or guidance on how to improve the performance or tweak the TrueNAS Core settings would be greatly appreciated.

Thank you!
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
For network tuning:

For SMB multi-channel setup:

You may also want to enable jumbo frames on both sides of the 40 Gbps link.
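A rough sketch of how that might look from the shell, assuming a Mellanox mceX interface on the TrueNAS side and an example interface name on the Ubuntu side (make it permanent under Network → Interfaces, and only if the switch or direct link also supports MTU 9000):

# TrueNAS Core (FreeBSD); interface name is an example
ifconfig mce1 mtu 9000

# Ubuntu client; interface name is an example
sudo ip link set dev enp65s0f0 mtu 9000

# verify end to end from the client: 8972 = 9000 - 20 (IP) - 8 (ICMP), DF set
ping -M do -s 8972 <truenas-ip>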
 

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
Hey Samuel,

Thanks for the response, it's very helpful.

Regarding the second article: I'm using NFS right now. Is there a similar optimization article for NFS, or would you recommend switching to SMB? All client machines are Ubuntu.
 

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
I've attempted to optimize network performance by applying the guidelines on tuning high-speed networks (High-Speed Networking Tuning to Maximize Your 10G/25G/40G Networks). Unfortunately, this has not resulted in any noticeable improvement in read speeds, which remain around 1GB/s.

Given the hardware capabilities of both the server and client sides, this performance seems to be significantly under the potential throughput. I'm eager to identify and resolve any remaining bottlenecks or configuration issues that might be hindering performance.

Could there be other TrueNAS Core settings, network configurations, or even hardware-specific adjustments that I might have overlooked? I'm open to any and all suggestions, technical insights, or guidance that could help push the performance closer to the theoretical maximums.

Thank you once again for your time and help. Your expertise is invaluable to someone still navigating the complexities of high-performance networking and storage solutions.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
For NFS, you should increase the number of server processes. You have more than enough RAM for it. Default is 16. Since your CPU can handle up to 192 threads, you could try increasing this to 64 or 128, and enable support for UDP.
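As a sketch only (the thread counts above and the Ubuntu mount options below are starting points to experiment with, not known-good values for this hardware):

# TrueNAS Core side: Services -> NFS -> "Number of servers" in the UI;
# the corresponding sysctls can be checked from the shell
sysctl vfs.nfsd.minthreads vfs.nfsd.maxthreads

# Ubuntu client side: NFSv4.1, 1MB transfer sizes, several TCP connections
# (nconnect needs a reasonably recent kernel and may or may not help here)
sudo mount -t nfs -o vers=4.1,rsize=1048576,wsize=1048576,nconnect=8 \
    <truenas-ip>:/mnt/POOL-NAME/dataset /mnt/nfs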
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Can you provide more details on your hardware?
 

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
The spec is:
CPU: 2 x AMD EPYC 7Y43 (48 cores each)
Memory: 2TiB ECC (32 x Samsung 64GB DDR4-3200)
Drives: 11 x Samsung 7.68TiB NVMe, ~7GB/s each
Network card: Mellanox MCX516A (ConnectX-5), 40Gbit/s


Here is a screenshot of the dashboard:

Screenshot 2024-03-23 at 2.46.10 AM.png
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Do you get full rate running iperf both ways through mce1? The Dashboard says you're using 100GBase-SR4 cables instead of 40G cables.
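Something like this, as a sketch (address and stream count are examples):

# on the TrueNAS box
iperf3 -s

# on the client: forward direction, then reverse (-R)
iperf3 -c <truenas-ip> -t 30 -P 4
iperf3 -c <truenas-ip> -t 30 -P 4 -R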
 

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
The cable is 100G, but the network card is 40Gbps. I have run iperf3 and the peak was ~35Gbps. Right now the traffic isn't anywhere near 35Gbps, though, since the read speed is only ~1GB/s (~8Gbps).
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
What motherboard do you have? This is smelling like a PCIe lane restriction.
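One quick way to check from the TrueNAS shell, as a sketch (look at the nvme and Mellanox entries in the output):

# dump PCIe capability info; for each nvme/mlx device compare the negotiated
# "link xN" width and speed against what the drive/card should support
pciconf -lvc

# list the NVMe controllers
nvmecontrol devlist

If a drive that should be x4 negotiates at x1 or x2, or the NIC comes up below x8, that points at slot or backplane wiring rather than anything TrueNAS can fix.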
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
What is the local speed, i.e. without the network involved at all?
 

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
I'm inclined to agree based on my recent observations. After conducting a local speed test, the results consistently hovered around ~700MB/s. This has led me to wonder whether the issue at hand could be resolved through adjustments in software configuration, or if it necessitates a hardware change, specifically the motherboard.

The motherboard is a product called Inventec Horsea, equipped with dual AMD 48-core CPUs. Given that it properly recognizes and utilizes both CPUs, it strikes me as odd that the PCIe bus performance is lagging; the hardware should theoretically support higher speeds. Do you have insights on potential fixes?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
I don't know if there are UEFI BIOS knobs you can twiddle. From my reading of the vendor specs, the Horsea is optimized for compute and virtualization workloads, while the J80F is optimized for storage workloads, which would be more appropriate for TrueNAS.
 

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
Do you think there is anything I can set at the TrueNAS system level to fix this, like NUMA settings or PCIe 4.0 settings?

I went into the BIOS, and there is not much I can change there for PCIe 4.0.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
You can try spelunking through the sysctls for hw.pci.*, but the TrueNAS kernel isn't going to be able to overcome a hardware lane restriction.
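For example, just to see what's there:

# dump the hw.pci sysctls (add -d to see descriptions instead of values)
sysctl hw.pci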
 

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
Is there any possibility that this is due to an incorrect configuration in TrueNAS Core, given that the read speed under Ubuntu is significantly faster?
 

randomusername

Dabbler
Joined
Mar 22, 2024
Messages
11
I have run fio again on the local system and found two issues:
1. Single-disk read speed is the same as the 11 x NVMe SSD RAID 0 (stripe) read speed: both are ~7GB/s.
2. The remote client can only reach up to 300MB/s.
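For reference, the local runs look roughly like this (dataset and device names are placeholders; the scratch dataset with primarycache=metadata is there so the ARC doesn't hide the disks):

# scratch dataset that mostly keeps data out of the ARC
zfs create -o primarycache=metadata POOL-NAME/fiotest

# sequential read against the striped pool
fio --name=poolread --directory=/mnt/POOL-NAME/fiotest \
    --rw=read --bs=1M --ioengine=posixaio --iodepth=16 \
    --numjobs=4 --size=32G --group_reporting

# same pattern against a single raw NVMe device (node name may differ, e.g. nda0)
fio --name=diskread --filename=/dev/nvd0 --rw=read --bs=1M \
    --ioengine=posixaio --iodepth=16 --size=32G --readonly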
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Can you please run this quick test on the TrueNAS shell?

Write test:
time dd if=/dev/urandom of=/mnt/POOL-NAME/50gig.file bs=1G count=50

to check for reads you can do:
time cp /mnt/POOL-NAME/50gig.file /dev/null

Or

time dd if=/mnt/POOL-NAME/50gig.file of=/dev/null bs=1M


Also, your CPU is thermal throttling, which is contributing to some of this performance loss: it's at idle and already at 92 degrees C.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
You also need to test /dev/urandom to /dev/null to benchmark the random source itself, as it can be relatively slow.

And you also have to be careful that you're not just benchmarking the ARC... in either direction.
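Something along these lines, as a sketch:

# how fast is the random source by itself?
time dd if=/dev/urandom of=/dev/null bs=1M count=51200

And with 2TB of RAM, a 50GiB test file sits entirely in ARC once it has been written, so a straight re-read mostly measures memory. Reading it back from a dataset with primarycache=metadata (or after an export/import) gives a number that's closer to the disks.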
 