NFS benchmarks seem to be a bit disappointing, am I doing something wrong?

wlevels

Cadet
Joined
Apr 12, 2023
Messages
7
Hi all,

I'm pretty new to TrueNAS, but I've already read through a lot. It's a great community with a lot of information; hopefully I didn't miss an existing answer to the question below.
I spent some time practising on virtualised TrueNAS instances and have just built my first physical TrueNAS server.

Supermicro X10SLM+-LN4F
E3-1245 v3 - 3.4 GHz
4x8GB (32GB) DDR3 ECC
4x Gbit
2 disk Mirror (4TB Seagate Ironwolf 5900 RPM)

The benchmarks I am running against it seem a bit disappointing, so I would like some feedback from you on how I configured TrueNAS and how I am testing.
Hopefully you guys can point me in the right direction.

Currently I have the two 4TB Seagates in a mirror, and the dataset has been set up with default settings.
I am testing VMware performance over NFS: I connected the NFS datastore to one of my VMware hosts and installed an Ubuntu VM on it so that I can run some tests.
The pool contains one vdev and is 33% full (I already ran an rsync job from my Synology to the TrueNAS to start moving data over).

If I run iperf3, I have the following results:

PC --> TrueNAS = 92 Mbit/sec
PC <-- TrueNAS = 100 Mbit/sec

(Adding Synology results below with the TrueNAS results, just as a comparison. The Synology is a 5 disk DS1511+)
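The tests were the standard iperf3 client/server setup, roughly like this (<truenas-ip> is just a placeholder for the actual address, and -R reverses the direction for the PC <-- TrueNAS run):

# on the TrueNAS side
iperf3 -s
# on the PC
iperf3 -c <truenas-ip>
iperf3 -c <truenas-ip> -R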

When benchmarking with dd on the Ubuntu VM that runs on the VMware host and has its storage on TrueNAS, I get the following results:
ESXI VM --> TrueNAS
dd if=/dev/zero of=tmp.dat bs=2048k count=12k = (26 GB, 24 GiB) copied, 538.861 s, 47.8 MB/s
ESXI VM <-- TrueNAS
dd if=tmp.dat of=/dev/null bs=2048k count=12k = (26 GB, 24 GiB) copied, 233.112 s, 111 MB/s

If I run the same test on an Ubuntu VM that has its storage on the Synology, I get the following results:
ESXI VM --> Synology
dd if=/dev/zero of=tmp.dat bs=2048k count=12k = (26 GB, 24 GiB) copied, 305.886 s, 84.2 MB/s
ESXI VM <-- Synology
dd if=tmp.dat of=/dev/null bs=2048k count=12k = (26 GB, 24 GiB) copied, 266.033 s, 96.9 MB/s

The above benchmarks were done with only 16GB of memory in the TrueNAS system. However, I just received my upgrade; it's now 32GB of RAM, and the results are almost the same:
ESXI VM --> TrueNAS
dd if=/dev/zero of=tmp.dat bs=2048k count=12k = (26 GB, 24 GiB) copied, 406.366 s, 63.4 MB/s
ESXI VM <-- TrueNAS
dd if=tmp.dat of=/dev/null bs=2048k count=12k = (26 GB, 24 GiB) copied, 219.22 s, 118 MB/s

Are these results expected? I would have expected the benchmark from the VM to TrueNAS to fill the Gbit connection, but it doesn't.
Maybe I am doing something wrong, or do I need to add another mirror vdev before I can fill the Gbit connection?
It only seems to affect writes towards TrueNAS, not reads from it, and it also doesn't seem related to the VMware host, as the Synology results show.

Thanks a lot for helping me out!

Wesley
 

wlevels

Cadet
Joined
Apr 12, 2023
Messages
7
I think I can mostly answer my own question after reading the following two resources:

Resource: ZFS Storage Pool Layout 2022-12-28
Resource: Sync writes, or: Why is my ESXi NFS so slow, and why is iSCSI faster?

I found out that a mirror can deliver roughly the combined read speed of its two disks, while its write speed is limited to that of a single disk. Once I add additional mirror vdevs, so that writes are striped across them, I should see an increase in write speed.
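As a rough sketch of what that expansion looks like from the CLI (pool and disk names below are just placeholders; the TrueNAS UI does the equivalent when extending the pool):

# add a second two-disk mirror vdev to a pool named "tank" (names are examples only)
zpool add tank mirror /dev/ada2 /dev/ada3
# check the layout; writes should now be striped across both mirror vdevs
zpool status tank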

What also threw me off was why the Synology, which can be considered the less capable system, was not showing the same behaviour. It turns out async writes were turned on there, so the performance comparison is not realistic.
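To see how much of the difference is down to that, the sync behaviour of the dataset can be checked (dataset name below is a placeholder; sync=disabled would mimic the Synology's async behaviour, but it is unsafe for VM storage, so only for a quick comparison):

# show the current sync setting of the NFS dataset (name is an example)
zfs get sync tank/vmware
# temporarily mimic async behaviour - risks losing in-flight writes on power loss
zfs set sync=disabled tank/vmware
# revert to the default afterwards
zfs set sync=standard tank/vmware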

I will first expand my current pool with additional mirrors to see how much I can increase performance before looking at further options like an SLOG.
 
Joined
Dec 29, 2014
Messages
1,135
I can tell you from first-hand experience that adding an Intel Optane NVMe as an SLOG made a massive difference in the performance of NFS for me.
 

wlevels

Cadet
Joined
Apr 12, 2023
Messages
7
Sadly the motherboard I currently have doesn't have an M.2 slot, so I would need to look at something like a PCIe-to-M.2 card, which I expect would be a big hit on performance (I don't even know whether it would still provide a benefit in a scenario like that).
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
If you use a passive PCIe-to-M.2 adapter card, there is ZERO performance loss. All an M.2 NVMe slot does is expose 1, 2 or 4 PCIe lanes when used/configured as an NVMe slot, and most NVMe-type M.2 slots carry 4 PCIe lanes.

Even if you used a PCIe-switch-based adapter card with multiple M.2 slots, a PCIe switch chip does not add much overhead.


There are also differences between PCIe versions, which affect speed. In addition, some PCIe slots (or even M.2 NVMe slots) hang off the CPU's chipset rather than directly off the CPU, which may also affect speed.

But it is almost certain that any M.2 NVMe drive will be faster than a SATA III SSD.
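If you want to confirm what the drive actually negotiates once it sits in the adapter, on a Linux-based system (e.g. TrueNAS SCALE) something like this shows the capability versus the live link (run as root; the PCI address is just an example):

# compare link capability vs. the link actually negotiated
lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'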
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hello @wlevels - welcome to the forums.

I'm glad to see that you've found the articles about storage pools, SLOGs, and sync writes, especially as they relate to VMware and NFS.

A couple quick notes from myself:

If I run iperf3, I have the following results:

PC --> TrueNAS = 92 Mbit/sec
PC <-- TrueNAS = 100 Mbit/sec

While it might seem trivial or pedantic, there's a big difference between "Mbit/sec" and "MB/sec" - 8x to be exact, since one measures megabits and the other megabytes. Based on your in-VM dd results, though, I would think the iperf results actually show you getting the full 1000 Mbit/sec there.
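Working it through with your numbers: 92 Mbit/sec would be 92 / 8 ≈ 11.5 MB/sec, nowhere near the ~111 MB/sec your dd read test achieved, whereas a saturated gigabit link is 1000 / 8 = 125 MB/sec raw, or roughly 110-118 MB/sec after protocol overhead - right where your results land.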

dd testing itself also isn't the most accurate method - especially when using /dev/zero as an input, due to TrueNAS's inline compression that will merrily crush that 2M segment of zeroes down to a little stub of "repeat character '0' 2 million times" and skew the results.
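If you want numbers that aren't flattered by compression, fio inside the Ubuntu VM is a better yardstick, since its default data pattern is effectively incompressible - a rough sketch (install it with apt, and pick a --size comfortably larger than the VM's RAM):

# sequential write test with incompressible data, bypassing the guest page cache
fio --name=seqwrite --rw=write --bs=1M --size=8G --direct=1 --ioengine=libaio --group_reporting
# matching sequential read test
fio --name=seqread --rw=read --bs=1M --size=8G --direct=1 --ioengine=libaio --group_reporting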

@Arwen snuck in here with Elvish quickness while I was writing the above, so I'll just second her thoughts - a passive PCIe to NVMe M.2 adapter has zero performance overhead. It's just a bit of wiring on a PCB that changes the physical form factor - think of them like a drive tray that lets you fit a 2.5" SSD into a 3.5" bay. Doesn't change the device or the electrics, it's just allowing it to be physically attached.

Have a peek at the SLOG benchmark thread below, but the Intel Optane devices are quite ahead of almost everything else (if not everything entirely) in the M.2 form factor.


Given that you're constrained by what appears to be gigabit networking, something like a 16GB Optane M10 would be sufficient to give a major boost to sync-write speeds.
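Once it's in, attaching it as a log device is a one-liner from the shell (pool and device names below are placeholders; the pool management UI can do the same):

# attach the NVMe device as a separate log (SLOG) vdev - names are examples only
zpool add tank log /dev/nvme0n1
# confirm it appears under "logs" in the pool layout
zpool status tank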
 

wlevels

Cadet
Joined
Apr 12, 2023
Messages
7
I bought a couple of M10s in the end (it was a better deal with shipping to Europe). Sadly it seems my Supermicro X10SLM-F doesn't support PCIe bifurcation, which I had counted on. I will try to use a single one for now and probably look for another motherboard that does support bifurcation.
 