Is this poor performance on a high-performance server?

mvipe01

Dabbler
Joined
Feb 1, 2022
Messages
18
Hello Everyone,

This is my first experience moving to TrueNAS SCALE from traditional hardware RAID, where I ran everything myself in hand-made containers.

Below is my hardware:
Supermicro X10DRH-IT
2x Xeon E5-2699v4
512GB DDR4-2400T ECC REG RAM
(12) 8TB HGST SAS3 Drives configured in RAIDZ2
(1) LSI 9300-8i firmware 16.00.12.00
(1) LSI 9300-8e firmware 16.00.12.00
(1) Intel X520 Dual 10GB SFP+ NIC
(1) Chelsio T520 Dual 10GB SFP+ NIC
MTU on all networking: 9000
Autotune: On
Samba service Auxiliary Parameters tried:
aio write size = 0
server multi channel support = yes

No cache drives etc.

I am having big problems with SMB and iSCSI read/writes.

Initially, if I copy a 100GB test file over SMB from my Windows host (also using the X520, direct connection), the speed is 1250MB/s for the first 30 seconds or so, then it drops to around 200MB/s.

Reads are poor in my opinion, at only around 220MB/s.

This is my first experience with non-hardware RAID and with TrueNAS, so my question is: what performance should I expect?

root@black[/mnt/data1/media]# dd if=/dev/random of=/mnt/data1/media/large-file-10gb.txt count=1024 bs=10485760
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 51.0364 s, 210 MB/s
root@black[/mnt/data1/media]#

I am only hitting 210MB/s over 10 drives? Am I missing something about the way ZFS does its RAID vs. traditional RAID? Prior to switching to TrueNAS, this same hardware would sustain 1200+MB/s in hardware RAID 10 all day long. Is this to be expected with this kind of setup?

Any help is appreciated, thank you.
 

mvipe01

Dabbler
Joined
Feb 1, 2022
Messages
18
Sorry, I also forgot to mention that my second vdev is a mirror of (2) 2TB Samsung 970 Evo NVMe drives, which shows the same performance:

root@black[/mnt/nvme1]# dd if=/dev/random of=/mnt/nvme1/large-file-10gb.txt count=1024 bs=10485760
1024+0 records in
1024+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 48.8798 s, 220 MB/s
root@black[/mnt/nvme1]#
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You are aware, I hope, that /dev/random, in trying to be a source of random data, does have a speed limit to the amount of data it can generate?
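If you want to see that ceiling by itself, a rough check (block size and count here are arbitrary) is to write the random data straight to /dev/null so no pool is involved:

# 10 GiB of random data, discarded; the reported rate is the most
# /dev/random can feed any dd-based disk test
dd if=/dev/random of=/dev/null bs=1M count=10240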
 

mvipe01

Dabbler
Joined
Feb 1, 2022
Messages
18
You are aware, I hope, that /dev/random, in trying to be a source of random data, does have a speed limit to the amount of data it can generate?
To be honest, I've been so deep down the rabbit hole trying to figure out the poor Samba performance that I didn't even think about it.
 

mvipe01

Dabbler
Joined
Feb 1, 2022
Messages
18
You are aware, I hope, that /dev/random, in trying to be a source of random data, does have a speed limit to the amount of data it can generate?
Alright, so with /dev/zero I am able to hit 2GB/s on my ZFS pool, so it must be a limitation or problem on the SMB/iSCSI/network side causing the slow speeds:

root@black[/mnt/data1/media]# dd if=/dev/zero of=/mnt/data1/media/100GB.img count=1024 bs=104857600
1024+0 records in
1024+0 records out
107374182400 bytes (107 GB, 100 GiB) copied, 52.3983 s, 2.0 GB/s
root@black[/mnt/data1/media]#
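(One thing I still need to rule out is compression flattering that /dev/zero number, since zeroes compress to almost nothing; something like this should show it:)

zfs get compression,compressratio data1/media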
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The right way to use that would be to generate a file (I guess in a dataset where you set primarycache and secondarycache to none) using /dev/random and then use that file as the source... or let it be cached if you want the fastest possible write and don't wish to test read speeds.
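Roughly along these lines, as a sketch (the dataset name data1/bench is just an example, adjust to your pool):

# dataset with ARC/L2ARC caching turned off so reads actually hit the disks
zfs create -o primarycache=none -o secondarycache=none data1/bench
# generate an incompressible source file once (slow, but only done once)
dd if=/dev/random of=/mnt/data1/bench/source-10g.bin bs=1M count=10240
# then time reads of that file, or copies of it to another dataset
dd if=/mnt/data1/bench/source-10g.bin of=/dev/null bs=1M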
 

mvipe01

Dabbler
Joined
Feb 1, 2022
Messages
18
root@black[/mnt/nvme1/test]# rsync -av -P /mnt/data1/media/100GB2.img /mnt/nvme1/test/100GB.img
sending incremental file list
100GB2.img
107,374,182,400 100% 964.70MB/s 0:01:46 (xfr#1, to-chk=0/1)

sent 107,400,396,905 bytes received 35 bytes 1,008,454,431.36 bytes/sec
total size is 107,374,182,400 speedup is 1.00
root@black[/mnt/nvme1/test]# rsync -av -P /mnt/nvme1/test/100GB.img /mnt/data1/media/100GB3.img
sending incremental file list
100GB.img
107,374,182,400 100% 943.78MB/s 0:01:48 (xfr#1, to-chk=0/1)

Thank you for your suggestion,

It appears that locally I can get about 1GB/s read/write between the (12) 8TB pool and the dual-NVMe setup, so my problem must be on the network/Samba side. I set up a ramdisk on the TrueNAS system, and FTP from my machine was able to transfer the 100GB file at around 950MB/s, so it has to be Samba/iSCSI related.
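For reference, the ramdisk was nothing special, roughly like this (size and mountpoint are just what the test file needed):

mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=110G tmpfs /mnt/ramdisk
# then pointed the FTP share at /mnt/ramdisk and copied the 100GB file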
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
It appears that locally I can get about 1GB/s read/write between the (12) 8TB pool and the dual-NVMe setup, so my problem must be on the network/Samba side. I set up a ramdisk on the TrueNAS system, and FTP from my machine was able to transfer the 100GB file at around 950MB/s, so it has to be Samba/iSCSI related.
That would certainly point to things like async vs sync writing... if you write to a dataset with sync=disabled over SMB, is it still slow?
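For example (assuming the share sits on data1/media; put it back when you're done testing):

zfs set sync=disabled data1/media
# repeat the 100GB SMB copy test, then restore the default
zfs set sync=standard data1/media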
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I am only hitting 210MB/s over 10 drives? Am I missing something about the way ZFS does its RAID vs. traditional RAID? Prior to switching to TrueNAS, this same hardware would sustain 1200+MB/s in hardware RAID 10 all day long.
Incidentally, raidz2 would compare to raid6 rather than to raid10, and a stripe of mirrors (ZFS version of raid10) would likely perform better than a 12-wide raidz2, so let's keep the comparisons fair.
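For illustration only (hypothetical device names; in practice you would build this from the UI):

# 12 drives as a stripe of 6 mirror vdevs, the ZFS analogue of raid10
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf \
    mirror sdg sdh mirror sdi sdj mirror sdk sdl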
 