[HELP] TrueNAS-SCALE-22.02.0 X540 Samba only 425MB/s

Chocofate

Dabbler
Joined
Mar 21, 2022
Messages
10
System Version: TrueNAS-12.0-U8

Hardware:
CPU: AMD Ryzen 5 PRO 5650GE
Mobo: ASRock X570 Pro4
RAM: Samsung ECC UDIMM DDR4-3200 32 GB × 4
Graphics: GeForce RTX 3050
Network: Intel X540-T1
SSD: Crucial P2 2 TB × 2
HDD: Western Digital HC550 18 TB × 6

Question: After starting the Samba service, write speed only reaches 425 MB/s and read only 550 MB/s. How can I get copy speeds up to the full 10 Gb/s?
[Attachment: 000 (2).png]
[Attachment: 003.png]


Description:
1. An iPerf3 test with one thread reaches only 1.25 Gb/s; 10 threads reach the full 10-gigabit bandwidth (example commands below);
2. 6 HDDs in stripe mode or RAID-Z2 reach only 425 MB/s;
3. I tried setting the MTU to 9000, but it made no difference;
4. Copying to different HDDs at the same time still totals only 425 MB/s.
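
For reference, a minimal sketch of the two iPerf3 runs above, assuming the NAS (192.168.22.40) is running "iperf3 -s" (the 30-second duration is just an example):

# single TCP stream (reached only ~1.25 Gb/s here)
iperf3 -c 192.168.22.40 -t 30
# 10 parallel streams via -P (this saturated the 10 GbE link)
iperf3 -c 192.168.22.40 -t 30 -P 10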
[Attachment: 001 (2).png]
[Attachment: 002 (2).png]


Other:
1. Test files are stored on a Samsung 980 Pro 2 TB disk;
2. The X540 is in a PCIe 4.0 x4 slot;
3. ifconfig:
ix0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=e53fbb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,WOL_UCAST,WOL_MCAST,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
ether a0:36:9f:5e:eb:eb
inet 192.168.22.40 netmask 0xffffff00 broadcast 192.168.22.255
media: Ethernet autoselect (10Gbase-T <full-duplex,rxpause,txpause>)
status: active
nd6 options=9<PERFORMNUD,IFDISABLED>
igb0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=e527bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,WOL_MAGIC,VLAN_HWFILTER,VLAN_HWTSO,RXCSUM_IPV6,TXCSUM_IPV6>
ether 70:85:c2:db:e5:3b
media: Ethernet autoselect
status: no carrier
nd6 options=1<PERFORMNUD>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
inet 127.0.0.1 netmask 0xff000000
groups: lo
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
pflog0: flags=0<> metri

Thanks for reading and helping ~
 

Kris Moore

SVP of Engineering
Administrator
Moderator
iXsystems
Joined
Nov 12, 2015
Messages
1,471
We're still digging into some of the performance side of SCALE; we expect to start shipping tuning and performance improvements with the .1 and .2 releases. However, if you'd like to file a ticket on https://jira.ixsystems.com with the data here, that would help our team engage.
 

c77dk

Patron
Joined
Nov 27, 2019
Messages
468
I'm confused - the title says SCALE, but the post itself says CORE, and the ifconfig output tends to be CORE. Which is it?
 

tenknas

Dabbler
Joined
Mar 6, 2021
Messages
21
Aren't you limited by your disk speeds? I have a server with 2 x NVMe drives as cache and 24 x 10 TB enterprise drives, and I'm copying at over 750 MiB/s coming in from a 10 Gbit network port.

Try a ramdisk perhaps and retest.
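
For example, on SCALE (Linux) a quick RAM-backed target can be a tmpfs mount; the size and mount point here are only placeholders:

# create an 8 GiB tmpfs at a hypothetical mount point
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk

Share that path over SMB and repeat the copy; if the rate jumps well past 425 MB/s, the pool rather than the network is the bottleneck.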
 

Chocofate

Dabbler
Joined
Mar 21, 2022
Messages
10
I'm confused - the title says SCALE, but the post itself says CORE, and the ifconfig output tends to be CORE. Which is it?
Sorry, I made a mistake ...

I tried both the CORE and SCALE systems, with no difference.

The post uses the CORE test results.

I want to use Docker and other plugins, so I will choose SCALE first; that's why I posted here.

I cannot find where to edit the post to fix it.

... ...
 

Chocofate

Dabbler
Joined
Mar 21, 2022
Messages
10
I'm confused - the title says SCALE, but the post itself says CORE, and the ifconfig output tends to be CORE. Which is it?
[Attachment: 004.png]
[Attachment: 005.png]

Version: SCALE 22.02.0
ifconfig:
enp4s0f1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.22.40 netmask 255.255.255.0 broadcast 192.168.22.255
inet6 fe80::a236:9fff:fe5e:ebeb prefixlen 64 scopeid 0x20<link>
inet6 2408:8262:12bd:401d:a236:9fff:fe5e:ebeb prefixlen 64 scopeid 0x0<global>
inet6 fdf1:b63c:8220:0:a236:9fff:fe5e:ebeb prefixlen 64 scopeid 0x0<global>
ether a0:36:9f:5e:eb:eb txqueuelen 1000 (Ethernet)
RX packets 75016308 bytes 112949549415 (105.1 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2130018 bytes 168049965 (160.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

enp8s0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
ether 70:85:c2:db:e5:3b txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
device memory 0xfb900000-fb91ffff

lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 1221 bytes 190123 (185.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1221 bytes 190123 (185.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
 

Chocofate

Dabbler
Joined
Mar 21, 2022
Messages
10
We're still digging into some of the performance side of SCALE; we expect to start shipping tuning and performance improvements with the .1 and .2 releases. However, if you'd like to file a ticket on https://jira.ixsystems.com with the data here, that would help our team engage.
I opened a ticket, but I still cannot edit it, so I cannot add anything new about this question.
 

Chocofate

Dabbler
Joined
Mar 21, 2022
Messages
10
Aren't you limited by your disk speeds? I have a server with 2 x NVMe drives as cache and 24 x 10 TB enterprise drives, and I'm copying at over 750 MiB/s coming in from a 10 Gbit network port.

Try a ramdisk perhaps and retest.
Thanks for your advice.

How can I make a ramdisk on TrueNAS?
 

mervincm

Contributor
Joined
Mar 21, 2014
Messages
157
From my Windows 11 system <-> TrueNAS SCALE via SMB:

Uncached:
[Attachment: 1649026035840.png]

Copy again (cached):
[Attachment: 1649026118566.png]

Other direction:
[Attachment: 1649026338481.png]

I also have 6 similar disks in RAID-Z1 at the moment; that might explain the difference, but my iperf performance is also better (Mellanox CX2 cards via a UniFi x-16 switch).

[Attachment: 1649027810918.png]
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I'd also recommend testing with fio... not with disk copying. There are many ways that copying software can slow down transfers through lack of queue depth.
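
As a sketch, a sequential-write run against a dataset (the path, size, and queue depth here are placeholders, not a recommendation):

fio --name=seqwrite --directory=/mnt/tank/test --rw=write --bs=1M --size=5G --ioengine=libaio --iodepth=16 --direct=1

Raising --iodepth or --numjobs shows what the pool can sustain when requests are kept in flight, which a single file copy often fails to do.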
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Are you testing fio from a client? ... with SMB?

The numbers may be a little higher than expected because the test is only over 5 GB and you have quite a bit of RAM for caching. That would seem to explain why the numbers were very good with a 16 KB I/O size.

The numbers aren't horrible.

With 6 drives in RAID-Z2, there are only 4 data drives. Each of those data HDDs is delivering about 100 MB/s.
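(As a quick check against the 425 MB/s figure reported above: 425 MB/s ÷ 4 data drives ≈ 106 MB/s per drive, a plausible sequential rate for these HDDs.)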

For a single vdev you are getting 700-1000 IOPS, which is fairly good.

If you wanted more bandwidth... 3 pairs of mirrors would give you higher performance, especially for reads.
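
For illustration, a striped-mirror layout of the same six disks could be built like this from a shell (device names are hypothetical; on TrueNAS you would normally create the pool in the web UI):

# three 2-way mirror vdevs striped into one pool
zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

Reads can be served from either disk of each mirror, so read bandwidth scales better than a single RAID-Z2 vdev with the same drives.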
 