Slow speed when two hosts transfer at the same time

ruant

Dabbler
Joined
Feb 12, 2020
Messages
17
I'm basically just starting out with FreeNAS/TrueNAS, so I don't know all the dos and don'ts.

But my issue is that when copying large files from more than one host to the NAS, the speed goes completely to the shitter. But there are more issues here, I think.

Host1:
- iperf3: 24 MiB/s (204 Mbit/s)
- Single SMB transfer: 110 MiB/s in the Windows copy window (Windows Resource Monitor says roughly 880 Mbit/s)
- Multiple SMB transfers (host2 is also transferring): 3-4 MiB/s

Host2:
- iperf3: 3.6 MiB/s
- Single SMB transfer: 110 MiB/s in the Windows copy window (Windows Resource Monitor says roughly 880 Mbit/s)
- Multiple SMB transfers (host1 is also transferring): 3-4 MiB/s



I've tried 3 different 1 Gbit switches:
- TP-Link TL-SG105E
- TP-Link TL-SG108E
- D-Link DGS-1016D



That's the WRITE speed issue.
There is also the issue of reading. That's also affected by how many hosts are reading, and it will never go above 15 MiB/s (no matter how many hosts are trying to read).

Hardware:
- CSE-846 X9DRi-F BPN-SAS2-846EL
- LSI 9201-16i
- 24 x Seagate ST6000NM0034 6TB hard drives (in RAIDZ1; no cache, log, etc. drives)
OS: XCP-ng, VM with TrueNAS-12.0-U3.1
PCI passthrough of the entire LSI HBA to the VM.
VM spec: 4 cores, 32 GB RAM
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hmmm... What is your iperf3 transfer rate in Mbits/sec? Here's mine from a client on my 10G LAN, for example:
Code:
root@brutus:~ # iperf3 -c bandit -t30 -i5
Connecting to host bandit, port 5201
[  5] local 172.16.10.13 port 22211 connected to 172.16.10.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-5.00   sec   587 MBytes   985 Mbits/sec    0   4.01 MBytes
[  5]   5.00-10.00  sec   590 MBytes   990 Mbits/sec    0   4.01 MBytes
[  5]  10.00-15.00  sec   590 MBytes   990 Mbits/sec    0   4.01 MBytes
[  5]  15.00-20.00  sec   590 MBytes   990 Mbits/sec    0   4.01 MBytes
[  5]  20.00-25.00  sec   590 MBytes   990 Mbits/sec    0   4.01 MBytes
[  5]  25.00-30.00  sec   590 MBytes   990 Mbits/sec    0   4.01 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec  3.45 GBytes   989 Mbits/sec    0             sender
[  5]   0.00-30.02  sec  3.45 GBytes   987 Mbits/sec                  receiver

iperf Done.

How did you configure your pool? How many vdevs? Is it one 24-disk RAIDZ1 vdev? Or multiple RAIDZ1 vdevs? You can post the output of zpool status tank in 'code' brackets to show this information.

The reason I ask is that a 'wide' RAIDZ1 array of 24 disks would have very poor performance, so I hope you didn't configure your system this way.

Also, RAIDZ1 isn't recommended for use with large drives. In your case, 4 x 6-disk RAIDZ2 vdevs would have 4 times the IOPS of a single-vdev pool.
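
For reference, a 4 x 6-disk RAIDZ2 pool looks something like this from the command line. This is just a sketch: the daN device names and the pool name 'tank' are placeholders, and in practice you'd build it through the TrueNAS pool wizard.
Code:
# Sketch only: da0-da23 stand in for your 24 disks.
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  \
  raidz2 da6  da7  da8  da9  da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 \
  raidz2 da18 da19 da20 da21 da22 da23

Each vdev contributes roughly one disk's worth of random IOPS, which is why four vdevs give you about four times the IOPS of a single wide vdev.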

EDIT: Lastly, I forgot to ask what kind of NIC you're using.
 
Last edited:

ruant

Dabbler
Joined
Feb 12, 2020
Messages
17

The pool might be the problem... I smashed all 24 drives into one vdev in RAIDZ1.
I want max storage capacity, but if the speed is gonna be this bad, I need to rethink that.
Code:
root@truenas[~]# zpool status Main_24x6TB
  pool: Main_24x6TB
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        Main_24x6TB                                     ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            gptid/ca036674-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/caa6c9bd-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/caf22447-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/cbedf69e-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/cce6bb7c-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/ce50c65d-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/cf24bd4c-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/ce9d374e-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/cffa8544-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d139f13b-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d0f8ee6f-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d2c3464f-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d3480a54-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d324529e-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d3b60337-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d4b41e09-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d5d7c685-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d6198d49-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d7c50aa5-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d7f94664-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d872fea0-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d91b2c04-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d9907549-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0
            gptid/d9f33b07-bb3b-11eb-aed6-4134f6fbb6cb  ONLINE       0     0     0

errors: No known data errors


The server has an Intel® i350 dual-port GbE NIC.
Only one port is connected.

Host1: Windows 10 - Intel(R) Ethernet Connection (2) I219-V
Code:
Connecting to host 10.0.10.125, port 5201
[  4] local 10.0.10.101 port 59758 connected to 10.0.10.125 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  28.6 MBytes   240 Mbits/sec
[  4]   1.00-2.01   sec  31.5 MBytes   263 Mbits/sec
[  4]   2.01-3.01   sec  29.8 MBytes   249 Mbits/sec
[  4]   3.01-4.00   sec  26.6 MBytes   224 Mbits/sec
[  4]   4.00-5.00   sec  27.6 MBytes   232 Mbits/sec
[  4]   5.00-6.00   sec  29.8 MBytes   250 Mbits/sec
[  4]   6.00-7.00   sec  30.2 MBytes   254 Mbits/sec
[  4]   7.00-8.01   sec  29.8 MBytes   248 Mbits/sec
[  4]   8.01-9.00   sec  31.0 MBytes   261 Mbits/sec
[  4]   9.00-10.00  sec  30.5 MBytes   256 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec   295 MBytes   248 Mbits/sec                  sender
[  4]   0.00-10.00  sec   295 MBytes   248 Mbits/sec                  receiver

iperf Done.


Host2: Windows 10 - Intel(R) Ethernet Connection (2) I219-V
Basically the same as above today... Yesterday it was doing around 30 Mbit/s.


Splitting it up into several smaller vdevs is gonna lose me a lot of storage.
I would like fast speeds, with 1 drive as a "hot spare" (I've got cold spares ready to swap in in case of drive failures).
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
"VM spec..." Running FreeNAS on a VM w/ 4C and 32GB of RAM?

Eliminate the VM aspect and come around again. FreeNAS/TrueNAS does not like living in a VM, and virtualizing it is strongly discouraged.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Pool design is pretty much a trade-off between performance and capacity. I run a pair of 24-bay Supermicro servers here at work; I configured their pools with 3 x 7-disk RAIDZ2 vdevs plus a cold spare. For top performance -- and the least capacity -- you could use 12 mirrors. Whatever route you take, you definitely do not want a single 24-disk RAIDZ1 vdev; this may be the source of your problems, though you may still have network issues as well.
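
Just as a sketch of the mirror layout (placeholder device and pool names again; the GUI is the normal way to do this):
Code:
# 12 x 2-way mirrors: best IOPS, but only half the raw capacity.
zpool create tank \
  mirror da0  da1   mirror da2  da3   mirror da4  da5  \
  mirror da6  da7   mirror da8  da9   mirror da10 da11 \
  mirror da12 da13  mirror da14 da15  mirror da16 da17 \
  mirror da18 da19  mirror da20 da21  mirror da22 da23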

@jenksdrummer makes a good point: FreeNAS/TrueNAS running on-the-metal will nearly always beat a virtualized instance, and you have to know what you're doing to make it work. That said, I run 4 such 'all-in-one' systems using VMware v6.7 and I get reasonable performance; good enough for my purposes at least. VMware has good network support out-of-the-box. I'm not familiar with the hypervisor you're using; it may require some network tweaking.
 

ruant

Dabbler
Joined
Feb 12, 2020
Messages
17

I've done some more testing.
I degraded the raid to get one "free" drive in the server.
On this single disk I get 170 MiB/s writes and 200 MB/s reads (using `dd`).
Moving files off the 24x RAIDZ1 (which is now one drive down, but still alive thanks to the Z1), I get 100 MiB/s. So the read speed is obviously there...

So I'm thinking it's more of a network issue now, since 100 MiB/s internally is FAR from the speed I see over SMB on the 1 Gbit network (15 MiB/s).
(However, the write speed still saturates the 1 Gbit network.)
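
The `dd` test was nothing fancy; roughly this kind of thing (the path and size here are just illustrative):
Code:
# Sequential write: 16 GiB of zeros onto the spare disk's mount point.
# (Zeros compress, so on a compressed ZFS dataset this flatters the numbers.)
dd if=/dev/zero of=/mnt/single_disk/testfile bs=1m count=16384
# Sequential read of the same file back.
dd if=/mnt/single_disk/testfile of=/dev/null bs=1m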

I might just copy the data off the RAIDZ1 like this, onto spare drives, nuke the entire raid, and just run them all as JBOD drives.
That means dropping TrueNAS entirely and just running a simple Ubuntu setup that mounts all the drives as singles.
I don't really care about redundancy per se, but running it all in a striped configuration would make the entire thing go bad with one drive down. I'd rather just lose the data from that single bad drive instead.
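
Mounting them as singles on Ubuntu would basically just be something like this per drive (untested sketch; the device name, label, and mount point are placeholders):
Code:
# Format one drive, label it, and give it its own mount point.
mkfs.ext4 -L disk01 /dev/sdb
mkdir -p /mnt/disk01
echo 'LABEL=disk01 /mnt/disk01 ext4 defaults,nofail 0 2' >> /etc/fstab
mount /mnt/disk01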
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I get 100MiB/s. So the read speed is obviously there
This is terrible read speed. You should get something like 1GB/s or more.

Your network is also super messed up. iperf should be pegged at around 940 Mbit/s.
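
Worth re-running iperf3 in both directions and with parallel streams to narrow it down. Something like this from each client (PowerShell or any shell; <truenas-ip> is just a placeholder for your server's address):
Code:
iperf3 -c <truenas-ip> -t 30 -i 5        # client -> server (matches your write path)
iperf3 -c <truenas-ip> -t 30 -i 5 -R     # reverse: server -> client (read path)
iperf3 -c <truenas-ip> -t 30 -P 4        # 4 parallel streams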
 
Last edited:

ruant

Dabbler
Joined
Feb 12, 2020
Messages
17

The 100 MiB/s was from the RAID to a single drive, so I guess that's pretty normal. The disk is rated at 90-120 MiB/s.

Yeah, it's all a bit wonky... I'm pulling everything apart now. Fingers crossed :)
 