Slow transfer speeds on gigabit ethernet

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
Hi

I recently set up a server and noticed the transfer speeds were a little under 100 Mbps for both reads and writes, but my internet supports 1 Gbps.
I have tried changing the cables from the server to the switch and from the switch to my PC (Cat 6).

My server specs are a Supermicro X8DTL-3F with two Xeon X5550s and 32 GB of RAM.
I have one pool with 4x 4 TB WD Reds (not SMR) and 3x 250 GB cache SSDs. I'm just using the built-in gigabit connection on my motherboard.

My PC has an ASRock X570 Steel Legend with an integrated Intel gigabit port.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
How is your Internet speed relevant here?

How did you test your speed?

What system utilization do you see?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
What kind of files did you try to copy?
What are their sizes?
What is the record size of the pool?
What is the pool configuration? Z1, Mirror?

Please specify what you mean by 3x 250 GB cache SSDs. L2ARC? Special vdev?
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
How is your Internet speed relevant here?

How did you test your speed?

What system utilization do you see?
1. My home network supports 1 Gbps (switch, router, etc.); I only get 500 Mbps from my provider.

2. I tested via a file transfer and DiskMark.

3. I'm using them in a stripe.
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
What kind of files did you try to copy?
What are their sizes?
What is the record size of the pool?
What is the pool configuration? Z1, Mirror?

Please specify what you mean by 3x 250 GB cache SSDs. L2ARC? Special vdev?
1. I tried an MP4 file.
2. The file was 1.7 GB.

3. The pool is a 16 TB stripe of the four 4 TB WD Red hard drives.

4. The pool configuration is a stripe.

5. I just plugged the cache drives into the server and added them as vdevs; I didn't originally have these cache drives, but I thought they would solve the issue.
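Since "added them as vdevs" is ambiguous, it's worth verifying how the SSDs actually ended up in the pool; if they were accidentally added as data vdevs instead of cache, losing one of them would take the whole pool with it. A quick way to check from the TrueNAS shell:

zpool status

Cache devices show up under their own "cache" heading in the output; if the SSDs are listed alongside the HDDs at the top level, they are data vdevs.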
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
I will try to be as concise as possible, and you will see a lot of "I strongly suggest you to".

First: do know that SMR drives are known to cause issues with TrueNAS, especially (but not only) during resilvering. I would strongly suggest switching to CMR ones ASAP; if you want confirmation, you can post the model number and we can verify, or you can check yourself here or using Google.

Second: striped means a complete lack of redundancy, so if you lose even one of those HDDs, everything is gone forever (unless you are willing to fork out thousands of dollars). I would strongly suggest you destroy the pool and recreate it in a Z1 (at the very least) layout.

Third: when I asked for the record size, I meant the value you set (or leave at default) when you create the dataset. It influences how the data is stored on the (hard) disk sectors, and can be found under Storage > Pools > Dataset Actions (the 3 dots on the right) > Edit Options. By default it is set to 128K, and based on the size of the files you intend to store it can be changed to improve (mainly) performance (see the shell example after these points).

Fourth: using L2ARC does more harm than good if you don't have at least 64GB of RAM (at least, this is the consensus here); if you are using them as metadata or other special vdevs, please do specify it.

Fifth: you will have to test with different, more accurate/truthful methods (i.e. the fio command) in order to troubleshoot the issue; I will leave this to someone more competent than me.
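As a concrete illustration of the Third point, the record size can also be checked and changed from the shell; a minimal sketch, assuming a pool named tank with a dataset named media (both names are placeholders for your own):

zfs get recordsize tank/media
zfs set recordsize=128K tank/media

Note that a new recordsize only affects files written after the change; existing files keep the record size they were written with.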

Finally, I strongly suggest you to read the following:
EDIT: Also, please specify which version of CORE/SCALE you are using.
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
First: do know that SMR drives are known to cause issues with TrueNAS, especially (but not only) during resilvering. I would strongly suggest switching to CMR ones ASAP; if you want confirmation, you can post the model number and we can verify, or you can check yourself here or using Google.
Yeah, I know I have CMR; I have these: WD Red WD40EFRX 64MB 4TB.

Second: striped means a complete lack of redundancy, so if you lose even one of those HDDs, everything is gone forever (unless you are willing to fork out thousands of dollars). I would strongly suggest you destroy the pool and recreate it in a Z1 (at the very least) layout.
I'm not gonna store anything important on them, so I don't really care about losing any data.

Third: when I asked for the record size, I meant the value you set (or leave at default) when you create the dataset. It influences how the data is stored on the (hard) disk sectors, and can be found under Storage > Pools > Dataset Actions (the 3 dots on the right) > Edit Options. By default it is set to 128K, and based on the size of the files you intend to store it can be changed to improve (mainly) performance.
That is set to max

Fourth: using L2ARC does more harm than good if you don't have at least 64GB of RAM (at least, this is the consensus here); if you are using them as metadata or other special vdevs, please do specify it.
I added them as cache vdevs in TrueNAS, and I thought it would fix the low speed problem.

Fifth: you will have to test with different, more accurate/truthful methods (i.e. the fio command) in order to troubleshoot the issue; I will leave this to someone more competent than me.
I was gonna test the bandwidth with the iperf3 command when I got home.
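For reference, a minimal iperf3 run between the two machines looks like this (the address 192.168.1.50 is a placeholder for the TrueNAS box's actual IP):

On the TrueNAS server: iperf3 -s
On the PC: iperf3 -c 192.168.1.50
With -R added to test the reverse direction: iperf3 -c 192.168.1.50 -R

This measures raw network throughput with no disks involved, so it separates a network problem from a pool problem.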
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Yeah, I know I have CMR; I have these: WD Red WD40EFRX 64MB 4TB.
Yup, they are good.
I'm not gonna store anything important on them, so I don't really care about losing any data.
I am curious about why you picked ZFS and TrueNAS. Oh, and please specify your version as that could be useful.
That is set to max
This might well be the main issue imho.
I was gonna test the bandwidth with the iperf3 command when I got home.
Looks good.
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
[screenshot of the iperf3 test]

I am curious about why you picked ZFS and TrueNAS. Oh, and please specify your version as that could be useful.
Version:
TrueNAS-13.0-U2

This might well be the main issue imho.
I put it back down to 128K and it did not solve my problem.
 
cobrakiller58

Joined
Jan 18, 2017
Messages
525
@Ikon can you please run the DiskMark test again and post a screenshot of the results? It normally displays results in megabytes per second, so I'm concerned...
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
@Ikon can you please run the DiskMark test again and post a screenshot of the results? It normally displays results in megabytes per second, so I'm concerned...
Hi
I can't take a picture right now but I can type it here.

Read  Write
96    92
87    89
85    87
45    50

Those were the results (sorry for the bad formatting).
 
cobrakiller58

Joined
Jan 18, 2017
Messages
525
Without the picture I cannot say for sure, but I THINK you are getting the speed you should expect over a 1000 megabit network connection; 110 megabytes per second is practically the real limit of the connection.
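For the arithmetic behind that number: 1 gigabit/s divided by 8 bits per byte is 125 MB/s raw, and Ethernet, IP, and TCP headers eat roughly 5-6% of that, leaving a practical ceiling of about 117 MB/s; SMB adds its own overhead on top, so real-world file copies of around 100-113 MB/s are normal on gigabit.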
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
Without the picture I cannot say for sure, but I THINK you are getting the speed you should expect over a 1000 megabit network connection; 110 megabytes per second is practically the real limit of the connection.

So I need 10-gig Ethernet?
 
cobrakiller58

Joined
Jan 18, 2017
Messages
525
If what I'm thinking is true, 10 gig would double your speed; THEN your limitation would be your pool's speed lol

Actually, thinking about it, I'm not sure an X8 board would be capable of saturating a 10 gigabit network connection... does it have enough PCIe slots for NVMe SSDs and a NIC?
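As a rough sanity check on the pool side (these per-drive figures are ballpark assumptions, not measurements): a 4 TB WD Red manages on the order of 150 MB/s sequential, so a 4-wide stripe tops out somewhere around 4 x 150 = 600 MB/s for large sequential transfers, well under the ~1,170 MB/s a 10 GbE link can carry; random I/O would be far lower.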
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
The iperf3 test is in line with Cat 5e gigabit.
1 gigabit/s = 125 megabytes/s.

Are you sure about the units of measure in your first post? @cobrakiller58 might be spot on.
If you want to test your pool's maximum transfer speed, uncapped by the 1-gigabit ceiling, you have to use fio.
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
Actually, thinking about it, I'm not sure an X8 board would be capable of saturating a 10 gigabit network connection... does it have enough PCIe slots for NVMe SSDs and a NIC?
It does have PCIe slots, and I have a 10-gig card lying around here.
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
[screenshot of the DiskMark test]

Here is the DiskMark test; I don't know why, but for some reason it's slower now??
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Can you run the following to get more accurate testing?

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randwrite --size=50g --io_size=1500g --blocksize=128k --iodepth=1 --direct=1 --numjobs=1 --group_reporting
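Briefly, what that command does: --rw=randwrite issues random writes, --size=50g uses a 50 GB test file, --io_size=1500g sets the total amount of I/O to issue, --blocksize=128k matches the default ZFS recordsize, --iodepth=1 and --numjobs=1 keep it to a single outstanding request from a single thread, and --group_reporting aggregates the output. Run it from a directory on the pool so fio-tempfile.dat lands on the disks being tested, not the boot device.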
 

Ikon

Dabbler
Joined
Aug 26, 2022
Messages
49
Can you run fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randwrite --size=50g --io_size=1500g --blocksize=128k --iodepth=1 --direct=1 --numjobs=1 --group_reporting to get more accurate testing?
Do I just copy-paste your command, or how do I use it?
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Do I just copy paste your script, or how do I use it?
Paste it into the CLI (if you can), or use something like PuTTY.
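If SSH is enabled on the TrueNAS box, a minimal way to get a shell there looks like this (192.168.1.50 and the pool name tank are placeholders for your actual IP and pool):

ssh root@192.168.1.50
cd /mnt/tank
fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randwrite --size=50g --io_size=1500g --blocksize=128k --iodepth=1 --direct=1 --numjobs=1 --group_reporting

The cd matters: running fio from the boot device's filesystem would benchmark the boot drive instead of the pool.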
 