10Gb Correct Write speed (500-700MB/s), Terrible Read speed (100-120MB/s)

Kz72

Cadet
Joined
Jan 17, 2017
Messages
5
Hi guys, first post on these forums.
I just put together some parts lying around to rebuild an old QNAP 4-bay NAS. The setup is the following:

CPU: Intel G3220 3GHz dual core
MOBO: ASRock Z87 OC Formula
RAM: 24GB ddr3 non-ECC
HDs: 4x WD Red 3TB; 2x HGST Nas 4TB
NIC: Chelsio 10Gb
Running FreeNas 9.10.2-U1 (86c7ef5)

Desktop: (note, i'm running this as a hackintosh on Sierra)
CPU: Intel 4790K
Mobo ASRock H87 Pro4
RAM: 16GB DDR3
SSD: 1TB Samsung 850 EVO
NIC: Myricom 10Gb

Switch: Ubiquiti Unifi US-16-XG

So I used Blackmagic Speed Test at first to do a write/read test and was getting 750MB/s writes but ~100MB/s reads. I wiped my RAID and now have all disks in a RAID0 stripe for testing, so the disks should not be the bottleneck. CPU usage doesn't go above 50% during testing.

First thing I checked: iperf to and from the devices. Both directions come in around 6 Gbit/s. I reckon there is some overhead from the NIC here that's preventing a full 10Gb/s, but either way, it's much faster than 100MB/s, so I think this rules out network connection issues. (I also copied files from two other desktops on the same 10Gb network and that works fine at SSD-limited speeds, so I'm 99% sure it's not a network bottleneck.)

Second thing: I tried both AFP and SMB. Note that I already applied the "signing_required=no" fix with great results. The pattern of fast writes but slow reads is the SAME over AFP and SMB.
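(For reference, the usual client-side form of that fix on macOS is an entry in /etc/nsmb.conf on the Mac; shown here as a minimal sketch, check the nsmb.conf man page for your macOS version:)

```
# /etc/nsmb.conf on the macOS client
[default]
signing_required=no
```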

Third thing: real-world file transfer. Via AFP, I dragged a 200GB folder from my desktop to the NAS... it copied at about 400-500MB/s, which is the SSD's read speed. Great. Copying the same folder from the NAS to the desktop... slow as molasses... around 50MB/s, slower than gigabit.

I have Autotune checked (doesn't change things with or without it) and I also tried to mess around with testing speed using "dd", but didn't get very far.

I'm going to throw an SSD into the NAS itself and see how file transfers to and from it directly perform.

Do you guys have any ideas? Many thanks!
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Start by understanding the difference between bits/s and bytes/s. You used both interchangeably above to describe your problem, so right now you have no clear idea what kind of performance you should expect.

Run iperf going both ways and post the exact results.

Sent from my Nexus 5X using Tapatalk
 

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
Log on to the FreeNAS box with SSH
run: gstat -p
leave it running and the SSH window open
copy 50 GB to the NAS, and watch the last column of gstat (%busy)
now copy data from the box to the PC, and watch the same column (%busy)

does it go to 100%? and if yes, on all HDDs or only on 1-2?
if all the HDDs show similar numbers but one HDD doesn't, then that HDD is likely the problem

I had almost the same problem on my build, but I use 10 drives in RAID 10; 1 lazy disk pulled my whole datastore down to 70-100 MB/sec
HDDs will show different values, but there should not be more than about 15% difference between them

Now I have 600MB/sec read and write while running 20 VMs over 2 x 4Gb/s Fibre Channel (QLogic)

Also, if you want to stick with this build, buy ECC memory; your CPU supports a max of 32GB
http://ark.intel.com/products/77773/Intel-Pentium-Processor-G3220-3M-Cache-3_00-GHz

Also, I saw that your motherboard has 2 HDD controllers, one with 6 ports and one with 4 ports. Put both controllers in AHCI mode, flash the BIOS to the latest release, and disable BIOS functions you do not use on a NAS, like the audio controller and COM port.

Try to fill one controller before you use the other one

I am guessing that you use the on-CPU graphics; in the BIOS, set it to the lowest memory setting (32 MB, as I remember) and disable MAX DVMT for the VGA.
 

Attachments

  • gstat.PNG (16.4 KB)

Kz72

Cadet
Joined
Jan 17, 2017
Messages
5


Thanks for the suggestions!
So I did this, and during the write phase all the disks are at 100%, but during reads they drop down and bounce between 0% and 8%. This is true across all the disks. (See attached screenshot; I only had the 4 WD Reds in a RAID0 pool here to eliminate the variable of different disk types.)

All the disks are connected to the same Z87 Intel controller in AHCI mode; I was careful to do that. I will change the BIOS settings later when I have access to the machine. It's not the ideal board, I know, and I will upgrade to an IPMI-capable Supermicro with ECC later, but I had the parts lying around and wanted to learn FreeNAS before investing $$$.

Regarding IPERF, here are the results:

Going to the NAS:


$ iperf -c 192.168.1.148 -P 1 -w 65000
------------------------------------------------------------
Client connecting to 192.168.1.148, TCP port 5001
TCP window size: 69.9 KByte (WARNING: requested 63.5 KByte)
------------------------------------------------------------

[ 4] local 192.168.1.147 port 50147 connected with 192.168.1.148 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 5.13 GBytes 4.41 Gbits/sec

Going from the NAS to the desktop:

% iperf -c 192.168.1.147 -P 1 -w 65000

------------------------------------------------------------
Client connecting to 192.168.1.147, TCP port 5001
TCP window size: 69.9 KByte (WARNING: requested 63.5 KByte)
------------------------------------------------------------

[ 3] local 192.168.1.148 port 57313 connected with 192.168.1.147 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 4.36 GBytes 3.74 Gbits/sec


It gets as high as 6.67 Gbit/s on some runs. Not full 10Gb, but clearly faster than gigabit.
 

Attachments

  • Screen Shot 2017-01-18 at 8.01.24 AM.png (41.5 KB)

Holt Andrei Tiberiu

Contributor
Joined
Jan 13, 2016
Messages
129
So they are all at 100% on write?

Do you have another controller around? It can be 3Gb/s, just for testing. I would try another controller before anything else.
Also, as I remember, ASRock has "instant boot" and other technologies in the BIOS to speed up booting; disable those too.
 

Kz72

Cadet
Joined
Jan 17, 2017
Messages
5

Yes, during writes the gstat -p %busy values are all at 98-100%. The real-world transfer results are consistent with that; I'm getting 500-600MB/s (basically limited by my SSD).

I will tinker with the BIOS settings tonight. Times like these convince me that I absolutely need IPMI on my next board.

I do not have another controller card. I was looking at an LSI HBA, but wanted to try onboard first. It could be the issue, but a super weird one, as I've booted from multiple SSDs on this motherboard in the past and read speed was never a problem.

I also tried RAID0 striping the two HGSTs and doing a transfer via AFP... i got basically the same results as the 4xWD red RAID0.

This is weird, right? I'm not just a total n00b? Write speed is almost never 5x faster than read speed in a RAID0... or in general...
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
What happens when you test read speeds locally on your FreeNAS system using something like dd? Find a big file you have and read it with dd to /dev/null. This will verify that your pool isn't the problem. After that, I suspect it's the client NIC that's having trouble keeping up.
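A minimal sketch of that local read test (the file path is just a placeholder; on the NAS you'd point if= at any existing large file on the pool, and on FreeBSD's dd use bs=1m):

```shell
# Create a throwaway 100 MB file, then read it back to /dev/null.
# The second dd reports the raw sequential read rate of the pool,
# with no network or SMB/AFP in the way.
dd if=/dev/zero of=/tmp/readtest.bin bs=1M count=100
dd if=/tmp/readtest.bin of=/dev/null bs=1M
rm /tmp/readtest.bin
```

One caveat: a freshly written file may be served from the ZFS ARC (RAM cache), so for a true disk read use a file larger than RAM, or re-read one that hasn't been touched recently.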

Sent from my Nexus 5X using Tapatalk
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139
Edited because I missed the post with the iperf tests.

Something is wrong with one of your NICs. My iperf tests on my Chelsio cards consistently come in near line rate. All 9 of my NICs have been in use for 2+ years without a hiccup.

iperf may not be showing you the errors that are happening during your read op, when the data actually makes it to your Apple box.
 

Kz72

Cadet
Joined
Jan 17, 2017
Messages
5
You guys are right, it's my Myricom NIC. I tried another PC and I'm getting solid 10Gb reads and writes.

It must be because I have the NIC in a PCIe Gen3 x4 slot and it's a PCIe Gen2 x8 card. It's weird that it affects read speeds so much more than writes, but I guess it's all very card dependent...
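For what it's worth, a back-of-envelope link-bandwidth check (assuming the card negotiates Gen2 x4 in that slot) suggests the narrower link alone shouldn't cap 10GbE:

```shell
# PCIe Gen2 runs 5 GT/s per lane with 8b/10b encoding,
# leaving roughly 4 Gbit/s of usable bandwidth per lane.
lanes=4
gbit_per_lane=4
echo "$((lanes * gbit_per_lane)) Gbit/s usable"   # 16 Gbit/s > 10 Gbit/s
```

So in theory a Gen2 x4 link is still enough; driver or card behavior on the narrower link may be what's really hurting reads.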

Guess it's time to upgrade that motherboard!

Thanks for your help guys!
 

Donny Davis

Contributor
Joined
Jul 31, 2015
Messages
139

Yea, 10Gb gear can be picky. My stuff is all on fiber, and the Mellanox cards on Linux didn't want to play nice with the Chelsio cards on BSD. Switched the Mellanox cards to Intel cards, and boom, problem solved. Makes no sense because it was all going through the same switch... I digress. Glad to hear your issue is resolved.
 

Kz72

Cadet
Joined
Jan 17, 2017
Messages
5
Well, here's an interesting update. I ended up messing with the MTU: I turned it back down to 1500... then moved it back to 9000... and somehow that got the NIC working fully.

500MB/s up and down now....

WEIRD, but I'm a happy camper now.
 

simoneluconi

Cadet
Joined
Mar 17, 2019
Messages
1
I have the same problem. I have a couple of Chelsio N320E cards, one in my HP DL380 G7 and the other in my Windows PC. I tried swapping PCIe slots with the same results: very low read speed from FreeNAS but very high write speed. With some large files copied via Samba I get about 60MB/s reading from FreeNAS and about 400MB/s writing to it (which is about the max the disks can handle).

Technically it is plugged into a PCIe Gen2 x8 slot in the server and a PCIe Gen3 x4 slot in my PC. I also tried putting it in a PCIe Gen3 x16 slot in my PC; very slightly better speed, but still very slow. I also tried putting a fan on the heatsink, in case it was throttling, but got the same result. Any help? Below you can see an iperf test.

Annotazione 2019-03-17 113623.jpg
 