
SOLVED: 10GbE Transfer Speed Issue + Guide?


MauricioU

Member
Joined
May 22, 2017
Messages
39
SOLVED:
So I created a stripe/RAID0 and iperf continued to give me the exact same results; however, with a ramdisk I was getting full 10GbE throughput both ways. I guess RAIDZ was the bottleneck for the write speeds! =( lol.

Also, for anyone using this as a guide: I deleted all the tunables I added and continued getting full 10GbE throughput both ways with a stripe, which I take to mean the tunables aren't really changing anything. So I'm leaving them deleted.
-------------------------------------------------------------------------------------------

Hey all. This is my first post on here after being a lurker for many months.

TL;DR: I am looking for answers to these two questions:
  • Question 1: Why am I getting full 10Gbe speeds when reading off my FreeNAS but only half of that when writing to my FreeNAS?
  • Question 2: Does anyone have any idea why when I woke my computer up from sleep it refused to use the 10Gbe interface to transfer data? How can I make it so my computer only communicates through the 10Gbe interface when speaking to my FreeNAS?
Backstory/Guide of what I've done so far:

I just recently put together my first FreeNAS system with 10GbE in mind, for video and photo editing as well as all the other perks. As of right now I have 6 x 4TB HDDs (HGST) in a RAIDZ1. I know, I know; I am waiting on one more drive so I can turn it into a RAIDZ2 (I didn't anticipate the parity overhead of ZFS being so high, and converting my current array into a RAIDZ2 would leave me roughly 70% full from day one). In the meantime, I decided to get my feet wet and set the system up just for testing and fun, with nothing valuable or important on it. In particular, I wanted to test my 10GbE connection and have that ready to go.
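As a rough sanity check on that parity overhead, here is a back-of-the-envelope capacity sketch (plain shell arithmetic; the disk sizes and the 1 TB ~ 0.909 TiB factor are the only inputs, and real ZFS pools lose a bit more to metadata and padding):

```shell
# Rough usable capacity of the two layouts; integer math scaled by 1000.
# This is an approximation, not exact ZFS space accounting.
disk_tb=4
tb_to_tib=909                 # 1 TB is about 0.909 TiB (x1000 for integers)
z1_data=$(( 6 - 1 ))          # 6-disk RAIDZ1 keeps 5 data disks
z2_data=$(( 7 - 2 ))          # 7-disk RAIDZ2 also keeps 5 data disks
z1_tib=$(( disk_tb * z1_data * tb_to_tib / 1000 ))
z2_tib=$(( disk_tb * z2_data * tb_to_tib / 1000 ))
echo "6x4TB RAIDZ1: ~${z1_tib} TiB usable"
echo "7x4TB RAIDZ2: ~${z2_tib} TiB usable"
```

Both layouts land around the same usable space, which is why adding the seventh drive before moving to RAIDZ2 avoids the day-one capacity squeeze.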

I bought two Intel X540-T2s, one for the FreeNAS system and one for my Windows 10 Pro computer, connected directly with an RJ45 Ethernet cable. Both cards sit in PCIe 2.0 x8 slots, so they get full throughput. The two machines are connected to one another directly through the 10GbE interfaces, on a totally separate network (10.10.10.x) from my main 1GbE network (10.0.0.x).

I have read tons of guides online and watched YouTube videos on what to do on the Windows side of things. I updated the drivers, then went to my 10GbE interface (right-click > Properties > Configure) and set the following:

  • Interrupt Moderation: Disabled
  • Jumbo Packet: 9014 Bytes
  • Maximum Number of RSS Queues: 8 Queues (8 to match the core count of my i7 2600k)
  • Offloading options > Properties > IPsec Offload: Auth Header & ESP Enabled
  • Offloading options > Properties > IPv4 Checksum Offload: Rx & Tx Enabled
  • Offloading options > Properties > TCP Checksum Offload (IPv4): Rx & Tx Enabled
  • Offloading options > Properties > TCP Checksum Offload (IPv6): Rx & Tx Enabled
  • Offloading options > Properties > UDP Checksum Offload (IPv4): Rx & Tx Enabled
  • Offloading options > Properties > UDP Checksum Offload (IPv6): Rx & Tx Enabled
  • Performance Options > Properties > Receive Buffers: 4096
  • Performance Options > Properties > Transmit Buffers: 16384
  • Receive Side Scaling: Enabled

Then I opened the hosts file at C:\Windows\System32\drivers\etc in Notepad and added both the FreeNAS 10GbE IP address and hostname and my computer's 10GbE IP address and hostname, on two separate lines at the very bottom.
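For illustration, the two added lines looked something like this (the hostnames and exact addresses here are placeholders, not the real ones):

```
# C:\Windows\System32\drivers\etc\hosts -- placeholder entries
10.10.10.2    freenas-10g     # FreeNAS 10GbE interface
10.10.10.3    desktop-10g     # Windows box 10GbE interface
```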


On FreeNAS I went to the Network > Interfaces tab, selected my 10GbE interface, and under Options put “mtu 9000”. Then I went to Tunables and added a bunch of tunables recommended at http://45drives.blogspot.com/2016/05/how-to-tune-nas-for-direct-from-server.html (scroll down to NAS NIC Tuning):
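For reference, the tunables in guides like that one are typically sysctl values along these lines (representative FreeBSD names and numbers, not necessarily the exact set from the linked post):

```
# Representative 10GbE network tunables (sysctl type); values vary by guide
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
```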
2017-06-06 Original 10Gbe Tunables.jpg

(I added the values under Comment as well, in case I wanted to play with them later, which I did, so I could remember the original value at a glance.)

At this point, I created a 4GB RAM disk with the ImDisk Virtual Disk Driver, put some files on it, and transferred them over to my FreeNAS:
2017-06-06 Ramdisk to FreeNAS Transfer.jpg


Not bad, but much lower than I expected. Then I transferred those same files from my FreeNAS back to my computer’s RAM disk and:
2017-06-06 FreeNAS to Ramdisk Transfer.jpg


Voila! This is what I wanted. But I could not, and still cannot, understand why writing to my FreeNAS runs at half speed. Is it the HDDs bottlenecking? Something in my settings?
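One thing worth keeping straight when reading the Windows transfer dialog: Explorer shows MB/s while the link is rated in Gbit/s. A quick conversion, using an example figure of 550 MB/s (an assumed number, not one taken from the screenshots):

```shell
# Convert an Explorer-style MB/s reading to Mbit/s (8 bits per byte)
mb_per_s=550                      # example half-speed write figure
mbit_per_s=$(( mb_per_s * 8 ))
echo "${mb_per_s} MB/s is ${mbit_per_s} Mbit/s on the wire"
# A full 10GbE link needs roughly 1100-1200 MB/s to saturate
```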

So I started running different tests. First I ran CrystalDiskMark against my FreeNAS:
2017-06-06 CrystalDiskMark FreeNAS test.jpg

Same or similar results.

Then I did some iperf tests. This is the client side read out (from my windows 10 computer):
2017-06-06 Iperf test From Computer.jpg


This is the server side read out from the ssh terminal for my FreeNAS:
2017-06-06 Iperf test From SSH (FreeNAS).jpg


I don’t understand why I keep getting the warning “Couldn’t compute FAST_CWD pointer” on the client side, nor why I was not saturating my 10GbE bandwidth. A multi-threaded test did saturate the connection, but only in one direction; transfers from my computer to the FreeNAS still ran at half speed:
2017-06-06 Iperf test multi  threaded.jpg
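For anyone reproducing these runs, the iperf2 invocations were along these lines (the commands are stored as strings so the sketch is self-contained; 10.10.10.2 is a stand-in for the FreeNAS 10GbE address, and the 2M window is a common suggestion rather than a measured optimum):

```shell
# iperf2 command sketch; run these on the respective machines
server='iperf -s -w 2M'                          # on the FreeNAS box
single='iperf -c 10.10.10.2 -t 30 -i 1'          # one stream, from Windows
multi='iperf -c 10.10.10.2 -t 30 -i 1 -P 4'      # four parallel streams
echo "$multi"
```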


Lastly, I found more tunables in this post, recommended by user Mlovelace: https://forums.freenas.org/index.php?threads/will-this-gear-do-10-gigabits.50720/
2017-06-06 Updated 10Gbe Tunables.jpg

So I added what I didn't have and modified what I did have, keeping the comment column with the original values recommended by 45Drives.

After adding these tunables, I reran all the tests I shared above and got the exact same results, if not slightly worse. I am lost and do not know what more to do...

Help?

On top of this, I put my computer to sleep, went to do some things, and when I came back and reran these speed tests I was ONLY getting gigabit speeds. No matter what I did, the computer kept sending data through the 1GbE interface instead of the 10GbE interface. I had to restart my computer to get traffic back onto the 10GbE interface.
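A way to check which interface Windows actually picks after waking (again stored as strings so the sketch stands alone; 10.10.10.2 stands in for the FreeNAS address and the "10GbE" interface alias is hypothetical):

```shell
# Windows-side diagnostics for the post-sleep fallback, kept as strings
check_route='route print 10.10.10.0'                          # which NIC owns the route
find_route='Find-NetRoute -RemoteIPAddress 10.10.10.2'        # PowerShell equivalent
prefer_10g='Set-NetIPInterface -InterfaceAlias "10GbE" -InterfaceMetric 5'
echo "$check_route"
```

Lowering the 10GbE interface metric (the last command) is one way to bias Windows toward that NIC, though since the two subnets are separate the route table should normally already do the right thing.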

In summary, I have two main questions:

Question 1: Why am I getting full 10Gbe speeds when reading off my FreeNAS but only half of that when writing to my FreeNAS?

Question 2: Does anyone have any idea why when I woke my computer up from sleep it refused to use the 10Gbe interface to transfer data? How can I make it so my computer only communicates through the 10Gbe interface when speaking to my FreeNAS?
 


MauricioU

Member
Joined
May 22, 2017
Messages
39
What are your pool speeds?

EDIT: Have you tried here?

EDIT2: What's your full hardware?

How do I find what my pool speeds are?

And, yes, I've read through that 10GbE primer. The overview was not too useful for configuring it all, but there's lots of good information in the discussion.

Full hardware:
CPU: dual Xeon E5-2670s
Mobo: Intel S2600CP2J
RAM: Hynix 128GB kit
Boot drive: Lexar JumpDrive 32GB USB 3.0
HDDs: 6 x HGST 4TB 128MB cache 7200RPM drives
PCIe: Intel X540-T2 10GbE dual-port NIC
PCIe: LSI 9211-8i SAS/SATA 8-port PCIe card
PSU: EVGA SuperNOVA 750W 80+ Bronze
 

c32767a

Senior Member
Joined
Dec 13, 2012
Messages
362
Hey all. This is my first post on here after being a lurker for many months.

TLDR; I am looking for answers to these two questions:
  • Question 1: Why am I getting full 10Gbe speeds when reading off my FreeNAS but only half of that when writing to my FreeNAS?
  • Question 2: Does anyone have any idea why when I woke my computer up from sleep it refused to use the 10Gbe interface to transfer data? How can I make it so my computer only communicates through the 10Gbe interface when speaking to my FreeNAS?
To make any meaningful suggestions, we really need the full tech specs of your client and NAS boxes: CPU, RAM, clocks, etc.

The answer to question one is usually found in the amount of data being read/written versus the amount of RAM in the FreeNAS system. Since RAM is used as cache, a small enough test dataset will hide the real pool performance from the testing tools. This is why more accurate testing methodologies use 2x or 3x available RAM as a lower size limit for the test dataset.
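The 2x-3x rule translates into a dd run sized like this (the /mnt/tank path is a placeholder, and compression must be disabled on the target dataset or the zeros will compress away):

```shell
# Size a dd write test at 2x the NAS RAM (128 GB in this system)
ram_gb=128
factor=2
count=$(( ram_gb * factor * 1024 ))   # number of 1 MiB blocks
echo "dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=${count}"
```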
 

MauricioU

Member
Joined
May 22, 2017
Messages
39
To make any meaningful suggestions, we really need the full tech specs of your client and NAS boxes: CPU, RAM, clocks, etc.

The answer to question one is usually found in the amount of data being read/written versus the amount of RAM in the FreeNAS system. Since RAM is used as cache, a small enough test dataset will hide the real pool performance from the testing tools. This is why more accurate testing methodologies use 2x or 3x available RAM as a lower size limit for the test dataset.

I just left a comment regarding the specs of my system right before yours. Let me know if that is enough or if you need to know more.


If your system is still set up for testing, you could destroy the pool and recreate it as a stripe.

Check this out for benchmarking tools : https://forums.freenas.org/index.php?threads/notes-on-performance-benchmarks-and-cache.981/

Yes, it is, and I considered doing this! I will look into testing it as a stripe as well. Are you suggesting the read speed is so quick because it is being cached in RAM? Why wouldn't writes be cached in RAM as well? Also, doesn't iperf take the HDDs out of the equation? I thought it did, from my understanding of it.
 

c32767a

Senior Member
Joined
Dec 13, 2012
Messages
362
I just left a comment regarding the specs of my system right before yours. Let me know if that is enough or if you need to know more.
Yeah, your box certainly isn't underpowered. If anything, you probably have excess CPU capacity. Most of the things the NAS does are single-threaded, so fewer, faster cores are better than many slow ones.

What version of FreeNAS are you testing with?


Yes, it is, and I considered doing this! I will look into testing it as a stripe as well. Are you suggesting the read speed is so quick because it is being cached in RAM? Why wouldn't writes be cached in RAM as well? Also, doesn't iperf take the HDDs out of the equation? I thought it did, from my understanding of it.
There's a lot written about how ZFS does write caching, but the short answer is that system RAM is involved while throughput still depends on the performance of the target pool.

I would normally expect iperf to be a good indicator of the network performance between the two devices, but you posted a Windows copy that showed ~10Gb/s. So either the copy came from a cache, or some other part of your testing isn't showing the true performance of your setup.
 

MauricioU

Member
Joined
May 22, 2017
Messages
39
You can try with dd, as said here. There are also numbers there you can compare against. Just remember to test with something above 128GB or you'll just be testing your RAM. Also, don't use compression.
Thanks man, I will do this, but this would just test my pool and the speed the pool is capable of, right? So it wouldn't explain why I'm not getting full throughput with iperf, which is HDD-independent? Or am I misunderstanding?
 

MauricioU

Member
Joined
May 22, 2017
Messages
39
Yeah, your box certainly isn't underpowered. If anything, you probably have excess CPU capacity. Most of the things the NAS does are single-threaded, so fewer, faster cores are better than many slow ones.

What version of FreeNAS are you testing with?




There's a lot written about how ZFS does write caching, but the short answer is that system RAM is involved while throughput still depends on the performance of the target pool.

I would normally expect iperf to be a good indicator of the network performance between the two devices, but you posted a Windows copy that showed ~10Gb/s. So either the copy came from a cache, or some other part of your testing isn't showing the true performance of your setup.

I'm on FreeNAS 9.10.2-U4.

And yes, when I do a multi-threaded iperf test (using -P 4) it does use the full throughput, but only in one direction. The other direction gets half the speed. Or am I misunderstanding that?

EDIT:

What I mean is if I do an iperf test with my computer as the client and the FreeNAS as the server I get about 6.9 GBytes / 5.9 Gbits/sec.
When I do a multi thread iperf test with my computer as the client and the FreeNAS as the server I get a SUM of 11.5 GBytes / 9.8 Gbits/sec.

BUT

When I do an iperf test with the FreeNAS as the client and my computer as the server I get about 1.5 GBytes / 1.28 Gbits/sec.
When I do a multi thread iperf test with the FreeNAS as the client and my computer as the server I get a SUM of 5.85 GBytes / 5.03 Gbits/sec.

Those results are confusing me lol
 

c32767a

Senior Member
Joined
Dec 13, 2012
Messages
362
Not sure what to say. My environment doesn't really use Windows for anything. A quick test between a NAS and one of our compute nodes (Linux) gives:

Code:
root@nas1:~ # iperf -c 10.1.1.2 -P 1 -i 1 -f m -t 10 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 4.00 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.1.1.2, TCP port 5001
TCP window size: 2.00 MByte (default)
------------------------------------------------------------
[  5] local 10.1.1.3 port 22051 connected with 10.1.1.2 port 5001
[  4] local 10.1.1.3 port 5001 connected with 10.1.1.2 port 42920
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0- 1.0 sec  1014 MBytes  8502 Mbits/sec
[  4]  0.0- 1.0 sec  1106 MBytes  9275 Mbits/sec
[  5]  1.0- 2.0 sec   646 MBytes  5421 Mbits/sec
[  4]  1.0- 2.0 sec  1118 MBytes  9382 Mbits/sec
[  5]  2.0- 3.0 sec   690 MBytes  5785 Mbits/sec
[  4]  2.0- 3.0 sec  1121 MBytes  9404 Mbits/sec
[  5]  3.0- 4.0 sec   594 MBytes  4981 Mbits/sec
[  4]  3.0- 4.0 sec  1121 MBytes  9407 Mbits/sec
[  5]  4.0- 5.0 sec   540 MBytes  4529 Mbits/sec
[  4]  4.0- 5.0 sec  1121 MBytes  9402 Mbits/sec
[  5]  5.0- 6.0 sec   577 MBytes  4842 Mbits/sec
[  4]  5.0- 6.0 sec  1121 MBytes  9400 Mbits/sec
[  5]  6.0- 7.0 sec   626 MBytes  5251 Mbits/sec
[  4]  6.0- 7.0 sec  1119 MBytes  9385 Mbits/sec
[  5]  7.0- 8.0 sec   479 MBytes  4021 Mbits/sec
[  4]  7.0- 8.0 sec  1122 MBytes  9410 Mbits/sec
[  5]  8.0- 9.0 sec   531 MBytes  4453 Mbits/sec
[  4]  8.0- 9.0 sec  1121 MBytes  9403 Mbits/sec
[  5]  9.0-10.0 sec   598 MBytes  5015 Mbits/sec
[  4]  9.0-10.0 sec  1121 MBytes  9400 Mbits/sec
[  4]  0.0-10.0 sec  11200 MBytes  9386 Mbits/sec
[  5]  0.0-10.0 sec  6307 MBytes  5280 Mbits/sec
root@nas1:~ #



and the reverse:

Code:
dps@gpu1:~$ iperf -c 10.1.1.3 -P 1 -i 1 -f m -t 10 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.1.1.3, TCP port 5001
TCP window size: 5.85 MByte (default)
------------------------------------------------------------
[  5] local 10.1.1.2 port 42918 connected with 10.1.1.3 port 5001
[  4] local 10.1.1.2 port 5001 connected with 10.1.1.3 port 39724
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0- 1.0 sec  1120 MBytes  9392 Mbits/sec
[  4]  0.0- 1.0 sec   870 MBytes  7299 Mbits/sec
[  5]  1.0- 2.0 sec  1119 MBytes  9388 Mbits/sec
[  4]  1.0- 2.0 sec   942 MBytes  7901 Mbits/sec
[  5]  2.0- 3.0 sec  1118 MBytes  9375 Mbits/sec
[  4]  2.0- 3.0 sec   966 MBytes  8100 Mbits/sec
[  5]  3.0- 4.0 sec  1122 MBytes  9414 Mbits/sec
[  4]  3.0- 4.0 sec   448 MBytes  3755 Mbits/sec
[  5]  4.0- 5.0 sec  1122 MBytes  9412 Mbits/sec
[  4]  4.0- 5.0 sec   532 MBytes  4464 Mbits/sec
[  5]  5.0- 6.0 sec  1120 MBytes  9399 Mbits/sec
[  4]  5.0- 6.0 sec   585 MBytes  4905 Mbits/sec
[  5]  6.0- 7.0 sec  1120 MBytes  9394 Mbits/sec
[  4]  6.0- 7.0 sec   623 MBytes  5223 Mbits/sec
[  5]  7.0- 8.0 sec  1120 MBytes  9395 Mbits/sec
[  4]  7.0- 8.0 sec   488 MBytes  4092 Mbits/sec
[  5]  8.0- 9.0 sec  1123 MBytes  9417 Mbits/sec
[  4]  8.0- 9.0 sec   613 MBytes  5145 Mbits/sec
[  5]  9.0-10.0 sec  1121 MBytes  9406 Mbits/sec
[  5]  0.0-10.0 sec  11205 MBytes  9399 Mbits/sec
[  4]  9.0-10.0 sec   510 MBytes  4276 Mbits/sec
[  4]  0.0-10.0 sec  6586 MBytes  5513 Mbits/sec



We generally see 5-8Gb/s throughput on reads and writes over NFS. In pure bench testing I can get close to 10Gb/s, but all the background radiation of a production network takes a little off the peaks.

Just for giggles do you have a non-windows box you can test with?
 

hugovsky

Neophyte Sage
Joined
Dec 12, 2011
Messages
559
Thanks man, I will do this, but this would just test my pool and the speed the pool is capable of, right? So it wouldn't explain why I'm not getting full throughput with iperf, which is HDD-independent? Or am I misunderstanding?
Exactly. That's just so you can take the disk subsystem out of the equation.

What's the hardware of the desktop computer? Might not be enough.
 

nojohnny101

Neophyte Sage
Joined
Dec 3, 2015
Messages
1,474
I can't help you personally, but I was following this thread, which had a good amount of technical detail on taking full advantage of a 10GbE network setup.

I know this is little consolation, but I have heard that FN 11 has much better support for 10GbE.
 

MauricioU

Member
Joined
May 22, 2017
Messages
39
We generally see 5-8Gb/s throughput on reads and writes over NFS. In pure bench testing I can get close to 10Gb/s, but all the background radiation of a production network takes a little off the peaks.

Just for giggles do you have a non-windows box you can test with?
Thanks for that info! You also seem to be getting significantly slower speeds one way. Unfortunately, I do not have a non-Windows box to test this with.
 

MauricioU

Member
Joined
May 22, 2017
Messages
39
Exactly. That's just so you can take the disk subsystem out of the equation.

What's the hardware of the desktop computer? Might not be enough.
My computer is a bit older now, but I think it should be good enough? Specs below:

CPU: i7-2600K, 3.4GHz quad core, 8MB cache
Mobo: Asus P8Z68-V Pro
RAM: 16GB DDR3 Corsair RAM
SSD: 500GB Samsung 850 Evo
HDDs: 2 x Hitachi 2TB 64MB cache 7200RPM drives
GPU: Nvidia GeForce GTX 760
PCIe: Intel X540-T2 10GbE dual-port NIC
 

MauricioU

Member
Joined
May 22, 2017
Messages
39
I can't help you personally, but I was following this thread, which had a good amount of technical detail on taking full advantage of a 10GbE network setup.

I know this is little consolation, but I have heard that FN 11 has much better support for 10GbE.
I appreciate it, man. I've read through that thread, and it seemed like he had a similar issue to mine, but he figured out he had never turned on jumbo frames. I have jumbo frames set on both my systems and am still seeing diminished speeds one way.
 

c32767a

Senior Member
Joined
Dec 13, 2012
Messages
362
Thanks for that info! You also seem to be getting significantly slower speeds one way. Unfortunately, I do not have a non-Windows box to test this with.
I'm not sure you can rely on iperf, though.

In the first test, FreeNAS -> client, the forward stream shows a pretty steady ~9Gb/s while the reverse stream is much slower.

In the second test the hosts are reversed: client -> FreeNAS. If there truly were a bottleneck of some sort, you would expect the results to flip as well, but they don't. The forward stream (the reverse of the previous test) shows ~9Gb/s, while the reverse stream (the forward stream of the previous test) now shows the variable, slower results.

I did not look up the CLI options you used; I just cut and pasted from your examples. But I know the link I ran iperf on supports ~9Gb/s NFS transfers in both directions, so if this is really bothering you, it might be worth asking the iperf developers what's happening under the covers and seeing whether there is a code or usage issue, or some other explanation for these results.
 

MauricioU

Member
Joined
May 22, 2017
Messages
39
I'm not sure you can rely on iperf, though.

In the first test, FreeNAS -> client, the forward stream shows a pretty steady ~9Gb/s while the reverse stream is much slower.

In the second test the hosts are reversed: client -> FreeNAS. If there truly were a bottleneck of some sort, you would expect the results to flip as well, but they don't. The forward stream (the reverse of the previous test) shows ~9Gb/s, while the reverse stream (the forward stream of the previous test) now shows the variable, slower results.

I did not look up the CLI options you used; I just cut and pasted from your examples. But I know the link I ran iperf on supports ~9Gb/s NFS transfers in both directions, so if this is really bothering you, it might be worth asking the iperf developers what's happening under the covers and seeing whether there is a code or usage issue, or some other explanation for these results.

Got you, will look into it. But what about the real-world performance, with the write speeds also being half of the read speeds from the FreeNAS? Do you think that has to do with the pool speed, hence your original recommendation?
 

c32767a

Senior Member
Joined
Dec 13, 2012
Messages
362
Got you, will look into it. But what about the real-world performance, with the write speeds also being half of the read speeds from the FreeNAS? Do you think that has to do with the pool speed, hence your original recommendation?
I think it's likely there's something in the protocol stack that doesn't like both TX and RX running at 10Gb/s at the same time.

As to the read and write speeds, yeah, you want to test and profile each component. Minimize the variables so you know how each subsystem performs before you try to understand the performance of the collective system.
 

MauricioU

Member
Joined
May 22, 2017
Messages
39
I think it's likely there's something in the protocol stack that doesn't like both TX and RX running at 10Gb/s at the same time.

As to the read and write speeds, yeah, you want to test and profile each component. Minimize the variables so you know how each subsystem performs before you try to understand the performance of the collective system.
So I created a stripe and iperf continued to give me the exact same results; however, with a ramdisk I was getting full 10GbE throughput both ways. I guess RAIDZ was the bottleneck for the write speeds! =( lol.

Also, I deleted all the tunables I added and continued getting the same speeds.

EDIT: Compression on or off was not a factor for my system.

EDIT: Thank you to everyone who commented and helped! I still don't get what's up with iperf, but yeah!
 