Weekend Upgrades - Far Less Than Expected 10GbE Performance

Status
Not open for further replies.

BarryCarey

Cadet
Joined
Sep 19, 2016
Messages
6
Hey All,

This weekend I added a few 10GbE cards to my network, along with a couple more drives to my pool. However, performance is not where I expected it to be when transferring between my FreeNAS boxes or my PC.

I know performance related questions are a dime a dozen around here so I'll try to be as detailed as possible.

FreeNAS Hosts

Main FreeNAS Server
Running in ESXi, but I also ran the tests with bare-metal FreeNAS on the same server (with the full 72GB of RAM) and got very similar results

8 cores (host is a dual Xeon 5620)
48GB RAM
H200 passed through to the VM
Pool: 6x 4TB Seagate 7200rpm drives in striped mirrors, 2x2x2 (three 2-way mirror vdevs)

2nd FreeNAS Server
Xeon X5650
24GB RAM
Pool: 6x 2TB assorted 7200rpm drives in RAIDZ1

Networking

Both servers have Mellanox ConnectX-2 cards, directly connected with a DAC cable.

Performance Tests

Main Server

iperf:
Speed is fine, 9Gbps+ between all the hosts.
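
For anyone repeating the test, the runs are along these lines (iperf 2 syntax; the 10.0.10.x address is a placeholder for the main server's 10GbE interface):

[root@freenas] ~# iperf -s                       # receiver side
[root@fn2] ~# iperf -c 10.0.10.2 -t 30           # sender side, 30-second run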

DD Write:
[root@freenas] /mnt/tank/test# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 256.724640 secs (418246501 bytes/sec)

Result: ~418MB/s

DD Read:

[root@freenas] /mnt/tank/test# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 182.100611 secs (589642077 bytes/sec)

Result: ~589MB/s

https://puu.sh/t2tiY/e733088ff7.png
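
One caveat on these dd numbers: /dev/zero compresses extremely well, so with lz4 enabled on the dataset the figures can be inflated. A quick way to rule that out (this assumes the dataset is named tank/test):

zfs set compression=off tank/test    # disable compression for the benchmark
dd if=/dev/zero of=/mnt/tank/test/tmp.dat bs=2048k count=50k
zfs set compression=lz4 tank/test    # restore compression afterwards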

2nd Server

DD Write:

[root@fn2] /mnt/tank/test# dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 268.583272 secs (399779858 bytes/sec)

Result: ~400MB/s

DD Read:

[root@fn2] /mnt/tank/test# dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 351.216426 secs (305720845 bytes/sec)

Result: ~306MB/s

Tunables

Main server: https://puu.sh/t1UGD/d96b112a91.png
2nd server: https://puu.sh/t1UJb/aad97d7ce3.png

Background
The pool on my main server originally used 4x 4TB drives in a striped mirror. It was nearly full, so I added 2 more 4TB drives. After adding the drives I moved all the data over to my 2nd server and deleted/recreated the pool on the original. All the tests above were performed on an empty pool on the main server.

When transferring the data back to the pool I noticed transfer speeds were far lower than expected. I first did a send/receive from the 2nd server to the main server; speeds were around 700Mbps. I then tried rsync, which gave around 600Mbps when moving ~20GB files. If I run multiple rsync sessions I cannot get above 1.5Gbps combined. This is nowhere near what iperf and dd show should be possible. I've tried many suggestions I've found around here and nothing has helped so far.
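
One suggestion worth noting here is taking SSH out of the replication path by piping the send through netcat; a rough sketch (the snapshot name and address are made up):

[root@freenas] ~# nc -l 8023 | zfs receive -F tank/media       # receiving side (main server)
[root@fn2] ~# zfs send tank/media@migrate | nc 10.0.10.2 8023  # sending side (2nd server)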

This screenshot shows 3 rsync transfers being started from the 2nd server to the main. Each step up in speed is where a transfer starts. All 3 transfers are video files ranging from 2GB to 30GB.

https://puu.sh/t2tgQ/d1a35805ec.png

I'm hoping someone can offer some advice to get me going in the right direction.
 

BarryCarey

Cadet
Joined
Sep 19, 2016
Messages
6
Did you make any headway with this?

I did today. Got it sorted out.

My network ended up being the problem.

I had too many balls in the air to troubleshoot it properly. I initially thought iperf looked good all around. However, today I spent the time to make a spreadsheet testing every direction possible, and saw a pretty obvious issue.
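
Roughly the kind of sweep I mean, with iperf -s left running on every host (the addresses here are placeholders):

[root@freenas] ~# for host in 10.0.10.2 10.0.10.3 192.168.1.50; do iperf -c $host -t 20; done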

I don't have a 10GbE switch. As a result, I was using my pfSense VM to route traffic around.

The full setup is: 1 physical FreeNAS server (backup server), my PC, and a FreeNAS VM (main storage) in ESXi. The ESXi host has two 10GbE NICs.

My PC is on one subnet; the virtual FreeNAS and physical FreeNAS are on another. Each ESXi 10GbE card is assigned to a separate vSwitch, and pfSense has an interface on each vSwitch.

For some reason the FreeNAS VM did not like this setup. I was getting expected speeds between everything except that VM. Even after setting up a test Windows VM on the same vSwitch to bypass the physical NICs, the FreeNAS VM barely topped 1Gbps.

I decided to redo the whole network config, since I had initially set everything up in one shot. This time I went a step at a time to make sure speeds were as expected.

As it stands now, transfer speeds are pretty much what I expect. The read speeds on my striped mirror do seem lower than they should be, but I'll leave that for another thread.
 

strikermed

Dabbler
Joined
Mar 22, 2017
Messages
29
So, can you break down your network configuration then? I'm doing something similar with UNRAID (running FreeNAS as a VM with HBA passthrough and an Intel X540 dual-port NIC passed through). I then connect 2 PCs, one to each NIC, using a separate subnet... 192.168.2.1 and 192.168.2.2 for the 2 ports, and the 2 PCs are on 192.168.2.3 and 192.168.2.4.

First, I can't get one of my PCs to connect, even though the two are configured exactly the same.

Second, on the one I can get connected, I get what I expect out of a 5-drive RAIDZ2 for about 5-10 seconds, then it drops WELL BELOW 1GbE speeds, sometimes into the 25MB/s range. It's very frustrating going from 500MB/s to 25MB/s.

I get this even after making the tuning changes 45 Drives suggested.

Any info on how you got expected speeds would be helpful.
 

strikermed

Dabbler
Joined
Mar 22, 2017
Messages
29
CONCLUSION/SOLUTION:

I've been testing this for a long time, and I never seemed to reach the 10GbE speeds I was looking for. I tried Corral, and then I downgraded to FreeNAS 9.1 (due to Corral no longer being supported). First, I still have not been able to install FreeNAS directly on my hardware (it still bewilders me), but I'm able to use UNRAID to successfully run FreeNAS in a virtual machine with PCI passthrough (using an HBA card). I'm using a dual-port Intel X540 network card in the FreeNAS server, and single-port Intel X540s in my two clients. I direct-connect each client to the FreeNAS server and configure each link to be on its own subnet. For example, PC1 is on 192.168.3.1 and PC2 is on 192.168.2.1. On the FreeNAS server I set one of the individual connections to 192.168.3.2 and the other to 192.168.2.2. Once that was done, I made sure each client was plugged into the matching port.
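
For illustration, the equivalent setup from the FreeNAS shell would look something like the lines below (on FreeBSD the X540 ports show up as ix0/ix1, though the names may differ; in practice you'd set these addresses through the FreeNAS GUI so they persist across reboots):

ifconfig ix0 192.168.3.2 netmask 255.255.255.0   # port direct-connected to PC1 (192.168.3.1)
ifconfig ix1 192.168.2.2 netmask 255.255.255.0   # port direct-connected to PC2 (192.168.2.1)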

The other night (around the end of May 2017) FreeNAS put out an update that upgraded Samba, which in turn improves SMB (the Windows file-sharing protocol). So I gave it a whirl. First I followed these tweaks from 45 Drives to do some tuning. Very simple stuff: http://45drives.blogspot.com/2016/05/how-to-tune-nas-for-direct-from-server.html
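
For context, the tweaks in that post are sysctl-style tunables along these lines (values illustrative, not necessarily the blog's exact numbers):

sysctl kern.ipc.maxsockbuf=16777216        # allow larger socket buffers
sysctl net.inet.tcp.sendbuf_max=16777216   # raise the TCP send buffer ceiling
sysctl net.inet.tcp.recvbuf_max=16777216   # raise the TCP receive buffer ceiling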

Next, I did some testing of my transfer speeds using a RAMPerfect RAM drive and some SSDs and HDDs in the FreeNAS build. I got around 250MB/s on both the SSDs and the HDDs. So something was wrong, especially since the SSDs are in a stripe and the HDDs are in a RAIDZ2 arrangement.

I initiated the painless update (it took about 5-8 minutes, reboot included).

When I came back and ran the exact same tests, I got a flying 1GB/s on my SSD stripe, and about 500-600MB/s on my 5-drive RAIDZ2. Needless to say, I'm finally relieved to get one hurdle out of the way, and now the next challenge will be to see if this new update will install on my hardware.

I hope this provides some clarity for anyone who hasn't made the update and is looking for 10-gigabit speeds with FreeNAS 9.1.
 