Improving throughput to FreeNAS NFS/CIFS from FreeBSD 8.4

Status
Not open for further replies.

Havoc70

Dabbler
Joined
Mar 27, 2015
Messages
13
Greetings. I'm running a FreeBSD 8.4 server connecting to FreeNAS 9.3 using both NFS and CIFS. Throughput to the FreeNAS saturates the gigabit link:

Code:
[root@arthur ~]# iperf -c avalon
------------------------------------------------------------
Client connecting to avalon, TCP port 5001
TCP window size:  109 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.126 port 26954 connected with 192.168.0.12 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.07 GBytes   920 Mbits/sec


Transferring a 5GB file using rsync to the NFS mount:
Code:
5gbtest
  5,368,709,120 100%   36.77MB/s    0:02:19 (xfr#1, to-chk=0/1)


Transferring the same file to the CIFS mount:
Code:
5gbtest
  5,368,709,120 100%   45.17MB/s    0:01:53 (xfr#1, to-chk=0/1)

Using "cp" wasn't much better, time-wise:
Code:

real    1m34.762s
user    0m0.008s
sys     0m9.463s


When copying files from my Windows machine to the FreeNAS I sustained 87 MBps (~700 Mbps). The difference between the two setups is that the Windows machine is connected to the same switch as the FreeNAS box, while the FreeBSD server is on another switch that is uplinked to it. Both switches are mid-tier Cisco gigabit units with jumbo frames enabled. As the iperf run above shows, the link between the two is capable.
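One quick way to confirm whether jumbo frames actually survive the whole path is an unfragmentable ping sized to the MTU. This is a sketch, not a definitive recipe: it assumes a 9000-byte jumbo MTU, uses FreeBSD's `ping -D` flag to set the don't-fragment bit (Linux spells this `ping -M do`), and borrows the hostname `avalon` from the iperf run above.

```shell
# Payload = MTU - 28 (20-byte IP header + 8-byte ICMP header), so 9000 - 28 = 8972.
# FreeBSD: -D sets the don't-fragment bit; -s sets the ICMP payload size.
ping -D -s 8972 -c 3 avalon   # succeeds only if every hop forwards 9000-byte frames
ping -D -s 8973 -c 3 avalon   # one byte over: should fail if 9000 is the real ceiling
```

If the first ping fails, some device in the path (switch, NIC, or host) is not actually passing jumbo frames, whatever its configuration claims.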

I've done several forum searches here and on Google in general and haven't been able to successfully fix this issue.

I ran # top -SH on FreeNAS; no service's single thread was over 70%, and likewise on my FreeBSD server.

So, if anyone has suggestions, or needs specific sysctl output, please let me know.

EDIT: The file was copied off an SSD drive.
 

Havoc70

Dabbler
Joined
Mar 27, 2015
Messages
13
No, I haven't. And there are more oddities: copying a small file (less than 500 MB) transfers at 20 MBps, but larger files transfer at ~45 MBps. I tried various sysctl tunings but nothing seems to help. I'm going to build a new 64-bit FreeBSD 10.1 box soon on an ASRock board similar to the Mini's, and maybe that will show improvement.
 

Havoc70

Dabbler
Joined
Mar 27, 2015
Messages
13
Though this is interesting: a tcpdump from arthur (FreeBSD 8.4) to the FreeNAS (9.3):

Code:
19:22:10.879512 IP (tos 0x0, ttl 64, id 35272, offset 0, flags [DF], proto TCP (6), length 1500)
    arthur.wlan.silvertree.org.38536 > avalon.wlan.silvertree.org.netbios-ssn: Flags [.], cksum 0xefb2 (correct), seq 1448:2896, ack 1, win 8326, options [nop,nop,TS val 701973441 ecr 2413302582], length 1448
>>> NBT Session Packet
NBT - Unknown packet type
Type=0xCF
Data: (1447 bytes)


But check the MTU for the NIC and route:

Code:
[root@arthur /boot]# !route
route get avalon.wlan.silvertree.org
   route to: avalon.wlan.silvertree.org
destination: 192.168.0.0
       mask: 255.255.255.0
  interface: em1
      flags: <UP,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      7936         1         0

[root@arthur /boot]# ifconfig em1
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 7936
        options=4209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,VLAN_HWTSO>
        ether 68:05:ca:17:5f:3e
        inet 192.168.0.126 netmask 0xffffff00 broadcast 192.168.0.255
        media: Ethernet 1000baseT <full-duplex>
        status: active


So perhaps I am missing a tunable somewhere? Or is the Samba client on FreeBSD hard-wired to use MTU 1500 in some way?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You are using jumbo frames. You almost certainly do not have them set up "properly" across your network. There's a reason why we tell people that jumbo frames are for morons. Because they basically are. Only a fool would use them with today's hardware.

So please disable them and see what happens. Here's a free tip: if you are having to make sysctl changes and you aren't saturating Gb, you are not doing things properly to begin with, and sysctls aren't going to fix whatever you've done wrong.
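Dropping the client back to the standard MTU can be sketched as below. This is a minimal example for the FreeBSD side only (the interface name `em1` and address are taken from the ifconfig output earlier in the thread); the switch-side jumbo-frame settings vary by model and have to be reverted separately.

```shell
# Reset the interface to the standard 1500-byte MTU immediately:
ifconfig em1 mtu 1500

# Make it persistent across reboots by adjusting the line in /etc/rc.conf, e.g.:
# ifconfig_em1="inet 192.168.0.126 netmask 255.255.255.0 mtu 1500"
```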
 

Havoc70

Dabbler
Joined
Mar 27, 2015
Messages
13
I'm not entirely certain I'm a moron, but sometimes definitely a fool. However, your point is valid. I'll reset everything on my switches and NICs back to 1500. I'm also interested to see how it goes with my 10.1 build.
 

Havoc70

Dabbler
Joined
Mar 27, 2015
Messages
13
Changing the MTU to 1500 across the path got me to 49-50 MBps, which is definitely an improvement. Perhaps I have unrealistic expectations of what throughput should be?
 

willnx

Dabbler
Joined
Aug 11, 2013
Messages
49
What the problem statement sounds like to me after reading the comments:
"Throughput is lower than expected when my client isn't on the same switch as the NAS."

Sounds like a network issue to me. Here are the two scenarios I'll refer to:
A) From 'fast' client to NAS & NAS to 'fast' client
B) From 'slow' client to NAS & NAS to 'slow' client

If I were in your shoes, I'd:
  • Check for latency differences between the two scenarios.
  • Check for dropped packets in both scenarios.
  • Check for an asymmetric route between the two scenarios with the traceroute command. Mostly you'll just want to sanity-check that you don't have a few hops from client to NAS and a pile from NAS to client (or the inverse).
Beyond that, if possible, try connecting a 'slow' client to the same switch as the NAS and see if the problem goes away.

If none of that points to a problem, I would start looking into differences between the NIC each client is using (the config, drivers, brand, etc), then maybe try setting up 'sterile' tests between vanilla (fresh install, no special configurations) clients, and aiming to get a consistent repro of the problem which I can share with others.
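The checks above can be sketched with stock tools. Hostnames come from this thread, the flags assume a FreeBSD/Unix client, and traceroute has to be run from both ends to actually catch an asymmetric path:

```shell
# 1. Latency difference: run from the fast client, then the slow one, and compare
#    the avg/stddev lines of the summary.
ping -c 20 avalon

# 2. Dropped packets / interface errors on the client:
#    look at the Ierrs/Oerrs columns for the NIC in use (em1 here).
netstat -i

# 3. Asymmetric routing: compare hop counts in both directions.
traceroute avalon            # from the client...
# ...and run traceroute back to the client from the FreeNAS shell.
```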

Hope this helps.
 

Havoc70

Dabbler
Joined
Mar 27, 2015
Messages
13
The fast client is using an onboard Realtek NIC; the slow client is using an Intel PRO/1000 (em driver). At this point, I'm thinking it's more the slow client. I can copy from SSD to a local spinning platter at 150 MBps, but I think the network stack on the 32-bit 8.4 system is just not optimal. I'm building the 64-bit system this weekend, and I'm hoping to see dramatic improvements. Everything is on the same subnet (across two switches), so there are no hops at all in either direction. No packet loss according to Wireshark.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
32-bit, let alone running 8.4, is kind of... outdated. Go 64-bit and go with 9.3 or 10.x! :D
 

Havoc70

Dabbler
Joined
Mar 27, 2015
Messages
13
Yeah, that's the plan. 10.1 64 bit. Not looking forward to all the port/config/user data foo, but this is long overdue.
 

Havoc70

Dabbler
Joined
Mar 27, 2015
Messages
13
So, after getting FreeBSD 10.1 set up on 64 bit here's where I'm at:

Copy via rsync or cp from FreeBSD 10.1 to the FreeNAS Samba mount: 50 MBps (switches traversed: 2; cable length: 27 meters)
Copy to the Windows client from a FreeBSD 10.1 Samba user mount: 110 MBps (switches traversed: 2; cable length: 27 meters)
Copy from the Windows client to the FreeNAS Samba mount: 104 MBps (switches traversed: 1; cable length: 2 meters)
Copy from the FreeBSD server to the FreeNAS server with both mounted as drives on the Windows client: 74 MBps (switches traversed: 2; cable length: 27 meters)

So I think 45-50 MBps is as good as it's going to get for a command-line copy from the FreeBSD server to FreeNAS. Sometimes even rsync runs and copies within the FreeBSD box itself seem to throttle at 45 MBps depending on the file, so I'm going to look into tuning that performance and see how things go.
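Before tuning, it may be worth baselining the local disk path on its own, so disk and network limits don't get conflated. A minimal sketch with dd (path and size are illustrative; note that on a filesystem with compression enabled, /dev/zero compresses away and inflates the number, so /dev/urandom gives a more honest write test at the cost of CPU):

```shell
# Write a 5 GB test file and let dd report throughput on completion.
# FreeBSD accepts bs=1m; GNU dd on Linux spells it bs=1M.
dd if=/dev/zero of=/tmp/5gbtest bs=1m count=5120
rm /tmp/5gbtest
```

If this local write also tops out near 45 MBps, the bottleneck is the disk, not the network stack.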

I appreciate everyone's assistance. This isn't really a FreeNAS issue or problem, but hopefully my testing will prove useful to folks hitting similar issues. Should I improve throughput I'll post my fixes here, but I'm half tempted to call it good.
 