iSCSI and 10GbE tuning

Status
Not open for further replies.

Paul Muller

Banned
Joined
Aug 5, 2012
Messages
5
I propose that this thread cover iSCSI and 10GbE specifically, as I appear to be hitting performance limits very specific to FreeBSD's iSCSI target (istgt) that I have not been able to diagnose. Because I run a Mac OS X and FreeNAS-only shop, it's hard for me to tell whether the source of the problem is the client, the server, the tunables, or all of the above.

I hope this is a new topic; I can't find anything specific to it in the forum, and with 10GbE and multi-gigabit WiFi gaining popularity, I believe it will become increasingly relevant.

I recently set up a home studio and video production facility (based on Mac clients and a FreeNAS filer) and wanted to move my entire workflow off the client machines and into a SAN and NAS environment (I am aware of the difference).

Having hit several obstacles along the way, and having found a variety of conflicting and generally out-of-date information on performance and tuning, I thought it would be worth collecting people's experience here.

So, opening question: who's running 10GbE with FreeNAS and iSCSI, what performance are you obtaining, and what setup are you using to get it?

In my next post, I'll start by sharing mine.

- - - Updated - - -

My FreeNAS 8.3.1-p2 10GbE MPIO iSCSI write performance is about 25% of my AFP performance: 100-150 MBytes (not bits)/sec vs. 450-600 MB/s.

I set up my 10GbE network as a dual (2x) MPIO device on two separate networks (10.0.16.x/24 and 10.0.17.x/24), hard-wired with no switch.
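In rc.conf terms, a direct-wired pair like this looks roughly as follows (the ix0/ix1 interface names and host addresses are illustrative, not copied from my actual config; on FreeNAS the equivalent is set through the network GUI). The mssdflt of 8940 further down assumes jumbo frames, hence mtu 9000:

```
# /etc/rc.conf fragment: two point-to-point 10GbE links on separate
# subnets for MPIO (ix0/ix1 and the .1 addresses are example values)
ifconfig_ix0="inet 10.0.16.1 netmask 255.255.255.0 mtu 9000"
ifconfig_ix1="inet 10.0.17.1 netmask 255.255.255.0 mtu 9000"
```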

Performance testing iSCSI produced poor results compared to AFP (netatalk): write performance is maybe 150 MB/s burst and read performance is 250 MB/s, whereas AFP is capable of reading and writing in excess of 550 MB/s (to a striped array of 3x Intel SSDs).
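For anyone wanting to reproduce this kind of sequential test, a crude dd run is enough to show the gap (the path below is a placeholder; point it at a file on the iSCSI-mounted volume, and raise count well past RAM size so caching doesn't flatter the numbers):

```shell
# Crude sequential write test. /tmp/ddtest is a placeholder path --
# substitute a file on the iSCSI volume. 64 x 1 MiB blocks = 64 MiB;
# for real measurements use a count several times larger than RAM.
dd if=/dev/zero of=/tmp/ddtest bs=1048576 count=64
```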

Netperf tests have been able to saturate the link, and CPU and memory usage is minimal. The server itself is based on an Intel S1200BTL motherboard with 32 GB of memory and the following loader and sysctl tunables:

loader (/boot/loader.conf):
hw.igb.max_interrupt_rate="32000"
hw.igb.txd="2048"
hw.igb.rx_process_limit="-1"
vfs.zfs.arc_max="22777669969"
vm.kmem_size_max="31635652736"
vm.kmem_size="25308522188"

sysctl (/etc/sysctl.conf):
net.inet.tcp.sendbuf_max=16777216
kern.ipc.maxsockbuf=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendspace=262144
net.inet.tcp.recvspace=262144
net.inet.tcp.mssdflt=8940

For the purposes of testing I have hard-wired a 2009 Mac Pro (2.26 GHz, 8-core) client with a dual-port 10GbE Small-Tree NIC, running the SNS globalSAN initiator (which I am not able to tune).

Is there special voodoo I need to know to tune the Target Global Config to work better with the globalSAN initiator? I have tried tuning the burst lengths and the T2W, T2R, and R2T tunables up and down, without much difference.
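For anyone comparing notes, these are the sorts of knobs involved; istgt exposes them in the [Global] section of its config file (on FreeNAS they map to the iSCSI Target Global Configuration screen). The values below are illustrative, not a recommendation:

```
# /usr/local/etc/istgt/istgt.conf excerpt -- illustrative values only
[Global]
  MaxSessions 16
  MaxConnections 8
  FirstBurstLength 65536
  MaxBurstLength 262144
  MaxRecvDataSegmentLength 262144
  InitialR2T Yes
  ImmediateData Yes
```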

I also tried setting up FreeBSD 9.1 to see if the problem was related to FreeNAS, but I achieved roughly the same performance on stock FreeBSD, so I switched back to FreeNAS (because the team has done a great job - THANK YOU! :smile:)

Is it just me? Does anyone else have a 10GbE setup that is performing better?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Do NOT cross post.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
It would have been useful had he mentioned what sort of performance he was getting with a single 10GbE. And not hijacked that other thread.
 

Paul Muller

Banned
Joined
Aug 5, 2012
Messages
5
It would have been useful had he mentioned what sort of performance he was getting with a single 10GbE. And not hijacked that other thread.
Good question - the performance doesn't appear to be any better or worse with dual MPIO 10GbE; in other words, the figures above are the same for both single and dual channel.
 