iSCSI slow in comparison with CIFS

Status
Not open for further replies.

Armeron

Cadet
Joined
Sep 27, 2011
Messages
1
Hi,

My setup is as follows:

FreeNAS box with one SATA HDD formatted UFS.

The SATA drive is 1TB; on it there is one 200GB file used as an iSCSI extent, and the rest of the drive is shared using CIFS.

I connect to the iSCSI share from a Windows 2003 server. I then share a folder on the iSCSI drive on the Windows server and map it as a network drive on the users' computers (~35 users).

Copying a file from a user's computer to the share is quite slow (~200 Mbps) vs. CIFS (~700 Mbps).

What can I do to improve performance? (I'm using it to store Outlook PST files.)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There's going to be a lot more overhead if you're going through two layers of server. It sounds like you're seeing it.

How are your transfer speeds from the Win2K3 server to the FreeNAS iSCSI drive? If they're substantially better than 200 Mbps, then I think the answer is "fix or ditch the Win2K3 box" (or upgrade it, get faster hardware, or directly attach a drive to it rather than trying to use iSCSI).
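
If you want a quick ballpark for that hop, time a large sequential write and read against the iSCSI-mounted volume directly on the Win2K3 box. A minimal Python sketch, assuming the iSCSI disk is mounted as E: (the drive letter and file size are assumptions); note the read pass may be flattered by the OS cache unless the file is bigger than RAM:

Code:
import os
import time

TEST_FILE = r"E:\throughput_test.bin"  # assumed drive letter of the iSCSI disk
CHUNK = 1024 * 1024                    # 1 MiB per I/O
TOTAL = 512 * CHUNK                    # 512 MiB test file

def write_test():
    buf = os.urandom(CHUNK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL // CHUNK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())           # push everything out to the target
    return TOTAL / (time.time() - start) / 1e6

def read_test():
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        while f.read(CHUNK):
            pass
    return TOTAL / (time.time() - start) / 1e6

print("write: %.1f MB/s" % write_test())
print("read:  %.1f MB/s" % read_test())
os.remove(TEST_FILE)

If this hop is already near wire speed, the bottleneck is the Windows re-share; if it's around 200 Mbps here too, the problem is between the Win2K3 box and FreeNAS.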
 

sheltjo6

Cadet
Joined
Dec 5, 2011
Messages
1
@Armeron: I experienced the same issue with iSCSI when compared to CIFS. Write performance to both iSCSI and CIFS was approximately the same, but read performance was dramatically different. CIFS read performance was about the same as its write performance (10 MB/s); however, iSCSI read performance was only 4 MB/s. I'm not sure why there is a degradation in read performance. The iSCSI target is hosted on a ProLiant DL360 G6 server with 6 Gbps SAS drives spinning at 15K RPM.
 

mkruger

Cadet
Joined
Jan 5, 2012
Messages
3
Effects of different MTU values on iSCSI performance

I did some testing last night using two FreeNAS iSCSI targets and a Windows 7 initiator. Here is what I found:

The FreeNAS 8.0.2 (64-bit) machine is running an AMD64 3500+ (2.2 GHz) with 2GB of memory and an Intel PCI gigabit NIC (EXPI9301CTBLK - Gigabit CT).
Target #1 - Two WD Blue 500GB drives (WD5000AAKS) in a RAID 0 stripe on an LSI SAS 3041E-R
Target #2 - Single Samsung 320GB drive (HD321KJ) on an LSI SAS 3041E-R

The iSCSI initiator is a Windows 7 Enterprise (64-bit) machine running an Intel Pentium E5700 (dual core, 3.0 GHz) with 12GB of memory, networked via an Intel PCI Express gigabit NIC (PWLA8391GT - PRO/1000 GT).

Targets and initiator are connected with direct CAT5 cabling (no switch used). Full duplex was enabled and verified on both systems (using ifconfig and Event Viewer, respectively).
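
Before benchmarking, it's worth verifying the jumbo-frame path end to end, since a silent mismatch produces exactly the kind of collapse shown below. A quick sketch run from the Windows 7 side (the target IP is a placeholder); Windows ping's -f flag sets don't-fragment, -l sets the ICMP payload size, and a 9000-byte MTU leaves 9000 - 20 (IP) - 8 (ICMP) = 8972 bytes of payload:

Code:
import subprocess

TARGET = "192.168.1.10"  # placeholder address for the FreeNAS target

# Largest ICMP payloads that fit in MTU 1500 and MTU 9000 without fragmenting.
for payload in (1472, 8972):
    cmd = ["ping", "-n", "1", "-f", "-l", str(payload), TARGET]
    ok = subprocess.run(cmd, capture_output=True).returncode == 0
    print("payload %d: %s" % (payload, "OK" if ok else "FAILED (frame too big?)"))

If the 8972-byte probe fails while the 1472-byte one succeeds, one side is still dropping jumbo frames.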

First, for a DAS comparison: when measured locally using the GNOME disk utility, the drive array used in Target #1 boasts a read speed of 198MB/s. I cannot test it at this time, but I recall the write speeds are similar, if a little faster. The single drive used for Target #2 produces read speeds of 51MB/s, and I suspect the write speeds are similar. I have not tested random I/O either locally or remotely on either drive.

Tested the following read speeds using HD Tune 2.55 with a 64KB block size.

Target #1
WIN7 MTU 1500, FreeNAS MTU 1500 - 31.5 MB/s
WIN7 MTU 1500, FreeNAS MTU 9000 - 22.0 MB/s
WIN7 MTU 9014, FreeNAS MTU 9000 - 11.6 MB/s
WIN7 MTU 9014, FreeNAS MTU 9014 - 33.8 MB/s

Target #2
WIN7 MTU 1500, FreeNAS MTU 1500 - 30.6 MB/s
WIN7 MTU 1500, FreeNAS MTU 9000 - 31.2 MB/s
WIN7 MTU 9014, FreeNAS MTU 9000 - 30.8 MB/s
WIN7 MTU 9014, FreeNAS MTU 9014 - 31.4 MB/s

Observations and conclusion
While Target #2 was barely affected by MTU mismatches, Target #1 suffered severe performance degradation (likely because oversized frames from one side get dropped by the other, forcing retransmits). Additionally, when jumbo frames were correctly matched, neither target improved much versus the default MTU of 1500.

Interestingly, the results suggest packet size was not the limiting factor in this particular environment, especially since Gigabit Ethernet is capable of data throughput approaching 125 MB/s and neither target came close to that.
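
Back-of-the-envelope math backs that up. A short sketch of the theoretical TCP payload throughput at each MTU, assuming plain Ethernet + IPv4 + TCP headers with no options:

Code:
LINE_RATE = 125_000_000  # 1 Gbit/s expressed in bytes per second

for mtu in (1500, 9000):
    payload = mtu - 20 - 20       # strip the IPv4 and TCP headers
    wire = mtu + 14 + 4 + 8 + 12  # add Ethernet header, FCS, preamble/SFD, inter-frame gap
    print("MTU %d: ~%.1f MB/s max payload throughput" % (mtu, LINE_RATE * payload / wire / 1e6))

That works out to roughly 118.7 MB/s at MTU 1500 versus 123.9 MB/s at MTU 9000. Jumbo frames buy only about 4-5% even at line rate, so at ~30 MB/s the bottleneck is clearly somewhere else.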

CPU utilization was not the limiting factor either, as CPU usage on both machines stayed in the 10-25% range.

Perhaps in an environment where CPU utilization or packet size is the limiting factor, there may be some noticeable improvement from enabling correctly matched jumbo frames. In this case, however, it would seem the limiting factor is FreeNAS 8.0.2 itself. Anecdotal evidence scattered around the Internet suggests this is the situation.

While I think FreeNAS has a very pretty and polished web interface, I have also found this NAS product to be a bit flaky. Most changes require a reboot before they take effect. Almost like Windows in that regard. I will be moving on to something else: Solaris 11, Linux, or one of the other NAS-oriented products such as Open-E Data Storage Software V6 Lite.
 

mkruger

Cadet
Joined
Jan 5, 2012
Messages
3
Update:

Tried a set of Realtek NICs and improved 64KB performance by about 12 MB/s... in Windows 7, that is. Also found that if you set the block size to 8MB in HD Tune, read speeds improve to about 80-90 MB/s. Didn't see any improvement in ESXi from the alternate networking hardware. Gonna play with MPIO next.
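
That block-size effect is easy to reproduce outside HD Tune. A minimal sketch timing sequential reads at a few block sizes (the file path is a placeholder; point it at a pre-existing file of a few GB so the cache doesn't serve the reads):

Code:
import time

TEST_FILE = r"E:\bigfile.bin"   # placeholder: a large existing file on the iSCSI volume
READ_TOTAL = 512 * 1024 * 1024  # read 512 MiB at each block size

# Keep reading forward through one file so each pass hits a fresh,
# uncached region (the file needs to be at least 1.5 GiB for three passes).
with open(TEST_FILE, "rb") as f:
    for block in (64 * 1024, 1024 * 1024, 8 * 1024 * 1024):
        start = time.time()
        done = 0
        while done < READ_TOTAL:
            chunk = f.read(block)
            if not chunk:
                break
            done += len(chunk)
        print("block %5d KiB: %6.1f MB/s" % (block // 1024, done / (time.time() - start) / 1e6))

Bigger blocks mean fewer iSCSI round trips per megabyte, so small-block reads end up dominated by per-request latency rather than raw bandwidth.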

Also took a look at Open-E DSS V6 Lite and discovered the Lite version does not support hardware RAID controllers at all. Even their full-version trial did not support my particular LSI 3041E-R. Not sure why the VMware blogosphere is raving about it so much. I found the software has a very clunky, difficult-to-navigate web interface; it looked more like a website than an application. The console was not much better, with very awkward key combinations required to do anything. FreeNAS, in contrast, has a wonderful web interface and a nice, easy-to-use console.
 