Bad iSCSI performance on 10GbE


Peppermint

Cadet
Joined
Jul 8, 2014
Messages
9
Hi,



I am working on network storage for our ESXi cluster based on FreeNAS.

The cluster consists of three HP DL360 G8 servers, each with an Intel X540-T2 card. At first I tried to export the share via NFS. The performance was very bad, as I already described in this post. Now I have switched to iSCSI. The three ESXi servers are connected to the FreeNAS box via a managed Netgear switch as follows:


freenas port 1 10.1.0.1/24 VLAN 11
freenas port 2 10.2.0.1/24 VLAN 12

esxi1 port 1 10.1.0.2/24 VLAN 11
esxi1 port 2 10.2.0.2/24 VLAN 12

esxi2 port 1 10.1.0.3/24 VLAN 11
esxi2 port 2 10.2.0.3/24 VLAN 12

esxi3 port 1 10.1.0.4/24 VLAN 11
esxi3 port 2 10.2.0.4/24 VLAN 12



There is no routing between the VLANs, and the switch is used only for iSCSI traffic. Everything runs at 10GbE full duplex.
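
To rule out basic connectivity problems, each path can be checked separately from the ESXi shell with vmkping (the vmk names below are just examples from my setup and may differ on yours):

vmkping -I vmk1 10.1.0.1
vmkping -I vmk2 10.2.0.1
# if jumbo frames were in use, the MTU could also be verified end to end:
vmkping -I vmk1 -s 8972 -d 10.1.0.1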


FreeNAS box:
2x Xeon E5-2609 @ 2.5GHz
128GB RAM
Intel X540-T2 dual-port 10GbE card
LSI 9300-8i 12Gb/s SAS HBA with 12 HGST UltraStar 15K600 drives (6 of them in a software RAID 10)


iSCSI extent:
Extent: 2TB zvol on six disks configured as RAID 10
Portal: 10.1.0.1 port 3260, 10.2.0.1 port 3260
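
For completeness, creating such a zvol from the shell would look roughly like this (the pool and zvol names are placeholders, and the 16K volblocksize is just an example, not necessarily what I used):

zfs create -V 2T -o volblocksize=16K tank/esxi-extent
zfs get volsize,volblocksize,compression tank/esxi-extent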

iSCSI settings on ESXi:
Two vmks for iSCSI, each bound to only one physical NIC
Discovery: dynamic
Path selection policy: round robin
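
For reference, binding the two vmks and setting round robin can also be done from the ESXi shell, roughly like this (vmhba33 and the naa ID are placeholders for the software iSCSI adapter and the device on your host):

esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli storage nmp device set --device=naa.XXXXXXXXXXXXXXXX --psp=VMW_PSP_RR
# optionally switch paths after every I/O instead of the default 1000:
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.XXXXXXXXXXXXXXXX --type=iops --iops=1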


Performance:

Running a dd (bs=2M count=7500) in a single VM located on the iSCSI storage, I see a peak of about 1.2Gbit/s on both interfaces. Running a dd in two VMs simultaneously, the peak stays at 1.2Gbit/s. The pool can do 2.3GByte/s write and about 5GByte/s read (measured with iozone and dd), so the disks should not be the bottleneck…
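
For reference, the tests looked roughly like this; the paths are placeholders (the dd ran inside a VM on the iSCSI datastore, iozone directly against the pool):

dd if=/dev/zero of=/mnt/data/testfile bs=2M count=7500
iozone -s 8g -r 128k -i 0 -i 1 -f /mnt/tank/iozone.tmp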

Because performance is still not what it should be, I assume this is caused by the Intel ixgbe driver issue…

It would be great if anyone could confirm this, or even better, give me a hint on how to improve performance =)


Thanks,

Peppermint
 

Peppermint

Cadet
Joined
Jul 8, 2014
Messages
9
I disabled TSO a few days ago. There was no change in performance. Maybe only NFS is affected by this issue?
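
For anyone wanting to try the same, disabling TSO from the shell looks roughly like this (ix0/ix1 are my interface names; note that a change made this way doesn't survive a reboot, the interface options field in the GUI is the persistent way):

ifconfig ix0 -tso
ifconfig ix1 -tso
ifconfig ix0 | grep options   # TSO4 should no longer be listed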
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Intel 10Gb NICs aren't really reliable. At the present time, if you want performance and/or reliability you should use something like a 10Gb Chelsio card. These are what iX uses.
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
Yep, we're just gonna get two of the Chelsio cards and see if that helps.
 

downingjosh

Dabbler
Joined
Feb 25, 2013
Messages
22
I ran into this same issue with an Intel X520-DA2. I just upgraded to 9.3 Alpha and the issue has not recurred. I've hit 6Gb/s on a single link, no more crashes.

Are people seeing significantly better performance with the Chelsio cards? I had heard several people report good results with the Intel 10GbE cards, which is why I bought them; I never saw much mention of the Chelsio NICs in the forums.
 

jamiejunk

Contributor
Joined
Jan 13, 2013
Messages
134
It's not just a performance problem. They straight up fail to work after a while. We lost all network connectivity after a few days. Got two Chelsio cards on the way. We'll see if these do any better.
 

downingjosh

Dabbler
Joined
Feb 25, 2013
Messages
22
jamiejunk said: It's not just a performance problem. They straight up fail to work after a while. We lost all network connectivity after a few days. Got two Chelsio cards on the way. We'll see if these do any better.
Oh, I see; I experienced the same issue. Coincidentally, I upgraded to 9.2.1.7 right before adding the Intel 10GbE card. I'm very glad that bug got resolved a day before I hit the issue; it had been open for five months, so it's nice that I didn't have to wait.

I'm just wondering if there are other issues with the Intel 10GbE NICs, or if it was just this driver/OS interaction driving the recent opinions. I'll consider getting a Chelsio adapter if there are further issues with the Intel cards, but I'd rather not spend the money if this kernel change fixes the only real issue.
 

Peppermint

Cadet
Joined
Jul 8, 2014
Messages
9
I'll try 9.3 Alpha tomorrow. I hope this will improve performance as downingjosh reported... otherwise I'll order some of those Chelsio cards ;-)
 

Peppermint

Cadet
Joined
Jul 8, 2014
Messages
9
I recently installed a nightly build of 9.3 Alpha. Now I see about 2GByte/s over both paths while doing a dd if=/dev/zero of=/path/testfile bs=2MB count=75000. Measured write speed in a single VM is about 380-450MB/s; read is about 140MB/s... Faster write than read speed...?!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Zeros compress really well... so naturally you appear to get amazing performance where none exists.
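
If you want numbers that mean something, re-test with data that doesn't compress, for example with fio inside the guest (assuming it's installed there; the path is a placeholder, and refill_buffers keeps the write buffers incompressible):

fio --name=seqwrite --rw=write --bs=2M --size=10g --direct=1 --refill_buffers --filename=/path/testfile
fio --name=seqread --rw=read --bs=2M --size=10g --direct=1 --filename=/path/testfile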
 