Bottleneck of iSCSI speed using 2 or more interfaces


biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
Hello.

I have FreeNAS 9.2.1.9 with 32 GB RAM and two gigabit interfaces (10.10.1.1/16 and 10.20.1.1/16).
Windows Server 2012 is connected to the FreeNAS target via MPIO round-robin.
The maximum copy speed to FreeNAS is 95 MB/s, with utilization around 450 Mbps per interface.
If I use one interface, the copy speed is the same, 95 MB/s, with that interface at 900 Mbps.

How do I get 200 MB/s using two interfaces and MPIO?
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Which iSCSI service do you use (CTL or the old one)? How do you test?
Which hardware?
 

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
Which iSCSI service do you use (CTL or the old one)? How do you test?
Which hardware?
I use CTL.
The NIC is an Intel Gigabit ET dual-port server adapter.
My test is simple: I copy three big files from several sources, first with MPIO enabled and then with MPIO disabled, and monitor the LAN utilization graphs on FreeNAS and Windows.
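As a second check alongside the GUI graphs, per-interface throughput can also be watched live from the FreeNAS shell. A minimal sketch, assuming the two Intel ports show up as igb0 and igb1 (substitute your actual interface names):

# Live per-interface traffic, refreshed every second (FreeBSD systat).
systat -ifstat 1

# Alternative: byte/packet counters for one interface, sampled each second.
netstat -I igb0 -w 1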
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Which HBA and disks? And what speed do you get on the FreeNAS box itself when you test it with dd? (A forum search will help.)
 

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
Two Adaptec controllers configured as JBOD, with 12 SATA 2 TB disks installed.
I have tried different configurations:
1. pool with 3 vdevs = 3-disk RAIDZ + 3-disk RAIDZ + 3-disk RAIDZ
2. pool with 2 vdevs = 5-disk RAIDZ + 5-disk RAIDZ
3. pool with 1 vdev = 9-disk RAIDZ
4. pool with 1 vdev = 3-disk RAIDZ

Write speed over iSCSI tops out at 95 MB/s on every pool; the bottleneck is the NIC.
dd speed is much faster, more than 200 MB/s on all pools, so the pool layout does not matter.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Which HBA and disks? And what speed do you get on the FreeNAS box itself when you test it with dd? (A forum search will help.)
zambanini, would this be a good example of a dd test he could run?
dd if=/dev/zero of=/mnt/tank1/test.file bs=1M count=10000 (with compression disabled)
Or something like this?
dd if=/dev/zero of=/mnt/tank1/test.file bs=1048576
(tank1 = your pool name; test.file is any file name you want to use)
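For what it's worth, a slightly fuller version of that test might look like the lines below. This is only a sketch, assuming a pool named tank1 and a throwaway dataset tank1/bench (both placeholder names):

# Scratch dataset with compression off, so the /dev/zero data isn't compressed away.
zfs create tank1/bench
zfs set compression=off tank1/bench

# Sequential write: 10 GB in 1 MiB blocks.
dd if=/dev/zero of=/mnt/tank1/bench/test.file bs=1M count=10000

# Sequential read of the same file. Note that with 32 GB RAM a 10 GB file may be
# served largely from ARC, so use a file larger than RAM for honest read numbers.
dd if=/mnt/tank1/bench/test.file of=/dev/null bs=1M

# Clean up.
zfs destroy tank1/bench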
 

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
Now I have configured a pool with a single 3-disk RAIDZ vdev. Result: 10485760000 bytes transferred in 80.642245 secs (130028126 bytes/sec, about 124 MB/s).
 

biostacis

Dabbler
Joined
Jun 27, 2014
Messages
15
zambanini, would this be a good example of a dd test he could run?
dd if=/dev/zero of=/mnt/tank1/test.file bs=1M count=10000 (with compression disabled)
Or something like this?
dd if=/dev/zero of=/mnt/tank1/test.file bs=1048576
(tank1 = your pool name; test.file is any file name you want to use)

Screenshots of the speed limit:

[Attached network utilization graphs: upload_2014-11-23_18-16-15.png, upload_2014-11-23_18-17-37.png]
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
biostacis, did you end up coming up with any solutions? I'm just curious.
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
I wish someone could attach their own speed-measurement charts from cloning VMs on a single FreeNAS box or between two of them. On twin gigabit NICs dedicated to iSCSI, my transfer rates are rather low (60 MB/s per NIC on the FreeNAS charts). :-(
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I wish someone could attach their own speed-measurement charts from cloning VMs on a single FreeNAS box or between two of them. On twin gigabit NICs dedicated to iSCSI, my transfer rates are rather low (60 MB/s per NIC on the FreeNAS charts). :-(

Yeah, except there are external factors that can affect the speeds even more than your FreeNAS box. Pretend you have an infinitely fast FreeNAS box, but your source disks only do 25 MB/s. Guess what speed you'll get.

The only truly accurate answer is "up to whatever the bottleneck is."
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
Yeah, I don't think someone posting their chart will help you all that much, other than proving that under the right conditions it's possible to get better speeds than you're seeing; their equipment will differ. I have tapped out my single NIC at around 800 Mbps. I'll be doing the same testing as you once I bring mine up to a quad NIC.

If you're convinced your disks are perfectly capable of the sustained activity and that you have a network issue, I wonder if you've checked all the usual network suspects?

Is your iSCSI traffic in a separate VLAN or isolated on a different switch from your other network traffic?
Have you already ruled out all the simple layer 1 issues we all gloss over? Good Cat5e or Cat6 cabling? Port status checked to confirm everything negotiated gigabit?
More likely, since you're talking about VMware: have you checked all your round-robin settings for the datastore and set the I/O operations threshold to something other than the default (which is 1000, I think)? See the sketch after this list.
I don't see your network config; we would be assuming you used a different IP subnet for each of the VMkernel ports and each of your iSCSI interfaces?
I'm not sure how SIOC (Storage I/O Control) affects cloning, or whether it's enabled?

I could be off base, but those are the things that come to mind.
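On the round-robin threshold specifically, it can be changed from the ESXi shell. A sketch, assuming ESXi 5.x and a placeholder device ID of naa.xxxx (substitute your LUN's identifier from the device list):

# Find your iSCSI LUN and its current path selection policy.
esxcli storage nmp device list

# Switch paths after every I/O instead of the default 1000.
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxx --type=iops --iops=1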
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
How do I get 200 MB/s using two interfaces and MPIO?
If you want greater-than-gigabit speed for a single transfer, you need to upgrade to 10 GbE.

Bonding multiple gigabit connections lets you service more threads at full gigabit capacity, but it doesn't increase network speed beyond gigabit. You get a wider pipe, not a faster one. ;)
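One way to see the wider-not-faster effect outside of iSCSI is iperf. A sketch, assuming iperf is installed on both ends, the links are aggregated with LACP (MPIO splits traffic at the iSCSI layer instead, but the per-stream cap is the same idea), and the server answers at 10.10.1.1:

# On the FreeNAS side: start an iperf server.
iperf -s

# On the client: one stream tops out near a single link (~940 Mbps).
iperf -c 10.10.1.1 -t 30

# Several parallel streams can fill the aggregate, but each stream
# individually is still capped by one gigabit path.
iperf -c 10.10.1.1 -t 30 -P 4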
 

TimTeka

Dabbler
Joined
Dec 18, 2013
Messages
41
Yeah, I don't think someone posting their chart will help you all that much, other than proving that under the right conditions it's possible to get better speeds than you're seeing; their equipment will differ. I have tapped out my single NIC at around 800 Mbps. I'll be doing the same testing as you once I bring mine up to a quad NIC.

If you're convinced your disks are perfectly capable of the sustained activity and that you have a network issue, I wonder if you've checked all the usual network suspects?

Is your iSCSI traffic in a separate VLAN or isolated on a different switch from your other network traffic?
Have you already ruled out all the simple layer 1 issues we all gloss over? Good Cat5e or Cat6 cabling? Port status checked to confirm everything negotiated gigabit?
More likely, since you're talking about VMware: have you checked all your round-robin settings for the datastore and set the I/O operations threshold to something other than the default (which is 1000, I think)?
I don't see your network config; we would be assuming you used a different IP subnet for each of the VMkernel ports and each of your iSCSI interfaces?
I'm not sure how SIOC (Storage I/O Control) affects cloning, or whether it's enabled?

I could be off base, but those are the things that come to mind.
Give me some time to answer your questions thoroughly, please. :smile:
 