MPIO FreeNAS 9.1 VMware ESXi 5.5


hobrawker

Cadet
Joined
Jan 6, 2014
Messages
5
Okay, I know there are many topics on this, but I'm just not finding the answer I'm looking for. I have multipath set up, but I'm still only getting about 60 MB/s on a disk speed check. My setup is as follows:

freenas server:
Build FreeNAS-9.1.1-RELEASE-x64 (a752d35)
Platform Dual-Core AMD Opteron(tm) Processor 2218
Memory 8170MB
System Time Mon Jan 06 13:44:14 PST 2014
Uptime 1:44PM up 2 mins, 0 users
Load Average 0.54, 0.54, 0.25

It has a total of 4 cores.
Interface 1, for management, has an internally routable IP.
interface 2 (em01) 172.15.10.10/24
interface 3 (em02) 172.15.20.10/24

The portal contains both interfaces.
I have a target set up.

I have VMware set up with a vSwitch containing a VMkernel port labeled iscsi1 with IP address 172.15.10.11/24 and a VMkernel port labeled iscsi2 with IP address 172.15.20.11/24.
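(For reference, a quick way to double-check that both VMkernel ports are actually bound to the software iSCSI adapter is from the ESXi shell. The adapter name vmhba33 and the vmk numbers below are only examples; substitute your own.)

# list the VMkernel ports currently bound to the software iSCSI adapter
esxcli iscsi networkportal list --adapter vmhba33
# bind the two iSCSI VMkernel ports if they are missing from the list
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2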

The storage adapter shows 1 device and 2 paths, and when I look in the properties, it shows both paths as active.
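(The same information can also be checked from the ESXi shell, which shows the path selection policy per device; the naa identifier below is just a placeholder.)

# show each device, its Path Selection Policy (Fixed vs. Round Robin), and its working paths
esxcli storage nmp device list
# or limit the output to a single LUN (substitute the real naa identifier)
esxcli storage nmp device list --device naa.XXXXXXXXXXXXXXXX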

When I run:
dd if=/dev/zero of=/tmp/output.img bs=8k count=256k
I get:
262144+0 records in
262144+0 records out
2147483648 bytes (2.1 GB) copied, 35.5749 s, 60.4 MB/s
If I drop a path, I get the same. When I look at the reports I see traffic on em01 but nothing on em02.
Is there something I am missing? 60 MB/s is awful.
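(A note on the test itself: /dev/zero writes can sit in the guest's page cache, and on a compressed ZFS dataset zeros largely compress away, so the reported speed can be optimistic. Assuming a Linux guest with GNU dd, forcing the data out to disk before dd reports a number gives a more honest figure, e.g.:)

# same 2GB write, but flush the data to disk before dd reports a speed
dd if=/dev/zero of=/tmp/output.img bs=1M count=2048 conv=fdatasync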
-Rob
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Umm.. so? iSCSI is always slow. And I have no clue how big your pool is, but you aren't doing yourself any favors in the performance area by running with the minimum RAM required for ZFS. If you want iSCSI (or NFS) and you want performance with it, you're going to have to beef up your system. Try 32GB of RAM. ;)

This looks completely normal and expected to me. Do you disagree?
 

hobrawker

Cadet
Joined
Jan 6, 2014
Messages
5
The pool size is only 300 GB. This is just a POC.
RAM also should not affect the multi-path issue.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
POC?

RAM can affect multi-path indirectly. If your pool isn't fast enough to saturate more than 1 path, regardless of how many paths you have, performance won't change. And RAM is the single biggest way to increase pool performance. My guess is your current bottleneck is the pool. So your speeds won't change.
 

Got2GoLV

Dabbler
Joined
Jun 2, 2011
Messages
26
OP:

1- Run dd tests at a shell prompt on the FN box to establish a baseline for your array.
2- If it is more than 60 MB/s, then disable MPIO, configure only one path, and connect that interface on the FN box directly to the interface on the ESX box.
Then test again. Can you max out the Gig link?
Do the same test using the other adapter on each box. Same result?
3- If you are NOT able to max out both Gig links, then the issue might be with the NICs. Check those. What brand/model are they?
4- If you ARE able to max out the Gig links individually, then turn MPIO back on and connect the NICs on the FN box directly to the NICs on the ESX box and try again.
(I'm assuming you were using a switch previously.)
5- If this works, then the issue might be with the switch.

Also, make sure that MPIO is configured for round robin. Otherwise, it will not cycle through the paths to utilize them all.
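(A sketch of how to do that from the ESXi 5.5 shell, in case it is easier than clicking through the vSphere client; the naa identifier is a placeholder, use the one reported by esxcli storage nmp device list.)

# set the path selection policy for the FreeNAS LUN to round robin
esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR
# optional: switch paths every I/O instead of the default 1000 I/Os,
# so even a single stream spreads across both links
esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXXXXXXXXXXXXXX --type iops --iops 1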


cyberjock said: "Umm.. so? iSCSI is always slow."

Huh?
o_O

He should be able to saturate a Gig link's worth of bandwidth (assuming the array can deliver that throughput) with a single stream,
and more aggregate bandwidth, up to the sum of the MPIO links, with many different/randomized streams (as in many server requests simultaneously being cycled between the MPIO links).
 

hobrawker

Cadet
Joined
Jan 6, 2014
Messages
5
cyberjock said:
POC?

RAM can affect multi-path indirectly. If your pool isn't fast enough to saturate more than 1 path, regardless of how many paths you have, performance won't change. And RAM is the single biggest way to increase pool performance. My guess is your current bottleneck is the pool. So your speeds won't change.

POC = proof of concept.

Anyhow, I increased the RAM to 32GB with the same result.
 

hobrawker

Cadet
Joined
Jan 6, 2014
Messages
5
I will give Got2GoLV's suggestions a shot tomorrow. I had round robin on and it actually halved my throughput for some reason. Anyhow, it's the end of my day here.

-Rob
 

hobrawker

Cadet
Joined
Jan 6, 2014
Messages
5
Got2GoLV, regarding your steps above:

1- To sound like an epic n00b: how do I test the ZFS pool speed? When I run dd on the FN box it complains that the filesystem is read-only (because FreeNAS is running embedded on a flash drive), so how do I test the pool itself?
2- I dropped it to one fixed path with no change, so the throughput is the throughput.
 

Got2GoLV

Dabbler
Joined
Jun 2, 2011
Messages
26
You run dd on the FN box, but against whichever storage device you wish to test, not against the actual FN install drive.
You want to test your data storage, not the OS storage drive.

Where are your iSCSI extents? Run dd against those ZFS pools (i.e., against a path on the pool or dataset that holds the extents).
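(Something along these lines, assuming the pool is mounted at /mnt/tank; substitute your own pool or dataset name. Keep in mind that if compression is enabled, zeros will inflate the write number, and the test file should be larger than RAM so the read test isn't served from the ARC.)

# write test: roughly 40GB of zeros onto the pool (larger than 32GB of RAM)
dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=40000
# read test: read the same file back
dd if=/mnt/tank/ddtest of=/dev/null bs=1m
# clean up
rm /mnt/tank/ddtest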
 