2 x FreeNAS iSCSI Targets to VMware - Slow Migration!

Status
Not open for further replies.

Josif

Dabbler
Joined
Aug 15, 2015
Messages
12
Hi Guys,

I believe I've chosen the right section of the forum to ask for your professional opinion; if not, please point me in the right direction.

Well, here is the deal:

I have 2 x VMware hypervisors connected via iSCSI MPIO to 2 x FreeNAS boxes, using 2 x 1Gbps NICs with MTU 9000 on both sides.
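(As an aside, jumbo frames can be verified end to end from the ESXi shell with a don't-fragment ping of the largest payload that fits in a 9000-byte frame; the vmkernel interface and target IP below are just placeholders.)

Code:
# 9000-byte MTU minus 28 bytes of IP/ICMP headers leaves an 8972-byte payload
vmkping -I vmk1 -d -s 8972 192.168.10.10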

I've carefully read all of your recommendations and opinions on how to squeeze out the best performance, and I am very happy with the results I got in testing.
Here is an HDD speed test inside a VM:
[screenshots: benchmark results]


But this is not why I am here. I have a problem when migrating VMs from one FreeNAS datastore to another: I am not able to saturate the 2 x 1Gbps MPIO I've set up. The maximum I am getting between the two FreeNAS boxes is 1Gbps:
[screenshot: migration throughput]

It doesn't matter whether I do a live migration or a powered-off migration; the maximum I get is 1Gbps, even though the monitoring shows that both NICs are being used for the migration:
[screenshot: NIC utilization during migration]

Interestingly, when I do a live migration from a local datastore (an SSD in my hypervisor) to FreeNAS, I do hit the maximum of the network.
If my readings are correct, why is the maximum speed of a migration between the two FreeNAS boxes only 1Gbps?
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
MPIO is not a panacea, and it may not be applicable to all cases. A single SCSI request cannot be split between two paths; the initiator has to choose one of them. If there are many outstanding requests, the situation is somewhat easier, but it creates a different problem for the target: maintaining request ordering. MPIO does not guarantee the ordering of requests sent via different paths, and if they arrive out of order, it may significantly hurt performance by confusing the read-ahead logic, which may conclude it is seeing random I/O. That is why VMware by default prefers to send requests through only one link, or to apply some kind of affinity. How that affinity differs between VM I/O and vMotion is an interesting question.
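(For reference, the path selection policy in play can be inspected and changed per device from the ESXi shell; the naa.* identifier below is a placeholder, and switching to Round Robin with a low IOPS limit is just one common way to spread requests over both links, not a guaranteed fix for this.)

Code:
# Show each device's current path selection policy (PSP)
esxcli storage nmp device list

# Switch a device to Round Robin so requests alternate across paths
# (naa.XXXXXXXXXXXXXXXX is a placeholder for the real device identifier)
esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR

# Rotate to the next path after every I/O instead of the default 1000 IOPS
esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXXXXXXXXXXXXXX --type iops --iops 1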

It would be interesting to see what happens to performance if you disable all paths except one, at least on the copy source. Besides the MPIO complications, AFAIK VMware may itself throttle vMotion operations so as not to create network congestion for running VMs.
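(If anyone wants to try that single-path test, individual paths can be taken offline from the ESXi shell; the runtime path name below is a placeholder for whatever 'esxcli storage core path list' reports on the host.)

Code:
# List all paths to the iSCSI devices and note their runtime names
esxcli storage core path list

# Temporarily disable one of the two paths (placeholder runtime name)
esxcli storage core path set --path vmhba33:C0:T1:L0 --state off

# Re-enable it after the test
esxcli storage core path set --path vmhba33:C0:T1:L0 --state active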
 

Josif

Dabbler
Joined
Aug 15, 2015
Messages
12
Does this mean that if I configure LACP and get 2Gbps in one link, I can saturate the full 2Gbps of the network, and that it would most probably be a better solution for what I am trying to accomplish?

LACP on FreeNAS
LACP on VMware
No MPIO
= ?
 

mav@

iXsystems
Joined
Sep 29, 2011
Messages
1,428
LACP is in no way better than MPIO. LACP balances traffic per flow (based on a hash of the addresses/ports), so a single connection always stays on one physical link, and with LACP you almost never get more than 1Gbps.
 