OK, I have read your post.
If I understand it correctly, I should have more than one connection for each interface to maximize the bandwidth; is that correct?
Not exactly.
You need more than one connection from initiator to target.
Not for each interface.
MPIO is not for bandwidth aggregation.
But it can max out all of the links in cases where there are enough separate connections between targets and initiators that each connection ends up cycling between links out of step with the others.
It will only really use one link at a time, but it will cycle to the next link every X IOs (a configurable setting).
In the case of ESX, this defaults to 1,000 IOs.
So, say you transfer a file from the target...
MPIO will start the transfer on one link, and after 1,000 IOs, it will switch to the next link, and so on.
If you start many different connections, you will be able to use more bandwidth than the equivalent of one of the Gig links.
But this would normally happen if, say, you have many different/busy servers accessing the target.
With a single connection (or a low number of connections), MPIO will not use much more than the equivalent bandwidth of one of the MPIO links, if any more at all.
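To make that round-robin behavior concrete, here is a small Python sketch (purely an illustration, not VMware code) of a path selection policy that switches links every N IOs. The 1,000-IO limit matches the ESX default mentioned above; the NIC names and connection counts are made up.

```python
def active_link(ios_issued, links, ios_per_path=1000):
    """Return the link a connection is currently using, given how many
    IOs it has issued so far. Round-robin: stay on one link for
    `ios_per_path` IOs, then move to the next (ESX defaults to 1,000)."""
    return links[(ios_issued // ios_per_path) % len(links)]

links = ["vmnic2", "vmnic3"]          # two dedicated iSCSI NICs (example names)

# One connection: at any given moment, only one link carries its traffic,
# so a single transfer tops out at roughly one link's worth of bandwidth.
print(active_link(2500, links))       # -> vmnic2 (third block of 1,000 IOs)

# Several busy connections, each at a different point in its own cycle:
# between them, both links end up carrying traffic at the same time.
connections = {"vm-a": 2500, "vm-b": 3100, "vm-c": 700, "vm-d": 1800}
for name, ios in connections.items():
    print(name, "->", active_link(ios, links))
```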
I only have experience setting up MPIO on VMWare, but I can tell you how it works there (and it does work). Your best practice should be to have your VM Host (ESXi server for VMWare) set up with 2 (or more) dedicated NICs for iSCSI. Each of these NICs should be assigned an IP address in a separate subnet. On the FreeNAS side you do the same thing: you set up 2 (or more) NICs, each with an IP address in a separate subnet (you should have a 1 to 1 relationship between NICs and subnets between an individual ESXi server and your FreeNAS system). I don't even set up routing for these subnets since it's not needed in my setup. The end result is that each NIC on the ESXi server can only see one NIC on the FreeNAS system... so VMWare sees this as two totally separate paths that can be load balanced across using round robin.
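Here is a small sketch, with made-up addresses, that checks the kind of layout described above: each ESXi NIC pairs with exactly one FreeNAS NIC in the same subnet, and the subnets themselves do not overlap. The IPs and interface counts are assumptions for illustration only.

```python
import ipaddress

# Hypothetical addressing for the layout described above: two dedicated
# iSCSI NICs per side, one NIC pair per subnet, no routing between subnets.
esxi_nics    = ["10.10.1.10/24", "10.10.2.10/24"]   # ESXi vmkernel ports
freenas_nics = ["10.10.1.20/24", "10.10.2.20/24"]   # FreeNAS interfaces

subnets = []
for esxi, nas in zip(esxi_nics, freenas_nics):
    esxi_if = ipaddress.ip_interface(esxi)
    nas_if  = ipaddress.ip_interface(nas)
    # 1 to 1 rule: each ESXi NIC shares its subnet with exactly one FreeNAS NIC.
    assert esxi_if.network == nas_if.network, f"{esxi} and {nas} are not in the same subnet"
    subnets.append(esxi_if.network)

# Each path must sit in its own subnet so the NICs can only see their partner.
assert len(set(subnets)) == len(subnets), "iSCSI subnets overlap"

print("Layout OK:", [str(n) for n in subnets])
```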
Correct.
But this is still not link aggregation.
This is simply link redundancy.
MPIO is not bandwidth aggregation.
It works well for what it was designed for, and on busy configurations with lots of connections, the bandwidth of all links is maximized.