I've read all the posts, and generally it's people with one server and one or two hosts trying to push as much bandwidth as possible through gigabit links...
Our current setup has 10 hosts connected with dual 10Gb links.
Our SANs also have dual 10Gb links; we're currently using one link for NFS and the other for ZFS replication.
We're looking for feedback from others who may have used lagg for either failover or more bandwidth.
Currently we're able to get about 900 MB/s when we do a large migration from the SAN to DAS for maintenance.
We rarely do this, but we can't help being curious whether faster speeds are possible. It's not really necessary, as day-to-day traffic doesn't even come close to 10-15% of the link.
Has anyone used these two features in a larger configuration?
Not interested in iSCSI and MPIO either; we tried it, and we're happier with NFS.
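For anyone following along, here's roughly what a failover lagg looks like on FreeBSD (which underpins many of these SAN builds). This is just a sketch: the interface names (ix0, ix1) and the address are placeholders, not our actual config.

```shell
# Sketch of a failover lagg on FreeBSD -- interface names and
# address are placeholders; adjust for your hardware.
ifconfig ix0 up
ifconfig ix1 up
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport ix0 laggport ix1 \
    inet 10.0.0.10/24

# For aggregation instead of failover (switch must support LACP):
# ifconfig lagg0 laggproto lacp laggport ix0 laggport ix1 ...
```

One caveat worth noting on the bandwidth side: LACP hashes each flow onto a single physical port, so a lone NFS stream during a big migration would still top out at one 10Gb link; the aggregation only pays off across multiple concurrent flows.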