Fermin Rodriguez
Dabbler
- Joined: Sep 23, 2013
- Messages: 35
Hi, I have a very specific and complicated scenario.

First, the storage servers: a couple of R710 servers, each with 48 GB of memory, 6 hard drive slots (2 TB each), a PERC controller, and 6 NICs (1 Gb each).

First question: hardware RAID or RAIDZ? (I'm looking for performance and hot-swapping in case of failure; right now I have a RAIDZ1.)

Second: NICs. So far I made a "management" lagg (failover) with bce0 and bce1, then an LACP lagg with bce2 and bce3, and another LACP lagg with igb0 and igb1. (I should mention the bce NICs are the integrated 1 Gb Broadcoms and the igb are 1 Gb Intels.)
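For reference, the lagg layout described above could be sketched like this on FreeBSD (in FreeNAS this is normally done through the GUI, and CLI changes don't persist across reboots; this is just to show the intended topology):

```shell
# Failover "management" lagg on the first two Broadcom ports:
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport bce0 laggport bce1 up

# LACP lagg on the remaining Broadcom ports (needs a matching
# LACP channel group configured on the switch side):
ifconfig lagg1 create
ifconfig lagg1 laggproto lacp laggport bce2 laggport bce3 up

# LACP lagg on the Intel ports:
ifconfig lagg2 create
ifconfig lagg2 laggproto lacp laggport igb0 laggport igb1 up
```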
I read somewhere that FreeNAS doesn't really do load balancing, so I figure LACP should help... sorta.

This R710 will serve the entire space via iSCSI to a VMware cluster of 3 servers. Each server has 4 cards dedicated to iSCSI (10 Gb/s cards, mind you, and all the switches are managed and 10 Gb/s).

To make MPIO work, I put 2 cards in the 192.168.130.x range and 2 in the 192.168.131.x range. In FreeNAS, lagg1 is on 130.x and lagg2 is on 131.x.

I already set MTU = 9000 on each lagg, on every switch, and everywhere in VMware.
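On the VMware side, MPIO across two subnets usually also means binding one vmkernel port per physical NIC to the software iSCSI adapter. A sketch with esxcli (the adapter and vmk names here are placeholders, not from the post):

```shell
# Bind each iSCSI vmkernel port to the software iSCSI adapter.
# vmhba33 / vmk1 / vmk2 are hypothetical names for your environment.
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify the port bindings took effect:
esxcli iscsi networkportal list --adapter=vmhba33
```

Without port binding, ESXi will typically use only one path per subnet regardless of how many NICs are present.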
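One quick sanity check that jumbo frames actually work end to end: send a don't-fragment ping sized for a 9000-byte MTU (8972 bytes of payload after the 20-byte IP and 8-byte ICMP headers; the target IP below is just an example):

```shell
# FreeBSD/FreeNAS: -D sets the don't-fragment bit, so if any hop's
# MTU is below 9000 this fails instead of silently fragmenting.
ping -D -s 8972 192.168.130.10

# From an ESXi host the equivalent is:
# vmkping -d -s 8972 192.168.130.10
```

If this fails while a plain ping works, some device in the path is still at MTU 1500.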
From the reports screen I barely hit 123 Mbit/s. Shouldn't I get closer to 1 Gb/s?

I'm planning on setting up NAS4Free on the other server for performance testing too.
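Before comparing platforms, it may help to isolate raw network throughput from iSCSI and disk performance with iperf (ships with FreeNAS; the server IP below is an assumption):

```shell
# On the FreeNAS box:
iperf -s

# From a test client on the iSCSI network:
# -t 30 runs for 30 seconds; -P 4 opens four parallel streams,
# which is useful for checking whether LACP actually spreads
# traffic across links (a single stream only ever uses one link).
iperf -c 192.168.130.10 -t 30 -P 4
```

If iperf shows near line rate but iSCSI doesn't, the bottleneck is in the storage stack rather than the network.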