I've finally hit the point where our 10 GbE LAN is a real-world bottleneck. I considered 2x link aggregation, but that feels like a short-term stopgap when what we really need is headroom. Basically, we want to get back to the design assumption that the LAN is never the bottleneck.
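For reference, the 2x aggregation I'd been considering is a plain LACP bond; under systemd-networkd it would look roughly like this (filename and policy choices are illustrative, not a tested config):

```
# /etc/systemd/network/bond0.netdev  (hypothetical path)
[NetDev]
Name=bond0
Kind=bond

[Bond]
# 802.3ad = LACP; requires matching config on the switch side
Mode=802.3ad
# Hash on L3/L4 so different flows can land on different links
TransmitHashPolicy=layer3+4
```

Part of why it feels like a stopgap: LACP hashes per flow, so any single TCP stream is still capped at one 10 GbE link; only aggregate throughput across multiple flows improves.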
Naturally, the cheap eBay surplus of 25 GbE and 40 GbE cards is tempting, as are FS's cheap transceivers.
However, I'm seeing a real lack of 25 and 40 GbE switching gear at anything short of an enterprise price point or datacenter-level power draw. There doesn't seem to be much gear aimed at a workgroup rather than a rack or spine. We're currently in startup/development mode and being frugal with cash, so a $5000+ switch is out of the question.
Has anyone successfully migrated to a faster-than-10-gigabit network without breaking the bank, in either hardware cost or power draw? We don't need a 48-port solution; eight ports running faster than 10 GbE would be fine.
For cost effectiveness, should we be looking at 25 or 40 GbE? Either would meet our needs (40 outright, or 25 with room to aggregate).
Our SSD arrays could nearly saturate 100 GbE, so storage won't be the limiting factor.
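The back-of-envelope math on the storage side (drive count and per-drive throughput are assumed figures for illustration):

```python
# Rough estimate of array throughput vs. 100 GbE line rate.
drives = 8                  # assumed array size
gb_per_s_each = 1.5         # assumed sequential read per NVMe drive, GB/s

# Convert aggregate GB/s to Gbit/s (x8 bits per byte)
total_gbit = drives * gb_per_s_each * 8
print(total_gbit)  # 96.0 Gbit/s -- close to 100 GbE line rate
```

Even a modest NVMe array lands in that range, which is why the network, not storage, is the constraint.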
Is this realistic right now, or is the switching hardware simply not available at the needed price and power specs?