Enterprise Switches = great value. Need help avoiding errors

Status
Not open for further replies.

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
I need a 10GbE switch to be a standalone device for a customer. They will be shortchanged by the ZFS server if I don't get their network up to about 1 GB/s ... While this one person isn't going to use all of the ports, these devices aren't really priced by port quantity or port speed. (There are Mellanox 40Gb switches that cost less.)

My goal is to find a switch which supports:
• LACP / LAG (such that a single file would benefit from the aggregation).
• 24x (or fewer) SFP+ ports that support 1GbE / 10GbE, plus ~4x QSFP+ (40GbE) ports
• Able to connect to a Wi-Fi router such as an Airport Extreme, etc., without it being very difficult.

Example:
MSX6036T-2SFS 36-port FDR 40Gb/s QSFP
QSFP+ to SFP+ adapter
SFP to 1000base-T https://goo.gl/YniqNS

• FreeNAS
• macOS
• Windows
• Ubuntu

Is there an adapter that will actually do 40GbE between any of these operating systems? If not, I may as well stick with SFP+ and use LAG, yes?

What is the cheapest brand that is easy to use and gives the highest speeds? (Obviously, what everyone wants.)

I'm not expecting this to be like setting up an Airport Extreme, but I can't afford to be dependent on outside resources or to spend 8 hours troubleshooting every time there's a problem.

I've had a few customers that were good fits for ZFS systems that'd exceed 800 MB/s, if not 1 GB/s. (I hope said demand amongst my client base increases, which can only happen if I don't create fiascos that are more trouble than they're worth.)

Please don't assume I'm versed in the CLI or that I competently administrate ZFS ... but I have been more than willing to hire local people to assist with things as required.

I'm doing my best, and I hope to improve my competency by running a ZFS server with SFP+ and QSFP+ in both my business and my home office (data recovery). Given that all of these demand learning the CLI and file systems, and immersing myself in the subject, I hope it accelerates the rate at which I become competent.

For now, I need to make choices that not only offer good value, but can become the standard hardware I rely on, so that over time I become more and more versed in using the device.

I believe I have dodged a few bullets by skipping over systems which seemed (based on performance and price) to be enticing. However, I REALLY need it to work as something standalone, with the exception of adding Wi-Fi access.

These are VERY low-security [targets]. I'm not suggesting they don't deserve the priority of security, but they aren't running businesses such that people will be trying to break into their networks.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
My goal is to find a switch which supports:
• LACP / LAG (such that a single file would benefit from the aggregation).
LAG doesn't make single file transfer faster.

Sent from my SAMSUNG-SGH-I537 using Tapatalk
 
Last edited:

TrumanHW

Contributor
Joined
Apr 17, 2018
Messages
197
LAG doesn't make single file transfer faster.


I knew LAG didn't for Cat-5e ... but I'd thought that it kind of worked that way here, since QSFP is just four SFP lanes.

Forgive the rhetorical question, I know there's no such thing as why in physics; just how.

The 'how' has already been made sense of: four lanes, bonded, make QSFP work.

The logical capacity to aggregate lanes into a single pool of available bandwidth - for a single file - just doesn't scale? Bizarre.

This has to be a port issue; you can split four from one... but not combine them? The only difference is the number of ports and however that logic is made discrete. It feels like code; even if there's an efficiency issue (as there usually is), the logic would exist.

I thought optical somehow changed the rules.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
This has to be a port issue; you can split four from one... but not combine them? The only difference is the number of ports and however that logic is made discrete. It feels like code; even if there's an efficiency issue (as there usually is), the logic would exist.
I don't claim to be an expert, but we have some very smart folks that come here and share their wisdom.
Take a look at these resources:

https://forums.freenas.org/index.php?resources/lacp-friend-or-foe.43/

https://forums.freenas.org/index.php?resources/jumbo-frames-notes.44/

https://forums.freenas.org/index.php?resources/multiple-network-interfaces-on-a-single-subnet.45/

https://forums.freenas.org/index.php?resources/10-gig-networking-primer.42/
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
@Elliot Dierksen would you share some network know-how here?
 

Elliot Dierksen

Joined
Dec 29, 2014
Messages
1,135
Not to get too nitpicky here, but Cat-5e is just the physical layer. It doesn't have any impact (assuming it is working, of course) on link aggregation at layer 2, which is where LACP operates. I haven't done anything with NICs at 40Gb, but I have done switch-to-switch 40G. Chelsio is the NIC of choice for FreeNAS, and they have 40Gb NICs in both the T5 and T6 families. Both are supported by FreeBSD with the cxgbe driver.

I'll try to abbreviate my usual rant about LACP... LACP is load balancing, NOT bonding. That means a hash algorithm determines which link of the bundle is used for each conversation. No one conversation can consume more than the bandwidth of one physical member of the bundle. That works really well if you have a file server serving multiple clients. It won't help if you are trying to get the speed of a single transfer above that of a single link.

Also, performance tuning is a whack-a-mole exercise. Something is always the slowest component; the idea is to make the slowest component fast enough for your needs. I have been really happy to get ~8Gb reads off my FreeNAS system, and I have never been able to fully saturate a single 10Gb link. I would certainly never argue against the nerd coolness of 40Gb links, but you are destined for disappointment if your drive controllers and drives are not up to the task. I have never done it, so it is just speculation, but I would bet you would have to be in an all-SSD configuration before you would approach saturating a 40Gb link. I have spent absolutely no energy proving that hypothesis since it isn't relevant to my world yet.
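The hashing behavior described above can be sketched in a few lines of Python. The link count, speeds, MAC addresses, and the use of SHA-256 are all illustrative (real switches use their own vendor-specific hash policies), but the shape is the same: each conversation is pinned to one member link, so a single transfer never exceeds one link's bandwidth, while many clients spread across the bundle.

```python
# Sketch of LACP-style load balancing (layer-2 hash policy), assuming a
# hypothetical 4-link bundle of 10 Gb/s members. Addresses are made up.
import hashlib

LINKS = 4                # member links in the LACP bundle
LINK_SPEED_GBPS = 10     # speed of each member link

def pick_link(src_mac: str, dst_mac: str) -> int:
    """Hash the conversation's addresses down to one member link."""
    digest = hashlib.sha256(f"{src_mac}|{dst_mac}".encode()).digest()
    return digest[0] % LINKS

# One big file transfer = one conversation = one link, every time.
link = pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
same = pick_link("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02")
assert link == same  # the hash is deterministic per conversation
print(f"single flow pinned to link {link}; capped at {LINK_SPEED_GBPS} Gb/s")

# Many clients hash across the members, so AGGREGATE throughput scales.
clients = [f"aa:bb:cc:00:00:{i:02x}" for i in range(2, 12)]
used = {pick_link(c, "aa:bb:cc:00:00:01") for c in clients}
print(f"10 clients land on links: {sorted(used)}")
```

This is why LACP helps a file server with many clients but does nothing for one client copying one file.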

On to the switches. I have some HPE switches (which are really re-badged H3C units) that I have been very happy with. I got them off eBay. You can also get some older Cisco Nexus switches, but those are loud and annoying because of the 1U fans. Not an issue if you don't sit close to them, but REALLY matters if you do. My gut would be to stick with 10Gb unless you have some insanely fast storage. IMHO, storage will be your bottleneck to filling a 40Gb link, not the NIC itself.
 
Last edited:

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
IMHO, storage will be your bottleneck to filling a 40Gb link, not the NIC itself.
That is true, and an issue that I don't know if the OP looked at. My pool appears to max out at around 6Gb/s reads and around 5Gb/s writes. I have not done enough testing to determine what the holdup is, but when copying from pool to pool within the chassis I can hit 10Gb/s speeds; I never come close to that speed over the network. I think there is some tuning that could be done on the network stack, but I don't know what.

@TrumanHW would you tell us about the rest of your server hardware? If the drive pool or drive controllers are not fast enough, the speed of the network is not going to matter.
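The "slowest component wins" point is simple arithmetic: end-to-end throughput is capped by the minimum across pool, controllers, and network. A minimal sketch, using illustrative numbers (the 6 Gb/s pool figure echoes the read speed quoted above; the controller figure is hypothetical):

```python
# Back-of-the-envelope bottleneck check. End-to-end throughput is the
# minimum over the components in the data path. Numbers are illustrative,
# not measurements from any particular system.
GBITS_PER_GBYTE = 8

components_gbps = {
    "pool (disk reads)": 6.0,   # ~6 Gb/s reads, as reported above
    "HBA / controller": 24.0,   # hypothetical controller headroom
    "network link": 10.0,       # single 10GbE link
}

bottleneck = min(components_gbps, key=components_gbps.get)
effective_gbps = components_gbps[bottleneck]
print(f"bottleneck: {bottleneck} at {effective_gbps} Gb/s")
print(f"~{effective_gbps / GBITS_PER_GBYTE * 1000:.0f} MB/s to the client")
```

With these numbers, upgrading the 10Gb link to 40Gb changes nothing: the pool is already the slowest component, at roughly 750 MB/s.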
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
40 GbE is effectively 4x 10 GbE links bonded together, but that is done transparently at the physical layer, not unlike what happens with 1000BASE-T or 10GBASE-T, which bond four 250 Mb/s or 2.5 Gb/s physical links respectively. You could ask what's keeping other implementations from doing the same, and it's definitely a fair question in simple cases (switch to switch, or switch to a single NIC with multiple ports).
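The contrast with LACP is that this physical-layer bonding is flow-agnostic: the lanes stripe data round-robin below the packet level, so even a single stream fills all four lanes. A minimal sketch of the idea (real 40GbE distributes 66-bit PCS blocks across its lanes; the block and lane counts here are just for illustration):

```python
# Sketch of physical-layer lane striping as in 40GbE: consecutive blocks
# of a SINGLE stream are dealt round-robin across four lanes, so one flow
# uses all lanes at once. (Contrast with LACP, which pins a whole flow to
# one link.) Lane count and "blocks" are illustrative.
LANES = 4

def stripe(blocks):
    """Deal each block of one stream onto a lane, round-robin."""
    lanes = [[] for _ in range(LANES)]
    for i, block in enumerate(blocks):
        lanes[i % LANES].append(block)
    return lanes

stream = list(range(12))   # 12 blocks of one file transfer
lanes = stripe(stream)
assert all(lanes)          # no lane sits idle for a single stream
print(lanes)               # -> [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
```

Because the striping happens below layer 2, the receiver's PCS reassembles the lanes back into one stream, which is why a lone transfer can reach the full 40Gb rate, unlike an LACP bundle.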
 