NFS + 10gb + Bonding?


kspare

Guru
Joined
Feb 19, 2015
Messages
508
We've hit an interesting problem... a minor problem...

When migrating data off our servers we're hitting 85-95% utilization on our 10Gb link.

We don't want to use iSCSI and want to keep NFS.

What have people done, or what are you using, to bond two links together?

I've read about mixed results and I'm not sure if it's worth doing.
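
Worth noting on the "mixed results": LACP-style bonding hashes each TCP flow onto a single member link, so one NFS client talking to one storage IP can never exceed one link's speed no matter how many ports are in the lagg; only traffic from many clients spreads across the links. A minimal Python sketch of that hash policy (interface names and addresses are purely illustrative, this is not FreeBSD's actual lagg code):

    # Minimal sketch of L3/L4 hash-based link selection, as used by
    # LACP/loadbalance lagg modes. Illustrative only; names are made up.
    import hashlib

    LINKS = ["lagg_port0", "lagg_port1"]   # two bonded 10Gb member ports

    def pick_link(src_ip, dst_ip, src_port, dst_port):
        """Hash a flow's 4-tuple onto one member link."""
        key = f"{src_ip}-{dst_ip}-{src_port}-{dst_port}".encode()
        return LINKS[hashlib.sha1(key).digest()[0] % len(LINKS)]

    # One ESXi host mounting NFS from one storage IP is a single flow,
    # so it always lands on the same member link:
    print(pick_link("10.0.0.11", "10.0.0.2", 713, 2049))
    print(pick_link("10.0.0.11", "10.0.0.2", 713, 2049))   # same link again

    # Twelve hosts (different source IPs) spread across both links:
    for n in range(11, 23):
        print(pick_link(f"10.0.0.{n}", "10.0.0.2", 713, 2049))

That's why a single storage-evacuation stream tends to stay pinned to roughly one link's worth of bandwidth even with two bonded ports.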
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Chelsio makes a nice 40Gbps card, the T580, which shares a driver with the T520 so it "probably works great" (no warranties or guarantees implied).

Get a few of those and then a Mellanox SX1012. Then I'll be jealous. :smile:
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
lol, I think the same thing when people ask this about gig connections... just get 10Gb!

It's not a regular event, so it's hard to justify a budget. I was just wondering what people with multiple gig links and NFS do for speed?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Can you clarify your situation a little more?

You're running NFS and nearly hitting 100% usage on your 10Gb links and you're looking for options to rectify the issue?

How many NFS clients do you have? If you've got fewer than 10, your options are more limited than if you have hundreds.
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
12 ESXi 6 hosts. It's only an issue if we evacuate the storage to do updates. Like I said, a minor problem, but I'm just curious what options are available other than going to 40Gb.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Well, 40Gb is an option.

You can also set up different subnets for different ESXi hosts and have multiple 10Gb LANs. Totally doable, but in the grand scheme of things I wouldn't worry too much about hitting 85-95% only when evacuating the storage.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
lol, I think the same thing when people ask this about gig connections... just get 10Gb!

It's not a regular event, so it's hard to justify a budget. I was just wondering what people with multiple gig links and NFS do for speed?

The term @cyberjock was looking for is "network segmentation," which is the normal answer, but may introduce additional issues in terms of network design. If your switchgear supports basic layer 3 functionality, that may be the way to go though.
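
To make the segmentation idea concrete: the usual pattern is to split the ESXi hosts across two or more storage subnets, each served by its own 10Gb interface on the NFS server, so aggregate bandwidth scales with the number of subnets. A rough Python sketch of such a plan, with all host names, subnets, and addresses hypothetical:

    # Rough sketch of network segmentation for NFS storage traffic.
    # All host names, subnets, and addresses below are hypothetical.

    ESXI_HOSTS = [f"esxi{n:02d}" for n in range(1, 13)]         # 12 hosts
    STORAGE_SUBNETS = ["10.10.1.0/24", "10.10.2.0/24"]          # one per 10Gb NIC
    NFS_SERVER_IP = {"10.10.1.0/24": "10.10.1.2",               # filer's address
                     "10.10.2.0/24": "10.10.2.2"}               # on each subnet

    def assign_subnets(hosts, subnets):
        """Round-robin the ESXi hosts across the storage subnets."""
        plan = {}
        for i, host in enumerate(hosts):
            subnet = subnets[i % len(subnets)]
            plan[host] = (subnet, NFS_SERVER_IP[subnet])
        return plan

    for host, (subnet, target) in assign_subnets(ESXI_HOSTS, STORAGE_SUBNETS).items():
        print(f"{host}: mount the NFS datastore from {target} ({subnet})")

Each subnet then tops out at its own 10Gb, at the cost of extra addressing and, if the subnets need to reach each other, the layer 3 switching mentioned above.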
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
40GbE is the way to go. Get an Extreme Networks Summit X770 and some Chelsio T580 cards. Our Extreme Networks X670V-48X has 4 40GbE QSFP+ ports, but the X770 is the next upgrade. :)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
40GbE is the way to go. Get an Extreme Networks Summit X770 and some Chelsio T580 cards. Our Extreme Networks X670V-48X has 4 40GbE QSFP+ ports, but the X770 is the next upgrade. :)

All the Extreme Networks stuff is just unrealistically expensive. A lot of the Dell Networking stuff is based on Force10 technology and is available rather cheaply on eBay...

There's a Force10 S4810 with 48x SFP+ and four QSFP+ for only $1800 right now...
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
All the Extreme Networks stuff is just unrealistically expensive. A lot of the Dell Networking stuff is based on Force10 technology and is available rather cheaply on eBay...

There's a Force10 S4810 with 48x SFP+ and four QSFP+ for only $1800 right now...
I would buy off eBay all the time if I could, but we have contracted vendors and procurement rules we have to follow.

I had to get director-level approval to buy three 10GbE NICs off Amazon instead of the contracted vendor to save $1400. It's completely asinine!

Like the transceivers I had to buy for $1400 each when the exact same brand-new Finisar was $150. Drives me nuts :mad:
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I would buy off eBay all the time if I could, but we have contracted vendors and procurement rules we have to follow.

I had to get director-level approval to buy three 10GbE NICs off Amazon instead of the contracted vendor to save $1400. It's completely asinine!

Like the transceivers I had to buy for $1400 each when the exact same brand-new Finisar was $150. Drives me nuts :mad:

101 reasons I love working for my boss.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
@jgreco I got a couple of transceivers from www.fs.com to test and they have been stable. For the Chelsio card I tried the Finisar-compatible transceiver http://www.fs.com/finisar-ftlx8571d3bcl-p-14945.html with the Extreme Networks 10301-compatible one for my switch. I haven't seen any packet drops or latency increases, so now I have to convince management that clone transceivers are not going to void our support contracts.

Thought I'd mention it since their prices are better than the used market.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
For the Chelsio card I tried the Finisar-compatible transceiver http://www.fs.com/finisar-ftlx8571d3bcl-p-14945.html with the Extreme Networks 10301-compatible one for my switch. I haven't seen any packet drops or latency increases

Well you knew that'd be the case, right, since they're probably even the same exact device the vendors buy, reprogram, and relabel...

so now I have to convince management that clone transceivers are not going to void our support contracts.

Heh.

Thought I'd mention it since their prices are better than the used market.

This would have been better placed in the 10G primer thread. Posting it here isn't likely to help too many people. If you post a followup there, I'll see if I can edit/link it from the head post where we talk about cards.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
This would have been better placed in the 10G primer thread. Posting it here isn't likely to help too many people. If you post a followup there, I'll see if I can edit/link it from the head post where we talk about cards.
Okay, posted over there. Thanks!
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I played around with load balancing on my new server. With one card we hit about 85-90% link utilization; with load balancing, all we could hit was 50% on each card. Pretty decent, but not worth the effort of dealing with load balancing.
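
A quick back-of-envelope check on those numbers, taking the utilization figures above at face value: 50% on each of two 10Gb links is about 10Gbps aggregate, only marginally more than 85-90% of a single link, which hints the ceiling is somewhere other than the NIC.

    # Back-of-envelope comparison using the figures quoted above.
    LINK_GBPS = 10.0

    single_link = 0.875 * LINK_GBPS        # ~85-90% of one 10Gb link
    two_links   = 2 * 0.50 * LINK_GBPS     # ~50% on each of two links

    print(f"one link  : ~{single_link:.1f} Gbps")
    print(f"two links : ~{two_links:.1f} Gbps")
    # The aggregate barely moves, which suggests the limit is the source
    # (disks, CPU, or a single migration stream) rather than the wire.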
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Seems odd that you'd be hitting just that magic point. You're sure there's no other place in your network that you could be hitting a 10G bottleneck? Inter-switch trunks, etc?
 

kspare

Guru
Joined
Feb 19, 2015
Messages
508
I don't think so.

I tested it by migrating all my terminal servers off of the storage to 10 separate hosts.

I think by the time it gets to be a problem we'll just upgrade to 40Gb or add other storage servers.

As it is, with all our hosts running, hourly ZFS rsync jobs use more bandwidth...

In an emergency migration, hitting 90% usage is nothing to complain about... I'm not very motivated to look into this any more, lol.
 