Jumbo Frames MTU on lagg

Status
Not open for further replies.

demonition

Dabbler
Joined
Jan 4, 2018
Messages
10
Hi

Should it be possible to just add the 'mtu 9000' option to an aggregated/lagg NIC for a 10GbE network? I've seen somewhere on the forums that you have to add it to the individual NICs before creating the lagg.

I've tried it, but then the NAS doesn't respond on the network and I can't log in (although I can through wireless, weirdly). If I remove the option it works fine, but defaults to 1500.

FreeNAS 9.10
Q30 Storinator - 45 Drives
Thanks

Dave
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Using jumbo frames is not advisable, especially in the situation you describe.

The support for jumbo comes from the underlying ethernet device drivers, so, yes, the driver needs to be configured for it. I would suggest that you set the underlying interfaces for the largest supported MTU, *not* 9000.

Layering LAGG on top of that adds some complexity and the virtual interface also needs to be configured for jumbo (here you do want to use 9000), but in doing so you're creating a situation where you are exercising poorly-tested codepaths in unusual configurations, which is not a good idea.
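For anyone attempting it anyway, the ordering looks roughly like this as an rc.conf sketch (interface names `ix0`/`ix1` and the 9216 driver maximum are assumptions; check your own driver's limit with ifconfig, and on FreeNAS you would set this through the GUI rather than editing rc.conf directly):

```shell
# Hypothetical FreeBSD /etc/rc.conf fragment -- interface names and the
# driver's true maximum MTU will differ on your hardware.
ifconfig_ix0="up mtu 9216"          # member NICs first, at the driver's max
ifconfig_ix1="up mtu 9216"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport ix0 laggport ix1 mtu 9000 DHCP"
```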

It's quite possible that using LAGG in this configuration will result in reduced performance, so unless you are trying to accomplish something like failover between two switches, it may not be worth it. Both LAGG and jumbo frames are areas of networking that are fraught with peril, and you may find yourself spending lots of time debugging arcane issues.
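As an aside on why jumbo frames tempt people at all: the win is purely per-packet overhead. A back-of-envelope sketch (the 40-byte figure assumes plain IPv4+TCP headers with no options):

```shell
# Packets needed to move 1 GiB of TCP payload at each MTU.
# Fewer packets means fewer per-packet header and interrupt costs.
for mtu in 1500 9000; do
  payload=$((mtu - 40))                              # strip IPv4 + TCP headers
  pkts=$(( (1073741824 + payload - 1) / payload ))   # ceiling division
  echo "MTU $mtu: $pkts packets"
done
```

Roughly a 6x reduction in packet count, which mattered far more on older hardware than it does with modern NICs doing TSO/LRO offload.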
 

demonition

Dabbler
Joined
Jan 4, 2018
Messages
10
Thanks for the reply!
It's for video editing and the manufacturer suggested using jumbo frames, but maybe, as you say, it's not worth the time dealing with the issues that might arise.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
When you use jumbo frames, you're already exercising codepaths that aren't as well-used and have some caveats, such as limits on the number of jumbo mbufs in the system; see discussions in threads like

https://lists.freebsd.org/pipermail/freebsd-net/2013-March/034833.html

If you enjoy debugging mbuf issues and mysterious hangs, well, it's fun stuff. But the real gotcha is that layering a virtual device like lagg on top of a high performance physical device may actually incur a significant penalty.
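If you do run jumbo frames and hit mbuf exhaustion, the relevant knobs live in FreeBSD's boot-time tunables. A sketch (the values here are purely illustrative, not recommendations; size them to your RAM and NIC count):

```shell
# Illustrative /boot/loader.conf fragment -- values are made up for the example
kern.ipc.nmbjumbo9="131072"     # cap on 9k jumbo clusters
kern.ipc.nmbclusters="262144"   # cap on ordinary 2k clusters

# At runtime, watch "netstat -m" for lines like
# "requests for jumbo clusters denied" creeping upward.
```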

With modern hardware, you are quite possibly better off just going with two separate 10G networks and putting an interface on each one, and running half your clients on each. It will certainly be easier to set up and probably offer better performance.
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
Hi Dave, I think we were actually the first (3+ years ago) to use the Q30 for 2K/4K raw video editing streams. Alan Hillier, Brett Kelly, and the crew at 45Drives came up with a bunch of kernel tunables and 10Gb tweaks for us. We started at ~80-120MB/s reads and maybe 200MB/s writes. After a lot of work, we peaked at 750/920MB/s on an empty Q30. Over time, with the NAS 60% full, I think we were getting 500/500, maybe in the 600s. All on FreeNAS 9 through 11.04.

The performance has been rock steady. We were able to have 5-6 editors working simultaneously without problems. At some points, there were probably 7-10 editors working off it at the same time (not all doing 2K/4K), mostly in Adobe Premiere.

Here's our current performance with the Q30 at 80% full

The hardware specs for our Q30 are in the signature field below this post

[UPDATE]

It seems it took the Q30 and Netgear XS728T 10Gb switch a few minutes to settle in at the mtu 9000 setting. We are now back to previous performance for a NAS. Here are the numbers from before the jumbo tweak and after it.

And, that is with an external drive currently writing to the NAS
 

demonition

Dabbler
Joined
Jan 4, 2018
Messages
10
VictorR said:
[UPDATE]

It seems it took the Q30 and Netgear XS728T 10Gb switch a few minutes to settle in at the mtu 9000 setting. We are now back to previous performance for a NAS. Here are the numbers from before the jumbo tweak and after it.

And, that is with an external drive currently writing to the NAS

Thanks VictorR. I see you're using a SAS HBA card; we're using 10Gb over Ethernet. Do you create a LAGG on the device? Right out of the box on a brand new iMac I was getting 900MB/s read and write! It has a Sonnet Thunderbolt 3 adaptor.
Thanks
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
Dammit! Speeds are slow again....

We originally had Rocket 750 cards. But, one was intermittently erratic/bad, took a month or more to figure that out, way back when. Of course, that happened right in the middle of deliverables for a pilot show. Everybody was real happy about that one.

Our LAN is very similar to yours, might even be the same. The Q30 is connected to a Netgear XS728T 10Gb switch. We're an all-Mac office. So, clients are connecting via Sonnet Twin 10G adaptors via Thunderbolt.

Now that I think about it, I should go around the office and update the Sonnet drivers. That probably hasn't been done in a year or so.
 

demonition

Dabbler
Joined
Jan 4, 2018
Messages
10
It's only on the latest iMac that I get that. Most of them are at around 500-600MB/s and one at 700MB/s, so no idea really! The other Macs are all on ATTO adaptors, which haven't been totally stable. Yeah, XS716T + XS712T here.
Yes, we have Rocket 750 cards; one was DOA. They're a bit sensitive...
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
Ryan from 45Drives got everything dialed in today (they have the best tech support of any company I've used)

For anyone else who may have this problem with Macs and NFS: OS X requires a little tuning to get performance on NFS. 45Drives has a great blog post, "How to Tune a NAS for Direct-from-Server Editing of 5K Video"; see "Example 2: Mac OSX Client, NFS (Final Cut Pro X Support)".

You need to use Terminal to add these two lines to /etc/nfs.conf via the command "sudo nano /etc/nfs.conf"

nfs.client.mount.options=nfssvers=3,tcp,async,locallocks,rw,rdirplus,rwsize=65536
nfs.client.allow_async=1

And, a reboot of the Mac client
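If you'd rather script the edit than use nano, something like this works. (Shown against a temp file as a stand-in for /etc/nfs.conf so it's safe to try; on a real client, drop the mktemp and use sudo with the real path.)

```shell
conf=$(mktemp)   # stand-in for /etc/nfs.conf; use the real path on a client
printf '%s\n' \
  'nfs.client.mount.options=nfssvers=3,tcp,async,locallocks,rw,rdirplus,rwsize=65536' \
  'nfs.client.allow_async=1' >> "$conf"
grep -c '^nfs\.client' "$conf"   # both lines present -> prints 2
```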

We went from ~110MB/s reads and 380MB/s writes to 516MB/s and 530MB/s.
 

c32767a

Patron
Joined
Dec 13, 2012
Messages
371
VictorR said:
Ryan from 45Drives got everything dialed in today (they have the best tech support of any company I've used)

For anyone else who may have this problem with Macs and NFS: OS X requires a little tuning to get performance on NFS. 45Drives has a great blog post, "How to Tune a NAS for Direct-from-Server Editing of 5K Video"; see "Example 2: Mac OSX Client, NFS (Final Cut Pro X Support)".

You need to use Terminal to add these two lines to /etc/nfs.conf via the command "sudo nano /etc/nfs.conf"

nfs.client.mount.options=nfssvers=3,tcp,async,locallocks,rw,rdirplus,rwsize=65536
nfs.client.allow_async=1

And, a reboot of the Mac client

We went from ~110MB/s reads and 380MB/s writes to 516MB/s and 530MB/s.

If I may make two public service comments about those mount options:

First, locallocks causes NFS to handle locks locally on the client instead of via the lock manager on the NFS server. If you click through the link on the 45Drives site to the EMC article they used, the EMC article calls this out at the end. In particular, locks become local to a specific workstation, so two different users on two different workstations could open a file FCPX thinks is "locked", which could cause unexpected results if those two users make changes at the same time. Normal NFS behavior would be for the NFS server to manage locks for all clients simultaneously.

Second, "async" and "nfs.client.allow_async" disable synchronous writes. This is another case where you end up sacrificing some amount of data integrity for additional speed. You can search for any "ESXi performance" topics on this site for details. In short, with async enabled, the NFS client and server will assume data has been written as soon as they issue the write, rather than waiting for confirmation from the storage system. The level of risk you are assuming depends on how reliable your storage system is and on the impact of a bad write on your data.

I'm not sure how to fix the locallocks issue, as that is FCPX-specific. The async issue has a solution: look for any of the many discussions on how FreeNAS users have solved ESXi performance issues. They typically involve a very high-IOPS SSD acting as a SLOG.
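For reference, the server-side version of that tradeoff is a couple of ZFS commands (pool, dataset, and device names below are made up for illustration):

```shell
# Force sync semantics on the dataset, then absorb the latency cost with a
# fast dedicated log device. "tank/video" and "nvd0" are hypothetical names.
zfs set sync=always tank/video   # honor every write as synchronous
zpool add tank log nvd0          # add a high-IOPS SSD as the SLOG
```

With a good SLOG in place you get back most of the async speed without telling the client to lie about write completion.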
 

VictorR

Contributor
Joined
Dec 9, 2015
Messages
143
Thanks for the info, I'll pass it along to the 45Drives team.
We don't use Final Cut Pro; we're almost exclusively an Adobe Premiere production house (a little Avid). But I am sure the file locking issue still applies.

I remember looking into the SSD as SLOG option a few years ago, when we set up this NAS.
 