SOLVED [BUG] IRQ and network speed problem

Status
Not open for further replies.

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
Most O/Ses allow you to preallocate some number of network buffers; Windows allows this with Intel drivers, and I would expect the same of BSD. Preallocation eats up memory, but the problem would go away.
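The closest FreeBSD analogue to the preallocation described above is raising the caps on the mbuf cluster zones. A minimal sketch, assuming the stock `kern.ipc.*` tunables; the numeric values are illustrative only and should be sized to your own RAM and traffic:

```shell
# Raise the caps on mbuf clusters so the kernel can keep more receive
# buffers around instead of allocating them under memory pressure.
# (Values here are examples, not recommendations.)
sysctl kern.ipc.nmbclusters=262144   # 2 KB clusters (standard frames)
sysctl kern.ipc.nmbjumbo9=65536      # 9 KB clusters (jumbo frames, MTU 9000)

# To persist across reboots, add the equivalent lines to /boot/loader.conf:
#   kern.ipc.nmbclusters="262144"
#   kern.ipc.nmbjumbo9="65536"
```

Note these set upper limits rather than true preallocation; the zones are still filled on demand.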
 

BERKUT

Explorer
Joined
Sep 22, 2015
Messages
70
Most O/Ses allow you to preallocate some number of network buffers; Windows allows this with Intel drivers, and I would expect the same of BSD. Preallocation eats up memory, but the problem would go away.
Yeah, and as you can read from the bug report, the problem appeared only in our configuration; other people probably just don't have such a heavy load.
Anyway, we tested one of the proposed solutions on a machine where we had the problem: we set the MTU to 1500 and have not seen the problem since.
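The workaround above can be applied like this on FreeBSD; the interface name `em0` is an example and should be replaced with the NIC that was running jumbo frames:

```shell
# Drop the MTU back to the standard 1500 bytes immediately:
ifconfig em0 mtu 1500

# Persist it across reboots in /etc/rc.conf, e.g.:
#   ifconfig_em0="DHCP mtu 1500"

# Verify the change took effect:
ifconfig em0 | grep mtu
```

This avoids the 9 KB jumbo clusters entirely, which is why it sidesteps the allocation problem discussed below.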
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
Yeah, and as you can read from the bug report, the problem appeared only in our configuration; other people probably just don't have such a heavy load.
Anyway, we tested one of the proposed solutions on a machine where we had the problem: we set the MTU to 1500 and have not seen the problem since.

This is an O/S problem. Most operating systems have a preallocation amount and then an aging formula to recover buffers that are not being used. I am surprised BSD has not provided a fix for this; other O/Ses I have worked on fixed it 10-15 years ago. Look-aside lists are quite common. Not a jumbo-packet bug, but an O/S not keeping up with the times...
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Instead of preallocation (which I guess requires manual tuning), FreeBSD implements a mechanism that caches kernel memory allocations for things like these network buffers, which I suppose solved the problem well enough for typical server usage when this networking code was written. But here the likely problem is that ZFS makes far heavier use of kernel-space memory allocations than anything before it, so it wipes those caches too quickly for them to stay efficient.
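The caching mechanism described above is FreeBSD's UMA zone allocator, and its per-zone state can be observed directly. A diagnostic sketch (FreeBSD-only; the `grep` patterns are examples of zones worth watching):

```shell
# Print the header, then the UMA zones backing mbufs and clusters.
# A nonzero FAIL column, or FREE counts that churn rapidly, suggests
# the caches are being emptied faster than they can stay warm.
vmstat -z | head -1
vmstat -z | grep -E 'mbuf|cluster'

# ZFS's own heavy kernel allocations show up as zones here too:
vmstat -z | grep -i zio
```

Comparing these counters before and after heavy ZFS activity would show whether the network-buffer caches are indeed being wiped.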

This hasn't been my closest area of the kernel lately, but I'd blame the NIC hardware or driver for using contiguous physical memory allocations for receive buffers while doing perfectly fine with mbuf chains on the transmit path. I don't know whether this hardware can work differently, but it would be nice if it could. As I mentioned on the ticket, I haven't seen how the newly rewritten Intel 1G NIC driver in the FreeBSD HEAD branch handles this, but I can't find the old cluster allocation code there any more, so hopefully this specific case is going to improve.
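Whether the contiguous receive-buffer allocations described above are actually failing can be checked from the mbuf statistics. A diagnostic sketch (FreeBSD-only):

```shell
# Summarize mbuf and cluster usage, including allocation failures:
netstat -m

# In the output, look for lines such as
#   "requests for mbufs denied" and
#   "requests for jumbo clusters denied".
# Nonzero denials for the 9k jumbo clusters would point at the
# contiguous-allocation problem on the receive path.
```

If the denial counters climb only while jumbo frames are enabled, that corroborates the MTU-1500 workaround reported earlier in the thread.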
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
This hasn't been my closest area of the kernel lately, but I'd blame the NIC hardware or driver for using contiguous physical memory allocations for receive buffers while doing perfectly fine with mbuf chains on the transmit path. I don't know whether this hardware can work differently, but it would be nice if it could. As I mentioned on the ticket, I haven't seen how the newly rewritten Intel 1G NIC driver in the FreeBSD HEAD branch handles this, but I can't find the old cluster allocation code there any more, so hopefully this specific case is going to improve.
Is that work done by the Intel guys?
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
Is that work done by the Intel guys?
AFAIK the original 1G Intel NIC drivers were developed and supported by Intel, but the recent redesign in HEAD was done by a group of FreeBSD committers trying to develop a unified NIC driver framework.
 