send_packet: No buffer space available

Status
Not open for further replies.

Beholder101

Dabbler
Joined
Feb 21, 2016
Messages
14
After reading many posts over the years and taking in a wealth of knowledge, I'm now in need of your direct support:

I'm running a FreeNAS 9.3 unit, assembled by iXsystems, in our hosting center, which has suddenly begun reporting the following message over and over again:

Feb 22 00:01:12 XS-2 dhclient[6241]: send_packet: No buffer space available
Feb 22 00:01:18 XS-2 dhclient[6241]: send_packet: No buffer space available
Feb 22 00:01:30 XS-2 dhclient[6241]: send_packet: No buffer space available

This storage is used mainly for NFS shares and has only 5 datasets defined. Traffic is low, averaging less than 10Mbit at the moment on the 10Gb interface. I can ping www.freenas.org and internal servers on all active segments, and the unit itself can be pinged, so traffic is flowing. Almost all traffic flows over the 10Gb interface.

The dhclient (DHCP client) reporting the error seems odd, as none of the interfaces I use rely on DHCP. There is NO DHCP server in either network.

The error seems to originate from overflowing buffers on the network interfaces, specifically the TCP buffers, and seems to be solved by increasing:

net.inet.tcp.recvbuf_max
net.inet.tcp.sendbuf_max

But I have not yet experimented with this, as the machine is in semi-production, and because I doubt the buffers are really the cause, given the low volume of traffic.

Can any of you confirm whether increasing these settings will actually help, and what values would be best?
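For reference, the current maxima can be inspected without touching anything; a minimal sketch, assuming root shell access on the FreeNAS box (the example values in the comments are illustrative, not taken from this machine):

```shell
# Read-only: show the current TCP socket-buffer maxima.
# These are standard FreeBSD sysctl names; safe on a production box.
sysctl net.inet.tcp.recvbuf_max net.inet.tcp.sendbuf_max

# Raising them would look like this (bytes) -- but as the thread
# concludes below, these tunables are unrelated to the dhclient error:
# sysctl net.inet.tcp.recvbuf_max=4194304
# sysctl net.inet.tcp.sendbuf_max=4194304
```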

Services enabled: NFS, SMART, SNMP and SSH.

The unit has several interfaces:
igb0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
ether 0c:c4:7a:20:47:68
nd6 options=9<PERFORMNUD,IFDISABLED>
media: Ethernet autoselect
status: no carrier
igb1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
ether 0c:c4:7a:20:47:69
inet 10.10.12.1 netmask 0xffffff00 broadcast 10.10.12.255
nd6 options=9<PERFORMNUD,IFDISABLED>
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
igb2: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
ether 0c:c4:7a:20:47:6a
inet 0.0.0.0 netmask 0xff000000 broadcast 255.255.255.255
nd6 options=9<PERFORMNUD,IFDISABLED>
media: Ethernet autoselect
status: no carrier
igb3: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=403bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,VLAN_HWTSO>
ether 0c:c4:7a:20:47:6b
inet 172.28.0.35 netmask 0xfffffc00 broadcast 172.28.3.255
nd6 options=9<PERFORMNUD,IFDISABLED>
media: Ethernet autoselect (1000baseT <full-duplex>)
status: active
ipfw0: flags=8801<UP,SIMPLEX,MULTICAST> metric 0 mtu 65536
nd6 options=9<PERFORMNUD,IFDISABLED>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x8
inet 127.0.0.1 netmask 0xff000000
nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
cxl0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
ether 00:07:43:34:ac:e0
nd6 options=9<PERFORMNUD,IFDISABLED>
media: Ethernet none
status: no carrier
cxl1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
ether 00:07:43:34:ac:e8
inet 172.4.0.35 netmask 0xfffffc00 broadcast 172.4.3.255
nd6 options=9<PERFORMNUD,IFDISABLED>
media: Ethernet 10Gbase-SR <full-duplex>
status: active


igb0, igb2 and cxl0 are not connected to any cable. igb0 and igb2 are not even visible in the FreeNAS GUI.

~# netstat -m
65634/180876/246510 mbufs in use (current/cache/total)
57294/2548/59842/8134666 mbuf clusters in use (current/cache/total/max)
57294/1456 mbuf+clusters out of packet secondary zone in use (current/cache)
1216/7297/8513/4067333 4k (page size) jumbo clusters in use (current/cache/total/max)
16317/167131/183448/1205135 9k jumbo clusters in use (current/cache/total/max)
128/0/128/677888 16k jumbo clusters in use (current/cache/total/max)
284761K/1583682K/1868443K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for mbufs delayed (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters delayed (4k/9k/16k)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
522 requests for I/O initiated by sendfile
0 calls to protocol drain routines
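The all-zero "denied"/"delayed" lines above already argue against genuine mbuf exhaustion. A quick way to keep an eye on those counters under load (a sketch; interval is arbitrary):

```shell
# Poll netstat every 5 seconds and print only the failure counters.
# Non-zero "denied" counts would point at real buffer exhaustion;
# in the output above they stay at 0/0/0.
while sleep 5; do
    netstat -m | grep -E 'denied|delayed'
done
```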
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
My guess is that you have it configured to be running dhclient on those unconnected network interfaces. Disable unused network interfaces and report back what happens.
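One way to check this (a sketch; on FreeBSD, dhclient runs one process per interface, and that interface's name appears on the process command line):

```shell
# List running dhclient instances with their full command lines;
# the interface each one manages is the last argument shown.
pgrep -lf dhclient

# Mark an unused interface administratively down so dhclient stops
# trying to send on it (igb2 matches this thread; adjust to your NICs):
ifconfig igb2 down
```

Note that `ifconfig ... down` does not persist across a reboot; on FreeNAS the interface configuration should be corrected in the GUI as well.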
 

Beholder101

Dabbler
Joined
Feb 21, 2016
Messages
14
That sneaky igb2... So "no carrier" does not mean the interface is completely down.

~# ifconfig cxl0 down
~# ifconfig igb0 down
~# ifconfig igb2 down

Feb 22 04:32:54 XS-2 dhclient[6241]: send_packet: No buffer space available
Feb 22 04:37:38 XS-2 dhclient[6241]: Interface igb2 is down, dhclient exiting
Feb 22 04:37:38 XS-2 dhclient[6189]: connection closed
Feb 22 04:37:38 XS-2 dhclient[6189]: exiting.

I can only guess as to why this has come up only recently, but your suggestion hit the spot! The logs have been clean for the last 10 minutes.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It probably only came up recently because the buffers on FreeNAS are pretty large and a DHCP request is tiny. But since the interface is physically down yet administratively up, the system queues the data for sending. Once the buffer fills, then it cries.
 