Ya, it came back up, and it's snappy. Maybe I should make a task for it to do it every couple of hours.

You shouldn't have to do that. I suspect the issue is with the network interface upon boot up.
I suspect the em(4) driver. Does lspci show you which Ethernet chipset is in use, or is that in dmesg somewhere? It’ll be Intel, of some description.
root@freenas:~ # lspci
00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers (rev 07)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 07)
00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) (rev 07)
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
00:16.3 Serial controller: Intel Corporation Sunrise Point-H KT Redirection (rev 31)
00:17.0 SATA controller: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] (rev 31)
00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
00:1f.7 Non-VGA unclassified device: Intel Corporation Sunrise Point-H Northpeak (rev 31)
02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
root@freenas:~ #
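To confirm which FreeBSD driver actually attached to that I219-LM, the native tools can be cross-checked against the lspci output. This is just a sketch; it assumes the interface came up as em0, which is what the em(4) driver typically names the first port.

```shell
# Cross-check which driver claimed the NIC (em0 is an assumed name; adjust to yours).
pciconf -lv | grep -A 4 '^em0'   # PCI device info keyed by the attached driver
sysctl dev.em.0.%desc            # the driver's own description of the device
dmesg | grep -i em0              # attach messages from boot
```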
If that's the case, would it work fine for 4+ years and then all of a sudden have problems?
man 4 em

does not list it as officially supported, and it's not part of igb(4) either. The hardware recommendations guide says the I219-LM is officially supported by FreeBSD, so I don't know what to think.

What is em(4)?
it started having problems when i went from 11.1-U7 to 11.2-U8.

You said the issues started when you upgraded to 11.2 from 11.1, or did I misunderstand that?
my wife works from home, so i have to wait an hour or so, but i'm going to swap out the AT&T gateway first.

Fine to go to 11.3, just do not update feature flags. As long as you leave the pool the way it is, you can always go back. Once you upgrade feature flags, the pool becomes read-only in 11.2.
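For reference, you can see the state of the pool's feature flags without committing to anything. A sketch, where "tank" is a placeholder pool name:

```shell
# List feature-flag state without changing anything ("tank" is a placeholder).
zpool get all tank | grep feature@

# With no pool argument, "zpool upgrade" only REPORTS what could be upgraded.
# Running "zpool upgrade tank" would enable the new flags and break 11.2 compatibility.
zpool upgrade
```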
i just finished testing my 6 WD Easystores for a 6 × 8TB space upgrade.
Nice. I'm curious, what was in those? Are these still HGST He8 rebadged, or something else?
Did you try the obvious - check "disable hardware acceleration" for the interface in question? Just a quick shot from the hip, sorry if this is already covered, I did not reread the whole thread thoroughly.

Remind me where the check box is again.
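The same offloads can also be toggled from the command line for a quick test. A sketch, assuming the interface is em0; these changes do not persist across reboots:

```shell
# Temporarily disable common hardware offloads on em0 (assumed name) to test stability.
ifconfig em0 -rxcsum -txcsum -tso -lro
ifconfig em0 | grep options   # confirm the offload flags are now cleared
```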
i do have "Enable autotune" checked under System > Advanced.
There were 20 or so things in the tunables "set by autotune". Let me reboot and see if that removes them.

Aha! He cried.
Turn that off, like, post haste. It does more harm than good. Check tunables and see whether it created any in there. I think one can just uncheck that and reboot, and others may have more input.
net.inet.tcp.sendspace                  131072       sysctl  Generated by autotune  yes
vfs.zfs.arc_max                         34650000000  sysctl  Generated by autotune  yes
vfs.zfs.l2arc_headroom                  2            sysctl  Generated by autotune  yes
vfs.zfs.l2arc_noprefetch                0            sysctl  Generated by autotune  yes
vfs.zfs.l2arc_norw                      0            sysctl  Generated by autotune  yes
vfs.zfs.l2arc_write_boost               40000000     sysctl  Generated by autotune  yes
vfs.zfs.l2arc_write_max                 10000000     sysctl  Generated by autotune  yes
vfs.zfs.metaslab.lba_weighting_enabled  1            sysctl  Generated by autotune  yes
vfs.zfs.zfetch.max_distance             33554432     sysctl  Generated by autotune  yes
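The live values of those sysctls can be read back directly to compare against what the tunables page stored. A sketch, using two of the names from the list above:

```shell
# Read the current live values of two of the autotune-generated sysctls.
sysctl net.inet.tcp.sendspace
sysctl vfs.zfs.arc_max
```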
Code:
net.inet.tcp.sendspace

Not sure how that would interact with things. My recommendation is to remove all of those tunables, manually create a

Code:
vfs.zfs.arc_max

if you need additional room for jails and VMs, reboot, and see whether it's stable that way. vfs.zfs.arc_max can be human-readable, so if for example you have 32GiB of RAM, and need 4GiB for the system and jails, you can set it to 28G, and that will take effect on boot.
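The 28G figure is just the byte arithmetic spelled out; a quick sanity check:

```shell
# Sanity-check the arc_max sizing: 32 GiB of RAM minus 4 GiB reserved
# for the system and jails leaves 28 GiB for ARC.
total=$((32 * 1024 * 1024 * 1024))     # 32 GiB in bytes
reserved=$((4 * 1024 * 1024 * 1024))   # 4 GiB in bytes
arc_max=$((total - reserved))
echo "$arc_max"                        # 30064771072 bytes, i.e. the "28G" shorthand
```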
so if it was created by autotune, then i should remove it?