Issue with NICs & VMs on patch to FreeNAS-11.3-U4.1

cadamwil

Explorer
Joined
Sep 6, 2013
Messages
60
Hello, I also posted this in Jails & bhyve. I think this issue might have been caused by a previous tunable change I made to "fix" a similar problem before, but I can't remember where I read about the fix or which tunable it was.

Issue: All NICs appear to be bridged to the FreeNAS host, causing a broadcast storm when my VM's NICs are plugged in. The VM's NICs seem to be bridged as well.

Troubleshooting performed: pinged network resources and noticed dropped pings to the FreeNAS IP and to pfSense (a VM). Unplugged ix3 and the broadcast storm stopped; plugged it back in and the storm resumed. Rebooted FreeNAS; issue persists. Reset network interfaces ix0-ix3; issue persists. Set the FreeNAS IP to 10.0.1.100 instead of 0.0.0.0; issue persists.
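Not part of the original troubleshooting, but a quick way to confirm which NICs FreeBSD has actually enslaved to a bridge. This is a sketch: the bridge name `bridge0` is an assumption, so list the bridges first and substitute the real name.

```shell
# List all bridge interfaces on the box (FreeBSD / FreeNAS shell).
ifconfig -g bridge

# Show the members of a specific bridge; each enslaved NIC appears
# on a "member: ixN ..." line. bridge0 is an assumed name.
ifconfig bridge0

# If a NIC that should belong only to the VM (e.g. ix3) shows up as
# a member, removing it temporarily confirms the bridge is the storm
# path. Note this does not survive a reboot.
ifconfig bridge0 deletem ix3
```

If ix2 and ix3 (the pfSense WAN/LAN ports) appear as members of the same bridge as ix0, that would explain the loop.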

Narrative:
When I rebooted after applying 11.3-U4.1, my FreeNAS server seems to have bridged all of the NICs, even the ones assigned to my pfSense VM, so it is basically creating a broadcast storm. I only want FreeNAS to "talk" on ix0; the VM uses ix2 as WAN and ix3 as LAN. Any ideas as to the cause?
 


cadamwil
Posting this here as well, so hopefully this gets seen.

OK, my suspicion that this is a tunable may be incorrect. There are only two tunables not "generated by autotune":
linux_enable = YES (loader)
hw.vmm.topology.threads_per_core = 2 (loader)
However, I did just notice that hw.vmm.topology.cores_per_package is set to 6, and I recently upgraded to dual 10-core CPUs. Still, the issue only appeared after the upgrade to 11.3-U4.1. Ideas?
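One way to cross-check the tunables against the real hardware, sketched below. `kenv` shows what was actually set at boot; whether the `hw.vmm.topology.*` values are also readable via `sysctl` depends on the FreeBSD build, so the `kenv` grep is the safer check.

```shell
# Show loader tunables that were set at boot (includes Tunables-UI
# entries of type "loader" on FreeNAS).
kenv | grep hw.vmm

# Compare against the real hardware topology.
sysctl hw.ncpu        # total logical CPUs
sysctl hw.model       # CPU model string
```

With dual 10-core hyperthreaded CPUs, hw.ncpu should report 40, which makes cores_per_package=6 obviously stale.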
 

cadamwil
well, I "think" that this was resolved by changing the hw.vmm.topology.cores_per_package to 10, the correct number of cores for my machine now, and it's segregating the NICs correctly.
 

cadamwil
I thought wrong. I had an unscheduled reboot (tip: don't plug a Cisco serial cable into an APC UPS to update the firmware; it turns the UPS off), and after the reboot it's back to broken. Any ideas?
 