[FreeNAS 11.3 U4.1] Windows 10 VM blue screens upon booting with 2 or more cores

dosgeek95

Cadet
Joined
Sep 28, 2020
Messages
1
Specs:

MB: ASRock 990FX Extreme3
CPU: AMD FX 8370 @ 4Ghz
RAM: 32GB RAM
Storage: 256GB Intel M.2 SATA SSD [BOOT], 256GB Sandisk SSD [VM], 160GB Intel SSD [Server Plugins]. 8TB Seagate Barracuda HDD [Data Drive]
Host OS: FreeNAS 11.3 U4.1
Guest OS in question: Windows 10 1903
VM Program: bhyve

So recently I've been trying to install a Windows 10 VM on my FreeNAS server, but whenever I set it to use 2 or more cores, the VM blue screens with a MULTIPROCESSOR_CONFIGURATION_NOT_SUPPORTED error, even after the install completes. It boots just fine when set to 1 core. I'm trying to run dedicated game servers on that VM, and I'd like to give it more cores for better performance.

I've been reading about similar issues everywhere, and I recall this being fixed at one point several years ago, but even after updating my server to the latest version of FreeNAS, I STILL get the BSOD when using multiple cores. I've tried everything I can to resolve this on my own, but to no avail.

So, any help would be greatly appreciated.
 

Evertb1

Guru
Joined
May 31, 2016
Messages
700
The need for a Windows VM was one of the reasons I moved to ESXi on my home lab server. I tried it on FreeNAS, but it never worked satisfactorily, and at the time other forum members confirmed the same experience. There should still be some threads to find on this subject.
 
Last edited:

beaster

Dabbler
Joined
May 17, 2021
Messages
27
There are two things that need to be fixed here to get any version of Windows running in a reasonably stable state.
These issues are not unique to bhyve; you are likely to see them with any virtual hardware that does not have native Windows drivers.

No. 1:

MULTIPROCESSOR_CONFIGURATION_NOT_SUPPORTED is normally a BIOS/CPU-topology issue that bhyve and KVM/QEMU alike struggle with when it comes to exposing multi-CPU support to the guest VM correctly.

As an example:
Windows 10 64-bit supports up to 2 sockets (physical CPUs), with up to 256 cores in total. This is an important distinction, since most workstations have a single physical CPU with 4-8 cores.

By default, however, bhyve presents each core as a separate physical CPU (socket) to the guest OS, so if you give Windows more than 2 vCPUs it will only see the first two.

To fix this, the following loader tunable must be set in /boot/loader.conf (a reboot is required for it to take effect):

hw.vmm.topology.cores_per_package=4

This permits the guest VM to see 4 cores per CPU socket, so Windows counts them against its 256-core limit rather than its 2-socket limit.
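
On a plain FreeBSD/bhyve host you could apply it from the shell like this (a minimal sketch; the value 4 is only an example and should match the core count you want per virtual socket):

# append the tunable and reboot so the vmm module picks it up (run as root)
echo 'hw.vmm.topology.cores_per_package=4' >> /boot/loader.conf
reboot

# after the reboot, verify the value (it is exposed read-only via sysctl)
sysctl hw.vmm.topology.cores_per_package

On FreeNAS itself, prefer the Tunables screen below so the setting survives upgrades, since the appliance can rewrite /boot/loader.conf.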

In TrueNAS you would set this tunable (type: Loader) using the following UI page:

https://<hostname>/ui/system/tunable

[screenshot: the System → Tunables add form with the tunable's Variable, Value and Type filled in]
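
If you would rather script it, the same loader tunable can be created through the middleware client so it persists like a UI-created one. A hedged sketch; the tunable.create method and its var/value/type fields are my assumption for this release, so verify against your version:

# create a Loader-type tunable via the FreeNAS middleware (run as root on the host)
midclt call tunable.create '{"var": "hw.vmm.topology.cores_per_package", "value": "4", "type": "LOADER", "enabled": true}'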


No. 2: ensure your Windows VM's NIC is the VirtIO NIC, not the E1000 NIC.
The guest-side VirtIO drivers can be downloaded from https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers
Whilst these drivers are built for KVM, they target the same VirtIO device model that bhyve uses for the virtual NIC shared with the guest.
The NIC driver can have a big impact on the stability of the guest VM as you start to push traffic through the host.
In my case the guest was becoming non-responsive once traffic exceeded a sustained 30Mbps for 3-4 minutes.
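
For reference, the NIC choice boils down to which device model bhyve attaches. A hedged sketch of the relevant device slots (the slot number, tap0 and the VM name are placeholders, not values from this thread):

# E1000: full hardware emulation of an Intel NIC; works without extra guest drivers
bhyve ... -s 5,e1000,tap0 ... win10vm

# virtio-net: paravirtualized NIC; needs the VirtIO guest drivers installed, but
# is lighter and more stable under sustained traffic
bhyve ... -s 5,virtio-net,tap0 ... win10vm

In the FreeNAS UI this corresponds to the "Adapter Type" on the VM's NIC device: Intel e82545 (e1000) versus VirtIO.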
 
Last edited:

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
By default, however, bhyve presents each core as a separate physical CPU (socket) to the guest OS, so if you give Windows more than 2 vCPUs it will only see the first two.
You can easily adjust the CPU topology per VM in TrueNAS 12. In the UI. Time to upgrade?

ensure your Windows VM's NIC is the VirtIO NIC, not the E1000 NIC
Of course. And while you are at it you should use VirtIO for the disk instead of AHCI, too.
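
At the bhyve level the disk change looks much like the NIC one; a sketch with a hypothetical slot number and zvol path:

# AHCI: emulated SATA disk; works out of the box but is slower
bhyve ... -s 4,ahci-hd,/dev/zvol/tank/win10vm-disk ... win10vm

# virtio-blk: paravirtualized disk; Windows needs the VirtIO storage driver (viostor)
bhyve ... -s 4,virtio-blk,/dev/zvol/tank/win10vm-disk ... win10vm

Install the viostor driver in the guest before switching the boot disk's mode, or Windows will fail to find its boot volume.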

We run Windows Server 2016 and Windows 10 in production 24x7 on TN 12.
 
