[HOWTO] How-to Boot Linux VMs using UEFI

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
There have been a number of threads where people have failed to boot a Linux VM after the installation phase using the webUI of FreeNAS 11 nightly, finding themselves stuck in the EFI shell and not knowing how to proceed, nor how to avoid this happening each time their VM is started.

It has been suggested elsewhere that this is due to a grub error, or to a grubx64.efi file which is missing or in the wrong place, or that you must create a startup.nsh file or change the boot order in the EFI shell. If your Linux install depends on grub and it has completed successfully, creating the correct EFI system partition (ESP) and grubx64.efi file, then none of these suggestions are correct.

Creating a VM via the webUI in FreeNAS 11 is converted to a bhyve command in the backend, saving you from typing a complex string of commands at the CLI, e.g.:

bhyve -c 4 -m 768M -HAP \
-s 0,hostbridge \
-s 3,ahci-cd,firmware-8.7.1-amd64-netinst.iso \
-s 4,ahci-hd,debianbox.img \
-s 5,virtio-net,tap1 \
-s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600,wait \
-s 31,lpc -l com1,stdio \
-l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd \
debtest

The file /usr/local/share/uefi-firmware/BHYVE_UEFI.fd provides bhyve with the firmware needed to support UEFI guests. It is based on the TianoCore OVMF project, which is also the basis of VirtualBox's virtual machine UEFI support. The bhyve UEFI firmware conforms to the known "default boot behaviour" and looks for the file \EFI\BOOT\bootx64.efi on the EFI system partition of your VM. If it's not present, you end up in the EFI shell.
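If you do land in the EFI shell, you can confirm the file is missing from there. fs0: is usually the mapping for the VM's ESP, though your mapping may differ (the shell lists the available mappings when it starts):

```
Shell> fs0:
FS0:\> ls EFI\BOOT
```

If `ls` reports that the directory or bootx64.efi does not exist, the fix described below applies.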

One simple remedy is to create this \EFI\BOOT\bootx64.efi file in your VM, which is straightforward once your VM has booted.

But how do you boot your VM if you find yourself in the EFI shell in the first place? Just type exit at the shell prompt, then in the EFI menu system navigate to "Boot Maintenance Manager" and select "Boot from file" to locate and select your grubx64.efi file.

As root, cd to the /boot/efi/EFI directory of your VM, create the new BOOT directory, and copy the existing grubx64.efi into it as bootx64.efi. The end result should look like this, using Ubuntu as an example:

root@ubuntu-vm:/boot/efi# tree -L 3 .
.
└── EFI
    ├── BOOT
    │   └── bootx64.efi
    └── ubuntu
        ├── fbx64.efi
        ├── grub.cfg
        ├── grubx64.efi
        ├── mmx64.efi
        └── shimx64.efi

3 directories, 6 files
root@ubuntu-vm:/boot/efi#

The file bootx64.efi is a copy of the grubx64.efi in your VM.
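The copy steps above can be sketched as follows. This demonstration runs in a scratch directory mirroring the VM's /boot/efi layout, so it can be tried safely anywhere; on the real VM you would work directly under /boot/efi as root, and the "ubuntu" directory name is distro-specific:

```shell
# Scratch directory standing in for the VM's ESP mount point /boot/efi
ESP=$(mktemp -d)
mkdir -p "$ESP/EFI/ubuntu"
printf 'grub' > "$ESP/EFI/ubuntu/grubx64.efi"   # stand-in for the real loader

# The actual fix: create EFI/BOOT and copy grubx64.efi in as bootx64.efi
mkdir -p "$ESP/EFI/BOOT"
cp "$ESP/EFI/ubuntu/grubx64.efi" "$ESP/EFI/BOOT/bootx64.efi"
ls "$ESP/EFI/BOOT"
```

On the real VM the last three commands, run with ESP=/boot/efi, are all that is needed.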

NB: If grubx64.efi gets updated, you will need to re-create bootx64.efi.
 
Last edited:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Just for fun I reproduced the boot problem and then followed these steps and everything boots up just fine now. Thanks!
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Thanks for the confirmation.

CentOS is one of the few distros that already has the default EFI file and fallback that the bhyve UEFI firmware expects to be present. For example, in CentOS 7 minimal:

Code:
[root@localhost efi]# pwd
/boot/efi/efi
[root@localhost efi]# cd BOOT
[root@localhost BOOT]# ls -l
total 1340
-rwx------. 1 root root 1296176 Dec 7 2015 BOOTX64.EFI
-rwx------. 1 root root 73240 Dec 7 2015 fallback.efi
 

scrappy

Patron
Joined
Mar 16, 2017
Messages
347
Thank you for the great write up! I will keep this handy as I plan to redo some Linux VMs once FreeNAS 11 is stable.
 

bodriye

Explorer
Joined
Mar 27, 2016
Messages
82
In "Boot from file" I only see "Load File [PCIRoot (0x0)/Pci (0x3,0x0)/Mac(<mac address>,0x1)]" to select, and when I select that the screen goes blank and then it sends me back to the "EFI menu system".

I tried setting the HDD to both UEFI and to VirtIO and got the same result.

I am trying to run a Debian VM I created in Corral.

Also, I get another error message before being dropped to the EFI shell on initial boot: "Boot Failed. EFI Hard Drive"
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
In "Boot from file" I only see "Load File [PCIRoot (0x0)/Pci (0x3,0x0)/Mac(<mac address>,0x1)]" to select, and when I select that the screen goes blank and then it sends me back to the "EFI menu system".

I tried setting the HDD to both UEFI and to VirtIO and got the same result.

I am trying to run a Debian VM I created in Corral.

Also, I get another error message before being dropped to the EFI shell on initial boot: "Boot Failed. EFI Hard Drive"

How did you create the VM in Corral? From a template, or an ISO? Which boot loader did you use, grub or UEFI?

The messages you've seen suggest your Debian VM has no EFI system partition. You could use the latest GParted ISO to check what partitions exist on your VM: just add a CDROM with the GParted ISO to your VM and select UEFI boot.
 

bodriye

Explorer
Joined
Mar 27, 2016
Messages
82
How did you create the VM in Corral? From a template, or an ISO? Which boot loader did you use, grub or UEFI?

The messages you've seen suggest your Debian VM has no EFI system partition. You could use the latest GParted ISO to check what partitions exist on your VM: just add a CDROM with the GParted ISO to your VM and select UEFI boot.

I created VM using debian template and the bootloader was Grub.

In GParted I only see a 10GB ext4 filesystem partition with 2.4GB used.

edit: I am also able to mount the partition and get at the root filesystem with all the files you'd expect. Can I edit the files here to get UEFI boot to work, or do I need an EFI partition? I am able to do
Code:
sudo mount /dev/sda1 /mnt
and get shell access with
Code:
sudo chroot /mnt


edit2:
Would this work?
Code:
apt-get install --reinstall grub-efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=grub
 
Last edited:

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Last edited:

bodriye

Explorer
Joined
Mar 27, 2016
Messages
82
Your VM will not boot using UEFI unless it has an EFI system partition (ESP). I think the simplest route to re-using your VM in FreeNAS 11 would be to use iohyve from the CLI, assuming you only need text mode.

Otherwise you'd have to convert your Debian VM to UEFI. Here are a couple of refs:

https://tanguy.ortolo.eu/blog/article51/debian-efi
https://blog.getreu.net/projects/legacy-to-uefi-boot/

Well, I have successfully used the GParted bootable ISO to move the original partition 512MiB to the left and create a new EFI partition using fdisk, formatted to FAT32 (the Arch wiki helped with this). I also used gdisk (or fdisk, I think) to change the partition number of the new EFI partition from 2 to 1.

I then mounted sda2 to /mnt and sda1 to /mnt/boot/efi

I also mounted a few more directories:
Code:
mount -t proc none /mnt/proc
mount -o bind /dev /mnt/dev
mount -t sysfs sys /mnt/sys


then I did chroot
Code:
chroot /mnt


Then I used the following commands to install grub (I had to click on the network icon on the GParted ISO desktop):
Code:
apt-get install --reinstall grub-efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=grub


It says it installed successfully, but I still get "Boot Failed. EFI Hard Drive".
When I look at the partition table in gdisk, it shows the EFI partition has the efi and boot flags set.

EDIT FIXED: I did the boot from file and it works!
 
Last edited:

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Glad to know you have it working. Actually, I don't think the ESP has to be the first partition on the disk, but it's the normal convention. Don't know which moon you're on, but if blue string pudding is on the menu, say hello to the Clangers for me.
 

bodriye

Explorer
Joined
Mar 27, 2016
Messages
82
I was unable to boot the image the first time. I had to edit fstab with the correct device name (/dev/sda2), and I also added (/dev/sda1) for the EFI folder. I had to reboot the whole server to get it working.
But now it works flawlessly.
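For reference, the resulting /etc/fstab entries would look something like the sketch below. The device names follow the post; the mount options shown are typical Debian defaults rather than values from this thread, and in practice UUIDs are more robust than device names:

```
/dev/sda2  /          ext4  errors=remount-ro  0  1
/dev/sda1  /boot/efi  vfat  umask=0077         0  1
```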

One More Question:
Do you know how to share files between the host and the vm? I used to use 9p mount but I don't know now...
edit: ended up with nfs and it works fine
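For anyone following the NFS route, the client-side setup on the VM looks something like the sketch below. The host IP and export path are placeholders, not values from this thread, and the Debian/Ubuntu guest needs the nfs-common package installed:

```
# /etc/fstab entry on the VM; 192.168.1.10:/mnt/tank/share is a placeholder export
192.168.1.10:/mnt/tank/share  /mnt/share  nfs  defaults,_netdev  0  0
```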
 
Last edited:

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288

Is there a way to point the startup file in the EFI shell to the correct location? The EFI file update caveat won't be a problem in that case, right?

Read my first post again. The bhyve UEFI firmware has no memory of its own; you'd be dropped into the EFI shell every time you boot your VM. So fix your VM, but be aware that updating grub in your VM may require the fix to be re-applied.
 

microbug

Dabbler
Joined
Dec 14, 2016
Messages
44
Or, if you want to avoid this altogether, you could use CentOS 7 as the guest, which includes the right EFI file and boots as expected.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Or, if you want to avoid this altogether, you could use CentOS 7 as the guest, which includes the right EFI file and boots as expected.

Already mentioned in #3 above. And if you had used CentOS 7, you would never have seen this problem and would probably have had little reason to notice this thread. But why restrict yourself when creating a Linux VM?

You could equally argue that you could just keep a rEFInd ISO attached to your VM every time you want to boot it. I prefer to fix the VM and have a wider choice of which Linux VMs can be created.
 

Bdarley5

Cadet
Joined
Mar 3, 2016
Messages
4
Yeah cool, I had already applied the fix you suggested worked great btw. I guess I was just interested in other options also.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Is there any reason an Ubuntu VM doesn't start until I actually connect via VNC?

Try unchecking "wait" on your VNC device.
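In the raw bhyve command shown in the first post, this corresponds to dropping the "wait" keyword from the framebuffer device, e.g.:

```
-s 29,fbuf,tcp=0.0.0.0:5900,w=800,h=600   # without ",wait" the VM boots without waiting for a VNC client
```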
 
Top