[HOWTO] How-to Boot Linux VMs using UEFI

jazo97

Cadet
Joined
Feb 4, 2020
Messages
1
Brand new to FreeNAS and Linux, and this was excellent: it worked beautifully. Thanks to all who contributed. A long read, but well worth it.
 

KevDog

Patron
Joined
Nov 26, 2016
Messages
462
@KrisBee - I want to thank you for all the work you've put into this thread. I recently accomplished my first-ever UEFI VM creation, for which I used a ZFS-on-root installation of Arch Linux. I haven't installed Ubuntu/Debian for quite some time, so I'm surprised that these distributions haven't figured out this problem -- or maybe it's a bhyve thing.

When setting up Arch, everything is done manually, so you really get a good feel for what is going on, since there isn't anything automated. During setup I created two partitions (partition 1: 512MB as the EFI system partition; partition 2: the remainder of the disk as Solaris Root, given ZFS rather than ext4, etc.). The EFI partition was formatted as FAT32 and, during partitioning setup, mounted at /boot. During the installation, rather than GRUB I used systemd-boot (I think it used to be called gummiboot). With systemd-boot I simply did a:
Code:
bootctl --path=/boot install
, and this process automatically populated /boot/EFI/BOOT/BOOTX64.EFI. I didn't need to copy anything and have never had to copy anything in the console during bootup. systemd-boot isn't GRUB, so it kind of avoids all the problems described above.
(Entire process documented here: https://ramsdenj.com/2016/06/23/arch-linux-on-zfs-part-2-installation.html)
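For anyone following along, systemd-boot also needs a loader entry on the ESP telling it which kernel and root dataset to boot. A minimal sketch, assuming an Arch ZFS-on-root layout; the file name arch.conf and the dataset zroot/ROOT/default are placeholders for your own naming:
Code:
# /boot/loader/entries/arch.conf -- hypothetical entry; point the
# zfs= parameter at your actual root dataset
title   Arch Linux (ZFS)
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options zfs=zroot/ROOT/default rw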


I'm not sure whether, during the installation of Debian/Ubuntu/CentOS and the other Linux systems mentioned here, you can actually choose your bootloader, but systemd-boot seems to avoid a lot of the problems with GRUB. I'm aware @Patrick M. Hausen has mentioned rEFInd as an alternative to GRUB, and I suspect he's onto something, since it might avoid some of the problems described in this thread. I've been through the Ubuntu installation on bare metal many times, and I don't think the automated installer gives you any option other than GRUB; however, I'm sure there is a workaround.

Finally, just a comment. After getting my Arch installation up and running, I of course needed another VM for a different purpose. I'm aware other people in this thread are adept at spinning up new VMs quickly; however, after taking many hours to get the Arch ZFS installation running, it wasn't anything I wanted to repeat, given time constraints. FreeNAS (or honestly any ZFS installation) makes spinning up new VMs rather simple: clone an existing VM with a zfs send/receive, then use the clone as the source for the new VM (in the FreeNAS GUI, point the VM setup at the cloned dataset). The only problem with this method is that the initramfs needs to be regenerated, because the "attached hardware" IDs changed. Once the initramfs is regenerated, the VM simply works. Honestly, it's a huge time saver. With the combination of making new VMs from a template VM and using an alternative bootloader (i.e. not GRUB), it seems like a lot of the problems presented in this thread could be side-stepped. A sketch of that workflow follows.
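The dataset names under tank/vms below are placeholders, and the initramfs command depends on the guest distro:
Code:
# On the FreeNAS host: snapshot the template VM's dataset and
# duplicate it locally (pool/dataset names are hypothetical)
zfs snapshot tank/vms/template@clone-base
zfs send tank/vms/template@clone-base | zfs receive tank/vms/newvm

# Inside the new guest: regenerate the initramfs, since the
# "attached hardware" IDs changed -- pick your distro's command
mkinitcpio -P          # Arch
# update-initramfs -u  # Debian/Ubuntu
# dracut --force       # CentOS/RHEL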
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
It's been a long time, but I got it going; I have no idea how anymore. Mostly with help from here.

I have an Ubuntu VM that's nearly a year old. It has served me so well: no more nasty jails.
 

etegration

Cadet
Joined
Nov 22, 2019
Messages
7
diskdiddler said:
It's been a long time, but I got it going; I have no idea how anymore. Mostly with help from here.

I have an Ubuntu VM that's nearly a year old. It has served me so well: no more nasty jails.

Thanks for coming in to reply. I gave up eventually, ran it on Unraid, and left FreeNAS alone.
 

zamana

Contributor
Joined
Jun 4, 2017
Messages
163
More than 3 years later and this thread is still useful. Thanks!

Just to add (I didn't read all the posts, so I don't know if this was already mentioned): in my case (FreeNAS 11.3-U5 with Debian 10 as the VM), it only worked with the BOOT folder inside the EFI folder:

Code:
root@docker:/boot/efi# tree
.
└── EFI
    ├── BOOT
    │   └── bootx64.efi
    └── debian
        ├── BOOTX64.CSV
        ├── fbx64.efi
        ├── grub.cfg
        ├── grubx64.efi
        ├── mmx64.efi
        └── shimx64.efi

3 directories, 7 files
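For anyone whose ESP lacks that BOOT folder, a sketch of the usual workaround from inside the Debian guest; it assumes the ESP is mounted at /boot/efi and that GRUB is the installed loader:
Code:
# Create the default/fallback path that bhyve's UEFI firmware
# looks for (paths assume the ESP is mounted at /boot/efi)
mkdir -p /boot/efi/EFI/BOOT
cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/bootx64.efi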
 

omcn7

Dabbler
Joined
May 19, 2015
Messages
20
zamana said:
More than 3 years later and this thread is still useful. Thanks!

Just to add (I didn't read all the posts, so I don't know if this was already mentioned): in my case (FreeNAS 11.3-U5 with Debian 10 as the VM), it only worked with the BOOT folder inside the EFI folder:

Code:
root@docker:/boot/efi# tree
.
└── EFI
    ├── BOOT
    │   └── bootx64.efi
    └── debian
        ├── BOOTX64.CSV
        ├── fbx64.efi
        ├── grub.cfg
        ├── grubx64.efi
        ├── mmx64.efi
        └── shimx64.efi

3 directories, 7 files
I am curious whether this works on other Linux/Unix systems. I am pretty sure Arch, RHEL, FreeBSD, etc. each have their own unique tree. If someone better at Googling than I am comes up with a wiki-style list, kindly share. :smile:
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
UEFI boot partitions are supposed to contain a unique directory per bootable operating system. On real hardware you can then pick the loader to boot from the BIOS, which will save your choice for subsequent boots. The standard also defines a fallback/default location for removable media, namely \EFI\BOOT\bootx64.efi.

The "problem" is that bhyve does not yet support the "choose and persist" part of the UEFI boot and only supports the default location. But if you know that it's pretty easy to make it work for any OS.

Or install rEFInd, a multi-OS EFI boot manager, into the default path and let it load whatever comes next.
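A sketch of that rEFInd route from inside a Linux guest: refind-install's --usedefault option installs rEFInd to the removable-media/default path, but the ESP device node /dev/vda1 here is an assumption, so substitute your own EFI partition:
Code:
# Install rEFInd as EFI/BOOT/bootx64.efi so bhyve's firmware finds
# it without persisted NVRAM entries; /dev/vda1 is hypothetical
refind-install --usedefault /dev/vda1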
 