[HOWTO] How-to Boot Linux VMs using UEFI

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,737
JD, the filesystem of the EFI partition is FAT or a derivation thereof. So no symlinks, I fear.

I still recommend installing rEFInd ...
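The usual alternative is simply copying the loader to the UEFI fallback path inside the guest. Roughly like this, assuming a stock Ubuntu/Debian layout with the ESP mounted at /boot/efi:

Code:
# inside the Linux guest, after installation
# (the source directory is EFI/ubuntu on Ubuntu, EFI/debian on Debian)
sudo mkdir -p /boot/efi/EFI/BOOT
sudo cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI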

Patrick
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
So will the release build of FreeNAS 11.2 include all the bhyve improvements put into regular FreeBSD 11.2?

Do they always continue to pull in all the improvements that FreeBSD gets?

Finally, are the FreeBSD developers actively improving bhyve, so that threads like this sound less troublesome?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
It might not be a straight pull, but anything of interest is very likely to end up in FreeNAS sooner rather than later. Keep in mind that FreeNAS doesn't track FreeBSD releases these days; instead it pulls from the stable branch ahead of point releases.
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
I had a quick glance; there's some activity, but not a heap. I guess improvement will be a slow burn.
 

mmontgom

Cadet
Joined
Jan 26, 2018
Messages
8
I installed a Debian 9.4 OS under bhyve on FreeNAS 11.1-U5.

I found that when I used all the VirtIO settings in the VM device setup and kept the disk settings simple in the OS install (no logical volumes, no encryption), it just worked.

When I attempted to use a logical volume for the OS install, the VM would not boot, no matter which of the file system manipulation suggestions from the previous six pages of this topic I tried.
 

soko

Dabbler
Joined
Jul 24, 2012
Messages
18
I've read through this thread from start to finish attempting to get an Ubuntu VM up.

I'm on FreeNAS 11.1-U6, trying to run up ubuntu-16.04.4-desktop-amd64.iso:
1) set up a storage dataset/zvol of 25GB (permissions standard, o:g=root:wheel)
2) created a new VM with four devices: NIC; VNC (port 5901); DISK (type=disk, mode=VirtIO, ZVol=set to the zvol created in (1), Sectorsize=0); CDROM (set to the ..../ubuntu-16.04.4-desktop-amd64.iso file)
3) started the VM; it shows as running
4) connected with VNC Viewer, and I get:
[screenshot: upload_2018-8-31_18-20-37.png]


I type exit, get the BHYVE menu, then select 'Boot Maintenance Manager > Boot from file', which is EMPTY:

[screenshot: upload_2018-8-31_18-22-37.png]


This thread has morphed into many discussions, so apologies if this is not the right place/time to ask. I cannot get the BHYVE menus to present any boot files or directories. I'm missing something, including brain cells. My thinking was to manually mount the zvol and copy the EFI boot files over from the iso.

I've gone to /dev/ssd/ubuntu to figure out how to mount the zvol so I can manually copy the boot files over per the thread, but there is no obvious device to mount. I just need something to point me in the right direction.
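From the earlier pages I gather the idea is roughly the following, assuming the zvol's partition devices show up under /dev/zvol (volmode=geom) and an EFI system partition already exists on it (pool/zvol names below are just placeholders for mine):

Code:
# on the FreeNAS host: hedged sketch; the ...p1 partition device only
# exists once something has actually been installed to the zvol
ls /dev/zvol/ssd/ubuntu*                   # look for a ...p1 (the ESP)
mkdir -p /mnt/efi
mount_msdosfs /dev/zvol/ssd/ubuntup1 /mnt/efi
mkdir -p /mnt/efi/EFI/BOOT
cp /mnt/efi/EFI/ubuntu/grubx64.efi /mnt/efi/EFI/BOOT/BOOTX64.EFI
umount /mnt/efi

But as I said, no such device appears here.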

Thanks Newbie.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@soko Check your iso. Booting from the Ubuntu iso (cdrom device attached to the VM) shouldn't drop you to the EFI shell; it's only post-install that you may end up there.
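An easy check is to compare the image's checksum against the SHA256SUMS file published alongside it on the Ubuntu release page, e.g. on the FreeNAS host:

Code:
# FreeBSD's sha256(1); compare the output with the published SHA256SUMS entry
sha256 ubuntu-16.04.4-desktop-amd64.iso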
 

soko

Dabbler
Joined
Jul 24, 2012
Messages
18
@KrisBee I redownloaded 16.04.5 from the Ubuntu alternative downloads. I think I have success using the new iso instead of 16.04.4; I should have tried that first. Thank you.

I got a GRUB boot menu instead of falling through to the EFI boot menu and have selected 'Install Ubuntu'. The problem is that I've been running a resilver for the last 24 hours for an encrypted disk replacement, so performance has degraded!

tail -f /var/log/middlewared.log

Code:
[2018/09/03 03:12:27] (DEBUG) VMService.run():155 - ====> NIC_ATTACH: re0
[2018/09/03 03:12:27] (DEBUG) VMService.run():231 - Starting bhyve: bhyve -H -w -c 1 -m 4096 -s 0:0,hostbridge -s 31,lpc -l com1,/dev/nmdm1A -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -s 3,e1000,tap0,mac=00:a0:98:7b:b4:ad -s 29,fbuf,vncserver,tcp=0.0.0.0:5901,w=1024,h=768,, -s 30,xhci,tablet -s 4,virtio-blk,/dev/zvol/ssd480enc/ubuntu/ubuntu -s 5,ahci-cd,/mnt/vol/Install_Files/ubuntu/ubuntu-16.04.5-desktop-amd64.iso 1_ubuntu
[2018/09/03 03:12:27] (DEBUG) VMService.run():250 - ubuntu: 02/09/2018 20:12:27 Listening for VNC connections on TCP port 5901

[2018/09/03 03:12:27] (DEBUG) VMService.run():250 - ubuntu: 02/09/2018 20:12:27 Listening for VNC connections on TCP6 port 5901


Connecting with VNC Viewer, I get the following, which looks promising. Once the resilver completes I'll report back.

[screenshot: upload_2018-9-2_20-18-28.png]
 

neubert

Dabbler
Joined
Jun 24, 2011
Messages
26
For comparison, this is what I get (cd-rom removed; only NIC, VNC and Disk attached to the VM):


Code:
[2018/09/04 18:15:40] (DEBUG) VMService.run():155 - ====> NIC_ATTACH: igb0
[2018/09/04 18:15:40] (DEBUG) VMService.run():231 - Starting bhyve: bhyve -H -w -c 4 -m 4096 -s 0:0,hostbridge -s 31,lpc -l com1,/dev/nmdm2A -l bootrom,/usr/local/share/uefi-firmware/BHYVE_UEFI.fd -s 3,e1000,tap0,mac=00:a0:98:7c:b0:ec -s 29,fbuf,vncserver,tcp=192.168.31.12:5902,w=1920,h=1200,, -s 30,xhci,tablet -s 4,ahci-hd,/dev/zvol/volume0/vm-epiphyt-sda,sectorsize=4096 2_epiphyt
[2018/09/04 18:15:40] (DEBUG) VMService.run():244 - ==> Start WEBVNC at port 5802 with pid number 33869
[2018/09/04 18:15:40] (DEBUG) VMService.run():250 - epiphyt: 04/09/2018 20:15:40 Listening for VNC connections on TCP port 5902

[2018/09/04 18:15:40] (DEBUG) VMService.run():250 - epiphyt: 04/09/2018 20:15:40 Listening for VNC connections on TCP6 port 5902
 

neubert

Dabbler
Joined
Jun 24, 2011
Messages
26
I spotted the difference: soko uses the virtio driver instead of ahci. Switching to virtio makes the virtual disk visible in the "boot from file" menu of the Boot Maintenance Manager, and the VM boots properly after manually choosing grubx64.efi. So the issue is related to the ahci setting.
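For reference, the only relevant difference between the two logged command lines is the disk slot:

Code:
# neubert's VM (disk not offered in "boot from file"):
-s 4,ahci-hd,/dev/zvol/volume0/vm-epiphyt-sda,sectorsize=4096
# soko's VM (disk listed; grubx64.efi selectable):
-s 4,virtio-blk,/dev/zvol/ssd480enc/ubuntu/ubuntu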
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
neubert said:
I spotted the difference: soko uses the virtio driver instead of ahci. Switching to virtio makes the virtual disk visible in the "boot from file" menu of the Boot Maintenance Manager, and the VM boots properly after manually choosing grubx64.efi. So the issue is related to the ahci setting.

Correct, this problem surfaced some time ago in the FreeNAS 11.2 betas. Since I habitually use virtio for NIC and HDD devices, I clean forgot the obvious when looking at your posts.
 

hexley

Dabbler
Joined
Sep 8, 2017
Messages
11
I had to do a little searching to find this since it doesn't show up in Debian's example preseed, but you can also do this if you use preseeds:

Code:
# Work around buggy UEFI implementations that do not support boot entries
# correctly:
# https://wiki.debian.org/UEFI#Force_grub-efi_installation_to_the_removable_media_path
d-i grub-installer/force-efi-extra-removable boolean true

Unfortunately this is still necessary with 11.2-BETA3.
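For an already-installed guest, the equivalent manual fix is to reinstall GRUB to the removable media path by hand. A hedged sketch, assuming the usual Debian/Ubuntu layout with the ESP mounted at /boot/efi:

Code:
# inside the guest; --removable installs to EFI/BOOT/BOOTX64.EFI instead of
# registering a boot entry in the firmware's NVRAM
sudo grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable
sudo update-grub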
 

diskdiddler

Wizard
Joined
Jul 9, 2014
Messages
2,374
I thought all that stuff was going to be fixed in B3; will it be in RC1?
What's the ticket/job # on the tracker?
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
hexley said:
I had to do a little searching to find this since it doesn't show up in Debian's example preseed, but you can also do this if you use preseeds:

Code:
# Work around buggy UEFI implementations that do not support boot entries
# correctly:
# https://wiki.debian.org/UEFI#Force_grub-efi_installation_to_the_removable_media_path
d-i grub-installer/force-efi-extra-removable boolean true

Unfortunately this is still necessary with 11.2-BETA3.

d-i preseed? Not for the casual user, then. But just use the expert install; surely that's easier than mounting and copying an iso, changing the preseed file and then generating a new iso.
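A middle ground (untested here): d-i questions like that can usually be passed straight on the installer's kernel command line, so no re-mastering is needed:

Code:
# at the installer boot menu, edit the kernel line (Tab or 'e') and append:
grub-installer/force-efi-extra-removable=true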
 