[How-To] Cloning new instances of a template VM for BHYVE

jimmy_1969

Cadet
Joined
Sep 27, 2015
Messages
7
Background
The use-case is rather straightforward. Rather than installing BHYVE VM instances from scratch every time, it would be nice to configure once, and then clone new VM instances every time we make a new deployment.

I spent a few days researching this topic and testing the new VM clone feature in FreeNAS, but the end result was poor. I found a multitude of questions online regarding BHYVE VM cloning, but no solutions that I could easily apply in a FreeNAS context. The new clone feature did not work for me either: I could clone a VM, but the cloned instance didn't want to start. I spent hours trying to figure out a way to troubleshoot BHYVE in FreeNAS but found nothing tangible (where are the BHYVE logs?!).

This prompted me to develop my own work-flow, where I try to leverage the (classic) GUI as much as possible, combined with a few CLI interactions to fill the functional gaps. This method might not be everyone's cup of tea, but hopefully it is easy enough for non-expert users to give it a go.

High Level Work-Flow
  1. Create a template VM
  2. Configure it according to your requirements
  3. Clone the template VM to create new instances
IMPORTANT: CentOS7 as a BHYVE Guest VM
Some users have reported that booting a fresh installation of a CentOS7 guest VM will result in failure, dropping you at the UEFI shell instead. This is a known issue and the work-around can be found here.
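In short, the bhyve UEFI firmware only probes the fallback boot path, while the CentOS installer registers GRUB under its own directory. A minimal sketch of the usual fix, run inside the guest after installation (paths assume a standard CentOS 7 EFI layout):

# copy GRUB to the fallback path the firmware actually looks for
mkdir -p /boot/efi/EFI/BOOT
cp /boot/efi/EFI/centos/grubx64.efi /boot/efi/EFI/BOOT/BOOTX64.EFI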

Method of Procedure
1. Create a new VM template dataset, e.g. 'CentOS7_Template'

2. Create a new ZVOL inside the dataset to be used as the boot drive. Specify:
  • Name, e.g. 'CentOS7_Template_Bootdisk'
  • Size, e.g. 12 GB
You will now have a new VM template dataset looking something like the screenshot below.
[screenshot: CentOS7_Template dataset with its ZVOL]
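If you prefer the CLI, the two GUI steps above correspond roughly to the following zfs commands (pool and dataset names match the examples in this guide; the GUI may set a few extra properties):

sudo zfs create bigvolume/vm/CentOS7_Template
sudo zfs create -V 12G bigvolume/vm/CentOS7_Template/CentOS7_Template_Bootdisk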


3. Create a new template VM, e.g. 'CentOS7_Template'

4. Select 'Devices' and 'Add Device' and specify:
  • VM, e.g. 'CentOS7_Template'
  • Type: 'Disk'
  • ZVOL: the newly created template ZVOL, e.g. 'CentOS7_Template_Bootdisk'
  • Mode: 'VirtIO'
5. Start the template VM and log in using VNC. Set up the template as per your requirements, e.g. update the software, configure services, install additional packages, etc. Stop/start the VM and verify its new configuration. If everything has gone right, you now have a working VM that can be used as the master for future instances.
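As an illustration, a typical CentOS 7 template preparation session might look like this (the package choices are just examples, not part of the procedure):

yum -y update                    # bring the base system up to date
yum -y install epel-release vim  # example extra packages; pick your own
systemctl enable sshd            # make sure SSH comes up on every clone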

6. Next, we will take a snapshot in case we later want to roll back to the initial install state. Stop the VM. Then go to 'Storage', select the template dataset, e.g. 'CentOS7_Template', and 'Create Snapshot'. Specify:
  • Recursive Snapshot: Select the box
  • Snapshot Name: e.g. Initial_Install-20180205
Now it is time to clone the VM template's dataset and ZVOL and prepare a new VM drive instance.

7. Log in to your FreeNAS server's command prompt. We will assume a sudo-enabled account.

8. Locate the template dataset.
sudo zfs list | grep bigvolume/vm
Password:
bigvolume/vm 36.2G 201G 96K /mnt/bigvolume/vm
bigvolume/vm/CentOS7_Template 13.2G 201G 88K /mnt/bigvolume/vm/CentOS7_Template
bigvolume/vm/CentOS7_Template/CentOS7_Template_Bootdisk 13.2G 214G 1.06G -


9. Create a new recursive snapshot of the VM template dataset (we could have used the initial snapshot, but this way the same procedure applies even after the template has been modified/updated).
sudo zfs snapshot -r bigvolume/vm/CentOS7_Template@backup

10. Clone the dataset using zfs send/receive. The new VM instance's dataset is specified on the receiving side, e.g. 'bigvolume/vm/NewCentOS7VM'.
sudo zfs send -R bigvolume/vm/CentOS7_Template@backup | sudo zfs receive -Fv bigvolume/vm/NewCentOS7VM

11. Locate the new cloned dataset. Note that the cloned ZVOL still has its original name.
sudo zfs list | grep bigvolume/vm
Password:
bigvolume/vm 36.2G 201G 88K /mnt/bigvolume/vm
bigvolume/vm/CentOS7_Template 13.3G 201G 88K /mnt/bigvolume/vm/CentOS7_Template
bigvolume/vm/CentOS7_Template/CentOS7_Template_Bootdisk 13.3G 214G 1.06G -
bigvolume/vm/NewCentOS7VM 13.2G 201G 88K /mnt/bigvolume/vm/NewCentOS7VM
bigvolume/vm/NewCentOS7VM/CentOS7_Template_Bootdisk 13.2G 214G 1.06G -


12. Rename the cloned ZVOL to reflect the new VM instance, e.g. 'NewCentOS7VM_Bootdisk'.
sudo zfs rename bigvolume/vm/NewCentOS7VM/CentOS7_Template_Bootdisk bigvolume/vm/NewCentOS7VM/NewCentOS7VM_Bootdisk


The new cloned ZVOL is shown in the picture below.
[screenshot: renamed ZVOL under the NewCentOS7VM dataset]
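Alternatively, verify the rename from the command line:

sudo zfs list -r bigvolume/vm/NewCentOS7VM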


13. The next step is to create the new VM instance. Go to the GUI menu 'VMs' and select 'Add VM'. Specify:
  • VM Type: 'Virtual Machine' (Default)
  • Name: e.g. 'NewCentOS7VM'
  • Comment: e.g. 'My new CentOS7 instance'
  • Configure Virtual CPUs and Memory Size as per your requirements
  • Boot Method: UEFI (Default)
  • Autostart: Select the box
14. Select 'Devices' and 'Add Device' and specify:
  • VM, e.g. 'NewCentOS7VM'
  • Type: 'Disk'
  • ZVOL: the newly cloned ZVOL, e.g. 'NewCentOS7VM_Bootdisk'
  • Mode: 'VirtIO'
The picture below shows the newly created VM instance.
[screenshot: NewCentOS7VM and its devices in the VMs view]


OPTIONAL
Depending on your IP configuration, you might have to change the MAC address of the new instance's NIC to avoid address conflicts. Edit the NIC device to allocate a unique MAC address if required.
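If you would rather generate a MAC address at the shell than make one up, something like this produces a random locally administered address (the 02: prefix marks it as locally administered, so it will not collide with vendor-assigned MACs):

openssl rand -hex 5 | sed 's/\(..\)\(..\)\(..\)\(..\)\(..\)/02:\1:\2:\3:\4:\5/'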

15. Start the new VM and log in using VNC.

16. Change the hostname of the new VM instance, e.g. 'newcentos7vm'.
hostnamectl set-hostname newcentos7vm

17. Re-create the SSH host keys (to ensure they are unique) by deleting the existing ones and restarting the SSH service.
rm -f /etc/ssh/ssh_host*key*
systemctl restart sshd
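On CentOS 7 the sshd service regenerates missing host keys when it starts. To confirm the clone now has its own keys, compare the fingerprints with the template's:

for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done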


18. [CentOS Specific] NetworkManager can lose IP connectivity if it is based on a cloned configuration; UUID and udev configurations typically don't play nicely after being cloned. If running in headless mode, or if eth0 never changes, NetworkManager isn't needed. So let us disable it and use the good old network service instead, which should work right out of the box.
chkconfig NetworkManager off
chkconfig network on


You might want to add "NM_CONTROLLED=no" to your ifcfg-eth0 file to ensure that, in the event NetworkManager is ever activated again, the eth0 device will still operate independently.
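A minimal ifcfg-eth0 for this set-up might look like the following (the values are examples; adjust to your network):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
NM_CONTROLLED=no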


19. Stop the new VM instance.

20. Create a snapshot of the new VM instance to facilitate future rollback to its initial state. Select 'Storage', then the new instance dataset, and 'Create Snapshot'.
Specify:
  • Recursive Snapshot: Select the box
  • Snapshot Name: e.g. Initial_Install-20180205

Conclusion
The described work-flow supports cloning of VMs, and all new instances are recognized by the GUI. Suggestions to improve or simplify the procedure are most welcome.

Best Regards

//Jimmy
 

hiro5id

Dabbler
Joined
Aug 21, 2016
Messages
35
Thanks for this guide @jimmy_1969! Curious, why not use the built-in "CLONE" button in the FreeNAS UI under VMs? Is there any difference with your approach?
 

kaipee

Dabbler
Joined
Dec 20, 2014
Messages
27
jimmy_1969 said:
Background
The use-case is rather straightforward. Rather than installing BHYVE VM instances from scratch every time, it would be nice to configure once, and then clone new VM instances every time we make a new deployment.
................./Jimmy

Thanks for the guide Jimmy, it works quite well for an Arch installation, except that when I boot the cloned VM (UEFI boot) I get presented with the UEFI Interactive Shell. Looks like the disk gets renamed in some way??
 

KevDog

Patron
Joined
Nov 26, 2016
Messages
462
Old thread, however I thought it was going to be useful.
I have FreeNAS --> created VM (Arch with ZFS on root filesystem).
I recreated your steps and created a new VM with the send/receive (clone).

I booted the new VM, however at this stage the boot loader (UEFI) reports it can't import zpool 'tank'. Seems to be a problem.
 

KevDog

Patron
Joined
Nov 26, 2016
Messages
462
I know I'm reviving an old thread, however I followed this thread to clone a base bhyve VM installation running Arch Linux with ZFS on root. The instructions given in the first post above are accurate and complete for cloning a bhyve VM through a combination of manual and GUI tactics. Since my underlying VM incorporated an internal ZFS pool, the cloned image wouldn't boot. Although I am using Arch Linux, I expect any Linux distribution configured with ZFS on root to behave in a similar manner, since the hardware IDs associated with the VM have changed from the original installation to the cloned installation.

I'm posting a link here on how to rectify this problem, since it requires manually importing the pool and mountpoints, performing a chroot, and then regenerating the initramfs which is needed to boot the system: https://bbs.archlinux.org/viewtopic.php?pid=1885776#p1885776. Some of my solution contains steps that are specific to Arch Linux, i.e. arch-chroot, however I'm wondering/betting that similar steps, e.g. a chroot on Debian, could be interchanged in my instructions.
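For reference, the recovery described in the linked post boils down to something like the following, run from an Arch live ISO attached to the cloned VM (the pool and dataset names are examples; adapt them to your layout):

zpool import -f -R /mnt tank   # force the import, as the hostid has changed
zfs mount tank/ROOT/default    # mount the root dataset
zfs mount -a                   # mount the remaining datasets
arch-chroot /mnt
mkinitcpio -P                  # regenerate the initramfs for all presets
exit
zpool export tank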
 