patsoffice
Cadet
- Joined
- Mar 7, 2020
- Messages
- 1
Hrmmm. I upgraded to 11.3u1 this morning and my .bhyve_containers directory was not removed. Just an FYI for anybody that hasn't upgraded yet.
After migrating to FreeNAS 11.3, my Docker Host (Rancher) wouldn't start any more, failing with "[EFAULT] grub-bhyve timed out, please check your grub config.". Two things were wrong:
1) The grubconfig path pointed at the wrong dataset in the FreeNAS DB (/data/freenas-v1.db, table: vm_vm, field: grubconfig).
2) The timeout in vm.py is too short (2s); change it to 20s and reboot (line 280 in /usr/local/lib/python3.7/site-packages/middlewared/plugins/vm.py).
After this my Docker Host works again.
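The timeout bump in step 2 is a one-line edit. As a self-contained illustration of that kind of change, here's the sed idiom practiced on a scratch file; the real line in vm.py differs between builds, so locate it by hand with grep first and keep a backup of the file:

```shell
# Practice the edit on a throwaway file first; the real target is
# /usr/local/lib/python3.7/site-packages/middlewared/plugins/vm.py
# (back it up before touching it). "timeout = 2" here is a stand-in,
# not the literal line from vm.py.
f=$(mktemp)
echo "timeout = 2" > "$f"
sed -i 's/timeout = 2/timeout = 20/' "$f"
cat "$f"        # timeout = 20
rm -f "$f"
```

On the real system, remember the middleware caches the compiled module, so a reboot (or middleware restart) is needed for the change to take effect.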
I upgraded yesterday and nothing was removed.
You just have to edit the vm.py like jeud said in post 7 in this thread.
Edit that file, restart and everything is working again!
So you didn't have to do this step: 1) The path of grubconfig was in the wrong dataset in the FreeNAS DB (/data/freenas-v1.db, table: vm_vm, field: grubconfig)? Because I have no idea how I'm supposed to modify that. OK, changed vm.py, going to try a reboot.
*update* Just changing the timeout in vm.py is not sufficient to get it to boot.
The file jeud mentioned in post #7, /data/freenas-v1.db, is in sqlite3 format and should only be edited via the sqlite3 shell.
In my case the path to grub.cfg was incorrect after the upgrade; after correcting the path I was able to start my Docker host without a reboot.
For the less technical or lazy persons :) here are a few steps to update the sqlite3 DB file.
By the way - "I will not accept any responsibility if YOU screw up your DB" :p
The guide:
I also updated the vm.py file according to #7, just in case.
- Open SSH connection to your FreeNAS server.
- Make sure your DB exists - ls -l /data/freenas-v1.db
- Expected output - "-rw-r----- 1 root operator <size> <date> /data/freenas-v1.db"
- Open shell to the DB - sqlite3 /data/freenas-v1.db
- Great, now we're in the sqlite3 shell; let's print the content of the table we are interested in:
- select * from vm_vm;
- This should output all entries from vm_vm table.
- Now use your brain and find the entry relevant to your environment with the docker configuration.
- Mine was: "9|DockerHub||4|4096|1|LOCAL|/mnt/wdstorage/.bhyve_containers/configs/9_DockerHub/grub/grub.cfg|GRUB"
- So this path was incorrect - the pool name was wrong.
- Locate in your FreeNAS setup grub.cfg of the docker you want to resurrect.
- Finally, last step, let's update the entry to the correct path.
- Look at the entry we printed with the select above; the first column is the ID (in my case it has the value 9 - very important).
- We want to update only this specific entry and specific column.
- shell cmd: "update vm_vm set grubconfig='/mnt/truck/.bhyve_containers/configs/9_DockerHub/grub/grub.cfg' where id=9;"
- Pay attention: at the end we pointed to id=9 and updated the grubconfig column with the correct path.
- Verify your change by running: select * from vm_vm where id=9; You should see the entry with the correct path now.
- Finito.
Now open your FreeNAS WebUI and go to the VM tabs, start your docker VM.
Note: in my case it didn't work the first time I pressed start, but the second time it started successfully - docker back to life :)
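The sqlite3 steps above can be rehearsed safely against a throwaway database before touching the real /data/freenas-v1.db. This sketch uses a simplified vm_vm table (the real one has more columns; id and grubconfig are the ones that matter) with the paths from my entry:

```shell
# Dry run of the UPDATE against a scratch DB - nothing real is touched.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE vm_vm (id INTEGER PRIMARY KEY, name TEXT, grubconfig TEXT);
INSERT INTO vm_vm VALUES (9, 'DockerHub', '/mnt/wdstorage/.bhyve_containers/configs/9_DockerHub/grub/grub.cfg');"
# The actual fix: repoint grubconfig for the one row, keyed by id.
sqlite3 "$db" "UPDATE vm_vm SET grubconfig='/mnt/truck/.bhyve_containers/configs/9_DockerHub/grub/grub.cfg' WHERE id=9;"
sqlite3 "$db" "SELECT id, grubconfig FROM vm_vm WHERE id=9;"
rm -f "$db"
```

The WHERE id=9 clause is the safety net: without it the UPDATE would rewrite grubconfig for every VM in the table.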
There is no /usr/share/zos on my freenas machine. It was working fine on 11.2. I upgraded to 11.3 and it was still fine until I rebooted, and then rancher never came up. If I create a new VM with debian and install docker, is there a simple method to transfer the configs/images for the defined docker containers?
Sorry mate for your issue with docker, must be frustrating... but maybe someone with more exp will help here.
About the second part - where do your docker configs/volumes/images/yada yada yada reside?
I can tell you how I did it to make it less dependent on the OS you run your docker in.
Since a VM is not a jail we can't just mount a dataset here; we need to work with a zvol.
Create a zvol that will serve as your docker configuration storage; I went with 20GB, over the top....
Next, go to the VM tab in your FreeNAS and add a new device (DISK) to your VM.
Reboot the VM, SSH into it, and permanently mount the disk in your system, say at "/mnt/storage"; you will need to format the disk, create a partition and all that crap to make it usable.
Once you finish, you need to point docker to store its configuration folder on that mounted disk. In RancherOS it's simple: just "vi /etc/docker/daemon.json" and add something like this:
{
"data-root": "/mnt/storage/docker"
}
and reboot the OS. Once you're back online, go to the directory (/mnt/storage/docker) and you should see all the docker images, volumes and other happy goodies there.
From this point everything is stored on the mounted disk, which can be mounted anywhere once you point docker to it (with a few adjustments, but nothing serious).
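Before rebooting, it's worth checking that daemon.json is valid JSON, since docker will fail to start on a malformed file. A quick sketch (the path is a scratch location here; on RancherOS the real file is /etc/docker/daemon.json):

```shell
# Write the snippet to a scratch file and make sure it parses;
# python3 -m json.tool exits non-zero on broken JSON.
tmp=$(mktemp -d)
cat > "$tmp/daemon.json" <<'EOF'
{
  "data-root": "/mnt/storage/docker"
}
EOF
python3 -m json.tool "$tmp/daemon.json"
rm -rf "$tmp"
```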
In your situation, you need to create a new VM with debian (your words), mount the docker disk that was created for the RancherOS VM, copy the docker configuration from it to the new destination, then point docker at that destination and restart docker - that should do it.
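The copy step in that migration is just a permission-preserving copy of the data-root. A toy illustration, with scratch directories standing in for the old and new mount points:

```shell
# Scratch dirs stand in for the old (rancheros) and new (debian) mounts;
# on a real VM these would be the two disks' mount points.
old=$(mktemp -d); new=$(mktemp -d)
mkdir -p "$old/docker/volumes"
echo demo > "$old/docker/volumes/app.txt"
# cp -a keeps ownership, modes and timestamps, which docker cares about.
cp -a "$old/docker" "$new/"
ls "$new/docker/volumes"   # app.txt
rm -rf "$old" "$new"
```

Stop the docker daemon on both ends before copying, or you'll snapshot the data-root mid-write.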
I am using a zvol as an additional mounted disk to store the docker configuration. By using NFS you introduce a huge bottleneck and performance degradation for your docker. You can use NFS to store specific container data, but for the docker settings it's a bad idea.

All container storage is available under /mnt/tank/shared/<appname> on freenas, which becomes /mnt/nfs-1/<appname> on rancher. This gets passed to each of the docker containers. The only image I have access to is the VM image @ /mnt/tank/vm named rancher.img_docker1, which is 20GB in size. How does that affect your instructions?
Yes, it's possible to get RancherOS working in 11.3, but it's non-obvious. It took me a month of dinking around before I figured out what to do. For the record, I have RancherOS 1.5.5 running in a bhyve VM. No database tweaks nor source code modifications to tweak timeouts needed.
- First, you have to understand the prerequisites for the bhyve grub bootloader. There are 2 non-GUI settings that can ONLY be set via the REST 2.0 API. For the gory details, see my HOW-TO for grub boot. Note, to avoid the time-out issue, use single quotes, not parentheses, with the grub.cfg set root='hd0,msdos1' directive.
- Next, since RancherOS boots via syslinux, you have to generate a grub.cfg that emulates what syslinux does. Mine is as follows:
Code:
set timeout=0
set default=rancheros
menuentry "RancherOS" --id rancheros {
  set root='hd0,msdos1'
  linux /boot/vmlinuz-4.14.138-rancher printk.devkmsg=on panic=10 rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait rancher.resize_device=/dev/vda
  initrd /boot/initrd-v1.5.5
}
- RancherOS 1.5.5 can boot via VirtIO. I use a 2 GB RAW file as the RANCHER_STATE boot volume, and a 25 GB zvol to house a 4 GB RANCHER_SWAP and a 20 GB Docker root partition. Both are set to 512 bytes/sector, and attach via VirtIO. See my post on Docker options for FreeNAS for more details.
You can use any old RancherOS RAW file as your VM boot volume. Just use the REST 2.0 API to set the "boot"=true attribute, and to point the VM "grubconfig" attribute to any existing grub.cfg in .bhyve_containers.
How are you doing an upgrade?
If you add the options rancher.autologin=tty1 rancher.recovery=true to the end of grub linux entry, you'll be able to access the RancherOS recovery console on the next boot, which will allow you to mount /dev/sda1 to access the /boot menu to find the paths to the new vmlinuz and initrd.
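Applied to a grub.cfg like the one earlier in the thread, the amended entry would look something like this; the kernel and initrd versions are whatever your install actually has, so treat those filenames as placeholders:

```
menuentry "RancherOS" --id rancheros {
  set root='hd0,msdos1'
  linux /boot/vmlinuz-4.14.138-rancher printk.devkmsg=on panic=10 rancher.state.dev=LABEL=RANCHER_STATE rancher.state.wait rancher.resize_device=/dev/vda rancher.autologin=tty1 rancher.recovery=true
  initrd /boot/initrd-v1.5.5
}
```

Remove the two extra options again once you've fixed the paths, or the VM will keep dropping into the recovery console on every boot.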
I've followed HugoPoi's instructions from his website as well, and was able to get everything up and running, except for the grub.cfg part. I followed the instructions here: https://blog.hugopoi.net/2020/03/01/install-rancheros-on-freenas-11-3/ and it was using vmlinuz-4.14.73 and initrd-v1.4.2, even though I could not find those files anywhere. I copied the new vmlinuz and initrd to /boot and suddenly I'm at a grub line... so I added the rancher.img as a raw device and at least got booted up. Now I can mount /dev/vda1 to /mnt/efipart and get to the grub entry. I've fixed it back to the way it was and rebooted... still got the grub rescue. So I mounted vda2 and made sure I deleted vmlinuz-4.14.138 and initrd-v1.5.5 from /boot, so it should be back to what was booting fine... still getting grub rescue. If I ever get it booting again so I can get to my 12 containers, I'm not touching the thing again. On top of everything else, I was previously able to ssh in after assigning a password to user rancher; now, even booting off the rancher image, eth0 isn't getting an IP address.
*update* I resolved everything by wiping the virtual machine and starting over. Luckily, since I saved the docker run "templates" on NFS, recreating the 11 containers (got rid of 1) wasn't a big deal, and their configs were saved as well. I'm about resolved to stop upgrading. I had everything in jails, and then after an upgrade I could no longer create new ones because they needed to be in new jails. To avoid that happening to me again I decided to put everything in docker containers... but then the support for that was dropped, leaving me here. In the process of redoing everything, the version issue was resolved.
I've followed HugoPoi's instructions from his website as well, and was able to get everything up and running, except for the grub.cfg part.
I don't quite understand that part.
When I do sudo reboot, it boots me to a grub> prompt.