Ubuntu Server 17.10.1 fails on boot with 'end Kernel panic - not syncing: VFS: unable to mount root'

Status
Not open for further replies.

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
I have had this Ubuntu server running in a UEFI bhyve VM on FreeNAS 11.1-RELEASE for a while, and it has been working fine. I recently ran some updates on Ubuntu but never restarted the server. After getting some FreeNAS alerts about swap utilization running out, I decided to reboot the FreeNAS server. Now my Ubuntu Server VM refuses to boot: the attached messages are displayed and it will not take any keyboard input. I also have difficulty reaching any boot options when restarting the VM, as it restarts too fast.

The system is an Intel(R) Atom(TM) CPU C2550 @ 2.40GHz with 16GB RAM. It hasn't struggled to run the VM previously, and I haven't recently run an upgrade to FreeNAS, so I can only assume something was broken by the Ubuntu updates. The problem is, I don't know how to get access to repair that, if that is even the problem.
[Screenshot: ubuntuboot.PNG]
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It looks like something is wrong with the path, because it is saying it cannot find the root file system (VFS).

 

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
I haven't changed anything like that; the file system is still there. I've tried a repair from the original install ISO, and it didn't work. I found another post that talked about copying grubx64.efi into the boot folder as bootx64.efi, and that had the same result as above. I'm not an expert on this stuff at all. As I say, last week I did run updates on the Ubuntu server, but it kept running fine. I rebooted my FreeNAS yesterday, and now the VM won't boot.
I'm a bit stuck, could it be something wrong with Bhyve?
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Did you shut down your Ubuntu VM before rebooting FreeNAS? That error message is indicative of a corrupt file system. I'd investigate the state of the Ubuntu virtual machine's filesystem by booting from a rescue ISO, e.g. Ubuntu in live mode or SystemRescueCd.

I'd guess that you tried a GRUB repair, perhaps as described here:
https://www.howtogeek.com/196740/how-to-fix-an-ubuntu-system-when-it-wont-boot/

but you need to get to the root shell and poke around to see what can be mounted.
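
From the live environment, that first poke around would look something like this (sda2 is only a guess at the root partition; adjust to whatever lsblk reports):

lsblk -f                 # list the VM's partitions and filesystem types
mount /dev/sda2 /mnt     # try mounting the likely root partition (device name is a guess)
ls /mnt                  # if this shows etc/, var/, boot/ and so on, the filesystem is at least readable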

VMs, like everything else, need backups.
 

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
I truly wish I had created a backup, but I am honestly not sure whether this is actually an Ubuntu issue or a FreeNAS VM issue. This page https://forums.freenas.org/index.php?threads/howto-how-to-boot-linux-vms-using-uefi.54039/ talks about issues with booting and bhyve. I've tried moving the files around as suggested, but it doesn't seem to work. I've also tried to re-install Ubuntu now, and the GRUB loader screen has returned, but I still get the kernel panic when booting. It's all a bit baffling, as nothing has happened to the file systems that should have caused this. I am now wondering if it is a bug or issue with bhyve, as I have exhausted almost everything I can try from the Ubuntu side.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I've also tried to re-install Ubuntu now, and the GRUB loader screen has returned, but I still get the kernel panic when booting

Just to be clear, are you saying that starting from scratch and creating a new Ubuntu server VM leads to the same results? What ISO did you use? I forgot to ask before, but what Ubuntu kernel was in use in your original VM?

If so, tell me the VM device config you used, and whether you used LVM etc. during the install, and I'll see if I can reproduce this error.

P.S. Did you update the kernel in the original Ubuntu VM? If you've got grub back, have you tried booting an earlier kernel on the original VM?
 

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
No, I didn't start from scratch; I tried re-installing Ubuntu on the same file system, as I'd like to recover some things if possible. I am using the ubuntu-17.10.1-server-amd64.iso, which I believe ships with the 4.13.0-21 kernel. I have a feeling the upgrade I ran was for everything, including the kernel, and that is now on the file system at 4.13.0-38. As part of trying to fix this issue, I thought it might be due to running out of space on the boot partition, so I (stupidly, perhaps) ran apt-get autoremove. I think this has partially removed the 4.13.0-21 kernel, so I am not sure how I can go back to that. The GRUB options at the moment are only for 4.13.0-37 and -38 and their recovery modes, all of which fail with kernel panic messages. I've also been trying to get help here: https://askubuntu.com/questions/102...cing-vfs-after-updating-ubuntu-server-17-10-1, which covers more details about what's been going on.

I'm fairly amateur with FreeNAS, VMs, bhyve, etc., and new to Ubuntu; I've mostly been following guides to set things up. I set up the VM from the FreeNAS UI and gave it 8GB RAM, 4 cores, UEFI. I added a zvol, and that is the disk the VM is using. The CD is the ISO I have mentioned. Do you need more details than that?
I'm fairly sure I had to do some workaround when I first installed Ubuntu Server, which involved copying grubx64.efi from the ubuntu folder to the BOOT folder and renaming it bootx64.efi. I don't know if this has anything to do with the issue. I've retried doing that but had no luck.
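
For reference, that workaround boils down to copying GRUB's EFI binary to the fallback path the bhyve UEFI firmware looks for; roughly this (assuming the EFI partition is mounted at /boot/efi, which may not match my exact setup):

mkdir -p /boot/efi/EFI/BOOT                                         # the removable-media fallback directory
cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/BOOT/bootx64.efi  # copy GRUB in as the default loader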
 

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
I've just figured out how to look at the GRUB loader commands. The set root command doesn't look right to me?

[Screenshot: upload_2018-4-20_13-13-13.png]
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
It's difficult to say what state your VM's filesystem may be in after your various attempts to revive it. If you can mount /var in rescue mode, booting the VM from a live CD, the contents of /var/log/apt/history.log and/or /var/log/apt/term.log might help determine whether the problem is down to a failed update.

The "Kernel Panic - not syncing: VFS Unable to mount Root is on unknown ..." error message can be due to a corrupt or missing initrd as has been suggested elsewhere, did you make sure you used chroot form the LiveCD to your Vm's root before attempting to re-build the initrd with update-initramfs -u -k etc.?

Your partitions appear intact, but I don't see that you've managed to mount sda2 anywhere. The usual method is to mount the VM's sda2 to the /mnt directory of the LiveCD and then bind mount the correct elements before chrooting to /mnt. This means further actions as root operate on the VM's filesystem and not on the filesystem of the LiveCD.

One complication is that the special vfat EFI partition needs to be mounted after sda2, which I think should be at /mnt/boot. You can then check whether the upgrade failed because your boot directory ran out of space, etc. After chrooting you can also look at the last few commands executed as root using the history command.
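
A minimal sketch of that manual LiveCD chroot, assuming sda2 is the VM's root and sda1 the EFI partition (adjust the device names, and mount a separate /boot first if you have one):

mount /dev/sda2 /mnt              # the VM's root filesystem
mount /dev/sda1 /mnt/boot/efi     # the vfat EFI partition (or /mnt/boot, depending on the layout)
for d in dev dev/pts proc sys run; do mount --bind /$d /mnt/$d; done
chroot /mnt /bin/bash
update-initramfs -u -k all        # rebuild the initrd(s) from inside the chroot
update-grub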
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I've made that sound more complex than it need be, as I had forgotten that "recovery mode" - the "Rescue a broken system" option - on the Ubuntu server install ISO does the chroot stuff for you. I guess you must already have used that to view your VM's partitions. Once you reach the "device to use as root file system" screen, select the "dev/xxxxx/root" device and select "yes" to mount the separate /boot partition. Then select "Execute a shell in /dev/xxxx/root".

/var is available to check the apt logs, and root's history can be viewed with cat .bash_history in /root. You should also be able to view the contents of /boot.
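
For example, from that shell the checks amount to something like this (ls and df on /boot are just the obvious way to see its contents and free space):

less /var/log/apt/history.log    # recent apt transactions (installs, upgrades, removals)
less /var/log/apt/term.log       # full terminal output of those transactions
cat /root/.bash_history          # last commands run as root
ls -lh /boot                     # which kernels and initrd images are present, and their sizes
df -h /boot                      # whether the separate /boot partition ran out of space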
 

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
After a weekend off from this problem, I'm going to try a few things out again. It's getting to the stage where it has wasted so much of my time that I'm tempted to just cut my losses! So, regarding your second message: yeah, this is what I've been doing most of the time. I can get into the rescue shell and mount the boot partition fine, it seems. Whenever I run live from the CD, I am always running on the -21 kernel, which I assume is because I have booted from the CD, whereas the system is set to boot -38, as you can see in the last screen grab I posted above (does that look okay, by the way? The set root command with the large set of characters after it? As set root is where there seems to be a problem).
After going back through some of the support messages, I now remember the original steps to get Ubuntu to work on the VM, where I had to copy grubx64.efi into the BOOT folder renamed as bootx64.efi. After this I had a working system. I can only assume that perhaps when I ran the recent updates, grubx64.efi got updated. However, I have already tried renaming bootx64.efi to oldbootx64.efi and copying the potentially new grubx64.efi in as the new bootx64.efi. This very briefly displays what looks like an error message (the same as booting from the grubx64.efi file in the bhyve BIOS screen), but I can't tell what it says before it goes to the GRUB loader screen.
I am wondering if perhaps I need to re-install the -21 kernel and try that. However, almost everything I try with apt-get fails...

I will try and take a look at the apt logs. Is there anything in particular I should look for in /boot too?
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@Joe Fenton I was in too much of a hurry last week and not thinking clearly. Reading your OP again, it seems very likely the VM failed to boot after a restart simply because of an upgrade error after a new kernel was installed and/or incorrect removal of packages. The VM wouldn't boot to the newer kernel until restarted, so you were not aware of the problem until then. If this was the case, the remedy was to choose the "Advanced options" entry at the first GRUB screen and pick an older kernel to boot from, then sort out the problem. (Best practice is always to keep at least one old known-working kernel on your VM. Also, if /boot is on a separate partition, monitor the space used and don't install new kernels if space is low.)

But now your VM install is in an uncertain state and you may have compounded the problem. If you can be certain all the original partitions are intact and apt-get update && apt-get upgrade can complete without error, you may be able to restore your VM to a functioning state.

These screenshots are an example of what I'd expect to see using "rescue mode" when the original install was done by choosing the "guided with LVM" option for disk layout. The thing to note here is that /boot lies outside the LVM device in its own partition, as does the ESP (EFI system partition); the device is /dev/vda as I used a virtio disk.

1. The device-mapper root and /boot mounted in rescue mode, as shown when executing a shell in the device-mapper root:

2. You still need to manually mount the ESP (EFI system partition) at /boot/efi, for example with mount -t vfat /dev/vda1 /boot/efi, after making sure the /boot partition is already mounted.

3. Example contents of /boot.


I'd bet the apt-get commands are failing because you can't ping out from the rescue-mode chroot. You are right: after a "rescue mode chroot", it's the live installer's kernel that is running.

First check all partitions are intact and the contents of /boot and /boot/efi/EFI before proceeding.
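
Something like the following, run from the rescue mode shell, should cover those checks (device names will differ from my vda example):

lsblk -f                   # partitions and filesystems on the virtual disk
df -h / /boot /boot/efi    # confirm the mounts and check free space, especially on /boot
ls -lh /boot               # kernels and initrd images present
ls -R /boot/efi/EFI        # EFI loaders: the ubuntu/ and BOOT/ directories
ping -c 3 8.8.8.8          # quick network check from inside the chroot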
 

Attachments

  • u1.jpeg
  • u2.jpeg
  • u3.jpeg

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
Do I need to manually mount the ESP (EFI) partition? It looks like it might be right to me:
[Screenshots: upload_2018-4-23_13-44-55.png, upload_2018-4-23_13-46-34.png, upload_2018-4-23_13-49-3.png]


I can ping 8.8.8.8 but not www.google.com, so perhaps the issue with apt-get is a DNS one for some reason?

I wish I'd realised, before I did the autoremove, that I could have gone back to the previous kernel. Currently I can't seem to boot -37 or -38. I'm not sure if it's an incorrect root mount option, though, as it complains about in the original screenshot.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Will reply fully when I return to my desk in about 60-90mins.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
The answer to your first question is no, not in your case, as you have a different partition layout and /boot/efi is already mounted.

To fix name resolution you'd typically use something like echo "nameserver 8.8.8.8" > /etc/resolv.conf, but Ubuntu complicates matters as it uses the resolvconf system, so /etc/resolv.conf is not a regular file but a symlink to /run/resolvconf/resolv.conf.

Deleting the symlink /etc/resolv.conf and creating an /etc/resolv.conf file gets round this:


rm /etc/resolv.conf
touch /etc/resolv.conf
echo "nameserver 8.8.8.8" > /etc/resolv.conf


Once you've finished, and before exiting the rescue-mode chroot, reverse this change:


rm /etc/resolv.conf
ln -s /run/resolvconf/resolv.conf /etc/resolv.conf


While you're in the chroot, does an apt-get update complete without error? This will tell you if the local package lists are in a consistent state. If so, an apt-get dist-upgrade should show whether a new kernel is to be installed, in which case an update-grub occurs automatically. Otherwise you can reinstall the -38 generic kernel packages with apt-get install --reinstall linux-headers-generic linux-signed-generic linux-signed-image-generic

There ought to be just one entry in the directory /boot/efi/EFI/BOOT: the file BOOTX64.EFI.

Remove any unwanted entries, cd to /, and then execute this command: grub-install --efi-directory=/boot/efi --boot-directory=/boot --removable

Exit the rescue-mode chroot and reboot without the ISO attached to your VM.
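
Put together, the whole pass inside the rescue-mode chroot looks roughly like this. Treat it as a sketch rather than something to paste blindly; the oldbootx64.efi removal is only an example of a stray entry:

apt-get update                                   # should complete without error if the package state is sane
apt-get dist-upgrade                             # finish any half-completed upgrade (runs update-grub if a kernel changes)
# only if no new kernel was installed above:
apt-get install --reinstall linux-headers-generic linux-signed-generic linux-signed-image-generic
update-grub
ls -l /boot/efi/EFI/BOOT                         # should end up containing only BOOTX64.EFI
rm /boot/efi/EFI/BOOT/oldbootx64.efi             # remove stray entries left from earlier experiments (example name)
cd /
grub-install --efi-directory=/boot/efi --boot-directory=/boot --removable
exit                                             # leave the chroot, detach the ISO, reboot the VM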
 

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
This is so great, thanks KrisBee! I feel like I'm finally making progress. Doing what you said finally let me add 8.8.8.8 as a DNS server. That let apt-get update work with no errors. I ran apt-get dist-upgrade, and there were around 40 packages to update, so I have set that going. Some are failing because it is in a chroot, but not many, and some are failing possibly because a log area isn't mounted, but most seem to be working. I did see this, which is what I have seen before when trying to fix GRUB:
[Screenshot: upload_2018-4-23_17-0-5.png]


I don't know if that indicates there might still be a problem?

Do I delete the bootx64.efi and re-copy the grubx64.efi from the ubuntu folder, or put the old bootx64.efi that I renamed back in place? Just thought I'd check.

Fingers crossed this gets me there!

Thanks again
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
The error messages are because the "rescue mode chroot" from the Ubuntu server ISO doesn't bind mount "/run" or "/dev/pts", which you would normally do if you created the chroot manually. I did try this myself, as I normally use Debian and not Ubuntu, and there didn't seem to be any unwanted side effects. (If you had created the chroot manually, then DNS would work in the chroot, and you could use any of the Ubuntu/Kubuntu/Xubuntu live CDs as your starting point.)

You didn't say whether apt-get dist-upgrade indicated a newer kernel would be installed or not. If you have to go the reinstall-the-38-kernel route, I omitted the update-grub command that should follow it.

No, don't delete the bootx64.efi and re-copy. Follow the last part of my last post and use that special grub-install command instead. If you do that and then check the contents of /boot/efi/EFI/BOOT, you'll see the byte count of the BOOTX64.EFI file created there matches that of the shimx64.efi file in /boot/efi/EFI/ubuntu.
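
For instance, a quick way to compare the two after running it (paths as on a standard Ubuntu ESP):

ls -l /boot/efi/EFI/BOOT/BOOTX64.EFI /boot/efi/EFI/ubuntu/shimx64.efi    # the byte counts should match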

And don't forget to reverse the /etc/resolv.conf change you made before exiting the "rescue mode chroot".

If and when the VM is working - snapshot, snapshot, snapshot!
 

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
I tried re-copying the grubx64.efi as the bootx64.efi. I reverted the resolv.conf change and rebooted. The GRUB screen worked, and I let it continue. It still failed with a kernel panic, although it looked slightly different. Unfortunately I didn't get a screen grab; I just thought I'd reboot. Now GRUB is broken.
I get:
[Screenshot: upload_2018-4-23_17-36-2.png]

and then the grub loader prompt.

I don't know why it booted the first time and not the second. It seems to have lost the /boot mount somehow...?
 

Joe Fenton

Dabbler
Joined
May 5, 2015
Messages
40
The apt-get dist-upgrade didn't indicate that a newer kernel would be installed, so perhaps I could have omitted the update-grub command. I also need to undo the changes I made in the BOOT folder, then, and perhaps try to fix GRUB one more time...
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I tried re-copying the grubx64.efi as the bootx64.efi. I reverted the resolv.conf change and rebooted. The GRUB screen worked, and I let it continue. It still failed with a kernel panic, although it looked slightly different. Unfortunately I didn't get a screen grab; I just thought I'd reboot. Now GRUB is broken.
I get:
[Screenshot: upload_2018-4-23_17-36-2.png]
and then the grub loader prompt.

I don't know why it booted the first time and not the second. It seems to have lost the /boot mount somehow...?

You were too quick. I would reinstall the -38 kernel packages, followed by an update-grub. Then, while still in the rescue-mode chroot, follow the part of my posts #15 & #17 re: using the special grub-install command.
 