SOLVED: Unable to upgrade Ubuntu Server 18.04 LTS VM to 20.04 LTS due to disk space issues

sleeper52

Explorer
Joined
Nov 12, 2018
Messages
91
Hi guys,

I'm a bit of an Ubuntu novice and would appreciate the help. I'm on FreeNAS 11.3-U2.1 (see hardware specs in my sig). I installed my Ubuntu Server 18.04 VM a while back and it's been running a Pi-hole server fine. I tried upgrading to 20.04 LTS since it's available, but I'm getting an error saying that I don't have enough disk space. This is bizarre to me because I allocated a 100GB zvol for my Ubuntu server. I currently only run Pi-hole in this VM, so I should have a ton of space left. This VM is on a mirrored SSD pool (2x 1TB WD Blue SSDs) named VMs-Plugins-Jails. On installation, I configured the disk and NIC to run as VIRTIO.

update error.png

ubuntu_zvol.png

disk.png

fstab.png

file-tree.png
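
For reference, the disk and fstab screenshots are just the output of standard checks inside the VM, roughly:

Code:
# free space as seen by the guest (the "disk" screenshot)
df -h

# mount configuration (the "fstab" screenshot)
cat /etc/fstab

# also handy: block devices and partition layout of the virtual disk
lsblk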
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Why upgrade from 18.04 to 20.04 LTS when your VM works as you want? Ubuntu 18.04 LTS is supported until April 2023. In any case, Ubuntu 20.04 LTS on bhyve is causing errors at the moment (see: https://www.ixsystems.com/community/threads/vm-issues.84219/#post-582521).

From the images you posted, it looks like your install has only 3.9GB of space with 90% used - see the /dev/mapper/... entry in the df -h output in image no. 3.

Check for free space on your volume group, and/or use fdisk or cfdisk to check for unused space on your virtual disk device /dev/vda. But if this VM is only for Pi-hole, then a zvol size of 10GB would have been more than sufficient. I would also question whether you want/need LVM (Logical Volume Management) on your VM.
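
Something along these lines, run inside the VM, is what I mean - a sketch, and the device name /dev/vda and volume group name may differ on your install:

Code:
# LVM view: volume groups, free extents, and logical volumes
sudo vgs
sudo vgdisplay
sudo lvs

# partition view of the virtual disk: look for unallocated space
sudo fdisk -l /dev/vda
sudo cfdisk /dev/vda    # interactive; quit without writing if you only want to look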
 

sleeper52

Explorer
Joined
Nov 12, 2018
Messages
91

Hi Kris. Updating to 20.04 isn't really urgent or anything, rather me wanting to try it out. Seeing that 20.04 has problems, I'll probably hold off, but it still concerns me why I'm having disk space issues. I chose to allocate 100GB to the VM just in case I want to add more stuff to it in the future. How do I check free space on my volume group, and what should I do to resolve the disk issues? Also, below are the results of the fdisk and cfdisk checks you suggested:

fdisk.png

cfdisk.png
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
It looks like you elected to use the default LVM disk layout when you installed your Ubuntu VM via the live server ISO. See this example with a 20GB virtual disk device:

u18-lvm-install.jpg


Notice how partition /dev/vda3 is an LVM2 member with a volume group of a matching size (in your case it is 98GB). But the root logical volume has only been given 4GB, leaving a lot of free space in the volume group.

Post-install inside my example VM:
Code:
root@u18vm:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               966M     0  966M   0% /dev
tmpfs                              200M  1.1M  199M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  3.9G  1.8G  1.9G  49% /
tmpfs                              997M     0  997M   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              997M     0  997M   0% /sys/fs/cgroup
/dev/loop0                          90M   90M     0 100% /snap/core/8268
/dev/vda2                          976M   77M  832M   9% /boot
tmpfs                              200M     0  200M   0% /run/user/1000
root@u18vm:~# vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <19.00 GiB
  PE Size               4.00 MiB
  Total PE              4863
  Alloc PE / Size       1024 / 4.00 GiB
  Free  PE / Size       3839 / <15.00 GiB
  VG UUID               y33HEq-VweL-VAfK-H4ja-xQAh-W55d-cULkny
  
root@u18vm:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/ubuntu-lv
  LV Name                ubuntu-lv
  VG Name                ubuntu-vg
  LV UUID                aaV1Ez-UOq1-rjZs-vtj4-1NTE-Comz-ewzYEF
  LV Write Access        read/write
  LV Creation host, time ubuntu-server, 2020-04-26 07:44:39 +0000
  LV Status              available
  # open                 1
  LV Size                4.00 GiB
  Current LE             1024
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
root@u18vm:~#


To expand the logical volume ubuntu-lv to use all the available free space in the volume group ubuntu-vg, you need to add (in my example) 3839 extents and resize the filesystem on ubuntu-lv. You can do both in one command: "lvresize -l +3839 --resizefs ubuntu-vg/ubuntu-lv", e.g.:

Code:
root@u18vm:~# lvresize -l +3839 --resizefs ubuntu-vg/ubuntu-lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 4.00 GiB (1024 extents) to <19.00 GiB (4863 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.
resize2fs 1.44.1 (24-Mar-2018)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 4979712 (4k) blocks long.

root@u18vm:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg/ubuntu-lv
  LV Name                ubuntu-lv
  VG Name                ubuntu-vg
  LV UUID                aaV1Ez-UOq1-rjZs-vtj4-1NTE-Comz-ewzYEF
  LV Write Access        read/write
  LV Creation host, time ubuntu-server, 2020-04-26 07:44:39 +0000
  LV Status              available
  # open                 1
  LV Size                <19.00 GiB
  Current LE             4863
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
  
root@u18vm:~# df -h
Filesystem                         Size  Used Avail Use% Mounted on
udev                               966M     0  966M   0% /dev
tmpfs                              200M  1.1M  199M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   19G  1.8G   17G  11% /
tmpfs                              997M     0  997M   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
tmpfs                              997M     0  997M   0% /sys/fs/cgroup
/dev/loop0                          90M   90M     0 100% /snap/core/8268
/dev/vda2                          976M   77M  832M   9% /boot
tmpfs                              200M     0  200M   0% /run/user/1000
root@u18vm:~# 


Adjust the 3839 number for your case.
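
If you'd rather not count extents, lvresize also accepts a percentage of the free space, so something like this should do the same job (again assuming the default ubuntu-vg/ubuntu-lv names):

Code:
# grow the root LV into all remaining free space in the VG and resize the filesystem in one go
sudo lvresize -l +100%FREE --resizefs ubuntu-vg/ubuntu-lv

# check the result
sudo vgdisplay
df -h /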

A couple of useful refs:
 

sleeper52

Explorer
Joined
Nov 12, 2018
Messages
91

Thanks Kris! I read through the links you provided. I decided to extend my logical volume by 11GB by using this command:

$ sudo lvresize -L +11G --resizefs ubuntu-vg/ubuntu-lv
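
For anyone following along, a quick way to confirm the resize took effect (assuming the same default LV names):

$ sudo lvs ubuntu-vg/ubuntu-lv
$ df -h /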

Cheers mate.
 