I am lost with ZVOL sizes, like many others from what I have read.
The ZVOLs for the 2 VMs were set at 20 and 48 GB respectively upon creation (see table below).
I assume that with the compression ratio of 1.86 the 20 GB turns into 36 GB, and the 48 GB with 1.51 into 79 GB?
But where does the Available space of these volumes come from? That would turn my VMs into 36+42=78 GB and 79+64=143 GB, way different from the 20 GB and 48 GB they were set to.
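As a sanity check on my own multiplication: ZFS reports compressratio as uncompressed/compressed, so multiplying the Used figure by the ratio should give the logical (uncompressed) data, using the numbers from the table below:

```python
# Sanity check on the table numbers: ZFS "compressratio" is
# uncompressed / compressed, so logical data = Used * ratio.
used_gib = {"freenasubuntu-w5xza": 36.46, "truenaspython-ev6kqg": 78.98}
ratio = {"freenasubuntu-w5xza": 1.86, "truenaspython-ev6kqg": 1.51}

for name, used in used_gib.items():
    logical = used * ratio[name]
    print(f"{name}: {used} GiB on disk ~= {logical:.1f} GiB logical")
```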
Inside the VMs, Ubuntu sees disks of only 20 GB and 48 GB, so they can't expand automatically there anyway.
Snapshot usage of the VMs is around 4 GB, so not really a factor here.
Am I right that the Available space of each VOLUME reduces the overall Available space on the pool?
I would like to reduce the free space of the VMs in order to have more overall free space on the pool.
I am lost on how free space works here and how to manage it. Should I have set Force under the zvol options? Or made the zvols sparse, i.e. set refreservation=none?
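In case it helps to show what I mean, these are the commands I have been looking at. The dataset path is my guess at the layout based on the table below, so adjust to taste:

```shell
# Inspect how the space of one zvol is accounted for
# (dataset path assumed: ssdpool/virtual-machines/freenasubuntu-w5xza)
zfs get volsize,used,usedbydataset,usedbysnapshots,refreservation,compressratio \
    ssdpool/virtual-machines/freenasubuntu-w5xza

# Drop the guaranteed reservation, making the zvol effectively sparse
# (returns the reserved-but-unwritten space to the pool)
zfs set refreservation=none ssdpool/virtual-machines/freenasubuntu-w5xza
```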
I ran into an out-of-space condition on this pool last night, so I wiped 3 snapshots of around 5-9 GB each to get going again, but I think the VMs are the big culprit here. Any explanations and suggestions are welcome.
I did read these links:
ZVOL why is it so confusing? (www.truenas.com)
avail size mismatch: zpool list vs zfs list (www.truenas.com)
Reservation & Ref Reservation - An Explanation (Attempt) (nex7.blogspot.com)
ssdpool (System Dataset Pool) ONLINE | 199.69 GiB (92%) Used | 17.31 GiB Free
| Name | Type | Used | Available | Compression | Compression Ratio |
|---|---|---|---|---|---|
| ssdpool | FILESYSTEM | 199.69 GiB | 17.31 GiB | zstd | 1.70 |
| iocage | FILESYSTEM | 61.06 GiB | 17.31 GiB | Inherits (zstd) | 2.00 |
| share | FILESYSTEM | 22.48 GiB | 17.31 GiB | Inherits (zstd) | 1.03 |
| virtual-machines | FILESYSTEM | 115.44 GiB | 17.31 GiB | Inherits (zstd) | 1.60 |
| freenasubuntu-w5xza | VOLUME | 36.46 GiB | 42.53 GiB | Inherits (zstd) | 1.86 |
| truenaspython-ev6kqg | VOLUME | 78.98 GiB | 64.21 GiB | Inherits (zstd) | 1.51 |