rsriverrat
Cadet
Joined: Feb 5, 2023
Messages: 1
Hey everyone, my apologies in advance if this has been answered, but I could not find an answer anywhere.
tl;dr
Zvols that are snapshotted and then replicated do not show in the UI at their original size; the UI only shows the REFER size, even though the command line reports the volsize correctly.
Now for the too long:
I have been using ZFS for a while now and learned enough to get up and going: FreeNAS, then TN Core, then Proxmox with 2 ZFS pools. But I wanted to change things up a bit since TN Scale seems to be going well. So, trying to follow a tutorial on how to use Docker/Portainer, I spun up a Debian VM (100G), followed along, tanked somewhere along the way, and had to start over with a fresh VM. A reinstall takes a while, so I said to self, "self, you're a dufus, make a copy and work from there." So down that rabbit hole I go, since TN Scale uses zvols instead of the way Proxmox did it..... Here we go, boys and girls. I did a lot of reading the last week or so and think I finally figured it out; however, one thing that still baffles me is the way the storage screen shows the new zvol at the REFER size, not the actual volsize.
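(Side note: the mismatch is easy to see from the shell. Something like this, using standard ZFS property names and my pool path, shows all three numbers side by side:)
Code:
root@truenas[~]# zfs get used,referenced,volsize pool/virtual_machines/debian_11-diip9y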
Here is what I did to snapshot and replicate the VM zvol.
Here you can see the snapshot:
Code:
root@truenas[~]# zfs list -t snapshot
NAME                                          USED  AVAIL  REFER  MOUNTPOINT
pool/virtual_machines/debian_11-diip9y@test     0B      -  4.09G  -
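For completeness, the snapshot itself was created the usual way; reconstructing from the snapshot name above, the command would have been something like:
Code:
root@truenas[~]# zfs snapshot pool/virtual_machines/debian_11-diip9y@test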
Replicate the snapshot
Code:
root@truenas[~]# zfs send -v pool/virtual_machines/debian_11-diip9y@test | zfs receive pool/virtual_machines/debtest
full send of pool/virtual_machines/debian_11-diip9y@test estimated size is 4.16G
total estimated size is 4.16G
TIME        SENT   SNAPSHOT pool/virtual_machines/debian_11-diip9y@test
22:53:30   33.4M   pool/virtual_machines/debian_11-diip9y@test
22:53:31    204M   pool/virtual_machines/debian_11-diip9y@test
22:53:32    530M   pool/virtual_machines/debian_11-diip9y@test
22:53:33    744M   pool/virtual_machines/debian_11-diip9y@test
22:53:34    978M   pool/virtual_machines/debian_11-diip9y@test
22:53:35   1.21G   pool/virtual_machines/debian_11-diip9y@test
22:53:36   1.46G   pool/virtual_machines/debian_11-diip9y@test
22:53:37   1.75G   pool/virtual_machines/debian_11-diip9y@test
22:53:38   2.00G   pool/virtual_machines/debian_11-diip9y@test
22:53:39   2.19G   pool/virtual_machines/debian_11-diip9y@test
22:53:40   2.33G   pool/virtual_machines/debian_11-diip9y@test
22:53:41   2.42G   pool/virtual_machines/debian_11-diip9y@test
22:53:42   2.53G   pool/virtual_machines/debian_11-diip9y@test
22:53:43   2.72G   pool/virtual_machines/debian_11-diip9y@test
22:53:44   2.98G   pool/virtual_machines/debian_11-diip9y@test
22:53:45   3.29G   pool/virtual_machines/debian_11-diip9y@test
22:53:46   3.50G   pool/virtual_machines/debian_11-diip9y@test
22:53:47   3.76G   pool/virtual_machines/debian_11-diip9y@test
22:53:48   3.92G   pool/virtual_machines/debian_11-diip9y@test
22:53:49   4.07G   pool/virtual_machines/debian_11-diip9y@test
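Right after the receive, a quick sanity check like this (refer and volsize are standard zfs list columns) confirms the new zvol kept the full volsize:
Code:
root@truenas[~]# zfs list -o name,used,refer,volsize pool/virtual_machines/debtest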
Destroy the snapshot and original VM
Code:
zfs destroy pool/virtual_machines/debian_11-diip9y@test
zfs destroy pool/virtual_machines/debian_11-diip9y
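If you are nervous about this step, zfs destroy has a dry-run flag; something like the following prints what would be destroyed without actually doing it:
Code:
root@truenas[~]# zfs destroy -nv pool/virtual_machines/debian_11-diip9y@test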
And finally, the volsize is the same as on the original VM:
Code:
root@truenas[~]# zfs get all pool/virtual_machines/debtest
NAME                           PROPERTY                 VALUE                          SOURCE
pool/virtual_machines/debtest  type                     volume                         -
pool/virtual_machines/debtest  creation                 Mon Feb  6 22:53 2023          -
pool/virtual_machines/debtest  used                     4.09G                          -
pool/virtual_machines/debtest  available                4.40T                          -
pool/virtual_machines/debtest  referenced               4.09G                          -
pool/virtual_machines/debtest  compressratio            1.00x                          -
pool/virtual_machines/debtest  reservation              none                           default
pool/virtual_machines/debtest  volsize                  100G                           local
pool/virtual_machines/debtest  volblocksize             32K                            -
pool/virtual_machines/debtest  checksum                 on                             default
pool/virtual_machines/debtest  compression              off                            default
pool/virtual_machines/debtest  readonly                 off                            default
pool/virtual_machines/debtest  createtxg                11679765                       -
pool/virtual_machines/debtest  copies                   1                              inherited from pool/virtual_machines
pool/virtual_machines/debtest  refreservation           none                           default
pool/virtual_machines/debtest  guid                     16229713511141347148           -
pool/virtual_machines/debtest  primarycache             all                            default
pool/virtual_machines/debtest  secondarycache           all                            default
pool/virtual_machines/debtest  usedbysnapshots          0B                             -
pool/virtual_machines/debtest  usedbydataset            4.09G                          -
pool/virtual_machines/debtest  usedbychildren           0B                             -
pool/virtual_machines/debtest  usedbyrefreservation     0B                             -
pool/virtual_machines/debtest  logbias                  latency                        default
pool/virtual_machines/debtest  objsetid                 30547                          -
pool/virtual_machines/debtest  dedup                    off                            default
pool/virtual_machines/debtest  mlslabel                 none                           default
pool/virtual_machines/debtest  sync                     standard                       default
pool/virtual_machines/debtest  refcompressratio         1.00x                          -
pool/virtual_machines/debtest  written                  0                              -
pool/virtual_machines/debtest  logicalused              4.07G                          -
pool/virtual_machines/debtest  logicalreferenced        4.07G                          -
pool/virtual_machines/debtest  volmode                  default                        default
pool/virtual_machines/debtest  snapshot_limit           none                           default
pool/virtual_machines/debtest  snapshot_count           none                           default
pool/virtual_machines/debtest  snapdev                  hidden                         default
pool/virtual_machines/debtest  context                  none                           default
pool/virtual_machines/debtest  fscontext                none                           default
pool/virtual_machines/debtest  defcontext               none                           default
pool/virtual_machines/debtest  rootcontext              none                           default
pool/virtual_machines/debtest  redundant_metadata       all                            default
pool/virtual_machines/debtest  encryption               off                            default
pool/virtual_machines/debtest  keylocation              none                           default
pool/virtual_machines/debtest  keyformat                none                           default
pool/virtual_machines/debtest  pbkdf2iters              0                              default
pool/virtual_machines/debtest  org.truenas:managedby    192.168.50.167                 inherited from pool/virtual_machines
pool/virtual_machines/debtest  org.freenas:description  dataset for virtual machines   inherited from pool/virtual_machines
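You do not actually need the whole zfs get all dump; a targeted query of just the space-related properties is easier to read:
Code:
root@truenas[~]# zfs get used,referenced,volsize,refreservation,usedbyrefreservation pool/virtual_machines/debtest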
Changed the VM to use the debtest zvol, and everything works perfectly.
So anyway I know I probably wrote a lot for nothing, but hey, gotta learn somehow.
I am sure this has something to do with the way ZFS reports data, but I am still curious: is there a way for TN to report this new volume at its correct size? It might just be my OCD, but I prefer to see what my disk usage is, including the entire VM volume, not just what it is currently taking up.
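One thing I ran across while reading that might be relevant (untested on my end, so treat it as a sketch, not a recommendation): the received zvol has refreservation=none, which makes it effectively sparse, so ZFS only charges it for the blocks actually written. Setting a refreservation equal to the volsize should make the used figure reflect the full 100G:
Code:
root@truenas[~]# zfs set refreservation=100G pool/virtual_machines/debtest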
Thanks, guys, for taking the time to read. If I did something wrong, I am all ears and don't mind criticism.
Dale