The automatic snapshot task keeps failing. There is one raidz1 pool with a single zvol on it that uses 80% of the available space.
pool: SSD7
state: ONLINE
scan: scrub repaired 0B in 00:12:15 with 0 errors on Sun Nov 28 00:12:15 2021
config:
NAME STATE READ WRITE CKSUM
SSD7 ONLINE 0 0 0
  raidz1-0 ONLINE 0 0 0
    gptid/a9b5707e-3327-11ec-bae3-78ac4459de7c ONLINE 0 0 0
    gptid/a9af0f6c-3327-11ec-bae3-78ac4459de7c ONLINE 0 0 0
    gptid/a9d14a26-3327-11ec-bae3-78ac4459de7c ONLINE 0 0 0
    gptid/a9c869e5-3327-11ec-bae3-78ac4459de7c ONLINE 0 0 0
    gptid/a9da2790-3327-11ec-bae3-78ac4459de7c ONLINE 0 0 0
The volume:
zfs list -ro space SSD7/SSD7-VOL1
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
SSD7/SSD7-VOL1 9.11T 11.3T 0B 4.95T 6.33T 0B
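If I read this output correctly, USED (11.3T) is simply USEDDS (4.95T) plus USEDREFRESERV (6.33T), so most of what shows up as "used" is the reservation rather than data actually written; only about 4.95 TiB of real data is on the volume.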
zfs get -r all SSD7/SSD7-VOL1
NAME PROPERTY VALUE SOURCE
SSD7/SSD7-VOL1 type volume -
SSD7/SSD7-VOL1 creation Mon Dec 6 17:05 2021 -
SSD7/SSD7-VOL1 used 11.3T -
SSD7/SSD7-VOL1 available 9.11T -
SSD7/SSD7-VOL1 referenced 4.96T -
SSD7/SSD7-VOL1 compressratio 1.44x -
SSD7/SSD7-VOL1 reservation none default
SSD7/SSD7-VOL1 volsize 11.2T local
SSD7/SSD7-VOL1 volblocksize 32K -
SSD7/SSD7-VOL1 checksum on default
SSD7/SSD7-VOL1 compression lz4 inherited from SSD7
SSD7/SSD7-VOL1 readonly off default
SSD7/SSD7-VOL1 createtxg 1339912 -
SSD7/SSD7-VOL1 copies 1 default
SSD7/SSD7-VOL1 refreservation 11.3T local
SSD7/SSD7-VOL1 guid 2007483176272222030 -
SSD7/SSD7-VOL1 primarycache all default
SSD7/SSD7-VOL1 secondarycache all default
SSD7/SSD7-VOL1 usedbysnapshots 0B -
SSD7/SSD7-VOL1 usedbydataset 4.96T -
SSD7/SSD7-VOL1 usedbychildren 0B -
SSD7/SSD7-VOL1 usedbyrefreservation 6.33T -
SSD7/SSD7-VOL1 logbias latency default
SSD7/SSD7-VOL1 objsetid 14289 -
SSD7/SSD7-VOL1 dedup off default
SSD7/SSD7-VOL1 mlslabel none default
SSD7/SSD7-VOL1 sync standard default
SSD7/SSD7-VOL1 refcompressratio 1.44x -
SSD7/SSD7-VOL1 written 4.96T -
SSD7/SSD7-VOL1 logicalused 6.59T -
SSD7/SSD7-VOL1 logicalreferenced 6.59T -
SSD7/SSD7-VOL1 volmode default default
SSD7/SSD7-VOL1 snapshot_limit none default
SSD7/SSD7-VOL1 snapshot_count none default
SSD7/SSD7-VOL1 snapdev hidden default
SSD7/SSD7-VOL1 context none default
SSD7/SSD7-VOL1 fscontext none default
SSD7/SSD7-VOL1 defcontext none default
SSD7/SSD7-VOL1 rootcontext none default
SSD7/SSD7-VOL1 redundant_metadata all default
SSD7/SSD7-VOL1 encryption off default
SSD7/SSD7-VOL1 keylocation none default
SSD7/SSD7-VOL1 keyformat none default
SSD7/SSD7-VOL1 pbkdf2iters 0 default
SSD7/SSD7-VOL1 org.truenas:managedby 10.250.0.11 local
The free space in the pool is still 2.77 TiB (the remaining 20%).
Because the zvol (SSD7-VOL1) has only one iSCSI target on it, which uses the complete space of that volume, there will never be more than 11.2 TiB on the volume.
Why is the remaining 20% not enough to facilitate snapshots?
I have done some research and saw that the refreservation value has something to do with there being no space available for snapshots.
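From what I found in the zfs(8) man page, a snapshot of a zvol with a refreservation is only allowed if the pool has enough free space outside that reservation to cover the bytes the zvol currently references, so the guarantee can still be honored even if every block gets overwritten after the snapshot. Applied to my numbers, the snapshot would need roughly 4.96 TiB of free pool space, but only about 2.77 TiB is left, which would explain the failures. These are the values I think matter for that check (my interpretation, please correct me if I am wrong):
# free pool space outside the reservation must cover 'referenced'
# for the snapshot to be allowed (my reading of the man page)
zfs get -o property,value referenced,refreservation,available SSD7/SSD7-VOL1
zpool list -o name,size,alloc,free SSD7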
I am not looking for ways to bend best practices or to work around perfectly fine default values. I can recreate the volume if needed, so I can reconfigure whatever settings are necessary, along the lines of the sketch below.
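For example, if giving up the space guarantee is acceptable (the zvol then becomes thin-provisioned, so I would have to monitor pool usage myself to make sure it never actually fills up), I believe something along these lines would either clear the reservation or recreate the volume as sparse. This is just a sketch of what I am considering, not something I have tested:
# option 1: drop the reservation on the existing zvol (thin provisioning)
zfs set refreservation=none SSD7/SSD7-VOL1
# option 2: recreate the zvol as a sparse volume (-s skips the refreservation),
# after backing up and destroying the current one; values mirror the current volume
zfs create -s -V 11.2T -o volblocksize=32K -o compression=lz4 SSD7/SSD7-VOL1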
Just looking for some guidance to make automatic snapshots work, once and for all :)
Could anyone guide me?