Cannot create snapshot - out of space

vicpassy

Cadet
Joined
Apr 12, 2021
Messages
7
Hello,
I'm using TrueNAS.

Version:
TrueNAS-12.0-U3

I know this question is frequently asked.

I'm trying to replicate to another TrueNAS via SSH.

I can't create a snapshot of my dataset.

Am I doing something wrong?

I'm getting this error:

Code:
middlewared.service_exception.CallError: [EFAULT] Failed to snapshot tank/XCP001-ISCSI@manual-2021-11-22_15-08: out of space



My pool is tank and my zvol is tank/XCP001-ISCSI

Code:
root@truenas003[~]# zpool list tank
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  16.4T  6.02T  10.3T        -         -    41%    36%  1.00x    ONLINE  /mnt



Code:
root@truenas003[~]# zfs list -o space tank/XCP001-ISCSI
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank/XCP001-ISCSI  6.55T  7.11T        0B   4.00T          3.11T         0B




Code:
root@truenas003[~]# zfs list -o space tank
NAME  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank  3.44T  7.11T        0B    192K             0B      7.11T
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Check your quota using zfs get quota tank/XCP001-ISCSI
If that's not it, please post the entire output using zfs get all tank/XCP001-ISCSI
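
If you want, all four space-limiting properties can be checked in one shot:

Code:
zfs get quota,refquota,reservation,refreservation tank/XCP001-ISCSI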

Oh, and by the way, this sub-forum is for the older FreeNAS. Users of TrueNAS Core & SCALE have other sub-forums. But, this particular question and answer applies to all.
 

vicpassy

Cadet
Joined
Apr 12, 2021
Messages
7
The volume XCP001-ISCSI doesn't have any quota:



Code:
root@truenas003[~]# zfs get quota tank/XCP001-ISCSI
NAME               PROPERTY  VALUE  SOURCE
tank/XCP001-ISCSI  quota     -      -



Code:
root@truenas003[~]# zfs list -o space tank
NAME  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank  3.44T  7.11T        0B    192K             0B      7.11T



Code:
root@truenas003[~]# zfs list -o space tank/XCP001-ISCSI
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank/XCP001-ISCSI  6.50T  7.11T        0B   4.05T          3.06T         0B



I have 6 drives in RAIDZ2 in the pool tank.

According to the ZFS calculator I have 10.49 TiB of usable storage capacity (16.37 TiB raw capacity).

The pool has 3.44 TiB available.

I can't imagine why I cannot make a snapshot of tank/XCP001-ISCSI.


All the numbers are confirmed by the commands below.


Code:
root@truenas003[~]# zpool list tank                   
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank  16.4T  6.09T  10.3T        -         -    42%    37%  1.00x    ONLINE  /mnt


root@truenas003[~]# zfs list -o space tank
NAME  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank  3.44T  7.11T        0B    192K             0B      7.11T



root@truenas003[~]# zfs list -r -o space tank
NAME                                                    AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank                                                    3.44T  7.11T        0B    192K             0B      7.11T
tank/.system                                            3.44T   858M        0B    774M             0B      84.5M
tank/.system/configs-17a88b624aa64e01b68939563b5fcb8c   3.44T  31.9M        0B   31.9M             0B         0B
tank/.system/cores                                      1024M   192K        0B    192K             0B         0B
tank/.system/rrd-17a88b624aa64e01b68939563b5fcb8c       3.44T  50.3M        0B   50.3M             0B         0B
tank/.system/samba4                                     3.44T   368K        0B    368K             0B         0B
tank/.system/services                                   3.44T   192K        0B    192K             0B         0B
tank/.system/syslog-17a88b624aa64e01b68939563b5fcb8c    3.44T  1.40M        0B   1.40M             0B         0B
tank/.system/webui                                      3.44T   192K        0B    192K             0B         0B
tank/XCP001-ISCSI                                       6.50T  7.11T        0B   4.05T          3.06T         0B


[attached screenshot]
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Please post the entire output using zfs get all tank/XCP001-ISCSI

What you listed is useful, but it is missing ZFS dataset attributes like "refquota". I don't fully understand the "refquota" and "refreservation" attributes, but I do recall someone getting bitten on used space by a leftover value in "refreservation".
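
In case a leftover refreservation does turn out to be the culprit, the sketch below shows how it could be cleared. That is only a lead to investigate, not a recommendation; removing the refreservation on a production iSCSI zvol drops the guarantee that the full volsize can always be written:

Code:
# hypothetical cleanup, ONLY if a stale refreservation is to blame
# (drops the guarantee that the whole volsize can always be written)
zfs set refreservation=none tank/XCP001-ISCSI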
 

vicpassy

Cadet
Joined
Apr 12, 2021
Messages
7
OK, here is the result.

If you see something wrong, please let me know.

Thanks.

Code:
root@truenas003[~]# zfs get all tank/XCP001-ISCSI
NAME               PROPERTY                 VALUE                    SOURCE
tank/XCP001-ISCSI  type                     volume                   -
tank/XCP001-ISCSI  creation                 Fri Apr 16 16:18 2021    -
tank/XCP001-ISCSI  used                     7.11T                    -
tank/XCP001-ISCSI  available                6.41T                    -
tank/XCP001-ISCSI  referenced               4.14T                    -
tank/XCP001-ISCSI  compressratio            1.41x                    -
tank/XCP001-ISCSI  reservation              none                     default
tank/XCP001-ISCSI  volsize                  7T                       local
tank/XCP001-ISCSI  volblocksize             16K                      -
tank/XCP001-ISCSI  checksum                 on                       default
tank/XCP001-ISCSI  compression              lz4                      local
tank/XCP001-ISCSI  readonly                 off                      default
tank/XCP001-ISCSI  createtxg                403                      -
tank/XCP001-ISCSI  copies                   1                        default
tank/XCP001-ISCSI  refreservation           7.11T                    local
tank/XCP001-ISCSI  guid                     1460418923813688160      -
tank/XCP001-ISCSI  primarycache             all                      default
tank/XCP001-ISCSI  secondarycache           all                      default
tank/XCP001-ISCSI  usedbysnapshots          0B                       -
tank/XCP001-ISCSI  usedbydataset            4.14T                    -
tank/XCP001-ISCSI  usedbychildren           0B                       -
tank/XCP001-ISCSI  usedbyrefreservation     2.97T                    -
tank/XCP001-ISCSI  logbias                  latency                  default
tank/XCP001-ISCSI  objsetid                 86                       -
tank/XCP001-ISCSI  dedup                    off                      default
tank/XCP001-ISCSI  mlslabel                 none                     default
tank/XCP001-ISCSI  sync                     standard                 default
tank/XCP001-ISCSI  refcompressratio         1.41x                    -
tank/XCP001-ISCSI  written                  4.14T                    -
tank/XCP001-ISCSI  logicalused              4.75T                    -
tank/XCP001-ISCSI  logicalreferenced        4.75T                    -
tank/XCP001-ISCSI  volmode                  default                  default
tank/XCP001-ISCSI  snapshot_limit           none                     default
tank/XCP001-ISCSI  snapshot_count           none                     default
tank/XCP001-ISCSI  snapdev                  hidden                   default
tank/XCP001-ISCSI  context                  none                     default
tank/XCP001-ISCSI  fscontext                none                     default
tank/XCP001-ISCSI  defcontext               none                     default
tank/XCP001-ISCSI  rootcontext              none                     default
tank/XCP001-ISCSI  redundant_metadata       all                      default
tank/XCP001-ISCSI  encryption               off                      default
tank/XCP001-ISCSI  keylocation              none                     default
tank/XCP001-ISCSI  keyformat                none                     default
tank/XCP001-ISCSI  pbkdf2iters              0                        default
tank/XCP001-ISCSI  org.truenas:managedby    192.168.50.252           local
tank/XCP001-ISCSI  org.freenas:description  IScsi XCP                local
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey @vicpassy,

That is a good one you have here...

So know that the space available announced on your pool is "wrong" in the sense that it shows how much raw free space you have in total. The thing is, because of Raid-Z2, for whatever you save in the pool, ZFS must add two sets of parity that also consume space. As such, if it says you have 600G of free space, you in fact have room for only 400G of data (4 data disks out of 6 in your case), and the other 200G will be used for parity. This is an over-simplification, but it illustrates how that specific measure can be counter-intuitive. Should a pool be made of mirrors, the entire free space announced is usable space, because there is no parity to add for mirrors.
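
As a quick back-of-the-envelope (a sketch only; the real overhead also depends on padding and block sizes):

Code:
# 6-wide Raid-Z2 = 4 data disks + 2 parity disks
#   usable ~= raw_free * data_disks / total_disks
#   600G raw free * 4/6 = 400G of payload, 200G of parity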

So that means:
Code:
root@truenas003[~]# zfs list -o space tank
NAME  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank  3.44T  7.11T        0B    192K             0B      7.11T

4/6 * 3.44 T = 2.29 T usable

So indeed, that is not enough to store the maximum size your zvol snapshot could grow to. But I would not have expected it to prevent the snapshot from being taken at all. It may be a protection to ensure the pool will not be loaded to 100%, but that is only a guess... By default, all snapshots start out basically empty, using 0 space.
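
(To illustrate the copy-on-write behaviour on a hypothetical test dataset; the names below are made up, not from your system.)

Code:
# a fresh snapshot is created instantly and pins no extra blocks
zfs snapshot pool/test@before

# USED shows ~0B until pool/test starts diverging from @before
zfs list -t snapshot -o space -r pool/test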

Code:
root@truenas003[~]# zfs list -o space tank
NAME  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank  3.44T  7.11T        0B    192K             0B      7.11T

How can that command return 3.44T available now... Do you have any snapshots?
zfs list -t snapshot -o space

Any other datasets?

So please post these results and I will keep searching for what may be wrong here...
 

vicpassy

Cadet
Joined
Apr 12, 2021
Messages
7
Hi @Heracles,

Thanks for your help!


So the free space does not have the double parity of RAIDZ2 already set aside; the parity data will come out of the available space.


So what you mean is that I shouldn't have created the volume at up to 70% of the total... I should have done the 4/6 math when I created the volume...


I don't have any other volume or snapshot, only the zvol tank/XCP001-ISCSI


Here are all the volumes on my storage:

Code:
root@truenas003[~]# zfs list
NAME                                                    USED  AVAIL     REFER  MOUNTPOINT
boot-pool                                              2.22G  90.3G       24K  none
boot-pool/ROOT                                         2.22G  90.3G       24K  none
boot-pool/ROOT/12.0-U3                                 2.22G  90.3G     1.16G  /
boot-pool/ROOT/Initial-Install                            1K  90.3G     1.06G  legacy
boot-pool/ROOT/default                                  232K  90.3G     1.06G  legacy
tank                                                   7.11T  3.44T      192K  /mnt/tank
tank/.system                                            859M  3.44T      774M  legacy
tank/.system/configs-17a88b624aa64e01b68939563b5fcb8c  33.1M  3.44T     33.1M  legacy
tank/.system/cores                                      192K  1024M      192K  legacy
tank/.system/rrd-17a88b624aa64e01b68939563b5fcb8c      50.3M  3.44T     50.3M  legacy
tank/.system/samba4                                     368K  3.44T      368K  legacy
tank/.system/services                                   192K  3.44T      192K  legacy
tank/.system/syslog-17a88b624aa64e01b68939563b5fcb8c   1.42M  3.44T     1.42M  legacy
tank/.system/webui                                      192K  3.44T      192K  legacy
tank/XCP001-ISCSI                                      7.11T  6.41T     4.14T  -




root@truenas003[~]# zfs list -t snapshot -o space tank
no datasets available


root@truenas003[~]# zfs list -o space tank
NAME  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank  3.44T  7.11T        0B    192K             0B      7.11T




 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hi again,

Thanks for the extra data. About the command to list snapshots, please run it without the pool name at the end, as I wrote it in my previous post:

zfs list -t snapshot -o space

So, let's try to sum up all the measurements. I will compare them with my server Hades, which also uses 3TB drives in RaidZ2 (5 of them instead of 6).

zpool list
Returns a total size of 13.6 T for me and 16.4 T on your side. So this is clearly the total raw space, without subtracting the space needed for parity. Should we factor in the parity, my usable size is about 3/5 * 13.6 = 8.1 T and yours about 4/6 * 16.4 = 10.8 T.

so what you mean is that I shouldn't have created the volume at up to 70% of the total... I should have done the 4/6 math when I created the volume...
Actually, you are right at the 70% mark: these 7.1 TB represent about 70% of your 10.8 T. You should be good here.

zfs list -o space poolname
Returns 4.8 T used and 3 T free for me, while yours returns 7.1 T used and 3.4 T free. So these totals are to be matched against the usable space as evaluated just above. That means you actually have 3.4 T of usable space, and not 4/6 of that as I wrote before (my bad... looks like I got confused there).
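
To recap the semantics of the two commands, using your numbers:

Code:
# zpool list reports raw space across all disks, parity included
zpool list tank           # SIZE 16.4T, FREE 10.3T  (raw)

# zfs list AVAIL already accounts for RaidZ2 parity, so it is the
# number that matters when asking "can I still write X?"
zfs list -o space tank    # AVAIL 3.44T  (usable as-is)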

Still, why does your system complain that there is no space left for a snapshot... Nothing is marked as read-only in what you posted; a snapshot starts at basically 0, so it fits easily. 3 T and 30% left is plenty to avoid the problems of a fully loaded pool... Still searching for an explanation here...
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
@vicpassy The screenshots in here do not look anything like TrueNAS 12 at all. What exactly are you running?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I have to agree with @Heracles: I don't see why you can't create a snapshot.

However, you did not list the snapshots correctly. Use either of these two syntaxes:

zfs list -t snapshot -o space
or
zfs list -t snapshot -o space -r tank
 

vicpassy

Cadet
Joined
Apr 12, 2021
Messages
7
Hi @Heracles,

I'm wondering if I will have to manually migrate my 100 VMs to a fresh-install TrueNAS server that I could then sync to another TrueNAS as a backup... it would be a pain in the ass... so I still have hope.


Here are the results.

I have only one data pool in my system...

Code:
root@truenas003[~]# zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool  95.5G  2.22G  93.3G        -         -     0%     2%  1.00x    ONLINE  -
tank       16.4T  6.31T  10.0T        -         -    44%    38%  1.00x    ONLINE  /mnt



Code:
root@truenas003[~]# zfs list -t snapshot -o space
NAME                                        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
boot-pool/ROOT/12.0-U3@2021-04-14-15:32:07      -  1.70M         -       -              -          -
boot-pool/ROOT/12.0-U3@2021-04-14-09:32:45      -  1.79M         -       -              -          -



No snapshots here...

Code:
root@truenas003[~]# zfs list -t snapshot -o space -r tank
no datasets available






Code:
root@truenas003[~]# zfs list -o space tank     
NAME  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank  3.44T  7.11T        0B    192K             0B      7.11T



Code:
root@truenas003[~]# zfs list -o space tank/XCP001-ISCSI
NAME               AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank/XCP001-ISCSI  6.35T  7.11T        0B   4.20T          2.91T         0B
 