USEDCHILD size doesn't add up

tofof
Cadet · Joined: Aug 29, 2015 · Messages: 2
Problem in a nutshell: I have 11.8T used by my tank/media dataset, along with some 400M used by .system and 47G used by iocage. As I understand it, tank's USEDCHILD should then be about 12T, but instead it shows as 20.8T.

My available space is listed as only 2.5T. I have no refreservation, and my snapshots are a sane size (their combined total is in the GB range, not TB).

How am I ending up with 21T used with only 12T of data actually in the pool?
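For reference, one way to redo that arithmetic in exact bytes rather than rounded sizes is to sum the USED of tank's direct children and compare it to tank's usedbychildren property (which is what the USEDCHILD column reports). This is just a sketch; the dataset names are the ones from my pool below.

Code:
# Sum the space charged to tank's direct children, in exact bytes.
zfs get -Hp -o value used tank/.system tank/backups tank/iocage tank/media | \
    awk '{sum += $1} END {printf "%.2f TiB\n", sum / 1024^4}'
# What tank itself reports as USEDCHILD.
zfs get -Hp -o value usedbychildren tank | awk '{printf "%.2f TiB\n", $1 / 1024^4}'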

zfs list -o space
Code:
root@rainbow-:~ # zfs list -o space
NAME                                                   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
tank                                                   2.49T  20.8T        0B    481M             0B      20.8T
tank/.system                                           2.49T   378M        0B   4.39M             0B       374M
tank/.system/configs-3681405ce2b146f19775096dc8d04355  2.49T   244M        0B    244M             0B         0B
tank/.system/configs-4a7da323489644858925be23483a2e70  2.49T   176K        0B    176K             0B         0B
tank/.system/cores                                     1024M   208K        0B    208K             0B         0B
tank/.system/rrd-3681405ce2b146f19775096dc8d04355      2.49T  86.5M        0B   86.5M             0B         0B
tank/.system/rrd-4a7da323489644858925be23483a2e70      2.49T  13.3M        0B   13.3M             0B         0B
tank/.system/samba4                                    2.49T  9.26M     6.44M   2.83M             0B         0B
tank/.system/services                                  2.49T   192K        0B    192K             0B         0B
tank/.system/syslog-3681405ce2b146f19775096dc8d04355   2.49T  19.7M        0B   19.7M             0B         0B
tank/.system/syslog-4a7da323489644858925be23483a2e70   2.49T   639K        0B    639K             0B         0B
tank/.system/webui                                     2.49T   176K        0B    176K             0B         0B
tank/backups                                           2.49T   192K        0B    192K             0B         0B
tank/iocage                                            2.49T  47.0G     88.6M   30.1M             0B      46.9G
tank/iocage/download                                   2.49T   364M      128K    176K             0B       363M
tank/iocage/download/11.2-RELEASE                      2.49T   125M        0B    125M             0B         0B
tank/iocage/download/12.3-RELEASE                      2.49T   238M        0B    238M             0B         0B
tank/iocage/images                                     2.49T   176K        0B    176K             0B         0B
tank/iocage/jails                                      2.49T  45.1G      575K    192K             0B      45.1G
tank/iocage/jails/emby                                 2.49T  24.1G     12.2M    607K             0B      24.1G
tank/iocage/jails/emby/root                            2.49T  24.1G     21.7G   2.41G             0B         0B
tank/iocage/jails/kodi_database                        2.49T  11.1G     7.98M    208K             0B      11.1G
tank/iocage/jails/kodi_database/root                   2.49T  11.1G     10.3G    849M             0B         0B
tank/iocage/jails/transmission                         2.49T  8.45G     13.5M    631K             0B      8.44G
tank/iocage/jails/transmission-manual                  2.49T  1.46G     5.31M    224K             0B      1.45G
tank/iocage/jails/transmission-manual/root             2.49T  1.45G      831M    656M             0B         0B
tank/iocage/jails/transmission/root                    2.49T  8.44G     7.22G   1.21G             0B         0B
tank/iocage/log                                        2.49T  1.74M     1.52M    224K             0B         0B
tank/iocage/releases                                   2.49T  1.40G      128K    176K             0B      1.40G
tank/iocage/releases/11.2-RELEASE                      2.49T   576M        0B    176K             0B       576M
tank/iocage/releases/11.2-RELEASE/root                 2.49T   576M     98.0M    478M             0B         0B
tank/iocage/releases/12.3-RELEASE                      2.49T   862M        0B    192K             0B       862M
tank/iocage/releases/12.3-RELEASE/root                 2.49T   862M      192K    862M             0B         0B
tank/iocage/templates                                  2.49T   751K      575K    176K             0B         0B
tank/media                                             2.49T  11.8T     13.0G   11.8T             0B         0B


zpool list -v
Code:
root@rainbow-:~ # zpool list -v
NAME                                             SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
freenas-boot                                       7G  3.21G  3.79G        -         -     4%    45%  1.00x  DEGRADED  -
  da6p2                                            7G  3.21G  3.79G        -         -     4%  45.8%      -  DEGRADED
tank                                            27.2T  23.3T  3.93T        -         -    60%    85%  2.48x    ONLINE  /mnt
  raidz2                                        27.2T  23.3T  3.93T        -         -    60%  85.6%      -    ONLINE
    gptid/967a6dfe-1e5a-11e8-bf00-d43d7ee19b8f      -      -      -        -         -      -      -      -    ONLINE
    gptid/dd5cf9e1-2a00-11e8-b70a-d43d7ee19b8f      -      -      -        -         -      -      -      -    ONLINE
    gptid/ce83b023-ffcd-11e8-a47d-d43d7ee19b8f      -      -      -        -         -      -      -      -    ONLINE
    gptid/410c1e66-31fa-11e8-8216-d43d7ee19b8f      -      -      -        -         -      -      -      -    ONLINE
    gptid/09acd21f-4e6d-11e5-b9d9-d43d7ee19b8f      -      -      -        -         -      -      -      -    ONLINE
    gptid/5068ac61-4f60-11e5-85e0-d43d7ee19b8f      -      -      -        -         -      -      -      -    ONLINE
 

tofof
Cadet · Joined: Aug 29, 2015 · Messages: 2
Solution: deleted datasets did NOT delete their accompanying snapshots, for whatever reason. Those snapshots were not listed anywhere obvious (as visible above, there is no TB-scale USEDSNAP), but manually deleting all snapshots dropped usage to just under 12.0 TB.
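The cleanup itself was nothing fancier than listing every snapshot in the pool and destroying them one by one, roughly along these lines (review the list before feeding it to zfs destroy):

Code:
# Review the full snapshot list before deleting anything.
zfs list -H -t snapshot -o name -r tank

# Once the list looks right, destroy them one at a time.
zfs list -H -t snapshot -o name -r tank | xargs -n 1 zfs destroy -v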
 

anodos
Sambassador · iXsystems · Joined: Mar 6, 2014 · Messages: 9,554
<edited out response to wrong thread>
Destroying a ZFS dataset will destroy all associated snapshots, BTW, and is not easily reversed, so please exercise some caution there.
 
Joined: Oct 22, 2019 · Messages: 3,641
deleted datasets did NOT delete accompanying snapshots for whatever reason.
That can't be possible. A snapshot is a property of its dataset; the two are completely intertwined. You cannot have snapshots of the dataset mydata without the actual dataset mydata existing. Can you demonstrate the existence of these snapshots for a dataset that doesn't exist?
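For instance, something along these lines would surface any snapshot whose parent dataset supposedly no longer exists (the temp file names are just an example; the pool name is taken from your output):

Code:
# Parent datasets named by the snapshots...
zfs list -H -t snapshot -o name -r tank | cut -d@ -f1 | sort -u > /tmp/snap-parents
# ...versus the datasets that actually exist.
zfs list -H -o name -r tank | sort > /tmp/datasets
# Anything printed here is a snapshot parent with no matching dataset.
comm -23 /tmp/snap-parents /tmp/datasets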

but manually deleting all snapshots dropped me to just under 12.0 TB used.
What exactly did you do? (In the command-line or via the GUI?)

---

My hunch is you destroyed snapshots for all existing datasets, which then liberated the extra 10 TB.

You cannot feasibly calculate how much space is being used up by snapshots when you're dealing with multiple snapshots, because their data overlaps. The per-snapshot numbers can be deceiving.

For example, snapshot001 might claim to be using up 1GB, snapshot002 might claim to be using up 2GB, and snapshot003 might claim to be using up 1GB. Yet when you destroy all three at once, you free up 50 GB. This is because the "used" space of each individual snapshot is approximated by comparing its unique records/data against everything else in the dataset: the live filesystem and the other snapshots. The records it shares with other snapshots are not considered "unique", so they don't show up in any single snapshot's "used" figure.
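You can see that gap directly by comparing the per-snapshot numbers with the dataset's usedbysnapshots property, which is the total that all of its snapshots hold down collectively. As a sketch, using tank/media from your output:

Code:
# "used" per snapshot: only the blocks unique to that one snapshot.
zfs list -H -t snapshot -d 1 -o name,used tank/media

# Space that would be freed if every snapshot of the dataset were destroyed.
zfs get -H -o value usedbysnapshots tank/media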

To get more accurate numbers, you can make use of the -n (dry-run) and -v (verbose) flags and the % range syntax when issuing the destroy command. I would only do this as an unprivileged user, to make sure you can't accidentally destroy important snapshots if you execute the wrong command.

To see how much space would be freed up, here is an example:
Code:
zfs destroy -n -v mypool/mydata@snap001%snap009


This will simulate the destruction of all snapshots from snap001 through snap009, including every snapshot created in between (the range follows their creation order).
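If you want to preview reclaiming everything at once, the same range syntax with both endpoints left empty should match every snapshot of the dataset; for example, against the tank/media dataset from this thread:

Code:
# Dry run: estimate what destroying ALL snapshots of tank/media would free.
# Nothing is actually removed as long as -n is present.
zfs destroy -n -v tank/media@%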

---

Here's a real-life example, in which I demonstrate this with three consecutive snapshots on one of my datasets. I changed the names for privacy reasons. Keep in mind, there are no other snapshots in between these three.

Code:
zfs destroy -n -v mypool/datadump@keepsnap-20191130.0000              
 would destroy mypool/datadump@keepsnap-20191130.0000
 would reclaim 1.11G

zfs destroy -n -v mypool/datadump@keepsnap-20200430.0000
 would destroy mypool/datadump@keepsnap-20200430.0000
 would reclaim 806M

zfs destroy -n -v mypool/datadump@keepsnap-20200606.0000              
 would destroy mypool/datadump@keepsnap-20200606.0000
 would reclaim 944M

zfs destroy -n -v mypool/datadump@keepsnap-20191130.0000%keepsnap-20200606.0000
 would destroy mypool/datadump@keepsnap-20191130.0000
 would destroy mypool/datadump@keepsnap-20200430.0000
 would destroy mypool/datadump@keepsnap-20200606.0000
 would reclaim 85.8G


Yes. You read those numbers correctly. Read it again.

1.11 GB plus 806 MB plus 944 MB equals... 85.8 GB?!?!

But it's true. :wink: Because of the reasons I explained above.
 