Understanding the capacity shown for datasets by tools like df / SMB share properties

Nvious1

Explorer
Joined
Jul 12, 2018
Messages
67
So I am looking to better understand what drives the capacity ceiling a dataset shows in free-space utilities. Some specs of my setup:

Raid-Z2 pool media01 - HEALTHY (7.21 TiB (13%) Used / 47.29 TiB Free)

Under that I have multiple datasets, but for this conversation I will focus on 3. The datasets might vary in permissions mode, but they are all inheriting the pool config.

media - was created as first dataset in the pool
iocage - was created later after seeding initial data from previous NAS
media2 - was created later as well


Using df -h from the shell on FreeNAS, I am trying to understand why there are capacity or total size differences across the different mount points under the same pool.
Code:
Filesystem                                                    Size    Used   Avail Capacity  Mounted on
media01/iocage                                                 30T    4.4M     30T     0%    /mnt/media01/iocage
media01/media                                                  35T    4.5T     30T    13%    /mnt/media01/media
media01/media2                                                 31T    332G     30T     1%    /mnt/media01/media2
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
You want to use zpool list instead
 

Nvious1

Explorer
Joined
Jul 12, 2018
Messages
67
You want to use zpool list instead

Yeah, I am familiar with that command and it gives me the pool-level metrics. I also understand the difference between the pool capacity and the default dataset that is created inside the pool. What I don't understand is why all the sub-datasets under the default one aren't all 35TB but instead show different values. I have no quotas or space reservations on any of them.
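
(For anyone wanting to double-check that from the shell, something like this should list the relevant properties for every dataset under the pool; just a sketch, substitute your own pool name:)
Code:
# show quota/reservation settings for every dataset under media01
zfs get -r quota,refquota,reservation,refreservation media01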
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Avail + Used = Size
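
Applying that to the df output in the first post (the numbers only line up roughly because df rounds the Avail column):
Code:
media01/media:   4.5T used + ~30T avail ≈ 35T "Size"
media01/media2:  332G used + ~30T avail ≈ 31T "Size"
media01/iocage:  4.4M used + ~30T avail ≈ 30T "Size"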
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
df -h is not the command you should use with ZFS. df will show you the correct amount of consumed and available space on a ZFS filesystem, but not a correct "size" for it, because (unless you've set a quota on a ZFS filesystem) there is no real "size." ZFS handles space differently than traditional filesystems: all datasets draw on the pool's shared free space, so df simply reports each dataset's "Size" as its own Used plus the pool-wide Avail, which is why datasets holding more data appear "bigger." If df were to display a fixed size for every dataset, it would have to show them all at the full pool size, totaling several times the capacity of your disks.

The proper command to use is 'zfs list'. Example:
Code:
root@clnas:~ # zfs list
NAME                                                            USED  AVAIL  REFER  MOUNTPOINT
zpool1                                                         12.1T  32.9T   205K  /mnt/zpool1
zpool1/endpoint                                                23.8G  32.9T  23.8G  /mnt/zpool1/endpoint
zpool1/ftpstore                                                 141M  32.9T   141M  /mnt/zpool1/ftpstore
zpool1/image                                                    139G  32.9T   139G  /mnt/zpool1/image
zpool1/labstorage                                               286G  32.9T   286G  /mnt/zpool1/labstorage
zpool1/limsatarchive                                           3.51G  32.9T  3.51G  /mnt/zpool1/limsatarchive
zpool1/macstorage                                              10.4G  32.9T  10.4G  /mnt/zpool1/macstorage
zpool1/rapiddna                                                14.0G  32.9T  14.0G  /mnt/zpool1/rapiddna
zpool1/sabfiles                                                2.39T  32.9T  2.39T  /mnt/zpool1/sabfiles
zpool1/sqlbackups                                              3.69T  32.9T  3.69T  /mnt/zpool1/sqlbackups
zpool1/vcenter                                                 70.3G  32.9T  70.3G  /mnt/zpool1/vcenter
zpool1/veeamRI                                                 5.46T  32.9T  5.46T  /mnt/zpool1/veeamRI

root@clnas:~ # df -h
Filesystem                                                       Size    Used   Avail Capacity  Mounted on
freenas-boot/ROOT/11.2-RELEASE-U1                                101G    783M    100G     1%    /
devfs                                                            1.0K    1.0K      0B   100%    /dev
tmpfs                                                             16G     11M     16G     0%    /etc
tmpfs                                                            2.0G    8.0K    2.0G     0%    /mnt
tmpfs                                                             11T     95M     11T     0%    /var
fdescfs                                                          1.0K    1.0K      0B   100%    /dev/fd
freenas-boot/.system                                             100G    3.0M    100G     0%    /var/db/system
freenas-boot/.system/cores                                       100G     44M    100G     0%    /var/db/system/cores
freenas-boot/.system/samba4                                      100G    1.4M    100G     0%    /var/db/system/samba4
freenas-boot/.system/syslog-292733c495b14ae7a81537f055e446c9     100G    2.1M    100G     0%    /var/db/system/syslog-292733c495b14ae7a81537f055e446c9
freenas-boot/.system/rrd-292733c495b14ae7a81537f055e446c9        100G     31M    100G     0%    /var/db/system/rrd-292733c495b14ae7a81537f055e446c9
freenas-boot/.system/configs-292733c495b14ae7a81537f055e446c9    100G    100M    100G     0%    /var/db/system/configs-292733c495b14ae7a81537f055e446c9
freenas-boot/.system/webui                                       100G     29K    100G     0%    /var/db/system/webui
tmpfs                                                            1.0G    143M    881M    14%    /var/db/collectd/rrd
zpool1                                                            33T    205K     33T     0%    /mnt/zpool1
zpool1/vcenter                                                    33T     70G     33T     0%    /mnt/zpool1/vcenter
zpool1/sabfiles                                                   35T    2.4T     33T     7%    /mnt/zpool1/sabfiles
zpool1/veeamRI                                                    38T    5.5T     33T    14%    /mnt/zpool1/veeamRI
zpool1/sqlbackups                                                 37T    3.7T     33T    10%    /mnt/zpool1/sqlbackups
zpool1/rapiddna                                                   33T     14G     33T     0%    /mnt/zpool1/rapiddna
zpool1/macstorage                                                 33T     10G     33T     0%    /mnt/zpool1/macstorage
zpool1/labstorage                                                 33T    286G     33T     1%    /mnt/zpool1/labstorage
zpool1/limsatarchive                                              33T    3.5G     33T     0%    /mnt/zpool1/limsatarchive
zpool1/image                                                      33T    139G     33T     0%    /mnt/zpool1/image
zpool1/ftpstore                                                   33T    141M     33T     0%    /mnt/zpool1/ftpstore
zpool1/endpoint                                                   33T     24G     33T     0%    /mnt/zpool1/endpoint
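
If you want more detail than the plain listing, zfs list can also break down where the used space goes per dataset (again just a sketch; point it at your own pool):
Code:
# AVAIL plus USED broken out into USEDSNAP, USEDDS, USEDREFRESERV and USEDCHILD
zfs list -r -o space zpool1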
 

Nvious1

Explorer
Joined
Jul 12, 2018
Messages
67
Avail + Used = Size
Thanks for explaining how the size is calculated

The proper command to use is 'zfs list'.
Thanks, this does help piece together what the real usage and availability end up being among all the datasets in the pool.

Now I need to spin up a thread in the jails subforum on handling free-space tracking between the jail root and a dataset mounted inside the jail.
 