Nvious1
Explorer
- Joined
- Jul 12, 2018
- Messages
- 67
So I am looking to better understand what drives a dataset's capacity ceiling, using free-space utilities. Some specs of my setup:
Raid-Z2 pool media01 - HEALTHY (7.21 TiB (13%) Used / 47.29 TiB Free)
Under that I have multiple datasets, but for this conversation I will focus on 3. The datasets might vary in permissions mode, but they are all inheriting the pool config.
media - was created as first dataset in the pool
iocage - was created later after seeding initial data from previous NAS
media2 - was created later as well
Using df -h from the shell on FreeNAS, I am trying to understand why there are capacity (total size) differences between the different mount points under the same pool.
Code:
Filesystem        Size   Used  Avail  Capacity  Mounted on
media01/iocage     30T   4.4M    30T        0%  /mnt/media01/iocage
media01/media      35T   4.5T    30T       13%  /mnt/media01/media
media01/media2     31T   332G    30T       30%  /mnt/media01/media2
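If I understand the ZFS behavior correctly, the "Size" column df shows for a dataset is roughly that dataset's Used plus its Avail, and Avail is the pool's shared free space (absent quotas or reservations), which every dataset reports. That would explain the differing ceilings above. A minimal sketch, using byte figures that only approximate the output above (they are illustrative, not my exact values):

```python
# Sketch: on ZFS, df's "Size" per dataset is roughly Used + Avail,
# where Avail is the shared pool free space (no quotas/reservations set).
# The figures below approximate the df output above, not exact numbers.

TIB = 1024**4
GIB = 1024**3
MIB = 1024**2

pool_avail = 30 * TIB  # shared free space reported to every dataset

used = {
    "media01/iocage": 4.4 * MIB,   # ~4.4 MiB used
    "media01/media":  4.5 * TIB,   # ~4.5 TiB used
    "media01/media2": 332 * GIB,   # ~332 GiB used
}

for name, u in used.items():
    size = u + pool_avail  # what df would report as "Size"
    print(f"{name}: Size ~ {size / TIB:.1f} TiB")
```

So the dataset with more data written shows a larger "Size", even though all three draw on the same pool of free space. In practice `zfs get used,available,quota,refquota <dataset>` shows these inputs directly.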