Yet another thread about how to assess disk usage on ZFS


dtom10
Explorer · Joined Oct 16, 2014 · Messages: 81
Hello all,

I can see this is a recurring question: how to assess data usage and properly determine whether you have enough space when working with ZFS. I know compression, dedup, and all sorts of other features make it hard to tell how much space is actually consumed.
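From what I've read, a per-dataset breakdown along the lines below is supposed to show where the space actually goes. I'm only listing the commands as I understand them (the pool name zpool is mine, from the output further down), not claiming this is the whole answer:
Code:
# Break each dataset's USED into live data, snapshots, refreservations and children
zfs list -r -o space zpool
# Compression ratio per dataset, to gauge how much logical data actually fits
zfs get -r compressratio zpool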

The problem I have is that all the numbers tell me I should have enough space to create a dataset, yet I get a message saying I'm out of space. I'm hoping the community can show me what I'm doing wrong.

Attached is a screenshot of the settings used to create said dataset.

From the command line I have collected the information below as best I can:
Code:
freenas# df -Th
Filesystem													 Type	   Size	Used   Avail Capacity  Mounted on
freenas-boot/ROOT/11.0-RELEASE								 zfs		 86G	737M	 85G	 1%	/
devfs														  devfs	  1.0K	1.0K	  0B   100%	/dev
tmpfs														  tmpfs	   32M	9.6M	 22M	30%	/etc
tmpfs														  tmpfs	  4.0M	8.0K	4.0M	 0%	/mnt
tmpfs														  tmpfs	  5.3G	 57M	5.3G	 1%	/var
freenas-boot/grub											  zfs		 86G	6.3M	 85G	 0%	/boot/grub
fdescfs														fdescfs	1.0K	1.0K	  0B   100%	/dev/fd
zpool														  zfs		 69G	151K	 69G	 0%	/mnt/zpool
zpool/backup												   zfs		2.0T	1.6T	402G	80%	/mnt/zpool/backup
zpool/jails													zfs		 69G	151K	 69G	 0%	/mnt/zpool/jails
zpool/jails/.warden-template-pluginjail-9.3-x64				zfs		 70G	496M	 69G	 1%	/mnt/zpool/jails/.warden-template-pluginjail-9.3-x64
zpool/media													zfs		2.0T	1.9T	 70G	97%	/mnt/zpool/media
zpool/nasware												  zfs		3.0T	1.1T	1.9T	35%	/mnt/zpool/nasware
zpool/nasware/san											  zfs		600G	 69G	531G	11%	/mnt/zpool/nasware/san
zpool/nasware/vsphere_hb									   zfs		100M	256K	100M	 0%	/mnt/zpool/nasware/vsphere_hb
zpool/nasware/vsphere_hb2									  zfs		100M	140K	100M	 0%	/mnt/zpool/nasware/vsphere_hb2
freenas-boot/.system										   zfs		 86G	452M	 85G	 1%	/var/db/system
freenas-boot/.system/cores									 zfs		 85G	478K	 85G	 0%	/var/db/system/cores
freenas-boot/.system/samba4									zfs		 85G	147K	 85G	 0%	/var/db/system/samba4
freenas-boot/.system/syslog-5ece5c906a8f4df886779fae5cade8a5   zfs		 86G	9.1M	 85G	 0%	/var/db/system/syslog-5ece5c906a8f4df886779fae5cade8a5
freenas-boot/.system/rrd-5ece5c906a8f4df886779fae5cade8a5	  zfs		 86G	 12M	 85G	 0%	/var/db/system/rrd-5ece5c906a8f4df886779fae5cade8a5
freenas-boot/.system/configs-5ece5c906a8f4df886779fae5cade8a5  zfs		 86G	 45M	 85G	 0%	/var/db/system/configs-5ece5c906a8f4df886779fae5cade8a5
freenas# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h7m with 0 errors on Sat Jun 17 03:52:46 2017
config:

		NAME		STATE	 READ WRITE CKSUM
		freenas-boot  ONLINE	   0	 0	 0
		  ada4p2	ONLINE	   0	 0	 0

errors: No known data errors

  pool: zpool
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
		still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
		the pool may no longer be accessible by software that does not support
		the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 35h52m with 0 errors on Mon Jun 26 11:52:43 2017
config:

		NAME											STATE	 READ WRITE CKSUM
		zpool										   ONLINE	   0	 0	 0
		  raidz1-0									  ONLINE	   0	 0	 0
			gptid/6d988301-c8fe-11e4-8c3b-a01d48c7c344  ONLINE	   0	 0	 0
			gptid/6df34582-c8fe-11e4-8c3b-a01d48c7c344  ONLINE	   0	 0	 0
			gptid/6e5510ef-c8fe-11e4-8c3b-a01d48c7c344  ONLINE	   0	 0	 0
			gptid/6eba7ba6-c8fe-11e4-8c3b-a01d48c7c344  ONLINE	   0	 0	 0

errors: No known data errors
freenas# zpool list
NAME		   SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
freenas-boot	93G  4.60G  88.4G		 -	  -	 4%  1.00x  ONLINE  -
zpool		 10.9T  6.41T  4.46T		 -	26%	58%  1.00x  ONLINE  /mnt
freenas# zfs list
NAME															USED  AVAIL  REFER  MOUNTPOINT
freenas-boot												   4.60G  85.5G	31K  none
freenas-boot/.system											518M  85.5G   452M  legacy
freenas-boot/.system/configs-5ece5c906a8f4df886779fae5cade8a5  44.8M  85.5G  44.8M  legacy
freenas-boot/.system/cores									  478K  85.5G   478K  legacy
freenas-boot/.system/rrd-5ece5c906a8f4df886779fae5cade8a5	  11.7M  85.5G  11.7M  legacy
freenas-boot/.system/samba4									 145K  85.5G   145K  legacy
freenas-boot/.system/syslog-5ece5c906a8f4df886779fae5cade8a5   9.08M  85.5G  9.08M  legacy
freenas-boot/ROOT											  4.04G  85.5G	25K  none
freenas-boot/ROOT/11.0-RC3									   56K  85.5G   736M  /
freenas-boot/ROOT/11.0-RELEASE								 4.04G  85.5G   737M  /
freenas-boot/ROOT/9.10.2										 47K  85.5G   635M  /
freenas-boot/ROOT/9.10.2-U1									  59K  85.5G   636M  /
freenas-boot/ROOT/9.10.2-U2									  56K  85.5G   637M  /
freenas-boot/ROOT/9.10.2-U3									  51K  85.5G   638M  /
freenas-boot/ROOT/9.10.2-U4									  45K  85.5G   639M  /
freenas-boot/ROOT/Initial-Install								 1K  85.5G   510M  legacy
freenas-boot/ROOT/default										41K  85.5G   511M  legacy
freenas-boot/grub											  33.4M  85.5G  6.28M  legacy
zpool														  7.59T  69.4G   151K  /mnt/zpool
zpool/backup													  2T   402G  1.61T  /mnt/zpool/backup
zpool/jails													 500M  69.4G   151K  /mnt/zpool/jails
zpool/jails/.warden-template-pluginjail-9.3-x64				 500M  69.4G   496M  /mnt/zpool/jails/.warden-template-pluginjail-9.3-x64
zpool/media													   2T  69.6G  1.93T  /mnt/zpool/media
zpool/nasware												  3.59T  1.95T  1.05T  /mnt/zpool/nasware
zpool/nasware/san											   600G   531G  68.6G  /mnt/zpool/nasware/san
zpool/nasware/vsphere_hb										100M  99.8M   256K  /mnt/zpool/nasware/vsphere_hb
zpool/nasware/vsphere_hb2									   100M  99.9M   140K  /mnt/zpool/nasware/vsphere_hb2
freenas#



Under the "/mnt/zpool/nasware" dataset I want to create a "nutanix-datastore" dataset with the same properties as the "/mnt/zpool/nasware/san" dataset, but I get the error pictured in the screenshot.

As far as I can see, the disk space should be there. Am I hitting a bug that can be worked around from the command line?
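For reference, my understanding is that the command-line equivalent of the GUI dialog would look roughly like this; the property values are placeholders copied from what I set on the san dataset, and I have not actually run it:
Code:
# Hypothetical CLI equivalent of the GUI "create dataset" dialog; values are placeholders
zfs create -o compression=lz4 -o quota=600G -o reservation=600G zpool/nasware/nutanix-datastore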
 

Attachments

  • dataset.PNG (814 KB)

DrKK
FreeNAS Generalissimo · Joined Oct 15, 2013 · Messages: 3,630
Two questions:

1) Why are you "reserving" space for the dataset? Have you done that on other datasets?

2) Can I see, in a pastebin, the output of
Code:
zpool get all
and
Code:
zfs get -H all


We'll get to the bottom of this; it's almost certainly not a bug, but rather something in your dataset/zpool configuration.
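(For context, my working theory: a reservation on any dataset is carved out of the pool up front, so every other dataset sees its AVAIL shrink even though that space isn't used yet. Something like the following should make that visible; I'm only guessing at your layout from the listing you posted.)
Code:
# Reservations claim space up front, shrinking AVAIL for every other dataset in the pool
zfs get -r reservation,refreservation,available zpool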
 

dtom10
Explorer · Joined Oct 16, 2014 · Messages: 81
I'm reserving space so I don't run into weird errors in VMs or elsewhere. That's what I did with the "/mnt/zpool/nasware/san" dataset.
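To be concrete, by "reserving" I mean something along these lines, so the VM datastore always has its space guaranteed; the 600G value mirrors the san dataset and is only illustrative:
Code:
# Guarantee space to the VM datastore up front; unused reserved space
# is no longer available to any other dataset in the pool
zfs set reservation=600G zpool/nasware/san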

Here's the paste:
https://pastebin.com/Yqa7fvj0

Thank you for your reply.
 

dtom10
Explorer · Joined Oct 16, 2014 · Messages: 81
I've managed to identify the problem: I was reserving space on all my datasets. I've kept the quota setting and now use the reservation setting only for the VMware datastore dataset.
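In command-line terms, what I ended up doing was roughly the following; the dataset names are just examples from my pool, and the actual values came from my own settings:
Code:
# Drop the blanket reservations but keep the quotas on the general datasets
zfs set reservation=none zpool/media
zfs set reservation=none zpool/backup
# Keep a reservation only on the VMware datastore dataset
zfs set reservation=600G zpool/nasware/san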
 
Status
Not open for further replies.