Why do Windows and FreeNAS Reporting show different disk space usage?

Status
Not open for further replies.
Joined
Dec 1, 2016
Messages
9
pool_01.png


Anyone know what's causing this discrepancy? AFAIK, both FreeNAS and Windows use tebibytes?
LZ4 compression is on, but ratio is reported as 1.00x
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
If you look at the output of zfs list it will match your windows total storage.

There might also be some snapshots that are skewing the numbers from the reporting section. Do you really care? the numbers don't really matter that much.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Anyone know what's causing this discrepancy? AFAIK, both FreeNAS and Windows use tebibytes?
LZ4 compression is on, but ratio is reported as 1.00x
IMHO, there is a real problem with the way FreeNAS reports disk space, both in the Reporting-Partition section of the GUI and via SMB. See bug report 148 for details.

Here's my comment at that bug report:
I have the same experience currently on 9.10-STABLE, and every other version of FreeNAS I've used since last May. This includes not just CIFS shares, but Partition Reporting in the GUI.

It seems to me that the total capacity of a pool -- or a RAID volume, or a simple disk -- is invariant, and is determined by its geometry. For datasets without a quota, the space available is the total capacity of the pool less the space used in the pool. So the 'X TB free of X TB' reported by FreeNAS/CIFS should report the same values for all of the datasets on the pool. For datasets with a quota, the space available is the quota size less any space used in the dataset. (Note that the space available would be reduced further if the free space on the pool is less than the remaining allowance based on the quota, but we'll ignore that for now.)

My FreeNAS 9.10-STABLE system is a Supermicro X10SL7 with seven 2TB disks configured in a RAIDZ2 pool named 'tank'. It has a total capacity of five 2TB disks, less overhead. This is roughly 8.15TB. But FreeNAS/CIFS never reports this as the total capacity; it shows instead some varying number that seems to be loosely based on the amount of free space available on the pool.

My Synology Diskstation NAS has four 4TB disks in a RAID6 volume. It has a total capacity of two 4TB disks, less overhead. This is roughly 7.21TB.

The attached image is a screenshot of shares on a Windows 7 PC. These include 3 shares from FreeNAS, one with a quota, and two shares from the Synology.

The two Synology shares (\\BERTRAND\hardware and \\BERTRAND\domains) report the same values: 3.65TB free of 7.21TB. These accurately reflect the space available and the total capacity of the Synology volume.

But FreeNAS reports different values for the two shares without a quota (\\BOOMER\systools and \\BOOMER\opsys), though both are on the same pool. However, it reports the expected values for AFP share \\BOOMER\atm, which has a 512GB quota.

In a nutshell, it seems to me that CIFS should retain its current behavior for datasets with quotas. But for datasets without quotas it ought to report the space available and the total volume capacity instead of whatever it's currently returning.
And the referenced image:
freenas-cifs-space.jpg
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553

Code for vfs_zfs_space.c
Code:
#include "includes.h"
#include "system/filesys.h"
#include "lib/util/tevent_ntstatus.h"

#include "modules/zfs_disk_free.h"


static uint64_t vfs_zfs_space_disk_free(vfs_handle_struct *handle, const char *path,
	uint64_t *bsize, uint64_t *dfree, uint64_t *dsize)
{
	uint64_t res;
	char rp[PATH_MAX] = { 0 };

	if (realpath(path, rp) == NULL)
		return (-1);

	DEBUG(9, ("realpath = %s\n", rp));

	res = smb_zfs_disk_free(rp, bsize, dfree, dsize);
	if (res == (uint64_t)-1)
		res = SMB_VFS_NEXT_DISK_FREE(handle, path,  bsize, dfree, dsize);
	if (res == (uint64_t)-1)
		return (res);

	DEBUG(9, ("*bsize = %" PRIu64 "\n", *bsize));
	DEBUG(9, ("*dfree = %" PRIu64 "\n", *dfree));
	DEBUG(9, ("*dsize = %" PRIu64 "\n", *dsize));

	return (res);
}

static struct vfs_fn_pointers vfs_zfs_space_fns = {
	.disk_free_fn = vfs_zfs_space_disk_free
};

NTSTATUS vfs_zfs_space_init(void);
NTSTATUS vfs_zfs_space_init(void)
{
	return smb_register_vfs(SMB_VFS_INTERFACE_VERSION,
		"zfs_space", &vfs_zfs_space_fns);
}
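For context (this is not part of the code above): a Samba VFS module like this is enabled per share with the standard vfs objects parameter. A hypothetical share stanza might look like:

```
[systools]
	path = /mnt/tank/systools
	vfs objects = zfs_space
```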


zfs_disk_free.c
Code:
#define NEED_SOLARIS_BOOLEAN

#include <libzfs.h>

#include "modules/zfs_disk_free.h"


uint64_t
smb_zfs_disk_free(char *path, uint64_t *bsize, uint64_t *dfree, uint64_t *dsize)
{
	size_t blocksize = 1024;
	libzfs_handle_t *libzfsp;
	zfs_handle_t *zfsp;
	uint64_t available, usedbysnapshots, usedbydataset,
		usedbychildren, usedbyrefreservation, real_used, total;

	if (path == NULL)
		return (-1);

	if ((libzfsp = libzfs_init()) == NULL)
		return (-1);

	libzfs_print_on_error(libzfsp, B_TRUE);

	zfsp = zfs_path_to_zhandle(libzfsp, path,
		ZFS_TYPE_VOLUME|ZFS_TYPE_DATASET|ZFS_TYPE_FILESYSTEM);
	if (zfsp == NULL) {
		libzfs_fini(libzfsp);
		return (-1);
	}

	available = zfs_prop_get_int(zfsp, ZFS_PROP_AVAILABLE);
	usedbysnapshots = zfs_prop_get_int(zfsp, ZFS_PROP_USEDSNAP);
	usedbydataset = zfs_prop_get_int(zfsp, ZFS_PROP_USEDDS);
	usedbychildren = zfs_prop_get_int(zfsp, ZFS_PROP_USEDCHILD);
	usedbyrefreservation = zfs_prop_get_int(zfsp, ZFS_PROP_USEDREFRESERV);

	zfs_close(zfsp);
	libzfs_fini(libzfsp);

	real_used = usedbysnapshots + usedbydataset + usedbychildren;

	total = (real_used + available) / blocksize;
	available /= blocksize;

	*bsize = blocksize;
	*dfree = available;
	*dsize = total;

	return (*dfree);	
}


Caveat: I'm not a developer (or even a programmer), but I did stay at a Holiday Inn Express.

So the total storage for a Samba share path is calculated as total = (real_used + available) / blocksize, where real_used = usedbysnapshots + usedbydataset + usedbychildren.

It doesn't seem like the way we're calculating dsize (total) works very well for regular datasets without quotas. From the way I read the above, dsize will vary as space is used on the pool, because "available" reflects the pool-wide amount of available space while "real_used" only reflects the amount used under the path of the samba share: writes elsewhere on the pool shrink a share's reported total even though nothing in the share changed. I personally think that dsize for shares on datasets without quotas should instead reflect the size of the zpool.
 
Last edited:

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
I made a pull request (with patch) regarding this issue. Basically, it makes the reported disk capacity equal to the zpool size if there isn't a quota on the dataset. I'm not sure if there's a better number to show users. zpool_size isn't exactly the same as hard drive capacity, but I rather like it because it's (1) consistent and (2) correct from a certain point of view.
 