Datasets as mapped network drives and space displayed

Status
Not open for further replies.

cookie1338

Dabbler
Joined
Nov 24, 2015
Messages
18
Greetings, I am facing a rather curious bug. I am running 12x 6TB drives as 2x RAIDZ2 (4+2). When I first created the volume I opted for a single dataset.
Now I wanted to create another one to keep some of my data there so I could fine-tune permissions. At the time I created the second dataset, the drive mapped to the original dataset showed 13.4TiB free of 41.6TiB usable storage. I mapped the new dataset to another drive letter, but its total capacity is now shown as 13.4TiB. Shouldn't it display 41.6TiB too?
If I ignore it and start moving data from the big dataset to the small one, what will happen when I reach 13.4TiB on the small one? Will Windows refuse the transfer because the drive is full? Thanks
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
You only have about 13.4TiB of storage remaining. Windows disk space reporting is a bit weird here. I've tried to correct this by writing my own dfree script and having Samba run it whenever a client asks for disk space, but I never got it to work quite right.

So: you've already used space on Pool1, and a new dataset can only grow as large as whatever is left on the pool, in this case about 13.4TiB. This is why Windows displays it like this.
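For anyone who wants to experiment with that approach, here is a minimal sketch of such a dfree script. This is an assumption-laden example, not a known-good setup: Samba's `dfree command` option runs a script and expects it to print total and free space in 1K blocks, and the dataset name `Pool1/General` is just a placeholder.

```shell
#!/bin/sh
# Sketch of a Samba dfree script (hypothetical, untested as a Samba hook).
# Samba expects "<total 1K-blocks> <free 1K-blocks>" on stdout.

# Compute the reply from used/avail byte counts.
compute_dfree() {
    used=$1; avail=$2            # both in bytes
    echo "$(( (used + avail) / 1024 )) $(( avail / 1024 ))"
}

# On a real system the numbers would come from ZFS, e.g. (dataset name assumed):
#   used=$(zfs list -Hpo used Pool1/General)
#   avail=$(zfs list -Hpo avail Pool1/General)
# Example with made-up byte counts:
compute_dfree 2097152 1048576    # 2MiB used, 1MiB free -> "3072 1024"
```

Whether Windows then honors those numbers consistently is a separate question, which is where my attempts stalled.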
 

cookie1338

Dabbler
Joined
Nov 24, 2015
Messages
18
But if I move stuff from Pool1 to General, General's total storage doesn't increase. Shouldn't General have the same maximum storage (41.6TiB), with the available space on both decreasing together?
So General could only ever hold 13.4TiB? Does that mean all datasets must be created when the volume is created?
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
Still I don't know what to do. Could "General" go above 13.4TiB if I start moving files to it from "Pool1"? Is https://bugs.pcbsd.org/issues/1481 related?

My observations tell me that the amounts shown in Windows Explorer

<win_free_share> free of <win_total_share>

are possibly calculated as follows for each CIFS share:

win_free_share = zfs_avail_dataset
win_total_share = zfs_used_dataset + zfs_avail_dataset

where <zfs_used_dataset> and <zfs_avail_dataset> are the amounts shown in the FreeNAS GUI and in the output of a 'zfs list -r <volume_name>' command in the shell.
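Plugging in the numbers from this thread makes the pattern visible. The used/avail figures below are my reading of the screenshots, so treat them as assumptions:

```shell
# Worked example of the formulas above, figures in TiB (assumed from the thread).
awk 'BEGIN {
    avail = 13.4           # pool free space, shared by both datasets
    pool1_used = 28.2      # already written to the Pool1 dataset
    general_used = 0.0     # the new General dataset is still empty
    # win_total_share = zfs_used_dataset + zfs_avail_dataset
    printf "Pool1 share:   %.1f free of %.1f TiB\n", avail, pool1_used + avail
    printf "General share: %.1f free of %.1f TiB\n", avail, general_used + avail
}'
```

This reproduces exactly what cookie1338 describes: the Pool1 share shows 13.4 free of 41.6 TiB, while the empty General share shows 13.4 free of 13.4 TiB.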

So if you have two CIFS shares pointing to different datasets in a common pool and move a certain amount of data from datasetA to datasetB, the values <zfs_used_dataset> for these datasets will change and so will the values <win_total_share> for the corresponding shares shown by Windows.

<zfs_avail_dataset> will be identical before and after the move (it is a property of the underlying pool; same compression settings assumed for both datasets), and so will <win_free_share> for both shares involved. Of course the presence of snapshots might complicate things further (by preventing space in a given dataset from actually being freed). See also the 'available' and 'used' native properties in https://www.freebsd.org/cgi/man.cgi?query=zfs for more information.

I don't think there's a bug involved; the information shown in Windows just looks confusing at first sight.
 

cookie1338

Dabbler
Joined
Nov 24, 2015
Messages
18
Thanks for the reply. Does that mean that "General" can go above 13.4TiB if I start moving files to it from "Pool1", or not? What would Windows display then?
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
Thanks for the reply. Does that mean that "General" can go above 13.4TB if I start moving files to it from "Pool1" or not?

If my conclusions are correct this means that <win_total_share> (as shown in Windows Explorer) for your General share will grow after moving data from the Pool1 dataset to the General dataset. At the same time <win_total_share> for your Pool1 share will shrink.

That is, provided no snapshot on Pool1 still holds a copy of the data after the move. If one does, zfs_avail_dataset will shrink instead, and the sum win_total_share = zfs_used_dataset + zfs_avail_dataset for General will stay (at least roughly) unchanged.
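To illustrate with the same assumed figures, here is what the two shares should show after moving a hypothetical 5 TiB from Pool1 to General, with no snapshots retaining the old copies:

```shell
# Before/after sketch of the move; all figures in TiB and assumed, not measured.
awk 'BEGIN {
    avail = 13.4                 # pool free space; unchanged by an in-pool move
    pool1_used = 28.2; general_used = 0.0
    moved = 5.0                  # hypothetical amount moved

    pool1_used -= moved; general_used += moved
    printf "Pool1 share:   %.1f free of %.1f TiB\n", avail, pool1_used + avail
    printf "General share: %.1f free of %.1f TiB\n", avail, general_used + avail
}'
```

So General's total as shown by Windows grows (here from 13.4 to 18.4 TiB) while Pool1's shrinks by the same amount, and the free figure stays at 13.4 TiB on both.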
 