Storage Free/Used display error

archialsta

Dabbler
Joined
Oct 31, 2020
Messages
12
Hi!
After upgrading from 12.0-U8.1 to 13, Windows Explorer doesn't correctly show the used and free space on the TrueNAS shares. It only shows the 'free' space, reported as the total capacity of the NAS. Even that is not updated in real time: I've tried copying and deleting some large files and the displayed value stays almost the same.

[Attachment: 01.JPG]

[Attachment: 02.JPG]

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Both show same amount of free. Over SMB we're presenting statfs results (which wraps around dsl_dataset_space() on the ZFS side). You're not going to get different numbers without doing something that may ultimately hurt performance on a very hot code path.
 

archialsta

Dabbler
Joined
Oct 31, 2020
Messages
12
Both show same amount of free. Over SMB we're presenting statfs results (which wraps around dsl_dataset_space() on the ZFS side). You're not going to get different numbers without doing something that may ultimately hurt performance on a very hot code path.
Previously I was used to seeing the total amount: 22 TB, with 7 TB used and 14 TB free. Now it only shows the free space as the total amount.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
You can try adding the auxiliary parameter `zfs_core:zfs_space_enabled = yes` on the share to see if it gets back the behavior you want, though there may be a performance impact depending on use case.
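For reference, in raw smb4.conf terms an auxiliary parameter like this lands inside the share definition. A minimal sketch, where the share name and path are placeholders, not taken from any real system:

Code:
```
[tank]
        path = /mnt/tank
        zfs_core:zfs_space_enabled = yes
```

In the TrueNAS UI you would paste only the `zfs_core:zfs_space_enabled = yes` line into the share's Auxiliary Parameters field rather than editing smb4.conf by hand.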
 
Last edited:

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
statfs() output should update with each txg though. So if the `df -h` output in a shell on the NAS isn't changing as files are written, then you may want to file a bug ticket so that we can investigate on the ZFS side.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
So yes, there was a change in 13. We removed a vfs module wrapper that was basically doing something wrong and slow, and went with the proper OS interface. statfs() on FreeBSD (and Linux) wraps around the following function in ZFS, which does much of the more complex stats accounting for us:

Code:
void
dsl_dataset_space(dsl_dataset_t *ds,
    uint64_t *refdbytesp, uint64_t *availbytesp,
    uint64_t *usedobjsp, uint64_t *availobjsp)
{
        *refdbytesp = dsl_dataset_phys(ds)->ds_referenced_bytes;
        *availbytesp = dsl_dir_space_available(ds->ds_dir, NULL, 0, TRUE);
        if (ds->ds_reserved > dsl_dataset_phys(ds)->ds_unique_bytes)
                *availbytesp +=
                    ds->ds_reserved - dsl_dataset_phys(ds)->ds_unique_bytes;
        if (ds->ds_quota != 0) {
                /*
                 * Adjust available bytes according to refquota
                 */
                if (*refdbytesp < ds->ds_quota)
                        *availbytesp = MIN(*availbytesp,
                            ds->ds_quota - *refdbytesp);
                else
                        *availbytesp = 0;
        }
        rrw_enter(&ds->ds_bp_rwlock, RW_READER, FTAG);
        *usedobjsp = BP_GET_FILL(&dsl_dataset_phys(ds)->ds_bp);
        rrw_exit(&ds->ds_bp_rwlock, FTAG);
        *availobjsp = DN_MAX_OBJECT - *usedobjsp;
}


That said, space accounting in ZFS is non-trivial and doesn't fit well into the paradigm that Windows wants. The old behavior of summing ZFS_PROP_USEDSNAP, ZFS_PROP_USEDDS, and ZFS_PROP_USEDCHILD to present a total size for the volume is, I think, overly simplistic and incorrect.

What SMB clients actually care about is `available`, which should be correct and is now retrieved more efficiently.
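For the curious, the kind of numbers the server now hands back can be reproduced with a plain statvfs() call. This is just a sketch of the standard POSIX interpretation of the fields, not the actual Samba code, and the helper name is made up:

Code:
```c
/* Sketch: deriving "total" and "available" bytes for a path the way a
 * statfs()/statvfs()-backed server would. Not the actual Samba code. */
#include <stdint.h>
#include <sys/statvfs.h>

static int
space_report(const char *path, uint64_t *total, uint64_t *avail)
{
        struct statvfs sv;

        if (statvfs(path, &sv) != 0)
                return (-1);

        /* f_blocks is the filesystem size and f_bavail the space available
         * to unprivileged callers, both in units of f_frsize. */
        *total = (uint64_t)sv.f_blocks * sv.f_frsize;
        *avail = (uint64_t)sv.f_bavail * sv.f_frsize;
        return (0);
}
```

On a ZFS dataset both numbers ultimately come out of dsl_dataset_space() above, so the reported "total" itself moves as snapshots and other datasets consume pool space.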
 

carma42

Cadet
Joined
Feb 23, 2022
Messages
2
So yes, there was a change in 13. We removed a vfs module wrapper that was basically doing something wrong and slow and went with proper OS interface. [...]

Now what SMB clients actually care about is `available` which should be correct and now retrieved more efficiently.
So if I understand you correctly @anodos, from a layman's standpoint the way this is getting displayed in Windows via SMB is more correct and we shouldn't worry about it? And what if we want it back the old way?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
So if I understand you correctly @anodos from a layman's standpoint is the way this is getting displayed in Windows via SMB is more correct and we shouldn't worry about it??? What if we want it back the old way ??
The only thing that SMB clients really care about is that the available space matches reality. If you want the old behavior, I provided an auxiliary parameter above.
 

carma42

Cadet
Joined
Feb 23, 2022
Messages
2
The only thing that SMB clients really care about is the available space matches up to reality. If you want the old way of doing it I provided an auxiliary parameter above.
Thank you, I just wanted clarification.
 

InQuize

Explorer
Joined
May 9, 2015
Messages
81
The only thing that SMB clients really care about is the available space matches up to reality. If you want the old way of doing it I provided an auxiliary parameter above.
Not quite, for humans this change renders the bar graph of an SMB share on Win 10 useless.
They all show up as empty drives, even though most of mine are completely full, which looks very confusing.
The bar graph was also useful as a quick visual indicator of the 90% threshold, a critical ZFS breakpoint, since it changes color from blue to red.

... there may be performance impact depending on use-case.
Could you elaborate on the use cases where it hurts?
 
Last edited:

InQuize

Explorer
Joined
May 9, 2015
Messages
81
You can try using auxiliary parameter: `zfs_core:zfs_space_enabled = yes`
Also, after enabling this option on TN13, the reported values are stuck and only update after an SMB service restart :/
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
Also, after enabling this option on TN13, reported values are stuck and only update after SMB service restart :/
Windows polls the value, so you might be seeing a client-side caching artifact. The simple fact of the matter is that ZFS usage statistics do not fit well with Windows File Explorer's GUI concepts. BTW, the bar never really lined up with reality with regard to % used anyway.

Here's basically the legacy code that generated dfree:
Code:
uint64_t
smb_zfs_disk_free(smbzhandle_t hdl,
                  uint64_t *bsize, uint64_t *dfree,
                  uint64_t *dsize)
{
        size_t blocksize = 1024;
        zfs_handle_t *zfsp = NULL;
        uint64_t available, usedbysnapshots, usedbydataset,
                usedbychildren, real_used, total;

        ZFS_LOCK();
        zfsp = get_zhandle_from_smbzhandle(hdl);
        available = zfs_prop_get_int(zfsp, ZFS_PROP_AVAILABLE);
        usedbysnapshots = zfs_prop_get_int(zfsp, ZFS_PROP_USEDSNAP);
        usedbydataset = zfs_prop_get_int(zfsp, ZFS_PROP_USEDDS);
        usedbychildren = zfs_prop_get_int(zfsp, ZFS_PROP_USEDCHILD);
        ZFS_UNLOCK();

        real_used = usedbysnapshots + usedbydataset + usedbychildren;

        total = (real_used + available) / blocksize;
        available /= blocksize;

        *bsize = blocksize;
        *dfree = available;
        *dsize = total;

        return (*dfree);
}


You basically get two numbers out of it: the total size of the volume underlying the path, and the amount of space available. The total size was calculated by summing the amount used by snapshots, by the dataset itself, and by any child datasets. Windows presents % used based on the total and available numbers. Unfortunately, this has no real relationship to the actual % used of the zpool, so if your bar graph worked, it was by fortuitous accident. For example, the total doesn't include space used by the parent dataset or other non-child datasets in the pool.
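To make the arithmetic concrete, here's the legacy calculation boiled down to its core, with purely illustrative byte counts (none of these values come from a real pool):

Code:
```c
/* Sketch of the legacy dfree arithmetic: total = snap + ds + child + avail,
 * both numbers reported in 1K blocks. Inputs are illustrative only. */
#include <stdint.h>

static void
legacy_dfree(uint64_t usedsnap, uint64_t usedds, uint64_t usedchild,
    uint64_t available, uint64_t *dfree, uint64_t *dsize)
{
        const uint64_t blocksize = 1024;
        uint64_t real_used = usedsnap + usedds + usedchild;

        *dsize = (real_used + available) / blocksize; /* "total" in 1K blocks */
        *dfree = available / blocksize;               /* "free" in 1K blocks */
}
```

With, say, 7 TiB of summed usage and 14 TiB available, Windows would draw a 21 TiB drive that is one-third full, yet nothing in that sum accounts for space consumed elsewhere in the pool.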

The moral of this story is that if you want warnings on % used, you should have a tool that understands ZFS (like configuring our automated alerts).
 

Ruff.Hi

Patron
Joined
Apr 21, 2015
Messages
271
Am I seeing the same / similar behavior via a different lens?

I am running 8 x 4 TB drives in RAIDZ2, so that should show 6 x 3.5 TB (roughly 21 TB).

I just upgraded my TrueNAS from 12.0-U8 to 13.0-U2.

With 12.0-U8, under Windows 10, my mapped drive (Z:) reported 8.1 TB used with 12.4 TB free, a total of 20.4 TB.
With 13.0-U2, under Windows 10, my mapped drive (Z:) reports 3.5 MB used with 12.2 TB free, a total of 12.2 TB.

I think this thread (https://www.truenas.com/community/t...rong-used-space-after-upgrade-to-13-0.101328/) is discussing the same / similar issue.

I tried the `zfs_core:zfs_space_enabled = yes` auxiliary parameter option and it didn't appear to work for me after a reboot / SMB stop and restart.
 

Ruff.Hi

Patron
Joined
Apr 21, 2015
Messages
271
I tried the 'You can try using auxiliary parameter: `zfs_core:zfs_space_enabled = yes`' option and it didn't appear to work for me after a reboot / SMB stop, restart.

Update: I take it back. I edited the share in 12.0 and added that parameter, then rebooted into 13.0 and complained when the parameter didn't take :(. I just added it in 13.0 and my display is showing 'used' like I like.

Just as a side comment: I also have a share of a dataset (the original share I was talking about was the whole pool), and it was always showing 'used' as I was used to. So, different behavior for a pool than for a dataset.
 