SMB-mounted drive reports the total size as what's left

Johanna12221 · Cadet · Joined Oct 21, 2022 · Messages: 5
Hello,
I have found several threads here on the forum confirming that this is the expected behavior of a TrueNAS SMB share.
But for my use case it complicates things: as seen in the image, one would believe there is at least 40.2 TB left.
The shares come from the same dataset/vdev.


Here are some threads that confirm that this is expected:

In my case I'm going to unmount Media and only keep Private. But to know the true size of what's left, I must go into the TrueNAS web interface, which I don't want my users to have to do.

How can I modify this behavior with a patch or setting so that the reported total size (and used space) reflects the actual full vdev/dataset?
 

jace92 · Dabbler · Joined Dec 14, 2021 · Messages: 46
I too am having this issue and have read through as many posts as I could find, with varying explanations. Once I realized it was an issue I added 3 more shares as mount points, and they all report different total sizes.

I am running TrueNAS SCALE 22.12.0 with a mirrored pool of two 14 TB Exos drives, giving a total usable capacity of 12.59 TiB. Pictured, V, W, X, and Z are different datasets that I have shared from that pool, and Y is an 8 TB mirrored pool from my Synology NAS with the same folder structure as the TrueNAS. Because of the way Synology works, it shows the full usable space of the drives and the space remaining (which I want/like). TrueNAS, on the other hand, seems to show only the space used by the dataset plus the remaining space of the pool, so each share reports a different total.



I understand the math (at least in principle) as to why it does this, but is there a way to have it show the TOTAL pool size minus what's used, like on Synology?

 

StarTrek133 · Contributor · Joined Sep 5, 2022 · Messages: 112
I also have the same thing coming up on mine. It does look very strange...
 

Attachments

  • Screenshot 2023-01-26 202240.jpg
  • Screenshot 2023-01-26 202403.jpg

jgreco · Resident Grinch · Joined May 29, 2011 · Messages: 18,680
How can I modify this behavior with a patch or setting so that the reported total size (and used space) reflects the actual full vdev/dataset?

The picture is accurately reporting the situation; you're perhaps not understanding what it is saying.

Media has 20.1TB of an approximate 101TB free; Private has 20.1TB of 20.1TB free. The SMB protocol does not have a way of identifying that the 101TB is (I'm guessing) a full pool, or that the 20.1TB in Private is probably a reservation/quota; this means that the "free" 20.1TB in each of these is not independently free, but rather a part of the storage pool. This is a side effect of having a shared pool of storage. If you don't like it, do not create multiple mount points within the pool, which would then make it a dedicated pool for your use. Or complain to Microsoft that their protocol blows chunks. Or reserve dedicated amounts of space.
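
If you go the reservation/quota route, it's only a couple of dataset properties. A rough, untested sketch, with "tank/private" and "tank/media" as placeholder dataset names and made-up sizes:

```
# Placeholder names and sizes; adjust to your own pool layout.
# A quota caps what the dataset may consume, so df/SMB report the cap as the "size";
# a reservation sets that space aside so other datasets in the pool can't eat it.
zfs set quota=20T reservation=20T tank/private
zfs set quota=80T tank/media
```

The same knobs are exposed in the TrueNAS web UI when you edit a dataset, which is the supported way to set them.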

Additionally, if this is a ZFS pool that is RAIDZ, note that free space numbers are approximated because ZFS has no way to provide an absolute guarantee as to the available space; RAIDZ variable size block allocation makes this impossible, and compression makes it even more impossible.
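
You can see the approximation yourself by comparing raw pool accounting to dataset-level accounting; "tank" below is just an example pool name:

```
# "tank" is a placeholder pool name.
zpool list -o name,size,allocated,free tank   # raw capacity across all disks, parity included
zfs list -o name,used,avail tank              # usable-space estimate after parity and overhead
# On RAIDZ the two "free" figures will not line up; the zfs one is an estimate that
# shifts with record size, compression, and how small blocks get padded.
```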

There's nothing wrong with the reporting. It's this:

one would believe there is at least 40.2 TB left

that's broken. It's easy to see WHY you have misunderstood this, but there isn't a correction available that would actually be correct. Any attempt to "fix" this on the software side will make it lie to someone else for some other common use case. The ZFS folks were very thorough when they designed per-dataset free space reporting. You really have to correct your incorrect worldview about what "free space" means when you have a shared pool of free space.
 

dustojnikhummer · Dabbler · Joined Apr 14, 2022 · Messages: 18
The picture is accurately reporting the situation; you're perhaps not understanding what it is saying.

Okay I get that, but why does it see remaining capacity correctly then?
 

anodos · Sambassador · iXsystems · Joined Mar 6, 2014 · Messages: 9,554
Okay I get that, but why does it see remaining capacity correctly then?

The numbers being reported to File Explorer are correct in that they're identical to what you would see if you typed `df -h <path>` in the local console. You are basically seeing the ZFS `available` property for the dataset underlying the given path as "Free Space" and the `referenced` property as "Used Space", and File Explorer sums them to show "capacity". It's a GUI issue in Windows, but it has zero functional impact. If you want detailed and accurate space accounting on ZFS, you will need to use ZFS-aware tools (or the web UI / reporting framework we have).
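
If you want to see that correspondence for yourself, something along these lines (placeholder dataset name) prints the same figures Explorer is summing:

```
# "tank/private" is a placeholder; substitute your own pool/dataset.
df -h /mnt/tank/private                                   # Size / Used / Avail as statfs reports them
zfs get -p used,available,referenced,quota tank/private   # the ZFS properties behind those numbers
```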
 