Is my available space correct?

Status
Not open for further replies.

Norlig

Explorer
Joined
Jul 13, 2013
Messages
59
Good day,

I have a feeling I might be missing around 1TB of usable space on my CIFS Share.

I am running FreeNAS 9.10.
I have 4 x 4TB drives running in RAIDZ1.

When I look at Storage -> Volumes, it looks like this:
20160610094948-d9a347ae.png


The top volume has 3.4TB available, but the second volume only has 2.2TB available.

Looking at my mapped drive in Windows, I have 2.17 TB free of 10.2 TB.

I feel like it should say 3.37 TB free of 11.4 TB?

The original RAID was made with 4 x 2TB drives, and I replaced one drive at a time with 4TB ones; could that be the reason?

Or maybe I have mounted the share in the wrong place, or have the wrong setting on the second volume?
I have limited knowledge of FreeNAS, and only have one volume used for all my data and one jail I can't delete, so it's not a perfect environment either.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The top level shows the raw disk in the pool. You have 11.1 TiB + 3.4 TiB = 14.5 TiB (tebibytes), which is 16 TB (terabytes) of disk. This number is referring to raw space.

The number below shows the used/available disk in the dataset. The available figure is an approximation, because RAIDZ has variable space usage. These numbers are referring to usable space, i.e. space not including parity. So if you have 16 TB of RAIDZ1 space, you only have around 12 TB (or 10.9 TiB) of usable space. The actual number ends up being just a little less due to ZFS using some of this for metadata, swap space, etc. Your system shows 8.0 TiB + 2.2 TiB = 10.2 TiB.
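To make the unit juggling concrete, here's a quick back-of-the-envelope sketch (assuming the drives hold exactly 4 * 10^12 vendor bytes; the real numbers land a little lower once swap and metadata are carved out):

```
# 4 x 4 TB (vendor terabytes, 10^12 bytes) of raw disk, expressed in TiB (2^40 bytes)
echo "4 * 4 * 10^12 / 2^40" | bc -l    # ~14.55 TiB raw -- the figure shown at the top level

# RAIDZ1 keeps one disk's worth of parity, leaving three disks' worth of data space
echo "3 * 4 * 10^12 / 2^40" | bc -l    # ~10.91 TiB usable, before metadata/swap shave it down
```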

As Windows reports, you have around 2.2TB free. The art of figuring out whether "TB" means "terabyte" or "tebibyte" for a given OS is left as an exercise to the reader.

This all looks perfectly consistent.

On a related note, please note that ZFS performance starts to degrade once you're up in the 80-90% range, so it might be time to consider some housecleaning, or possibly expanding your pool.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
As Windows reports, you have around 2.2TB free. The art of figuring out whether "TB" means "terabyte" or "tebibyte" for a given OS is left as an exercise to the reader.
Last I checked, all OSes reported sizes in units of 2^40 bytes, regardless of what they call it.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

Not true, Mac OS X in particular reports in GiB/TiB, which is really annoying.
A true TB is 10^12 bytes; a true GB is 10^9 bytes. A TiB is 2^40 bytes. I'm pretty sure the Macs use TB and GB, not TiB and GiB.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Not true, Mac OS X in particular reports in GiB/TiB, which is really annoying.
You mean 10^9 / 10^12 bytes?

I had the feeling that might be the case, but I try to keep my interactions with OS X to the bare minimum. I'd rather use Ubuntu's train wreck of a shell.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Uh, yeah, sorry guys: insufficient coffee and trying to keep track of bass-ackward abbreviations are not good bedfellows.
 
Joined
Dec 2, 2015
Messages
730
A true TB is 10^12 bytes; a true GB is 10^9 bytes. A TiB is 2^40 bytes. I'm pretty sure the Macs use TB and GB, not TiB and GiB.
I can confirm that on my Mac, with OS X 10.11.5, the Finder reports disk space in true GB and TB (i.e. base 10). The sizes reported in the Finder agree closely with the output of "df -H".
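For anyone who wants to check on their own box: FreeBSD (and OS X) df can report in either convention, which makes the two easy to compare side by side (/mnt/vol1 is just a stand-in path):

```
df -H /mnt/vol1   # human-readable, base-10 units (GB/TB) -- what the Finder matches
df -h /mnt/vol1   # human-readable, base-2 units (GiB/TiB)
```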
 
Joined
May 5, 2016
Messages
5
Referring to Norlig's question "Is my available space correct?", may I also ask?
My setup is 8 x 3TB Western Digital Red HDDs with FreeNAS 9.3 in RAIDZ2. Have I lost 3TB? See the screenshot. Sorry for any formatting errors.
HDD pool.JPG
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
No, not really. You get around 2.6-2.7 TiB out of a 3TB HDD, and since you have six data and two parity drives, 6 * 2.6 = 15.6 TiB. The system is reporting 2.2 TiB used and 12.8 TiB available, which strikes me as around 15 TiB, so you might be short 0.6 TiB, but that's just as likely to be metadata and suboptimal block allocation issues.
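The same back-of-the-envelope arithmetic as earlier in the thread, as a sanity check (again assuming exactly 3 * 10^12 bytes per drive):

```
# each 3 TB drive, expressed in TiB
echo "3 * 10^12 / 2^40" | bc -l        # ~2.73 TiB per drive, before swap takes its cut

# RAIDZ2 on 8 drives: six data, two parity
echo "6 * 3 * 10^12 / 2^40" | bc -l    # ~16.4 TiB nominal; ~15 TiB after overhead is plausible
```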
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
You should have around 16 TiB, and you have 2.2 + 12.8 = 15 TiB, so it's about right if you take into account the metadata and misalignment overheads.
 

Dan Lee

Dabbler
Joined
Jun 22, 2016
Messages
16
The top level shows the raw disk in the pool. You have 11.1 TiB + 3.4 TiB = 14.5 TiB (tebibytes), which is 16 TB (terabytes) of disk. This number is referring to raw space.

The number below shows the used/available disk in the dataset. The available figure is an approximation, because RAIDZ has variable space usage. These numbers are referring to usable space, i.e. space not including parity. So if you have 16 TB of RAIDZ1 space, you only have around 12 TB (or 10.9 TiB) of usable space. The actual number ends up being just a little less due to ZFS using some of this for metadata, swap space, etc. Your system shows 8.0 TiB + 2.2 TiB = 10.2 TiB.

As Windows reports, you have around 2.2TB free. The art of figuring out whether "TB" means "terabyte" or "tebibyte" for a given OS is left as an exercise to the reader.

This all looks perfectly consistent.

On a related note, please note that ZFS performance starts to degrade once you're up in the 80-90% range, so it might be time to consider some housecleaning, or possibly expanding your pool.


Ahhh, this explanation clears up why I couldn't wrap my head around what Storage was displaying for me... the devil is in the details. I never once looked at TiB and thought: wait, why is it displaying all my GBs in TiBs, and what's a TiB?!

This doesn't, however, answer my remaining question: where is all my space being used up? I have a 16.8TiB iSCSI share presented to a Windows 2k8 R2 box for Veeam backup copies. Windows is reporting 50% free space, but Veeam won't write to the target as it's write protected. The write-protected symptom didn't occur until FreeNAS hit 100% used on the zvol (I think I'm using the correct terminology there). I don't have any snapshots, and I scrub once a month on the first day of each month. I have no idea what is eating up the space or where to look to clean things up. (I'm also happy to start a new thread if needed, although I'm going to continue to read through posts, because I'm willing to wager lunch I'm not the first, second, or third FreeNAS noob to ask this question.)

Dan
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
The write-protected symptom didn't occur until FreeNAS hit 100% used on the zvol (I think I'm using the correct terminology there).
When you say 'zvol' do you mean your entire pool? Or the zvol on which you based your iSCSI storage?

If the latter, then you may simply have filled up the zvol you created for block storage. You could do this despite having plenty of free space on your pool. And note that FreeNAS won't let you consume more than 80% of this block storage unless you select the 'Force size' option. There is also a 'Pool Available Space Threshold (%)' setting at the Block (iSCSI) -> Target Global Configuration setup screen that comes into play.

You can increase the size of an existing zvol, which may be all you need to do to fix your space problem.
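If growing the zvol is indeed the fix, something like this from the FreeNAS shell would show the current numbers and bump the size (vol1/FreeNAS is a guess at the pool/zvol names from the screenshots; adjust to match, and make sure the pool itself has the headroom first):

```
# see how big the zvol is and how much space it actually references
zfs list -o name,volsize,used,refer -r vol1

# grow the zvol (the GUI can do this too)
zfs set volsize=20T vol1/FreeNAS
```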
 

Dan Lee

Dabbler
Joined
Jun 22, 2016
Messages
16
I'm referring to the zvol named FreeNAS in the attached webgui snippet.

storageview.JPG


My interpretation of what I did was to create a single volume that would allow full utilization of the six 4TB hard disks: I created a RAIDZ1 and assumed I'd have somewhere around 20 terabytes of total space available to present as an iSCSI target. Very shortly after I started to copy data to the share, I was receiving webgui alerts that the capacity for the volume "vol1" was at 82%, while the recommended value is below 80%. I found this puzzling, but assumed that since I had created block storage, FreeNAS was seeing the entirety of my iSCSI share as a single block file using 82% of 20TB, as I had specified the size as 16T (perhaps this is my error).

zvol settings.JPG


Here's a CLI output that doesn't make much sense either...

It doesn't even list the vol1/freeNAS mount, whereas it did on Monday. And to clarify, the only Windows server that has access to this iSCSI share thinks it is only 50% utilized; it was nearly 75%, but I had deleted files off it thinking it would shrink the usage in FreeNAS. I believe that to be wrong thinking.

Thanks for the explanations on this...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Your assumption was incorrect. ZFS is a copy-on-write filesystem, and "using it all" is never wise with a CoW filesystem; with ZFS it isn't even really possible. ZFS is unable to find any suitable blocks for allocation, and so writes are failing.

A pool hosting iSCSI zvols or other block storage-like abstractions should be built out of mirrors, should never go past about 50% capacity if you want some moderate level of performance, and should never go past around 60% capacity unless you know why you may safely break that rule.
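For the "know why" part, the quickest way to see where a pool stands against those thresholds (pool name is a stand-in):

```
# pool-level occupancy and fragmentation at a glance
zpool list -o name,size,allocated,free,capacity,fragmentation vol1
```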

https://forums.freenas.org/index.ph...d-why-we-use-mirrors-for-block-storage.44068/

plus any of my other semi-regular rants on fragmentation and maintaining performance.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It doesn't even list the vol1/freeNAS mount, whereas it did on Monday. And to clarify, the only Windows server that has access to this iSCSI share thinks it is only 50% utilized; it was nearly 75%, but I had deleted files off it thinking it would shrink the usage in FreeNAS. I believe that to be wrong thinking.

Thanks for the explanations on this...

Oh, and the other half of this I kinda failed to answer. So you got a five-gallon gasoline can and painted "SIX GALLONS" on it because you thought it looked like it should hold six. You went to the gas station and put the pump nozzle in, which promptly shut off around 4.5 gallons. You maybe then proceeded to top it off 'til the gas was right up to the fill opening, and the stupid gas pump still only read 5.3 gallons.

You've tried to store too much in your pool. Those df's and all that say zero free because the pool is painfully full, to the point where it might not even be possible to fix it. You created a "16TB" zvol for iSCSI on a filesystem that only had maybe 16.8 TiB of space, and it's full way past the danger point.
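Running the same arithmetic as earlier in the thread makes the squeeze obvious (assuming 4 * 10^12-byte drives; rough numbers, not exact pool accounting):

```
# 6 x 4 TB in RAIDZ1: five disks' worth of data space
echo "5 * 4 * 10^12 / 2^40" | bc -l   # ~18.2 TiB nominal; call it ~16.8 TiB after overhead

# ZFS size suffixes are binary, so a "16T" zvol is 16 TiB --
# roughly 95% of the usable space committed before copy-on-write even gets going
echo "16 / 16.8 * 100" | bc -l
```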

Now the thing is, Windows has no way of knowing how full the ZFS pool is. You've told Windows that it has "16TB" of block storage. It is telling you that there was 75% utilization and now there's 50% utilization, but the way that Windows "deletes" files is to merely make a little note that such-and-so block is now an available free block. The data on that block doesn't actually get nuked (this is actually not always true, since you can set Windows to use TRIM/UNMAP), so ZFS is still dutifully storing that "freed" data. This is the nature of virtual storage. Unless/until something makes ZFS aware that the block is no longer in use, either by zero-filling the block or by sending a TRIM/UNMAP command for the block, ZFS will continue to store that data.

PLEASE go read https://technet.microsoft.com/en-us/magazine/2009.08.utilityspotlight.aspx

Now, after having deleted those files, had you maybe run something like "sdelete -z X:", that would have zeroed all the unused space on X:, and ZFS would have compressed those zero-fill blocks. This isn't actually a true or complete fix, just a remediation.
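As a sketch of that remediation (assuming compression is enabled on the zvol, and with the same guessed vol1/freeNAS naming as above):

```
# on the Windows initiator, zero the free space with Sysinternals SDelete:
#   sdelete -z X:

# then, on FreeNAS, watch the zvol's referenced space drop as the
# zero-filled blocks compress away to almost nothing
zfs get compression,used,referenced vol1/freeNAS
```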
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
It seems that @Dan Lee is in a tough spot, and probably needs to re-build his pool with more/larger disks and switch to either mirrors or RAIDZ2, dumping RAIDZ1...

@jgreco, assuming he does all of that, and in light of your comment regarding the behavior of Windows when it utilizes block storage... would he be better off using NFS instead of iSCSI?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Well, @Spearfoot , that depends. The answer would tend towards "hell yeah" but there may be a reason that iSCSI's being used. We don't know that yet.

Using CIFS or NFS and storing files directly gives ZFS a LOT more insight into what is being stored and when it becomes irrelevant, which means that instead of acting as a dumb block store, ZFS gets to shine.

https://forums.freenas.org/index.ph...res-more-resources-for-the-same-result.28178/

It also helps prevent you from accidentally walking into situations like this one, where what I'm guessing is a fundamental misunderstanding of how things interoperate causes a bit of a disaster.

In general, trying to use all your space with ZFS is always a recipe to get unpleasantly screwed. The introduction of compression should usually not be viewed as a way of making "more space" IMHO, but rather a way of keeping ZFS performing better. Overcommit is an incredibly dangerous thing in storage systems, and while some of us do overcommit at multiple storage levels, we're keeping a very close eye on the whole picture, and are making sure that things remain fluid at each of those levels.
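For the curious, ZFS will report exactly what compression is buying you, per dataset (vol1 again a stand-in):

```
# compression ratio actually achieved on each dataset/zvol in the pool
zfs get -r compressratio vol1
```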
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Well, @Spearfoot , that depends. The answer would tend towards "hell yeah" but there may be a reason that iSCSI's being used. We don't know that yet.
Yeah, a cursory search of "veeam nfs backup" reveals that NFS may not be an option for a Veeam backup repository on Windows. But that's outside my bailiwick; perhaps @Dan Lee will enlighten us about that.
 