UFS Volume reporting less free space than it actually has

Status: Not open for further replies.

par (Explorer)
I use a UFS volume as "scratch" space for temporary download writes: the data is written there and then deleted afterward. For some reason FreeNAS does not register the freed disk space after the data is deleted. It seems like every 24 hours it does a new check and the available free disk space gets corrected.

Is there some sort of garbage collection that needs to happen after deletions? How do I get FreeNAS to correctly report the actual free disk space? This volume is being accessed through a jail, if that matters.
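
Roughly, the pattern looks like this (the file name and sizes below are made up, just to illustrate):

Code:
$ dd if=/dev/zero of=/mnt/Scratch/testfile bs=1m count=1024   # write ~1 GB of scratch data
$ df -h /mnt/Scratch/        # Used/Capacity go up, as expected
$ rm /mnt/Scratch/testfile   # delete it again
$ df -h /mnt/Scratch/        # Used/Capacity stay high for up to ~24 hours
$ du -sh /mnt/Scratch/       # du shows the data really is gone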

Note the difference between this and the screenshots:
Code:
$ du -sh /mnt/Scratch/
3.6G    /mnt/Scratch/


scratch-1.png
scratch-2.png
 

cyberjock (Inactive Account)
My guess is it does a check every 24 hours or something. Most people don't use disk space graphs for minute-to-minute operations; they use them to trend data consumption so they can preemptively add more storage before they run out. What happens minute-to-minute or hour-to-hour isn't exactly that critical.
 

par (Explorer)
cyberjock said: "What happens minute-to-minute or hour-to-hour isn't exactly that critical."


Sure it is! I still have not solved this problem. :confused: I'm on FreeNAS-9.2.1.5-RELEASE-x64 (80c1d35).

It is *not* just the disk space graph that has this issue; I am only using the graph to illustrate the problem.

Code:
$ du -sh /mnt/Scratch/
829M    /mnt/Scratch/
 

Attachments: download.png

cyberjock (Inactive Account)
du doesn't work properly on ZFS. It can be horribly inaccurate since it doesn't understand ZFS snapshots, quotas, reservations, etc.
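
For example, on ZFS you would ask ZFS itself what is using the space rather than trusting du (the dataset name below is just an example, not yours):

Code:
$ zfs list -o space tank/scratch   # AVAIL, USED, USEDSNAP, USEDDS, etc. per dataset
$ zfs get used,usedbysnapshots,quota,reservation tank/scratch
$ du -sh /mnt/tank/scratch         # can disagree badly with the numbers above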

I'm not sure why you are disagreeing with me. Minute-to-minute doesn't matter as much as the trend over a period of hours or days. If usage is 30.000GB and a minute later it's 30.001GB, that's inconsequential. Even if it jumps from 30GB to 40GB in a minute, that's not too important unless there's some kind of problem causing the rapid increase in disk usage and that's specifically what you are trying to identify.

Overall, at the end of the day that chart is most useful for guesstimating when you're going to hit that 80% line (which is when you should start looking at buying more space). And at 95% you've failed to properly allocate enough disk space for your task and you are going to see irreparable fragmentation of new writes.

In your case, you are using it as scratch space (which is obvious from the name). Even the graph you provided only covers about an hour. But the fact that it's getting as full as it is means you should be expanding your pool size.
 

par (Explorer)
This isn't ZFS! And du in this case is accurate.

I can't expand my pool; my scratch space for temporary writes is a small SSD! This is a big problem for me. Maybe it isn't a big problem with a huge array, but that has absolutely nothing to do with this case. Obviously this is a tiny volume, and it's all I have to work with here.
 

cyberjock (Inactive Account)
My apologies for thinking you were using ZFS. UFS support has already been pulled from the next version of FreeNAS, so I don't even consider UFS anymore; it's basically dead to me. Anyone using UFS will either be stuck on a FreeNAS version that will become outdated and accumulate security risks, or will have to migrate to ZFS.

You can't expand your pool because UFS isn't a pool; ZFS uses pools. So I'm a little confused about everything you are saying right now. Maybe you should start over, as this thread is 5 months old.

Are you showing me the graph of du and the chart from the same time frame to show that they don't match up? What happens if you do a df of /mnt/Scratch?

But I'm not seeing what your problem is or what you think is a problem just by reading this thread.
 

par (Explorer)
I have a UFS volume and a ZFS pool. I write temporary data to the UFS volume and then move it to the ZFS pool. If I do this enough, FreeNAS thinks the UFS volume is full and out of disk space when in fact it is not. When I reach that point I start having all sorts of problems. I'm not sure how to be any clearer about this error.
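
Roughly, the workflow looks like this (the pool and file names below are placeholders, not my real ones):

Code:
$ mv /mnt/Scratch/download.tmp /mnt/Tank/downloads/   # finished download moves to the ZFS pool
$ df -h /mnt/Scratch/   # Used should drop back down afterward, but it just keeps climbing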
 

cyberjock (Inactive Account)
OK, please answer my questions above.
 

par (Explorer)
cyberjock said: "Are you showing me the graph of du and the chart from the same time frame to show that they don't match up? What happens if you do a df of /mnt/Scratch?"


Same time frame. This is what I get with df; I think this is the root of the problem. It is certainly not at 89% capacity as reported. It's as if it keeps accumulating disk space usage without ever releasing it when data is moved or deleted.

Code:
$ df /mnt/Scratch/
Filesystem  1K-blocks    Used  Avail Capacity  Mounted on
/mnt/Scratch  54726976 44919108 5429712    89%    [restricted]
 

cyberjock (Inactive Account)
Just to ask a stupid question... are you sure you aren't on ZFS? This sounds like a classic case of snapshots. It might not be, but I've got almost no experience with UFS, so I don't have much more advice at this point. :(
 

par (Explorer)
It doesn't say it's ZFS anywhere.

I guess my question really is: how do I manually trigger this process (as pictured), or rather, why isn't it done automatically? Maybe there is some configuration option.
 

Attachments: reset.png

cyberjock (Inactive Account)
At the CLI, type "zpool status Scratch". If a "Scratch" pool is listed, then it's definitely a zpool.

If it's a zpool, then whatever process is holding onto your data is going to be something set up by your admin (most likely you, it sounds like). You probably have snapshots or something similar configured that is preventing the immediate release of space when data is deleted from the pool. I can tell you that I just tested this in a VM, deleting data from a pool and from a UFS drive that were 90% full, and the space was freed within 20 seconds.
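
Something along these lines should tell you which case you're in (exact output will vary):

Code:
$ zpool status Scratch                        # "cannot open 'Scratch': no such pool" means it is NOT a zpool
$ zfs list -t snapshot                        # if it is a zpool, snapshots holding onto space show up here
$ zfs get reservation,refreservation Scratch  # reservations can also pin space on a zpool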
 

cyberjock (Inactive Account)
Well, you're definitely on UFS. All I can guess is that you have some kind of snapshots, reservations, *something* holding onto that data. I'm all but 100% sure this is something you've configured in a way that makes it behave like this, but I can't tell you what you did wrong or how to fix it.
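
One thing worth checking, though this is just a guess on my part: a process in the jail that still has deleted files open will keep df counting that space as used even though du can no longer see the files. fstat is in the FreeBSD base system; the mount point below is yours:

Code:
$ fstat -f /mnt/Scratch   # list every process with a file open on that filesystem (run as root to see them all)
$ df -h /mnt/Scratch/     # if the space comes back after stopping the offending process, that was it
$ du -sh /mnt/Scratch/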

Had a similar story in IRC last week. Someone complained that their box kept rebooting every 5 minutes. They were pissed, and they had just bought a whole bunch of hardware from our recommended list. The problem... he had enabled the watchdog timer in the BIOS without understanding how it works, how it would affect his system, or how it might not work properly. He enabled it without understanding any of that, then was horrified to find his server wouldn't work because of his own lack of understanding. I knew his problem right away because I know that the watchdog on that hardware reboots at 5 minutes. ;)

I'm guessing you've set something you don't fully understand and it's biting you in the butt. This isn't an insult; we've all had those stupid moments. Unfortunately you're going to have to go back, take a deep look at how you have everything set up, and see if you can find the one thing you've configured that you might not understand as well as you think you do.

As a desperate maneuver you could just get rid of the UFS drive and make it a new ZFS pool. Since it'll be a separate (and brand-new) pool, it shouldn't have this problem. ;)
 

par (Explorer)
Rather than trying to understand what exactly was going on...

Issue resolved by wiping UFS and using ZFS instead. :rolleyes:

I didn't even realize this was an option when I first set it up; I hadn't understood the difference between "Import Volume" and "ZFS Volume Manager"...
 