Volume space unused after adding disks

Status
Not open for further replies.

popacio

Dabbler
Joined
Sep 1, 2013
Messages
12
Hello,

I am relatively new at FreeNAS and I would like some savvy help.

I have recently upgraded my system by adding a new set of disks.
I use FreeNAS-9.3-STABLE-201511280648.

The thing is, after I extended the volume the space increased, but not to full capacity. The full capacity in 3x RAIDZ1 should be around 38.14 TB, yet FreeNAS shows a volume size of 37.6. I can live with that, though it's half a terabyte less. But the dataset is actually only 30.1 TB. More than 7 TB are missing. I am not sure what to do.

There are also some bogus datasets and jails that appeared after upgrading the system to version 9.3. Probably wizard junk. Not sure how to get rid of them either.

Can anyone help with this issue? I have attached screenshots with the configuration.

Untitled-1.jpg

Untitled-2.jpg

Thanks in advance.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Actually, I calculated it should be a bit more than 35 TiB, and the GUI shows 35.1 TiB, so everything looks good.

You are using 30.1 TiB and you have 5.0 TiB free.

BTW, it's not recommended to go over 85% usage; you should do something about that.
 

popacio

Dabbler
Joined
Sep 1, 2013
Messages
12
Thanks for replying.

I recalculated myself and the right amount for 40,000 Mb (the total drive capacity erroneously and stubbornly stated by manufacturers) is 36.38 TB (8 x 3,725.29 GB + 4 x 1,862.65 GB / 1024). You are wrong; I was also. However, the volume clearly shows 37.6 TB used and 7.7 TB available. Why the 7.5 TB difference? How about that? How do you explain it? I simply can't. Any idea why the difference in dataset size?

How about the other problem regarding the bogus datasets and jails? Any ideas?

I know about the warning, but you are wrong about the 85% recommended value. In fact the recommended value is clearly 80%, as stated by the warning FreeNAS gives. By the way, why this limitation, and what are the consequences? However, there is nothing to do about it right now but throw more money at the problem. Besides, I've been using it previously at 98% without any noticeable problem.

Thank you for you attention.

Anyone have any ideas...?
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
No, I'm not wrong, because I've taken the ZFS overhead into account :)

Please read this post for the details on the numbers in the first two lines of this tab.

Why did you run the wizard if you just wanted to reinstall and import your config? Anyway, if you don't use the dataset, just select it and click the delete button :)

Yes, it should be 80%, but I inattentively copied the 85% number from the screenshot :p If you go over 90% ZFS switches from speed optimization to space optimization, and if you fill the pool to 100% you'll be in big trouble (especially if you don't know how to use the CLI) because ZFS is a CoW FS, so you won't be able to delete files to make some space... the 80% rule is there to give you time to do something so you don't hit the 90% threshold.

98 % is more or less on the edge of insanity...
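To sketch the thresholds above in a few lines of Python (a toy illustration of the rules of thumb discussed in this thread, not actual FreeNAS code):

```python
def pool_usage_status(used_fraction):
    """Map pool usage to the rough consequences discussed above."""
    if used_fraction >= 1.0:
        return "full"      # CoW: even deleting files can fail
    if used_fraction >= 0.9:
        return "degraded"  # ZFS switches from speed to space optimization
    if used_fraction >= 0.8:
        return "warning"   # FreeNAS alert: time to plan for more space
    return "ok"
```

The 80% warning exists precisely so you have time to act before the 90% behavior change.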
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
How can you be missing 7 TB if you have 5 TB available to you? @Bidule0hm's calculations are correct. FreeNAS also has some overhead that's out of our control. There's no silver bullet that's going to give you the space you want.

As far as usage goes, again, @Bidule0hm knows that the warning will come on at 80%. The exact number is subject to interpretation and depends on one's configuration. iXsystems has chosen to show an alert at 80%. If you were using iSCSI, we'd recommend you stay under 50%.

At 98% usage, you are cruisin' for a bruisin'. Due to ZFS' copy on write, you could easily find yourself out of disk space. And, once that happens, trying to recover from that condition isn't intuitive.

Oh, by the way, the system will come to a crawl, making it even harder to fix the problem. I remember one thread where someone had a lot of snapshots and ran out of space. As I recall, it took him weeks to recover, since the delete process (with a full pool) was so slow.
 

popacio

Dabbler
Joined
Sep 1, 2013
Messages
12
@gpsguy indeed you are right about the calculation... I already saw that. Yet I am still missing 2 TB? According to the math on what FreeNAS shows, that is. Ain't I? Why the difference between volume and dataset size? Between 37.6 TB and 30.1 TB there's like 7.5 TB missing (20%)!

Forgive me all my ignorance and naivety, but please don't tell me that I have to add to the 10% loss due to the manufacturer another 15% due to ZFS overhead. Are you kidding me? That's 25% of space "lost". That's like... insanely wasteful. Seriously, someone should do something about this.

"Why did you run the wizard if you just wanted to reinstall and import your config?" What do you mean? The wizard starts by default. Besides, who told you I was going to do that?

I was cruisin' at 98% for a year without any problems. I really don't know what you are talking about. No crawl at all. And how can you not be able to delete with a volume full? That is just insane. Who would make a filesystem like that?

Can anyone give me a meaningful answer? I simply can't grasp that we actually use 60% of the advertised drive space ((15% loss due to manufacturer + 10% ZFS overhead) * 80% recommended load = 60%). That's also insanely wasteful. And that is not just me craving the silver bullet. The statement about 50% in case of using iSCSI just makes me wonder why on earth someone would consider ZFS a sane solution. In case this is true, of course.

I might be naive or just don't understand the basics. Can anyone let me know?

@Bidule0hm How did you take that into account? What is the formula for the overhead that you calculated? I simply used unit conversions to get my result. How did you arrive at yours?

"If you don't use the dataset just select it and click the delete button". Are you sure? Of course that was my first thought also. But I need to be sure.

About all the other issues has anyone got an answer?
 
Last edited:

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yet I am still missing 2 TB? According to the math on what FreeNAS shows, that is. Ain't I? Why the difference between volume and dataset size? Between 37.6 TB and 30.1 TB there's like 7.5 TB missing (20%)!
Forgive me all my ignorance and naivety, but please don't tell me that I have to add to the 10% loss due to the manufacturer another 15% due to ZFS overhead. Are you kidding me? That's 25% of space "lost". That's like... insanely wasteful. Seriously, someone should do something about this.

Did you read my two previous posts here and here?

"Why did you run the wizard if you just wanted to reinstall and import your config?" What do you mean? The wizard starts by default. Besides, who told you i was going to do that?
You did, actually:
There are also some bogus datasets and jails that appeared after upgrading the system to version 9.3. Probably wizard junk. Not sure how to get rid of them either.


I was cruisin' at 98% for a year without any problems. I really don't know what you are talking about. No crawl at all.
Probably because the network was your main bottleneck.

And how can you not be able to delete with a volume full? That is just insane. Who would make a filesystem like that?
Read about copy on write. It has some disadvantages, but there are big advantages too. As long as you don't do really stupid things like fill the pool to 100% there are no problems.

((15% loss due to manufacturer + 10% ZFS overhead) * 80% recommended load=60%)
That's wrong, on both counts. The manufacturer loss ratio is 0.9095, so less than 10%. ZFS overhead is about 1.6% for metadata, and there are other overheads too, but the total is less than 5% (unless you have a misaligned pool). The 80% is a warning; you can still use the pool to about 90% without having problems.
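To put numbers on those overheads (a rough Python sketch; the 1/64 metadata figure is the approximation mentioned in this thread, and real overheads vary with pool layout):

```python
TB = 10**12   # decimal terabyte (what drive vendors advertise)
TIB = 2**40   # binary tebibyte (what the GUI reports)

# The "manufacturer loss" is only the TB -> TiB unit change, a bit under 10%:
manufacturer_ratio = TB / TIB          # ~0.9095

# ZFS reserves roughly 1/64 of the pool for metadata (~1.6%):
metadata_fraction = 1 / 64

def usable_tib(raw_tb):
    """Rough usable TiB after unit conversion and metadata overhead."""
    return raw_tb * manufacturer_ratio * (1 - metadata_fraction)
```

For a 40 TB net pool this lands a little under 36 TiB, which is why the GUI's 35.1 TiB is in the expected ballpark.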

I might be naive or just don't understand the basics. Can anyone let me know?
We do, and have for about four posts now, but you don't take the time to read what we write, and then you still say we are wrong... well, we can't do anything about that.

How did you take that into account? What is the formula for the overhead that you calculated? I simply used unit conversions to get my result. How did you arrive at yours?
I read about how ZFS works. That's how I know there's 1/64 (~1.6 %) of space taken by the metadata for example.

"If you don't use the dataset just select it and click the delete button". Are you sure? Of course that was my first thought also. But I need to be sure.
You're trolling here, no?

About all the other issues has anyone got an answer?
What other issues?
 
Last edited:

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Do you think @Bidule0hm and I make this stuff up?

Here's an example iSCSI system being used by one of our mods - https://forums.freenas.org/index.php?threads/i-o-performance-planning.40002/#post-250112 Study the message. Look at the last paragraph. Is he complaining?

Compared to the cost of an enterprise SAN, FreeNAS is a bargain.

The statement about 50% in case of using iSCSi just makes me wonder why on earth someone would consider ZFS a sane solution. In case this is true, of course.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Why the difference between volume and dataset size? Between 37.6 TB and 30.1 TB there's like 7.5 TB missing (20%)!
Because the "volume" size shows the total capacity on all your drives, including the space used for parity, while the "dataset" size shows the "net" capacity. Think of it this way, and ignore ZFS for a moment. If you have three 2 TB drives in a RAID5 configuration, the total space of those three drives is 6 TB, but the space you can use is 4 TB. You haven't "lost" 2 TB; you're using (or reserving) it for parity. That's the difference between the "volume" and the "dataset" size.
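A minimal sketch of that gross-vs-net arithmetic (single-parity array, ZFS overhead ignored):

```python
def single_parity_capacity(n_drives, drive_tb):
    """Gross vs. net capacity for RAID5/RAIDZ1: one drive's worth
    of space is reserved for parity."""
    gross = n_drives * drive_tb
    net = (n_drives - 1) * drive_tb
    return gross, net

# The example above: three 2 TB drives -> 6 TB gross, 4 TB net
gross, net = single_parity_capacity(3, 2)
```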
And how can you not be able to delete with a volume full? That is just insane. Who would make a filesystem like that?
Who would do that? Sun Microsystems/Oracle, that's who. Why? Because disk space is cheap (even when ZFS was created, and much more so now), and copy-on-write brings a lot of advantages to the table. Will it work well for you? That's up to you to decide, but you'd be well-advised to understand what it's actually doing, and why, first.
 

popacio

Dabbler
Joined
Sep 1, 2013
Messages
12
I did read with attention all that was posted before. Every single line! And it should have been evident, since I replied to the posts you all made. Also, I think I have documented myself reasonably with the ZFS and FreeNAS documentation available, as I have been using FreeNAS for some years now, despite the obvious fact that I am not a professional. However, I could do well without the condescension... I came here thinking I would get disinterested advice, not to get scolded. I will stop here and make no further comments on the ups and downs of ZFS, since it is obvious to me they won't be received well. Neither shall I comment on the few replies I have got above, because I don't believe it will serve any purpose.

And BTW, I don't doubt anyone's probity in posting their answer. I just want an explanation that makes sense to me. I still have not received a clear, well-argued, and direct answer to my bewilderment: why the size of the ZFS volume is 37.6 TB while the size of the dataset is 30.1 TB, and why the difference in usable space between volume and dataset (7.7 TB - 5.0 TB = 2.7 TB)?

@danb35
Thank you. I think you cleared up my bewilderment. I clearly understood the concepts of "net size" and "parity reservation", having used RAID previously. I thought it was obvious from my calculations above. I have three arrays in the pool, five drives each, and in my calculations I had accounted for only 12 drives instead of the 15 available. That is, I have a total physical drive capacity of approx. 48 TB and a "net size" of about 40 TB, not accounting for manufacturer loss and ZFS overhead. Despite that, FreeNAS shows a volume net size of 37.6 TB + 7.7 TB and a dataset (usable) net size of 30.1 TB + 5.0 TB, this time taking those things into account. There is a difference of 10.2 TB, almost 20% of the total physical space. And this cannot be explained by "manufacturer loss" or by ZFS overhead. The only thing that can explain it is parity reservation. If I understood correctly, then, the volume size includes the space reserved for parity, while the dataset does not. However, this is a strange and confusing way to present the data on the volume. Am I correct about this? In case this is true, that should have been the simple answer I was expecting from here.

About the "other issues", never mind. I'll just delete the bogus dataset and jail. Hopefully I won't lose my data.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
That is, I have a total physical drive capacity of approx. 48 TB and a "net size" of about 40 TB, not accounting for manufacturer loss and ZFS overhead. Despite that, FreeNAS shows a volume net size of 37.6 TB + 7.7 TB and a dataset (usable) net size of 30.1 TB + 5.0 TB, this time taking those things into account.
That sounds about right. You say your total drive capacity is 48 TB, or ~43.6 TiB; your view disks screen shows 50 TB (10 x 4 TB + 5 x 2 TB), which would be 45.5 TiB. FreeNAS shows 45.3 TiB (37.6 + 7.7). For net capacity, you figure about 40 TB, or 36 TiB; your actual is 35.1 TiB. In short, the numbers it's showing are exactly what you should expect.
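Those conversions are easy to check (a quick sketch, assuming the drive counts above: ten 4 TB and five 2 TB disks in three 5-disk RAIDZ1 vdevs):

```python
def tb_to_tib(tb):
    """Decimal terabytes (vendor units) to binary tebibytes (GUI units)."""
    return tb * 10**12 / 2**40

raw_tb = 10 * 4 + 5 * 2      # 50 TB across all fifteen drives
raw_tib = tb_to_tib(raw_tb)  # ~45.5 TiB; the GUI shows 45.3 (37.6 + 7.7)

# Net: each RAIDZ1 vdev gives up one drive's worth of space to parity
net_tb = 2 * (4 * 4) + 4 * 2  # two 5x4TB vdevs + one 5x2TB vdev -> 40 TB
net_tib = tb_to_tib(net_tb)   # ~36.4 TiB; the GUI shows 35.1 (30.1 + 5.0)
```

The remaining gap between ~36.4 and 35.1 TiB is the small ZFS overhead discussed earlier in the thread.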

Yes, I agree it's a somewhat confusing way to show this information--I've lost count of how many times I've explained it since 9.3 was released.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I did read with attention all that was posted before. Every single line! And it should have been evident, since I replied to the posts you all made. Also, I think I have documented myself reasonably with the ZFS and FreeNAS documentation available, as I have been using FreeNAS for some years now, despite the obvious fact that I am not a professional. However, I could do well without the condescension... I came here thinking I would get disinterested advice, not to get scolded. I will stop here and make no further comments on the ups and downs of ZFS, since it is obvious to me they won't be received well. Neither shall I comment on the few replies I have got above, because I don't believe it will serve any purpose.

You can make negative comments on ZFS, but only if you know what you're talking about. Saying "that's crap, I can't delete files when it's 100% full" when you don't know how a CoW FS works, or even what it is, will not be well received...

I still have not received a clear, well-argued, and direct answer to my bewilderment: why the size of the ZFS volume is 37.6 TB and the size of the dataset is 30.1 TB.

You did: https://forums.freenas.org/index.ph...-unused-after-adding-disks.40042/#post-250445 and BTW the sizes aren't 37.6 and 30.1 but 45.3 (37.6 + 7.7) and 35.1 (30.1 + 5.0).
 
Last edited:

popacio

Dabbler
Joined
Sep 1, 2013
Messages
12
That sounds about right. You say your total drive capacity is 48 TB, or ~43.6 TiB; your view disks screen shows 50 TB (10 x 4 TB + 5 x 2 TB), which would be 45.5 TiB. FreeNAS shows 45.3 TiB (37.6 + 7.7). For net capacity, you figure about 40 TB, or 36 TiB; your actual is 35.1 TiB. In short, the numbers it's showing are exactly what you should expect.

Yes, I agree it's a somewhat confusing way to show this information--I've lost count of how many times I've explained it since 9.3 was released.

Thank you. I get it now. Glad to see it's not just me who doesn't understand the way FreeNAS presents the configuration. I consider the matter solved.
 

popacio

Dabbler
Joined
Sep 1, 2013
Messages
12
You can make negative comments on ZFS, but only if you know what you're talking about. Saying "that's crap, I can't delete files when it's 100% full" when you don't know how a CoW FS works, or even what it is, will not be well received...



You did: https://forums.freenas.org/index.ph...-unused-after-adding-disks.40042/#post-250445 and BTW the sizes aren't 37.6 and 30.1 but 45.3 (37.6 + 7.7) and 35.1 (30.1 + 5.0).

BTW... You obviously didn't read my previous post. And you quote things I didn't say. I still believe making a FS that cannot delete files once it's full is irresponsible and plain wrong. I can't see how anything could change my opinion. You certainly did not make the case for that. Besides, your statements and subsequent comments are beside the subject of my inquiry. I would appreciate it if you'd stick to answering the asked questions, if you can, instead of unjustifiably scolding me for my opinions. I don't appreciate the accusations of not reading the replies, or of trolling, either. That is simply impolite.
That being said, I already thanked danb35 for answering my question before you posted.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I still believe making a FS that cannot delete files once it's full is irresponsible and plain wrong. I can't see how anything could change my opinion.
Then you should really investigate using something other than FreeNAS, because that's the way ZFS works, and ZFS is what FreeNAS uses. It's inherent to a copy-on-write filesystem, which (like anything else) has advantages and disadvantages. I don't think @Bidule0hm is going to bother to defend ZFS to you (and I know I'm not), because it's already been done elsewhere, by people who understand it better. He who has an ear to hear, let him hear.

You're being "scolded", as you put it, because you persist in doing things with your system that are detrimental to the long-term performance, stability, and data security of your server. The people here, by and large, care about their data, and assume anyone else using FreeNAS also cares about theirs, so we will warn people (persistently, if necessary) when they're doing things that are potentially detrimental to their data.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yep, exactly what you've said @danb35 ;)

And you quote things i didn't say.
I quoted exactly what you've said in your posts, you just can't deny that.

I can't see how anything could change my opinion. You certainly did not make the case for that. Besides, your statements and subsequent comments are beside the subject of my inquiry. I would appreciate it if you'd stick to answering the asked questions, if you can, instead of unjustifiably scolding me for my opinions.
I answered all of your questions (and even linked the answers, which were right above, BTW, when you said I didn't), only to be thanked with "you're wrong", "you didn't answer that", "you don't know"... far more than once (and moreover you didn't know what you were talking about). And you do it again (yes, the "if you can")...

I don't appreciate either the accusations of not reading the replies or even trolling.
Those were questions, not accusations.

i already thanked danb35 for answering my question
Yeah, all you wanted was to see someone here say it's not your fault but ZFS's, the GUI's, FreeNAS's, whatever... fault. All the others are wrong, I know.

Well, TL;DR: I won't answer any of your future questions; I don't have that much free time to lose.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
^^ Ditto what @Bidule0hm said, and the same for the way I felt treated.

BTW, we forgot to tell you that if you used the web GUI to create your pool, the default is to create a 2 GB swap partition on each of your drives. So that accounts for 30 GB (15 x 2 GB) of the FreeNAS overhead.
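That's simple to tally (assuming the default 2 GB swap per disk mentioned above):

```python
drives = 15
swap_gb_per_drive = 2                       # FreeNAS GUI default at pool creation
swap_total_gb = drives * swap_gb_per_drive  # 30 GB not available to the pool
```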
 