I am so confused by the storage tab


mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
Maybe I'm not looking at it the right way, but I can't make heads or tails of the storage view. I guess I'll just list the issues I'm having and hopefully it's not too difficult to get me oriented. I'm running v11 in a Hyper-V VM for testing purposes, using the original UI.

1. I created six 10GB drives and I'd like to create a RAID 10 because that's what we'll use in production (actually, we have a production server that I've inherited and I'm trying to understand how it's configured). If I'm understanding correctly, six 10GB drives in RAID 10 should yield a 30GB volume, right? So first I went to the Volume Manager, created a volume called "raid10" as a mirror of two drives, saved it, went back into the Volume Manager, extended raid10 by adding another mirror, and then did the same for the last two drives. Is this now a RAID 10 array? Just because there are now three mirrors listed in Volume Status below the volume name "raid10", does that mean the volume is striped across the mirrors? This is the way I've seen others make RAID 10 arrays, so I'm assuming I did it right.

Then it seems I can also go into the Volume Manager, name my RAID, and create a 2x3 volume layout, and it says "mirror" - is this RAID 10?

[attached screenshot: Capture.JPG]


2. Why does it show the capacity as 24GB? Is that because 24GB is 80% of 30GB? But it seems like the 80% rule applies to creating zvols and not wanting to use more than 80% - and you can still override that when creating the zvol if you want to deal with the consequences - so I don't understand the capacity being shown.

3. How do I know that I actually have a RAID 10? It seems like using either of the methods above yields the following view when I go to volume status:

[attached screenshot: Capture.JPG]


I understand that it's not really RAID 10 but rather RAID 1 + 0, but if this is in fact a RAID 10 (1+0), there should be some indication showing that the mirrors are striped. The listing above just shows (to my FreeNAS-untrained eyes) that my volume contains three mirrors. Maybe it's impossible to have three mirrors in a volume without them being striped, but it would be nice if it were more clearly noted that drives listed under a volume are striped... it's just not very intuitive to the beginner. Yes, you could argue that a beginner shouldn't be playing with a NAS, but hey, I gotta start somewhere ;)

4. The main storage page makes no sense to me. If I actually created a RAID 10 using six 10GB drives, shouldn't it be 30GB (60/2)? This is what the main storage page looks like:

[attached screenshot: Capture.JPG]


As you can see it says 23.8 - where is that number coming from? And what does 9.2MB used mean? Is that just space that's stolen for management purposes? If so, I'm cool with that...

5. I'm trying to create a zvol for my VM witness, it should be 1GB - I create that and now it says 1GB - that makes sense (yay, something makes sense to me!), it also makes sense that the dataset says 1GB used, but why doesn't the volume show 1GB used (or 1.0094GB to be exact)? That would make more sense because the dataset is essentially using up space contained in the volume, right?

6. If I started with 23.8GB available (which still doesn't make sense to me) and I use 1GB, how did I end up at 22GB available for the dataset? Rounding error?

7. And if I used 1GB for the zvol, why did the space available jump up to 23.1? Shouldn't it be the same as the dataset? I still don't understand exactly what a dataset is, but I'm just accepting that it's a layer that sits between the volume and the zvols...

8. When I go to create another zvol, I want to create it using all the available space (is it 22.0 or 23.1?), but when I try to create a zvol at either of those sizes, I get an error:

[attached screenshot: Capture.JPG]


I know about the 80% rule (per 8.1.4 in the manual), so I should only create a zvol that is 80% of the available capacity (of the dataset?), which would mean I should use 17.6GB? Just for kicks I used 20GB and it let me through without having to select the "force size" checkbox - that doesn't make sense... or was the 80% already accounted for when I created the volume? It would be really helpful if the UI said something like "x amount of space is available for use under the 80% rule; otherwise, select the 'force size' checkbox".


I think that's about all I have so far. I'm really trying to understand this and I've been staring at it for a while now, watching videos, reading posts, reading the manual, creating a test VM environment, and I just can't make sense of this. I really tried to RTFM before making this post. I appreciate anyone's help!!

Thanks!
Mike
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Mike,
You are on a topic that none of us like to talk about because it is not a simple, straightforward topic. There have been a lot of discussions about how FreeBSD reports available/free/used/reserved space, and it's no fun to hash it over and over again. As for why you have 24GB instead of 30GB, I think in general it is just easier to accept that the space was consumed by pool creation overhead. I suspect you are overlooking, or are not aware of, the default swap size of 2GB per drive; with your three mirrors that works out to 6GB of usable pool capacity lost to swap (deer-in-the-headlights stare). Also, pool creation does not take into account that you might later create a zvol; the 80% rule only comes into play when you create the zvol, which happens after the pool exists, so it has no effect during pool creation.
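If you want to check this yourself from a shell, something along these lines should show it (just a sketch; device names and exact numbers will differ on your system):

Code:
# Rough arithmetic for the "missing" 6GB, assuming six 10GB drives and the
# default 2GB swap partition on each data disk:
#   per-disk ZFS partition: 10GB - 2GB swap = ~8GB
#   three 2-way mirrors of ~8GB, striped: 3 x 8GB = ~24GB usable
gpart show          # look for a freebsd-swap partition on each data disk ahead of the freebsd-zfs partition
zpool list raid10   # SIZE reflects what is left after the swap partitions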

If the swap explanation is a good enough answer then look no further, but if you really want to dig into the way space is reported, continue.

I would recommend that you do a Google search for something like "freenas reporting incorrect size" or similar to see what jumps off the screen. I know there are some explanations out there that are good. You will also see threads for NAS4Free and FreeBSD with the same confusion, since they are all based on FreeBSD.

As for your question about whether you actually created a RAID10 setup, the proof is in the output of your step #3 above - specifically, the formatting of that output. You have three mirrors indented under your pool name.
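From a shell, zpool status shows the same layout the GUI does; for your test pool the output should look roughly like this (gptid labels shortened, just a sketch):

Code:
# zpool status raid10
  pool: raid10
 state: ONLINE
config:

        NAME            STATE     READ WRITE CKSUM
        raid10          ONLINE       0     0     0
          mirror-0      ONLINE       0     0     0
            gptid/...   ONLINE       0     0     0
            gptid/...   ONLINE       0     0     0
          mirror-1      ONLINE       0     0     0
            gptid/...   ONLINE       0     0     0
            gptid/...   ONLINE       0     0     0
          mirror-2      ONLINE       0     0     0
            gptid/...   ONLINE       0     0     0
            gptid/...   ONLINE       0     0     0

Three mirror vdevs sitting at the same indent level directly under the pool name means the data is striped across all three mirrors - that is your RAID10.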

I hope this helps some. And I did see your question about replacing your SSDs: you can replace the 1TB drives with 2TB drives, just follow the replacement procedure in the user guide, but realize that the extra space will not show up until all the drives in the vdev are the same size. In other words, every drive in a vdev acts like the smallest drive in that vdev. And you have a very large pool of data, yikes!
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
there should be some indication showing that the mirrors are striped.
The indication is in the knowledge that striping is always what ZFS does. Each mirror is a vdev. When a pool contains more than one vdev, the vdevs are always striped together. That's just part of how ZFS works.
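If it helps to see it without the GUI, this is conceptually what the Volume Manager did (a sketch with made-up device names; FreeNAS itself partitions the disks and uses gptid labels rather than raw devices):

Code:
# Create a pool from one mirror vdev, then add two more mirror vdevs.
# There is no separate "stripe" step - every vdev added to a pool is
# automatically striped with the existing vdevs.
zpool create raid10 mirror da1 da2
zpool add raid10 mirror da3 da4
zpool add raid10 mirror da5 da6
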
But it seems like the 80% rule applies to creating zvols
My understanding is that if you're using block devices (i.e., zvols), you really don't want to use more than 50% of your pool capacity.
 

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
Mike,
You are on a topic that none of us like to talk about because it is not a simple, straightforward topic. There have been a lot of discussions about how FreeBSD reports available/free/used/reserved space, and it's no fun to hash it over and over again. As for why you have 24GB instead of 30GB, I think in general it is just easier to accept that the space was consumed by pool creation overhead. I suspect you are overlooking, or are not aware of, the default swap size of 2GB per drive; with your three mirrors that works out to 6GB of usable pool capacity lost to swap (deer-in-the-headlights stare). Also, pool creation does not take into account that you might later create a zvol; the 80% rule only comes into play when you create the zvol, which happens after the pool exists, so it has no effect during pool creation.

Ok, the swap thing makes sense and thank you for informing me of that. But that's something that could easily have been mentioned on the Volume Manager dialog - it's already calculating that value in real time, why not let the user know what's going on and avoid needless confusion?

I would recommend that you do a Google search for something like "freenas reporting incorrect size" or similar to see what jumps off the screen. I know there are some explanations out there that are good. You will also see threads for NAS4Free and FreeBSD with the same confusion, since they are all based on FreeBSD.

So if it is known that sizes are being displayed incorrectly, then there should be a note on the storage page that says "Reported sizes may be inaccurate due to the way that FreeBSD reports space to FreeNAS." While nobody likes to say that they're showing something wrong, doing it wrong and not telling anybody is even worse.

Now referring back to my post, is there a way to figure out how much space I actually have on my volume so I know how big to make my zvol? I don't mind going to the command-line if I have to, but there's gotta be some kinda way to figure out how to maximize the size of my zvol.

As for your question about whether you actually created a RAID10 setup, the proof is in the output of your step #3 above - specifically, the formatting of that output. You have three mirrors indented under your pool name.

Just so we're clear, you're confirming that it is in fact a RAID10 setup?

I hope this helps some. And I did see your question about replacing your SSDs: you can replace the 1TB drives with 2TB drives, just follow the replacement procedure in the user guide, but realize that the extra space will not show up until all the drives in the vdev are the same size. In other words, every drive in a vdev acts like the smallest drive in that vdev. And you have a very large pool of data, yikes!

Thanks for that info! That does lead me to another area of confusion - how come the Volume Status page doesn't show the drive sizes (or serial numbers)? If I'm trying to upgrade my volume with new drives, I have to cross-reference the View Disks page to figure out which drive is where. The Volume Status page could be so much more useful if it showed which drive is where and which drives still need to be upgraded; having to cross-reference another page just leaves more room for error.

I get that some of my suggestions are more complicated to implement than they may seem (I do some programming, so I understand there may be difficulty behind my requests), but some of this stuff really could be presented in a much clearer and more informative manner with very little effort. My goal in mentioning these items is to hopefully ease the learning curve for this system, because right now it seems needlessly complex just due to a lack of information in the UI.
 

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
The indication is in the knowledge that striping is always what ZFS does. Each mirror is a vdev. When a pool contains more than one vdev, the vdevs are always striped together. That's just part of how ZFS works.

Cool, thanks for the info! I think that if members of a volume are always striped, having informative text on the Volume Status page stating that would be helpful, because it definitely isn't intuitive. And I think this is really important because this is an area where you don't want to be making guesses - it should be very clear and obvious how something is configured, because a misunderstanding here can have serious consequences.

My understanding is that if you're using block devices (i.e., zvols), you really don't want to use more than 50% of your pool capacity.

Ok, and that makes sense to me - it was just unclear whether that 80% was already being taken into account during volume creation (it wasn't - that was the swap allocation) or whether it applies during zvol creation. But it seems it doesn't do any of that for you; it lets you max out the zvol size and leaves it up to the admin to manage the size... which I'm totally fine with.

Thank you for the clarifications.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
why not let the user know what's going on and avoid needless confusion?
It's in the user manual.
So if it is known that sizes are being displayed incorrectly,
The sizes are not being reported incorrectly; you have mistaken my suggested search phrase for an admission of a real problem. The size data is accurate provided you understand what is being conveyed by FreeBSD/FreeNAS.
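If you want to see how the numbers break down for yourself, run something like this from a shell (a sketch; the exact columns and values depend on your version and what you have created):

Code:
zpool list raid10          # raw pool size, allocated and free space at the pool level
zfs list -o space raid10   # AVAIL/USED broken out into snapshots, refreservations and children
zfs list -t all -r raid10  # every dataset and zvol in the pool with its own USED/AVAIL
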
Just so we're clear, you're confirming that it is in fact a RAID10 setup?
Yes.
Thanks for that info! That does lead me to another area of confusion - how come the Volume Status page doesn't show the drive sizes (or serial numbers)? If I'm trying to upgrade my volume with new drives, I have to cross-reference the View Disks page to figure out which drive is where. The Volume Status page could be so much more useful if it showed which drive is where and which drives still need to be upgraded; having to cross-reference another page just leaves more room for error.

I get that some of my suggestions are more complicated to implement than they may seem (I do some programming, so I understand there may be difficulty behind my requests), but some of this stuff really could be presented in a much clearer and more informative manner with very little effort. My goal in mentioning these items is to hopefully ease the learning curve for this system, because right now it seems needlessly complex just due to a lack of information in the UI.
You need to keep in mind that this is not a community-driven project but a free product from iXsystems. It's up to those developers to make things work nicely. In reality the present GUI is vastly improved over FreeNAS 8.0. Back then you really needed to read the manual and other FreeBSD material, and with a lot of effort you could figure it out. It was not for the faint of heart.

So while I understand many of your criticisms of FreeNAS, I would suggest that if you want to see change, submit a bug report or feature request. I understand that you are in a situation where someone set up this massive FreeNAS system and you have now inherited it. Sounds like you will be the next FreeNAS guru at your company. I would make sure you train someone else. Something else I find helpful is to write your own user guide on how to perform routine maintenance. It will help the next time you need to replace a drive or whatever else you need to do.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
To add to what @joeschmuck said, you might want to download a copy of the docs using the link in my signature.

I keep a copy on my smartphone for reference.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
@mikesoultanian And if I sounded like I came off a bit harsh, that wasn't my intention. I'm actually a sweet guy, just ask all the girls at work, they love me. Even some of the guys love me but that is another story :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I'm actually a sweet guy, just ask all the girls at work, they love me.
Unfortunately, Human Resources tells a different story...
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The indication is in the knowledge that striping is always what ZFS does. Each mirror is a vdev. When a pool contains more than one vdev, the vdevs are always striped together. That's just part of how ZFS works.

My understanding is that if you're using block devices (i.e., zvols), you really don't want to use more than 50% of your pool capacity.

Fragmentation becomes more of an issue the more utilized your pool is. Above 50% utilization, fragmentation issues accelerate and disks slow down (inner tracks are slower than outer tracks). Above 90% utilization, ZFS switches to a much slower allocator, one which minimizes further fragmentation. At 100% utilization your pool seizes up. Ergo, it makes a lot of sense to keep pool usage below 90%, which means you should really think about increasing your pool size when you hit 80%, and before you hit 90%.
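If you want to keep an eye on this, something like the following shows capacity and fragmentation per pool:

Code:
# CAP is the percentage of the pool that is allocated, FRAG is ZFS's fragmentation estimate;
# plan to grow the pool before CAP crosses ~80%.
zpool list -o name,size,alloc,free,frag,cap,health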

BTW, the swap default is 2GB per drive, not a percentage. So even if you had used 1TB drives, you'd still only lose 6GB of usable space to swap across all 3000GB. And the manual does mention this

For ZFS, Disk Space Requirements for ZFS Storage Pools recommends a minimum of 16 GB of disk space. Due to the way that ZFS creates swap, it is not possible to format less than 3 GB of space with ZFS. However, on a drive that is below the minimum recommended size, a fair amount of storage space is lost to swap: for example, on a 4 GB drive, 2 GB will be reserved for swap.

You can disable the swap setting, but then you should probably set up swap on an internal SSD... for when you *need* swap in order to mount a pool after a non-graceful restart...
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I think that if members of a volume are always striped, having informative text on the Volume Status page stating that would be helpful, because it definitely isn't intuitive.
It seems like you're asking for quite a bit of information to be added to an already-busy page, much of which is already in the manual. FreeNAS is not a product into which you can safely dive without reading the existing (extensive) documentation.
 

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
@mikesoultanian And if I sounded like I came off a bit harsh, that wasn't my intention. I'm actually a sweet guy, just ask all the girls at work, they love me. Even some of the guys love me but that is another story :)

Nah, you're just sayin' it like it is - heck, I'm sorry that my frustration is showing through, but I'm in a high-pressure situation: I had to move all our VMs onto another FreeNAS unit with really slow storage, it's slowing down the entire office, and I'm trying to get things back over to our main FreeNAS unit while learning this very quickly - it's just a sucky situation. I get it - when you're using something that's free, you can't really complain ;)

I am definitely documenting this because one of my biggest gripes about this whole situation is that nobody documented anything!!!

So, I guess my only real unresolved issue is #8, which boils down to this question: how do I make sure that I'm maximizing the zvol size?

thanks!
 

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
It seems like you're asking for quite a bit of information to be added to an already-busy page, much of which is already in the manual. FreeNAS is not a product into which you can safely dive without reading the existing (extensive) documentation.

I do agree that it's busy, but I do think there are places where information like this can prove helpful. You are all very comfortable with the software, but from a non-FreeNAS user's point of view, having indicators like this can be very helpful and can make the learning curve for a complex piece of software like this a little easier.

Now, there are other areas where it's just downright confusing, but when it's free software, I can't expect people to code it the way I'd like - so again, I apologize if my frustration with the UI comes off as a lack of appreciation for what people have created.
 

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
BTW, the swap default is 2GB per drive, not a percentage. So even if you had used 1TB drives, you'd still only lose 6GB of usable space to swap across all 3000GB. And the manual does mention this

You can disable the swap setting, but then you should probably set up swap on an internal SSD... for when you *need* swap in order to mount a pool after a non-graceful restart...

2GB isn't the end of the world, but it's still a decent chunk of space. I have 80 drives in my JBOD - does that mean I'm losing 160GB just to swap space? I could very well put the swap space on the boot drives as I'm pretty sure they have 160GB of free space (assuming it needs to match the size that would be allocated to the drives) but is it really worth the hassle? Btw, I didn't do the install on drives - it was the previous admin. I've read that I could instead just install the OS on two thumbdrives, as I'm kinda tempted to do, but this is a production box so I don't want to go making too many changes until I'm a bit more comfortable with FreeNAS.

thanks!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
2GB isn't the end of the world, but it's still a decent chunk of space. I have 80 drives in my JBOD - does that mean I'm losing 160GB just to swap space? I could very well put the swap space on the boot drives as I'm pretty sure they have 160GB of free space (assuming it needs to match the size that would be allocated to the drives) but is it really worth the hassle? Btw, I didn't do the install on drives - it was the previous admin. I've read that I could instead just install the OS on two thumbdrives, as I'm kinda tempted to do, but this is a production box so I don't want to go making too many changes until I'm a bit more comfortable with FreeNAS.

thanks!

Set the per-disk swap to 0GB in the system settings.

Btw, I didn't do the install on drives - it was the previous admin. I've read that I could instead just install the OS on two thumbdrives, as I'm kinda tempted to do, but this is a production box so I don't want to go making too many changes until I'm a bit more comfortable with FreeNAS

Thumbdrives are significantly less reliable than SSDs.
 

mikesoultanian

Dabbler
Joined
Aug 3, 2017
Messages
43
So, I guess my only real unresolved issue is #8, which boils down to this question: how do I make sure that I'm maximizing the zvol size?

I was doing some more reading and I think I figured out the answer to my zvol question above. I saw a reference to section 10.5.6 of the manual, and there's a warning:

For performance reasons and to avoid excessive fragmentation, it is recommended to keep the used space of the pool below 50% when using iSCSI. As required, you can increase the capacity of an existing extent using the instructions in Growing LUNs.

So I guess what I'll be doing is creating the zvol smaller than the pool. Should I really limit it to 50%, given that I've also seen people mention 80% as the threshold where things start going south? Also, I'm assuming that means I can, in theory, let the data on the zvol reach 100% without causing file system issues, because the pool utilization would still be less than 50% (or 80%, if that's OK)? Obviously in practice I wouldn't want to fill it to 100%.
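So for my test pool I'm thinking of something like this (the 12G figure is just my rough 50% guess and the zvol name is made up; I'd normally do it through the GUI's zvol dialog):

Code:
zpool list raid10                                  # check the total pool size first (~24GB in my test setup)
zfs create -V 12G raid10/iscsi-zvol                # create a zvol at roughly 50% of the pool
zfs get volsize,refreservation raid10/iscsi-zvol   # confirm the size and the space reserved for it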
 