ZVOL At 211% Usage

Status
Not open for further replies.

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
Inside of FreeNAS I have a volume with a total size of 109GB. On it I created a 90GB ZVOL that is shared via iSCSI to an ESXi host. That obviously far exceeds the recommendation to use no more than 50% of the volume for iSCSI, and I suspect it will stop working soon... The reason for posting is that I am a little confused by the usage values being reported. While I am fairly certain the numbers below come down to the 50% issue above, I just wanted to confirm.

[attached screenshot: upload_2016-7-22_23-32-10.png]


So my questions are as follows...

- If the total volume is 109GB and I created a 90GB ZVOL, why is FreeNAS reporting volume usage of only 61.7GB (56%) with 47.3GB of free space?
- The ZVOL "datastore2" shows 92.87GB of used space, which makes sense, but it's reporting the percent used as 211... I am assuming this has something to do with using more than 50% of the total volume, or is this something else?

Just to repeat: I know that I should not be using more than 50% of the volume and that this is a really bad idea. I am just trying to understand the numbers I am seeing.

Thanks in advance.
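(Side note for anyone following along: the same numbers the GUI shows can be cross-checked from the shell. The pool and zvol names below are the ones that appear later in this thread.)

Code:
# overall pool size and allocation, as the GUI's volume view reports it
zpool list Extra
# per-category space breakdown for the zvol itself
zfs list -o space Extra/Datastore2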
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Pretty interesting. @cyberjock, what do you think?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Do you take snapshots? Can you provide the output of "zfs list Extra/Datastore2"?
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
Also, here is the version of FreeNAS that I am running; not sure if it helps:

FreeNAS-9.3-STABLE-201509022158
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Sparse allocated zvol?
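(A sparse zvol is thin provisioned and carries no refreservation, so one quick way to answer that from the shell:)

Code:
# refreservation = none would indicate a sparse zvol;
# a value at or above volsize indicates a thick one
zfs get refreservation,volsize Extra/Datastore2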
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Okay, can you post the output of "zfs get all Extra/Datastore2"?

I would consider upgrading your FreeNAS install to the latest 9.10 release too, but I'm not expecting that to fix your problem. Then again, I don't remember all the nuances of the versions from back then, so it may well be fixed by an update. :P
 

darkwarrior

Patron
Joined
Mar 29, 2015
Messages
336
Hey there,

I'm seeing something similar on my end, using zvols as well.

[attached screenshot: zvolusage.PNG]


Now I'm wondering if it is related to the block size that was chosen :)
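(For reference, the block size in question is the zvol's volblocksize property; substitute your own pool/zvol name:)

Code:
zfs get volblocksize Extra/Datastore2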
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
cyberjock said:
Okay, can you post the output of "zfs get all Extra/Datastore2"?

I would consider upgrading your FreeNAS install to the latest 9.10 release too, but I'm not expecting that to fix your problem. Then again, I don't remember all the nuances of the versions from back then, so it may well be fixed by an update. :p

Output of zfs get all Extra/Datastore2:

Code:
[root@freenas ~]# zfs get all Extra/Datastore2
NAME              PROPERTY              VALUE                 SOURCE
Extra/Datastore2  type                  volume                -
Extra/Datastore2  creation              Fri Apr 22 8:14 2016  -
Extra/Datastore2  used                  92.8G                 -
Extra/Datastore2  available             28.6G                 -
Extra/Datastore2  referenced            77.0G                 -
Extra/Datastore2  compressratio         1.00x                 -
Extra/Datastore2  reservation           none                  default
Extra/Datastore2  volsize               90G                   local
Extra/Datastore2  volblocksize          16K                   -
Extra/Datastore2  checksum              on                    default
Extra/Datastore2  compression           off                   inherited from Extra
Extra/Datastore2  readonly              off                   default
Extra/Datastore2  copies                1                     default
Extra/Datastore2  refreservation        92.8G                 local
Extra/Datastore2  primarycache          all                   default
Extra/Datastore2  secondarycache        all                   default
Extra/Datastore2  usedbysnapshots       0                     -
Extra/Datastore2  usedbydataset         77.0G                 -
Extra/Datastore2  usedbychildren        0                     -
Extra/Datastore2  usedbyrefreservation  15.8G                 -
Extra/Datastore2  logbias               latency               default
Extra/Datastore2  dedup                 off                   local
Extra/Datastore2  mlslabel              -
Extra/Datastore2  sync                  standard              default
Extra/Datastore2  refcompressratio      1.00x                 -
Extra/Datastore2  written               77.0G                 -
Extra/Datastore2  logicalused           76.6G                 -
Extra/Datastore2  logicalreferenced     76.6G                 -
Extra/Datastore2  volmode               default               default
Extra/Datastore2  snapshot_limit        none                  default
Extra/Datastore2  snapshot_count        none                  default
Extra/Datastore2  redundant_metadata    all                   default
[root@freenas ~]#
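(Breaking down that paste, the 92.8G of "used" is internally consistent: ZFS sums the per-category columns, and the refreservation of 92.8G exceeds the 90G volsize because a non-sparse zvol also reserves an estimate for metadata at the chosen volblocksize.)

Code:
# used = usedbydataset + usedbysnapshots + usedbychildren + usedbyrefreservation
#      = 77.0G         + 0               + 0              + 15.8G  = 92.8G
# the same breakdown in one command:
zfs list -o space Extra/Datastore2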

Also, here is an updated screenshot of the volume usage... we are now at 324% usage.
[attached screenshot: upload_2016-7-26_21-4-50.png]
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
OK, so I decided to run the UNMAP command on the ESXi host for this particular datastore, and instead of showing 324% usage it is now sitting at 180%. Thoughts? That clearly helped, but what is the extra 80%? And why is volume usage listed at only 54.2GB (49%)? I would think it should reflect 90GB, since that is the allocated size of the ZVOL.
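(For reference, dead-space reclamation on a VMFS datastore is done from the ESXi shell along these lines; the volume label here matches the datastore name from the first post.)

Code:
# run on the ESXi host, not on FreeNAS; returns deleted blocks to the zvol
esxcli storage vmfs unmap -l datastore2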

[attached screenshot: upload_2016-7-26_21-48-34.png]
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Looks like the UI is for some reason calculating that percentage value as used space over free space, e.g.: 92.8/44.0 ~= 2.11, 92.8/28.6 ~= 3.24, 92.8/51.4 ~= 1.8

The good news is that the zfs command-line is telling you the truth, and hopefully consistently (taking into account the UNMAP you ran). UI bug perhaps?
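(A quick sanity check of that hypothesis against the three screenshots, using the used and free figures visible in each:)

Code:
# 92.8/44.0, 92.8/28.6 and 92.8/51.4 -> the 211%, 324% and 180% the GUI showed
echo "scale=2; 92.8/44.0; 92.8/28.6; 92.8/51.4" | bc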

The thing that jumped out at me though in your paste was

Code:
Extra/Datastore2 compression off inherited from Extra


I have to ask why you aren't using at least LZ4 on your pool. It's essentially free and could improve performance if you're limited by the transfer rates of your vdevs (unless you know you're storing incompressible data).
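(If you decide to turn it on, it is a single property set at the top-level dataset; already-written blocks stay uncompressed, and only new writes are affected. Pool name from this thread:)

Code:
# children such as Extra/Datastore2 inherit unless overridden locally
zfs set compression=lz4 Extra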
 

NightNetworks

Explorer
Joined
Sep 6, 2015
Messages
61
I am thinking that you are correct regarding the UI bug...

Interesting point on the LZ4 compression... Do you think I could see performance gains if the volume is running off an SSD? I do use LZ4 on my other volumes, but did not realize that there could be potential performance gains from using it.

Thanks!
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Less data to write means faster writes, so even on an SSD it will likely just get "even faster." LZ4 is fast enough that any modern CPU will take essentially no performance hit, and it has an early abort when it detects incompressible data. I'd give it a shot.
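(After it has been on for a while, the effective savings show up in properties this thread has already looked at: compare logicalused against used, or just read compressratio.)

Code:
zfs get used,logicalused,compressratio Extra/Datastore2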
 