Regarding iSCSI zvol extents in FreeNAS for VMware and the 50% rule.

Status
Not open for further replies.

soulburn

Contributor
Joined
Jul 6, 2014
Messages
100
Hi everyone,

I understand that you're only supposed to use 50% of the storage space when using iSCSI for FreeNAS and VMware as a best practice but I'm just unclear regarding what that 50% means. Let me give you an example:

Physical disks: 4x 1 TB HDDs
Volume configured as striped mirror, so 2 TB total usable space.

To follow this 50% rule do you:

A: Create a 1 TB zvol extent, thereby using only 50% of the volume's total space, and then allocate all 1 TB of that extent to the iSCSI target in VMware ESXi?
B: Create a 2 TB zvol extent, thereby using 100% of the volume's total space, but then set Target Global Configuration -> Pool Available Space Threshold to 50%, set Extents -> Properties of the ESXi extent -> Available Space Threshold to 50%, and make sure to allocate less than 1 TB (50%) when I add the iSCSI target in VMware?
C: Something else?
 

mav@

iXsystems
iXsystems
Joined
Sep 29, 2011
Messages
1,428
"A" is recommended for basic users, while "B" is fine for advanced users who knows how to use UNMAP on VMware.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
What "50% rule"?

I always hate when people take what I say and then parrot it without understanding.

You can safely run 80-90% on a block storage system, but it will get excruciatingly slow. There's nothing magic that happens at 50%, except that after a while the pool is merely painfully slow instead of excruciatingly so. The smart money is on keeping your pool utilization as low as you reasonably can.

Crap, I can't do that from here. Back in a few minutes.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay. Back. Here, look at this.

[Image: delphix-small.png — graph of steady-state ZFS write performance versus pool capacity utilization]


The issue here is what happens over time with fragmentation. Eventually ZFS reaches a steady state of sorts, where write speed has degraded and settled in at a certain level due to fragmentation. This doesn't happen right away... it happens over time, as blocks are rewritten and runs of free blocks in the pool are consumed. A brand new pool will be nice and zippy even out to 95%... right until you start rewriting blocks.

This means that after time has passed and lots of rewrite activity has occurred, by the time your pool is 50% full you've pretty much got a painful loss of write speed. A pool that's only 25% full will be about three times faster. A pool that's only 10% full will be twice as fast again.

So. The smart money might be to create a zvol that occupies ~50% of the available pool space, knowing that as you fill that zvol, your performance will be impacted. You then still want to configure ESXi to use UNMAP, and ideally not fill your ESXi datastore to 100%, so that you fall somewhere on the better parts of that graph.
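
As a minimal sketch of what that looks like from the command line (the pool and zvol names are hypothetical; the FreeNAS GUI's zvol creation does essentially the same thing):

# Create a 1 TB zvol on a 2 TB pool, leaving ~50% of the pool free
# (-s makes it sparse/thin-provisioned; omit it to reserve space up front)
zfs create -s -V 1T tank/vmware-extent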

Do note that this is primarily an issue of write speed. For reads, you can mitigate a lot of the read performance hit through the use of massive quantities of ARC and L2ARC.
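
If you have a spare SSD, adding it as L2ARC is a one-liner (the pool name and device are assumptions for illustration):

# Attach an SSD as an L2ARC read cache for pool "tank" (FreeBSD device name)
zpool add tank cache ada4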
 

soulburn

Contributor
Joined
Jul 6, 2014
Messages
100
What "50% rule"?

I always hate when people take what I say and then parrot it without understanding.

You can safely run 80-90% on a block storage system but it will get excruciatingly slow. There's nothing magic that happens at 50% except that after awhile it is merely painfully slow. The smart money is on keeping your pool utilization as low as you reasonably can.

Crap, I can't do that from here. Back in a few minutes.

Great, thanks for the info. For those wondering, I was referencing the FreeNAS documentation here:

1.3. ZFS Primer

While ZFS provides many benefits, there are some caveats to be aware of:

At 90% capacity, ZFS switches from performance- to space-based optimization, which has massive performance implications. For maximum write performance and to prevent problems with drive replacement, add more capacity before a pool reaches 80%. If you are using iSCSI, it is recommended to not let the pool go over 50% capacity to prevent fragmentation issues.

10.5.6. Extents

Warning: for performance reasons and to avoid excessive fragmentation, it is recommended to keep the used space of the pool below 50% when using iSCSI. As required, you can increase the capacity of an existing extent using the instructions in Growing LUNs.
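
For reference, growing a zvol-backed LUN later mostly comes down to raising volsize (a sketch with assumed names; the datastore still has to be grown on the VMware side afterwards):

# Grow the zvol backing the iSCSI extent from 1 TB to 2 TB
zfs set volsize=2T tank/vmware-extent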

 

soulburn

Contributor
Joined
Jul 6, 2014
Messages
100
jgreco said:
Okay. Back. Here, look at this. ... Do note that this is primarily an issue of write speed.

This is great! Thank you for the thorough explanation. I will research UNMAP and report back with my results. This is for a homelab, so I can play around without much worry. I found a few links about using UNMAP that look good (commands condensed below):

Direct Guest OS UNMAP in vSphere 6.0
Using esxcli in vSphere 5.5 and 6.0 to reclaim VMFS deleted blocks on thin-provisioned LUNs (2057513)
How to Reclaim Free Block Space from a Lun with VMware vSphere 5.5 and Netapp Cluster Mode
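
Condensed into commands, those articles boil down to something like this (the datastore name is a placeholder; the esxcli unmap shown earlier covers the 5.5/6.0 case):

# ESXi 5.0/5.1: run from inside the mounted datastore;
# the 60 means reclaim up to 60% of the datastore's free space
cd /vmfs/volumes/freenas-ds && vmkfstools -y 60

# vSphere 6.0: with /VMFS3/EnableBlockDelete set to 1 on the host and a
# thin virtual disk, a Linux guest can issue UNMAP directly:
fstrim -v /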
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
soulburn said:
Great, thanks for the info. For those wondering, I was referencing the FreeNAS documentation here:

Yeah, that was sourced from stuff I've been saying here for years, I'm pretty sure. The problem is that the "rule" isn't a rule and isn't even close to absolute; there are situations where you could go 95% and never notice, or be at 20% and feel like you were being disemboweled. Bleh.
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
We're sympathetic to our ZFS captors?
Precisely. I certainly wouldn't want to imply that you've terrorized the forums enough that we all have come to sympathize with your ideology, adopt it as our own, and then convert others to it. Actually, that's more of a cyberjock thing. Never mind.
 