VMware iSCSI datastore expansion issue

SubnetMask - Contributor - Joined Jul 27, 2017 - Messages: 129

I'm having an issue with expanding one of my VMware datastores. I'm posting here because I've worked with VMware for a very long time, along with EqualLogic, Compellent, PowerVault, Promise and a few other storage arrays. When you expand a LUN on the storage device, you then go back to VMware and, at most, rescan storage for it to register; you can then expand the volume.
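
For reference, the host-side half of that workflow from the ESXi shell looks roughly like this - the naa. device ID is a placeholder for whatever the LUN enumerates as:

    # Rescan all storage adapters so ESXi re-reads LUN capacities
    esxcli storage core adapter rescan --all

    # Check the size ESXi now sees for each device
    esxcfg-scsidevs -c

    # Then grow the VMFS volume, either through the client's
    # 'Increase Datastore Capacity' wizard or from the shell
    # (the partition itself may need extending first via partedUtil):
    vmkfstools --growfs "/vmfs/devices/disks/naa.XXXX:1" "/vmfs/devices/disks/naa.XXXX:1"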

That doesn't seem to be happening here with FreeNAS. I just finished migrating over to an 11-disk RAIDZ3 volume and replacing the remaining 4TB SAS disks with 8TB helium SAS disks. FreeNAS then expanded the volume as expected once the last 4TB disk had been replaced with an 8TB disk. I then increased the size of the zvol from 16T to 24T (and please forgive me for 'incorrect terminology' - I'm using the terminology that lines up with the FreeNAS UI). This is reflected under 'Storage', and if I go to Sharing > Block > Extents and edit the extent to view what it shows, it too reflects that the device extent is now 24T.
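
For the record, here's roughly how I've been comparing what ZFS and the kernel iSCSI target (CTL) each think the size is from the FreeNAS shell - 'tank/vmware' is a stand-in for my actual pool/zvol path:

    # What ZFS thinks the zvol's size is (should show 24T)
    zfs get volsize tank/vmware

    # What the kernel CTL layer, which serves the iSCSI LUN, is presenting
    # (if this still shows the old size, the problem is in CTL, not ZFS)
    ctladm devlist -v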

No matter what I do (I've rescanned storage and even rebooted my hosts), VMware is still showing 16T as the LUN size - it seems as if FreeNAS is still presenting it to VMware as 16T. Am I missing something in FreeNAS? Or does it take a while for this change to propagate through iSCSI and become visible to the VMware hosts?
 

jgreco - Resident Grinch - Joined May 29, 2011 - Messages: 18,680

You're doing a VMware datastore as iSCSI on RAIDZ3? Bad idea (probably).


What size is being reported by "esxcfg-scsidevs -c"?
 

SubnetMask - Contributor - Joined Jul 27, 2017 - Messages: 129

> You're doing a VMware datastore as iSCSI on RAIDZ3? Bad idea (probably).


This datastore is only for the storage of two VMDKs - one is media (movies, TV shows, etc.) and the other is the data volume attached to a file server. This volume needs storage capacity and reliability, not blistering speed. All VMs live on and boot off a 10-disk stripe/mirror.

> What size is being reported by "esxcfg-scsidevs -c"?

16,777,216 MB
 

jgreco - Resident Grinch - Joined May 29, 2011 - Messages: 18,680

> This datastore is only for the storage of two VMDKs - one is media (movies, TV shows, etc.) and the other is the data volume attached to a file server.

You'd be better off storing the media directly on the FreeNAS.


> 16,777,216 MB

So that's clearly wrong - 16,777,216 MB is exactly 16 TiB, the old size. Unfortunately I don't have an easy way to replicate this right now, so hopefully someone else will chime in.
 

SubnetMask - Contributor - Joined Jul 27, 2017 - Messages: 129

> You'd be better off storing the media directly on the FreeNAS.


I respectfully disagree, at least for my use case. With the VMDKs attached to a Windows VM, they and all of the data on them are FAR more portable and storage array agnostic than if I were to store it right on FreeNAS. Long ago, when this data was much smaller in size, it lived on an 8-bay Drobo that was RDM'd to a Windows VM. Then it came time to migrate the data elsewhere. Not fun. Using VMDKs makes it MUCH easier to move the data around, such as a few weeks ago when I redid this datastore: I Storage vMotioned the VMDKs to a suitable destination, destroyed the original 8-drive Z1, created the new 11-drive Z3, set it all up with VMware and moved the VMDKs back. Much less headache than trying to move an entire filesystem of files.

> So that's clearly wrong - 16,777,216 MB is exactly 16 TiB, the old size. Unfortunately I don't have an easy way to replicate this right now, so hopefully someone else will chime in.

Yes. The question is: why isn't FreeNAS reporting the proper new size to VMware?
 

jgreco - Resident Grinch - Joined May 29, 2011 - Messages: 18,680

So your theory is that because you're using Storage vMotion it's "easier"...? Okay, fine, that might be true if you honestly think setting up a Storage vMotion is easier than clicking and dragging a folder inside Windows, but simply moving the data from one CIFS share to another would involve the same level of reading and writing as the Storage vMotion. "portable and storage array agnostic" makes little sense since you can't take the FreeNAS disks and plug them into a Drobo and expect them to have your data accessible.

I'm curious to know how much the block storage on RAIDZ3 is costing you in wasted overhead.
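
If you want to actually measure it, ZFS will tell you how much space the zvol really eats versus its logical size - something like this from the FreeNAS shell, with your own pool/zvol name substituted in:

    zfs get volsize,used,referenced,volblocksize,compressratio tank/vmware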
 

SubnetMask - Contributor - Joined Jul 27, 2017 - Messages: 129

> So your theory is that because you're using Storage vMotion it's "easier"...? Okay, fine, that might be true if you honestly think setting up a Storage vMotion is easier than clicking and dragging a folder inside Windows, but simply moving the data from one CIFS share to another would involve the same level of reading and writing as the Storage vMotion. "portable and storage array agnostic" makes little sense since you can't take the FreeNAS disks and plug them into a Drobo and expect them to have your data accessible.

> I'm curious to know how much the block storage on RAIDZ3 is costing you in wasted overhead.

Well, with 'clicking and dragging', permissions don't always follow, so you need to be aware of that, and when permissions don't copy over, be prepared to go over everything and fix what didn't come across. Been there more than once. Then there are shares and share paths. When I used the 'click and drag' method to migrate from the Drobo to the Promise, because the Drobo was RDM'd, it was a mess and a royal pain. As far as Windows was concerned, it was an NTFS-to-NTFS move, and... yeah... I don't feel like doing that again unless I REALLY have to. I also now use DFS and DFSR, which I'm not sure would even work with FreeNAS (or anything else non-Windows) as a CIFS target.

There really is nothing to set up with Storage vMotion. If you have a functioning VMware setup with the proper license level and multiple datastores, you have Storage vMotion. Right-click the VM > Migrate > Migrate Storage > select destination. Boom.

No one said anything about taking FreeNAS disks and putting them in a Drobo, or Drobo disks and putting them in a Compellent, or Compellent disks and putting them in an EMC. My point was, as an example, that if tomorrow a suitably equipped SC4020 'fell into my lap' and I decided to move away from FreeNAS, I could configure it for my network, provision LUNs from it to my VMware environment, create the datastores, Storage vMotion everything from FreeNAS to the SC4020, turn off FreeNAS, and be done. Same process I used when I migrated from the Promise to FreeNAS. No fixing shares, permissions or anything else. In the end, it's like nothing ever happened. Trust me - I've done more than a few data migrations in my time - I'll take Storage vMotioning VMDKs all day every day over copying/moving hundreds of thousands of individual files via the 'click and drag' method, or even employing 3rd-party utilities like ViceVersa. Copying/moving all of those smaller files absolutely takes more time than it takes to Storage vMotion a VMDK of the same size. And that data is pretty much inaccessible during the move/copy. Not so with Storage vMotion.

As far as block storage overhead, I'm not really sure how block storage has all that much more overhead than other methods - 80% usable of a given configuration is still 80% - but whatever the answer, the RAIDZ3 gives me the storage volume I need, with a decent safety cushion in terms of fault tolerance. While some might argue that 'it's not the best', it does what I need it to do just fine. I personally think that only being able to use 80% of your 'usable' capacity is nuts, and the recommendation I've read that you only use 50% of it is even more nuts. But FreeNAS has been doing well for me, so until something better falls into my lap, I'll more than likely stick with it.

But this is really all a matter of opinion. You like to do it one way, I like to do it another. Both ways work. That's not what this thread is meant to be about though.

What we should be focusing on here is why FreeNAS doesn't seem to be presenting the proper size to VMware.
 

jgreco - Resident Grinch - Joined May 29, 2011 - Messages: 18,680

> As far as block storage overhead, I'm not really sure how block storage has all that much more overhead than other methods - 80% usable of a given configuration is still 80% - but whatever the answer, the RAIDZ3 gives me the storage volume I need, with a decent safety cushion in terms of fault tolerance. While some might argue that 'it's not the best', it does what I need it to do just fine.

I'm fine with you being wrong; it doesn't really bother me. I just thought you'd like to know that your storage probably isn't working the way you think it is. But it could totally be that I have no idea what I'm talking about.

> But this is really all a matter of opinion.

Reminds me of an Asimov quote that I should probably refrain from posting. But, no, you have mistaken knowledge and facts for opinion.

> You like to do it one way, I like to do it another.

I don't *like* to do it one way. I'd love to be able to do block storage on RAIDZ3 and not have it totally mess with space allocation.

> What we should be focusing on here is why FreeNAS doesn't seem to be presenting the proper size to VMware.

As I already said,

> So that's clearly wrong - 16,777,216 MB is exactly 16 TiB, the old size. Unfortunately I don't have an easy way to replicate this right now, so hopefully someone else will chime in.

So focusing on what you want is a nonstarter. I don't have the resources free right now to replicate your experience, nor do I have the time. Sorry.
 

SubnetMask - Contributor - Joined Jul 27, 2017 - Messages: 129

Not for nothing, but you started the straying from the real topic; I just explained why I set it up the way I have. Maybe I should have spelled it out more clearly, but when I said it's a matter of opinion, and 'You like to do it one way, I like to do it another', I was referring mainly to your suggestion that a CIFS share on FreeNAS is somehow better or easier to move around than Storage vMotioning VMDKs. If you feel that's truly better and easier, go for it. I'm not telling you not to. I won't do it again unless I REALLY have to. RAIDZ3 and block storage may not be ideal, and it may have more overhead, but IT DOES WHAT I NEED IT TO DO. I have plenty of storage space and decent fault tolerance, and this volume doesn't need 20,000 IOPS and/or 4,000 MB/s transfer rates. It just needs to store these two relatively quiet, but quite large, VMDKs.

At any rate, back to our regularly scheduled programming - if anyone has any ideas why FreeNAS hasn't presented the change in size to VMware, I'm all ears.
 

jgreco - Resident Grinch - Joined May 29, 2011 - Messages: 18,680

You're not hearing the message I'm sending, which isn't about CIFS. I hope you like using up to 4x the space to store your content, because of the way RAIDZ works. If you're rich and can afford to do that... swell.
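
To put rough numbers on "up to 4x" - assuming 4K sectors (ashift=12) on an 11-wide RAIDZ3, and keeping in mind that RAIDZ pads every allocation out to a multiple of (parity+1) sectors:

    4K  volblock:  1 data + 3 parity = 4 sectors       -> 16K on disk (4.0x)
    16K volblock:  4 data + 3 parity = 7, padded to 8  -> 32K on disk (2.0x)

Compare that to the ~1.4x you'd naively expect from 8-of-11. Small blocks on wide RAIDZ vdevs get murdered, which is one reason mirrors are the standard recommendation for block storage.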

It's all fine, this is now boring to me, I'm outta here.
 

SubnetMask - Contributor - Joined Jul 27, 2017 - Messages: 129

So, related to the topic at hand: I ended up shutting down or suspending all of my VMs and rebooting my FreeNAS. After rebooting it (and not my VMware hosts), and then rescanning storage on one of my hosts once FreeNAS was back up and running, the proper size was reported and I was able to successfully expand my datastore - but this should not have required a reboot of FreeNAS.
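
For anyone who hits this later: given that the zvol and the extent both showed 24T all along, my guess (unverified) is that it was the kernel target layer (CTL) holding on to the old size, and that bouncing just the iSCSI service - from the UI, or something along these lines from the shell - might have avoided the full reboot:

    # Confirm what size CTL is presenting for the LUN
    ctladm devlist -v

    # Ask ctld to re-read its configuration (it reloads on SIGHUP)
    killall -HUP ctld

If someone can confirm whether that's enough to make CTL pick up a resized extent, I'd love to know.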
 