FreeNAS pool size shrink

Joined
May 22, 2019
Messages
4
Hello, I have the following case:
A customer has a FreeNAS server (a virtual machine) deployed in his environment. He presented 12 TB of storage to FreeNAS from a ProLiant server with physical disks, and in FreeNAS they created a 12 TB pool, using the full capacity of the disks as you know must be done. This pool was presented to vCenter and is used as a backup repository for VMs. Currently they have used 8 TB and have 4.23 TB free. Until now, everything is OK.

Now they have a new requirement: they need to present roughly 2 TB of storage to another server (a VM). The reason? They need more space for reports, documentation, files used by an application, etc., and they want to take it from the ~4 TB of free space, because they don't have any other source of storage for this purpose. My question is: is there any way to resize this pool and reduce its capacity by, say, 2 TB, or as a last resort, erase it and create two new pools, one of 10 TB and another of 2 TB (obviously backing up the information it contains first)?

I hope that was clear, and I hope someone can help me with some advice.

Regards,

MS
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
How were the disks/pool presented to the vCenter server, and how is the new space required to be presented to the "other VM"?

If they are using NFS exports to share datasets, then you can simply connect the second VM to the same (or another) dataset from the pool, and this will allow the capacity to be shared. If iSCSI was used to share a zvol, then it is more complicated, and you may have to erase and recreate the zvol.
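To make the dataset option concrete, here's a rough CLI sketch. The pool name "tank" and dataset name "reports" are assumptions for illustration; on FreeNAS you would normally do this through the web UI so the config database stays in sync, but the underlying ZFS operations look roughly like this:

```shell
# Sketch only -- "tank" and "reports" are hypothetical names.
# Create a new dataset on the existing pool with a 2 TB quota,
# so the new consumer can't eat into the backup repository's free space.
zfs create -o quota=2T tank/reports

# Share it over NFS (plain ZFS property syntax; FreeNAS manages
# its NFS exports itself, so use the UI's Sharing page in practice).
zfs set sharenfs=on tank/reports
```

The point is that no destructive "resize" is needed: datasets all draw from the pool's shared free space, and a quota simply caps how much the new dataset can take.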

There are some other possible issues with the "virtual FreeNAS" setup, as it is unlikely that PCIe passthrough of the HBA was performed. What is the pool configuration itself (stripe, mirror, RAIDZ)? Some of these actually do allow for removal of vdevs under certain circumstances.
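For reference, top-level vdev removal on recent ZFS versions looks roughly like this (a sketch, assuming a pool named "tank" with a removable mirror vdev; RAIDZ vdevs cannot be removed this way):

```shell
# Sketch -- pool name "tank" and vdev name "mirror-1" are hypothetical.
zpool status tank            # inspect the vdev layout first
zpool remove tank mirror-1   # evacuate and remove a top-level mirror vdev
zpool remove -s tank         # cancel an in-progress removal if needed
```

Removal copies the evacuated data onto the remaining vdevs, so the pool needs enough free space to absorb it, and the feature only applies to stripe and mirror top-level vdevs.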
 
Joined
May 22, 2019
Messages
4
The pool was presented to vCenter using NFS. They need the new space to be presented to two different VMs (a new requirement): a SQL cluster, because in this space they will be running all the .exe files of their applications. I know I can connect the virtual machines to this dataset, which will create a new VMDK, and then I can add that existing VMDK to the second VM, but this means I can't do Storage vMotion or create snapshots (the backup solution they have requires snapshots).
So that was my first option in mind. How do I add this storage without the trouble of sharing a VMDK between two VMs? I think this might be a question for another forum, not this one, hehe, but if you have an idea...
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The pool was presented to vCenter using NFS. They need the new space to be presented to two different VMs (a new requirement): a SQL cluster, because in this space they will be running all the .exe files of their applications. I know I can connect the virtual machines to this dataset, which will create a new VMDK, and then I can add that existing VMDK to the second VM, but this means I can't do Storage vMotion or create snapshots (the backup solution they have requires snapshots).

Bit of bad news here: there's no supported way to do a Windows shared-disk cluster on an NFS datastore, so you'll have to provide a way to connect the vSphere hosts to the FreeNAS VM at the iSCSI level. With it being a virtual setup you can at least easily add another virtual NIC to FreeNAS, though I don't think it would handle a hot-add well. You also won't be able to do a live svMotion in any case, and since vSphere doesn't allow snapshots of shared disks, you'll have to use an agent-based backup for the SQL cluster.

Clustered-VMDK in vSphere 7.0 still requires access to the datastore to be at the block-level as well, so upgrading doesn't get around it either.

WSFC on vSphere 6.7 - https://kb.vmware.com/s/article/2147661
WSFC on vSphere 7.0 - https://kb.vmware.com/s/article/79616

Also, the disks were presented to FreeNAS as RDMs; the full raw capacity of 12 TB is used.

Definitely not a recommended setup. The supported solution is DirectPath I/O (PCIe passthrough) of an HBA controller, which gives the VM direct hardware access to the HBA. I suspect the HP ProLiant has a SmartArray P-series controller, a small boot disk was set up to boot vSphere and store the FreeNAS boot VMDK/VMX, and the remaining disks were hopefully left unconfigured and then RDM'd into FreeNAS? That's still not ideal, but definitely better than if the remaining disks were set up as independent RAID0 volumes or as a single large RAID-anything-else.
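One quick way to check how "real" those RDM disks look to FreeNAS (a sketch; the device name /dev/da1 is just an example, adjust to your layout):

```shell
# If the disks come through as physical RDMs, smartctl should report the
# drives' actual model and serial numbers; a RAID volume or plain virtual
# disk usually shows up with a generic "VMware Virtual disk" identity.
camcontrol devlist     # list the devices FreeBSD's CAM layer sees
smartctl -i /dev/da1   # inspect the identity of one of the data disks
```

If FreeNAS can't see real drive identities and SMART data, it also can't warn you about failing disks, which is one of the main reasons this layering is discouraged.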
 