Free space after deleting VMs

Status
Not open for further replies.

Francoism

Cadet
Joined
Jan 20, 2017
Messages
4
I have a serious problem that needs an urgent resolution.

I have a FreeNAS (FreeNAS-9.3-STABLE-201511280648) box with a ZFS RAIDZ2 configuration. It serves iSCSI shares to a XenServer host. There are no snapshots, yet after deleting files from an NFS share and also deleting the VMs, the free space is not increasing.

Does ZFS "keep" the deleted VMs' disks somewhere? I have read through most of the posts on this, and most of the solutions come down to deleting snapshots, but there are none here.

Please assist if possible. My only solution at the moment is to add another storage server and move all the VMs.
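
For reference, here is how one can double-check that no snapshots or clones are holding the space (SR001/dev-vm is just one of the datasets on this pool):

Code:
# list every snapshot on the pool; empty output means there are none
zfs list -t snapshot -r SR001

# show where the space of one dataset is actually charged
zfs get used,usedbysnapshots,usedbydataset,usedbyrefreservation,usedbychildren SR001/dev-vm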
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Please supply the output of the following commands in code blocks:

Code:
zpool status
zpool list -v
zfs list -t all

Lastly, remember that Unix won't free up space while a deleted file is still held open by a process. It's a
common enough problem that I've seen it dozens of times. Usually I track down the offending process with
either lsof or fuser, then either stop and restart the daemon, or kill off the process.
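
For example, something along these lines (lsof is not in the FreeBSD base system, so this assumes it is installed from ports; /mnt/SR001 is just your pool's mount point):

Code:
# open files that have already been deleted (link count below 1)
lsof +L1

# processes holding files open anywhere on the pool's file system, with owning user
fuser -c -u /mnt/SR001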
 

Francoism

Cadet
Joined
Jan 20, 2017
Messages
4
Thanks for assisting.

Here are the results:
Code:
[root@sr001] /mnt/SR001/dev-vm# zpool status
  pool: SR001
 state: ONLINE
  scan: scrub in progress since Sat Jan 21 02:07:28 2017
		94.0G scanned out of 10.1T at 3.04M/s, (scan is slow, no estimated time)
		0 repaired, 0.91% done
config:

		NAME											STATE	 READ WRITE CKSUM
		SR001										   ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/ca0a9424-8930-11e5-ada8-000af71393cc  ONLINE	   0	 0	 0
			gptid/cab4b5cf-8930-11e5-ada8-000af71393cc  ONLINE	   0	 0	 0
			gptid/cb552546-8930-11e5-ada8-000af71393cc  ONLINE	   0	 0	 0
			gptid/cbfe46c4-8930-11e5-ada8-000af71393cc  ONLINE	   0	 0	 0
			gptid/ccb61c7a-8930-11e5-ada8-000af71393cc  ONLINE	   0	 0	 0
			gptid/cd607b68-8930-11e5-ada8-000af71393cc  ONLINE	   0	 0	 0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jan 15 03:45:51 2017
config:

		NAME										  STATE	 READ WRITE CKSUM
		freenas-boot								  ONLINE	   0	 0	 0
		  gptid/1a65a4d4-87a2-11e5-be23-00e04c360004  ONLINE	   0	 0	 0

errors: No known data errors



Code:
[root@sr001] /mnt/SR001/dev-vm# zpool list -v
NAME									 SIZE  ALLOC   FREE  EXPANDSZ   FRAG	CAP  DEDUP  HEALTH  ALTROOT
SR001								   10.9T  10.2T   670G		 -	65%	93%  1.00x  ONLINE  /mnt
  raidz2								10.9T  10.2T   670G		 -	65%	93%
	gptid/ca0a9424-8930-11e5-ada8-000af71393cc	  -	  -	  -		 -	  -	  -
	gptid/cab4b5cf-8930-11e5-ada8-000af71393cc	  -	  -	  -		 -	  -	  -
	gptid/cb552546-8930-11e5-ada8-000af71393cc	  -	  -	  -		 -	  -	  -
	gptid/cbfe46c4-8930-11e5-ada8-000af71393cc	  -	  -	  -		 -	  -	  -
	gptid/ccb61c7a-8930-11e5-ada8-000af71393cc	  -	  -	  -		 -	  -	  -
	gptid/cd607b68-8930-11e5-ada8-000af71393cc	  -	  -	  -		 -	  -	  -
freenas-boot							  29G  1.06G  27.9G		 -	  -	 3%  1.00x  ONLINE  -
  gptid/1a65a4d4-87a2-11e5-be23-00e04c360004	29G  1.06G  27.9G		 -	  -	 3%


Code:
[root@sr001] /mnt/SR001/dev-vm# zfs list -t all
NAME																	USED  AVAIL  REFER  MOUNTPOINT
SR001																  6.81T   215G   216K  /mnt/SR001
SR001/.system														   139M   215G  89.5M  legacy
SR001/.system/configs-5ece5c906a8f4df886779fae5cade8a5				 46.1M   215G  46.1M  legacy
SR001/.system/cores													1.24M   215G  1.24M  legacy
SR001/.system/rrd-5ece5c906a8f4df886779fae5cade8a5					  192K   215G   192K  legacy
SR001/.system/samba4													799K   215G   799K  legacy
SR001/.system/syslog-5ece5c906a8f4df886779fae5cade8a5				  1.24M   215G  1.24M  legacy
SR001/FTP-Backups													   554M   215G   554M  /mnt/SR001/FTP-Backups
SR001/ISO															  21.0G   215G  21.0G  /mnt/SR001/ISO
SR001/cloudpbx														 2.67T   215G  2.67T  /mnt/SR001/cloudpbx
SR001/dev-vm														   2.18T   215G  2.18T  /mnt/SR001/dev-vm
SR001/jails															 192K   215G   192K  /mnt/SR001/jails
SR001/misc-vm														  1.94T   215G  1.94T  /mnt/SR001/misc-vm
SR001/strata-temp													  1.93G   215G  1.93G  /mnt/SR001/strata-temp
freenas-boot														   1.06G  27.0G	31K  none
freenas-boot/ROOT													  1.04G  27.0G	25K  none
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648					  1.02G  27.0G   533M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648@2015-11-10-04:02:18  2.49M	  -   510M  -
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648@2015-11-12-14:05:53   244K	  -   511M  -
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648@2015-11-29-13:44:34  10.4M	  -   520M  -
freenas-boot/ROOT/Initial-Install									  9.62M  27.0G   520M  /
freenas-boot/ROOT/Wizard-2015-11-10_14:24:47							  1K  27.0G   511M  legacy
freenas-boot/ROOT/Wizard-2015-11-12_13:53:50							  1K  27.0G   513M  legacy
freenas-boot/ROOT/Wizard-2015-11-12_14:05:53							  1K  27.0G   511M  /
freenas-boot/ROOT/default											  7.87M  27.0G   513M  /
freenas-boot/ROOT/default@2015-11-10-14:24:47						   245K	  -   511M  -
freenas-boot/ROOT/default@2015-11-12-13:53:50						  2.44M	  -   513M  -
freenas-boot/grub													  13.6M  27.0G  6.79M  legacy
freenas-boot/grub@Pre-Upgrade-Wizard-2015-11-10_14:24:47			   25.5K	  -  6.79M  -
freenas-boot/grub@Pre-Upgrade-Wizard-2015-11-12_13:53:50			   25.5K	  -  6.79M  -
freenas-boot/grub@Pre-Upgrade-Wizard-2015-11-12_14:05:53				 26K	  -  6.79M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201511280648			26K	  -  6.79M  -

 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
I'm a bit confused here... what storage protocol are you using for the VMs, iSCSI or NFS? You mention both, and they don't reclaim space the same way. Also, what version of XenServer are you using?
 

Francoism

Cadet
Joined
Jan 20, 2017
Messages
4
The FreeNAS box serves up both. I primarily use iSCSI; NFS is only enabled for testing environments.

What I did to free up space was export the VMs to other storage repositories, delete the extent, and recreate the iSCSI share. The VMs totaled about 800GB, but the used space came in at over 2TB.

Here is the new space usage:
Code:
[root@sr001] /mnt/SR001/NFS/JennyTechnicalVM/79eb11a5-2f05-84c4-f090-da35dfe09e4f# zfs list -t all
NAME																	USED  AVAIL  REFER  MOUNTPOINT
SR001																  4.66T  2.36T  2.40G  /mnt/SR001
SR001/.system														   139M  2.36T  89.5M  legacy
SR001/.system/configs-5ece5c906a8f4df886779fae5cade8a5				 46.1M  2.36T  46.1M  legacy
SR001/.system/cores													1.24M  2.36T  1.24M  legacy
SR001/.system/rrd-5ece5c906a8f4df886779fae5cade8a5					  192K  2.36T   192K  legacy
SR001/.system/samba4													799K  2.36T   799K  legacy
SR001/.system/syslog-5ece5c906a8f4df886779fae5cade8a5				  1.34M  2.36T  1.34M  legacy
SR001/FTP-Backups													   554M  2.36T   554M  /mnt/SR001/FTP-Backups
SR001/ISO															  21.0G  2.36T  21.0G  /mnt/SR001/ISO
SR001/cloudpbx														 2.70T  2.36T  2.70T  /mnt/SR001/cloudpbx
SR001/dev-vm															192K  2.36T   192K  /mnt/SR001/dev-vm
SR001/jails															 192K  2.36T   192K  /mnt/SR001/jails
SR001/misc-vm														  1.93T  2.36T  1.93T  /mnt/SR001/misc-vm
SR001/sec-vm														   1.49G  2.36T  1.49G  /mnt/SR001/sec-vm
SR001/strata-temp													  1.93G  2.36T  1.93G  /mnt/SR001/strata-temp
freenas-boot														   1.06G  27.0G	31K  none
freenas-boot/ROOT													  1.04G  27.0G	25K  none
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648					  1.02G  27.0G   533M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648@2015-11-10-04:02:18  2.49M	  -   510M  -
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648@2015-11-12-14:05:53   244K	  -   511M  -
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201511280648@2015-11-29-13:44:34  10.4M	  -   520M  -
freenas-boot/ROOT/Initial-Install									  9.62M  27.0G   520M  /
freenas-boot/ROOT/Wizard-2015-11-10_14:24:47							  1K  27.0G   511M  legacy
freenas-boot/ROOT/Wizard-2015-11-12_13:53:50							  1K  27.0G   513M  legacy
freenas-boot/ROOT/Wizard-2015-11-12_14:05:53							  1K  27.0G   511M  /
freenas-boot/ROOT/default											  7.87M  27.0G   513M  /
freenas-boot/ROOT/default@2015-11-10-14:24:47						   245K	  -   511M  -
freenas-boot/ROOT/default@2015-11-12-13:53:50						  2.44M	  -   513M  -
freenas-boot/grub													  13.6M  27.0G  6.79M  legacy
freenas-boot/grub@Pre-Upgrade-Wizard-2015-11-10_14:24:47			   25.5K	  -  6.79M  -
freenas-boot/grub@Pre-Upgrade-Wizard-2015-11-12_13:53:50			   25.5K	  -  6.79M  -
freenas-boot/grub@Pre-Upgrade-Wizard-2015-11-12_14:05:53				 26K	  -  6.79M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201511280648			26K	  -  6.79M  -



The problem is still there with the other iSCSI shares, so I will repeat this process of moving the VMs and deleting the extent.

I am using XenServer 6.5 with the following patches loaded:
XS65E001, XS65E002, XS65E003, XS65E005, XS65E006, XS65E007, XS65E008, XS65E009, XS65E010, XS65E013, XS65E014, XS65ESP1, XS65ESP1002, XS65ESP1003, XS65ESP1004, XS65ESP1008, XS65ESP1009, XS65ESP1010, XS65ESP1011, XS65ESP1012, XS65ESP1018
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
What you need to look into is SCSI UNMAP support for your version of XenServer. I assume there are commands you can run from your host to verify that UNMAP is supported by the storage array. I'm only familiar with VMware, but XenServer should have a similar feature.
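
For example (a sketch only, since I don't run XenServer, but its dom0 is Linux, so something like this from the sg3_utils package should work; sdX is a placeholder for the iSCSI LUN):

Code:
# VPD page 0xb2 (logical block provisioning); look for "Unmap command supported (LBPU): 1"
sg_vpd --page=lbpv /dev/sdX

# if the kernel has recognized discard/UNMAP support, these will be non-zero
cat /sys/block/sdX/queue/discard_granularity
cat /sys/block/sdX/queue/discard_max_bytes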
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
I read that when I was looking into the issue earlier... it doesn't sound like a bug to me, just a poor implementation of the UNMAP command and a lack of in-guest UNMAP support. It sounds like there is a "reclaim space" button somewhere in the interface which essentially frees space the way VMware used to: it creates a large balloon file, then deletes it and sends UNMAP commands for the unused blocks. VMware soon discovered this was a bad way of doing things and has since changed it to work in 200MB chunks.

So... you might look into that button, but to support it on the FreeNAS side you probably need to make sure your iSCSI targets are zvol device extents and not file extents. A lot of people commenting on that bug ticket were talking about in-guest support for UNMAP, which certainly sounds like it's not supported in that version of XenServer. Your case doesn't have to do with in-guest support because, as I read in your first post, you need to reclaim space from deleted VM disk files.
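
As a quick check on the FreeNAS side (just a sketch; the zvol name below is a placeholder):

Code:
# device extents are backed by zvols; if this prints nothing, your extents are file extents
zfs list -t volume -o name,volsize,used,refreservation

# for a zvol-backed extent, a sparse volume (no full refreservation) is what lets UNMAP hand space back
zfs get volsize,used,refreservation SR001/some-zvol   # "some-zvol" is a placeholder name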
 