SOLVED Error: No space left on device

Status
Not open for further replies.
Joined
May 19, 2017
Messages
11
Hello All,

Long time lurker, first time poster. These forums have been extremely helpful when I encounter issues with FreeNAS in the past.
However, I'm running into an issue where I feel I need to ask a specific question. I did find some old threads similar to my question, but my situation feels just enough different that I need to ask for help.

A couple of days ago I started getting error emails sent to me with a variation of these messages:
Code:
newsyslog: chmod(/var/log/maillog.6.bz2) in change_attrs: No space left on device

freenas changes in mounted filesystems:
11d10
< wmsstorage/.system/samba4 /mnt/wmsstorage/.system/samba4 zfs	rw,nfsv4acls	 0 0
12a12
> wmsstorage/Backups	/mnt/wmsstorage/Backups	zfs	rw,nfsv4acls	 0 0

mv: rename /var/log/mount.today to /var/log/mount.yesterday: No space left on device
mv: /var/log/mount.today: No space left on device

bzip2: Can't create output file /var/log/messages.0.bz2: No space left on device. newsyslog: `bzip2 -f /var/log/messages.0' terminated with a non-zero status (1)

In one of the other threads I read, they requested the following output:
Code:
[root@freenas] ~# du /mnt | sort -nr |head
14534422		/mnt
14534420		/mnt/wmsstorage
14528738		/mnt/wmsstorage/HyperVMs
5616	/mnt/wmsstorage/.system
5398	/mnt/wmsstorage/.system/syslog
5381	/mnt/wmsstorage/.system/syslog/log
4599	/mnt/wmsstorage/.system/syslog/log/samba4
169	 /mnt/wmsstorage/.system/cores
17	  /mnt/wmsstorage/ESXI-VMStorage
17	  /mnt/wmsstorage/Backups

[root@freenas] ~# df -ah
Filesystem								  Size	Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs1a						 926M	826M	 26M	97%	/
devfs									   1.0k	1.0k	  0B   100%	/dev
/dev/md0									4.6M	3.3M	902k	79%	/etc
/dev/md1									823k	2.0k	756k	 0%	/mnt
/dev/md2									149M	 33M	103M	25%	/var
/dev/ufs/FreeNASs4						   19M	3.3M	 15M	18%	/data
wmsstorage								  367k	367k	  0B   100%	/mnt/wmsstorage
wmsstorage/.system						  351k	351k	  0B   100%	/mnt/wmsstorage/.system
wmsstorage/.system/XEN-GR-R510-Heartbeat	1.0G	287k	1.0G	 0%	/mnt/wmsstorage/.system/XEN-GR-R510-Heartbeat
wmsstorage/.system/cores					439k	439k	  0B   100%	/mnt/wmsstorage/.system/cores
wmsstorage/.system/syslog				   5.7M	5.7M	  0B   100%	/mnt/wmsstorage/.system/syslog
wmsstorage/HyperVMs						  13G	 13G	  0B   100%	/mnt/wmsstorage/HyperVMs
wmsstorage/ESXI-VMStorage				   287k	287k	  0B   100%	/mnt/wmsstorage/ESXI-VMStorage
wmsstorage/Backups						  287k	287k	  0B   100%	/mnt/wmsstorage/Backups

[root@freenas] ~# mount
/dev/ufs/FreeNASs1a on / (ufs, local, read-only)
devfs on /dev (devfs, local, multilabel)
/dev/md0 on /etc (ufs, local)
/dev/md1 on /mnt (ufs, local)
/dev/md2 on /var (ufs, local)
/dev/ufs/FreeNASs4 on /data (ufs, local, noatime, soft-updates)
wmsstorage on /mnt/wmsstorage (zfs, local, nfsv4acls)
wmsstorage/.system on /mnt/wmsstorage/.system (zfs, local, nfsv4acls)
wmsstorage/.system/XEN-GR-R510-Heartbeat on /mnt/wmsstorage/.system/XEN-GR-R510-Heartbeat (zfs, local, nfsv4acls)
wmsstorage/.system/cores on /mnt/wmsstorage/.system/cores (zfs, local, nfsv4acls)
wmsstorage/.system/syslog on /mnt/wmsstorage/.system/syslog (zfs, local, nfsv4acls)
wmsstorage/HyperVMs on /mnt/wmsstorage/HyperVMs (zfs, local, nfsv4acls)
wmsstorage/ESXI-VMStorage on /mnt/wmsstorage/ESXI-VMStorage (zfs, local, nfsv4acls)
wmsstorage/Backups on /mnt/wmsstorage/Backups (zfs, local, nfsv4acls)
[root@freenas] ~#

I see that many of the filesystems are reporting 100% full with extremely small capacities, but I have no idea how or why that would happen!
Nothing has physically changed. I enabled snapshotting a couple of days ago, but it has only taken a few snapshots and the webgui doesn't show that they have used much space yet.
There should be many TBs of space available on the system. Everything still seems to be working, except that I am having issues getting CIFS to start. However, I am afraid that more issues may start to arise.
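
If it helps to see the pool-level view alongside the per-dataset numbers, I believe these standard commands show the raw pool allocation versus what the datasets report (I have not dug into the difference myself yet):
Code:
# pool-level view (zpool list reports raw space, including RAID-Z parity)
zpool list wmsstorage
# per-dataset space breakdown
zfs list -o space -r wmsstorage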

I have also included screenshots of the webgui that displays our volumes and disks. One strange thing that I've noticed is that wmsstorage/VMStorage and wmsstorage/XenVMStorage are missing from the output above. Not sure if that means anything, but it is an observation.

If anybody has any ideas, I am willing to try them. Please let me know if you need any other information from my system.

Thank you in advance! EB
 

Attachments

  • FreeNAS Volumes.PNG (51.5 KB)
  • FreeNAS Disks.PNG (46.6 KB)

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Is da6 part of your pool? If yes then your pool will only be as large as the smallest drive.

Provide the output of zpool status
 
Joined
May 19, 2017
Messages
11
Is da6 part of your pool? If yes then your pool will only be as large as the smallest drive.

Provide the output of zpool status
Thank you for the quick reply! I do not think da6 is part of the pool. That SSD was installed to improve performance, but we were never able to figure it out.
Here is the output of zpool status
Code:
[root@freenas] ~# zpool status
  pool: wmsstorage
 state: ONLINE
  scan: scrub repaired 0 in 47h49m with 0 errors on Mon May 15 23:50:00 2017
config:

		NAME											STATE	 READ WRITE CKSUM
		wmsstorage									  ONLINE	   0	 0	 0
		  raidz2-0									  ONLINE	   0	 0	 0
			gptid/0e989816-af86-11e3-887a-00259086724e  ONLINE	   0	 0	 0
			gptid/6d72fd2e-a9f2-11e3-aef0-00259086724e  ONLINE	   0	 0	 0
			gptid/adccc1cb-ac98-11e6-98d6-00259086724e  ONLINE	   0	 0	 0
			gptid/6e621a6a-a9f2-11e3-aef0-00259086724e  ONLINE	   0	 0	 0
			gptid/6ed9c7da-a9f2-11e3-aef0-00259086724e  ONLINE	   0	 0	 0
			gptid/6f52e6e2-a9f2-11e3-aef0-00259086724e  ONLINE	   0	 0	 0

errors: No known data errors
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Well, that sucks; not the output I was hoping for. Could you post the output of glabel status and, for good measure, camcontrol devlist?

It appears it took 47 hours to run the last scrub, so you must either have a lot of data or a very slow machine (I mean SLOW....). This makes me think of snapshots; if you have them, then you are likely filling your drives with that stuff.
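
You can get a quick look at what the snapshots are holding with something like this (standard ZFS, nothing FreeNAS-specific):
Code:
# list all snapshots, sorted by the space each one holds exclusively
zfs list -t snapshot -o name,used,referenced -s used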
 
Joined
May 19, 2017
Messages
11
I noticed that about the scrub too. I thought that seemed excessive. There really should only be approx 3-4TB of space used. Not a ton of data and the system should be fairly quick. I don't remember exact specs of the hardware, though. Snapshots are certainly a possibility, but we should have plenty of space, unless something is configured incorrectly.

Here is the output of glabel status and camcontrol devlist:
Code:
[root@freenas] ~# glabel status
									  Name  Status  Components
gptid/6d72fd2e-a9f2-11e3-aef0-00259086724e	 N/A  da0p2
gptid/6e621a6a-a9f2-11e3-aef0-00259086724e	 N/A  da1p2
gptid/6ed9c7da-a9f2-11e3-aef0-00259086724e	 N/A  da2p2
gptid/6f52e6e2-a9f2-11e3-aef0-00259086724e	 N/A  da3p2
gptid/0e989816-af86-11e3-887a-00259086724e	 N/A  da4p2
gptid/6df554c8-aaaf-11e3-b61c-00259086724e	 N/A  da6p1
gptid/6dfcf1da-aaaf-11e3-b61c-00259086724e	 N/A  da6p2
							 ufs/FreeNASs3	 N/A  da7s3
							 ufs/FreeNASs4	 N/A  da7s4
							ufs/FreeNASs1a	 N/A  da7s1a
gptid/adccc1cb-ac98-11e6-98d6-00259086724e	 N/A  da5p2
					ntfs/User File Storage	 N/A  zvol/wmsstorage/User_File_Storage@auto-20170513.0100-2ws1
[root@freenas] ~# camcontrol devlist
<ATA WDC WD30EFRX-68E 0A80>		at scbus0 target 3 lun 0 (da0,pass0)
<ATA WDC WD30EFRX-68E 0A80>		at scbus0 target 6 lun 0 (da1,pass1)
<ATA WDC WD30EFRX-68E 0A80>		at scbus0 target 7 lun 0 (da2,pass2)
<ATA WDC WD30EFRX-68E 0A80>		at scbus0 target 8 lun 0 (da3,pass3)
<ATA WDC WD30EFRX-68E 0A80>		at scbus0 target 9 lun 0 (da4,pass4)
<ATA WDC WD30EFRX-68E 0A80>		at scbus0 target 11 lun 0 (da5,pass5)
<ATA Samsung SSD 840 5B0Q>		 at scbus1 target 4 lun 0 (da6,pass6)
<SanDisk Cruzer Fit 1.26>		  at scbus12 target 0 lun 0 (da7,pass7)
[root@freenas] ~#

Thanks again for helping me with this!
 
Joined
May 19, 2017
Messages
11
I found and ran the command to display all snapshots. Here is the output:
Code:
[root@freenas] ~# zfs list -t snapshot
NAME															 USED  AVAIL  REFER  MOUNTPOINT
wmsstorage@auto-20170513.0100-2w									0	  -   368K  -
wmsstorage/.system@auto-20170513.0100-2w							0	  -   352K  -
wmsstorage/.system/XEN-GR-R510-Heartbeat@auto-20170513.0100-2w	  0	  -   288K  -
wmsstorage/.system/XEN_GR_R510_Heartbeat@auto-20170513.0100-2w	  0	  -   144K  -
wmsstorage/.system/cores@auto-20170513.0100-2w					  0	  -   440K  -
wmsstorage/.system/syslog@auto-20170513.0100-2w					 0	  -  5.64M  -
wmsstorage/Backups@auto-20170513.0100-2w							0	  -   288K  -
wmsstorage/ESXI-VMStorage@auto-20170513.0100-2w					 0	  -   288K  -
wmsstorage/HyperVMs@auto-20170513.0100-2w						   0	  -  13.9G  -
wmsstorage/User_File_Storage@auto-20170513.0100-2w			   336M	  -   492G  -
wmsstorage/VMStorage@auto-20170513.0100-2w						  0	  -  1.17T  -
wmsstorage/VMbackups@auto-20170513.0100-2w						  0	  -   296G  -
wmsstorage/XenVMStorage@auto-20170513.0100-2w				   25.2G	  -  1.92T  -
wmsstorage/xenheartbeat@auto-20170513.0100-2w				   1.97M	  -  2.76M  -
[root@freenas] ~#

As you can see there are only 14 snapshots, and only 1 is fairly large at 25.2G.
I hope that helps a little.
 

nojohnny101

Wizard
Joined
Dec 3, 2015
Messages
1,478
Do you have any controllers or RAID cards between your disks and FreeNAS?

I know you said you don't know the specific hardware specs, but if you could do any digging to find some specifics, that would help us help you.
 
Joined
May 19, 2017
Messages
11
@nojohnny101 I did some digging and found the order for the original purchase of the equipment back in Oct '13.
Intel i7-4771 | 16GB RAM | LSI 9211-8i SATA/SAS controller | SuperMicro C7Z87 mobo
I am confident that the LSI controller is in JBOD mode so that ZFS could have full control over the disks.
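
If it matters, I can try to verify the controller firmware/mode from the shell. Assuming the LSI sas2flash utility is included in this FreeNAS build, something like this should list the controller and its firmware version:
Code:
# list any LSI SAS2 HBAs and their firmware versions
sas2flash -listall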
Let me know if you want more details

@danb35 Correct, it is still running 9.2.1.2
 
Joined
Jan 18, 2017
Messages
525
Do you have quotas or reservations set up on that pool?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Well, your da6 SSD is not part of your pool, thankfully.

I'm thinking your snapshots are the root of the issue.

Try zfs list -ro space -t all wmsstorage and df -H
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Do you have quotas or reservations set up on that pool?
Reservations would be my guess; quotas won't block space, but reservations definitely will.
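
An easy way to check from the shell (standard zfs get, using your pool name) would be something like:
Code:
# show any quotas or reservations anywhere in the pool
zfs get -r -t filesystem,volume quota,refquota,reservation,refreservation wmsstorage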
 
Joined
Jan 18, 2017
Messages
525
I don't use any of those functions and I'm still a newbie, but the fixed sizes of those datasets have my attention, particularly wherever the .system folder is stored.
 
Joined
May 19, 2017
Messages
11
Do you have quotas or reservations set up on that pool?
I do not have any quotas or reservations on the top-level "wmsstorage" volume; both are set to 0.
Is there a command I can run to verify that?

Well, your da6 SSD is not part of your pool, thankfully.

I'm thinking your snapshots are the root of the issue.

Try zfs list -ro space -t all wmsstorage and df -H
Code:
[root@freenas] ~# zfs list -ro space -t all wmsstorage
NAME															AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
wmsstorage														  0  12.1T		 0	368K			  0	  12.1T
wmsstorage@auto-20170513.0100-2w									-	  0		 -	   -			  -		  -
wmsstorage/.system												  0  2.04G		 0	352K			  0	  2.04G
wmsstorage/.system@auto-20170513.0100-2w							-	  0		 -	   -			  -		  -
wmsstorage/.system/XEN-GR-R510-Heartbeat						   1G  1.00G		 0	288K			 1G		  0
wmsstorage/.system/XEN-GR-R510-Heartbeat@auto-20170513.0100-2w	  -	  0		 -	   -			  -		  -
wmsstorage/.system/XEN_GR_R510_Heartbeat						1.03G  1.03G		 0	144K		  1.03G		  0
wmsstorage/.system/XEN_GR_R510_Heartbeat@auto-20170513.0100-2w	  -	  0		 -	   -			  -		  -
wmsstorage/.system/cores											0   440K		 0	440K			  0		  0
wmsstorage/.system/cores@auto-20170513.0100-2w					  -	  0		 -	   -			  -		  -
wmsstorage/.system/syslog										   0  5.64M		 0   5.64M			  0		  0
wmsstorage/.system/syslog@auto-20170513.0100-2w					 -	  0		 -	   -			  -		  -
wmsstorage/Backups												  0   288K		 0	288K			  0		  0
wmsstorage/Backups@auto-20170513.0100-2w							-	  0		 -	   -			  -		  -
wmsstorage/ESXI-VMStorage										   0   288K		 0	288K			  0		  0
wmsstorage/ESXI-VMStorage@auto-20170513.0100-2w					 -	  0		 -	   -			  -		  -
wmsstorage/HyperVMs												 0  13.9G		 0   13.9G			  0		  0
wmsstorage/HyperVMs@auto-20170513.0100-2w						   -	  0		 -	   -			  -		  -
wmsstorage/User_File_Storage									1.03T  1.51T	  336M	492G		  1.03T		  0
wmsstorage/User_File_Storage@auto-20170513.0100-2w				  -   336M		 -	   -			  -		  -
wmsstorage/VMStorage											2.06T  3.23T		 0   1.17T		  2.06T		  0
wmsstorage/VMStorage@auto-20170513.0100-2w						  -	  0		 -	   -			  -		  -
wmsstorage/VMbackups											1.03T  1.32T		 0	296G		  1.03T		  0
wmsstorage/VMbackups@auto-20170513.0100-2w						  -	  0		 -	   -			  -		  -
wmsstorage/XenVMStorage										 4.09T  6.04T	 25.3G   1.92T		  4.09T		  0
wmsstorage/XenVMStorage@auto-20170513.0100-2w					   -  25.3G		 -	   -			  -		  -
wmsstorage/xenheartbeat										 1.03G  1.03G	 1.97M   2.76M		  1.03G		  0
wmsstorage/xenheartbeat@auto-20170513.0100-2w					   -  1.97M		 -	   -			  -		  -
[root@freenas] ~# df -H
Filesystem								  Size	Used   Avail Capacity  Mounted on
/dev/ufs/FreeNASs1a						 971M	866M	 27M	97%	/
devfs									   1.0k	1.0k	  0B   100%	/dev
/dev/md0									4.8M	3.5M	923k	79%	/etc
/dev/md1									843k	2.0k	774k	 0%	/mnt
/dev/md2									156M	 35M	108M	25%	/var
/dev/ufs/FreeNASs4						   20M	3.4M	 15M	18%	/data
wmsstorage								  376k	376k	  0B   100%	/mnt/wmsstorage
wmsstorage/.system						  359k	359k	  0B   100%	/mnt/wmsstorage/.system
wmsstorage/.system/XEN-GR-R510-Heartbeat	1.1G	294k	1.1G	 0%	/mnt/wmsstorage/.system/XEN-GR-R510-Heartbeat
wmsstorage/.system/cores					450k	450k	  0B   100%	/mnt/wmsstorage/.system/cores
wmsstorage/.system/syslog				   5.9M	5.9M	  0B   100%	/mnt/wmsstorage/.system/syslog
wmsstorage/HyperVMs						  14G	 14G	  0B   100%	/mnt/wmsstorage/HyperVMs
wmsstorage/ESXI-VMStorage				   294k	294k	  0B   100%	/mnt/wmsstorage/ESXI-VMStorage
wmsstorage/Backups						  294k	294k	  0B   100%	/mnt/wmsstorage/Backups
[root@freenas] ~#

Well, that top command certainly looks interesting... 12.1T used on wmsstorage? I'm not quite sure how to read that though.
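
From what I can find in the zfs man page, the 'space' columns map to the usedby* properties, so I assume something like this would show the same breakdown for a single dataset:
Code:
# per-category breakdown for one dataset (same numbers as the 'space' columns)
zfs get used,usedbydataset,usedbysnapshots,usedbyrefreservation,usedbychildren wmsstorage/XenVMStorage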
 
Joined
Jan 18, 2017
Messages
525
Looks to me (newbie) like the reserved space is killing it.
 
Joined
May 19, 2017
Messages
11
From reading deeper into the output above, I see the listing of the quotas that I have set. They total approximately 9.2T, and there is 12.1T worth of storage.
Three of the volumes show they are using more than the quota (e.g., XenVMStorage: 4.09T vs 6.04T, even though I know there is actually only about 2T worth of data in there).

What is the difference between USED and USEDDS? USEDDS appears to be actual data usage.

Should I turn off and delete all snapshots?
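
If that is the right move, I assume the basic command is just zfs destroy on each snapshot name from the listing above, for example:
Code:
# destroy a single automatic snapshot (name taken from the list above)
zfs destroy wmsstorage/XenVMStorage@auto-20170513.0100-2w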
 
Joined
May 19, 2017
Messages
11
I just turned off and deleted the snapshots.
That freed up far more space than I expected. Also, far more numbers changed in here than I expected!
Code:
[root@freenas] ~# zfs list -ro space -t all wmsstorage
NAME									  AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
wmsstorage								2.38T  8.27T		 0	368K			  0	  8.27T
wmsstorage/.system						2.38T  2.04G		 0	352K			  0	  2.04G
wmsstorage/.system/XEN-GR-R510-Heartbeat  2.38T	 1G		 0	288K		  1024M		  0
wmsstorage/.system/XEN_GR_R510_Heartbeat  2.38T  1.03G		 0	144K		  1.03G		  0
wmsstorage/.system/cores				  2.38T   440K		 0	440K			  0		  0
wmsstorage/.system/syslog				 2.38T  5.64M		 0   5.64M			  0		  0
wmsstorage/Backups						1024G   288K		 0	288K			  0		  0
wmsstorage/ESXI-VMStorage				 1.50T   288K		 0	288K			  0		  0
wmsstorage/HyperVMs					   86.1G  13.9G		 0   13.9G			  0		  0
wmsstorage/User_File_Storage			  2.93T  1.03T		 0	492G		   564G		  0
wmsstorage/VMStorage					  3.28T  2.06T		 0   1.17T		   916G		  0
wmsstorage/VMbackups					  3.13T  1.03T		 0	296G		   760G		  0
wmsstorage/XenVMStorage				   4.59T  4.13T		 0   1.92T		  2.20T		  0
wmsstorage/xenheartbeat				   2.38T  1.03G		 0   2.76M		  1.03G		  0
[root@freenas] ~#

However, my question above still stands: how can USED be so much higher than USEDDS?
Especially when it said only 25G was used by snapshots, and yet deleting them freed up 2.38T.
I just want to have a deeper understanding of how this all works so it doesn't happen again.

Thank you ALL for your help with this!
 
Joined
Jan 18, 2017
Messages
525
USED appears to be USEDREFRESERV (reserved space) + USEDDS (actual data stored); someone please correct me if I've misunderstood.
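
If the man page is right, the full accounting is USED = USEDDS + USEDSNAP + USEDCHILD + USEDREFRESERV, and a normal (non-sparse) zvol gets a refreservation equal to its size, so even an empty zvol "uses" its whole size. A rough illustration with hypothetical zvol names (not something to run on a pool that is already full):
Code:
# a thick zvol reserves its full size up front
zfs create -V 10G wmsstorage/testvol
zfs get volsize,refreservation,used,usedbyrefreservation wmsstorage/testvol
# a sparse zvol does not reserve anything
zfs create -s -V 10G wmsstorage/sparsevol
# clean up
zfs destroy wmsstorage/testvol
zfs destroy wmsstorage/sparsevol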
 
Joined
May 19, 2017
Messages
11
USED appears to be USEDREFRESERV (reserved space) + USEDDS (actual data stored); someone please correct me if I've misunderstood.
I think you're right, but I'm not sure why those numbers would be added together. Reserved space should be the max and actual used space should be a number less than that.
Maybe I'm just not familiar enough with reservations. Is it always trying to reserve that amount of free space on top of actual space used?

However, the other side of this is that I have never set anything but a quota on any of the zvols that I have created.
There are some (VMbackups, xenheartbeat, User_File_Storage, XEN_GR_R510_Heartbeat, VMStorage, and XenVMStorage) where I can see the reservation/quota amount, but I can't modify it or see which it is (reservation or quota) in the webgui. My only options are Create Snapshot and Destroy zvol. I am pretty sure these were quotas too, but I can't be 100% positive on that.
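
If the GUI won't show it, I'm guessing I can check it (and, if it turns out to be a refreservation I don't want, clear it) from the shell with standard zfs commands, something like:
Code:
# see whether it's a quota or a reservation on one of those zvols
zfs get volsize,quota,refquota,reservation,refreservation wmsstorage/VMbackups
# clearing a refreservation would effectively make the zvol sparse
zfs set refreservation=none wmsstorage/VMbackups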
 

Attachments

  • FreeNAS Volumes2.PNG (37.8 KB)
Joined
Jan 18, 2017
Messages
525
You should be able to edit those settings in the GUI.
 