OSError [Errno 28] No space left on device

Status
Not open for further replies.

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
Hello, I've got a FreeNAS server that has been running for a few years, happily providing iSCSI storage to some VMs.

The problem started off when I got an email stating no space left on device:
Code:
lockf: cannot open /var/run/periodic.daily.lock: No space left on device
mail: /tmp/mail.RstM46r3JAuF: No space left on device

I can get to the login GUI, but when I enter credentials it goes to an error page. Oddly, the iSCSI connections are working normally and I can read and write to the array as usual. I think the problem is that the USB drive the OS runs on may be out of space, but I need to confirm this and then figure out how to fix it. I'm not an experienced BSD or Linux guy, so having to go beyond the GUI is always a bit stressful. I tried connecting with the PuTTY client but was denied access even with the correct root password. The server is currently running headless, but I can connect a keyboard and monitor directly if needed.

Any advice on this situation would be appreciated; I'm doubtful it would even be safe to restart the server at this point.
Code:
Request Method: POST
Request URL: http://192.168.0.16/account/login/
Software Version: FreeNAS-9.10.2-U5 (561f0d7a1)
Exception Type: OSError
Exception Value: [Errno 28] No space left on device: '/tmp/sessionidrlggc2ges8zj30w3v46ccgpbn1x0bjfh'
Exception Location: /usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/file.py in save, line 125
Server time: Wed, 20 Dec 2017 11:10:51 -0700


Traceback
Code:
Environment:

Software Version: FreeNAS-9.10.2-U5 (561f0d7a1)
Request Method: POST
Request URL: http://192.168.0.16/account/login/

Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
  112. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/www/freenasUI/../freenasUI/account/views.py" in login_wrapper
  327. extra_context=extra_context,
File "/usr/local/lib/python2.7/site-packages/django/views/decorators/debug.py" in sensitive_post_parameters_wrapper
  75. return view(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view
  99. response = view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
  52. response = view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/views.py" in login
  43. auth_login(request, form.get_user())
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/__init__.py" in login
  83. request.session.cycle_key()
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/base.py" in cycle_key
  277. self.create()
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/file.py" in create
  105. self.save(must_create=True)
File "/usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/file.py" in save
  125. fd = os.open(session_file_name, flags)

Exception Type: OSError at /account/login/
Exception Value: [Errno 28] No space left on device: '/tmp/sessionidrlggc2ges8zj30w3v46ccgpbn1x0bjfh'



Below this section there was further text but it contained the authentication credentials in plain text so I have omitted this section.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Do you have a current backup of your config db?
It is possible that, if you are using a USB boot drive, it may have failed in a similar way to one I had. The one I was using a couple years ago suddenly decided it was only half as big as it was supposed to be. That caused me some trouble, but my data was fine.
It might be possible to do a fresh install to another boot device and restore the config to get you back in.
 

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
Do you have a current backup of your config db?
It is possible that, if you are using a USB boot drive, it may have failed in a similar way to one I had. The one I was using a couple years ago suddenly decided it was only half as big as it was supposed to be. That caused me some trouble, but my data was fine.
It might be possible to do a fresh install to another boot device and restore the config to get you back in.


I'm going to take a look through my files but I don't think I have a backup of that. I've gone through the process of restoring my volumes before after downgrading from FreeNAS V10 back to 9.5. Wouldn't look forward to it as re-configuring everything is a PITA.

Is there a way to access the partitions on the USB drive from Windows 10? I'll have to shut down the server to fix this eventually and when I do I'd like to try and see if I can just clean up the drive to get it working again if it isn't just straight up broken like yours was.
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Good news is, unless you do something egregiously dumb, your data is safe. You won't be able to access the USB device from Win10 - it's ZFS. You could access it from a FreeBSD instance.

I suspect you have a ton of boot snapshots. Personally, I'd SSH in and destroy a snapshot or two from the CLI. If that completes successfully, reboot the machine and all should be happy. Then, go into the GUI, clean up other old boot images, and take a configuration backup. Finally, install a real boot device (small SSD), install FN 11.1, and restore your configuration backup.

Finally, I'd make sure you have email alerting configured properly and working. You should have received warnings when your boot pool reached 90% full.
 

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
Good news is, unless you do something egregiously dumb, your data is safe. You won't be able to access the USB device from Win10 - it's ZFS. You could access it from a FreeBSD instance.

I suspect you have a ton of boot snapshots. Personally, I'd SSH in and destroy a snapshot or two from the CLI. If that completes successfully, reboot the machine and all should be happy. Then, go into the GUI, clean up other old boot images, and take a configuration backup. Finally, install a real boot device (small SSD), install FN 11.1, and restore your configuration backup.

Finally, I'd make sure you have email alerting configured properly and working. You should have received warnings when your boot pool reached 90% full.


Can you point me in the right direction on how to "destroy a snapshot or two from the CLI"? I have no clue how to perform that task, but I would definitely give it a try. I thought the snapshots were for the actual volumes, not the USB drive, but I suppose it has its own snapshots too.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I'm going to take a look through my files but I don't think I have a backup of that. I've gone through the process of restoring my volumes before after downgrading from FreeNAS V10 back to 9.5. Wouldn't look forward to it as re-configuring everything is a PITA.

Is there a way to access the partitions on the USB drive from Windows 10? I'll have to shut down the server to fix this eventually and when I do I'd like to try and see if I can just clean up the drive to get it working again if it isn't just straight up broken like yours was.
No, I don't think Windows will recognize the partition. I downloaded a special utility because the drive wasn't working properly, but that ultimately only allowed me to reformat the USB stick, and even then it only recognized half the capacity it was supposed to have.

Your boot drive should be formatted with ZFS, so it might be possible to mount it under a Linux system that had ZFS on Linux installed. If I recall correctly, the file you want is named freenas-v1.db and is located in /data

Here is a screenshot:
[Screenshot: Capture.PNG]
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The server is currently running headless but I can connect a keyboard and monitor directly if needed.
Try that, and show the output of zpool list.
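In case it helps to know what to look for: zpool list has a CAP column, and a boot pool in the high 90s would confirm the out-of-space theory. The sample output below is made up for illustration; the one-liner is just a sketch of how you could flag a nearly full pool from that output.
Code:
```shell
# On the server you would run:  zpool list -H -o name,capacity
# The printf stands in for that output (made-up values) so the
# filtering step can be shown; it flags any pool at 90% or more.
printf 'freenas-boot\t99%%\ntank\t61%%\n' \
  | awk -F'\t' '{ gsub(/%/, "", $2); if ($2 + 0 >= 90) print $1 " is " $2 "% full" }'
# → freenas-boot is 99% full
```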
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Can you point me in the right direction on how to "destroy a snapshot or two from the CLI". I have no clue how to perform that task but I would definitely give it a try. I thought the snapshots were for the actual volumes not the USB. But I suppose it has it's own snapshots too.
You'll have to SSH in. Here's how it looks on my system:
Code:
root@freenas:/ # zfs list -t snapshot
NAME																	 USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/11.0-U3@2017-07-08-16:12:23							735M	  -   737M  -
freenas-boot/ROOT/11.0-U3@2017-09-14-03:44:23							735M	  -   737M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201512121950		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201601181840		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201602020212		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201602031011		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201605170422		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-8bc815b059fa92f1c8ba7c7685deacbb  6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-9.10.1-U4								 6.31M	  -  6.32M  -


You'll then issue a "zfs destroy <snapshot_name>" to remove the old snapshots:
Code:
root@freenas:/ # zfs destroy freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201512121950


I would do one or two, just to get enough free space to make things happy. Then, I'd get back into the GUI and pull a configuration backup (system/general/save config). Then go to Boot and remove any older boot images still left there.

That should at least get the system out of its catatonic state. You've already ordered or purchased a small SSD to replace your USB boot device, yes? :)
 

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
Thanks for the advice, guys. I'm going to get connected to this system and get the output of zpool list. I hope I can connect directly, since when I tried using PuTTY I got an authentication error.
 

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
You'll have to SSH in. Here's how it looks on my system:
Code:
root@freenas:/ # zfs list -t snapshot
NAME																	 USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/11.0-U3@2017-07-08-16:12:23							735M	  -   737M  -
freenas-boot/ROOT/11.0-U3@2017-09-14-03:44:23							735M	  -   737M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201512121950		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201601181840		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201602020212		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201602031011		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201605170422		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-8bc815b059fa92f1c8ba7c7685deacbb  6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-9.10.1-U4								 6.31M	  -  6.32M  -


You'll then issue a "zfs destroy <snapshot_name>" to remove the old snapshots:
Code:
root@freenas:/ # zfs destroy freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201512121950


I would do one or two, just to get enough free space to make things happy. Then, I'd get back into the GUI and pull a configuration backup (system/general/save config). Then go to Boot and remove any older boot images still left there.

That should at least get the system out of its catatonic state. You've already ordered or purchased a small SSD to replace your USB boot device, yes? :)

I haven't been able to run any of those commands. I've been unable to connect via SSH and unable to enter the shell from the console menu. I can't reboot the system right now as I have a bunch of data transfers in progress.

I do have a spare SSD available to swap in as the boot device if this comes to that.

[Screenshot: upload_2017-12-21_13-32-38.png]
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
It isn't exactly the same error, but a similar behavior to when my boot drive failed several years ago. I could still access the shares, but I could not access the GUI or SSH and the console was filled with errors and unresponsive.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
The "no space left on device" for the file on /var/db suggests that it's the .system dataset that's out of space. By default, that would be on the data pool.
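A quick way to check that from the shell, assuming the default layout where the .system dataset lives on the data pool (the pool name and sizes below are made up; on the server you would run the real zfs list command shown in the comment):
Code:
```shell
# On the server you would run:  zfs list -r -o name,used,avail | grep '\.system'
# The printf stands in for sample (made-up) output so the filter can be shown;
# an AVAIL of 0 on the .system dataset would confirm it is out of space.
printf 'tank          8.21T  2.66T\ntank/.system  1.13G      0\n' | grep '\.system'
# → tank/.system  1.13G      0
```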
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The "no space left on device" for the file on /var/db suggests that it's the .system dataset that's out of space. By default, that would be on the data pool.
Hello, I've got a FreeNAS server running and it has been going for a few years and happily providing iSCSI storage to some VMs.
Have you been monitoring how full the storage was getting?
 

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
Have you been monitoring how full the storage was getting?

When I set up the volumes, I configured the iSCSI target to use 80% of the available space. The storage the VMs are using is nowhere near full: one volume has 6.03TB free and the other has 5.38TB free. The portion of the volume used for iSCSI acts as if it is completely used, via lazy zeroing. I haven't been closely monitoring the total volume usage, so if something else was eating up the remaining 20% of space, I would only find out via a notification from the FreeNAS box. I did get one, but only after it was completely full and a cron job failed to run.

So in short, no I have not monitored it directly on FreeNAS.
 

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
You'll have to SSH in. Here's how it looks on my system:
Code:
root@freenas:/ # zfs list -t snapshot
NAME																	 USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/11.0-U3@2017-07-08-16:12:23							735M	  -   737M  -
freenas-boot/ROOT/11.0-U3@2017-09-14-03:44:23							735M	  -   737M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201512121950		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201601181840		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201602020212		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201602031011		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201605170422		   6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-8bc815b059fa92f1c8ba7c7685deacbb  6.77M	  -  6.78M  -
freenas-boot/grub@Pre-Upgrade-9.10.1-U4								 6.31M	  -  6.32M  -


You'll then issue a "zfs destroy <snapshot_name>" to remove the old snapshots:
Code:
root@freenas:/ # zfs destroy freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201512121950


I would do one or two, just to get enough free space to make things happy. Then, I'd get back into the GUI and pull a configuration backup (system/general/save config). Then go to Boot and remove any older boot images still left there.

That should at least get the system out of its catatonic state. You've already ordered or purchased a small SSD to replace your USB boot device, yes? :)


I had some better luck this morning after more attempts to get shell access on the direct console. Jamming on the F keys and then pressing Enter a bunch of times finally seemed to get me in.

I ran zfs list -t snapshot and then deleted the oldest snapshot from 2017-03-24-08:17:59.

[Screenshot: upload_2017-12-22_9-35-28.png]


Unfortunately this did not resolve the issue accessing the GUI.

Request Method: POST
Request URL: http://192.168.0.16/account/login/
Software Version: FreeNAS-9.10.2-U5 (561f0d7a1)
Exception Type: OSError
Exception Value:
[Errno 28] No space left on device: '/tmp/sessionidlln5gz9eomd196o86v0tnwo2cc9pzbq2'

Exception Location: /usr/local/lib/python2.7/site-packages/django/contrib/sessions/backends/file.py in save, line 125
Server time: Fri, 22 Dec 2017 09:33:15 -0700
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
Also, a
Code:
df -h
might be interesting.
 

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
So, for the second time, what's the output of zpool list?

Oh, maybe you didn't see the screenshot; the output is in there. I can't get in via SSH right now, so I can't copy/paste the text of the output.
I've typed the following from the screenshot:
freenas-boot/ROOT/9.10.2-U5@2017-03-24-08:17:59  3.31M  -  636M  -
freenas-boot/ROOT/9.10.2-U5@2017-07-20-08:03:47  3.99M  -  637M  -
 

dominomachine

Dabbler
Joined
Dec 21, 2017
Messages
14
Also, a
Code:
df -h
might be interesting.

Still can't SSH in so here is the output as an image:
[Screenshot: upload_2017-12-22_11-19-32.png]


It does say 0% on the left-hand side there, so maybe this volume is completely full? But that is not the system volume, so I'm a bit confused about why it is causing the OS to error.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Oh maybe you didn't see the screenshot, the output is in there.
I did see the screenshot, and no, the output isn't there. That was the output of zfs list -t snapshot, which is something completely different.
 