9.3 upgrade runs out of disk space

Joined
Mar 17, 2015
Messages
3
Fresh 9.3 install on a node about a month ago (currently running FreeNAS-9.3-STABLE-201502271818). I love the new updating/patching feature. I must admit it's a bit disruptive to take storage down every week or two for updates, but I definitely favor staying current over the minor downtime.

The trouble I'm currently running into is that the latest round of updates fills the /var/tmp mount the updater creates, so the update can't complete.

These are the pending updates:
=====
Upgrade: base-os-9.3-STABLE-efa3d56-a21079f-c741590 -> base-os-9.3-STABLE-e9293b2-5d41bb2-2e9addd
Upgrade: FreeNASUI-9.3-STABLE-efa3d56-a21079f-c741590 -> FreeNASUI-9.3-STABLE-e9293b2-5d41bb2-2e9addd
Upgrade: freenas-pkg-tools-9.3-STABLE-842051b -> freenas-pkg-tools-9.3-STABLE-e9293b2
=====

===[ output from `df -h` while updating ]===
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201503170439 531M 528M 2.7M 100% /var/tmp/tmp5lTcmF
=====

===[ /var/tmp# cat .upgradeprogress ]===
{"finished": true, "uuid": "7a3580bc6d174b77ac2c263a210c36b4", "error": "[Errno 28] No space left on device", "apply": true, "pid": 32915, "percent": 33, "indeterminate": false, "step": 2, "details": "Installing base-os (1/3)"}
=====

===[ full output from `df -h` while updating ]===
Filesystem Size Used Avail Capacity Mounted on
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201502271818 931M 926M 4.8M 99% /
devfs 1.0k 1.0k 0B 100% /dev
tmpfs 32M 5.3M 26M 17% /etc
tmpfs 4.0M 8.0k 4M 0% /mnt
tmpfs 2.7G 37M 2.6G 1% /var
replica 3.9T 272k 3.9T 0% /mnt/replica
replica/chris 3.9T 408M 3.9T 0% /mnt/replica/chris
replica/corinne 3.9T 96k 3.9T 0% /mnt/replica/corinne
replica/vault 3.9T 240k 3.9T 0% /mnt/replica/vault
replica/.system 3.9T 296M 3.9T 0% /var/db/system
replica/.system/cores 3.9T 2.0M 3.9T 0% /var/db/system/cores
replica/.system/samba4 3.9T 336k 3.9T 0% /var/db/system/samba4
replica/.system/syslog-edba64cb7e0f4275892ce622bdbcbb8e 3.9T 664k 3.9T 0% /var/db/system/syslog-edba64cb7e0f4275892ce622bdbcbb8e
replica/.system/rrd-edba64cb7e0f4275892ce622bdbcbb8e 3.9T 144k 3.9T 0% /var/db/system/rrd-edba64cb7e0f4275892ce622bdbcbb8e
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201503170439 531M 526M 4.8M 99% /var/tmp/tmp5lTcmF
devfs 1.0k 1.0k 0B 100% /var/tmp/tmp5lTcmF/dev
tmpfs 8.7G 4.0k 8.7G 0% /var/tmp/tmp5lTcmF/var/tmp
freenas-boot/grub 16M 11M 4.8M 70% /var/tmp/tmp5lTcmF/boot/grub
=====

The /var mount has only 37M in use (out of 2.7G total) before kicking off the update. What's strange is that the mount created during the upgrade (/var/tmp/tmp5lTcmF) is limited to 531M.
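Looking at the df output again, that 531M mount appears to be the new boot environment's dataset on freenas-boot rather than the tmpfs-backed /var, so the real limit would be free space on the boot device itself. Something along these lines should confirm it (the temp directory name is taken from the output above; freenas-boot is the default boot pool name):
=====
# what is actually mounted at the temp path the updater created?
mount | grep tmp5lTcmF

# overall capacity of the boot pool
zpool list freenas-boot

# per-dataset breakdown, including space held by snapshots of old boot environments
zfs list -o space -r freenas-boot
=====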


Before I go about trying to jockey around files, symlinks, etc to free up space and sort this out, has anyone else noticed this behavior?
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778

George51

Contributor
Joined
Feb 4, 2014
Messages
126
My updates are also failing, though no reason was given. On inspection, my two 8 GB USB boot drives are 96% full, and now the system won't let me delete any of the many old boot environments I have. Where do I go to resolve this?
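For anyone else in this spot, it may help to look at the boot environments from a shell first, assuming `beadm` is available on 9.3 (I believe it ships with it), so you can see which snapshots are holding the space:
=====
# list boot environments with their active flags and reported space
beadm list

# the snapshots behind them on the boot pool, with per-snapshot usage
zfs list -t snapshot -r freenas-boot
=====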
 

George51

Contributor
Joined
Feb 4, 2014
Messages
126
Sure. When I press delete, it hangs with a greyed-out "please wait...", or occasionally it says "Failed to delete boot environment". I had trouble with the most recent update, reverted to the previous boot environment (which all worked fine), then tried a manual update to the latest; when that also failed was when I clocked I was at 96% full. I assume that was also the cause of the initial issue.
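When the GUI hangs like that, trying the same removal from a shell at least gives an error message back. A rough sketch, assuming `beadm` is present and using a placeholder environment name (never destroy the active one, marked N/R in `beadm list`):
=====
# "old-be-name" is a placeholder; -F skips the confirmation prompt
beadm destroy -F old-be-name
=====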
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
OK, I'm no expert, but I think when a ZFS filesystem gets too full it can lead to problems with deleting snapshots, which is what that delete button is trying to do.

The easiest solution is probably to:
  1. Save your configuration (a quick extra copy of the config database is sketched below).
  2. Clean install FreeNAS.
  3. Restore your configuration.
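If you do go that route, beyond the GUI export (System -> General -> Save Config) it probably doesn't hurt to keep a raw copy of the configuration database on a data pool as well. I believe it lives at /data/freenas-v1.db on 9.x, but double-check on your install; the destination path below is just a placeholder:
=====
# copy the config database somewhere that survives a reinstall of the boot device
cp /data/freenas-v1.db /mnt/yourpool/freenas-config-backup.db
=====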
 

George51

Contributor
Joined
Feb 4, 2014
Messages
126
Yeah, I'd heard those rumours before. I managed to delete some after a while of faffing about, but I think it was already hosed at that point, so I gave up. I've just done a fresh install and it's rebooting into the previous configuration; I'll then add the disks back in and hopefully I'm back where I started. Cheers.
 

George51

Contributor
Joined
Feb 4, 2014
Messages
126
Also, as an aside: it would, in my view, be a good idea to have something that deletes old boot environments when you're x% away from a full boot drive... it would certainly have avoided the issues I encountered. Although I only have myself to blame for not keeping an eye on it.
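Until something like that exists, a rough warn-only sketch along these lines could be run from cron; the 80% threshold is arbitrary and it hasn't been tested on a live box, so treat it as an illustration rather than a finished script:
=====
#!/bin/sh
# warn when the boot pool is getting full; threshold is a placeholder
THRESHOLD=80
CAP=$(zpool list -H -o capacity freenas-boot | tr -d '%')
if [ "$CAP" -ge "$THRESHOLD" ]; then
    echo "freenas-boot is at ${CAP}% capacity - consider removing old boot environments:"
    beadm list
fi
=====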
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
"I'll then add the disks back in and hopefully I'm back where I started."
You shouldn't have to "add the disks back in"; just reload your saved configuration and you should be right where you left off.

"it would, in my view, be a good idea to have something that deletes old boot environments when you're x% away from a full boot drive..."
It would be cool to have a setting for that. Why not post something under the Feature Requests topic and see if it gets any traction?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I think it would be better to have a warning message, because some of us want to keep old boot environments and only delete some of the more recent ones ;)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Sure, "a setting for that" could be "warn" or "automatically delete oldest".
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Ah yes, a setting that you can enable/disable would be great ;)
 

Default

Cadet
Joined
Dec 18, 2013
Messages
8
So I ran into the same issue as George51: deleting an old boot environment would just end up with a greyed-out button.
However, closing that dialogue window and refreshing the Boot tab shows the environment was actually removed, with some space freed.
So it seems the issue is with the dialogue window?
 

Default

Cadet
Joined
Dec 18, 2013
Messages
8
Hm, OK: after some of the space was freed, the delete action now works and gives a "success" confirmation.
 