8.3 : Out of space, cannot delete

Status
Not open for further replies.

esxbsdguy

Cadet
Joined
Nov 21, 2012
Messages
4
FreeNAS-8.3.0-RELEASE-x64 (r12701M)

500G ZFS volume that is completely full; an automated backup process failed to delete an older backup before making a new one. Now the volume and its only dataset are both out of space. None was reserved. Files cannot be deleted via the shares (NFS, CIFS), nor with rm on the console.

Is there any way to recover from this without adding more space or destroying and recreating the volume?
 

esxbsdguy

Cadet
Joined
Nov 21, 2012
Messages
4
Was able to dig myself out of this hole with 'cat /dev/null > somebigfile'. I still think I should file a report, though, unless this behavior is expected? Perhaps there should be a minimum reserved space on every volume/dataset so this can't happen.
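For anyone who lands in the same hole, the recovery looked roughly like this. The file name is made up; substitute whatever stale file you can sacrifice:

  # Empty the stale backup in place; this freed space where rm failed.
  cat /dev/null > /mnt/tank/backups/old-backup.tar
  # FreeBSD's truncate(1) is equivalent:
  #   truncate -s 0 /mnt/tank/backups/old-backup.tar
  # With some space freed, normal deletes work again:
  rm /mnt/tank/backups/old-backup.tar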
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I had seen something similar on a test pool a long time ago.

UFS maintains reserved space on a filesystem in order to reduce fragmentation effects, and it does this by hiding some of the capacity from non-root users. However, even on a completely allocated filesystem, files can be removed, because removal doesn't involve allocating additional space.
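For reference, that UFS reserve (minfree, 8% by default) is visible and tunable with tunefs; the device name here is only an example:

  # Show the current tunables, including the minfree percentage:
  tunefs -p /dev/ada0p2
  # Adjust the reserve, e.g. back to the 8% default (the filesystem
  # must be unmounted or mounted read-only):
  tunefs -m 8 /dev/ada0p2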

ZFS doesn't maintain reserved space in that manner. ZFS is a copy-on-write filesystem, which means that, in general, writes go to newly allocated blocks. Since the metadata involved (the directory itself, I'm guessing) has to be updated, some free blocks are needed for the updated metadata before ZFS can free the file's blocks (and the old metadata). Danger of COW. Snapshots would seem to complicate this further; as a matter of fact, I can imagine snapshots being a bit of a pain to unwind in a disk-full situation.
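If snapshots are in play, it's worth seeing how much space they're pinning before anything else; 'tank' and the snapshot name below are placeholders:

  # List snapshots and the space each one uniquely holds:
  zfs list -t snapshot -o name,used,referenced -r tank
  # Destroying an old snapshot returns its unique blocks to the pool,
  # though on a totally full pool even that commit may need a few
  # free blocks:
  zfs destroy tank/backups@2012-11-01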

It's kind of interesting that you were able to truncate a big file to fix it; that would seem to imply either that there was a tiny bit of space free, and that was all the COW needed, or that there's a non-COW path through the code that truncation can take.
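It would be interesting to see exactly what the pool reported at the time; ZFS can show a sliver of free space at the pool level even when a dataset reports 0 avail ('tank' again being a placeholder):

  # Pool-level accounting:
  zpool list -o name,size,alloc,free,cap tank
  # Dataset-level accounting:
  zfs list -o name,used,avail,refer tank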

I wonder how much space something like this actually needs. Coming from the old days of UFS on disks of a few megabytes, I'd like to think that metadata updates couldn't require more than a megabyte of space even on ZFS, but I have to keep reminding myself that ZFS's motto is "Think Big, Eat Big." And how would you implement a safeguard? Creating an "emergency" file won't work unless you can be sure it doesn't get snapshotted inadvertently. Maybe creating a volume or an unmounted filesystem? "zfs create -o reservation=1M -o mountpoint=none pool/.reserved-space" does seem to reduce the available space on a pool, but I haven't tried filling a pool with that in place.
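If that approach holds up under a real disk-full, the emergency procedure would be a couple of one-liners ('pool' is a placeholder for the actual pool name):

  # Set aside space ahead of time:
  zfs create -o reservation=1M -o mountpoint=none pool/.reserved-space
  # In a disk-full emergency, hand the reserve back to the pool:
  zfs set reservation=none pool/.reserved-space
  # ...delete the offending files, then re-arm the reserve:
  zfs set reservation=1M pool/.reserved-space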

I would also be interested in hearing William's reasoning. FreeNAS has no problem allocating 2GB per disk for swap that shouldn't strictly be necessary. Having your filesystem fill and leave terabytes of data stranded on a fileserver seems bad, and avoiding that seems worth a little space, since you can't safely use it anyway.
 