No space left on device


aeon

Cadet
Joined
Jan 23, 2014
Messages
9
As I said before, the total volume size was ~240 GB (gigabytes).
I created a 500 MB (megabytes) quota dataset (for testing) and the rest of the volume was empty.

Probably I'm doing something wrong.
Please tell me how to set the quotas for a new dataset.
There are 4 quota fields; in my case I put 500M in the first one and 0 (unlimited) in the other 3.
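If it helps, this is roughly what I think the GUI does, written as CLI commands (the pool/dataset names are placeholders, and I assume the four GUI fields map to the refquota, quota, refreservation and reservation properties):
Code:
# Minimal sketch; "tank/testds" is a placeholder name
zfs create tank/testds
zfs set refquota=500M tank/testds        # first field: quota for this dataset only
zfs set quota=none tank/testds           # quota including children, left unlimited
zfs set refreservation=none tank/testds  # reserved space for this dataset
zfs set reservation=none tank/testds     # reserved space including children
zfs get refquota,quota,refreservation,reservation tank/testds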
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
As I said before, the total volume size was ~240 GB (gigabytes).
I created a 500 MB (megabytes) quota dataset (for testing) and the rest of the volume was empty.

Probably I'm doing something wrong.
Please tell me how to set the quotas for a new dataset.
There are 4 quota fields; in my case I put 500M in the first one and 0 (unlimited) in the other 3.
Can you please provide exact step-by-step instructions on how to reproduce the problem on a new pool/dataset? I'm not able to reproduce it:
Code:
[root@freenas] /# zfs create test/test
[root@freenas] /# zfs set refquota=50M test/test
[root@freenas] /# zfs get space test/test
NAME       PROPERTY              VALUE      SOURCE
test/test  name                  test/test  -
test/test  available             50.0M      -
test/test  used                  31K        -
test/test  usedbysnapshots       0          -
test/test  usedbydataset         31K        -
test/test  usedbyrefreservation  0          -
test/test  usedbychildren        0          -
[root@freenas] /# dd if=/dev/zero of=/mnt/test/test/zeros bs=1m
dd: /mnt/test/test/zeros: Disc quota exceeded
51+0 records in
50+0 records out
52428800 bytes transferred in 1.907752 secs (27481978 bytes/sec)
[root@freenas] /# zfs get space test/test
NAME       PROPERTY              VALUE      SOURCE
test/test  name                  test/test  -
test/test  available             0          -
test/test  used                  50.0M      -
test/test  usedbysnapshots       0          -
test/test  usedbydataset         50.0M      -
test/test  usedbyrefreservation  0          -
test/test  usedbychildren        0          -
[root@freenas] /# zfs snapshot test/test@snapshot
[root@freenas] /# mv /mnt/test/test/zeros /mnt/test/test/rename
mv: rename /mnt/test/test/zeros to /mnt/test/test/rename: Disc quota exceeded
[root@freenas] /# chmod a+w /mnt/test/test/zeros
chmod: /mnt/test/test/zeros: Disc quota exceeded
[root@freenas] /# rm /mnt/test/test/zeros
[root@freenas] /# zfs get space test/test
NAME       PROPERTY              VALUE      SOURCE
test/test  name                  test/test  -
test/test  available             50.0M      -
test/test  used                  50.1M      -
test/test  usedbysnapshots       50.0M      -
test/test  usedbydataset         31K        -
test/test  usedbyrefreservation  0          -
test/test  usedbychildren        0          -

I also did the same via the GUI (create the dataset, take the snapshot, ...) and CIFS (try to delete the file) and it works. refquota is the quota you can set via the GUI (the field without "children" in its name). As you can see, I completely filled the dataset (available = 0) and then created a snapshot. The result is that rename doesn't work and chmod doesn't work, but remove works. The last zfs get space shows that the snapshot still uses 50 MB, but since we used refquota that is not a problem (the snapshot is "outside" the dataset) and the dataset again has 50 MB available.
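If it helps to see the difference between quota and refquota when snapshots are involved, here is a rough sketch (pool/dataset names are placeholders; it assumes compression is off so /dev/zero actually fills the dataset):
Code:
# refquota: snapshot space does not count against the limit,
# so a full dataset can still be cleaned up by deleting files
zfs create -o refquota=50M tank/demo1
dd if=/dev/zero of=/mnt/tank/demo1/zeros bs=1m   # stops with "Disc quota exceeded"
zfs snapshot tank/demo1@snap
rm /mnt/tank/demo1/zeros                         # works; the data now lives only in the snapshot

# quota: snapshots do count against the limit, so the same delete
# may fail because there is no room left to record the change
zfs create -o quota=50M tank/demo2
dd if=/dev/zero of=/mnt/tank/demo2/zeros bs=1m
zfs snapshot tank/demo2@snap
rm /mnt/tank/demo2/zeros                         # may fail with "Disc quota exceeded"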
 

aeon

Cadet
Joined
Jan 23, 2014
Messages
9
hmm
Steps (test machine with one 250 GB HDD and v9.2.0 on a USB drive):
- I created one ZFS volume using the whole HDD capacity
- I created 3 datasets: one with a 20 GB quota (empty), one with a 10 GB quota (empty, or maybe with 200-300 MB occupied) and one with a 500 MB quota (500M in the first quota field in Advanced mode, 0 (unlimited) in the other 3) to experiment with snapshots
- I shared the 500 MB dataset via CIFS
- I set up 5-minute snapshots, and over 20-30 minutes I copied and deleted 300-400 MB of pictures, docs, etc. in order to have some points to test snapshots with (I noticed slower copy speeds while snapshots were being taken, but for no more than 30-40 sec)
- at one point, when the 500 MB dataset held ~200 MB of data, I rolled back to a previous snapshot and found the old data on the share, ~400 MB (very good)
- I deleted something (~100 MB) and then tried to fill all the space with a folder containing many small files (I took a folder from c:\Windows) - at some point I of course got a "not enough space" message
- and here my problem starts: I try to delete a folder, the deletion starts and seems to do something, but at some point I get the error "directory not empty" and, surprise, the folder is still there with all its files inside :)
- I tried to delete files inside various folders; again the delete seems to succeed, but the file reappears when the folder is refreshed

Meanwhile, I had a new idea to work around this: create a dataset inside a dataset, with the inner one's quota smaller than the parent's quota.
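Something like this is what I have in mind (names and sizes are placeholders):
Code:
zfs create tank/parent
zfs set refquota=20G tank/parent          # limit on the parent dataset
zfs create tank/parent/child
zfs set refquota=500M tank/parent/child   # smaller limit on the nested dataset
zfs get refquota tank/parent tank/parent/child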
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
Sorry, but this is not a reproducible scenario. This is a vague recollection of things you did. You need to find a (minimal) scenario anybody can follow to consistently reproduce the issue.
 

aeon

Cadet
Joined
Jan 23, 2014
Messages
9
:( I did list all the steps.
Because the test machine is not at my current location, I will reproduce it again on Monday.
It's very simple: if a dataset is filled 100%, you can't delete any file from the share.

One more piece of info: the datasets are compressed (the recommended level) and I put a lot of very small files there.
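If it matters, the compression setting and ratio can be checked like this (the dataset name is a placeholder):
Code:
zfs get compression,compressratio,refquota,used,available tank/testds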
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
It's very simple: if a dataset is filled 100%, you can't delete any file from the share.
It is not that simple. You can try the scenario I posted above (I included all the steps/commands) -- the dataset is 100% full and it is possible to delete files.
One more piece of info: the datasets are compressed (the recommended level) and I put a lot of very small files there.
Aha, so your steps were incomplete -- you forgot to mention compression. Is there something else you forgot?
Also, try to use the command line to reproduce the problem. Using CIFS just adds another layer of complexity. For example, you copy a big directory until the dataset is full; however, when I try to do that in Windows 7 it immediately complains that there isn't enough space on the destination and won't even start the copy.
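As a starting point, something like this done entirely from the shell should do (the dataset path is a placeholder; /dev/urandom is used so compression can't reclaim the written space):
Code:
dd if=/dev/urandom of=/mnt/tank/testds/fill bs=1m   # runs until the quota is hit
zfs get space tank/testds                           # confirm available is 0
rm /mnt/tank/testds/fill                            # does the delete still work?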
 

aeon

Cadet
Joined
Jan 23, 2014
Messages
9
I need CIFS; if data can't be deleted from this share, it is a big problem.
Using "echo >" in the shell, the files can then be deleted, but that is not a solution.
 

david kennedy

Explorer
Joined
Dec 19, 2013
Messages
98
I need CIFS; if data can't be deleted from this share, it is a big problem.
Using "echo >" in the shell, the files can then be deleted, but that is not a solution.


If you don't let the shares become 100% full, you won't have any issues.
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
I need CIFS; if data can't be deleted from this share, it is a big problem.
Using "echo >" in the shell, the files can then be deleted, but that is not a solution.
What I meant was that, when looking for a reproducible scenario, you should try to do it without CIFS. I never said you should stop using CIFS. I said you should try to reproduce the problem on the command line -- it's much easier for somebody to repeat the steps and suggest a solution.
 

macxs

Dabbler
Joined
Nov 7, 2013
Messages
21
Hi,

I have the same problem. I upgraded FreeNAS from 8.x (I guess 8.3) to 9.2.1.8 and also upgraded the pool version.
I have one pool with 4 datasets. None of them is filled more than 80%. The pool has had compression enabled since the upgrade, and the datasets inherit this setting. 3 datasets are shared with CIFS, one with NFS.
I tested backup software this weekend and saved the backups to one CIFS share residing on a dataset. I didn't expect the dataset to fill up, but it did. The dataset reported 0 bytes of free space left and I wasn't able to delete files. Well, Windows let me delete files, but they reappeared instantly. When trying to delete files from the command line I got: "Disc quota exceeded". Note that the pool itself still had enough space left (59% filled) - only this one dataset reached its quota.
After overwriting a file with a null byte I was able to delete files.

I could partially reproduce this behaviour.
As you stated above, Windows itself does not let you copy files until the volume is full, because it compares the free space with the size of the file(s) to copy. So I tried to completely fill the dataset another way:
dd if=/dev/urandom of=/mnt/dataset/testfile bs=4K
I observed very slow performance for the last few MB. When dd stops, the dataset is full. At that point I cannot delete any files via *CIFS*. But now, unlike the earlier behaviour, I could delete them from the command line.

I *guess* that either another action (a scrub, for example) or simply waiting some time causes the later inability to delete from the command line as well. I couldn't test this yet.
The dataset has snapshots, but none taken since the quota filled up.
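In case it helps, this is roughly how I check whether snapshots are holding the space (the dataset name is a placeholder):
Code:
zfs get space tank/dataset             # compare usedbysnapshots with usedbydataset
zfs list -t snapshot -r tank/dataset   # list snapshots of this dataset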


Bye Marco
 

macxs

Dabbler
Joined
Nov 7, 2013
Messages
21
Today I could observe this again.
The backup software (Veeam) wrote its files to the CIFS share until it filled up, at about 1 am. Now I have the same situation: I cannot delete files, even from the command line ("Disc quota exceeded").

Bye Marco
 