[SOLVED] /var/log full on upgrade to 9.3


Free as in Nas

Dabbler
Joined
May 11, 2012
Messages
42
After upgrading to 9.3 I noticed this error via email:

Code:
mv: rename /var/log/mount.today to /var/log/mount.yesterday: No space left on device
mv: /var/log/mount.today: No space left on device



Looking into this, it appears to be an issue other people have run into:
It sounds like you need to avoid letting your datasets get close to the size of your total zpool, and that 9.3 has a particular issue where it shows availability as zero even when you do have space available.

[screenshot: Volumes view showing the datasets under "main" with 0 available]

(note: this is a snippet, I have a few more datasets, but I left them off for clarity)

Even though I have excess space in my "main" zpool, my dataset is still listed as having 0 available.

The solutions suggested in these threads don't seem to fit my situation, as I should have enough free space. I even tried clearing 10GiB off the quota on one of the datasets. Is there any way I can save my FreeNAS install at this point and get things back into working order?
 

Free as in Nas

Dabbler
Joined
May 11, 2012
Messages
42
Spent some more time on this and I think I am confused about the core issue here.

Do I need to set an explicit quota and reserved space for my root dataset? I tried reducing the quota/reserved space on some of the child datasets and I still have 0 available in the parent dataset "main".

For reference,

zpool list:
Code:
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  3.72G   524M  3.21G         -      -    13%  1.00x  ONLINE  -
main          5.44T  3.79T  1.65T         -    13%    69%  1.00x  ONLINE  /mnt


zfs list:
Code:
NAME                                                   USED  AVAIL  REFER  MOUNTPOINT
freenas-boot                                           524M  3.09G    31K  none
freenas-boot/ROOT                                      516M  3.09G    25K  none
freenas-boot/ROOT/Initial-Install                        1K  3.09G   510M  legacy
freenas-boot/ROOT/default                              516M  3.09G   511M  legacy
freenas-boot/grub                                     6.79M  3.09G  6.79M  legacy
main                                                  3.52T      0   448K  /mnt/main
main/.system                                          19.5M      0   234K  legacy
main/.system/cores                                    11.3M      0  11.3M  legacy
main/.system/rrd-a05e46669fb748f084c58019310b076c      192K      0   192K  legacy
main/.system/samba4                                   4.94M      0  4.94M  legacy
main/.system/syslog-a05e46669fb748f084c58019310b076c  2.81M      0  2.81M  legacy
main/jail                                                4G  3.37G   642M  /mnt/main/jail
main/jails                                            4.05G      0   442K  /mnt/main/jails
main/jails/.warden-template-pluginjail-9.2-x64        1.29G      0  1.29G  none
main/jails/minidlna_1                                  598M      0  1.86G  /mnt/main/jails/minidlna_1
main/jails/plexmediaserver_1                          2.18G      0  3.46G  /mnt/main/jails/plexmediaserver_1
main/jamie                                            1.12T   297G   853G  /mnt/main/jamie
main/laura                                              10G  9.98G  21.3M  /mnt/main/laura
main/plugins                                             2G  1.55G   457M  /mnt/main/plugins
main/timemachine                                       772G   334G   438G  /mnt/main/timemachine
main/tom                                               260G  18.1G   242G  /mnt/main/tom
main/win-other                                         700G   273G   427G  /mnt/main/win-other
main/windows-backup                                    700G  81.1G   619G  /mnt/main/windows-backup


df -h:
Code:
Filesystem                                              Size    Used   Avail Capacity  Mounted on
freenas-boot/ROOT/default                               3.6G    510M    3.1G    14%    /
devfs                                                   1.0k    1.0k      0B   100%    /dev
tmpfs                                                    32M    5.2M     26M    16%    /etc
tmpfs                                                   4.0M    8.0k      4M     0%    /mnt
tmpfs                                                   2.6G     49M    2.6G     2%    /var
freenas-boot/grub                                       3.1G    6.8M    3.1G     0%    /boot/grub
main                                                    447k    447k      0B   100%    /mnt/main
main/jail                                               4.0G    642M    3.4G    16%    /mnt/main/jail
main/jails                                              442k    442k      0B   100%    /mnt/main/jails
main/jails/minidlna_1                                   1.9G    1.9G      0B   100%    /mnt/main/jails/minidlna_1
main/jails/plexmediaserver_1                            3.5G    3.5G      0B   100%    /mnt/main/jails/plexmediaserver_1
main/jamie                                              1.1T    852G    297G    74%    /mnt/main/jamie
main/laura                                               10G     21M     10G     0%    /mnt/main/laura
main/plugins                                            2.0G    456M    1.6G    22%    /mnt/main/plugins
main/timemachine                                        772G    437G    334G    57%    /mnt/main/timemachine
main/tom                                                260G    241G     18G    93%    /mnt/main/tom
main/win-other                                          700G    427G    272G    61%    /mnt/main/win-other
main/windows-backup                                     700G    618G     81G    88%    /mnt/main/windows-backup
main/.system                                            234k    234k      0B   100%    /var/db/system
main/.system/cores                                       11M     11M      0B   100%    /var/db/system/cores
main/.system/samba4                                       5M      5M      0B   100%    /var/db/system/samba4
main/.system/syslog-a05e46669fb748f084c58019310b076c    2.8M    2.8M      0B   100%    /var/db/system/syslog-a05e46669fb748f084c58019310b076c
main/.system/rrd-a05e46669fb748f084c58019310b076c       191k    191k      0B   100%    /var/db/system/rrd-a05e46669fb748f084c58019310b076c
devfs                                                   1.0k    1.0k      0B   100%    /mnt/main/jails/minidlna_1/dev
procfs                                                  4.0k    4.0k      0B   100%    /mnt/main/jails/minidlna_1/proc
devfs                                                   1.0k    1.0k      0B   100%    /mnt/main/jails/plexmediaserver_1/dev
procfs                                                  4.0k    4.0k      0B   100%    /mnt/main/jails/plexmediaserver_1/proc
/mnt/main/jamie/Videos                                  1.1T    852G    297G    74%    /mnt/main/jails/plexmediaserver_1/mnt/videos



EDIT: Excuse the formatting, forum is eating the spacing...
EDIT2: Put outputs in code blocks to preserve spacing
 

DaveF81

Explorer
Joined
Jan 28, 2014
Messages
56
zpool list:
Code:
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  3.72G   524M  3.21G         -      -    13%  1.00x  ONLINE  -
main          5.44T  3.79T  1.65T         -    13%    69%  1.00x  ONLINE  /mnt
You're using a 4GB boot drive. Suggest you back-up your FreeNAS config and upgrade the boot drive to a minimum of 8GB as per the documentation.
 

Free as in Nas

Dabbler
Joined
May 11, 2012
Messages
42
You're using a 4GB boot drive. Suggest you back-up your FreeNAS config and upgrade the boot drive to a minimum of 8GB as per the documentation.

I should probably do that eventually but right now it's saying I have plenty of room in the boot fs.

From the docs you linked:
the bare minimum size is 4GB. This provides room for the operating system and two boot environments. Since each update creates a boot environment, the recommended minimum is at least 8GB or 16GB as this provides room for more boot environments.

4GB is still the minimum, and in my case I only have two boot environments listed.

/var/log is being mounted on the 'main' zpool, correct?
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
I agree the problem is with the main pool rather than the boot drive. What is the total quota of the datasets you have given a quota to, plus the total content of the datasets you haven't given a quota to? That equals the amount "used". From the result of zfs list that appears to be about 3.4TiB plus a bit. Since the raw space in your main pool is 3.8TiB, depending on the space devoted to parity or mirrors, the original total available space may actually be much less than the 3.5TiB that you have "used". I put used in inverted commas because obviously the quota is likely to be bigger than the amount of useful data you have. Until the quota is reduced well below the actual space for data you have on your pool, ZFS is going to regard it as full.

Unfortunately, being a relative newcomer to ZFS I can't remember the zfs incantation to find the available space on a pool after allowing for parity, if there is one. But I could make a guess from the RAIDZ level you are using (I assume you are not using mirrors) and the number of drives. "zpool status" (in code tags so it is readable, see the little widgets in the forum) would give us most of the information.


Bottom line: prune at least 1.5TiB off your quotas (while not making them smaller than the actual data in each dataset), and then we can see what is happening. And probably things will work. Alternatively, and safer, just remove quotas from all datasets temporarily.
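
If it helps, something along these lines should do it; this is only a sketch (main/jamie is just an example name taken from your zfs list output), but it shows all the quotas and reservations in one place and lets you back a reservation off temporarily:

Code:
# List every quota and reservation on the pool in one shot.
zfs get -r -t filesystem quota,refquota,reservation,refreservation main

# Temporarily drop a reservation (repeat per dataset as needed).
# This only releases committed-but-unused space; no data is touched.
zfs set reservation=none main/jamie
zfs set refreservation=none main/jamie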
 

DaveF81

Explorer
Joined
Jan 28, 2014
Messages
56
Looking again it appears the system dataset has a hard limit.

Code:
main/.system                                            234k    234k      0B   100%    /var/db/system
main/.system/cores                                       11M     11M      0B   100%    /var/db/system/cores
main/.system/samba4                                       5M      5M      0B   100%    /var/db/system/samba4
main/.system/syslog-a05e46669fb748f084c58019310b076c    2.8M    2.8M      0B   100%    /var/db/system/syslog-a05e46669fb748f084c58019310b076c
main/.system/rrd-a05e46669fb748f084c58019310b076c       191k    191k      0B   100%    /var/db/system/rrd-a05e46669fb748f084c58019310b076c


Can you run the command zfs get quota main/.system and paste the output?
 

Free as in Nas

Dabbler
Joined
May 11, 2012
Messages
42
Hey guys, I really appreciate the help. I apologise for the poor formatting on my second post, I've gone back and reformatted with code blocks so it's legible now.

@DaveF81

Can you run the command zfs get quota main/.system and paste the output?

zfs get quota main/.system gives me:

Code:
NAME          PROPERTY  VALUE  SOURCE
main/.system  quota     none   default


@rogerh

Until the quota is reduced well below the actual space for data you have on your pool, ZFS is going to regard it as full. Unfortunately, being a relative newcomer to ZFS I can't remember the zfs incantation to find the available space on a pool after allowing for parity, if there is one.

Yeah, I wish I knew the command as well. I swear FreeNAS used to display the total space as the total minus the parity; then again, none of these issues manifested before 9.3, so they must have changed something that suddenly left it without enough space. (The first post I linked above suggested the pool reservation was bumped to 3% for 9.3; my actual data should still be under 97% of 3.8TiB, which is ~3.686TiB.)
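
One thing that might help pin it down (standard ZFS as far as I know, so it should be available on 9.3 too) is the space breakdown view of zfs list, which splits each dataset's USED into data, snapshots, refreservation hold-back and children:

Code:
# USEDDS = live data, USEDSNAP = snapshots,
# USEDREFRESERV = space held back by a refreservation, USEDCHILD = children.
zfs list -r -o space main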


But I could make a guess from the RAIDZ level you are using (I assume you are not using mirrors) and the number of drives. "zpool status" (in code tags so it is readable, see the little widgets in the forum) would give us most of the information.

For disks I am using three 2TiB drives:
[screenshot: View Disks page listing the three drives]


zpool status gives me:
Code:
 pool: freenas-boot
state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sat May 30 03:46:05 2015
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
     da0p2     ONLINE       0     0     0

errors: No known data errors

  pool: main
state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 7h31m with 0 errors on Sun Apr 26 07:31:36 2015
config:

    NAME                                            STATE     READ WRITE CKSUM
    main                                            ONLINE       0     0     0
     raidz1-0                                      ONLINE       0     0     0
       gptid/609896c9-a29e-11e1-a34b-e4115b12b8eb  ONLINE       0     0     0
       gptid/61606576-a29e-11e1-a34b-e4115b12b8eb  ONLINE       0     0     0
       gptid/d50c2ed4-8bac-11e4-9241-e4115b12b8eb  ONLINE       0     0     0

errors: No known data errors
 

Free as in Nas

Dabbler
Joined
May 11, 2012
Messages
42
Ah, I got it to work. I kept bumping down the quotas and reserved space on my child datasets (in the end it just needed another 10GiB off the "tom" dataset) until I could write to /var/log again; it just took a while.

So for people hitting similar issues, the

SOLUTION

is to lower quotas and reservations on child datasets until there is room on the parent dataset again.
In my case this meant freeing up about 30GiB out of my 6TiB of total space (6TiB including parity).

I feel like I should keep a separate reserved quota for the parent dataset, but I am not sure how to do that; it seems like any combination of quota and reserved space would need to take the children's space into account as well.
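
For anyone who prefers the command line, the GUI changes above amount to roughly the following. The values are illustrative rather than the exact figures I used, and the reservation on main/.system is just an idea I'm considering, not something suggested in this thread:

Code:
# Shrink a child's quota and drop its reservation so the parent regains
# free space (keep the quota above the data actually in the dataset).
zfs set quota=250G main/tom
zfs set reservation=none main/tom

# Possible extra safeguard: reserve a little space for the system dataset
# so syslog always has room to write.
zfs set reservation=2G main/.system

# Confirm the parent has free space again.
zfs list -o name,used,avail,quota,reservation main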
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
The space available for data on your pool is somewhere around 4TiB. I am not sure what it actually is, the Volumes tab appears to suggest it is 3.5TiB which seems rather low, but maybe that doesn't include snapshots. I think the change in behaviour on upgrading to 9.3 is because the available space is worked out more conservatively in 9.3, not necessarily that you have less space actually available in reality. So you have to make sure that data in datasets without a quota, including snapshots space, plus all your quotas is less than this. AIUI snapshot used space does not show up in df or in zfs list. Maybe someone more knowledgeable could suggest how big the system dataset plus jails could get, and make sure this plus all your quotas is less than the total available space.

Alternatively, do you need quotas? I do the opposite and set the maximum space each dataset can use. Do any of your users actually need reserved space? You will not be able to provide the reserved space if the pool runs out of space anyway!

As to children's space, you can set whether the children's quota is included or additional in the GUI.

Edit: you can of course use zfs list -t snapshot to find out how much space snapshots are using, which is otherwise invisible.
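
For example, something like this (standard zfs list options) lists them largest first:

Code:
# Snapshots sorted by unique space consumed, largest first. USED is only the
# space unique to each snapshot, so overlapping snapshots can hide more
# reclaimable space than the column suggests.
zfs list -t snapshot -o name,used,referenced -S used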
 

Free as in Nas

Dabbler
Joined
May 11, 2012
Messages
42
Alternatively, do you need quotas? I do the opposite and set the maximum space each dataset can use. Do any of your users actually need reserved space? You will not be able to provide the reserved space if the pool runs out of space anyway!

Setting the maximum space through a quota and then reserving that space seemed like the logical thing to do at the time. Then again, this is my first experience with ZFS and I am not a sysadmin; this is just a personal NAS that I give some housemates space on.

As to children's space, you can set whether the children's quota is included or additional in the GUI.

Yeah, that's a little confusing to me. According to the official ZFS docs:

  • The quota and reservation properties are convenient for managing disk space consumed by datasets and their descendents.

  • The refquota and refreservation properties are appropriate for managing disk space consumed by datasets.

  • Setting the refquota or refreservation property higher than the quota or reservation property has no effect. If you set the quota or refquota property, operations that try to exceed either value fail. It is possible to exceed a quota that is greater than the refquota. For example, if some snapshot blocks are modified, you might actually exceed the quota before you exceed the refquota.
I assume refquota and refreservation are the non-children options in the GUI. It sounds like you can't usefully set them higher than the space you allocate for the dataset and its children.
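
My reading of the difference, sketched as commands (the 10G figure is just an example, not what I actually set):

Code:
# quota/reservation count the dataset PLUS its snapshots and children;
# refquota/refreservation count only the data the dataset itself references.
zfs set quota=10G main/laura      # laura + her snapshots + children limited to 10G
zfs set refquota=10G main/laura   # only laura's live data limited to 10G; snapshots excluded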
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
Setting the maximum space through a quota and then reserving that space seemed like the logical thing to do at the time. Then again, this is my first experience with ZFS and I am not a sysadmin; this is just a personal NAS that I give some housemates space on.

Well, I'm a complete amateur too! But if I were you I'd only reserve space for something important, like myself! Shortage of space is otherwise best dealt with by keeping an eye on the pool and not letting it get fuller than 80%, beyond which ZFS efficiency suffers. If your service is to continue at that point you need to shed users or add storage. Because reserved quotas are occupying most of your space you cannot do this as easily through the GUI, as the real percentage used is concealed by the reserved space; that in itself is a source of risk, as a completely full pool can cause data loss.

By the way, as a loyal forum user, I should point out that we are supposed to mention our hardware in some detail as well as FreeNAS version when asking questions, and the fact you didn't may be one reason you have had no comments from the forum experts. Just saying.
 

Free as in Nas

Dabbler
Joined
May 11, 2012
Messages
42
By the way, as a loyal forum user, I should point out that we are supposed to mention our hardware in some detail as well as FreeNAS version when asking questions, and the fact you didn't may be one reason you have had no comments from the forum experts. Just saying.

Thanks for the tip. I added my details to my signature (and stole your spoiler tag title).

According to this site, http://www.servethehome.com/raid-calculator/, my usable space should be 3.6 TB / 3725.3 GB. So 80% of that would be 2.88 TB, which is a pretty brutal reduction in the amount of space I have available. Where did you get 80% from?
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
Where did you get 80% from?

It's mentioned on this forum almost daily; see section 1.4 of the User Guide, "ZFS Primer". The professionals will gleefully point out that if you're worried about the cost of space you're not sufficiently worried about data security. BTW, if ZFS is really (presumably as opposed to notionally) full you will get an alert at 80% and a more insistent one at 90%.

Edit: By the way, my confusion over estimating your pool capacity was because I glanced at your zpool status result and thought I saw four 2TB disks: with your three the capacity of about 3.5TiB makes perfect sense.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The space available for data on your pool is somewhere around 4TiB. I am not sure what it actually is, the Volumes tab appears to suggest it is 3.5TiB which seems rather low, but maybe that doesn't include snapshots. I think the change in behaviour on upgrading to 9.3 is because the available space is worked out more conservatively in 9.3, not necessarily that you have less space actually available in reality. So you have to make sure that data in datasets without a quota, including snapshots space, plus all your quotas is less than this. AIUI snapshot used space does not show up in df or in zfs list. Maybe someone more knowledgeable could suggest how big the system dataset plus jails could get, and make sure this plus all your quotas is less than the total available space.

Well, since jails can be of any size from a couple hundred MB to TB, there is no way to have someone figure out if things make sense. The server's admin should be able to figure that out.

Alternatively, do you need quotas? I do the opposite and set the maximum space each dataset can use. Do any of your users actually need reserved space? You will not be able to provide the reserved space if the pool runs out of space anyway!

Wha!? The purpose of reservations is to ensure that a certain amount of space is available for the dataset, no matter what.
 

rogerh

Guru
Joined
Apr 18, 2014
Messages
1,111
Well, since jails can be of any size from a couple hundred MB to TB, there is no way to have someone figure out if things make sense. The server's admin should be able to figure that out.



Wha!? The purpose of reservations is to ensure that a certain amount of space is available for the dataset, no matter what.

That is interesting. So even when logging stopped because all the space on the pool was reserved, his users could have carried on using the reserved space on their datasets with no problem? In that case I was wrong, but it still does not seem like a good idea to do this.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That is interesting. So even when logging stopped because all the space on the pool was reserved, his users could have carried on using the reserved space on their datasets with no problem? In that case I was wrong, but it still does not seem like a good idea to do this.

You are correct, and it isn't a good idea to do this. ;)
 

Free as in Nas

Dabbler
Joined
May 11, 2012
Messages
42
You are correct, and it isn't a good idea to do this. ;)

Yeah, that is essentially what happened to me. I ended up blowing away all my reservations and using quotas exclusively.

At 90% capacity, ZFS switches from performance- to space-based optimization, which has massive performance implications. For maximum write performance and to prevent problems with drive replacement, add more capacity before a pool reaches 80%. If you are using iSCSI, it is recommended to not let the pool go over 50% capacity to prevent fragmentation issues.

Is that 80% of non-parity space, or 80% of space including parity?

It's a little confusing because zpool list gives 5.44TiB for my total size, but that includes parity.
Code:
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot  3.72G   524M  3.21G         -      -    13%  1.00x  ONLINE  -
main          5.44T  3.79T  1.65T         -    13%    69%  1.00x  ONLINE  /mnt
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
First, it's 95%. And second, it's based on user-space free (zfs list).
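
To put that concretely (using the pool name from this thread):

Code:
# "User-space" free is what zfs list reports (parity already excluded);
# utilisation is roughly USED / (USED + AVAIL) for the top-level dataset.
zfs list -o name,used,avail main

# By contrast, zpool list's SIZE/ALLOC/CAP include parity and are not what
# the capacity warnings key off.
zpool list main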
 