Why is my volume full?

Status
Not open for further replies.

lungfork

Dabbler
Joined
Jan 15, 2013
Messages
16
I have a FreeNAS box with three ~28TB volumes. I should have 11TB free on the mt01 volume shown below, but it is reporting full.
Code:
df -h
Filesystem                                               Size    Used   Avail Capacity  Mounted on
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201506292332        143G    535M    143G     0%    /
devfs                                                    1.0k    1.0k      0B   100%    /dev
tmpfs                                                     32M    5.3M     26M    17%    /etc
tmpfs                                                    4.0M    8.0k      4M     0%    /mnt
tmpfs                                                      8G    244M    7.8G     3%    /var
freenas-boot/grub                                        143G    6.8M    143G     0%    /boot/grub
mt01                                                      11T     11T    392M   100%    /mnt/mt01
mt01/share1                                              284G    284G    392M   100%    /mnt/mt01/share1
mt01/share2                                              1.8G    1.4G    392M    79%    /mnt/mt01/share2
mt01/share3                                               11T     11T    392M   100%    /mnt/mt01/share3
mt01/share4                                              392M    162k    392M     0%    /mnt/mt01/share4
mt01/share5                                                3G    2.6G    392M    87%    /mnt/mt01/share5
mt01/share5/1day                                         7.1G    6.7G    392M    95%    /mnt/mt01/share5/1day
mt01/share5/4day                                         2.6G    2.2G    392M    85%    /mnt/mt01/share5/4day


I deleted a bunch of snapshots because of the full volume, but the used storage didn't change after deleting those snapshots.
Code:
zfs list -t snapshot
NAME                                                                    USED  AVAIL  REFER  MOUNTPOINT
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201506292332@2015-05-18-11:10:23  2.52M      -   506M  -
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201506292332@2015-05-18-14:40:33  2.03M      -   507M  -
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201506292332@2015-08-12-12:14:48   137M      -   511M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201505130355          6.77M      -  6.79M  -
freenas-boot/grub@Pre-Upgrade-FreeNAS-9.3-STABLE-201506292332          6.77M      -  6.79M  -


I have no clue why this volume is still reporting full. Is there a missing step after deleting snapshots to reclaim the space they were using?
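
For what it's worth, this is the kind of breakdown I'd expect the freed snapshot space to show up in (a sketch based on the man pages, not output from this box):
Code:
# Per-dataset breakdown: available, total used, and how much of that is
# held by snapshots vs. the dataset itself vs. child datasets
zfs list -o space -r mt01

# Destroyed snapshots are reclaimed asynchronously; if the pool supports
# async destroy, this shows how much space is still waiting to be freed
zpool get freeing mt01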

Thanks for the help.

Jordan
 
Joined
Oct 2, 2014
Messages
925
A full list of system specs would help us.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
Those snapshots you deleted look to be from your boot device. At least that's what your post is showing.
 
Joined
Oct 2, 2014
Messages
925
As Jailer pointed out, those snapshots were from the boot device, so your mt01 volume really is full. When you say 28TB, do you mean the overall figure shown in the Storage overview of the FreeNAS GUI? You've used 11TB out of 11TB (100% usage), which is why full system specs matter: they tell us how many drives you have and in what RAIDZ level, so we can work out whether you truly have 28TB usable or 28TB before RAIDZ parity.
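
Just as an example of why this matters (made-up numbers): an 8 x 4TB RAIDZ1 vdev is 32TB raw, loses one drive's worth (4TB) to parity, leaving about 28TB, and after the TB-to-TiB conversion the GUI reports something closer to 25TiB usable. So "28TB" can mean very different things depending on whether parity has been counted yet.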
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
The output of zpool list would be helpful too.
 

lungfork

Dabbler
Joined
Jan 15, 2013
Messages
16
Sorry for the confusion; the snapshot list was taken after I destroyed every snapshot on the three volumes (I was trying to show that no snapshots exist for that volume anymore).
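
For reference, something like this would list (and then destroy) every snapshot on a pool from the shell; treat it as a sketch and sanity-check the list before destroying anything:
Code:
# List every snapshot under the mt01 pool
zfs list -H -t snapshot -o name -r mt01

# Destroy them all -- irreversible, so check the list first
zfs list -H -t snapshot -o name -r mt01 | xargs -n 1 zfs destroy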

System specs:
SuperMicro MBD-X8DAH+-F-O Motherboard
Intel Xeon E5620
24 GB RAM
LSI 9211-8i SAS HBA
24x 4TB HGST hard drives split into three RAIDZ1 volumes
2x WD Scorpio Black hard drives (FreeNAS boot device)

Output of 'zpool list'
Code:
$ zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
freenas-boot   149G  1.18G   148G         -      -     0%  1.00x  ONLINE  -
mt01            29T  28.1T   928G         -    56%    96%  1.00x  ONLINE  /mnt
 
Joined
Oct 2, 2014
Messages
925
So you have 8 drives per RAIDZ1, and only one of the pools is full. OK, now we're getting somewhere. That puts your total storage at roughly 72TB, which is way more than what 24GB of RAM should be serving, but that's a topic for another day.

If the pool is 100% full, it's time to delete some files or move some data elsewhere. Can you provide screenshots of the web GUI's Storage section?
 

lungfork

Dabbler
Joined
Jan 15, 2013
Messages
16
So you have 8 drives per RAIDZ1, and only one of the pools is full. OK, now we're getting somewhere. That puts your total storage at roughly 72TB, which is way more than what 24GB of RAM should be serving, but that's a topic for another day.

All of the disks on the system were recently upgraded thanks to the crappy 3TB Seagate disks that were released around 2012/2013. I know I am under the recommended RAM, but an upgrade is planned.

If the pool is 100% full, it's time to delete some files or move some data elsewhere. Can you provide screenshots of the web GUI's Storage section?

OK, so here's my thinking on this issue. You saw the list of datasets I provided earlier. Since the used storage is charged to the root of the volume rather than to any of the child datasets, it must be sitting either directly in the mounted directory (/mnt/mt01) or in a system folder mounted from that volume (/var/db/system...).
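
Before digging through directories, something along these lines should confirm whether the space really is charged to the root dataset itself rather than to snapshots or children (a sketch, not actual output from this system):
Code:
# Split mt01's usage into the dataset itself, its snapshots, and its child datasets
zfs get usedbydataset,usedbysnapshots,usedbychildren mt01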

Here is the list of mounts from the volume:
Code:
# mount | grep mt01
mt01 on /mnt/mt01 (zfs, local, nfsv4acls)
mt01/share1 on /mnt/mt01/share1 (zfs, local, nfsv4acls)
mt01/share2 on /mnt/mt01/share2 (zfs, local, nfsv4acls)
mt01/share3 on /mnt/mt01/share3 (zfs, local, nfsv4acls)
mt01/share4 on /mnt/mt01/share4 (zfs, local, nfsv4acls)
mt01/share5 on /mnt/mt01/share5 (zfs, local, nfsv4acls)
mt01/share5/1day on /mnt/mt01/share5/1day (zfs, local, nfsv4acls)
mt01/share5/4day on /mnt/mt01/share5/4day (zfs, local, nfsv4acls)
mt01/.system on /var/db/system (zfs, local, nfsv4acls)
mt01/.system/cores on /var/db/system/cores (zfs, local, nfsv4acls)
mt01/.system/samba4 on /var/db/system/samba4 (zfs, local, nfsv4acls)
mt01/.system/syslog-cd1fc29ce94d4a81a24df77359252261 on /var/db/system/syslog-cd1fc29ce94d4a81a24df77359252261 (zfs, local, nfsv4acls)
mt01/.system/rrd-cd1fc29ce94d4a81a24df77359252261 on /var/db/system/rrd-cd1fc29ce94d4a81a24df77359252261 (zfs, local, nfsv4acls)
mt01/.system/configs-cd1fc29ce94d4a81a24df77359252261 on /var/db/system/configs-cd1fc29ce94d4a81a24df77359252261 (zfs, local, nfsv4acls)


Here is the du command run from each of these filesystem locations:
Code:
# cd /mnt/mt01 && du -h -d 1 .
284G    ./share1
 11G    ./share5
1.0k    ./share4
1.4G    ./share2
3.5M    ./sas2flash
7.6G    ./test02
 11T    ./mt01
 11T    ./share3
512B    ./jails
 23T    .

# du -h -d 1 /var/db/system
167M    /var/db/system/update
512B    /var/db/system/configs-cd1fc29ce94d4a81a24df77359252261
512B    /var/db/system/rrd-cd1fc29ce94d4a81a24df77359252261
 21M    /var/db/system/samba4
 71M    /var/db/system/syslog-cd1fc29ce94d4a81a24df77359252261
 52M    /var/db/system/cores
323M    /var/db/system


Here's the contents of the /mnt/mt01 folder:
Code:
# ls -alh /mnt/mt01
total 131
drwxr-xr-x  11 root       wheel    11B Sep  4 20:21 ./
drwxr-xr-x   5 root       wheel   160B Aug 31 22:02 ../
drwxrwxr-x+  7 21434      21408    19B Aug 10 12:18 share1/
drwxrwxr-x+  3 root       wheel     4B Jun  2 13:52 share2/
drwxrwxr-x+  8 root       22271     9B Jun  2 13:52 share3/
drwxrwxr-x+  2 21133      wheel     3B Jun  2 13:52 share4/
drwxr-xr-x   2 root       wheel     2B May 22 08:21 jails/
drwxr-xr-x   5 root       wheel     5B May 22 08:21 mt01/
drwxr-xr-x   3 root       wheel     6B Aug 17 13:07 sas2flash/
drwxrwxr-x+ 10 root       20513    16B Jul 27 15:09 share5/
drwx------   7 root       wheel     8B Oct  7  2013 test02/


So without any snapshots in the mt01 volume, where did those 11TB of used storage come from?
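
If nothing obvious turns up in a single pass, drilling one level deeper and sorting usually finds it (a sketch; -x keeps du on this filesystem so the child datasets don't drown out the root, and -k gives numbers that sort cleanly):
Code:
# Two levels deep, kilobyte units, largest entries last
du -x -d 2 -k /mnt/mt01 | sort -n | tail -20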
 

lungfork

Dabbler
Joined
Jan 15, 2013
Messages
16
#^*% me, I found it. Please disregard!

The /mnt/mt01/mt01 folder should not be there. FML.

To let you guys in on the secret: that folder comes from an rsync job off our mirror system, and it copied the entire mt01 volume into /mnt/mt01/mt01 instead of into /mnt/mt01 itself.
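
For anyone who hits the same thing: this is the classic rsync trailing-slash behavior. The host and paths below are made up, but the difference looks like this:
Code:
# Trailing slash on the source: copies the *contents* of mt01 into /mnt/mt01 (what was intended)
rsync -a mirror:/mnt/mt01/ /mnt/mt01/

# No trailing slash: copies the mt01 directory itself, creating /mnt/mt01/mt01 (what happened)
rsync -a mirror:/mnt/mt01 /mnt/mt01/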
 