Volume at 70% Datasets at 99% - No quotas configured

Status
Not open for further replies.

JimmyUk

Dabbler
Joined
Sep 6, 2014
Messages
18
Hi,

I have a RAIDZ2 array with 6x 2TB drives and 16GB of ECC RAM.

NFS mounts on some Linux computers reported that there is not enough space on the device. However, the pool shows 3.2TB free and 7.7TB used, while the datasets are reporting full. I have no quotas set up on the pool or the datasets. I'm a little confused as to why I can't use all the space reported. I've used FreeNAS for a number of projects over five years or more and haven't come across this before.

Anyone have an explanation for this occurrence?

Thanks
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
Are you using snapshots? That's typically the answer. If you have or are using snapshots, you need to delete some of them to free the space.
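For anyone checking this on their own box, one way to see where snapshot space is going is to total the USED column from `zfs list -t snapshot -H -p -o name,used` per parent dataset. A minimal sketch; the sample output below is made up, not from this pool:

```python
# Tally per-dataset snapshot usage from `zfs list -t snapshot -H -p -o name,used`
# output (-H: no headers, -p: exact byte counts). The sample text is hypothetical.
from collections import defaultdict

sample = """\
Volume1/FDB@auto-20160301\t524288000
Volume1/FDB@auto-20160302\t1048576000
Volume1/ISOs@manual\t2097152
"""

def snapshot_usage(zfs_output):
    """Sum snapshot USED bytes per parent dataset."""
    totals = defaultdict(int)
    for line in zfs_output.strip().splitlines():
        name, used = line.split("\t")
        dataset = name.split("@", 1)[0]   # drop the @snapshot suffix
        totals[dataset] += int(used)
    return dict(totals)

for ds, used in sorted(snapshot_usage(sample).items()):
    print(f"{ds}: {used / 2**30:.2f} GiB")
```

Note that snapshot USED only counts blocks unique to that snapshot, so the real reclaimable space can be larger when several snapshots share data.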
 

JimmyUk

Dabbler
Joined
Sep 6, 2014
Messages
18
I am, however the REFER total is minimal. I've removed the snapshots and it hasn't changed the situation. Since my OP I've deleted 10GB worth of data (and all snapshots), and that is showing as available across all the datasets. However, there's still 10% (20% if ignoring the 80% rule) of usage that I can't see being accounted for anywhere.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Can you provide a screenshot of your Storage tab and the output of zfs list (in code tags, please)? I think you are looking at the raw pool size without factoring in the RAIDZ2 overhead.
 

JimmyUk

Dabbler
Joined
Sep 6, 2014
Messages
18
Just looked into the RAIDZ2 overhead... I think you may be onto something there. I've moved some more stuff off the box since my last post to try and free up a few more GB.

Code:
[root@freenasDC] ~# zpool list
NAME           SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
Volume1       10.9T  7.67T  3.20T         -    40%    70%  1.00x  ONLINE  /mnt
freenas-boot  14.9G  5.79G  9.08G         -      -    38%  1.00x  ONLINE  -
[root@freenasDC] ~# zfs list
NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
Volume1                                                   6.99T  23.0G   208K  /mnt/Volume1
Volume1/.system                                           63.8M  23.0G   208K  legacy
Volume1/.system/configs-5a1e22f5286d453ca9fcfb104651ce47  14.3M  23.0G  14.3M  legacy
Volume1/.system/cores                                     6.51M  23.0G  6.51M  legacy
Volume1/.system/rrd-5a1e22f5286d453ca9fcfb104651ce47      36.0M  23.0G  36.0M  legacy
Volume1/.system/samba4                                    2.08M  23.0G  2.08M  legacy
Volume1/.system/syslog-5a1e22f5286d453ca9fcfb104651ce47   4.73M  23.0G  4.73M  legacy
Volume1/Archive-FAR                                       17.2G  23.0G  17.2G  /mnt/Volume1/Archive-FAR
Volume1/FDB                                                632G  23.0G   632G  /mnt/Volume1/FDB
Volume1/FTPBackup                                         1016G  23.0G  1016G  /mnt/Volume1/FTPBackup
Volume1/HQ-ServerBackup                                   1.18T  23.0G  1.18T  /mnt/Volume1/HQ-ServerBackup
Volume1/ISOs                                              28.5G  23.0G  28.5G  /mnt/Volume1/ISOs
Volume1/NSS-DROBO                                          483G  23.0G   483G  /mnt/Volume1/NSS-DROBO
Volume1/Remote-Backups                                     191G  23.0G   192K  /mnt/Volume1/Remote-Backups
Volume1/Remote-Backups/Chris                               191G  23.0G   191G  /mnt/Volume1/Remote-Backups/Chris
Volume1/SDSPOOL-VMs                                        111G  23.0G   111G  /mnt/Volume1/SDSPOOL-VMs
Volume1/ServerBackups                                      243G  23.0G   243G  /mnt/Volume1/ServerBackups
Volume1/VHDs                                              59.8G  23.0G  59.8G  /mnt/Volume1/VHDs
Volume1/WebserverIsci                                      192K  23.0G   192K  /mnt/Volume1/WebserverIsci
Volume1/Xen                                                192K  23.0G   192K  /mnt/Volume1/Xen
Volume1/XenStore                                          2.06T  1.15T   961G  -
Volume1/iSCSIHYPERV                                       1.03T   822G   258G  -
Volume1/jails                                              192K  23.0G   192K  /mnt/Volume1/jails
freenas-boot                                              5.79G  8.62G    31K  none
freenas-boot/ROOT                                         5.63G  8.62G    31K  none
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201502162250           52K  8.62G   933M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201502232343           71K  8.62G   944M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201502270607           58K  8.62G   925M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201502271818           70K  8.62G   925M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201503071634           53K  8.62G   916M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201503150158           94K  8.62G   937M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201503200528           84K  8.62G   937M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201503270027           87K  8.62G   938M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201504100216           50K  8.62G   994M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201504152200           50K  8.62G  1019M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201505040117           87K  8.62G  1019M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201505100553           64K  8.62G  1019M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201506042008           98K  8.62G  1023M  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201510290351          120K  8.62G  1.00G  /
freenas-boot/ROOT/FreeNAS-9.3-STABLE-201602031011         5.63G  8.62G  1.01G  /
freenas-boot/ROOT/Initial-Install                            1K  8.62G   932M  legacy
freenas-boot/ROOT/default                                   53K  8.62G   933M  legacy
freenas-boot/grub                                          152M  8.62G  11.4M  legacy
 

Attachments

  • fnas1.PNG (53.8 KB)

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
Yep, that's the issue. The 3.2TB number at the top of your graphic is the two drives' worth of raw (parity) space. The number below it factors in the RAIDZ2 overhead. zfs list shows that clearly (23GB free).
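For anyone following along, the back-of-the-envelope math looks like this. A rough sketch only; real pools also lose a few percent to padding, metadata, and ZFS's internal slop reservation, so treat the result as an upper bound:

```python
# Rough RAIDZ2 usable-capacity check for a 6-disk pool. zpool list SIZE is
# raw capacity including parity; zfs list reports post-parity space.
disks = 6
parity = 2                # RAIDZ2 keeps two drives' worth of parity
raw_tib = 10.9            # zpool list SIZE from the output above

usable_tib = raw_tib * (disks - parity) / disks
print(f"~{usable_tib:.2f} TiB usable of {raw_tib} TiB raw")
```

That lands around 7.3 TiB, which is in the same ballpark as the ~7.0 TiB that zfs list accounts for (USED 6.99T plus 23.0G AVAIL on the filesystems).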

Don't let that pool fill up, or you won't be able to delete anything, and then you could lose all the data.
As a safety precaution, I would create a dataset with a 5-10GB reservation, so that if the pool fills, you can change the reservation to free up space.
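To make that concrete, the commands would look something like the following. The dataset name `Volume1/spacer` and the 8G size are just examples, not from this thread; the sketch prints the commands rather than running them:

```python
# Build the zfs commands for an emergency "spacer" dataset that holds a
# reservation. Dry-run: prints the commands instead of executing them.
# Dataset name and reservation size are illustrative examples.

def spacer_commands(pool="Volume1", name="spacer", size="8G"):
    create = ["zfs", "create", "-o", f"reservation={size}", f"{pool}/{name}"]
    # If the pool ever fills completely, dropping the reservation
    # frees that space immediately so deletes can proceed:
    release = ["zfs", "set", "reservation=none", f"{pool}/{name}"]
    return create, release

for cmd in spacer_commands():
    print(" ".join(cmd))
```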
 

JimmyUk

Dabbler
Joined
Sep 6, 2014
Messages
18
As a safety precaution, I would create a dataset with a 5-10GB reservation, so that if the pool fills, you can change the reservation to free up space.

Many thanks,

Top tip, that. I had a dev box do exactly as you described a couple of years ago; this would have saved my bacon back then!
 