No free space after deleting folder


mykolaq

Explorer
Joined
Apr 10, 2014
Messages
61
Hello, everybody!
I have an interesting problem. I copied a folder from one pool to another (for example, from pool Data to pool Data2) and, after copying, deleted the folder from the first pool. But the free space has not changed. I rebooted the server, but the problem was not solved. What can I do to fix this?
PS: The bin folder is empty, and the snapshots are OK.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
What do you mean by the snapshots being OK? If you have snapshots of the initial data on the initial pool, you'll have to delete those snapshots to free up the disk space.
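If you want to check from the command line, something along these lines should list the snapshots on the source pool and the space each one holds, and then let you get rid of the ones you no longer need (the snapshot name in the destroy command is just a placeholder):

Code:
# list every snapshot on the source pool together with the space it holds
zfs list -r -t snapshot -o name,used,referenced Data

# once you are sure a snapshot is no longer needed, destroy it
zfs destroy Data@some-old-snapshot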
 

mykolaq

Explorer
Joined
Apr 10, 2014
Messages
61
What do you mean by the snapshots being OK? If you have snapshots of the initial data on the initial pool, you'll have to delete those snapshots to free up the disk space.
I have deleted the snapshots of this pool.
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
Please provide the output of
zfs list -r -o name,used,usedsnap,avail,refer <Data>
with <Data> replaced by the name of the pool in question (in code tags, please).
 

mykolaq

Explorer
Joined
Apr 10, 2014
Messages
61
Please provide the output of
zfs list -r -o name,used,usedsnap,avail,refer <Data>
with <Data> replaced by the name of the pool in question (in code tags, please).
Code:
[root@NAS2] ~# zfs list -r -o name,used,usedbysnapshots,avail,refer Data
NAME                                                    USED  USEDSNAP  AVAIL  REFER
Data                                                   23.3T     1.44T  51.5G  21.9T
Data/.system                                            240M     1.77M  51.5G   320K
Data/.system/configs-0cb0a359379b491698d02a9cd13febb3  16.9M     2.08M  51.5G  14.8M
Data/.system/cores                                     51.3M     29.4M  51.5G  21.8M
Data/.system/rrd-0cb0a359379b491698d02a9cd13febb3       341K         0  51.5G   341K
Data/.system/samba4                                    16.4M     5.74M  51.5G  10.7M
Data/.system/syslog-0cb0a359379b491698d02a9cd13febb3    153M      115M  51.5G  38.7M
Data/jails                                             1.71M     1.33M  51.5G   384K


But
Code:
[root@NAS2] ~# zfs list -r -t snapshot -o name,creation,used Data
NAME                                                                         CREATION                USED
Data@auto-20160903.0400-1m                                                   Sat Sep  3  4:00 2016  22.8G
Data@auto-20160908.0100-1w                                                   Thu Sep  8  1:00 2016  31.1G
Data@auto-20160909.0100-1w                                                   Fri Sep  9  1:00 2016  3.63G
Data@auto-20160910.0100-1w                                                   Sat Sep 10  1:00 2016   192K
Data@auto-20160910.0400-1m                                                   Sat Sep 10  4:00 2016   192K
Data@auto-20160911.0100-1w                                                   Sun Sep 11  1:00 2016   170K
Data@auto-20160912.0100-1w                                                   Mon Sep 12  1:00 2016   234K
Data@Data_manual_1409                                                        Wed Sep 14 10:43 2016   226M

These do not add up to 1.44 TB.

And the pool is scrubbing now:
Code:
[root@NAS2] ~# zpool status -v Data
  pool: Data
state: ONLINE
  scan: scrub in progress since Wed Sep 14 18:40:48 2016
        35.0T scanned out of 35.0T at 438M/s, (scan is slow, no estimated time)
        232K repaired, 100.02% done
config:

        NAME                                            STATE     READ WRITE CKSUM
        Data                                            ONLINE       0     0     0
          raidz3-0                                      ONLINE       0     0     0
            gptid/a74719c6-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
            gptid/a807c51a-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
            gptid/a8c744f2-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
            gptid/a9869072-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
            gptid/aa48e2b8-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0  (repairing)
            gptid/ab0e9842-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
            gptid/abce8959-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
            gptid/ac8d8ad5-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
            gptid/ad4e1c6c-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
            gptid/ae13b960-4a20-11e3-a7f7-002590ab9f30  ONLINE       0     0     0
        cache
          gptid/ae7044fb-4a20-11e3-a7f7-002590ab9f30    ONLINE       0     0     0

errors: No known data errors


Is it OK when it shows more than 100% done? :)
Last update: after the scrub finished, there is still no change in free space.
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
These do not add up to 1.44 TB.

Does destroying the Data@auto-201609??.????-?? snapshots help (assuming those snapshots contain no valuable data)?
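Before destroying anything, a dry run should tell you how much space would actually come back (just a sketch: the snapshot names are the first and last auto snapshots from your listing, and I'm assuming the ZFS version shipped with FreeNAS already supports the % range syntax for zfs destroy):

Code:
# -n: dry run, -v: report the space that would be reclaimed by destroying
# the whole range of snapshots, without actually destroying anything
zfs destroy -nv Data@auto-20160903.0400-1m%auto-20160912.0100-1w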

I'm seeing the same symptom on my system: The sum of the USED values of all snapshots is way smaller than the USEDSNAP value for the dataset being snapshotted. Unfortunately I have no explanation for that.

Code:
scr@blunzn:~ % zfs list -r -t all -o name,used,usedsnap,refer volume0/cbackup
NAME  USED  USEDSNAP  REFER
volume0/cbackup  570G  143G  427G
volume0/cbackup@auto-20160902.1200-2w  0  -  438G
volume0/cbackup@auto-20160902.1800-2w  0  -  438G
volume0/cbackup@auto-20160903.0600-2w  0  -  438G
volume0/cbackup@auto-20160903.1200-2w  0  -  437G
volume0/cbackup@auto-20160903.1800-2w  0  -  437G
volume0/cbackup@auto-20160904.0600-2w  0  -  437G
volume0/cbackup@auto-20160904.1200-2w  0  -  438G
volume0/cbackup@auto-20160904.1800-2w  0  -  438G
volume0/cbackup@auto-20160905.0600-2w  0  -  438G
volume0/cbackup@auto-20160905.1200-2w  0  -  438G
volume0/cbackup@auto-20160905.1800-2w  0  -  438G
volume0/cbackup@auto-20160906.0600-2w  0  -  438G
volume0/cbackup@auto-20160906.1200-2w  0  -  437G
volume0/cbackup@auto-20160906.1800-2w  0  -  437G
volume0/cbackup@auto-20160907.0600-2w  0  -  437G
volume0/cbackup@auto-20160907.1200-2w  0  -  438G
volume0/cbackup@auto-20160907.1800-2w  0  -  438G
volume0/cbackup@auto-20160908.0600-2w  0  -  438G
volume0/cbackup@auto-20160908.1200-2w  0  -  438G
volume0/cbackup@auto-20160908.1800-2w  0  -  438G
volume0/cbackup@auto-20160909.0600-2w  0  -  438G
volume0/cbackup@auto-20160909.1200-2w  0  -  431G
volume0/cbackup@auto-20160909.1800-2w  0  -  431G
volume0/cbackup@auto-20160910.0600-2w  0  -  431G
volume0/cbackup@auto-20160910.1200-2w  0  -  431G
volume0/cbackup@auto-20160910.1800-2w  0  -  431G
volume0/cbackup@auto-20160911.0600-2w  504K  -  431G
volume0/cbackup@auto-20160911.1200-2w  512K  -  430G
volume0/cbackup@auto-20160911.1800-2w  0  -  432G
volume0/cbackup@auto-20160912.0600-2w  0  -  432G
volume0/cbackup@auto-20160912.1200-2w  0  -  432G
volume0/cbackup@auto-20160912.1800-2w  0  -  432G
volume0/cbackup@auto-20160913.0600-2w  536K  -  432G
volume0/cbackup@auto-20160913.1200-2w  0  -  432G
volume0/cbackup@auto-20160913.1800-2w  0  -  432G
volume0/cbackup@auto-20160914.0600-2w  544K  -  431G
volume0/cbackup@auto-20160914.1200-2w  272K  -  435G
volume0/cbackup@auto-20160914.1800-2w  264K  -  435G
volume0/cbackup@auto-20160915.0600-2w  264K  -  435G
volume0/cbackup@auto-20160915.1200-2w  0  -  435G
volume0/cbackup@auto-20160915.1800-2w  0  -  435G
volume0/cbackup@auto-20160916.0600-2w  0  -  435G
 

mykolaq

Explorer
Joined
Apr 10, 2014
Messages
61
Does destroying the Data@auto-201609??.????-?? snapshots help (assuming those snapshots contain no valuable data)?

I'm seeing the same symptom on my system: The sum of the USED values of all snapshots is way smaller than the USEDSNAP value for the dataset being snapshotted. Unfortunately I have no explanation for that.
No :(
Update: Hm, after deleting some snapshots I got a lot of space back, but now I don't have any snapshots :)
 

romwil

Cadet
Joined
Sep 19, 2016
Messages
5
I have a very similar issue; however, no sizable snapshots are involved.

Code:
[root@alexandria] /mnt/massmedia/massmedia# zfs list -r -o name,used,usedsnap,avail,refer massmedia
NAME                                                              USED  USEDSNAP  AVAIL  REFER
massmedia                                                        13.5T         0   510G   153K
massmedia/.system                                                 412M         0   510G   393M
massmedia/.system/configs-ffb84ccc300c4843b1352f93d2beb43e        665K         0   510G   665K
massmedia/.system/cores                                          8.06M         0   510G  8.06M
massmedia/.system/rrd-ffb84ccc300c4843b1352f93d2beb43e            153K         0   510G   153K
massmedia/.system/samba4                                         5.04M         0   510G  5.04M
massmedia/.system/syslog-ffb84ccc300c4843b1352f93d2beb43e        5.15M         0   510G  5.15M
massmedia/jails                                                  4.99G         0   510G   817M
massmedia/jails/.warden-template-pluginjail                       527M      153K   510G   526M
massmedia/jails/.warden-template-pluginjail--x64                  818M      141K   510G   817M
massmedia/jails/.warden-template-pluginjail--x64-20150830180943   527M      153K   510G   526M
massmedia/jails/.warden-template-pluginjail-open-x86              153K         0   510G   153K
massmedia/jails/.warden-template-standard--x64                   2.21G      141K   510G  2.21G
massmedia/jails/openvpn                                           163M         0   510G  2.29G
massmedia/massmedia                                              13.5T         0   510G  13.5T



You can see there are four snaps (all jails). I have 13.5T reported as used, although I deleted 2T a few days ago. The 510G reported across the board as AVAIL is very incorrect.

Any ideas?

Will
 

MrToddsFriends

Documentation Browser
Joined
Jan 12, 2015
Messages
1,338
You can see there are four snaps (all jails). I have 13.5T reported as used, although I deleted 2T a few days ago. The 510G reported across the board as AVAIL is very incorrect.

zfs list doesn't show any snapshots if neither -t snapshot nor -t all is specified on the command line. So no, there aren't any snapshots shown in your listing. More importantly, the USEDSNAP column shows that those 2 TB aren't bound by snapshots.
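To see at a glance where the space on that pool is actually going, a per-dataset breakdown along these lines might help (just a sketch; the -o space shorthand should be available in the ZFS version that FreeNAS 9.10 ships):

Code:
# break USED down into space held by snapshots, the dataset itself,
# refreservation and child datasets, for every dataset in the pool
zfs list -o space -r massmedia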

But wait: Do you have the Export Recycle Bin option checked in the corresponding CIFS share?
http://doc.freenas.org/9.10/sharing.html#windows-cifs-shares
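If that option was enabled at some point in the past, deleted files may still be sitting in Samba's recycle directory on the dataset. Something like the following would show it (the .recycle path is an assumption on my part; the actual location depends on how the share was configured):

Code:
# show how much space a leftover Samba recycle directory is holding, if one exists
du -sh /mnt/massmedia/massmedia/.recycle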
 

romwil

Cadet
Joined
Sep 19, 2016
Messages
5
Nope. I was hopeful when I read about that setting while I was poring over the docs looking for an answer... no dice. I triple-checked all shares; that option is not set.
 

romwil

Cadet
Joined
Sep 19, 2016
Messages
5
No solution. I ended up installing a fresh 9.10 instance, backing up all of the data to Amazon Cloud Drive, wiping, formatting and creating new volumes, and restoring from ACD. All good now. (Well, still replicating a few hundred hours later :) but going well.)

Not an optimal solution, to be sure, but it got me moving forward.

Will
 