Dataset using double the size of the actual files

djb

Explorer
Joined
Nov 15, 2019
Messages
76
Hello,

I have a ZFS dataset whose used space in the TrueNAS GUI is 100 GB. Over the SMB share, if I right-click the same files and open Properties, they total 47 GB.
I don't know what might have caused that.

But I've noticed that snapshots are not deleted correctly after their lifetime expires. For example, I snapshot every 5 minutes with a lifetime of 2 weeks, which should be about 4,000 snapshots, yet under Storage -> Snapshots there are over 60,000 snapshots! (I don't know if that's related.)
I have the recursive snapshot option checked, and there are 8 datasets below the main dataset. But even assuming it takes snapshots of each of those, that should be about 32,000, not 60,000.
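
Checking that arithmetic in a shell, as a quick sketch (storagepool/dataset is a placeholder name, not the real path):

# 2 weeks of 5-minute snapshots: 14 days * 24 h * 12 per hour
echo $(( 14 * 24 * 12 ))       # 4032 per dataset
echo $(( 14 * 24 * 12 * 8 ))   # 32256 across the 8 datasets below the main one
# Count what actually exists (-H drops the header line, so wc -l is exact):
zfs list -H -t snapshot -o name -r storagepool/dataset | wc -l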

Any suggestions? What should I check?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Screenshots and console output would be nice to illustrate what's going on. Don't get me wrong, it's always good to explain it in your own words, but it's easiest with everything in front of us.
 

djb

Explorer
Joined
Nov 15, 2019
Messages
76
Datasets picture:

[screenshot: dataset list in the web UI]


The zfs list command returns the same results as the web UI.

Windows, over SMB, reports a different size in the files' Properties:

[screenshot: Windows Properties dialog showing file sizes]


If you need more data, please advise.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
It'd be useful to see the output of zfs list -o space -r /mainmirror/DataFiles/Beyond
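
For readers following along, this is what the -o space view breaks USED down into; a commented sketch (storagepool/dataset is a placeholder path):

# -o space prints NAME, AVAIL, USED, USEDSNAP, USEDDS,
# USEDREFRESERV and USEDCHILD. USEDSNAP is space held only by
# snapshots, USEDDS is the dataset's live data, and USEDCHILD
# is space consumed by child datasets.
zfs list -o space -r storagepool/dataset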
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
So it's snapshots all the way down :wink:

You have active snapshots that take up roughly half of that space.
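
For reference, the columns in that space view add up (per the OpenZFS property docs):

USED = USEDSNAP + USEDDS + USEDREFRESERV + USEDCHILD

so a USEDSNAP of roughly half of USED means snapshots alone are pinning that space.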
 

djb

Explorer
Joined
Nov 15, 2019
Messages
76
So it's snapshots all the way down :wink:

You have active snapshots that take up roughly half of that space.
Hello!
So USEDSNAP is the snapshot size and USEDDS is the actual data?

It's still strange, since over a one-week cycle there is no 50 GB of new data; snapshots only hold the changes, and I would expect 3-4 GB of changes in the course of a week. I will delete them all and see how it goes.
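
As a side note, bulk deletion can also be done from the shell: zfs destroy accepts a range of snapshots joined with %. A hedged sketch with hypothetical snapshot names, using -n (dry run) and -v (verbose) to preview before destroying anything:

# Preview what would be destroyed between the two snapshots (inclusive):
zfs destroy -nv storagepool/dataset@auto-2023-01-01_00-00%auto-2023-08-31_23-55
# Re-run without -n to actually destroy the range.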
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
You can also do zfs list -o space -r -t snapshot /mainmirror/DataFiles/Beyond to see which snapshots are at fault, so to say, if any.
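
A small variation on that command surfaces the biggest snapshots directly (placeholder path again):

# -s used sorts ascending by the used column, so the largest
# snapshots end up at the bottom of the listing:
zfs list -t snapshot -o name,used -s used -r storagepool/dataset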
 

djb

Explorer
Joined
Nov 15, 2019
Messages
76
You can also do zfs list -o space -r -t snapshot /mainmirror/DataFiles/Beyond to see which snapshots are at fault, so to say, if any.
Hello and good morning,

This command didn't list all the snapshots, but I can see all of them through the web UI under Storage -> Snapshots.
[screenshot: snapshot list in the web UI]

Since I'm snapshotting every 5 minutes with 1-week retention, multiplied across the datasets (nested ones as well), I'm expecting about 23,000 snapshots; at 88 KB each, that's only about 2 GB. Also, the new files added to the dataset come to no more than 3-4 GB per week, so I still can't explain how 49 GB of data is using 97 GB on the dataset.
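
Running those numbers in the shell (figures from the paragraph above):

# 1 week of 5-minute snapshots: 7 days * 24 h * 12 per hour
echo $(( 7 * 24 * 12 ))    # 2016 per dataset
# 23000 snapshots at ~88 KB apiece:
echo $(( 23000 * 88 ))     # 2,024,000 KB, i.e. roughly 2 GB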

Do I need to run a pool scrub? Could this be old junk from transfers that never completed, or something like that?
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
The web shell is fundamentally broken beyond the most simple of tasks. Use ssh and you will see *all* snapshots.
 

djb

Explorer
Joined
Nov 15, 2019
Messages
76
The web shell is fundamentally broken beyond the most simple of tasks. Use ssh and you will see *all* snapshots.
Thank you, Patrick! Is there any way to display the results in smaller sections? I'm using the PuTTY client over SSH as you suggested, but I'm not able to scroll back all the way to the first line.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
PuTTY has a setting for the size of the scrollback buffer somewhere... I have not used PuTTY in years, so I cannot be more precise.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Or you can just use less:
command | less
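
Concretely, for the snapshot listing from earlier in the thread (placeholder path; q quits less, PgUp/PgDn scroll):

zfs list -o space -r -t snapshot storagepool/dataset | less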
 

djb

Explorer
Joined
Nov 15, 2019
Messages
76
PuTTY has a setting for the size of the scrollback buffer somewhere... I have not used PuTTY in years, so I cannot be more precise.
I just increased the buffer lines from 2,000 to 6,000 and was able to see all the snapshots.
I had some "orphaned" snapshots from 2 years ago... I used their naming scheme to filter them in the web UI and deleted them.
Thanks for all the help.

For future reference, the command used is: zfs list -o space -r -t snapshot storagepool/dataset1/subdataset1
(No need for a "/" before the storage pool name.)
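
For anyone who would rather do that cleanup from the shell, a hedged sketch along the same lines (the 'auto-2021' pattern is hypothetical; always inspect the list before piping it into zfs destroy):

# List every snapshot name (-H drops headers), keep those matching
# the old naming scheme, and review the result first:
zfs list -H -t snapshot -o name -r storagepool | grep 'auto-2021'
# Then feed the same list to zfs destroy, one snapshot at a time:
zfs list -H -t snapshot -o name -r storagepool | grep 'auto-2021' | xargs -n 1 zfs destroy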
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
(No need for a "/" before the storage pool name.)
Oops, yes, I frequently let those leak into zfs commands, but they should never have a leading slash.
 