System dataset full, help

answer35

Dabbler
Joined
Jan 26, 2022
Messages
28
Hi guys,

My system dataset is full, but I can't see any reason for it. I suspect there are way too many logs, but I'm not sure.
1660772138756.png

My applications live in the ix-applications dataset, so the 7 GB it uses seems right, but there are another 31 GB used somewhere in that dataset that I can't account for from the console.
1660772262656.png


Can anyone help me clean up my dataset?
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Look under /var/db/system to see the space hogs.
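A sorted `du` makes the hogs easy to spot. This is just a sketch; the exact path depends on where your system dataset is mounted:

```shell
# Summarize each child of the system dataset mountpoint, largest first.
# (Path assumed from this thread; permission errors are silenced.)
du -sh /var/db/system/* 2>/dev/null | sort -rh | head -n 10
```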
 
Joined
Oct 22, 2019
Messages
3,641
What are the outputs of these three commands:
Code:
zfs list -o space ssd-storage

zfs list -o space ssd-storage/.system

zfs list -o space ssd-storage/ix-applications
 
Joined
Oct 22, 2019
Messages
3,641
Here are the outputs I've got.
Whoa! Nearly 30 GB used by .system, but its child datasets? :eek:

You can further break it down (recursively "-r") like so:
zfs list -r -t filesystem -o space ssd-storage/.system
 

answer35

Dabbler
Joined
Jan 26, 2022
Messages
28
Look under /var/db/system to see the space hogs.
I just ran `du -sh` to check the size of this folder; here's the output:
1660852333511.png


How can I purge the database to free some space? And to avoid this kind of issue, did I understand properly that I have to change "graph age in months" and "number of graph points" in the reporting settings?
 

answer35

Dabbler
Joined
Jan 26, 2022
Messages
28
Whoa! Nearly 30 GB used by .system, but its child datasets? :eek:

You can further break it down (recursively "-r") like so:
zfs list -r -t filesystem -o space ssd-storage/.system
1660852924590.png

Here the issue is clearly syslog, which has grown way too large. I would like to purge it and set a lower limit on it. I think that's in the reporting settings, but I'm not sure.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Look in /var/log, and see which logs are the space hogs. There will probably be a lot of rolled-over archive logs you can delete.
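To flag the big ones directly, something like this should work (a sketch; the 100 MB threshold is an arbitrary cutoff, and permission errors are ignored):

```shell
# List any log file over 100 MB -- candidates for cleanup.
find /var/log -type f -size +100M -exec ls -lh {} + 2>/dev/null || true
```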
 

answer35

Dabbler
Joined
Jan 26, 2022
Messages
28
Look in /var/log, and see which logs are the space hogs. There will probably be a lot of rolled-over archive logs you can delete.
I've run that command and I can see that k3s_daemon.log is big, but I'm not sure how to delete it without breaking everything.
1660853782469.png


Would this command clean it up? `cat /dev/null > k3s_daemon.log`
 
Last edited:
Joined
Oct 22, 2019
Messages
3,641
I can see that k3s_daemon.log is big, but I'm not sure how to delete it without breaking everything.
I am looooooooving my TrueNAS Core server so much right now. :tongue: :cool:
 

answer35

Dabbler
Joined
Jan 26, 2022
Messages
28
I am looooooooving my TrueNAS Core server so much right now. :tongue: :cool:
Haha, I was on OMV and switched to TrueNAS Core when I wanted to upgrade, but I wasn't able to do things the way I wanted, so I decided to use SCALE, which is closer to what I was used to with OMV. But I still think TrueNAS requires a high-performance setup compared to OMV :/
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Well, delete the rolled-over k3s_daemon.log.* files to get some breathing room, at least.
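Something like this should do it (a sketch; double-check the glob first). For the live log, truncating in place is safer than deleting it, since the daemon keeps its file handle open and `rm` wouldn't free the space until it restarts:

```shell
# Remove rotated archives only -- the trailing dot keeps the live log safe.
rm -v /var/log/k3s_daemon.log.* 2>/dev/null || true

# Truncate the active log in place; -c avoids creating it if it's absent.
truncate -c -s 0 /var/log/k3s_daemon.log
```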
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Move your system dataset to a larger pool, preferably one of spinning rust with disks of > 4 TB capacity each.
 

answer35

Dabbler
Joined
Jan 26, 2022
Messages
28
Move your system dataset to a larger pool, preferably one of spinning rust with disks of > 4 TB capacity each.
Like I said, TrueNAS feels like it needs high-performance hardware to be efficient. But as winnie said, the issue isn't my disk size but log files that are getting too big. Logs should be around 1 GB, not 30.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
SCALE, so far as I know, snapshots apps at a ridiculously short interval, which requires a `k3s kubectl stop` and then a `k3s kubectl start` around every snapshot for consistency. There might be a tunable to relax the snapshot interval to something more reasonable, but that's essentially why these logs are so large.
 

homer27081990

Patron
Joined
Aug 9, 2022
Messages
321
Until you figure out what's going on (maybe by reading a smaller log once this one is gone?), you could try creating a simple bash script to run at boot, or daily, like here. Does TrueNAS SCALE have the ability to run scripts on a schedule?
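Such a script might look like this (a sketch with hypothetical paths and thresholds — SCALE does expose Cron Jobs under System Settings → Advanced, which could run it daily):

```shell
#!/bin/sh
# Hypothetical cleanup script: drop rotated k3s archives and truncate the
# live log once it passes 1 GB. Paths and limit are assumptions.
LOG=/var/log/k3s_daemon.log
LIMIT=$((1024 * 1024 * 1024))  # 1 GB in bytes

# Remove rotated copies (k3s_daemon.log.1, .2.gz, ...).
rm -f "${LOG}".* 2>/dev/null

# Truncate the live log in place if it is over the limit; -c avoids
# creating the file when it does not exist.
if [ -f "$LOG" ] && [ "$(stat -c %s "$LOG")" -gt "$LIMIT" ]; then
    truncate -c -s 0 "$LOG"
fi
```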
 

neofusion

Contributor
Joined
Apr 2, 2022
Messages
159
Please be sure to report this on the bug report page, providing as much detail as needed.

Having a runaway log take up close to 30 GB, potentially filling up the storage medium, is a serious issue for an appliance like this; had it filled up completely, you would have been in a state that's difficult or impossible to recover from without a backup.
 