What is Services memory?

Joined: Oct 22, 2019 | Messages: 3,641
The jail is configured to mount the datasets I want to clean, but when the jail isn't running, the datasets aren't mounted.
Understood. I read your post to mean "I did not configure a mountpoint for that jail".

All good. :smile:
 

CJRoss (Contributor) | Joined: Aug 7, 2017 | Messages: 139
It still hasn't gone down, and my NFS transfer this morning was at half its normal speed. I'm probably going to reboot soon.

Code:
ARC size (current):                                    13.4 %    8.4 GiB
        Target size (adaptive):                        15.3 %    9.6 GiB
        Min size (hard limit):                          3.2 %    2.0 GiB
        Max size (high water):                           31:1   62.8 GiB
        Most Frequently Used (MFU) cache size:         61.7 %    4.4 GiB
        Most Recently Used (MRU) cache size:           38.3 %    2.7 GiB
        Metadata cache size (hard limit):              75.0 %   47.1 GiB
        Metadata cache size (current):                  5.7 %    2.7 GiB
        Dnode cache size (hard limit):                 10.0 %    4.7 GiB
        Dnode cache size (current):                    15.1 %  730.8 MiB


Looks like I'm going to have to build a reboot into every rmlint session. Either that, or try turning on the idle tuneables. If I set the delay high enough, it might avoid negatively impacting things.
 
Joined: Oct 22, 2019 | Messages: 3,641
I'm not trying to get you to change your workflow, but is there something you can do differently upstream so that you're not creating massive duplicate files in the first place?

It seems like a waste of RAM, CPU cycles, and storage. Perhaps there's another way?

I know that the program czkawka (GUI and command-line versions) employs a "partial hash" trick, which not only speeds up the process, but avoids unnecessarily loading and hashing an entire file just because it happens to be the same exact size as another file.

The way this works is that if two or more files have the same exact size, it doesn't just load them in their entirety to compute full hashes. Rather, it first computes a hash of only the first few kilobytes of each file. If those differ, it doesn't bother moving on to the next step. If they match? Only then will it compute the hash of the entire file.

This spares your system from loading and computing the hash for every single "file size match".
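
If it helps to see the idea spelled out, here's a rough Python sketch of that size-match, then partial-hash, then full-hash pipeline. It's only an illustration of the technique, not czkawka's actual code; the 4 KiB partial-read size and the BLAKE2 hash are arbitrary choices for the example.

Code:
import hashlib
import os
from collections import defaultdict

PARTIAL_BYTES = 4096  # assumption: a few KiB is usually enough to tell files apart


def hash_file(path, limit=None):
    """Hash a file; if limit is set, only hash the first `limit` bytes."""
    h = hashlib.blake2b()
    remaining = limit
    with open(path, "rb") as f:
        while True:
            size = 65536 if remaining is None else min(65536, remaining)
            chunk = f.read(size)
            if not chunk:
                break
            h.update(chunk)
            if remaining is not None:
                remaining -= len(chunk)
                if remaining <= 0:
                    break
    return h.hexdigest()


def find_duplicates(paths):
    # Step 1: only files with identical sizes can possibly be duplicates.
    by_size = defaultdict(list)
    for p in paths:
        by_size[os.path.getsize(p)].append(p)

    dupe_groups = []
    for same_size in by_size.values():
        if len(same_size) < 2:
            continue
        # Step 2: cheap partial hash of the first few KiB.
        by_partial = defaultdict(list)
        for p in same_size:
            by_partial[hash_file(p, PARTIAL_BYTES)].append(p)
        # Step 3: full hash only for files whose partial hashes still collide.
        for candidates in by_partial.values():
            if len(candidates) < 2:
                continue
            by_full = defaultdict(list)
            for p in candidates:
                by_full[hash_file(p)].append(p)
            dupe_groups.extend(g for g in by_full.values() if len(g) > 1)
    return dupe_groups


Calling find_duplicates() on a list of file paths returns groups of paths whose size, partial hash, and full hash all match; everything else gets skipped early.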

However, if you're using the binary pkg manager, installing czkawka will pull in many dependencies, due to its GUI elements.
 

CJRoss (Contributor) | Joined: Aug 7, 2017 | Messages: 139

The dupes come from copying backups of backups without ever bothering to clean them up. So I'm not generating any new files, just cleaning up the ones I do have. My pool isn't even half full, so I haven't been hurting for space. Just trying to get things organized.

rmlint is lighter on memory if you don't run it in paranoid mode, but I really want to make sure I'm not deleting something that isn't actually a duplicate.
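
As I understand it, the paranoid check is essentially a byte-by-byte comparison of candidate files instead of relying only on hash matches, which is where the extra memory and I/O go. A rough Python sketch of that concept (the general idea only, not rmlint's actual code):

Code:
import os


def identical_bytes(path_a, path_b, chunk_size=65536):
    """Compare two files byte by byte after everything else has matched."""
    if os.path.getsize(path_a) != os.path.getsize(path_b):
        return False
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a = fa.read(chunk_size)
            b = fb.read(chunk_size)
            if a != b:
                return False
            if not a:  # both files exhausted at the same point: identical
                return True


That extra pass is what gives me confidence that a hash match alone can't trigger a deletion.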



I just need to decide if this is worth messing with tuneables. I may give them a try and see if it breaks anything. Not sure.
 

CJRoss (Contributor) | Joined: Aug 7, 2017 | Messages: 139
I ended up not touching the tuneables and instead just rebooting after my rmlint sessions.

What I found interesting is that on U4, Services would expand to take up almost all of the memory, but then would end up shrinking on the next rmlint run. Not sure what was going on there, but now that I've updated to U5, it's returned to the previous behavior of not shrinking Services.
 
Joined: Oct 22, 2019 | Messages: 3,641
The U5 update included changes in memory and ARC management.
 

CJRoss (Contributor) | Joined: Aug 7, 2017 | Messages: 139

I know it fixes the ARC issue, which is why I jumped on it as soon as it came out. I'm just not sure if it has any changes other than that one fix.
 