Memory usage question

AVB

Contributor
Joined
Apr 29, 2012
Messages
174
Last week I noticed that when copying a large amount of data to my server I was maxing out my memory. I added 32GB more this week, and when I looked at memory usage during a scrub it was far less than I would have thought, staying at about 20% of the 96GB I now have available.

I really thought a scrub was about as stressful a workload as I could put on the system, but apparently, at least for memory, it isn't.

My question is: why isn't a scrub stressful on memory? Isn't all that data being read, compared, and checked? My curiosity has gotten the better of me, so I'm here to suck the knowledge out of one of you gurus.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
The RAM is used for caching I/O that might be used again (the ARC), such as when a user reads or writes a file. A scrub is a system process, and the file system is smart enough not to cache that I/O, because it will not be needed again soon and it does not serve a user data request.

Here is a good discussion of ARC and L2ARC - https://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/
AVB said:
Last week I noticed that when copying a large amount of data to my server I was maxing out my memory.
I am not sure why memory was being maxed out when writing data to the server. It makes me curious about what the process was. I download to one pool and then transfer from that pool to another pool. The pool to pool transfers tend to use a lot of resources for a very short time because of how I am handling that copy.
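
If you want to see this for yourself, you can watch the ARC while a copy or a scrub is running. Here is a rough Python sketch (assuming a FreeBSD/FreeNAS box where the kstat.zfs.misc.arcstats sysctls are available) that just polls the ARC size and hit rate every few seconds:

import subprocess, time

def sysctl(name):
    # Read a single numeric sysctl value ("sysctl -n <name>" on FreeBSD).
    return int(subprocess.check_output(["sysctl", "-n", name]))

GIB = 1024 ** 3
while True:
    size = sysctl("kstat.zfs.misc.arcstats.size")    # current ARC size in bytes
    c_max = sysctl("kstat.zfs.misc.arcstats.c_max")  # ARC target ceiling
    hits = sysctl("kstat.zfs.misc.arcstats.hits")
    misses = sysctl("kstat.zfs.misc.arcstats.misses")
    hit_pct = 100.0 * hits / ((hits + misses) or 1)
    print("ARC %.1f GiB of %.1f GiB max, hit rate %.1f%%"
          % (size / GIB, c_max / GIB, hit_pct))
    time.sleep(5)

During a big file copy you should see the ARC grow toward its ceiling, while during a scrub it stays roughly flat for the reasons above.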
 

AVB

Contributor
Joined
Apr 29, 2012
Messages
174
Chris Moore said:
I am not sure why memory was being maxed out when writing data to the server. It makes me curious about what the process was.

I had a backlog of about 90 Blu-ray ISO files (almost 2.9TB) that I was copying from my main desktop (Win 10 Pro) to the server over a 1Gb link. The server is 6' away, so Cat 5e runs from the desktop to a switch and then to the server, with another port on that switch going down to the 16-port house switch and router in the basement. (The house is hard wired with 16 Cat 5e drops.) I should have 5 or 6 more Blu-rays to copy next week, so I'll check memory usage again, although it might give a different result since I've added another 32GB of RAM to the server. Thank you for the reply, it did answer my question.
 

AVB

Contributor
Joined
Apr 29, 2012
Messages
174
I did a test copying 200GB and, the same as when I did the 2.9TB copy, I maxed out my memory... or so I thought. I was under the impression that if memory wasn't free it had to be in use somewhere. After a few hours of searching and reading I found that wired memory isn't "used" in the way that I thought, and having only 6GB of memory free out of 96GB is actually normal, since that memory is being used as the L1ARC just as it should be.
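
For anyone who finds this later, the numbers add up roughly like the sketch below. It assumes the standard FreeBSD sysctls (hw.pagesize, vm.stats.vm.v_free_count, vm.stats.vm.v_wire_count, kstat.zfs.misc.arcstats.size) and just prints free, wired, and ARC, which makes it obvious that "free" alone understates what is really available, since the ARC is counted as wired but gives memory back under pressure:

import subprocess

def sysctl(name):
    # Read a single numeric sysctl value ("sysctl -n <name>" on FreeBSD).
    return int(subprocess.check_output(["sysctl", "-n", name]))

GIB = 1024 ** 3
page = sysctl("hw.pagesize")                        # bytes per page
free = sysctl("vm.stats.vm.v_free_count") * page    # free memory
wired = sysctl("vm.stats.vm.v_wire_count") * page   # wired memory (includes ARC)
arc = sysctl("kstat.zfs.misc.arcstats.size")        # current ARC size

print("free:  %6.1f GiB" % (free / GIB))
print("wired: %6.1f GiB (of which ARC is %.1f GiB)" % (wired / GIB, arc / GIB))
print("free + ARC (roughly reclaimable): %.1f GiB" % ((free + arc) / GIB))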
 