Minimum required system memory

Status
Not open for further replies.

Kallisti

Cadet
Joined
Jun 29, 2019
Messages
1
Hello everyone.

Can anyone explain to me why 8 GB RAM is the minimum required RAM?
I've built a custom FreeNAS system with a Mini ITX MoBo utilizing a Celeron J1800 CPU and just 4 GB RAM (non-ECC). It has worked great for over a year now! The transfer rate from SSD over Gbit network to my FreeNAS runs at ~65 MB/s and I'm very satisfied with it. In all that time, no data corruption or other problems have occurred.

I'd be glad if anyone could tell me why 8GB of RAM seems to be necessary.

Thanks in advance,

Matthias
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey Kallisti,

I would answer this with a question: can you tell me how smoking is bad for your health?

See, for smoking as for the 8 Gig of RAM in FreeNAS, we do not have such a clear, direct cause-and-consequence link. What we do have for both of them is clear statistics. When you look at the medical statistics of smokers, you see very clearly that they are far worse than those of non-smokers.

It is similar with FreeNAS: when you look at the kinds of problems and weirdness happening here and there, you see far more problems concentrated in the pool of FreeNAS servers with less than 8 Gig than in the others.

So what is the specific reason for 8 Gig and more? I do not have a line of code, config parameter, or process-monitoring subject to point you to. What our experience shows is that running below 8 Gig is asking for trouble, as demonstrated by the statistics.

Hope this helps you understand why 8 Gig is recommended as a minimum,
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I believe @jgreco would have a better answer here, but as I recall, some years ago we were seeing a number of threads in which users had experienced unexplained data loss or corruption. The common element in all of them was that they had < 8 GB of RAM--that was the recommended amount even then, but it wasn't recommended as strongly as now. Since strengthening that recommendation and otherwise updating the hardware recommendations, we aren't seeing those kinds of issues very much.

Now, if you're getting data loss or corruption, it's probably a result of a bug--but it seems to be a bug that was only triggered in low-RAM conditions. Maybe that bug has been fixed by now, or maybe it hasn't. Another common characteristic of those data loss scenarios was that the systems in question worked just fine, until they didn't any more.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
we were seeing a number of threads in which users had experienced unexplained data loss or corruption. The common element in all of them was that they had < 8 GB of RAM

Yep, as I said: strong stats...

Maybe that bug has been fixed by now, or maybe it hasn't.

And as I said, no clearly identified culprit :smile:

Another common characteristic of those data loss scenarios was that the systems in question worked just fine, until they didn't any more.

And that is the very definition of basically every problem: things always work before they stop working.

By definition, how can something --stop-- working when it never worked? :smile:

It would be good if someone had a more precise picture of the situation, but from all I've seen, we rely only on stats and observations...
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Another issue is the ability to use pre-fetch.

I pulled the information below from an old version of the documentation.

"NOTE: by default, ZFS disables pre-fetching (caching) for systems containing less than 4 GB of usable RAM. Not using pre-fetching can really slow down performance. 4 GB of usable RAM is not the same thing as 4 GB of installed RAM as the operating system resides in RAM."
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
And that is the very definition of basically every problem: things always work before they stop working.
Yes, of course. But the point (and the problem for those of us who are advocating for following the hardware requirements) is that there often isn't (or wasn't) any warning that precedes the failure. So the poster says, as OP said, "everything works fine for me with {4 GB RAM|Realtek NIC|3Ware RAID controller}, I don't see why I should need to do what you recommend", and indeed everything does look fine--until it fails catastrophically and without warning.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I believe @jgreco would have a better answer here, but as I recall, some years ago we were seeing a number of threads in which users had experienced unexplained data loss or corruption. The common element in all of them was that they had < 8 GB of RAM--that was the recommended amount even then, but it wasn't recommended as strongly as now. Since strengthening that recommendation and otherwise updating the hardware recommendations, we aren't seeing those kinds of issues very much.

Now, if you're getting data loss or corruption, it's probably a result of a bug--but it seems to be a bug that was only triggered in low-RAM conditions. Maybe that bug has been fixed by now, or maybe it hasn't. Another common characteristic of those data loss scenarios was that the systems in question worked just fine, until they didn't any more.

Basically I had observed, over a long period of time, that there were catastrophic failures being experienced, especially in the 4GB AMD APU community that was popular for the first year or two of FreeNAS 8, and that there were also problems being experienced on other platforms where there was high memory stress. This typically manifested itself as a sudden pool corruption, one fine day, usually when the pool was rather full-ish (probably inducing additional metaslab/caching stress), and at best you could recover a good chunk of the pool if you were patient and worked around kernel panics, and at worst the pool was basically trashed. The problem was worse because most of the people experiencing this were not experienced sysadmins, which also made it hellishly difficult to collect detailed information.

I changed the documentation to introduce the 8GB base requirement plus 1GB per TB of disk, and this has been very successful in avoiding problems, with the lone exception of dedupe, which has different memory requirements, and if you don't meet them, pool import can be a real problem.

When Jordan came along to iX, he criticized me for having made this change without having quantified it (even though *I* had no affected systems), and then tried to cajole me into "figuring it out," to which I pointed out that I'm not an iX employee, and then he grumbled quite a bit about how this needed to be resolved -- but of course did nothing at all about it.

While I do work with servers and networking professionally, I would note that my level of willingness to help on the forums doesn't extend to complex systems analysis and debugging. On the other hand, I'm also pretty good at noticing trends and inferring things. At one point I had a list of some dozens of examples of threads where people had lost pools. We've switched forumware a few times since then and I don't know where that might be now, and I don't care enough to dig.

The thing is, ZFS on 4GB did actually work for a lot of people. But it's putting the system under a huge strain. It's 2019. Memory is cheap. If your data is important, and we assume that you're using ZFS because your data is important, why risk it? My big goal is to make sure people aren't lulled into doing dumb things that cost them their data, so I feel some responsibility to inform people. But I'm also fine with the idea that you're free to do as you wish, and you own the results of your choices.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
At one point I had a list of some dozens of examples of threads where people had lost pools.
Somehow your "how to fail" thread disappeared into the ether.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
At one point I had a list of some dozens of examples of threads where people had lost pools.
Somehow your "how to fail" thread disappeared into the ether.
Perhaps we need another "How To Fail".
Or we could be more PC (politically correct) and call it: "Reasons why we don't do certain things".

For example, there is a current thread where someone used a 5-disk SATA enclosure, USB-attached to his host. Even though he used RAID-Zx on almost all the disks, he had one as its own vdev (aka a stripe). So he gets an award for making two mistakes: USB-attached disks, and a striped disk in an otherwise redundant pool.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Somehow your "how to fail" thread disappeared into the ether.

Apparently some iX staffers felt that the "How To Fail" thread was too harshly named, but basically the concept of documenting examples of what happened to people when they went wildly off the good path is a healthy exercise. I believe we have @wblock in particular to thank for that.

If I wasn't so lazy, I'd be annoyed enough to set up a wiki to host it...
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Good thread, brings back some memories, and yes, the "How to Fail" thread was great. I can see how it could be thought of as mean; however, it was just a great collection of examples of people making poor choices that led to catastrophe. I'm sure there were some honest mistakes too. I'm not sure I'd go to the extent of setting up a wiki, but if someone has the energy, it would be a good link to have.
 

CraigD

Patron
Joined
Mar 8, 2016
Messages
343
One of the highlights...

"ZFS loves to use memory. The more you have the faster it will be. If performance starts slacking that's your queue to add more RAM. 8GB is the minimum for FreeNAS and do not go below that. You aren't as special as your mommy told you and you risk your data if you think you are."
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,977
If I wasn't so lazy, I'd be annoyed enough to set up a wiki to host it...
That would be great, but with the current forum management any reference to it on the forums would vaporize just like the original thread did.
 

pschatz100

Guru
Joined
Mar 30, 2014
Messages
1,184
Hello everyone.

Can anyone explain to me why 8 GB RAM is the minimum required RAM?
I've built a custom FreeNAS system with a Mini ITX MoBo utilizing a Celeron J1800 CPU and just 4 GB RAM (non-ECC). It works great for over one year now! The transfer rate from SSD over Gbit-Network to my FreeNAS runs at ~65 MB/s and I'm very satisfied with it. Since then no data corruptions or other problems occurred.

I'd be glad if anyone could tell me why 8GB of RAM seems to be neccessary.

Thanks in advance,

Matthias
To the OP:
Just because 8 GB of RAM is the recommended minimum for most people, that does not mean that your particular system will fail if you have less. It really depends on your configuration - which you did not share with us - plus what you are doing with it. However, that does not make it wise to go against best practices. Your risk of failure is higher than it should be.

Keep some things in mind: the developers assume that systems have 8 GB of RAM, therefore they do not worry themselves about trying to make do with less. I too remember the old posts about failures and data loss that were related to inadequate system resources.

Log in to your system and look at memory usage. How much memory is being used? Is swap being used? If any swap is being used, then I would increase your memory. Swap is "overflow memory" and you do not want to be using it under normal circumstances. For example, on the system in my signature, FreeNAS is actually using about 12 GB of RAM and no swap. Therefore, for what I'm doing now, I am comfortable with 16 GB of memory.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
To the OP:
Just because 8 GB of RAM is the recommended minimum for most people, that does not mean that your particular system will fail if you have less. It really depends on your configuration - which you did not share with us - plus what you are doing with it. However, that does not make it wise to go against best practices. Your risk of failure is higher than it should be.

Keep some things in mind: the developers assume that systems have 8 GB of RAM, therefore they do not worry themselves about trying to make do with less. I too remember the old posts about failures and data loss that were related to inadequate system resources.

Actually I'm pretty sure the developers don't assume that. They're building for TrueNAS, which actually has - last I recall - a 32GB minimum platform size.

Log in to your system and look at memory usage. How much memory is being used? Is swap being used? If any swap is being used, then I would increase your memory. Swap is "overflow memory" and you do not want to be using it under normal circumstances. For example, on the system in my signature, FreeNAS is actually using about 12 GB of RAM and no swap. Therefore, for what I'm doing now, I am comfortable with 16 GB of memory.

ARC and other kernel space is never eligible for swap - swap is exclusively for userland stuff.

On one hand, you could interpret the use of swap as an indicator of userland memory pressure, so it could mean that you might want to add memory. This is classic UNIX sysadmin advice.

On the other hand, over the long run, many idle userland processes will tend to get swapped out when memory is momentarily constrained, so it isn't as good an indicator as we might like.

On the OTHER other hand, the real risk is when you experience a kernel panic, which would likely be caused by problems in kernel memory land (at least with respect to ZFS).

Anyways - it's true that your particular system isn't guaranteed to fail if you have less than 8GB RAM. However, this is really about risk factors. Further, it's also about the fact that it's no longer 2008 and 8GB of RAM is not an onerous amount of memory. It's like fifty bucks, depending on specifics.

We stopped seeing large quantities of failures in the forums when people followed the posted guidelines. Put in 8GB of base memory for the system. Add 1GB per TB of disk for good measure, unless you're using dedupe. This remains a reasonable recipe for problem-free operation. There are ways to squeeze it down once you get out past 16-32GB.
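That recipe can be sketched in a few lines of Python; the function name is mine and this is just the forum rule of thumb, not anything official:

```python
# Hypothetical sketch of the sizing rule described above:
# 8 GB base, plus 1 GB per TB of raw disk; dedupe is excluded because it
# has its own, much larger memory requirements.
def recommended_ram_gb(pool_tb: float, dedupe: bool = False) -> float:
    if dedupe:
        raise ValueError("dedupe needs separate sizing, not covered here")
    return 8 + 1 * pool_tb


print(recommended_ram_gb(0))   # 8  -- bare minimum, no disks counted
print(recommended_ram_gb(24))  # 32 -- e.g. six 4 TB drives
```

As noted above, past roughly 16-32GB there are ways to squeeze the number down, so treat this as the conservative starting point.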

So I think the OP's question has been sufficiently answered as follows, and probably doesn't need to continue beyond these two concise observations:

the systems in question worked just fine, until they didn't any more.
And that is the very definition of basically every problem: things always work before they stop working.
 

JoshDW19

Community Hall of Fame
Joined
May 16, 2016
Messages
1,077
That would be great, but with the current forum management any reference to it on the forums would vaporize just like the original thread did.

That simply isn't true. No one has asked to self-host it. I'd be happy to give you the content of the thread if you'd like to host it on your own site, Jailer, and you can even point a resource at it here on the forums. There's no need for the cynicism and negativity.

I see a lot of informational value in that type of thread, so I'll repeat what I've told others on the moderation team. Some felt that the thread went too far in unnecessarily shaming others, to the point that they wouldn't participate anymore. I know that wasn't the intent, but that's what the majority of the complaints about the thread were. The intent - providing an index of bad practices not to duplicate - was good, but to get specific, some felt the headings were making fun, and that people were laughing at others' expense over honest mistakes. That's not a good look regardless of the intention.

If anyone on the moderation team would like to adopt the thread and change it up so that it doesn't feel like a shaming thread, we can put it back out for public consumption. A good solution might be to grab all of those links and turn it into a resource instead of a discussion. That way people would be less inclined to say "haha, look at this stupid idea."

Hope that helps clear things up.
 