How bad *is* it to have less than 1 GB RAM per TB?


Lucien

Dabbler
Joined
Nov 13, 2011
Messages
34
Originally I was planning a small NAS, an ITX board with 6 x 3 TB drives in a RAIDZ2. I've picked up everything except the hard drives at this point, including 2 x 4 GB sticks of ECC RAM. The thing is, when I was speccing out the box I'd totally forgotten about the 1 GB RAM per TB guideline, and I could have sworn I'd seen builds where people were doing the same.

Since this is going to be a home build and my requirements are low, I don't need blazing performance. But recently I've been looking at going for 6 x 4 TB drives instead, and while reading up I was reminded of the RAM guidelines. So how bad would it be if I stuck with 8 GB of RAM total? Because it's a mini-ITX board, I'm limited to only 2 DIMM slots.
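
For reference, the arithmetic I'm worried about looks roughly like this (treating the 1 GB per TB figure as the rule of thumb it is, and counting raw capacity):

```python
# Rule-of-thumb check: ~1 GB of RAM per TB of disk (a guideline, not a hard limit).
installed_ram_gb = 8

for drives, size_tb in ((6, 3), (6, 4)):
    raw_tb = drives * size_tb
    print(f"{drives} x {size_tb} TB = {raw_tb} TB raw -> "
          f"guideline ~{raw_tb} GB RAM vs {installed_ram_gb} GB installed")
```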
 

ndboost

Explorer
Joined
Mar 17, 2013
Messages
78
I am in the same boat right now (2 x 4 GB), but have 16 GB (2 x 8 GB) on order.

I'm running an MSI ITX board with 2 DIMM slots and 1 PCIe slot.

The other thing you could do if memory becomes an issue is use SLC and MLC SSDs for your ZIL and L2ARC cache disks. This could help offset some of the performance hit from not having sufficient memory; it's what I plan on eventually doing.

I am running 4 x 1.5 TB, and at 8 GB of RAM I know memory is my bottleneck, as all my other hardware is up to spec: IBM M1015 card in JBOD mode.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
The other thing you could do if memory becomes an issue is use SLC and MLC SSDs for your ZIL and L2ARC cache disks. This could help offset some of the performance hit from not having sufficient memory; it's what I plan on eventually doing.


I'm unsure why people think adding an SSD will offset their RAM situation.

Adding a SLOG (dedicated ZIL) shouldn't have any effect on RAM. Other than the fact that most home users won't benefit from it, it shouldn't affect RAM one way or the other. I've actually measured a decrease in performance in a home environment when adding a SLOG unnecessarily.

On the other hand, adding an SSD as an L2ARC device will make a low-RAM situation worse. ZFS has to use RAM in order to track what's in the L2ARC, so you're stealing even more RAM away from the system in order to have the L2ARC. This just makes things worse: the limited RAM situation is not helped by having an SSD cache, and tracking what's in the cache takes RAM. L2ARCs are meant to provide main-pool offload for very busy fileservers that have a large working set, i.e. if you have 500 gigs of database data that gets read constantly. Having 500 gigs of RAM is quite expensive; having 128 gigs of RAM and a ~500 gig SSD is more practical. In that case, the system has enough RAM to track the contents of the SSD, and enough of the commonly read data can make it into the SSD, leaving the main pool available for other things, like writes.
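
To put rough numbers on the "tracking takes RAM" part: every record cached in L2ARC needs a header kept in main memory. The per-record header size below is an assumption (the exact figure varies by ZFS version; anywhere from under a hundred to a few hundred bytes gets quoted), but the shape of the math is the point, especially with small record sizes:

```python
# Back-of-envelope: RAM consumed just to index an L2ARC device.
HEADER_BYTES = 180          # ASSUMED RAM cost per cached record; varies by ZFS version
L2ARC_SIZE = 500 * 2**30    # a 500 GiB SSD used as L2ARC

def l2arc_ram_overhead(recordsize):
    """RAM needed to track a completely full L2ARC made of `recordsize`-byte records."""
    records = L2ARC_SIZE // recordsize
    return records * HEADER_BYTES

for rs_kib in (8, 128):
    gib = l2arc_ram_overhead(rs_kib * 2**10) / 2**30
    print(f"recordsize {rs_kib:>3} KiB -> ~{gib:.1f} GiB of RAM just for L2ARC headers")
```

With small records, the headers for a 500 GiB L2ARC can add up to more than the 8 GB of RAM in question, which is exactly the "making things worse" problem.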
 

ndboost

Explorer
Joined
Mar 17, 2013
Messages
78
I'm unsure why people think adding an SSD will offset their RAM situation.
Because there is so much misinformation out there... websites saying "add SSDs to offset lack of memory." This is what I've always read, and thus always thought. One example is this site: www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/

Although, after just re-reading the comments on that site, several people said things similar to your statements.

Adding a SLOG (dedicated ZIL) shouldn't have any effect on RAM. Other than the fact that most home users won't benefit from it, it shouldn't affect RAM one way or the other. I've actually measured a decrease in performance in a home environment when adding a SLOG unnecessarily.
How would adding a ZIL not have a positive effect? The ZIL is used for writes, right? Could you explain why?

On the other hand, adding an SSD as an L2ARC device will make a low-RAM situation worse. ZFS has to use RAM in order to track what's in the L2ARC, so you're stealing even more RAM away from the system in order to have the L2ARC. This just makes things worse: the limited RAM situation is not helped by having an SSD cache, and tracking what's in the cache takes RAM. L2ARCs are meant to provide main-pool offload for very busy fileservers that have a large working set, i.e. if you have 500 gigs of database data that gets read constantly. Having 500 gigs of RAM is quite expensive; having 128 gigs of RAM and a ~500 gig SSD is more practical. In that case, the system has enough RAM to track the contents of the SSD, and enough of the commonly read data can make it into the SSD, leaving the main pool available for other things, like writes.

This makes perfect sense for L2ARC.
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
How would adding a ZIL not have a positive effect? The ZIL is used for writes, right? Could you explain why?


jgreco explains it far better than I ever could, but I'll try. The ZIL is used for *sync* writes. Async writes get queued up in regular transaction groups (txgs) and get flushed to the pool at 'regular' intervals. Sync writes get committed to the ZIL first (so they're not lost), and also get queued up in txgs for flushing out to the main pool. By default, the ZIL is 'in pool'. If there aren't a lot of sync writes going on, this is perfectly acceptable: async writes don't hit the ZIL and just get buffered in RAM. That's a typical home use scenario.

With lots of sync writes, ZFS needs to ensure data integrity, so these writes are committed to stable storage (the ZIL) first, AND queued in RAM to be flushed out to the main pool later. If there's never an unclean shutdown, the ZIL is never read from; it's only there as a 'just in case'. Moving the ZIL onto a dedicated device (SLOG, or separate intent log) lets an SSD 'absorb' all the critical sync writes as fast as possible, providing data integrity while maintaining a write buffer. If there's never an unclean shutdown or crash, the SSD will never be read from, since everything written to the SSD is also buffered in RAM and flushed to the main pool in exactly the same way an async write is.

Understanding this helps explain why not all SSDs make good ZIL devices. Most (all?) SSDs use DRAM internally to 'buffer' writes. If the SSD can't guarantee that the contents of the DRAM buffer are safe, the dedicated ZIL device is not really providing data integrity. Typically an SSD needs a supercapacitor, or a bank of capacitors, to be able to commit the contents of its DRAM buffer to flash.

I've actually benchmarked cases where a dedicated ZIL slowed down regular async writes. Sync writes were faster with the SSD, of course, but if you're not doing large sync writes, who cares?

tl;dr would be kinda like this: a suitable SSD acting as a SLOG (dedicated ZIL) allows sync writes to be 'converted' to the speed (or near the speed) of async writes while still maintaining POSIX data compliance. If there are very few sync writes going on, a dedicated ZIL is of very little use, as everything is async anyway.
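
If you want to see the sync vs. async difference outside of ZFS entirely, here's a minimal sketch (plain Python, nothing FreeNAS-specific; file names and sizes are arbitrary) that times buffered writes against fsync()'d writes. The fsync()'d path is the kind of traffic a SLOG exists to absorb:

```python
import os
import time

DATA = b"x" * 4096   # one 4 KiB block per write (arbitrary size)
COUNT = 1000         # number of writes (arbitrary)

def timed_writes(path, sync):
    """Write COUNT blocks; if sync, fsync() after each write like a sync writer would."""
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(COUNT):
            f.write(DATA)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # force the block to stable storage before continuing
    os.remove(path)
    return time.time() - start

print(f"async-style (buffered): {timed_writes('bench_async.tmp', sync=False):.3f}s")
print(f"sync-style  (fsync'd) : {timed_writes('bench_sync.tmp', sync=True):.3f}s")
```

Without a fast log device under the filesystem, the second number is typically much larger; that gap is what a suitable SLOG narrows.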
 

ndboost

Explorer
Joined
Mar 17, 2013
Messages
78
Thank you both for the detailed explanations.
 

Lucien

Dabbler
Joined
Nov 13, 2011
Messages
34
Well... I've got no plans to add a ZIL or L2ARC anyway... It's just the RAM issue that I'm worried about.
 