Too much RAM?

Status
Not open for further replies.

craig51

Dabbler
Joined
Oct 29, 2017
Messages
19
Hello all,

I would like to replace a few of our SAN units. We have several 2U Dell R510s with 12 x 3.5" bays, dual Xeons, and capacity for 8 x 16GB of RAM (128GB total). We have a lot of 2TB and 3TB 7200 RPM SAS drives to work with, so the general drive capacity will be anywhere from 24 to 36TB per box. We have 8GB and 16GB sticks available, so I am asking whether the max of 128GB (8x16) would be overkill and 64GB (8x8) would be just fine. I see so many statements that more RAM is better, but I'm not sure in this case.

Our use case is this:

VMware datastores served over iSCSI
4 SAN units backing 6 ESXi hosts (some local storage as well)
Total VM load of about 100

We have been running Open-E DSS7, which is Linux-based, behind LSI hardware RAID, but we would
like to take advantage of what looks like better, more robust snapshotting at the storage layer versus our current
solution. Plus, all of our Open-E boxes are 8-drive 2U units, so the 12-drive 2U Dells will help us save some space.


Any input would be appreciated.

Craig51
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Hello, welcome to the forums!

RAM is indeed super important, but more factors than RAM come into play. Here is a breakdown of what to consider when aiming high on the performance chart:

1. Configure vdevs as mirrors or 3-way mirrors, depending on your tolerance for failures.
2. Have GOBS of free space. Ideally, don't exceed 50% utilization.
3. Get a proper SLOG. Intel P3700, or, as of recently, the P4800X.
4. You may benefit from using an L2ARC. Its maximum useful size depends on your amount of RAM:
getting 128GB instead of 64GB effectively lets you run an L2ARC more than twice as large.
Choosing a less fancy NVMe drive (than your SLOG) would yield stupid fast performance for the most-used data.
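
To make that concrete, here is a rough command-line sketch of points 1, 3, and 4, assuming FreeBSD-style device names (da0-da11 for the twelve SAS bays, nvd0/nvd1 for NVMe SSDs; all placeholders, and on FreeNAS you would normally do this through the GUI instead):

# six 2-way mirror vdevs striped into one pool (point 1)
zpool create tank \
    mirror da0 da1  mirror da2 da3  mirror da4 da5 \
    mirror da6 da7  mirror da8 da9  mirror da10 da11

# dedicated SLOG device (point 3)
zpool add tank log nvd0

# separate L2ARC device (point 4)
zpool add tank cache nvd1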

Cheers.
 

craig51

Dabbler
Joined
Oct 29, 2017
Messages
19
Thank you for the tips. It sounds as if going ahead and maxing out the RAM is the way to go.

So is it best practice to use a SLOG with iSCSI and VMware? I had seen differing opinions on that, but I could be mistaken. I will look further.

Thanks
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
So is it best practice to use a SLOG with iSCSI and VMware?
A SLOG is best practice for iSCSI / datastores (and also for NFS with the sync=always flag set).
There is a section about this, with further links, in the Hardware guide, located in the Resources tab (next to Forums).
It will help you understand why the suggested drives are of interest and why they are important.

There is no scenario where you would not want a proper SLOG when hosting datastores on FreeNAS and expecting data integrity.
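
For reference, sync behaviour is controlled per dataset; a minimal sketch, assuming a hypothetical dataset (or zvol) named tank/vmware backing the datastore:

# force every write to be committed to stable storage (the SLOG) before it is acknowledged
zfs set sync=always tank/vmware

# verify the setting
zfs get sync tank/vmware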
 

craig51

Dabbler
Joined
Oct 29, 2017
Messages
19
Dice,

Thank you for the advice. We are getting ready to finalize the install on one of the machines, and all I have left to do is get the SSDs. I see you mention the P3700, which is a PCIe unit. One issue with our R510 boxes is that they are PCIe 2.0, not 3.0. Given this limitation, would a unit such as the DC S3710 be a suitable substitute, since the PCIe unit may not perform at its peak? The S3710 is a SATA drive, but it is also only 274 dollars for the 200GB size, versus 680 dollars for the 400GB P3700. I do not mind spending, but I do not want to overspend for a minimal performance gain.

Thanks for input
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
The DC S3710 is among the best you can get for SLOG purposes without jumping to NVMe. It has the desired features.

What's the network setup?
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Alright.
I suggest you search the forum for how to size a SLOG properly, sort of as an exercise ;)

The size is related to your network speed since, as trivial as it may sound, that's where your read and write instructions come from.
The faster the network, the higher the potential throughput to your pool, and the bigger the SLOG needs to be.
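
As a rough worked example (assuming the common rule of thumb that a SLOG only has to buffer a few seconds' worth of in-flight writes): a single 10GbE link delivers at most about 1.25 GB/s, and ZFS flushes transaction groups roughly every 5 seconds by default, so the SLOG only ever holds on the order of 1.25 GB/s x 5-10 s, i.e. about 6-12 GB. On 1GbE that figure drops by a factor of ten.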

If you've not read this post yet, it is sort of mandatory:
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/

Hint: You don't need to use the entire capacity of the drive as SLOG. Rather, you'll be way better off partitioning the drive in a way that lets the drive's internal wear levelling work freely.
This can be achieved by following this guide (I've done it on my SLOG too):
https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm
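
A minimal sketch of the over-provisioning step from that guide, run with hdparm on a Linux box before the drive goes into the FreeNAS machine (the device name /dev/sdX is a placeholder, and the 16 GiB figure is just an illustration; see the guide for details):

# show the drive's current visible vs. native max sector count
hdparm -N /dev/sdX

# limit the visible capacity to ~16 GiB (16 x 1024^3 / 512 = 33554432 sectors),
# leaving the rest of the flash as spare area for wear levelling
hdparm -Np33554432 --yes-i-know-what-i-am-doing /dev/sdX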
 

wblock

Documentation Engineer
Joined
Nov 14, 2014
Messages
1,506
SLOG details from http://doc.freenas.org/11/zfsprimer.html: "ZFS currently uses 16 GB of space for SLOG. Larger SSDs can be installed, but the extra space will not be used. SLOG devices cannot be shared between pools. Each pool requires a separate SLOG device. Bandwidth and throughput limitations require that a SLOG device must only be used for this single purpose. Do not attempt to add other caching functions on the same SSD, or performance will suffer."
 