RAM requirements and compatibility for 72TB JBOD extension to 96TB Supermicro X9SRH-7F based server


trionic

Explorer
Joined
May 1, 2014
Messages
98
Feel free to skip the waffle and go straight to this post's key question! For those with time or interest, here's the waffle:

A couple of years ago I built the following ZFS server:

Build’s Name: None!
Operating System/Storage Platform: FreeNAS-9.2.1.7-RELEASE-x64 (fdbe9a0)
CPU: Intel Xeon E5-1620v2 Processor 3.7 GHz LGA2011-0
Motherboard: Supermicro X9SRH-7F LGA 2011
Chassis: X-Case RM 424 Pro with SAS Expander & SGPIO Backplane
Drives: 24x Western Digital 4TB WD40EFRX Red in four RAID-Z2 VDEVs
RAM: 64GB ECC Registered RAM using 4x Hynix HMT42GR7MFR4C-PB 16GB DDR3 PC3-12800 RDIMMs
Add-in Cards: None
Power Supply: Zippy C2W-5620V 620W Dual Redundant PSU
Other Bits: 2x 3ware 8087-8087 Multilane cable
UPS: APC Smart-UPS SC 1500VA 230V
Usage Profile: NAS media and backup server

The server has been reliable and fast, and I have been very pleased. My original build thread is here; I still have build photos and process notes to add to it.

However, it reached 85% capacity usage a couple of months ago and I have been using random disks on another (non-ZFS) server for overspill. The time has now come to build a JBOD extension.

I have purchased most of the parts for the build:

Chassis: X-Case RM 424 Pro with SAS Expander & SGPIO Backplane
Drives: 24x Western Digital 3TB WD30EZRX Green (idle changed) in four RAID-Z2 VDEVs
Add-in Cards: Intel RAID Converter Board RCVT8788 JBOD Connector
Power Supply: ASPOWER R2A-DV0550-N 550W 2u EPS Redundant
Other Bits: external SAS 8088-8088 chassis cable; internal SAS cables: X-Case eXtra Value 8087-8087 multilane, 0.6m (too cheapass?)

In the future I will buy a cabinet to house all of my IT stuff, plus an APC 3kW 2U rack-mount UPS in place of the little SC1500i.

All of the components have been identified except for: more memory.

Although the general recommendation for ZFS is 1GB of RAM for every 1TB of raw disk space, after discussions and reading on this forum it was clear that 64GB would be fine for 96TB. However, I think I'd be pushing my luck running 64GB against 168TB of disk, so a RAM upgrade seems sensible.
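As a quick back-of-the-envelope (nothing more scientific than the thumb rule applied to my drive counts):

```python
# Back-of-the-envelope: the common ZFS guideline of ~1GB of RAM per 1TB of raw
# pool space, applied to my drive counts. It is only a rule of thumb, not a hard requirement.

main_chassis_tb = 24 * 4   # 24x 4TB WD Red   = 96TB raw
jbod_chassis_tb = 24 * 3   # 24x 3TB WD Green = 72TB raw

total_raw_tb = main_chassis_tb + jbod_chassis_tb   # 168TB raw
rule_of_thumb_ram_gb = total_raw_tb                # ~1GB RAM per TB of raw space

print(f"Total raw capacity: {total_raw_tb}TB")
print(f"Rule-of-thumb RAM:  ~{rule_of_thumb_ram_gb}GB (currently fitted: 64GB)")
```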

The X9SRH-7F supports up to 256GB of RAM across eight slots. From the manual:

Up to 256GB of memory are supported using ECC QR (Quad Rank or 4-Rank) registered DIMM technology at 1600/1333/1066/800 MHz.

With four slots occupied by the 16GB Hynix HMT42GR7MFR4C-PB RDIMMs and four slots empty, I can:
  1. Keep the existing RAM and fit four more 16GB DDR3 1.5V 1600MHz Hynix HMT42GR7MFR4C-PB RDIMMs, for a total of 128GB
  2. Keep the existing RAM and fit four 32GB DDR3 1.5V 1600MHz RDIMMs, for a total of 192GB
  3. Ditch the existing RAM and fit eight 32GB DDR3 1.5V 1600MHz RDIMMs, for a maximum of 256GB
(1) Is cheap but limits future expansion. There is every chance that I will build another JBOD chassis at some point.
(2) Allows expansion for the new chassis, with an option for a future upgrade (if needed). This is probably the most pragmatic choice.
(3) Is expensive and a bit mad but gives maximum future expansion potential.

The real question is: which RDIMMs? Preferred brands seem to be Hynix or Samsung. The Hynix modules have served me well and at this point I'd rather buy more Hynix instead of mixing brands (just a non-evidence-based preference).

The original memory choice was easy, as Supermicro list 16GB DIMMs on their HCL and the Hynix units were the only ones I could source in the UK at the time. Supermicro do not list any 32GB DIMMs, but Hynix do make a DDR3, quad-rank, ECC, 1600MHz 32GB RDIMM: the HMT84GR7AMR4C-PB.

I can buy those for £110 each including tax from Scan when they receive new stock in January. That price seems way too cheap, but we'll see.

The key question is: will the HMT84GR7AMR4C-PB RDIMMs be compatible with the X9SRH-7F and the E5-1620v2? I suspect so, but I wanted advice from the experts here before spending money. There must be FreeNAS users here running >16GB RDIMMs on a Supermicro motherboard.
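Whatever I end up buying, I plan to sanity-check what the board actually reports for each slot once the DIMMs are fitted. A rough sketch of the idea, assuming dmidecode is available from the FreeNAS shell and run as root:

```python
# Sketch: dump what the motherboard reports for each DIMM slot (SMBIOS type 17)
# using dmidecode. Assumes dmidecode is installed and the script runs as root.
import subprocess

output = subprocess.check_output(["dmidecode", "--type", "17"], text=True)

# Only the fields that matter for a "did the 32GB quad-rank sticks get detected?" check.
wanted = ("Locator:", "Size:", "Type:", "Speed:", "Manufacturer:", "Part Number:")

for line in output.splitlines():
    line = line.strip()
    if line.startswith("Memory Device"):
        print("-" * 40)           # separator between slots
    elif line.startswith(wanted):
        print(line)
```

If the reported size, speed or part number looks off, that would be the first hint the new sticks aren't being handled properly.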

Thanks for your time and advice :)
 
MatthewSteinhoff

Joined
Feb 2, 2016
Messages
574
Option two is the best bet. RAM only gets less expensive and you may come up with a reason to replace the motherboard/processor/server before needing the full 256GB of RAM. Going from 64GB to 192GB is a giant leap forward. You can always swap the original sticks out later.

That said, I'd roll the dice and see how the system runs with the additional disk space before adding RAM. If the bulk of your data sits unused, you may not be taxing your RAM. Is your ARC hit ratio good? Performance good?

I can't speak to the specific compatibility of your RAM combination except to say it looks like it should work.

Cheers,
Matt
 

trionic

Explorer
Joined
May 1, 2014
Messages
98
I'd probably fit 2x 32GB DIMMs initially, for a memory capacity of 128GB, and see how the thing runs, leaving possible expansion to 192GB later and even to 256GB (by replacing the 16GB DIMMs) if required by further expansion.

If the bulk of your data sits unused, you may not be taxing your RAM. Is your ARC hit ratio good? Performance good?
It seems that ARC reporting is not available in FreeNAS-9.2.1.7. I can see, though, that memory usage is currently at 50GB...
[screenshot: Reporting graph showing memory usage at around 50GB]

...but it has been much higher:
[screenshot: earlier Reporting graph showing higher memory usage]

I confess that these days I don't check that stuff very often... the server just works and does everything without a fuss.

Performance has always been good for the light usage the server sees. As you suggest, the bulk of the data does sit unused. The server's used for video and music media and backups.
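If I do want a hit-ratio number without a GUI graph, I gather it can be pulled straight from the ZFS sysctls. A minimal sketch, assuming the standard FreeBSD kstat.zfs.misc.arcstats counters are exposed on 9.2.1.7:

```python
# Sketch: compute the ARC hit ratio from FreeBSD's ZFS kstat sysctls.
# Assumes the standard kstat.zfs.misc.arcstats.* OIDs are present.
import subprocess

def sysctl_value(oid: str) -> int:
    """Return a numeric sysctl value, e.g. kstat.zfs.misc.arcstats.hits."""
    return int(subprocess.check_output(["sysctl", "-n", oid], text=True).strip())

hits = sysctl_value("kstat.zfs.misc.arcstats.hits")
misses = sysctl_value("kstat.zfs.misc.arcstats.misses")
arc_bytes = sysctl_value("kstat.zfs.misc.arcstats.size")

total = hits + misses
hit_ratio = 100.0 * hits / total if total else 0.0

print(f"ARC size:      {arc_bytes / 2**30:.1f} GiB")
print(f"ARC hit ratio: {hit_ratio:.1f}% ({hits} hits, {misses} misses since boot)")
```

The counters are cumulative since boot, so this gives a long-term average rather than a "right now" figure.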
 
MatthewSteinhoff

Joined
Feb 2, 2016
Messages
574
That reinforces my belief that you'll be fine without adding any RAM. If, as you add data, performance becomes less than peppy, toss a couple sticks into the beast. If you see a meaningful amount of swap being used, toss it some RAM. You'll be fine. You'll have ample time to order and install RAM long before the server grinds to a halt.
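If you want a quick way to keep an eye on swap from the shell, something along these lines should work against the stock FreeBSD swapinfo output; just a sketch:

```python
# Sketch: report swap usage by parsing FreeBSD's `swapinfo -k` output.
import subprocess

lines = subprocess.check_output(["swapinfo", "-k"], text=True).splitlines()

# The last line is either the single swap device or the "Total" row when there are several.
fields = lines[-1].split()
device, total_kb, used_kb = fields[0], int(fields[1]), int(fields[2])

pct = 100.0 * used_kb / total_kb if total_kb else 0.0
print(f"Swap ({device}): {used_kb // 1024} MiB used of {total_kb // 1024} MiB ({pct:.1f}%)")
```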

Cheers,
Matt
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
IMO, I'd give it a shot and see what happens. 64GB is a fair amount of space, and it sounds like you're just using this for a basic file server (or an impressive porn collection :D)... no VMs, no 4K non-linear video editing, no dedupe, etc. No point in throwing money at something that won't give you benefit... slap the JBOD on and see what happens. The worst that can happen is a lot of cache churn and reduced cache hit rate. It won't compromise your data.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
Drives: 24x Western Digital 3TB WD30EZRX Green (idle changed) in four RAID-Z2 VDEVs
Add-in Cards: Intel RAID Converter Board RCVT8788 JBOD Connector
Power Supply: ASPOWER R2A-DV0550-N 550W 2u EPS Redundant
I'd like to point out that 550W on 24 drives is stretching things way too thin.
 

trionic

Explorer
Joined
May 1, 2014
Messages
98
I'd like to point out that 550W on 24 drives is stretching things way too thin.
Thank you for posting this, because I was about to make a blunder. And it turns out I have already made one with the first chassis's PSU.

I just read Proper Power Supply Sizing Guidance by @jgreco and realised with a shock that the Zippy C2W-5620V 620W Dual Redundant PSU in the first chassis (with 24 drives, mobo, RAM, CPU etc.) is underpowered. With both modules operating, the PSU can just about power the chassis at start-up and is probably okay at idle, but if one module were to fail, the remaining module could fail to supply sufficient power. Using jgreco's approximate power guides, the first chassis needs 1059W and the JBOD needs 840W. I should have purchased at least 1200W and 1000W PSUs for this project.
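For anyone following along, this is roughly how I ran the numbers. The per-drive and base-system wattages below are my own approximations after reading the sizing guide, not figures lifted from it, but they land in the same ballpark as the totals above:

```python
# Rough PSU sizing check. The wattage assumptions are my own approximations
# (worst-case per-drive spin-up draw plus a lump sum for board/CPU/RAM/fans),
# not exact figures from jgreco's guide.

DRIVES_PER_CHASSIS = 24
SPINUP_W_PER_DRIVE = 35    # assumed worst-case 12V + 5V draw while all drives spin up
BASE_SYSTEM_W      = 220   # assumed mobo + E5-1620v2 + RAM + fans in the head unit
JBOD_OVERHEAD_W    = 0     # expander/backplane draw lumped into the per-drive figure

server_w = DRIVES_PER_CHASSIS * SPINUP_W_PER_DRIVE + BASE_SYSTEM_W
jbod_w   = DRIVES_PER_CHASSIS * SPINUP_W_PER_DRIVE + JBOD_OVERHEAD_W

print(f"Server chassis start-up estimate: ~{server_w}W (one 620W module falls well short)")
print(f"JBOD chassis start-up estimate:   ~{jbod_w}W (one 550W module falls well short)")
```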

So now I must buy replacement dual-redundant PSUs for server and JBOD chassis. Supermicro and Intel would be good choices but may be LOUD because they're intended for a data centre. This server operates in my home office. The Zippy PSU was a bit loud and shrill when new but quietened down after a couple of months. After a quick look online, I didn't see a Seasonic dual-redundant PSU with sufficient power capacity.

Yikes!
 

tvsjr

Guru
Joined
Aug 29, 2015
Messages
959
So now I must buy replacement dual-redundant PSUs for server and JBOD chassis. Supermicro and Intel would be good choices but may be LOUD because they're intended for a data centre. This server operates in my home office. The Zippy PSU was a bit loud and shrill when new but quietened down after a couple of months. After a quick look online, I didn't see a Seasonic dual-redundant PSU with sufficient power capacity.

Unfortunately, you're right. I run a 36-bay Supermicro 4U chassis. The power supplies aren't ridiculously noisy (I still wouldn't want to be in the room with them). The really noisy part is the fans pulling air across the drives... they are high-static-pressure, high-speed fans.

When we renovated our new home, I dedicated a 4x8 closet to the IT equipment. The closet got heavy insulation, a dedicated inverter-based mini-split AC and, soon, a solid-core door. This keeps the noise isolated. I'm running the 4U FreeNAS box, three 2U VM hosts, switch, patch panel, TiVo, etc. While you might not need to get quite as in-depth as I did, you might look at a similar option.
 
MatthewSteinhoff

Joined
Feb 2, 2016
Messages
574
If you've been running this long on the Zippy, @trionic, I wouldn't swap it in advance of failure. Power supplies are easily sourceable. If it burns out, you can have one the next day. Can you be without the server for a day or two?

Finally, I'd go with a less expensive single-unit, non-redundant power supply. They are so much cheaper than the dual units. Heck, for the price difference between a single quality power supply and a server-grade dual power supply, you could almost keep a spare single unit on the shelf as a cold spare that could replace either the server's or the JBOD's supply. Plus, a good Seasonic running below capacity is going to be quieter than a dual server power supply.

If you had separate power feeds as we do in the data center (each power supply going to its own UPS) and a high uptime requirement, dual would be the way to go. However, in your environment, I think it's far more likely that you'll lose all power than that a power supply will fail. Save your monies.

Cheers,
Matt
 

trionic

Explorer
Joined
May 1, 2014
Messages
98
@tvsjr My plan for 2017 is to put everything into a cabinet, including a 4U ESXi host, 1500W and 3000W APC UPSs, a cable modem and a 1U or 2U pfSense chassis. In my small home I don't have the luxury of a separate closet, but I am in the process of renovating and will consider giving the cabinet its own space, with air conditioning.

@MatthewSteinhoff The issue with the Zippy is that a single 620W module is likely insufficient to power the whole server on its own. I am paranoid about pool loss in the event of a power failure, hence the UPS and redundant PSUs (although not an ideal setup, as it turns out). I have a 1200W Corsair PSU in another box and it's whisper quiet. Perhaps there are better ways than redundant PSUs to guard against pool loss from power failure.

My environment is just a home office with a media server. There is no SLA :) However, preventing data loss is the priority (and I have a backup server in mind for the future but that will cost ££££).
 