trionic
Explorer
- Joined
- May 1, 2014
- Messages
- 98
Feel free to skip the waffle and go straight to this post's key question! For those with time or interest, here's the waffle:
A couple of years ago I built the following ZFS server:
Build’s Name: None!
Operating System/Storage Platform: FreeNAS-9.2.1.7-RELEASE-x64 (fdbe9a0)
CPU: Intel Xeon E5-1620v2 Processor 3.7 GHz LGA2011-0
Motherboard: Supermicro X9SRH-7F LGA 2011
Chassis: X-Case RM 424 Pro with SAS Expander & SGPIO Backplane
Drives: 24x Western Digital 4TB WD40EFRX Red in four RAID-Z2 VDEVs
RAM: 64GB ECC Registered RAM using 4x Hynix HMT42GR7MFR4C-PB 16GB DDR3 PC3-12800 RDIMMs
Add-in Cards: None
Power Supply: Zippy C2W-5620V 620W Dual Redundant PSU
Other Bits: 2x 3ware 8087-8087 Multilane cable
UPS: APC Smart-UPS SC 1500VA 230V
Usage Profile: NAS media and backup server
The server has been reliable and fast, and I have been very pleased with it. My original build thread is here; I have build photos and process notes to add to that thread.
However, it reached 85% capacity a couple of months ago, and I have been using random disks on another (non-ZFS) server for overspill. The time has now come to build a JBOD extension.
I have purchased most of the parts for the build:
Chassis: X-Case RM 424 Pro with SAS Expander & SGPIO Backplane
Drives: 24x Western Digital 3TB WD30EZRX Green (idle changed) in four RAID-Z2 VDEVs
Add-in Cards: Intel RAID Converter Board RCVT8788 JBOD Connector
Power Supply: ASPOWER R2A-DV0550-N 550W 2U EPS Redundant
Other Bits: SAS chassis external cable 8088-8088; SAS internal cables X-Case eXtra Value 8087-8087 Multilane cable 0.6m (too cheapass?)
In the future I will buy a cabinet to house all of my IT stuff, plus an APC 3kW 2U rack-mount UPS in place of the little SC1500i.
All of the components have been identified except for one: more memory.
Although the general recommendation for ZFS is 1GB of RAM for every 1TB of raw disk space, after discussions and reading on this forum it was clear that 64GB would be okay for 96TB. However, I think I'd be pushing my luck using 64GB with 168TB of disk, so a RAM upgrade seems sensible.
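For what it's worth, here's the back-of-the-envelope arithmetic behind those figures (just a sketch, using the drive counts and sizes from the two build lists):

```python
# Raw capacity of each chassis, from the build lists above
existing_raw_tb = 24 * 4   # 24x 4TB WD Red in the current server
jbod_raw_tb = 24 * 3       # 24x 3TB WD Green in the new JBOD chassis
total_raw_tb = existing_raw_tb + jbod_raw_tb

# Rule of thumb: ~1GB of RAM per 1TB of raw disk
suggested_ram_gb = total_raw_tb

print(existing_raw_tb, total_raw_tb, suggested_ram_gb)  # 96 168 168
```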
The X9SRH-7F supports up to 256GB of RAM across eight slots. From the manual:
Up to 256GB of memory are supported using ECC QR (Quad Rank or 4-Rank) registered DIMM technology at 1600/1333/1066/800 MHz.
With four slots occupied by the 16GB Hynix HMT42GR7MFR4C-PB RDIMMs and four slots empty, I can:
- (1) Keep the existing RAM and fit four more 16GB DDR3 1.5V 1600MHz Hynix HMT42GR7MFR4C-PB RDIMMs for a total of 128GB
- (2) Keep the existing RAM and fit four 32GB DDR3 1.5V 1600MHz RDIMMs for a total of 192GB
- (3) Ditch the existing RAM and fit eight 32GB DDR3 1.5V 1600MHz RDIMMs for a maximum of 256GB
(2) Allows expansion for the new chassis, with an option for a future upgrade (if needed). This is probably the most pragmatic choice.
(3) Is expensive and a bit mad but gives maximum future expansion potential.
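Spelling out the totals behind those three options (simple arithmetic against the eight DIMM slots):

```python
# Memory totals for the three options on the X9SRH-7F (8 DIMM slots)
existing_gb = 4 * 16             # four 16GB Hynix RDIMMs already installed

option_1 = existing_gb + 4 * 16  # add four more 16GB DIMMs
option_2 = existing_gb + 4 * 32  # add four 32GB DIMMs alongside the 16GB ones
option_3 = 8 * 32                # replace everything with eight 32GB DIMMs

print(option_1, option_2, option_3)  # 128 192 256
```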
The real question is: which RDIMMs? The preferred brands seem to be Hynix or Samsung. The Hynix DIMMs have served me well, and at this point I'd rather buy more Hynix than mix brands (just a non-evidence-based preference).
The original memory choice was easy, as Supermicro list 16GB DIMMs on their HCL and the Hynix units were the only ones I could source in the UK at the time. Supermicro do not list any 32GB DIMMs, but Hynix do make a DDR3, quad-rank, ECC, 1600MHz RDIMM: the HMT84GR7AMR4C-PB.
I can buy those for £110 each, including tax, from Scan when they receive new stock in January. That price seems way too cheap, but we'll see.
The key question is: will the HMT84GR7AMR4C-PB RDIMMs be compatible with the X9SRH-7F and E5-1620v2? I suspect so, but I wanted advice from the experts here before spending the money. There must be FreeNAS users here running >16GB RDIMMs on a Supermicro motherboard.
Thanks for your time and advice :)