BUILD Supermicro X10SRi-F + Norco RPC-4224 Build


Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I have since completed this build. Check out the build report here
--

I'm in the process of building a new storage server. Although I've done a fair bit of research and I think I have a fairly complete parts list here, I do have some questions, and if I've made any obvious mistakes I'd like some advice :)

(specific questions at the bottom)

I expect this system to have a 5-10 year lifespan. We've been using FreeNAS as a backup target, and it's time to move it up our storage hierarchy. We've been well pleased with it, and we're contemplating deploying servers in jails or VMs on the FreeNAS host hardware/system.

And system noise is a concern, as the rack is in an office, hence the 120mm fan wall and the Noctuas.

Current specifications:

Chassis: Norco RPC-4224 + Rails (Received)

I priced a number of options, and it turned out that a 24-bay unit from Norco was the best option, taking into account current drive needs and possible future drive expansion.

I ordered the 120mm fan wall and OS drive bracket too, thinking the chassis came with only the 80mm fan wall and no OS bracket.

The chassis actually arrived with a 120mm fan wall, three 120mm fans, the OS bracket, and two 80mm fans installed, plus the 'spare' 120mm fan wall and OS bracket I'd ordered.

Which means I get to return the redundant fan wall and OS bracket!

Motherboard: Supermicro X10SRi-F (Ordered)

I would like the ability to upgrade the CPU beyond 4 cores if necessary, support for multiple high-bandwidth PCIe devices (including at least one x16 slot), and plenty of memory expansion beyond 32/64GB. The i350 NIC is a bonus, and this motherboard is the one that was actually available in AU. Future expansion options include HBAs, 10GbE, PCIe NVMe SSDs, and possibly even a GPU.

Previous systems have reached their end-of-life either because of RAM capacity or PCIe2 bottlenecks.

CPU: Xeon E5-1620 v3 (or v4) (not ordered)

For our current workloads, I believe single-core performance is of primary concern, but we also think we may virtualise more systems in the future. By the time core count becomes a problem, I figure many-core E5 v3/v4 Xeons should be available on the used market at a significant discount to their current price. In the meantime, a 4-core/8-thread 3.5GHz+ Xeon should do.

I'm thinking of going with a 1650 for the extra 2 cores. It's double the price for 50% more cores.


Cooler: Noctua NH-U9DX i4 (not ordered)

I believe this is the best/quietest cooler which will fit in the enclosure. The Noctua NH-U12DX i4 needs a clearance of 158mm, and this chassis only provides 155mm I believe. The NH-U9DX i4 is Noctua's recommended 4U narrow ILM cooler and has a height of 125mm.

RAM: Crucial 32GB ECC Registered PC4-19200/2400MHz (not ordered)

Either 2 x 16GB or 1 x 32GB? Crucial seems to be the only reasonable non-Kingston option in Australia. I'm not sure if a single RDIMM is a valid configuration. 2x16 is marginally cheaper and would provide dual-channel bandwidth. I figure we'll grow to 4x16, and then if we need more than 64GB we can grow a couple of 32s at a time before replacing the 16s (which would be repurposed). We're at the start of DDR4's life cycle; I expect DDR4 to get cheaper with time, and 16 and/or 32GB ECC RDIMMs to remain useful components for repurposing in the future.

The Noctua cooler leaves 35mm for the DIMMs. The standard DIMM form factor is 32mm, so this should be okay.

Boot: Dual Cruzer Fit 16GB USB3.0 (Ordered)

Have been happy with the Cruzer Fits on our current backup system. In USB2 they perform at 40MB/s; in USB3 they get about 160MB/s. Essentially they provide a pair of front-mount hotswap boot disks.

HBA: none

Will use 2 breakout cables (received) to connect 8 of the motherboard's 10 SATA3 ports. If we decide to grow further I can acquire some HBAs. Is there such a thing as an LSI card which supports 16 bays?

HDs: assorted 1.5, 2 and 3TB drives of various grades (Red, Red Pro, RE4, Green) (Repurposed)

I have a number of smaller RAID5s in use. Will be decommissioning a few, and using their drives initially.

Will either replace with or add larger drives in future.

I'm aware that you can't add a drive to an existing RAIDZ vdev in ZFS, and that if you want to reshape the pool you need to back up and restore it.

I'm aware that a RAIDZ vdev will only expand to the largest size common to all member drives (i.e. capacity is limited by the smallest drive).

I'll only be considering RAIDZ2 as I don't enjoy nervous RAID5 rebuilds.

Either 6- or 8-disk vdevs in RAIDZ2 is where I'm thinking.
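
Back-of-the-envelope, the raw capacity trade-off looks like this (a rough sketch ignoring ZFS overhead, padding and TB vs TiB; the drive mixes below are just examples, not my actual layout):

# Raw usable space of a RAIDZ2 vdev: two drives' worth of parity, and capacity
# is limited by the smallest member drive until every member has been upgraded.
def raidz2_usable_tb(member_sizes_tb):
    return (len(member_sizes_tb) - 2) * min(member_sizes_tb)

# e.g. a 6-wide vdev built from repurposed 2TB and 3TB drives:
print(raidz2_usable_tb([2, 2, 3, 3, 3, 3]))  # 8 TB raw, until the 2TB drives are swapped out
# versus an 8-wide vdev of matched 3TB drives:
print(raidz2_usable_tb([3] * 8))             # 18 TB raw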

SSDs: none

If I determine L2ARC is needed, then I intend to purchase a PCIe NVMe SSD. Intel or Samsung?

We use iSCSI, so if I determine a SLOG is needed, then I'm looking at some sort of Intel SSD with power-loss protection (PLP). There should be two SATA ports left over, and the OS tray in the chassis can take two 2.5" SSDs.

I like the idea of using dual U.2 SSDs for both SLOG and L2ARC (partitioned, then mirrored and striped respectively). Is there such a thing as an x8 to dual U.2 PCIe adapter card?

PSU: tbd.

My current plan is to use a spare ATX PSU I have on hand to determine the 'base' load without HDs, then add 24W x 24 (i.e. 2A startup draw at 12V for all drive bays) to that base load, plus 10%. Then I'll decide if I should use a pre-existing PSU or obtain a suitable high-quality one.
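
Roughly, the sizing arithmetic would look like this (the 2A/24W-per-bay spin-up figure and the 200W base load are placeholders until I measure the real numbers):

# PSU sizing sketch: measured base load (no HDs) + worst-case spin-up for all 24 bays + 10%.
BAYS = 24
SPINUP_W_PER_BAY = 12 * 2.0   # assume ~2A startup draw on the 12V rail per drive

def psu_estimate_w(base_load_w, margin=0.10):
    return (base_load_w + BAYS * SPINUP_W_PER_BAY) * (1 + margin)

print(psu_estimate_w(200))    # a hypothetical 200W base load suggests ~854W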

How much extra does an HBA draw?
What about a PCIe SSD?
A 10GBase-T card?

UPS: 5U APC Smart-UPS 5000VA (Existing)

The above is awesome btw. I acquired it for $500 10 years ago. Still going strong... Runs an entire office and servers at only 20-25% load. Provides 4 hours of runtime. Needs a 32A hard-line!

Backup: Replicate to another FreeNAS (Existing)

Backup system does not support ECC unfortunately.

Offsite Backup: tbd.

Currently a form of rsync is in use, with periodic on-site replication when a large changeset needs to be propagated. I would like to use offsite replication to a new home-based Mini-ITX FreeNAS/Plex system in order to avoid the rsync 'scan'. The replication would need to be tolerant of dropped connections. This will be investigated further once the new storage server is commissioned.

----

Questions:

RAM: Should I use 2x 16GB or 1x 32GB?

CPU: Should I stump up for the 6 core Xeon instead of the 4 core?

HBA: Should I get 2 HBAs or 1 HBA for the additional eventual 16 drives?

PSU: Is my sizing approach correct? Should I take into account the potential future HBAs/NICs, or will that easily come out of the 10% buffer? I don't think a redundant PSU is worth it.
 

Dice

Wizard
Joined
Dec 11, 2015
Messages
1,410
(if you have any key questions in mind beyond "any tips" it might help to highlight them/summarize them at the end)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
(if you have any key questions in mind beyond "any tips" it might help to highlight them/summarize them at the end)

Thanks for the tip, OP modified
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Motherboard arrived. I've now ordered the RAM (2x Crucial 16GB DDR4-2400 ECC RDIMM, CT16G4RFD424A) and the Noctua cooler, the NH-U9DX i4.

I was looking into the HBA situation. All the advice seems to be that the best option is to go with an IBM M1015 (x2), but there is a strong risk of Chinese fakes.

I can get M1115 server pulls (in Australia) for literally half the cost of an M1015 of unknown authenticity from China via Hong Kong.

Is the M1115 a fine substitute for an M1015?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Some updates:

The Noctua and USB drives have arrived.

Ordered the Crucial 16GB x 2 RDIMMs

Ordered a Xeon E5-1650 v4. Looking at the total BOM, I decided the incremental cost over the 1620 was worth it for 50%+ extra CPU performance. The v4 was only marginally more expensive than the v3.

Ordered 8 x 4TB Seagate NAS HDDs. I debated this for some time, as I had some very bad experiences with Seagate's 1.5TB drives... losing 90% of them, and the 3TB drives, I hear, were just as bad, if not worse. The 4TB drives seem to be doing well though, and for the same price I could only buy 6 WDs, or even fewer HGSTs.

Ordered an IBM M1115 ServeRAID HBA off eBay. This will take care of the next 8-disk vdev.

Still need to determine what length SFF-8087 cables to order to enable neat cable routing :)

And then there is the PSU. Thinking 950-1050W
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
850W or 1000W?


Datasheet Research:
The X10SRi-F draws 15.3W according to Supermicro (link) (I find this hard to believe, actually)
DDR4 DIMMs use 6W per 16GB (link)
The IBM M1115 uses 9W (link)
The Intel X540-T2 uses 13.4W (link)
The Intel E5-1650 v4 TDP is 140W (link)
The Intel 750 PCIe SSD uses 22W max (link)
Seagate NAS HD 8TB drives use 9W on average and 2.0A spin-up (24W?) (link)
Fans: (link)

Couldn't find any information on the Cruzer Fit USB3 ;)

Max Configuration (in the future):
Motherboard = 15.3W (I guess this is a key value; if it's out by a factor of 20... then everything is off)
CPU (E5-2699 v4) = 145W
RAM (256GB) = 96W
HBA (x2) = 18W
10GigE = 14W
SSD = 22W
Fans = 25W
USB Boot = 10W?
Total = 345W

It will be many years before this server grows into its max config, if at all.

HDs: 9W x 24 = 216W average power usage

Thus, at load, the maximum configuration = 561W; with an additional 25% margin, that's 702W. The 80% sweet spot on an 850W PSU is 680W.
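
To make that tally easy to re-check, here's the same arithmetic as a throwaway sketch (all figures from the datasheet links above; the 15.3W motherboard number is the suspect one):

# Max-config steady-state budget from the datasheet figures above (watts).
components = {
    "motherboard": 15.3, "cpu_e5_2699_v4": 145, "ram_256gb": 96,
    "hba_x2": 18, "10gige": 14, "ssd": 22, "fans": 25, "usb_boot": 10,
}
steady_w = sum(components.values())      # ~345W without drives
hd_avg_w = 9 * 24                        # 24 drives at ~9W average
full_load_w = steady_w + hd_avg_w        # ~561W at load
print(round(full_load_w), round(full_load_w * 1.25))   # ~561W, ~702W with 25% margin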

The server will most likely spend a large amount of time idle. We do transcoding, compiling, and moving very large files, but when that's not the case, it'd be idle.

Now, I think it's unlikely that all HDs will spin up at the same time, but suppose most of them did...

If I take an 850W PSU and subtract the max expected future load (without HDs), that leaves 505W to spin up the drives. At 24W per drive, I can simultaneously spin up 21 drives, or at 36W, 14 drives.

With a 1000W PSU, there'd be 655W, i.e. 27 simultaneous drives at 24W, or 18 at 36W.

Alternatively, another way of looking at it is that most 850W PSUs have a 70A+ 12V rail, and the spin-up draw is *only* 48A for 24 drives at 2A each.
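
And the spin-up headroom worked the same way (the 24W vs 36W per-drive figures being the uncertain part):

# Simultaneous spin-ups that fit once the ~345W non-HD load is covered.
STEADY_W = 345
for psu_w in (850, 1000):
    headroom_w = psu_w - STEADY_W                      # 505W / 655W left for drives
    for spinup_w in (24, 36):
        print(psu_w, spinup_w, headroom_w // spinup_w)
# 850W: 21 drives at 24W, 14 at 36W; 1000W: 27 at 24W, 18 at 36W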


Thus, will an 850W be sufficient? Or should I step up to a 1000W? Or is 850W too much and I'm looking at this wrong?

Do my figures sound right?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
In that 505W, you're assuming every last watt is available.

680W at 80% load isn't the sweet spot on an 850W PSU... it's fairly heavily taxing the PSU.

The real question is how likely you are to be happy in 10 years when the PSU components have aged and are no longer in prime condition, and you try to boot the machine and it zips right up to around 800 watts trying to spin everything, and the magic smoke that makes your computer run comes out.

Generally speaking, I think you're looking at the right numbers-ish, but where possible and practical, I suggest avoiding that and using a derating strategy instead; see the discussion in https://forums.freenas.org/index.php?threads/proper-power-supply-sizing-guidance.38811/ One of the shocking things is that though the vendors might SAY 2 amps to spin, it can be higher than that. Also, you have to remember that that's in ADDITION to the existing current draw to run the electronics, so you need to budget 24 watts to spin PLUS whatever the idle draw for that drive is.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Okay, so I think the conclusion is I *need* 850W, but I should go for 1000W

If in 10 years' time the magic smoke came out, I think I'd be sad that not getting a 1000W PSU originally was what finally brought 'old faithful' down.

(As I say, looking at the 12-year-old 1U dual G5 Xserve that used to have 750+ days of uptime, which I've still got in the rack because it looks good, even though the PSU blew a few years ago and wasn't economical to replace :()

Ironically, this server is being built to replace the hasty hodgepodge of solutions that were put in place when that Xserve unexpectedly quit.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Thinking of going for the Corsair RM1000x. It's highly efficient at low loads, seems affordable compared to the Seasonics etc. (where I am), and actually has the right distribution of 4-pin Molex connectors that I need. 7 year warranty too.

Single 12V rail of 83A, plus 25A each for 3.3V and 5V.

And I don't need the Windows doodads that the RMi offers.

Quiet too.

https://www.techpowerup.com/reviews/Corsair/RM1000x/6.html
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Usually a manufacturer having a long warranty period on a PSU is a good thing. PSU technology doesn't really get much cheaper as time passes.

Samsung, for instance, provides a 10-year warranty on some of their SSDs, but if one of those fails in 5 years, a 500GB SSD that costs $250 today will only cost $50 then. :smile: It isn't clear whether they're confident in their product or in their ability to replace it cheaply.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
7 year warranty too.
They might have a 10-year warranty now. It's the new trend. Seasonic's new lines are going to be differentiated, in part, by warranty length, whereas currently the G-Series gets 5 years and X-Series/Platinum gets 7 years (the old premium warranty length, until a few months ago).
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
They might have a 10-year warranty now. It's the new trend. Seasonic's new lines are going to be differentiated, in part, by warranty length, whereas currently the G-Series gets 5 years and X-Series/Platinum gets 7 years (the old premium warranty length, until a few months ago).

I did look at the Seasonic Prime etc (not available here)

Good news is the RM1000x seems to have a 10 year warranty now :)

http://www.corsair.com/en-au/rmx-se...t-80-plus-gold-certified-fully-modular-psu-eu

The Seagate NAS HDs arrived with a 5 year warranty instead of the expected 3 year too.
 