BUILD Any help with my build appreciated

Status
Not open for further replies.

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
Hi,

I'm currently rocking an N40L, but it's dying a slow and painful death. It was a good first home server. It's got 4x 3TB WD Green drives which have served me well for over 6 years (ZFS and monthly scrubbing indicate no errors).
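For reference, the monthly scrub is nothing fancy; a cron entry along these lines is all it takes (the pool name `tank` is a placeholder):

```shell
# Crontab entry: run a ZFS scrub at 03:00 on the 1st of every month.
# "tank" is a placeholder; substitute your actual pool name.
0 3 1 * * /usr/sbin/zpool scrub tank
```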

It's time to upgrade because the server crashes about once a month due to ECC errors, and honestly, it's a bit below spec. I'm using it for backup (about 4TB, want to add another 2TB), media, general storage, and so on. I've got a couple of ZFS partitions I rotate to offline disks via external USB enclosures, which are a bit slow.

I'm looking at building something fairly decent while also paying attention to cost. If it lasts me 6+ years like the last system did, it will be cost-effective in any case.

I'm going to bring my old WD drives forward, so that's 4x 3TB, and buy another 4-6 4TB WD Reds. I've got a single Samsung SSD 840 PRO for the boot drive and ZFS read cache, which I'd ideally like to keep using. I'm thinking I'll have 8-11 bays filled, with 1-2 bays for rotating backups (or perhaps continue to use the external WiebeTech USB enclosures; performance isn't really an issue).

So, after much mucking around, I've figured out the following:

- Motherboard: Supermicro X10SRH-CLN4F (or other variant with only 2 ethernet ports)
- Enclosure: Supermicro SC826TQ-R500LPB
- CPU: E5-1620 v4 (what is the difference between v4 and v3?)
- RAM: 32GB ECC, whatever is decently priced and cost-effective (recommendations?)

I have a few questions.

- How do I connect the motherboard to the drive backplane? Via SFF-8087 connectors? The motherboard advertises 8 ports on the integrated SAS controller, so do I need 2x SFF-8087 cables from the motherboard to the backplane to drive 8 drives? Do I also need to connect the SATA connectors to the backplane? What is the correct way to do this? I'm assuming I'd need 2x SFF-8087 <-> SFF-8087 cables plus something else for SATA?

- I'm also considering the motherboard variant without SAS (X10SRL-F). If I went with this option and bought a separate card with, say, 4x SFF-8087 connectors (advice please?), would that make more sense than using the controller built into the motherboard?

- I'd like to continue using the Samsung 840 PRO as an OS drive; it's been super reliable. Additionally, I might consider getting a second one and using them as a write cache. Can I mount the 2.5" drive in a 3.5" bay, or is that a bad idea? What should I do here? I notice the Supermicro motherboard has "SuperDOM" ports, which look like they take a proprietary flash drive I could use for the OS. Can I use the SuperDOM ports as regular SATA ports for my 840 PRO? Advice/suggestions welcome.

Thanks everyone; I really appreciate any time/support/ideas/help you can give me.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
How do I connect the motherboard to the drive backplane?
With that chassis/backplane (ick), you'd need two SAS breakout cables and four SATA cables to hook up all the bays in the chassis. I say "ick" because the -TQ backplanes expose a single SATA port for each bay, and it makes for a real rats' nest of cabling. It's not awful in a 2U chassis, but even there it's pretty bad. Look for a -A, or better yet, a -E16 chassis for much improved cabling. I'm also thinking the 500-watt power supply would be a little light for up to 12 drives and a Socket 2011 board.

Edit: See @jgreco's writeup on all things SAS here for more information about the backplane and cabling options.

I'd like to continue using the Samsung 840 PRO for an OS drive, it's been super reliable.
Certainly no problem with doing so; the demands on the boot device are pretty modest. The SuperDOM ports will work fine as plain SATA ports, but will also power a Supermicro SATA DOM, which is handy but expensive. A DOM is pretty much a tiny SSD that plugs directly into a motherboard SATA port. They're convenient (no issue of where or how to mount them, or even power them in this case), but you can get a standard SSD much cheaper for the capacity.
 

ChriZ

Patron
Joined
Mar 9, 2015
Messages
271
I believe the cables you'll actually need with this motherboard are SFF-8643-to-SATA breakout cables, not SFF-8087-to-SATA.
So, with this chassis you will need two of those plus 4 SATA cables for the extra 4 disks.
It should work, but as @danb35 said, it will be messy.
Regarding SATA DOMs, you can find cheaper ones than Supermicro's, e.g. Innodisk, but still, an SSD will be cheaper if you find a nice place to mount it.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
CPU: E5-1620 v4 (what is the difference between v4 and v3?)

v3 is Haswell-E
v4 is Broadwell-E

They are quite similar. v4 has a 200MHz higher boost clock, and other models have a higher base clock as well. v4 supports DDR4-2400 RAM versus v3's max of DDR4-2133. And v4 supports TSX instructions, which will be irrelevant to you.

And v4 supports up to 1.5TB of RAM.

So if you're looking at the 1620, you're looking at a 4-core CPU. Why not look at an E3-1230 v5/v6 CPU on an X11 LGA1151 motherboard?
 

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
Is X11 with LGA1151 a better option? Lower price? Better performance?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Is X11 with LGA1151 a better option? Lower price? Better performance?

Lower price. Similar performance. 64GB RAM limit instead of 1.5TB. Maximum of 4 cores vs 22.

But since you're only looking at 4 cores anyway, the ram limit is the one that matters.
 

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
Thanks everyone for your feedback and help so far.

So, after sleeping on it and doing some more research, and listening to the awesome helpful feedback here, I'm now leaning more towards:

- Motherboard: Supermicro X10SRH-CLN4F [$750NZD]
- Enclosure: Supermicro SC836BE1C-R1K03B [$2500NZD]
- Cable: CBL-SAST-0568 (SFF-8643 <-> SFF-8643) [$???]
- CPU: E5-1620 v4 [$500NZD]
- RAM: 2x Crucial DDR4 PC4-19200/2400MHz ECC Registered CL17 16GB (CT16G4RFS424A) [$200NZD each]

Questions:

- I can't find any ram on the recommended list in NZ. Does anyone have any suggestions or ideas regarding the above Crucial memory?

- Does the enclosure come with the right cable or do I need to buy it? I can't find that cable available in NZ but I can check directly with the supplier.

- Can I connect BOTH SFF-8643 ports on the MB to the backplane for increased throughput?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
Oh, yeah, sorry, I just copied that verbatim, but yes I did read about it. Thanks so much for the follow up :)
 

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
Okay, so here is my final proposed build:

- SuperMicro SC826BE1C-R920LPB
- SuperMicro MCP-220-82616-0N (rear disk enclosure).
- SuperMicro X11SSL-CF
- Intel Pentium G4560
- 2x Crucial 16GB ECC DDR4 2400MHz CT16G4RFS424A
- LSI00403 Cable SFF8643 <-> SFF8643
- WD Red 4TB x6 (+ existing 4x WD Green 3TB)

Goals - lots of storage for ZFS, low power use, future expandability (could double RAM, install a Xeon).

Final question:

- Can I connect BOTH SFF-8643 ports on the MB to the backplane for increased throughput?
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Okay, so here is my final proposed build:

- SuperMicro SC826BE1C-R920LPB
- SuperMicro MCP-220-82616-0N (rear disk enclosure).
- SuperMicro X11SSL-CF
- Intel Pentium G4560
- 2x Crucial 16GB ECC DDR4 2400MHz CT16G4RFS424A
- LSI00403 Cable SFF8643 <-> SFF8643
- WD Red 4TB x6 (+ existing 4x WD Green 3TB)

Goals - lots of storage for ZFS, low power use, future expandability (could double RAM, install a Xeon).

Final question:

- Can I connect BOTH SFF-8643 ports on the MB to the backplane for increased throughput?

If the backplane is an expander, then yes.

What storage layout were you planning?
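For a rough sense of what dual-linking buys you, assuming SAS3 at 12Gb/s per lane and 4 lanes per SFF-8643 port (real-world throughput will be lower):

```python
# Rough aggregate line rate for one vs. two SFF-8643 links (SAS3).
# Assumes 4 lanes per port at 12 Gb/s each; actual throughput is lower.
LANES_PER_PORT = 4
GBPS_PER_LANE = 12

single_link_gbps = LANES_PER_PORT * GBPS_PER_LANE  # one cable
dual_link_gbps = 2 * single_link_gbps              # both ports wired up

print(single_link_gbps, dual_link_gbps)  # 48 96
```

Either way, spinning disks won't come close to saturating even a single link.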
 

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
If the backplane is an expander, then yes.

What storage layout were you planning?

My plan is to put 6x 4TB WD Red drives in a RAIDZ2 configuration, and 4x WD Green drives in RAIDZ1 (old pre-existing array). My goal is to migrate data from the old array to the new array, and use the old drives for rotating offline backups until they die.
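Back-of-the-envelope usable space for those two layouts (ignoring ZFS overhead, TB vs TiB, and reserved space, so real figures will be lower):

```python
# Usable capacity of a RAIDZ vdev: (disks - parity) * disk size.
# Ignores ZFS metadata overhead and TB/TiB differences.
def raidz_usable_tb(disks, size_tb, parity):
    return (disks - parity) * size_tb

new_pool_tb = raidz_usable_tb(6, 4, parity=2)  # 6x 4TB RAIDZ2
old_pool_tb = raidz_usable_tb(4, 3, parity=1)  # 4x 3TB RAIDZ1

print(new_pool_tb, old_pool_tb)  # 16 9
```

So roughly 16TB usable on the new pool plus 9TB on the old one, which leaves plenty of headroom over the ~6TB I need today.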
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
My plan is to put 6x 4TB WD Red drives in a RAIDZ2 configuration, and 4x WD Green drives in RAIDZ1 (old pre-existing array). My goal is to migrate data from the old array to the new array, and use the old drives for rotating offline backups until they die.

Best to make sure they're separate pools then.
 

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
Yes, I have an existing N40L with a single pool running Arch Linux, and I'll be moving that in its entirety to the new enclosure.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Did I miss it? Did you ever state what the server will be used for? I ask because while there are a lot of hardware recommendations, I don't understand what you will be using this system for. I understand it will be for backups, but is that it? Why do I ask? Because the system should be designed around the expected use.
 

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
I need about 6TB for backups and another 4TB for media. I'd like to have a decent amount of headroom, for additional backups.

I want to start making offline backups, so I figure having 2-4 spare bays will allow me to use some of my existing disks for that purpose.

I ingest about 10-100GB of backups per week from VPS systems around the globe. I've got a 1000/500 fibre connection, but speed isn't a huge concern.

I occasionally use the system for development purposes, e.g. nginx, Passenger, Ruby, etc.

I occasionally run a Minecraft server for friends, but with fewer than 8 players at a time.

The file server will be running Arch Linux, ZFS and primarily accessed via Samba.

I have an existing OS drive (Samsung 840 Pro) which I hope to drop in and boot from. It's already GPT-partitioned, although I'm using old-fashioned BIOS boot in the N40L (it's a bit of a hack).
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
So it sounds like the Minecraft server is the heaviest load, in which case the components listed don't sound like overkill.
 

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
TBH, I haven't run MC for years. With LGA1151 I could upgrade the CPU to something a bit better in the future. 32GB of RAM is 4x what I currently have, which is barely sufficient but workable. I could take that to 64GB in the future if it made sense.
 

ioquatix

Dabbler
Joined
May 9, 2017
Messages
48
The funny thing is, unless you go the Xeon route, the i3 (with ECC support) is only about 5-10% better performance-wise.
 