First build, HGST 10x6TB Z2, Supermicro based

Status
Not open for further replies.

Alassiry

Cadet
Joined
Jan 17, 2015
Messages
5
FreeNAS aficionado friends,

I'd appreciate help confirming optimal hardware options for my FreeNAS build in progress as follows:

NAS requirements:
Extended redundant storage for a media library / torrent box / Plex, serving at most 3-4 concurrent streams at the moment, with possible (though unlikely) future expansion to up to 6 streams.

Pending technical feasibility, I would also like to be able to stream game CD Images (maximum of 1 at any point) from the server to an emulator box, if the term "streaming" is accurate in this usage.

At the moment, my main storage pool will be the 10 HDDs, but the chassis can take 5 more HDDs and I would want to be able to expand storage capacity if I ever need to do so.

Parts purchased so far:
Chassis: SuperMicro CSE-933T-R760B ( http://www.supermicro.com/products/chassis/3U/933/SC933T-R760.cfm )

SATA Interface Extension: IBM ServeRAID M1015

HDDs: 10x6TB HGST Deskstars

Parts pending discussion:

Motherboard:
This is basically the big decision, where I'm torn between:

1) Working with what seems to be the de facto standard build option, the LGA 1150 Supermicro X9/X10 SLM-F / SL7-based build, and being restricted to the maximum of 32 GB of RAM, or;

2) Going for a dual LGA 2011 socket class of motherboard (for which I'm having trouble finding reference builds) and gaining the overhead in RAM capacity and CPU performance in exchange for a higher overall build cost.

CPU:
In case of option 1 above, it seems Xeon E3-1230v3 is the standard option. In case of option 2, since I couldn't find an applicable reference build, I'm not sure which CPU would be optimal, aside from it probably being a Xeon E5.

RAM:
Similarly, compatible RAM off the QVL should be fairly simple to find once the motherboard is decided. I realize that in case of option 2 I'd be eyeing DDR4 and that would also change things a bit.

Budget:
There is nothing set in stone for this, but the general expectation was initially in the range of $1-2K USD. However, I have no problem exceeding this if there is a justified advantage for my case.

Misc:
I'm not sure whether I've forgotten anything else I'd need, and since this is my first build I fully expect to mess up some decisions, but I'd like to at least avoid the avoidable mess-ups, so I'd appreciate suggestions on any additional parts I'd need or want.

TL;DR version:
HGST 10x6TB Z2, Supermicro based build needs optimal choices for Motherboard, CPU and RAM with reasonable room for expansion, preferably in the budget range of $1-2K USD.

Thanks a lot in advance!

- Fahad
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
DP motherboards? Nah, if you're considering Xeon E3, UP Xeon E5 oughtta be fine for you (and significantly cheaper). You would have to choose between Ivy Bridge-EP with DDR3 or Haswell-EP with DDR4.

Of course, if you're planning on expanding only in a few years, it may be more cost-effective to go with a Xeon E3 now and another one in a few years, depending on the amount of expansion and how far away it is.
 

Alassiry

Cadet
Joined
Jan 17, 2015
Messages
5
Thanks Eric.

The only expansion I can foresee in the meantime would be more storage, if what I have gets saturated, and I anticipate that only in at least a few years. Should I take that to mean I shouldn't over-optimize for expansion, and instead go with a setup more or less fitted to my requirements and do a full revamp in the future if I need one?

If I got you right, should I be looking at UP Haswell-EP motherboards if I'm aiming for 64 GB of RAM with a setup of this capacity? I think I remember reading that cyberjock has a similar-capacity setup and that 32 GB of RAM wasn't very comfortable for his usage, which makes me hesitant to settle for 32 GB.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
If you feel limited by 32GB, an LGA 2011 platform is probably a better start.

Haswell-EP is still rather expensive due to DDR4 pricing:
  • X10SRi-F ~360 bucks
  • Xeon E5-1650 v3 ~670 bucks
  • 32 GB (4 * 8GB DDR4 ECC RDIMM) ~530 bucks (4 * 16GB for ~1000 bucks)
An interesting possibility is to buy just two 16GB RDIMMs now and buy two more at a later time. Of course, the board takes 8 DIMMs total, so you can still upgrade if you go with lower capacity DIMMs, without getting rid of the old ones.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
With a 15-bay chassis, the best you could do would be an 11-drive RAIDZ3 and keep the other bays for SSDs (for whatever reason). With 3x 5-drive RAIDZ1 on drives this big you'd be murdering your data. You could try a big 15-drive Z3, but that's really pushing it. If that isn't satisfying, the other 4 bays could hold striped mirrors for jails, temporary storage, whatever.

Regarding CPU choice: a UP Xeon E5 is good enough. Anything from the E5-1620 onwards (v2 or v3, according to the generation you want) is fine. Get 4x 16GB RDIMMs so you can upgrade to 128GB of RAM later. LRDIMMs of 32GB and up are still not really worth it, although I've spotted some LRDIMMs at only 10% more per GB than 16GB RDIMMs; those would give you a total of 256GB on an 8-slot board, which should be good for the next 5 years.
Also the newer DDR4/S.2011-3 systems come with 10x SATA onboard as standard.
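A rough back-of-the-envelope for the layouts discussed above (a sketch only; it ignores ZFS metadata and slop overhead and the TB-vs-TiB difference, so real usable space will come out lower):

```python
def raidz_usable_tb(disks, parity, size_tb=6.0):
    """Approximate usable capacity of one RAIDZ vdev, ignoring ZFS overhead."""
    return (disks - parity) * size_tb

# Layouts mentioned above, all with 6 TB drives
layouts = {
    "11-drive RAIDZ3":   raidz_usable_tb(11, 3),
    "15-drive RAIDZ3":   raidz_usable_tb(15, 3),
    "3x 5-drive RAIDZ1": 3 * raidz_usable_tb(5, 1),
    "10-drive RAIDZ2":   raidz_usable_tb(10, 2),
}
for name, tb in layouts.items():
    print(f"{name}: ~{tb:.0f} TB usable (before ZFS overhead)")
```

So the 11-drive Z3 and the OP's planned 10-drive Z2 land on the same raw usable space; the Z3 spends one more disk to tolerate a third failure.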
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
(all prices including 19% tax, roughly translates to USD without tax)
X10SRL-F for 280EUR
E5-1620 v3 (it's a quad-core, but still good enough; you need more RAM, not more CPU) for ~300EUR
4x 16GB Crucial (probably not on the QVL) for ~700EUR total, or 16GB Samsung (QVL) for ~180EUR each.

That's roughly 1300EUR for a 64GB Xeon E5 system. not 2000EUR.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
700 bucks for 4 16GB Crucial DDR4 RDIMMs? That must be the low-profile ones. The regular ones are significantly cheaper (the 530 bucks I mentioned).

I'm surprised at the price difference between the X10SRL-F and the X10SRi-F, they seem to be mostly identical (GbE from two i210s vs. a single two-port i350 and slightly different PCI-e port layout).
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
No, Low-Profile ones are double that price. AFAIK they're only used in Supermicro Microblades. 56x Xeon E5-2xxx v3 in 6U. Dat density.

Also note the 4x16GB, meaning 4 16GB DIMMs for a total of 700EUR. Dunno where you got your 4x8GB UDIMM prices from (they're less than 400EUR total), but I wouldn't buy those with a Xeon E5 anyway.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Never mind, I got them confused with the 8GB DIMMs. Crucial's store has the 64GB kit at 1000 bucks.
 

Alassiry

Cadet
Joined
Jan 17, 2015
Messages
5
@Marbus: My intention with the 10 HDDs was to run an 8+2 RAIDZ2 and leave the spare 5 bays for a suboptimal small 3+2 RAIDZ2 pool, or append a JBOD chassis or some other workaround when I do want to upgrade.

Since this is a personal-use system, the purchase of 10x6TB disks already took a chunk out of the virtual wallet, and I wasn't convinced of the need for triple parity for my type of data. Though my opinion might be swayed if I've missed some other aspect.

Also, thanks to both of you for the specific motherboard and CPU recommendations. I'll continue researching that line of motherboards and see what I end up with.

Is mention of online store recommendations allowed on this forum? I have no idea where all the international members get their server parts from, and would appreciate recommendations as well.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
For the US, your options are Newegg, Amazon (sometimes hit-and-miss with specific server parts) and Wiredzone (their Supermicro stock is great).

In addition to the country-specific Amazon sites, Germany-based sona.de would be the EU one-stop shop for Supermicro, though jacob-electronic.de probably has a bigger selection listed.

Well, you initially said you wanted a lot of redundancy, and Z3 is the highest RAIDZ level for now, so I thought you'd prefer that. But if you want to just add another 15-drive JBOD (I suppose they're pretty cheap), 10 disks shouldn't be the worst choice, since 3 such vdevs would then fit in there.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I think I remember reading that cyberjock has a similar capacity setup and that 32 GB RAM wasn't very comfortable for his usage, which makes me hesitant in settling with 32 GB.

You are correct. :)

I'm running an E3-1230v2 with 32GB of RAM and a RAIDZ2 of 10x6TB drives. I'm relatively disappointed because the performance is "good" (but not great) and seems to be slowly heading downhill as I put more and more data on the server. I'm eyeballing a new build, but I'm moving in a few months and I don't want to drop money on a dying platform. In particular, I'd like to go with DDR4 and socket 2011-3 but DDR4 is pretty freakin' expensive compared to DDR3 right now. ;)
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
Some things worth considering...

Going with a striped pool of 6x6TB + 6x6TB vdevs if you think expansion is in your future. That way you don't have to plop down cash for 10 more drives all at once, and your performance will be better.

If you're planning on purchasing all of your RAM now, maybe stick with the v2 E5 to save a few bucks. DDR3 prices are likely to climb in the coming years as DDR4 becomes more pervasive. Otherwise, take the hit now and gain the benefits of the newer platform.

Drives this large still scare the hell out of me, go z3 if you can.
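The capacity math behind the 6+6 suggestion can be sketched quickly (a rough estimate only; it ignores ZFS metadata/slop overhead and treats TB as decimal): two striped 6-drive RAIDZ2 vdevs give the same raw usable space as one 10-wide RAIDZ2, spending two extra disks on parity in exchange for a second vdev's worth of random IOPS and the ability to expand one vdev at a time.

```python
def raidz2_usable_tb(disks_per_vdev, vdevs=1, size_tb=6.0):
    """Rough usable TB of `vdevs` striped RAIDZ2 vdevs (2 parity disks each).
    Ignores ZFS metadata/slop overhead, so real figures come out lower."""
    return vdevs * (disks_per_vdev - 2) * size_tb

print(raidz2_usable_tb(10))           # one 10-wide RAIDZ2: 48 TB from 10 disks
print(raidz2_usable_tb(6, vdevs=2))   # 6x6TB + 6x6TB: also 48 TB, from 12 disks
```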
 

Alassiry

Cadet
Joined
Jan 17, 2015
Messages
5
Thanks for all the suggestions.

So far, I'm looking at a narrowed down list of the following:

Mobo: X10SRI-F (~$288 at time of posting: http://m.newegg.com/Product/index?itemnumber=13-182-928 )

CPU: E5-2620 v3 (~$429 at time of posting:
http://m.newegg.com/Product/index?i...80&cm_re=Intel_Xeon_E5-_-19-117-480-_-Product )

RAM: SAMSUNG 16GB DDR4 ECC Registered x4 (QVL, ~$200 each at time of posting: http://m.newegg.com/Product?itemNumber=N82E16820147382&Keyword=M393A2G40DB0-CPB )

From the feedback I'm seeing, while not the best bang for the buck, these 2011-3 options seem like they would satisfy my future upgrade requirements and provide solid performance without being excessively pricey relative to my expected budget.

The 10 onboard SATA ports on 2011-3 boards mean I wouldn't need the HBA unless I'm adding more disks. Dual NICs + IPMI also seem nice to have.

Any other thoughts on this specific setup?

SuperMicro CSE933T Chassis | SuperMicro X10SRI-F | Xeon E5-2620 v3 | 64GB (4x16GB) Samsung DDR4 ECC | 10x 6TB HGST Deskstar in RaidZ2
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
One thing to consider is the processor. You have a single-socket board, so the 1620 or 1650 gets you a much higher clock speed for similar money. The extra cores aren't worth it unless you have money to burn; Samba likes clock speed. The 2600s are slower until you spend serious money. The 1620 v3 (4 cores, 3.5GHz) is $316. The 1650 is a pretty sweet bang for the buck in terms of $/cycle if you want to spend a bit more, and gets you the extra cores plus the higher clock.

Even the X10SRL-F isn't a bad choice. The only difference I could see was the NICs, and the i210 on the cheaper version is known good. So not really a downside, imho.

It is very hard to go wrong with a Supermicro X10 2011-3 board. Your choices are fine; it's pretty much personal preference. I might even start with 32GB just to see how it runs, if I were you. You know you have the headroom, and in the meantime you keep the dollars in your pocket.

Looks great. Enjoy.
 

Alassiry

Cadet
Joined
Jan 17, 2015
Messages
5
Thanks for all the feedback earlier gentlemen, I ended up going with the following:

SuperMicro CSE933T Chassis | SuperMicro X10SRI-F | Xeon E5-1650 v3 | 64GB (4x16GB) Samsung DDR4 ECC | 10x 6TB HGST Deskstar in RaidZ2

As for progress, I only received the last part I was waiting for this past weekend and finished assembly two days later.

First attempt to POST led to a scary power on-off cycle where the fans would spin for a second and then the system would shut off. Funnily enough, after some troubleshooting and then turning to the internet, I found the solution in one of Cyberjock's responses to a similar case: I had left some unused chassis standoffs beneath the motherboard, and they were shorting it out.

I have now successfully booted and installed FreeNAS, but before committing anything to it I'm going through pre-commissioning tests (Memtest86+ followed by HDD burn-in testing, as suggested in the how-to).
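As a sketch, the burn-in pass over all ten disks might look like the following command list (the da0..da9 device names are assumptions; check yours with `camcontrol devlist`, and note that `badblocks -w` destroys all data on a disk, so it must only run before the pool exists):

```python
# Burn-in sketch: generate the test commands for each disk (da0..da9 assumed).
# badblocks -w is DESTRUCTIVE; run these only before any pool is created.
DEVICES = [f"da{n}" for n in range(10)]  # hypothetical FreeBSD device names

def burnin_commands(devices=DEVICES):
    """Return the smartctl/badblocks command lines to run per disk."""
    cmds = []
    for dev in devices:
        cmds.append(f"smartctl -t long /dev/{dev}")       # SMART extended self-test
        cmds.append(f"badblocks -ws -b 4096 /dev/{dev}")  # destructive write test
    return cmds

for cmd in burnin_commands():
    print(cmd)  # review the list, then run each by hand when you're sure
```

Printing the commands rather than executing them directly makes it harder to nuke the wrong disk by accident.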

A couple of questions:
My X10SRI-F has 10 onboard SATA ports, 6 of which go through the Intel C612's AHCI controller and the remaining 4 through the SCU on the same chipset. Both appear to support up to 6 Gbps, regardless of actual throughput. I also happen to have an M1015, which I have been able to cross-flash to LSI 9211-8i IT-mode firmware.

Would anyone know if there would be any advantage in using the HBA in this case instead of the onboard ports, such as to avoid a performance penalty by using the SCU ports for example?

Second of all, I noticed an option in the motherboard BIOS for DCU mode ("Set the data-prefetching mode for the DCU (Data Cache Unit). The options are 32KB 8-way without ECC and 16KB 4-way with ECC").

I understand that we prefer to build ZFS based systems with ECC support, but was unsure whether this option was desirable, preferable, detrimental or irrelevant.

Appreciate everyone's help throughout the build.
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
I have an X9-generation Supermicro mobo with a similar setup. Though I have no numbers to back this up, I have not noticed any performance difference on FreeNAS between the Intel ports (ICH or SCU) and the onboard LSI 2308 ports with spinning disks attached.

As to your second question, I'm not the most qualified to answer, but I would err on the side of caution and use the ECC mode.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Second of all, I noticed an option in the Motherboard BIOS for DCU mode ("Set the data-prefetching mode for the DCU (Data Cache Unit). The options are 32KB 8-way without ECC and 16KB 4-way with ECC").

I understand that we prefer to build ZFS based systems with ECC support, but was unsure whether this option was desirable, preferable, detrimental or irrelevant.

The question is: "What the hell is a DCU?"
 