BUILD First FreeNAS build -- Build instead of buying a QNAP box

Status
Not open for further replies.

audiophile20

Dabbler
Joined
Sep 21, 2016
Messages
11
First, thank you for all the information on this site for newbies like me. I have tried to learn from the stickies and the forum. I'm still reading, but I feel ready to order the hardware. I understand the specs may be a bit excessive, but it is mostly current tech, and in this case excess is fun :)

My questions on the build are listed below, along with the hardware list for your review and consideration. Thanks in advance for your comments and input!

Questions:
1. CPU - Any reason I should choose the E5-1650v3 instead? I am trying to trade off performance vs. TDP. I am not sure FreeNAS needs a 6-core CPU. Does it really matter? NOTE: The choice of memory will make a difference on the processor. Please see point 3.

2. Controller/HBA - I understand that you can buy used, but this is a 12Gb/s card and makes for a simple build; maybe overkill, but it simplifies things.

3. Memory - Will 64GB be enough, or do I need to move to a larger size? If I need more memory, then I will have to move to the LGA-2011 socket. How will performance be with 64GB as the maximum?

4. NICs - Use onboard NICs. I will upgrade to a 10Gb network eventually. Any problems with these NICs I should plan to work around?

5. RAID - Any RAID constraints I should be thinking about, or do I have the freedom to pick from the list? NOTE: Planning for a RAID-6-type parity array. Also, should I build multiple arrays rather than one large one? Based on your knowledge/experience, please comment on structuring arrays: size, type, etc.

Hardware:
- Norco 4224 (case purchased)
- Intel Xeon E3-1275 v5 Skylake 3.6 GHz, 4 x 256KB L2 cache, 8MB L3 cache, LGA 1151, 80W

- ECC 64GB (Max for motherboard. Samsung per Supermicro)
- Supermicro X11SAT-F Workstation - Intel C236 Chipset - Socket H4 LGA-1151
- LSI SAS 9305-24i, x8 lane, PCIe 3.0, full height, 12 Gb/s SAS HBA

- Using motherboard NICs
- HGST HDD (NAS Class) 4TB (24x)
- SanDisk 16GB Ultra Fit CZ43 USB 3.0 Flash Drive, (SDCZ43-016G-G46) for FreeNAS
- Other: Noctua cooler, a 750W PSU (recycled), and Norco SFF-8087 to SFF-8087 cables (6x)

Purpose:
- Serve as a FreeNAS box to store and serve media
- Primary storage/back-up for PCs on the network. I have a lot of data and need a central repository.
- All Win10 PCs on the network
- Expecting to load with Hitachi 4TB NAS drives (24 of them; will start with 12 and build out from there)

Objectives:
- Fast, reliable, 24/7 system - the NIC is the expected bottleneck on a 1 Gb network
- Remote management capabilities
- FreeNAS maintenance activities - faster is better
- Enough HW headroom (for added complexity)
- Planned size 24 x 4TB = 96TB (start with 12 disks and build to 24)
- RAID configuration - undecided; will need redundancy
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Well, going with a SuperMicro chassis with a SAS expander backplane would simplify your cabling and probably allow for a less-expensive HBA, but if you already have the Norco I guess that can't be helped. I'd say your CPU and RAM are both overkill unless you're planning on transcoding several HD video streams simultaneously--I'd probably drop the CPU down to an i3, and the RAM to 32 GB (though as 2 x 16 GB sticks, so you still have room to expand).

For disk layout, I'd suggest two six-disk RAIDZ2 (RAID 6 equivalent) vdevs in a single pool. You can then expand your pool six disks at a time.
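As a back-of-envelope sketch of what that layout yields (this ignores metadata/padding overhead and the TB-vs-TiB distinction, so real figures will run somewhat lower):

```python
def raidz2_usable_tb(disks_per_vdev: int, vdevs: int, disk_tb: float = 4.0) -> float:
    """Rough usable capacity: RAIDZ2 gives up two disks' worth of parity per vdev."""
    return (disks_per_vdev - 2) * vdevs * disk_tb

print(raidz2_usable_tb(6, 2))  # initial 12 drives as two 6-wide RAIDZ2 vdevs -> 32.0
print(raidz2_usable_tb(6, 4))  # grown to 24 drives (four vdevs) -> 64.0
```

So the full 24-drive build lands around 64 TB usable out of 96 TB raw, before ZFS overhead.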
 

audiophile20

Dabbler
Joined
Sep 21, 2016
Messages
11
danb35, Thank you so much for the quick feedback. I now regret not looking at a SuperMicro chassis! Any model that you might recommend? I agree with your CPU comment - will think about that one. And glad to hear that I can drop the memory down to 32GB. From what I was reading, I thought you would tell me to go higher :)

Maybe I will go for a SuperMicro chassis and sell mine on ebay or something!

Thanks for the advice on the RAID layout. I need to read up on vdevs next!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
64GB is overkill for a 96TB NAS?

Anyway, if 64GB is underkill then you might want to look at the E5. I have an E5-1650v4 in my Norco RPC-4224 with a Noctua cooler

I upgraded the noisy fans to Noctuas, and now it's as close to silent as you could expect :)

But it's designed to support transcoding and VMs. (Not so silent at 100% CPU load)

The SuperMicro board you've selected is a workstation board right? Any reason you didn't go for a server board with IPMI?

Mine is the X10SRi-F. It has 10 SATA ports and 8 DIMM slots, which means I can drive all 24 bays with the motherboard ports and two 8i HBA cards. I wanted the 16x PCIe slot for stupid-fast NVMe... one day :)

I've gone with 8-wide RAIDZ2
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I now regret not looking at a SuperMicro chassis! Any model that you might recommend?
The SC846-E16 series is probably what you want to look for. It's a 4U, 24-bay chassis, usually with dual redundant 1200-watt power supplies, and a SAS2 expander backplane. If you're lucky, you can find them for $500 or less on eBay. There's also the -E26, which has a SAS3 backplane, but those aren't nearly as readily available on the used market and therefore are pricier.

Another option (which is what I have) is the 847 series, which adds 12 more bays on the back of the chassis, giving you 36 bays in 4U of space. I found my complete server, minus the Chelsio NIC, for about $1200, but I think that was a lucky find--I haven't seen quite that good of a price since. But here's an example of a complete server on an 847 chassis, dual (older) Xeons, 64 GB of RAM, very suitable HBA, and IPMI (which is very handy for headless servers), for just under $1000. But I just realized you're in Australia, so what I'm finding on eBay here probably won't help you. Sorry.

I'll warn you, though, that the Supermicro rack chassis are noisy. If you're looking for a quiet system, they aren't for you. They are, however, engineered well enough that cooling isn't usually much of a problem, and that's important.

64GB is overkill for a 96TB NAS?
It's more than needed for a 32 TB NAS, which is what he's going to start with if he follows my recommendation for the layout of his initial 12 drives. As you get past 16 GB of RAM, the rule gets much looser, and nothing of what he's said about his use case suggests that it will be terribly RAM-hungry.
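For what it's worth, the oft-quoted community guideline (8 GB base plus roughly 1 GB per TB of pool) can be sketched as follows; keep in mind it's loose guidance rather than a hard rule, especially as pools grow:

```python
def rule_of_thumb_ram_gb(pool_tb: float, base_gb: int = 8, gb_per_tb: float = 1.0) -> float:
    """Loose community guideline for FreeNAS RAM sizing, not a hard requirement."""
    return base_gb + gb_per_tb * pool_tb

print(rule_of_thumb_ram_gb(32))  # 32 TB starting pool -> 40.0
print(rule_of_thumb_ram_gb(96))  # applied naively to the 96 TB raw figure -> 104.0
```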
 

audiophile20

Dabbler
Joined
Sep 21, 2016
Messages
11
Stux,

Mobo - I thought I was getting the server board!!! Well, I checked and it does have IPMI. Am I mistaken?

As to your point on memory, given the size of my box, I am not sure how much memory I really need; that was the point I was trying to make in the OP.

This board was of particular interest because it has a bunch of 16x PCIe slots. I thought they might be useful over the long run.

Interesting that you picked an E5-1650v4. What Noctua cooler are you running, and how are the temps?
 

audiophile20

Dabbler
Joined
Sep 21, 2016
Messages
11
danb35, thank you for the case suggestions. I will keep my eyes open. Thx for the clarification on the memory needs. That is helpful.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
It is a server board. It just has a bunch of stuff you probably won't be able to use (e.g., audio, Thunderbolt) and is missing some stuff (e.g., 8 SATA ports).

I use the Noctua NH-U9DX-i4 cooler
http://noctua.at/en/products/cpu-cooler-workstation-server/nh-u9dx-i4

It's the socket 2011 version of their 90mm cooler. A 120mm would be too tall... just.

It's great.

Flat out, with drives spinning and 12 threads of mprime, my CPU temps top out at 65°C. That's worst case.

Most of the time I have all fans running at 30%, and it's virtually silent.

I wrote a pretty robust fan controller script to minimize noise:
https://forums.freenas.org/index.php?threads/script-hybrid-cpu-hd-fan-zone-controller.46159/
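The core idea is just mapping a temperature band to a fan duty-cycle band. A minimal sketch of that idea (not the actual script; the thresholds here are made-up examples) looks like the following. On a Supermicro board, the resulting duty would typically be applied with ipmitool raw commands, which vary by board generation:

```python
def fan_duty(cpu_temp_c: float, low: float = 35.0, high: float = 65.0,
             min_duty: int = 30, max_duty: int = 100) -> int:
    """Linearly interpolate fan duty (%) across a temperature band."""
    if cpu_temp_c <= low:
        return min_duty
    if cpu_temp_c >= high:
        return max_duty
    frac = (cpu_temp_c - low) / (high - low)
    return round(min_duty + frac * (max_duty - min_duty))

print(fan_duty(30.0))  # below the band -> 30 (quiet floor)
print(fan_duty(50.0))  # mid-band -> 65
print(fan_duty(70.0))  # above the band -> 100 (full speed)
```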

I currently only have 8 Seagate 4TB NAS HDs, but the plan is to grow to 3x 8-wide, each in Z2. Each vdev addition will probably be slightly bigger.

Thus I went for the E5 for the memory, and the 1650 for the cores and single-core speed.

I currently have 32GB of RAM for 24TB usable (32TB raw), and that feels snug.

I think you'd want more than 64GB for more than 96TB.

But I don't know. Thing is, I've basically built/am building the same system as you, and I went E5 ;)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Been meaning to do a build report. Since the box is very close to going into production I should get around to it ;)
 

Keresian

Dabbler
Joined
Dec 27, 2013
Messages
16
Another option (which is what I have) is the 847 series, which adds 12 more bays on the back of the chassis, giving you 36 bays in 4U of space. I found my complete server, minus the Chelsio NIC, for about $1200, but I think that was a lucky find--I haven't seen quite that good of a price since. But here's an example of a complete server on an 847 chassis, dual (older) Xeons, 64 GB of RAM, very suitable HBA, and IPMI (which is very handy for headless servers), for just under $1000. But I just realized you're in Australia, so what I'm finding on eBay here probably won't help you. Sorry.

Sorry to kind of hijack this. The server you linked here... do you think, even though it's an older board/Xeon, it would support transcoding for Plex as well as storage? I know it's an X8-series board, and most folks recommend X10-series boards.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
It's a Westmere-EP Xeon. This is from before Intel got on their power-saving kick.

IIRC, Westmere-EP has roughly 50% worse clock-for-clock performance than the state of the art, which is not bad. My biggest concern is that these chips basically don't understand the concept of idling.

The i7 generations: Nehalem, Westmere, Sandy Bridge, Ivy Bridge, Haswell, Broadwell, Skylake, Kaby Lake

It was Haswell where idle power usage dropped to the point where PSUs had to be replaced with ones that would still work with a CPU drawing very little power ;)

Westmere was a die shrink of Nehalem with very little difference, except they brought out 6-core CPUs. I actually have 3 of these: dual X5680s (turbo to 3.6 GHz) in a 2009 Mac Pro (24 threads... great machine), and an i7-990X running at about 4.4 GHz, also a great machine. Both machines are reaching the end of their lives because they have become IO-bound.

The E5645 is a 2.4-2.67 GHz part. That's a bit slower than the X5680 or 990X.

Here is a detailed look at the generations

http://www.nextplatform.com/2016/04/04/xeon-bang-buck-nehalem-broadwell/

Summary is, don't bother with anything pre-Nehalem (i.e., Core 2 Duo/Quad). They're limited by the front-side bus, and they don't have Hyper-Threading or Turbo Boost (IIRC).

Other limitations worth thinking about: the board won't support PCIe 3.0, DDR4, SATA3, etc.

You can find old benchmarks for video-type workloads if you look for the i7 equivalents of the Xeons. The E5645 is slower per core than the original i7-920.

I'm not 100% certain how good it would be for Plex. The plex guys might know ;)
 

Keresian

Dabbler
Joined
Dec 27, 2013
Messages
16
Awesome, thanks for the response; it helps out quite a bit. If I found a case like that and swapped in a better (i.e., the recommended) mobo/CPU, it would be great, I'm assuming?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
You certainly could swap out the motherboard for something newer, but the X8 board/CPUs/RAM would likely serve you well, if being a bit more power-hungry than newer gear. Benchmark-wise, dual E5645s rate a little above a single E3-1246 or E5-1620, so it's still going to have pretty respectable performance.
 

audiophile20

Dabbler
Joined
Sep 21, 2016
Messages
11
It is a server board. It just has a bunch of stuff you probably won't be able to use (e.g., audio, Thunderbolt) and is missing some stuff (e.g., 8 SATA ports).

.....

But I don't know. Thing is, I've basically built/building the same system as you, and I went e5 ;)

Thanks for the reply! I agree it looks like you and I are building the same system :) I will explore the LGA-2011 mobo as well. I was worried about the 64GB limit, and that was offset by the lower TDP. Now that you have your toes in the water with Noctua, I am thinking I will also go that route. The one difference I can see is that I will use the same controller I have been planning, which gives me more ports than I will know what to do with!!!

Keresian - no worries thanks for the question. Learned a bit more about the chassis.

danb35 - love the Vorlon image! Thanks again for your kind engagement!

I will post final draft of the equipment list based on the info that you all have been kind enough to provide.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
I was worried about the 64GB limit and that was offset by the TDP.
Modern Intel CPUs support idling, so the TDP isn't actually used very often. An E5 is still going to draw more power at idle than an E3, but it isn't going to be drawing 100 watts just for the CPU.
 

audiophile20

Dabbler
Joined
Sep 21, 2016
Messages
11
... An E5 is still going to draw more power at idle than an E3, but it isn't going to be drawing 100 watts just for the CPU.

This now opens up the E5 discussion. I will start reading up and post a revised HW list. Thanks!
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Modern Intel CPUs support idling, so the TDP isn't actually used very often. An E5 is still going to draw more power at idle than an E3, but it isn't going to be drawing 100 watts just for the CPU.

Yep. It only draws 140W if you're flogging it.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
On my current system, I see a difference of about 150 watts of power draw when starting up the BOINC jail. That's with dual E5s of a previous generation, rated at 115W TDP each. Assuming BOINC comes close to maxing out the CPUs (and I really don't know how safe/accurate that assumption is), that would mean they're drawing somewhere around 30 watts each at idle. Current-generation parts would probably help a bit, and of course a single chip should help a lot.
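To reconstruct that arithmetic (rough assumptions, not measurements; the 150 W delta is at the wall, so PSU losses mean the actual package swing is somewhat smaller, which pulls the idle estimate down toward the ~30 W figure):

```python
tdp_w = 115.0         # rated TDP per CPU
cpus = 2
load_delta_w = 150.0  # observed wall-power increase with BOINC maxing both CPUs

# If each CPU swings from idle up to roughly its TDP under full load:
idle_per_cpu_w = tdp_w - load_delta_w / cpus
print(idle_per_cpu_w)  # -> 40.0
```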
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Then you can add an SC847 JBOD expansion chassis (4RU and 45 drives) with an external SAS cable.
That might be kicking up the scope of the project a few notches... (but yeah, it's a neat idea. No room in my rack for one, but I still have 18 bays free, so it'll be a while...).

For @audiophile20, yeah, @Mirfster's pretty fond of the C2100s. They're an older-generation server (X8-equivalent), but new enough that they use DDR3 and don't have an FSB. As with the Supermicro box I linked earlier, they're going to draw a bit more power than the current stuff, but you get a lot of server for your money.
 