New big build

Status
Not open for further replies.

McVit

Dabbler
Joined
Sep 20, 2014
Messages
18
Hello FreeNAS Forum!

I am about to build my first FreeNAS box, and I am going big.

Case: X-Case RM 424s - Home Server
USB stick: Corsair Flash Voyager 16GB USB3
Motherboard: SuperMicro MBD-X10SL7-F-O
CPU: i3-4160T
RAM: Kingston 8GB DDR3 1333MHz ECC CL9 1.5V x2

HDDs: unknown at this point
Controllers: IBM ServeRAID M1015 x2


So to the problem at hand:
1. I have a case for 24 HDDs. Will I be able to run all 24 disks with the built-in LSI controller of the motherboard plus the two M1015s? All IT-flashed, of course.

2. Disk recommendations/RAID setups?
I was looking at the WD Red 6TB disks, but I am a little worried that the density will kill the other disks during a rebuild. Thoughts? Turning to the pros.

Have a great weekend!
 

Z300M

Guru
Joined
Sep 9, 2011
Messages
882
I can't offer any advice concerning the questions you asked, but I cannot refrain from pointing out that Supermicro never recommended Kingston RAM for their motherboards, and even Kingston no longer has any 8GB modules they recommend for that motherboard.

I could be wrong, but I'd be surprised if 16GB of RAM would be sufficient for that amount of disk space if you want decent performance.
 

McVit

Dabbler
Joined
Sep 20, 2014
Messages
18

Thanks for the quick response, Z300M.
Regarding the RAM, I was deciding between some 8GB Samsung modules and the Kingston. I guess I'm going back to the Samsung modules then.
The plan, at least for now, is to get 16GB of RAM (2x 8GB) to get the machine started, and then add two more 8GB modules for a total of 32GB of ECC RAM.
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
Yeah, for the X10 you don't want Kingston.

Sent from my SGH-I257M using Tapatalk 2
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
You can run all 24 drives off the built-in LSI plus your two M1015s. You will need reverse breakout cables to use the 8 onboard LSI ports, since your backplane comes with mini-SAS (SFF-8087) connectors.
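The port count works out exactly. A minimal sketch of the arithmetic, assuming eight SAS/SATA lanes per controller (the onboard LSI 2308 and each IT-flashed M1015 provide eight lanes; the dictionary labels are just illustrative):

```python
# Eight lanes each: the X10SL7-F's onboard LSI 2308 plus two M1015s.
controllers = {"onboard LSI 2308": 8, "M1015 #1": 8, "M1015 #2": 8}

total_ports = sum(controllers.values())
print(total_ports)  # 24 -- exactly enough for a 24-bay case
```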

Don't sweat the 6TB drives; we'll all be there eventually. RAIDZ2 will be adequate protection during resilvering, but if you want to run RAIDZ3, nothing is stopping you. As for vdevs and layout... it is a trade-off of space for IOPS. Throughput is easy due to the spindle count; the network will likely hold you back.

I'd look at 4 x 6-drive vdevs in RAIDZ2 as pretty balanced: reasonable resilver times and a sufficient number of vdevs for performance. 3 x 8 RAIDZ2 is pretty nice and picks you up two more drives' worth of storage. Anything wider (e.g. 2 x 12) is a decision to optimize for space, IMHO. You might be fine with the performance, or you might find as the pool gets full that performance, resilvering, etc. become unacceptable. Personal choice. I wouldn't go that wide... but I am a go-fast guy. Two 12-drive RAIDZ3 vdevs would be the widest/slowest I would consider. I am disregarding "optimal" layout given compression.
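The space trade-off between those layouts can be roughed out with simple parity arithmetic. A minimal sketch, assuming 6TB drives and ignoring ZFS metadata, slop space, and allocation padding (real usable space will be lower):

```python
def usable_tb(vdevs, drives_per_vdev, parity, drive_tb=6):
    """Rough usable capacity: data drives per vdev, times vdev count.
    Ignores ZFS metadata, slop space, and padding overhead."""
    return vdevs * (drives_per_vdev - parity) * drive_tb

layouts = {
    "4 x 6-drive RAIDZ2":  usable_tb(4, 6, 2),    # 96 TB
    "3 x 8-drive RAIDZ2":  usable_tb(3, 8, 2),    # 108 TB
    "2 x 12-drive RAIDZ2": usable_tb(2, 12, 2),   # 120 TB
    "2 x 12-drive RAIDZ3": usable_tb(2, 12, 3),   # 108 TB
}
for name, tb in layouts.items():
    print(f"{name}: ~{tb} TB usable (before overhead)")
```

Note the 12TB edge of 3 x 8 over 4 x 6 is exactly the two extra 6TB data drives mentioned above; the wider layouts buy space at the cost of per-vdev IOPS and resilver time.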

Have fun. You'll have a helluva box if you fill that thing with 6TB drives. :)
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
If you're really looking at 24 x 6 TB disks, you might want to consider a motherboard and CPU that will support more than 32 GB of RAM. The 1 GB RAM / 1 TB capacity rule is far from set in stone, but with over 100 TB of capacity, a max of 32 GB just might not be enough.
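As a back-of-the-envelope check on that guideline (a rough starting point only, not a hard requirement):

```python
drives, drive_tb = 24, 6
raw_tb = drives * drive_tb       # 144 TB of raw capacity
rule_of_thumb_gb = raw_tb * 1    # "1 GB RAM per 1 TB" guideline
board_max_gb = 32                # X10SL7-F RAM ceiling

print(f"{raw_tb} TB raw -> ~{rule_of_thumb_gb} GB RAM suggested, "
      f"vs a {board_max_gb} GB board maximum")
```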
 

McVit

Dabbler
Joined
Sep 20, 2014
Messages
18
Yes, this has crossed my mind. The thought was to build a "cheap" and energy-conserving machine and put all the money towards disks. 24 disks of any size is a rather hefty lump of cash.

At 72TB of disk space (24 x 3TB), would 32GB of RAM still be taking a chance?

My storage space today is roughly 20TB and the need for expansion is big, but it might not be 24 x 6TB big.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Honestly, I don't know--I've not worked with a system that large, so I can't say what you can get away with. I'll note that E5 Xeons start around $200, and SuperMicro Socket 2011 boards (with 8 DIMM slots, supporting up to 512 GB of RAM) start at $280. The cost delta is about $200 compared to what you're suggesting. For all I know, you could do just fine with the hardware you're looking at, but I'd think more room for expansion in the RAM area would be a good idea.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
It is a pretty good point, considering you are around the $8k mark just for disks with 6TB units. Plus or minus a few hundred on a $10k build is neither here nor there. The sweet spot on my current drive spreadsheet is 3TB, considering cost only, not density.

However, my real guess is that you can and will build this out slowly. Typical home loads are media-based, which almost acts like cold storage. There is very little data that is accessed frequently enough to benefit from caching. So how much use is there for more RAM that will only be used as ARC? Almost none. Rules of thumb such as 1GB per TB have to assume that the load scales with the space. For many of us it will be a handful of devices, limited by 1GbE, whether we have 10TB or 100TB. Plus the access is spread across the whole pool and barely cacheable.

I wouldn't hesitate to fill the rest of my 24-bay case with 6TB drives or larger and manage it with my Haswell E3, but I'm not willing to drop $8k on drives at the moment, and couldn't fill it. If I were feeding ESXi, or many users, then 32GB is limiting. In fact it already has me ready to retire my E3 to a backup server and grab an E5 for more scalability. But not based on a rule of thumb that cannot take home workloads into account... we are using a freight train to move feathers in most cases.

In terms of strategy, I'd hold off on the E5 until DDR4 hits just a wee bit better price (Q2 2015?), and then, if we are considering scalability and future-proofing, we are on the next tier up. I'll probably get impatient and just pull the trigger... but my brain is telling me to be patient ;).

Admittedly, I don't see a $200 delta when jumping to E5, as the dual boards with better disk controllers and 10GbE often beckon. I always end up way over double... plus a whack more RAM. I do like danb35's math, though, if one has the discipline. Truth is, you will love this thing any way you slice it. It's a primo rig that most will only dream of.
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
At this point I would basically say try to avoid SAS expanders when using SATA drives. I need to get around to making a post explaining why... I'm just lazy, I guess.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
A better way to say that is to try to avoid using cheap SAS expanders with SATA drives. Many of us are using SAS expanders with SATA and have no ill effects.

One of the more recommended ones is the Intel 24-port expander. The dilemma with this purchase: you will pay about $250 to add 20 ports, whereas M1015s run about $100 each (and add 8 ports apiece). So it's not "the most cost effective," but it does leave you more free PCIe slots in case you ever need them.
 