BUILD My 1st FreeNAS build; noob from Windows

Status
Not open for further replies.

FATeknollogee

Dabbler
Joined
Feb 10, 2015
Messages
20
Noob from the Windows world.
So I'm finally "thinking" about pulling the trigger on my first FreeNAS build.
I have been reading @cyberjock's material and signing up for webinars by @Linda Kateley.
Looking for some community critique on the hardware.

*Notes*:
parts marked *xx* are parts I already own
the remaining parts are ones I would have to purchase

The Build :
Chassis: SuperMicro *CSE-216E26-R1200LPB* or *SC836E26-R1200*

JBOD: *LSI 620 J* (may not use)

Mother Board: Supermicro X9SRH-7F or *X9DRi-F* or *X9DR3-LN4F+* or *X9DRD-7LN4F-O*

CPU: Intel Xeon E5-1620v2 or *E5-2620*

RAM: Enough to fill all DIMM slots, using 16 GB SAMSUNG *M393B2G70BH0-CK0* sticks - ECC DDR3 1600

PSU: Built in redundant power supply in Chassis

Boot Device: *Samsung SATA SSD* (x2 mirrored) or SATA DOM

HDD: Seagate *ST91000640SS* or WD or HGST NAS HDD (size TBD)

SSD: STEC SAS SSD *Zeus Z16IZF2E-200UCU* or *Z16IZF2D-400UCM*

UPS: *Eaton 5PX2200RTN* + *5PX Extended Battery modules*


Any advantage going with the higher clock speed of the 1620 vs the dual 2620's?

I still have a lot of learning to do, but I can handle critique no problem.

How does this system look?
Have I missed anything?

Feedback/Comments are much appreciated.

Thanks
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
Usually we see n00bz trying to make FreeNAS work on grossly inadequate hardware. You're kind of at the opposite end of the spectrum. My gut feeling is that you're looking at ridiculous overkill, but nobody can really say without some indication of your use case. How much do you plan to store on this system, how many users, and what will they be doing with it?
 

demon

Contributor
Joined
Dec 6, 2014
Messages
117
As danb35 mentions, you need to plan your use case and capacity better, as you should have roughly 1 GB of memory for each TB of disk space (obviously, the 8 GB RAM baseline applies). And if you're going crazy anyway, why go with IVB-gen boards and CPU(s)? Why not HSW?

Edit: I misunderstood, I thought you were saying 16 GB total, not 16 GB DIMMs. Though I'm pretty sure DDR3 16 GB DIMMs aren't well supported, are you sure those would actually work? Never mind, RDIMMs.
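For anyone sizing RAM later, the rule of thumb above is easy to sketch. This is just an illustration; the 20 TB figure is an assumed example, not OP's actual capacity:

```shell
#!/bin/sh
# Rule-of-thumb RAM sizing: roughly 1 GB of RAM per TB of raw pool capacity,
# with the 8 GB FreeNAS baseline as a floor. pool_tb is an assumed example.
pool_tb=20
baseline_gb=8
if [ "$pool_tb" -gt "$baseline_gb" ]; then
    ram_gb=$pool_tb
else
    ram_gb=$baseline_gb
fi
echo "Suggested minimum RAM: ${ram_gb} GB"
```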
 
Last edited:

FATeknollogee

Dabbler
Joined
Feb 10, 2015
Messages
20
Usually we see n00bz trying to make FreeNAS work on grossly inadequate hardware. You're kind of at the opposite end of the spectrum. My gut feeling is that you're looking at ridiculous overkill, but nobody can really say without some indication of your use case. How much do you plan to store on this system, how many users, and what will they be doing with it?
danb35,
I've never believed in cutting corners on the hardware side.
My Win 8.1 desktop is an HP Z800, all SSDs; the CPUs are old (2x Xeon E5640) :mad: with 48GB of RAM.
No reason for me to build a "smaller" FreeNAS box?

Use case: home use, plus testing at a scale suitable for small business usage
 

FATeknollogee

Dabbler
Joined
Feb 10, 2015
Messages
20
As danb35 mentions, you need to plan your use case and capacity better, as you should have roughly 1 GB of memory for each TB of disk space (obviously, the 8 GB RAM baseline applies). And if you're going crazy anyway, why go with IVB-gen boards and CPU(s)? Why not HSW?
What is HSW?
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
No reason for me to build a "smaller" FreeNAS box?
No harm at all in overkill, other than the harm to your wallet. The only concern I'd have is what you intend to do with the SSDs you mention. ZFS supports both read (ZIL/SLOG) and write (L2ARC) cache, but they're not frequently needed, and using a cache device in the wrong situation can actually degrade performance. With the amount of RAM you're talking about, there's a reasonable chance you could make use of an L2ARC device, but you may want to hold off on installing either of those until you know they're something you'd actually benefit from.
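To illustrate the "hold off" point: both device types can be attached to (and detached from) a live pool at any time, so there's no need to commit up front. A rough sketch; the pool name and da* device names are made up:

```shell
# Attach an L2ARC (read cache) device to an existing pool later on.
# "tank" and the da* device names are hypothetical.
zpool add tank cache da20

# Attach a mirrored SLOG (separate ZIL device) if the workload turns out
# to be heavy on sync writes.
zpool add tank log mirror da21 da22

# Either can be removed again if it doesn't help:
zpool remove tank da20
```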
 

FATeknollogee

Dabbler
Joined
Feb 10, 2015
Messages
20
The harm to the wallet was done at time of purchase :)
I hear you. I only plan to use the SSDs if needed.

What about the choice of MB/CPU, single vs dual?
 

demon

Contributor
Joined
Dec 6, 2014
Messages
117
What is HSW?
Haswell. The new Haswell-E Xeon (E5 v3) CPUs. They require DDR4 RAM, but since you're going to be buying RDIMMs anyway, the cost overhead shouldn't be significant, and it gives you a future upgrade path. From what I understand, major DRAM vendors are already starting to wind down DDR3 manufacturing capacity, and build out DDR4 manufacturing capacity to replace it. Broadwell-E, when it's available, will likely share the Socket 2011-v3 layout, so you have future upgradability there as well. IVB is last generation tech (well, technically 2 generations behind, with Haswell-R, but the enthusiast/server platform is always a generation behind the desktop/laptop/ultrabook platforms). The pricing for the boards is about the same, and some of SuperMicro's X10 boards appear to be more readily available than their X9 equivalents.

If you're going to be spending this kind of money on a high-end rig, I'd say get something with more future-proofing while you're at it.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
My build would be:
CSE-216E26-R1200LPB
X9DRD-7LN4F-O
1x E5-2620
8x Samsung M393B2G70BH0-CK0

How many 2.5" HDDs and SSDs would you have available? What size would you need on each type? If I know that, I could plan the best pool layout.
 

sfcredfox

Patron
Joined
Aug 26, 2014
Messages
340
ZFS supports both read (ZIL/SLOG) and write (L2ARC) cache, but they're not frequently needed, and using a cache device in the wrong situation can actually degrade performance.
Dan,
Can you double check this statement?
Did you mean read cache (L2ARC) and SLOG for Sync writes?
I'm sure you already know that the SLOG/ZIL isn't a write cache like most people initially think, but does help negate the performance impact of SYNC writes.

Just double checking that I and OP understand the difference.

As you already said, he'll need to see if his workload even requires SLOG.
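One way to see the distinction in practice: the SLOG only ever matters for synchronous writes, which you can inspect or force per dataset. The dataset name here is assumed:

```shell
# The ZIL/SLOG is only hit by synchronous writes (e.g. hypervisors writing
# over NFS or iSCSI). "tank/vms" is a hypothetical dataset name.
zfs get sync tank/vms          # default is "standard"
zfs set sync=always tank/vms   # force every write through the ZIL/SLOG
```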
 

FATeknollogee

Dabbler
Joined
Feb 10, 2015
Messages
20
My build would be:
CSE-216E26-R1200LPB
X9DRD-7LN4F-O
1x E5-2620
8x Samsung M393B2G70BH0-CK0

How many 2.5" HDDs and SSDs would you have available? What size would you need on each type? If I know that, I could plan the best pool layout.
HDD:
20+ Seagate ST91000640SS

STEC SAS SSD:
1x Zeus Z16IZF2E-200UCU
4x Z16IZF2D-400UCM
4x S840E200M2S

Size & Type: TBD
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
ZFS supports both write (ZIL/SLOG) and read (L2ARC) cache, but they're not frequently needed, and using a cache device in the wrong situation can actually degrade performance.
Can you double check this statement?
Sorry, I got them reversed. Fixed in the quote above.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
The ZeusIOPS are good SLOG SSDs (though they would only help with VMs); the S840s would work well in a dedicated SSD pool.

I'd stuff the other 16-20 bays full with the HDDs and run them in striped mirrors for performance or 2x 10disk z2 for storage.
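For reference, the two layouts suggested above would look roughly like this at pool-creation time. The pool name and da* device names are assumed, and the disk counts are only an example:

```shell
# Striped mirrors (RAID10-style): best IOPS for VM storage, ~50% usable space.
zpool create tank \
  mirror da0 da1  mirror da2 da3  mirror da4 da5 \
  mirror da6 da7  mirror da8 da9

# Alternative: 2x 10-disk RAIDZ2 vdevs: more usable space, fewer IOPS.
# zpool create tank \
#   raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
#   raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
```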
 

FATeknollogee

Dabbler
Joined
Feb 10, 2015
Messages
20
One thing I forgot to mention....

This box will see a lot of use storing VMs - Windows Server 2012 R2 Hyper-V VMs
 

FATeknollogee

Dabbler
Joined
Feb 10, 2015
Messages
20
The ZeusIOPS are good SLOG SSDs (would only help with VMs tho), the S840's would work well in a dedicated SSD pool.

I'd stuff the other 16-20 bays full with the HDDs and run them in striped mirrors for performance or 2x 10disk z2 for storage.

I like the striped mirror idea. I assume I can add a JBOD if I need more storage.

Re the S840s (I might have 6 of them): should this pool be striped mirrors, or what RAIDZ level?
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Unless you need more RAM or I/O bandwidth, a single CPU does the job as well. 128GB is already plenty compared to the usual run-of-the-mill system. ;)

If you use striped mirrors, you can add 2 more disks at a time, equaling a pool growth by 1x disk size. The JBOD seems a good idea to not worry about expansion and layout - just split your mirrors across the JBOD and the server chassis. 10 HDDs, 3x S840, 2x Z16IZF2D-400UCM, 1x Samsung SSD per chassis.
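The two-disks-at-a-time growth works out like this (all numbers and device names are assumed examples):

```shell
#!/bin/sh
# Usable capacity of a striped-mirror pool = number of mirror vdevs x disk size.
disk_tb=1
mirrors=10
usable_tb=$(( mirrors * disk_tb ))
echo "Usable: ${usable_tb} TB"
# Adding one more 2-disk mirror (e.g. "zpool add tank mirror da20 da21")
# grows the pool by exactly one disk's worth of capacity:
mirrors=$(( mirrors + 1 ))
usable_tb=$(( mirrors * disk_tb ))
echo "After adding a mirror: ${usable_tb} TB"
```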
A SATA DOM would be a nice idea as well, but it's around the same price as this Samsung SSD.

For backup systems I'd recommend the "run of the mill" compact builds with 6x 6TB HDDs in a raidz2. Small enough to grab in case of fire or water damage. Did you plan on offsite backup yet?
 

FATeknollogee

Dabbler
Joined
Feb 10, 2015
Messages
20
Unless you need more RAM or I/O bandwidth, a single CPU does the job as well. 128GB is already plenty compared to the usual run-of-the-mill system. ;)
Ok. I can always add the 2nd CPU and more RAM later on. I have been known to go for the "overkill".

If you use striped mirrors, you can add 2 more disks at a time, equaling a pool growth by 1x disk size. The JBOD seems a good idea to not worry about expansion and layout - just split your mirrors across the JBOD and the server chassis. 10 HDDs, 3x S840, 2x Z16IZF2D-400UCM, 1x Samsung SSD per chassis.
If I'm understanding correctly, a mirrored set would consist of "HDD1, chassis 1 + HDD1, chassis 2", "HDD2, chassis 1 + HDD2, chassis 2" etc??
I don't understand what's happening with the 3x S840, 2x Z16IZF2D & 1x Samsung SSD.
I was planning to use the Samsung SSD as a boot drive?

For backup systems I'd recommend the "run of the mill" compact builds with 6x6TB HDDs in a raidz2. Small enough to grab them in case of fire/water/burn. Did you plan on offsite backup yet?
No plan yet, but I like your idea
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Yup. Disk counts are per chassis. You have 2 chassis. The boot media needs to be connected to the system too, so why not use the hotswap bays for it as well. And yes, since 9.3 FreeNAS supports boot mirrors.

I'd look into getting a 9207-8e for the JBOD chassis; it uses the same SAS chipset the X9DRD-7LN4F has onboard, only with 2x SFF-8088 external connectors.

For offsite backups I'd look into OVH's FS-30T. That's 5x 6TB enterprise drives in an ECC-capable system, so create a raidz on there (that's okay, considering it won't be your only backup copy). Maybe order their vRack gateway as well for a secure connection between your OVH servers and your home. I'd order one in Canada and one in France for replication of your home backup box.
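Replication between the home box and an offsite box like that is typically just ZFS snapshot send/receive over SSH. A rough sketch; the hostname, dataset, and snapshot names are all made up:

```shell
# Initial full replication of a snapshot to the offsite box.
zfs snapshot tank/backup@2015-02-20
zfs send tank/backup@2015-02-20 | ssh offsite.example.com zfs recv -F tank/backup

# Later runs only send the incremental delta between snapshots.
zfs snapshot tank/backup@2015-02-21
zfs send -i tank/backup@2015-02-20 tank/backup@2015-02-21 \
  | ssh offsite.example.com zfs recv tank/backup
```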
 