BUILD Build Double Check: Painting by numbers

Status
Not open for further replies.

mattbbpl

Patron
Joined
May 30, 2015
Messages
237
I tend to be cautious when purchasing hardware, so I intend to largely paint-by-numbers according to Cyberjock's hardware guidelines. However, as my hardware experience is ENTIRELY on consumer grade components, I'm hoping that someone here would be kind enough to confirm that they think this stuff will all fit together nicely.

This build will be a 10x6TB Seagate SATA, single-vdev RAIDZ2 build intended predominantly to store and stream large files (mostly video and music) to a 4 person family. I will also store some photo, tax, business, and miscellaneous document information on there. This will be a read-often environment with up to 3 concurrent users, and writes will be performed in batches once or twice a week by a single user. I'll probably also disable atime in order to further decrease writes.
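For a sense of scale, the pool and RAM numbers can be sanity-checked with a quick sketch. This is my own back-of-the-envelope arithmetic, not from the thread; real usable space will be somewhat lower once ZFS metadata and slop space are accounted for.

```python
# Back-of-the-envelope numbers for a 10 x 6 TB single-vdev RAIDZ2 pool.
# Assumption: RAIDZ2 gives up two drives' worth of capacity to parity.

def raidz2_usable_tb(drives: int, drive_tb: float) -> float:
    """Approximate usable capacity of a single RAIDZ2 vdev,
    ignoring ZFS metadata/slop overhead."""
    return (drives - 2) * drive_tb

pool_tb = raidz2_usable_tb(10, 6)   # ~48 TB usable
raw_tb = 10 * 6                     # 60 TB raw
ram_rule_gb = raw_tb * 1            # the loose "1 GB RAM per TB" rule

print(pool_tb, raw_tb, ram_rule_gb)
```

The "1 GB per TB" figure is a rule of thumb, not a hard requirement; as discussed later in the thread, it relaxes at larger pool sizes.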

I will probably experiment with a couple of jails and see if I can stream something using the Plex or MediaBrowser plugins, but this is a secondary concern. Down the road, assuming I feel I have the overhead, I might do some additional experimentation with cameras and such, but I don't want to get hung up on the idea of that.

I intend to use the X10SL7-F motherboard so that I can utilize all of the drives without the headache of using another card. The integrated LSI controller will, of course, be flashed to IT mode.

I'll use a Xeon E3-1231 v3 as I'm positive it supports ECC RAM (curse that confusing i3 mess). I'll be using 32GB of the stuff, and I'll just get one of the Supermicro approved models for my board.

I'd like to use a Supermicro 2U/3U/4U case like the 2U Supermicro CSE-826A-R800LPB SuperChassis 12-bay SAS/SATA, but I'm not sure whether, as with consumer cases, any microATX board will fit in any microATX-compatible case. If it will, then I'll just get whatever Supermicro 2U/3U/4U microATX-compatible case is inexpensive at the time and has redundant power supplies of 700 watts or greater. Given Supermicro's reputation, I'm guessing these power supplies should be fine.

I'll be purchasing a UPS once the build is finished and the burn-in process is complete. I'll just choose a decently priced one from this list.

Finally, I'll just be booting off a pair of SanDisk jump drives. I believe the Cruzer is well recommended around here, and they're plentiful at local Best Buys/Targets/Wal-Marts.

Anyone have any stupid mistakes to point out or otherwise constructive comments?

Case Study
**********
Hardware Platform

Motherboard - Supermicro X10SL7-F-O - 238.99
CPU - Intel Xeon E3-1231 v3 - 242.99
RAM - 32 GB (4 x 8 GB) Samsung DDR3-1600 8GB/1Gx72 ECC - 279.96
Boot Drive - Kingston SSDNow V300 60GB 2.5 inch SATA3 Solid State Drive - 45.95
10 x 6 TB Seagate SATA STBD6000100 Hard Drives - 619.90
2U Supermicro CSE-826TQ-R800LPB SuperChassis 12bays - 299.00
UPS - CyberPower CP1500PFCLCD - 205.99
8 pin and 24 pin extension cables - 6.78
SuperMicro chassis drive bay screws - 6.99
Build Total: $1946.55
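As a quick check that the parts list adds up (the dictionary labels are mine, added for readability), the line items sum exactly to the quoted total:

```python
# Sum the line items from the hardware list above and compare to the
# stated build total.
prices = {
    "motherboard": 238.99,
    "cpu": 242.99,
    "ram": 279.96,
    "boot_ssd": 45.95,
    "hdds": 619.90,
    "chassis": 299.00,
    "ups": 205.99,
    "extension_cables": 6.78,
    "drive_bay_screws": 6.99,
}
total = round(sum(prices.values()), 2)
print(total)  # 1946.55, matching the listed build total
```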

Process up to this point (Mistakes and All)
***************************************
Assembled Hardware

Mental note for server hardware noobs: I initially cursed Supermicro for their awkward chassis design decisions, then realized that almost everything I thought was going to be awkward to assemble slid out easily or otherwise allowed me much easier access than I thought it would at first glance. Fans, panels, etc. all slide out. The only real complaints I had with my chassis were the short 8 pin/24 pin cords and the relatively small space available to route 11 SATA cords.

Second server hardware noob note: I saw some vague references on this forum to people not having hard drive screws when building their systems, which I didn't really understand because the HDs come with screws. Those screws will not work on the SuperMicro chassis. You need Supermicro drive bay screws which are flush with the edge of the drive bays. Part # = MCP-410-00005-0N

Powered on and made sure all moving parts were moving, there were no BIOS beep warnings, etc.

Configured BIOS to keep server powered off in case of power loss. I have a UPS, so if this ever occurs it's likely to be one of those nasty situations that could result in some rolling brownouts for a while.

Entered LSI Configuration Utility and confirmed that the firmware was out of date and in IR mode

Flashed to IT mode using these instructions: https://forums.servethehome.com/ind...g-the-lsi2308-on-an-x9srh-7f-to-it-mode.1734/

Re-entered LSI Configuration Utility to confirm version 16 and IT mode

Ran Memtest, but noticed that my CPU was getting warm. Attempted to cool it down by removing the chassis panel - do NOT do this; your drives will get too warm, as the fans will pull air in through the panel opening instead of over the drives.

Went into BIOS, set fan mode to "Full" and reattached chassis panel

Installed FreeNAS

Verified that the OS can see the bare drives. The backplane is indeed "just a circuit board" that passes the SATA connections through to the motherboard.

Verified that a momentary press of the power button didn't immediately shut down the server

Reserved a static IP address via DHCP reservation in my router for IPMI connectivity

Ran short SMART Tests

Ran long SMART Tests

Ran conveyance tests

Executed badblocks with the 4096-byte block size option using tmux through the GUI shell, but could only start 4 of the 10 drives because I received a "couldn't create panel" error trying to open the next pane

Tmux closed and I couldn't get back into the shell

Configured SSH, connected to the machine through PuTTY, and attached to my tmux session

Reconfigured the tmux layout using the Ctrl+B, Space shortcut (cycles through pane layouts) and was able to start badblocks on my remaining drives

Set up 2-factor authentication on my Gmail account

Integrated Gmail notifications into FreeNAS

Set up a 14 day recurring scrub schedule for my boot drive

Set up the SMART service to check drive temps every hour, send me an informational email if one is over 40°C, and send me a critical email if one is over 45°C
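The alert policy above amounts to a simple threshold check. A sketch of that logic, as my own illustration rather than the actual FreeNAS SMART service code:

```python
# Classify a drive temperature per the policy above:
# informational above 40°C, critical above 45°C.

def temp_alert(temp_c: float) -> str:
    if temp_c > 45:
        return "critical"
    if temp_c > 40:
        return "informational"
    return "ok"

print(temp_alert(38), temp_alert(42), temp_alert(47))
```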

Set up UPS integration. Initially saw the issue here: https://forums.freenas.org/index.php?threads/data-for-ups-is-stale.20898/page-3 . I'll have to restart FreeNAS after hard drive burn-in to test, it seems.

Restarted FreeNAS

Reran long SMART tests

Set up SMART test schedule (every 2 days for short tests, twice a month for long tests)

Created a 10 x 6TB RAIDZ2 volume

Created a dataset with atime off and set up as a Windows Share Type. Compression and Dedupe are off

Created a CIFS share with guest access

Now copying files over
 
Last edited:

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
You're right on point. The Supermicro chassis will accept pretty much any board. They are a great value from Mr. Rackables. The X10SL7-F is a superb choice as well for that size.

The only question I'd have is whether you think 32GB is going to be enough in the longer term on 60TB+ storage. An E5 board and CPU like a 1620 isn't much more as a percentage of total system cost and leaves many more options for future growth. A few hundred extra now might make sense. I know I'd rather have gone that route with my E3s; i.e., the RAM maxes out instantly with no upgrade options. That said, your needs are modest enough that your proposed system should handle it easily, if not be somewhat overkill.
 
Joined
Jan 9, 2015
Messages
430
...I'll use a Xeon E3-1230v3 as I'm positive it supports ECC RAM...
I was going to get the same processor, but ended up with the 1231v3 instead. It was 100MHz faster and about $10-$15 cheaper. Not that the 1230 is a bad processor, I'd just look around a little bit more.

Great build too. This is basically the same build I have and it rocks along great.

Good day.
 

mattbbpl

Patron
Joined
May 30, 2015
Messages
237
The only question I'd have is whether you think 32GB is going to be enough in the longer term on 60TB+ storage.
Yeah, that's one of the questions lingering in the back of my mind. I bantered back and forth and ended on the "32GB is enough" bandwagon merely through some offhand comments about the "1 GB per TB" rule of thumb relaxing at larger pool sizes.

That being said, I'm very interested in hearing that your E3 system's RAM maxes constantly. Are you in a similar use case as my proposal? It looks like you have a significantly smaller pool than mine, so maybe I should be concerned.
 

mattbbpl

Patron
Joined
May 30, 2015
Messages
237
I was going to get the same processor, but ended up with the 1231v3 instead. It was 100MHz faster and about $10-$15 cheaper. Not that the 1230 is a bad processor, I'd just look around a little bit more.

Great build too. This is basically the same build I have and it rocks along great.

Good day.
Thanks, I didn't even notice that 1231v3 processor. The ARK indicates that it's a little bit faster and was just released later. I'll definitely put that processor in the 1230v3's place on my list.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I only mean maxed, as in I have no option to add more RAM. 1GB per TB is definitely flexible at larger sizes. My use case is very similar, with the exception that I also use ESXi. I only use FreeNAS for ZFS and prefer to keep other software, VPN, etc. separate.
 

mattbbpl

Patron
Joined
May 30, 2015
Messages
237
Ah, yes. I misread "instantly" as "constantly".

Thanks for the input. I'll consider the E5 route for further use cases. The Xeon E5-1620 v2 CPU with the X9SRH-7F looks like it would be roughly a $150 surcharge over my current proposal - that could well be worth the marginal difference (assuming I'll eventually use that extra RAM, of course).
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
Exactly. $150 is a minimal surcharge (5%) when you are talking $3000 overall. It means you can pop to 64GB or better if you hit 'the wall' on your pool at some point.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Welcome mattbbpl. Glad to see my guides helping you out. Looks like you've got everything under control. ;)

I will say that if you plan to expand beyond the 10x6TB drives, definitely go with the E5 stuff.

I've got 10x6TB in my system with 32GB of RAM (E3-1230v3) and it works fine. I'd anticipate for home use you'd be fine with the E3 stuff. But, if you plan to expand while using the same motherboard, RAM, CPU in the future, go with the E5.
 

mattbbpl

Patron
Joined
May 30, 2015
Messages
237
Welcome mattbbpl. Glad to see my guides helping you out. Looks like you've got everything under control. ;)

I will say that if you plan to expand beyond the 10x6TB drives, definitely go with the E5 stuff.

I've got 10x6TB in my system with 32GB of RAM (E3-1230v3) and it works fine. I'd anticipate for home use you'd be fine with the E3 stuff. But, if you plan to expand while using the same motherboard, RAM, CPU in the future, go with the E5.
Thanks, I'm glad they're working out for me, too :)

I've decided to go with the E5 stuff. mjws00 is right in that it's best to swallow the relatively low marginal cost now rather than choke on a replacement cost in the future.
 

jerryjharrison

Explorer
Joined
Jan 15, 2014
Messages
99
Rather than launch a new thread, I am going to pile onto this one. I too benefited from Cyberjock's guide, and wanted to say thanks, as well as ask for any comments on the build. This is my 3rd FreeNAS build for my house, and I have learned a bit from each one. Had I found Cyberjock's guide first, I would likely not be doing this for a third time.

The two existing FreeNAS servers in the house are used as follows: FreeNAS #1 is all storage, providing access via AFP. It serves as the Time Machine disk for 6 Apple computers, and common storage for all photos, music, and video. It replicates all of its data to FreeNAS #2. FreeNAS #2 only provides a site for replication, and has a jail running ownCloud which allows me common access to business documents. Both FreeNAS boxes are backed up to Amazon Glacier via a dedicated desktop running Arq. FreeNAS #1 has 6 3TB drives in ZFS Z2 on an AMD machine, and FreeNAS #2 has 6 1TB drives in a ZFS Z2.

For the new build, I am planning on 6 Seagate NAS HDD ST4000VN000 4TB 64MB Cache SATA 6.0Gb/s internal hard drives in a ZFS Z2 configuration. This box will completely replace FreeNAS #2, and I will eventually migrate all of the data stored on FreeNAS #1 over to the new build, and use the existing FreeNAS #1 as the target for replication.

The current plan for the new build is as follows:
SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3
Intel Xeon E5-1650 v3 Haswell-EP 3.5GHz 6 x 256 KB L2 Cache 15MB L3 Cache LGA 2011-3 140W Server Processor

1 - SAMSUNG 32GB 288-Pin DDR4 SDRAM DDR4 2133 (PC4-17000) Server Memory Model M386A4G40DM0-CPB
6 - Seagate NAS HDD ST4000VN000 4TB 64MB Cache SATA 6.0Gb/s Internal Hard Drive
Noctua NH-U9DXi4 90mm SSO2 CPU Cooler
SeaSonic X-1050 (SS-1050XM2) 1050W ATX12V / EPS12V SLI Certified CrossFire Ready 80 PLUS GOLD Certified
Fractal Design Define R4 Black Pearl w/ USB 3.0 ATX Mid Tower Silent PC Computer Case

I am planning to try replacing the USB drives with dual, mirrored:
SUPERMICRO SSD-DM032-PHI SATA DOM (SuperDOM) Solutions

The transition plan will be to do a fresh install of FreeNAS, without any disk drives. I will export the FreeNAS #2 pool and configuration, and then import both into the new build once the drives from FreeNAS #2 have been installed into it. Once that is up and running, and FreeNAS #2 has been retired, my plan is to replace each drive, one at a time, and allow FreeNAS to resilver the drives, thus expanding the pool to the full size of its new drives. Once all of the new drives are in place, I will move the data, users, and shares from FreeNAS #1 over manually. (If anyone has a way to move the users, shares, and data from FreeNAS #1 in a more expeditious fashion, I would appreciate the advice.)

After all production FreeNAS functions are on the new build, FreeNAS #1 will be set up as a destination for the replication.

Please feel free to comment on any aspect of this, as I am still learning.
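The drive-by-drive expansion plan described above hinges on one ZFS behavior: a vdev's capacity is governed by its smallest member, so the pool only grows once the last drive has been replaced and resilvered. A sketch of that arithmetic (my illustration only; the real mechanism is `zpool replace` plus resilvering with autoexpand enabled):

```python
# Usable capacity of a RAIDZ2 vdev is bounded by its smallest drive.

def raidz2_usable_tb(drive_sizes_tb):
    return (len(drive_sizes_tb) - 2) * min(drive_sizes_tb)

drives = [1, 1, 1, 1, 1, 1]         # FreeNAS #2's six 1 TB drives
print(raidz2_usable_tb(drives))     # 4 TB to start

for i in range(len(drives)):        # replace and resilver one at a time
    drives[i] = 4                   # swap in a 4 TB drive
    # capacity stays at 4 TB until the final 1 TB drive is gone

print(raidz2_usable_tb(drives))     # 16 TB once the last drive is replaced
```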
 
Joined
Jan 9, 2015
Messages
430
Sounds like you've got a plan.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
The PSU seems overkill (nearly twice what's needed).
 

jerryjharrison

Explorer
Joined
Jan 15, 2014
Messages
99
That was one of the items I was not sure how to accurately calculate. I know the processor is 140 watts, but no idea on the rest of the system. I assumed I would go big, and that would leave room to add disks in the future. One question though, will the larger PSU consume more power solely because of its capability? In other words, for the same system, will a smaller PSU consume less energy?
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
For 6 drives, the general suggested size is 450-550 watts. Yes, there is waste, but I don't know how to quantify the amount. The initial cost is probably the bigger waste.

FYI- FreeNAS1 (specs below) with 12 drives, runs at 120 watts idle and 350 watts max.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
That was one of the items I was not sure how to accurately calculate. I know the processor is 140 watts, but no idea on the rest of the system. I assumed I would go big, and that would leave room to add disks in the future. One question though, will the larger PSU consume more power solely because of its capability? In other words, for the same system, will a smaller PSU consume less energy?

Yes, but it's something like a dollar or two per year, as @jgreco said in another thread, so it's not significant. What's important is to have enough power to spin up the drives at start-up (and run the rest of the system too, of course) without hammering the PSU.

So, yeah, a 450 W PSU is OK for this config, but if you want to expand later then you'll need a bigger PSU, so a 550-650 W PSU should be future-proof (unless you want to add something like 10 drives, of course...) ;)
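The sizing advice above can be roughed out numerically. The per-component figures here are my own assumptions, not measurements from the thread: roughly 30 W spin-up per 3.5" drive, the CPU's 140 W TDP, a ballpark 60 W for board, RAM, and fans, and about 30% headroom so the PSU isn't hammered at start-up.

```python
# Rough PSU sizing: peak draw at spin-up plus headroom.
# All per-component wattages are assumed ballpark figures.

def psu_estimate_watts(n_drives, spinup_w=30, cpu_tdp_w=140,
                       platform_w=60, headroom=1.3):
    peak = n_drives * spinup_w + cpu_tdp_w + platform_w
    return peak * headroom

print(round(psu_estimate_watts(6)))    # the 6-drive build in question
print(round(psu_estimate_watts(16)))   # if ~10 more drives were added
```

For 6 drives this lands near 500 W, consistent with the 450-550 W suggestion above; packing in ten more drives pushes the estimate well past that, matching the caveat about larger expansions.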
 

loch_nas

Explorer
Joined
Jun 13, 2015
Messages
79
1 - SAMSUNG 32GB 288-Pin DDR4 SDRAM DDR4 2133 (PC4-17000) Server Memory Model M386A4G40DM0-CPB
Why only one RAM stick? You waste performance if you don't use at least two sticks (dual channel). 2 x 16GB sticks would be a perfect start while leaving enough room for future upgrades.
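The dual-channel point is about theoretical peak bandwidth, which scales with the number of populated channels: DDR4-2133 transfers 2133 MT/s over a 64-bit (8-byte) bus per channel. A quick calculation (my own arithmetic; as noted later in the thread, typical NAS workloads rarely approach these peaks):

```python
# Theoretical peak memory bandwidth in GB/s (decimal):
# transfers/s x 8 bytes per transfer x number of channels.

def peak_bandwidth_gbs(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000

print(peak_bandwidth_gbs(2133, 1))   # ~17.1 GB/s, single channel
print(peak_bandwidth_gbs(2133, 2))   # ~34.1 GB/s, dual channel
```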
 

jerryjharrison

Explorer
Joined
Jan 15, 2014
Messages
99
Uhmmmmm... Lack of knowledge. I had no idea that I was wasting performance with only one DIMM. Actually I assumed the opposite.

I probably will just add another 32GB DIMM to the build, instead of dropping to two 16GB sticks. Not that big of a difference.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
Uhmmmmm... Lack of knowledge. I had no idea that I was wasting performance with only one DIMM. Actually I assumed the opposite.
One 32GB stick is better than two 16GB sticks if you plan to upgrade later. It's unlikely you'd be able to measure the difference in performance between single-channel and dual-channel operation in typical NAS usage scenarios. Of course, two 32GB sticks is better than one 32GB stick...
 

diedrichg

Wizard
Joined
Dec 4, 2012
Messages
1,319
One 32GB stick is better than two 16GB sticks if you plan to upgrade later. It's unlikely you'd be able to measure the difference in performance between single-channel and dual-channel operation in typical NAS usage scenarios. Of course, two 32GB sticks is better than one 32GB stick...
This is an acceptable method for starting a build when the $$$ are not available right away and you plan to add the second stick later.
 