BUILD New Build Review and Burn-In Question

Status
Not open for further replies.

WhirlwindMonk

Dabbler
Joined
Apr 13, 2013
Messages
15
So, I have a FreeNAS box running at home that I built a couple of years back. Unfortunately, space is starting to run low, so I'm looking to upgrade. During my reading, though, I've come to realize that my old build does basically everything wrong. Its only saving grace is that it uses hard drives designed for servers. That's it; literally everything else is bad: gaming mobo and CPU, non-ECC RAM, and RAIDZ1. So, I'm looking to rectify that with my new build (and the new budget that a new job and two incomes can provide).

Case: SilverStone SST-PS07B - MicroATX mini-tower, nothing special about it, just one of the smallest cases I could find that would fit my needs, though it might need another fan or two
Mobo: Supermicro MBD-X9SCL-F-O
CPU: Xeon E3-1230 v2
Ram: 16 GB (2x8) Kingston DDR3 1600 ECC Unbuffered
Hard Drives: 6x3TB Seagate NAS drives to put in RaidZ2 for 12 TB total storage
PSU: SeaSonic SS-350TGM

My main question is the PSU. From the reading I've done, it looks like 350W will cover me, but a little extra assurance would make me feel better.

As for the rest of the parts, this server will be running plugins like SickBeard and Plex, but my research seems to show that this build will handle all the transcoding and encryption I want, so I'm not too worried.

Once the system is built, my biggest concern is testing the hard drives. I have had far too many bad drives over the years and want to make dead sure I don't put a bad drive into this system. I've read jgreco's thread on the topic, but he loses me at step 2 of the drive burn-in. I understand the basics of the dd command after reading up on it, but how does one go about running multiple dds simultaneously, as in step 2, and properly use dd for seek testing in step 3? Either I'm misunderstanding what's being said, or my Google skills have failed me.
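If it helps anyone answer, my best guess at step 2 is just starting one dd per disk in the background, something like this (the device names are placeholders for my six drives, and obviously this wipes them):

```shell
# One dd write per drive, all running at once; WARNING: destroys all data.
# Device names below are placeholders -- substitute your actual disks.
for disk in ada0 ada1 ada2 ada3 ada4 ada5; do
  dd if=/dev/zero of=/dev/${disk} bs=1m &
done
wait    # block until every background dd has finished
```

Is that roughly right, or is there more to it?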

Thank you for any help.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
350W PSU is enough. You could probably get by with 300W.

That Xeon should be enough for what you are doing.

To do drive burn-in, I used Parted Magic with badblocks. Make sure you check SMART data before and after.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I do the following with new disks:

SMART short test
SMART conveyance test
SMART long test
badblocks with at least 2 patterns (normally 0x01 and 0x10)
SMART long test
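In command terms, that sequence looks roughly like this (a sketch for one disk; the device name is a placeholder, and badblocks -w is destructive, so run it before creating the pool):

```shell
# Sketch of the burn-in sequence above for a single disk.
# WARNING: badblocks -w destroys data. Device name is a placeholder.
DISK=/dev/ada1

# Each SMART test runs in the background on the drive itself; let one
# finish before starting the next (smartctl -a $DISK shows progress).
smartctl -t short $DISK
smartctl -t conveyance $DISK
smartctl -t long $DISK

# -b 4096 keeps the block count 32-bit-safe on >2TB drives;
# -w write-mode tests each pattern in sequence, -s shows progress.
badblocks -ws -b 4096 -t 0x01 -t 0x10 $DISK

smartctl -t long $DISK
smartctl -a $DISK    # check reallocated/pending sector counts at the end
```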

I would not go below 350W. You don't need to blow out your drives with an undervoltage condition at startup because of a wimpy PSU. I'd go with a 450W, to be honest. Better to be a little too big than too small.

The X9SCM-F is a better board (extra PCIe slot), so I'd go with that.

If you are okay with spending a little more ($25 or so), consider the X10 series motherboards and a Haswell E3-1230 v3 CPU instead. Slightly higher performance and slightly lower idle power usage. Don't sweat it if you can't get it in your location, though. Both of those "benefits" are pretty negligible for FreeNAS.
 

WhirlwindMonk

Dabbler
Joined
Apr 13, 2013
Messages
15

Thanks for the info on hard drive tests, I'll look into badblocks.

As for Haswell and X10, what's $25 on a $1,500+ system? Looking at the Supermicro X10s on their site, the SLL and SLM look closest to the X9SCM in features. As best I can tell, the only difference between them is whether four or all six of the SATA ports are 3.0 Gb/s or 6.0 Gb/s. Is that correct, and am I right that it won't have any noticeable impact, since the bottleneck is going to be the 5900 RPM drives, not the SATA channels? And heck if I can spot the difference between the SL[L/M] and the SL[L/M]+. I guess whichever of those four is both available and least expensive is the one to go with.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The + models have 2 Intel NICs that are "higher grade" if memory serves me right.

You are correct that 3.0 Gb/s vs. 6.0 Gb/s SATA won't really matter unless you think you might want SSDs and 10Gb networking someday.
 

WhirlwindMonk

Dabbler
Joined
Apr 13, 2013
Messages
15
Yeah, if the day ever comes that I can afford to build a NAS with SSD storage, I doubt replacing the rest of the hardware too will really dent the budget.

Any recommendations for a UPS? I see APC and CyberPower mentioned a decent bit around the forums, are they of better quality than other brands, or do they just happen to be the best priced for the needs of a home server?

Edit: Eyeing the CP750LCD, but having a hard time figuring out how to tell if it's compatible or not.
 

WhirlwindMonk

Dabbler
Joined
Apr 13, 2013
Messages
15
Placed the order last night. Getting ahold of the X10 stuff, especially tested and compatible memory, was proving to be a pain, so I just stuck with the X9.

Case: SilverStone SST-PS07B - MicroATX mini-tower, plus an extra 120mm PWM fan
Mobo: Supermicro MBD-X9SCM-F-O
CPU: Xeon E3-1230 v2
Ram: 16 GB (2x8) Kingston DDR3 1600 ECC Unbuffered
Hard Drives: 6x3TB Seagate NAS drives to put in RaidZ2 for 12 TB total storage
PSU: SeaSonic SSR-450RM
UPS: CP750LCD

Unfortunately, the hard drives were on backorder, so it may be a few weeks before I can put everything together and get to testing.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
12 TB (decimal) is only 10.91 TiB (binary), and the size of your files is measured in binary... Do not be surprised.

Also, with ZFS it is recommended not to exceed 80% of the filesystem capacity, so you can plan on usable
8.73 TiB = 80% * 10.91 TiB
unless you have some read-only static data.

Using the same calculation, you would see that deploying six 4TB drives gives you 11.64 TiB usable. Since you have not purchased your hard drives yet, you may want to re-check your storage capacity needs (did you take into account the space consumed by snapshots?).
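In shell terms, the arithmetic above is just this (raidz2 keeps two drives' worth of parity, so six drives leave four for data):

```shell
# Usable space for a six-drive raidz2: 4 data drives, decimal TB -> binary TiB.
awk 'BEGIN {
  for (size = 3; size <= 4; size++) {
    tib = 4 * size * 1e12 / 2^40      # bytes on 4 data drives, divided to TiB
    printf "6x%dTB raidz2: %.2f TiB raw, %.2f TiB at 80%%\n", size, tib, 0.8 * tib
  }
}'
# -> 6x3TB raidz2: 10.91 TiB raw, 8.73 TiB at 80%
# -> 6x4TB raidz2: 14.55 TiB raw, 11.64 TiB at 80%
```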
 

WhirlwindMonk

Dabbler
Joined
Apr 13, 2013
Messages
15
Yeah, I'm aware of the hard drive size nonsense. Nearly 9 TiB will be more than enough for me for a long time, seeing as I'm only now beginning to stress my current 2 TB build (three 1TB drives in RAIDZ1) after a couple of years of use. The other issue is that the cost premium from 3TB to 4TB drives is substantial; at current prices, the total build cost with 4TB drives actually works out more expensive per TB than with 3TB drives, in addition to pushing the total cost out of my budget. As for snapshots, I'm not too worried about the space requirements; most of my data is replaceable, and the stuff I will want snapshots and redundant backups of takes up comparatively little space.

It's a good point, though. If budget permitted, I'd definitely be going for 4TB drives, and likely building redundant systems so I could do full backups.
 

solarisguy

Guru
Joined
Apr 4, 2014
Messages
1,125
Interesting... by price per byte, buying in my local store, a 4TB WD Red is cheaper than a 3TB WD Red, but looking around the world, that pricing appears to be in the minority.

I am using WD Black disk drives, and for that series it seems more universal for the 4TB model to be the cheapest per byte. Thinking about it, that makes sense: for a NAS, most people want maximum storage, while for a fast workstation most storage is external to it, so pricing likely follows the demand pattern.
 