BUILD Updated build thoughts?

Status
Not open for further replies.

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
E: scratch that. Even at a 3% annual failure rate I'm looking at a 3.2% shot of losing my array; 5% or 7% results in a catastrophic likelihood of things going very, very bad. I'll probably stop at 6 and, if need be, fracture things off to a second zpool and mirror.

I think you're reading my chart wrong. Those first numbers assume independent UREs. The second three columns show empirical URE rates.

Also, Robert's right: the chart assumes no replacement of failed drives. Based on the theory that drive failures are independent events, promptly replacing failed drives will significantly reduce your annual failure rate.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
How does one go about replacing a drive in a vdev that fails?
You follow the manual's instructions for replacing a failed disk. Only takes a few clicks.
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
In the first table, the entry for a RAIDZ2 vdev of 10 drives with a drive AFR of 3% is 0.276% (failure rate for the vdev). Whether that's meaningful is another question, since you're not going to leave a failed drive in the array while you wait for 2 more to fail.

In the 3rd table, the entry for a RAIDZ2 vdev of 10 4TB drives with a drive AFR of 3% and URE probability of 1/1E14 is 3.205%. Again, whether that's meaningful is open to question, since you will replace failed drives and ZFS will correct UREs on the fly (one of the main reasons for using ZFS).
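For anyone who wants to sanity-check the first table, here's a quick Python sketch. It assumes the model is "the vdev is lost if more than its parity count of drives fail within a year, failures independent, no replacements"; that's my reading of the table, not necessarily the exact model behind it.

from math import comb

def vdev_loss_probability(n_drives, afr, parity):
    # P(more than `parity` of `n_drives` fail within a year): binomial model,
    # independent failures, no replacement of failed drives
    return sum(
        comb(n_drives, k) * afr**k * (1 - afr)**(n_drives - k)
        for k in range(parity + 1, n_drives + 1)
    )

# 10-drive RAIDZ2 (parity 2) at 3% AFR:
print(f"{vdev_loss_probability(10, 0.03, 2):.3%}")  # 0.276%, matching the first table

The URE columns evidently fold in a different model (hence the 3.205%), which I won't guess at here.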
 

Something

Explorer
Joined
Jan 31, 2015
Messages
93
Does anyone know if the Supermicro BIOS allows for CPU vcore control? While I stress-test the temps, I'd like to go ahead and see about downvolting. Not for power concerns but temperature concerns.

I went ahead and looked into it and Intel is still using that horrible TIM on the 4-core die Xeons (the same die the mobile and mainstream desktop parts use). Did the Devil's Canyon refresh E3s get the updated TIM? I can't believe I may have to delid a server CPU for less atrocious temps. Given the form factor and cooler I have, temperatures are a particular concern (testing showed a 4790K at 1.3V was stable at 82C; I'm not running an overclock, so voltages shouldn't be that high, and I'd still like to see no greater than 60-65C). While I doubt I'll be pushing load for more than an hour at a time (ever), I'd rather not leave that to chance should I be out of the house. I do have emails set up but... well, I've seen things hit the fan before.

I think you're reading my chart wrong. Those first numbers assume independent UREs. The second three columns show empirical URE rates.

Also, Robert's right: the chart assumes no replacement of failed drives. Based on the theory that drive failures are independent events, promptly replacing failed drives will significantly reduce your annual failure rate.
Yes, yes I am. I misread the e15 and assumed it was classed based on URE rather than being the adjustment.

I'll move ASAP on drive replacements (not interested in seeing cascading drive failures).

You follow the manual's instructions for replacing a failed disk. Only takes a few clicks.
I checked; it's surprisingly simple.

In the first table, the entry for a RAIDZ2 vdev of 10 drives with a drive AFR of 3% is 0.276% (failure rate for the vdev). Whether that's meaningful is another question, since you're not going to leave a failed drive in the array while you wait for 2 more to fail.

In the 3rd table, the entry for a RAIDZ2 vdev of 10 4TB drives with a drive AFR of 3% and URE probability of 1/1E14 is 3.205%. Again, whether that's meaningful is open to question, since you will replace failed drives and ZFS will correct UREs on the fly (one of the main reasons for using ZFS).
I'm thankful you assumed I'd follow best practices. This entire process is making me feel incredibly unqualified to operate anything more complicated than a pencil sharpener.
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
downvolting
Absolutely pointless.

temperature concerns.
In most cases, not an issue. If you're using a really small chassis, the solution is to use a CPU with a lower TDP.

I went ahead and looked into it and Intel is still using that horrible TIM on the Xeons.
Not an issue.

Devil's Canyon refresh E3s
No such thing exists.

While I doubt I'll be pushing load for more than an hour at a time (ever), I'd rather not leave that to chance should I be out of the house. I do have emails set up but... well, I've seen things hit the fan before.
System firmware halts the system if the temperature gets dangerous.
 

Something

Explorer
Joined
Jan 31, 2015
Messages
93
Absolutely pointless.
Less power usage and lower temps under load, though it looks to not be an option.

In most cases, not an issue. If you're using a really small chassis, the solution is to use a CPU with a lower TDP.
My case is sufficient in size; the cooler is about as big as I can fit and as powerful as it gets, but it isn't particularly recommended outside of high-airflow setups. I'll be putting in 2-4 fans to aid that. Might need another PWM...

Not an issue.
Haswell's temps are pretty atrocious due to the FIVR, and heat increases leakage, decreasing energy efficiency and necessitating higher voltages and thus power usage. Intel spending $2 in bulk on less crappy paste, even charging consumers $5+ more, would result in greater efficiency.

For a more extreme consumer case, consider the Fury X, which opted for a water cooler partially for efficiency reasons.

No such thing exists.
I'm referring to the Haswell refresh Xeons.

System firmware halts the system if the temperature gets dangerous.
Yes, yes it does... apologies, hundreds of dollars in hardware flashing before my eyes.
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Less power usage and lower temps under load, though it looks to not be an option.
If it actually worked perfectly, the processor would ship in that state.

My case is sufficient in size; the cooler is about as big as I can fit and as powerful as it gets, but it isn't particularly recommended outside of high-airflow setups. I'll be putting in 2-4 fans to aid that. Might need another PWM...
If the stock cooler works perfectly (it does), almost any aftermarket cooler is going to be better.

Haswell's temps are pretty atrocious
This was blown out of proportion and is irrelevant in realistic scenarios that do not involve overclocking.

For a more extreme consumer case, consider the Fury X, which opted for a water cooler partially for efficiency reasons.
250W chip on an ancient process - they had to do something to get a bit more performance out of the thing. Highest LGA1150 TDP is the E3-1285 v4 at 95W, and that includes the GPU, which will be essentially shut down.

Haswell Refresh is supposedly identical to Haswell (not even the die stepping changed...). Only Devil's Canyon got the better TIM.
 

Something

Explorer
Joined
Jan 31, 2015
Messages
93
If it actually worked perfectly, the processor would ship in that state.
What makes you think it can't, exactly? The E3 dies aren't magically different from the mainstream desktop dies. The differences between a 4790K and an E3-1231 v3 are the TIM and microcode (I'm not sure what Intel opted to do with the iGPU; my guess would be lasering it off). Intel has already shown by production that it can easily swap TIMs. Delidding as a process has only become more popular as Intel CPUs run hotter, with Intel dropping solder as a cost-saving measure and refusing to use quality TIM. In enthusiast circles the heat issue and its easy solution are well known. We even have companies offering to do the process!

It's no secret that mainstream processors, mobile processors and the E3s share the same die.

Do you honestly believe a company wouldn't skimp to cut costs somewhere?

If the stock cooler works perfectly (it does), almost any aftermarket cooler is going to be better.
So does the cooling on Sandy Bridge i7 MacBook Pros; that doesn't mean it's great, nor that things can't be better. Better cooling does mean I can keep those fan profiles tighter, so less noise, which I won't complain about.

This was blown out of proportion and is irrelevant in realistic scenarios that do not involve overclocking.
Fair enough, I just remain incredibly wary of Haswell's temperatures, particularly for its quad cores and larger.

250W chip on an ancient process - they had to do something to get a bit more performance out of the thing. Highest LGA1150 TDP is the E3-1285 v4 at 95W, and that includes the GPU, which will be essentially shut down.
If you want, I can see about digging up temps/power usage for Ivy Bridge Extreme. Or wait, even worse, Haswell Extreme!

And 28nm isn't ancient; Sandy Bridge was 32nm only 4 years ago. Not that SB isn't still viable for FreeNAS builds.

Also, Furys can, in some cases, unlock to a Fury X's core count. Still capable of being cooled by air.

Haswell Refresh is supposedly identical to Haswell (not even the die stepping changed...). Only Devil's Canyon got the better TIM.
Is it even mildly better binned? That's a bold step forward.
 
Last edited:

Something

Explorer
Joined
Jan 31, 2015
Messages
93
Deciding to take one last look at things and revise where DDR4 ECC is at. Turns out I mixed up my Ps and Qs with the kit size: 4x16GB, not 2x16GB. So DDR4 ECC doesn't cost the GDP of Taiwan for a sufficient amount of the stuff; 32GB goes for $260-280, which is awesome. I looked into it and an E5 (faster clocks, solder, likely better voltages) would be an extra $60, an LGA 2011-3 Supermicro motherboard is an extra $100 (it does have dual gigabit ethernet, more fan headers, a heap of PCIe, and heaps of SATA), and the RAM is an extra $10. I wouldn't be unhappy paying that to have more RAM as an option on the table down the line, along with the other perks. I can also think about a cheap Broadwell E5 at some point in the future should efficiency and performance be a concern. Certainly cheaper than later upgrading and scrapping an entire build.
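Totting those deltas up quickly (rounded figures from above; my arithmetic):

e5_premium = 60      # E5 over the E3 route
board_premium = 100  # LGA 2011-3 Supermicro board over the LGA 1150 one
ram_premium = 10     # the RAM delta quoted above
print(e5_premium + board_premium + ram_premium)  # ~$170 extra for the platform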

Crucial DDR4 ECC SKUs still don't include information on the Micron memory employed. Both Amazon and Newegg airbrush the Micron module # from the RAM! So, I'll give this some more thought and call Crucial to double-check if I find myself very interested in going LGA 2011-3.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I never saw my CPU (i3-4370) going over 55 °C (and that's with 35 °C ambient) whatever I do on the NAS (copy, scrubs, ...) with the Intel stock cooler (but I replaced the stock thermal compound with Arctic Silver 5 just to be sure), so I think you worry a bit too much... :)

Down-volting can render the CPU unstable. It depends on the particular CPU you have in hand, and some will accept lower voltages than others for the same model; it's just pure luck. That's why the voltages are what they are: it's to be sure every CPU will be stable.
 

Something

Explorer
Joined
Jan 31, 2015
Messages
93
I never saw my CPU (i3-4370) going over 55 °C (and that's with 35 °C ambiant) whatever I do on the NAS (copy, scrubs, ...) with the Intel stock cooler (but I replaced the stock thermal compound with Arctic Silver 5 just to be sure)
I haven't actually seen my 4370 over 50c, even with Intel stock and the stock thermal paste.

so I think you worry a bit too much... :)
Yeah, I do that. I don't want to make the wrong decision with these purchases. I've been investigating case fans and trying to figure out what I should go for there for a fairly decent balance of efficiency (cost, noise level, cooling performance), and it's been a messy process. I can't find much info on 80mm fans, so I think I'll just trust Noctua for those, but there isn't a lot of information on how their bearing type holds up in different uses, so I contacted them. On the quieter side for 120mm fans I could only find Corsair sleeve-bearing fans with LEDs, which gives pause for thought.

I wouldn't normally put so much emphasis on something this minor and unnecessary, but one of my stated goals with this NAS was for it to be quiet, and so far it is, barely above ambient. Keeping it that low or getting it lower I won't complain about, but an E5 and 6 hard drives do put more of a burden on the cooling situation.

Does FreeNAS have any kind of guide to fan types and their magnetic properties / which are safe to have close to HDs? I couldn't really see anything.

Down-volting can render the CPU unstable. It depends on the particular CPU you have in the hands and some will accept lower voltages than others for the same model, it's just pure luck. That's why the voltages are what they are, it's to be sure every CPU will be stable.
I'm familiar with that as an enthusiast overclocker. Enterprise and server stuff is uncharted territory for me; I'm more at home with consumer-grade equipment, needs, wants, and uses.
 
Last edited:
Joined
Oct 2, 2014
Messages
925
Noctua makes some nice fans; they may be a brand to consider.


Sent from my iPhone using Tapatalk
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Noctua uses a magnetically-stabilized hydraulic bearing. Probably as good (in their fans) as a comparable ball bearing, when it comes to longevity.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
I haven't actually seen my 4370 over 50c, even with Intel stock and the stock thermal paste.


Yeah, I do that. I don't want to make the wrong decision with these purchases. I've been investigating case fans and trying to figure out what I should go for there for a fairly decent balance of efficiency (cost, noise level, cooling performance), and it's been a messy process. I can't find much info on 80mm fans, so I think I'll just trust Noctua for those, but there isn't a lot of information on how their bearing type holds up in different uses, so I contacted them. On the quieter side for 120mm fans I could only find Corsair sleeve-bearing fans with LEDs, which gives pause for thought.

I wouldn't normally put so much emphasis on something this minor and unnecessary, but one of my stated goals with this NAS was for it to be quiet, and so far it is, barely above ambient. Keeping it that low or getting it lower I won't complain about, but an E5 and 6 hard drives do put more of a burden on the cooling situation.

Does FreeNAS have any kind of guide to fan types and their magnetic properties / which are safe to have close to HDs? I couldn't really see anything.


I'm familiar with that as an enthusiast overclocker. Enterprise and server stuff is uncharted territory for me; I'm more at home with consumer-grade equipment, needs, wants, and uses.
I feel like you are getting really focused on low CPU temps. Lower temps are great if you need the extra headroom to increase voltage and get a higher clock, but a CPU running at 30C is going to work the same as a CPU running at 60C. So chasing extra-low temps gains you nothing; you just need to keep it from frying itself.
 

Something

Explorer
Joined
Jan 31, 2015
Messages
93
I'll give Crucial a call later, I need to test this UPS before connecting my NAS to it.

Noctua makes some nice fans; they may be a brand to consider.

Sent from my iPhone using Tapatalk
Noctua uses a magnetically-stabilized hydraulic bearing. Probably as good (in their fans) as a comparable ball bearing, when it comes to longevity.
I won't do Noctua's marketing for them but...

they're one of my first choices for cooling given their long warranties (they know their products last), high build quality, beautifully engineered designs, and exceptional balance of cooling and noise. They're known for the extra effort they put into the packaging, and it really says a lot about who they are. As an engineer myself, I can recognize a labor of love, and their stuff carries that touch, as no expense is spared (something I quite love). They take pride in their products and it shows: they won't cast even the old models aside, offering free mounting kits for newer platforms, really embodying the old adage of keeping the customers happy and letting the business take care of itself. There are plenty of great names in cooling, from Phanteks to Gelid to Corsair, but Noctua is always one of the greats that sticks in mind.

You pay a premium (sometimes a rather obscene one; those case fan prices!) and there are generally more cost-effective options, but I know any regret over the purchase will be about the price, not the performance.

As for the magnetics, I talked with support and the SSO2 bearing's magnetic properties shouldn't be cause for concern if placed in hard drive bays, so Noctua is on my short list for that.

I feel like you are getting really focused on low CPU temps. Lower temps are great if you need the extra headroom to increase voltage and get a higher clock, but a CPU running at 30C is going to work the same as a CPU running at 60C. So chasing extra-low temps gains you nothing; you just need to keep it from frying itself.
I really am; I'll tone that down and readdress the situation later. For now, even with an LGA 2011-3 hex core, I shouldn't have cooling troubles.
 
Last edited:

Something

Explorer
Joined
Jan 31, 2015
Messages
93
Double-checked the motherboards and Supermicro is pretty cheeky. The X10SRI-F comes with a worse PCIe setup (1 fewer slot, lower transfer speeds) than the X10SRL-F in exchange for a different Intel NIC: i350-AM2 vs. i210. Both seemed to be dual-port GbE, so I dug a bit further and checked through ARK. The plain i210 doesn't exist as far as I can tell (the i210-AT, for example, does), and the models that do exist are single-port GbE. So the i210 shouldn't be able to drive 2 GbE connections simultaneously to their fullest, as well as lacking other things (jumbo frames don't seem to be supported).

Should the case arise that I need more ethernet ports or a better NIC, would it be possible to buy an Intel or other PCIe NIC to slap in and have it work with FreeNAS? As far as I could tell that's a yes: FreeBSD has drivers for most modern Intel NICs (among others), so I assume FreeNAS should as well.

---

Crucial support pretends it doesn't know what memory ICs it buys from Micron; thankfully I managed to cheat a little and found this. The readily available CT2K16G4RFD4213 2x16GB kit is A-okay, at least for the motherboards I've checked.

---

So, one final sanity check.

Build (part list)
CPU - Intel Xeon E5-1620 V3 3.5GHz Quad-Core Processor
CPU cooler - Noctua NH-L9x65 33.8 CFM CPU Cooler
Motherboard - SUPERMICRO MBD-X10SRL-F Server Motherboard LGA 2011 R3
Memory - 2x Crucial 16GB (1 x 16GB) Registered DDR4-2133 Memory or 1x Crucial 32GB (2 x 16GB) Registered DDR4-2133 Memory? I prefer not giving Newegg my money, and B&H has free expedited shipping.
HD - 6x Western Digital Red 4TB 3.5" 5900RPM Internal Hard Drive
Case - Silverstone GD07B (Black) HTPC Case
PSU - SeaSonic X Series 400W 80+ Platinum Certified Fully-Modular Fanless ATX Power Supply
Disk drive - Samsung SH-224DB/RSBS DVD/CD Writer
UPS - CyberPower BRG1000AVRLCD UPS
Boot drive - SanDisk Ultra Fit™ CZ43 32GB USB 3.0 Low-Profile Flash Drive
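As part of the sanity check, a rough power budget in Python. The figures are nominal ones I'm assuming rather than measurements: 140W TDP for the E5-1620 v3 per ARK, roughly 4.5W active and ~21W at spin-up per WD Red 4TB, and a 60W allowance for board, RAM, fans, and the optical drive.

cpu_tdp = 140       # W, E5-1620 v3 TDP (ARK figure)
drive_active = 4.5  # W per WD Red 4TB under load (approximate)
drive_spinup = 21   # W per drive at spin-up (~1.75A on the 12V rail)
misc = 60           # W allowance for board, RAM, fans, optical (my guess)
n_drives = 6

print(cpu_tdp + n_drives * drive_active + misc)  # ~227W steady state
print(cpu_tdp + n_drives * drive_spinup + misc)  # ~326W at spin-up

So the 400W Seasonic should have headroom for this build, though an 8-core chip, more drives, or a dGPU would start eating into it.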
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
So the i210 shouldn't be able to drive 2 GbE connections simultaneously to their fullest, as well as lacking other things
They use two i210s. Also, forget about jumbo frames. They're a pain in the ass.
Should the case arise that I need more ethernet ports or a better NIC, would it be possible to buy an Intel or other PCIe NIC to slap in and have it work with FreeNAS? As far as I could tell that's a yes: FreeBSD has drivers for most modern Intel NICs (among others), so I assume FreeNAS should as well.
Yes.
Crucial support pretends it doesn't know what memory ICs it buys from Micron
They don't buy them, they're the same company.

WD reds are 5400RPM drives, not that it matters (for non-pedants).

Excellent PSU, but I prefer to avoid fanless units. It's also a bit on the low side if you want to expand in the future.

Why do you want an optical drive?
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
Your case choice is a bit odd. A HTPC case for a NAS?

In that form factor, you can do much better than that Silverstone in terms of acoustics, capacity, and airflow: HTPCs are designed first to look good, which is almost always a secondary consideration for a NAS.
 

Something

Explorer
Joined
Jan 31, 2015
Messages
93
They use two i210s. Also, forget about jumbo frames. They're a pain in the ass.
Huh, I took a look at both motherboards and couldn't exactly see two NICs on the SRI. Heh, figures jumbo frames are a pain. What would networking be without pain?

Beautiful, cheers!

They don't buy them, they're the same company.
Well, you know what I mean: buy/request/utilize. Crucial should have information on the memory ICs it utilizes from the company it's part of.

WD reds are 5400RPM drives, not that it matters (for non-pedants).
Are they? They have the weird IntelliPower thing or whatever going on. I just copied the link from PCPartPicker. Personally, I don't care about 5400RPM vs. 5900RPM; I just wanted decent, trustworthy drives capable of 24/7 usage with good power figures. I don't expect to more than max out a 1GbE connection any time soon. For 10GbE I would need to purchase NICs and bump a number of my devices up to Wireless AC to make it viable.

I think the only device I have that supports going beyond 1GbE right now is my phone.
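A quick back-of-the-envelope on that, with assumed round numbers (GbE carries about 112MB/s of usable payload after overhead; a WD Red 4TB manages very roughly 150MB/s sequential):

gbe_usable = 112  # MB/s, usable GbE payload after TCP/IP overhead (approx.)
per_drive = 150   # MB/s, rough sequential throughput of one WD Red 4TB
data_drives = 4   # a 6-drive RAIDZ2 streams from 4 data drives' worth

print(data_drives * per_drive / gbe_usable)  # ~5.4x what a GbE link can carry

Even a single drive outruns GbE sequentially, so the network stays the bottleneck until 10GbE enters the picture.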

Excellent PSU, but I prefer to avoid fanless units. It's also a bit on the low side if you want to expand in the future.
The parts marked purchased here I already have in hand: the 4 additional WD Reds and the UPS I recently purchased and can still return, while the rest (like the case and PSU) I've owned for a while. I'm going to go ahead and place the order for the CPU, motherboard, and RAM.

Noise level was a bit of a concern, so I was willing to put a bit extra into that. Seasonic does lovely, high end stuff.

It is a bit on the lower side should I end up with, say, an 8-core processor and need a dGPU in there, but should that happen, I'll undoubtedly need to re-evaluate my choices entirely. Trying to keep spec creep to a minimum (not very well, though).

Why do you want an optical drive?
I don't; I already own it and I was using it for the FreeNAS installation (it also doesn't hurt to keep should the need arise). Later, if I expand beyond 6 drives, I'll likely remove the optical drive. The case has two 5.25" bays which I can convert to three 3.5" bays with an adapter, for 11 drives total, which is plenty for my needs for now.

Your case choice is a bit odd. A HTPC case for a NAS?
Space needs, sadly. I needed to be able to put the case in a relatively open area without it being too hideous.

In that form factor, you can do much better than that Silverstone in terms of acoustics, capacity, and airflow: HTPCs are designed first to look good, which is almost always a secondary consideration for a NAS.
Sadly, size as well as looks were fairly important given where it needed to be put. There are better choices, but I had needs to balance :/.

If I could, I would re-wire the ethernet, go with a more optimal case like a full tower (if not a server case) instead of an HTPC case, and stuff it elsewhere. It'd have the added benefit of letting me dedicate the NAS to being a NAS and leave things like virtualization to a more dedicated, ideal machine.
 
Last edited:

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194