So you want some hardware suggestions.

Status
Not open for further replies.

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
ok..

so I've read heaps over the past few days, and followed the various recommendations listed here and elsewhere...

This is the system I've come up with:
Originally I wanted just a 6-bay system, but I can't find a small case for 6 bays; you have to go up to 2U anyway, and in 2U I can fit 12 disks for a small price difference, so I settled on a 12-bay system.

Motherboard: Supermicro X10SLH-F (6 SATA3), or for a bit more the X10SL7-F, which has 14 SATA/SAS ports (10 of them 6Gbit SATA/SAS2, thanks to the LSI 2308 controller)
CPU: Intel Xeon E3-1220 v3
RAM: DDR3-1600 ECC, 4x8GB (32GB), Kingston KVR16E11K4/32
Case: 12x 3.5" hot-swappable.
Either the Norcotek RPC-2212 (http://www.norcotek.com/item_detail.php?categoryid=1&modelno=rpc-2212) for AUD $335, but then I still need to find an efficient 2U power supply;
or the Supermicro SC826TQ-R500LPB. This has a great power supply: very efficient, only 500W, and redundant too.
The problem is that I can't find any distributors in Oz; the only one I've found stocks the CSPC-826E16-1200LPB. A 1200W power supply, no matter how it's rated, isn't going to perform well when it's only supplying 100-ish watts.

Hard drives: WD Red 4TB (WD40EFRX) x 6. The alternative is the new Seagate Desktop 4TB 5900rpm, which is 3dB quieter, uses slightly less power (0.5W less), and is also $50 cheaper per drive.
OS drive: 8GB USB3 key (in read-only mode)

The system will be set up as RAIDZ2, giving me around 16TB of storage immediately, extendable to 32TB (as of today; there will probably be 8TB disks by the time I need more space!)
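For what it's worth, those capacity figures can be sanity-checked with a quick back-of-the-envelope sketch (raw TB only; this ignores ZFS metadata overhead and TB-vs-TiB differences):

```python
def raidz2_usable_tb(disks_per_vdev, disk_tb, vdevs=1):
    """Each RAIDZ2 vdev spends two disks on parity; the rest hold data."""
    return vdevs * (disks_per_vdev - 2) * disk_tb

print(raidz2_usable_tb(6, 4))           # 16 -> one 6-disk vdev today
print(raidz2_usable_tb(6, 4, vdevs=2))  # 32 -> a second vdev fills the 12 bays
```

Extending the pool later means adding a second 6-disk RAIDZ2 vdev, which is why 16TB grows to 32TB rather than to the 40TB a single 12-wide vdev would give.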

This is for a MythTV backend and a file server / backup box. The connection to MythTV will be via NFS; MythTV can currently record a theoretical maximum of 15 TV streams (around 10Mbit/s each), the average being 3 at any time.
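A quick check of those numbers (my arithmetic, not anything from MythTV itself) shows even the worst case is a modest load for a gigabit link:

```python
streams = 15                 # theoretical maximum concurrent recordings
mbit_per_stream = 10

total_mbit = streams * mbit_per_stream  # worst-case aggregate in Mbit/s
total_mbyte = total_mbit / 8            # same figure in MB/s

print(total_mbit, total_mbyte)  # 150 18.75 -> well under a gigabit link
```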

Now, the great PowerPoint presentation mentioned how the Pentium G2020 was a good alternative. Today there is the Pentium G3220, which works with the above motherboard and is $200 less (only AU$75 here). It's nowhere near as fast as the Xeon, but the great thing about the newer G3220 is that it supports AES hardware encryption.

So the question becomes: how relevant is CPU speed here? Would the E3 make a significant difference as a ZFS RAID controller, or is the G3220 already fast enough?

I built a few high-end ZFS RAID machines a few years back and always used top-of-the-range CPUs; but those servers were often running intensive applications, and on the one that was only used as a file server, I never saw the CPU go over 10% (that was with an Intel Q6600).

I have no experience with FreeNAS, but I've been using FreeBSD since FreeBSD 3 and I'm very familiar with it (always via the command line).

How noisy do you think that beast will be? The 9K rpm 80mm fans in the Supermicro chassis can't be that quiet, I'm thinking.

Help and advice are welcome, especially in regards to the CPU, the motherboard and the 12-bay chassis. OK, with everything really :)

Thanks
JY
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The Supermicro case fans are generally PWM and will not rev to full speed during normal operation (you can check the fan type specifics before you order any given chassis). I am a bit of a fan of the Supermicro route in part because it's well designed and because it goes together easily. The preliminary reports on the X10 boards are generally positive with the exception of some problems with USB and the C1 stepping. Haven't actually laid hands on one here. On the X9 boards, fan speed tiers can be configured via IPMI, if you order the IPMI version of a board. I expect that this applies to X10 also, but again, not laid hands.

Your system with 12 disks will probably be a little more than 100W. Figure a disk as 8W times 12. Then add 40-90 watts for the system. You're probably in the 140-200 watt range, at least with modest-to-fairly-heavy activity. Starting with six disks, yes, less watts. Either way I think fundamentally you're right in looking for the smaller 500W power supply. Suggest you walk through the options in this thread.
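The estimate above can be sketched out with the same per-disk and base-system figures given in this post (real draw obviously varies with spin-up and load):

```python
def system_watts(disks, watts_per_disk=8.0, base=(40.0, 90.0)):
    """Disk draw plus a low/high band for board, CPU, RAM and fans."""
    drive_w = disks * watts_per_disk
    return drive_w + base[0], drive_w + base[1]

print(system_watts(12))  # (136.0, 186.0) -> roughly the 140-200W band
print(system_watts(6))   # (88.0, 138.0) -> starting with six disks
```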

Do not underestimate ZFS's pigginess. I don't have a good feel for how well those lower-end options work. I think if you are comfortable with not having "top of the line" NAS performance and merely want to make sure you can keep up with 150Mbit/s, almost anything you can jam on that board will be fine. But you also mention AES, at which point I'd start getting a little more cautious. Avoiding piggy features like compression and encryption, a low end CPU is much more likely to be "okay."
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
thanks for the answers, very much appreciated

The Supermicro case fans are generally PWM and will not rev to full speed during normal operation (you can check the fan type specifics before you order any given chassis). I am a bit of a fan of the Supermicro route in part because it's well designed and because it goes together easily. The preliminary reports on the X10 boards are generally positive with the exception of some problems with USB and the C1 stepping. Haven't actually laid hands on one here. On the X9 boards, fan speed tiers can be configured via IPMI, if you order the IPMI version of a board. I expect that this applies to X10 also, but again, not laid hands.

They do get great recommendations; if only they weren't so hard to find!
I wrote to their two Australian distributors yesterday and didn't get an answer. And you can't get pricing other than by pressing the "request price" button.

Your system with 12 disks will probably be a little more than 100W. Figure a disk as 8W times 12. Then add 40-90 watts for the system. You're probably in the 140-200 watt range, at least with modest-to-fairly-heavy activity. Starting with six disks, yes, less watts. Either way I think fundamentally you're right in looking for the smaller 500W power supply. Suggest you walk through the options in this thread.

The Seagate "Desktop" is rated at 4.5W, the Red at 5W... so I think full power will be closer to 150W.

Do not underestimate ZFS's pigginess. I don't have a good feel for how well those lower-end options work. I think if you are comfortable with not having "top of the line" NAS performance and merely want to make sure you can keep up with 150Mbit/s, almost anything you can jam on that board will be fine. But you also mention AES, at which point I'd start getting a little more cautious. Avoiding piggy features like compression and encryption, a low end CPU is much more likely to be "okay."


I hope you mean 150MB/s; 150Mbit/s would certainly be a massive step down from my existing mdadm RAID5 (LVM+JFS), where I completely max out my gigabit link.
I remember reading a post from you about the SMB share being single-threaded and a bottleneck there.
After changing smb.conf to increase the buffer sizes with:

socket options = TCP_NODELAY SO_RCVBUF=262140 SO_SNDBUF=262140

I max out the gigabit link using Windows shares with no issue.

While it's $200 more expensive, overall it "only" adds 13.5% to the total price.
I guess the nice thing about the G3220 is that it's so cheap you could almost afford to waste it!

The benchmarks I've read place it on par with the E3 for AES encryption (thanks to the Intel AES instruction set being available there).
It's a 3GHz processor; amazing that this would one day be considered lower-end :)

I'd like this system to age gracefully and allow a future upgrade to 10Gbit Ethernet... I don't expect to ever max out that link, but I'd at least like more than one gigabit client at a time to be able to max out theirs.

Thanks again
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I hope you mean 150MB/s

You said

15 TV streams (around 10Mbit/s each),

which - at 150Mbps - I think is easily accomplished. However, 150MB/sec is in excess of 1Gbps. Generally people have found this to be harder to hit.

a massive step down from my existing mdadm RAID5 (LVM+JFS) where I completely max out my gigabit link.

ZFS is massively more heavyweight, and has lots more overhead. This is always distressing to newcomers. After a while, you just look at it and go "((sigh)) whatever." You pay for features somehow.

I don't have specific advice for CIFS other than that faster cores are better than slower cores. I'm not a "home user" and the uses for FreeNAS here are dependent on a whole different set of requirements - performance in megabits per second not being a primary one. I do encourage you to do lots of reading of what other people have managed.
 

JimPhreak

Contributor
Joined
Sep 28, 2013
Messages
132
PSU recommendations for an i3-4130 system running 6-8 WD Red 3TB drives and 4 Yate Loon D12SL-12 case fans? I have a Corsair HX650w lying around, but my guess is that won't put me in the 30-50% efficiency range since it's too powerful.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
PSU recommendations for an i3-4130 system running 6-8 WD Red 3TB drives and 4 Yate Loon D12SL-12 case fans? I have a Corsair HX650w lying around, but my guess is that won't put me in the 30-50% efficiency range since it's too powerful.


What's your case? That determines the form factor of your PSU.
For a normal form factor with no redundancy, I like Seasonic. They make extremely quiet, well-built and highly efficient PSUs, including a whole Platinum-rated range.
A Red drive draws a maximum of 5W during seek, so 40W for the disks; add the 54W CPU TDP plus another 30W for the fans and the rest of the system: so say 150W. A 300W PSU would be good, or a very efficient 400W one: http://www.seasonicusa.com/Platinum_Series_FL2.htm ; fanless... it can't be any quieter.
That PSU only has 6 SATA connectors, so you'll need some adapters to connect 8 drives unless you have a different type of backplane.
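A rough sketch of that sizing logic, using the figures from this post (the 150W budget rounds the raw sum up for headroom; the usual efficiency sweet spot assumed here is 30-50% loading):

```python
drives, w_per_drive = 8, 5        # WD Red maximum seek draw
cpu_tdp, fans_and_misc = 54, 30   # i3-4130 TDP plus fans/board/RAM

load_w = drives * w_per_drive + cpu_tdp + fans_and_misc  # 124W raw sum
budget_w = 150                                           # round up for headroom

# how heavily each candidate PSU would be loaded at that budget
for psu_w in (300, 400, 650):
    print(psu_w, f"{budget_w / psu_w:.0%} loaded")
```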
 

JimPhreak

Contributor
Joined
Sep 28, 2013
Messages
132
What's your case? That determines the form factor of your PSU.
For a normal form factor with no redundancy, I like Seasonic. They make extremely quiet, well-built and highly efficient PSUs, including a whole Platinum-rated range.
A Red drive draws a maximum of 5W during seek, so 40W for the disks; add the 54W CPU TDP plus another 30W for the fans and the rest of the system: so say 150W. A 300W PSU would be good, or a very efficient 400W one: http://www.seasonicusa.com/Platinum_Series_FL2.htm ; fanless... it can't be any quieter.
That PSU only has 6 SATA connectors, so you'll need some adapters to connect 8 drives unless you have a different type of backplane.


Would this suffice?

http://www.newegg.com/Product/Product.aspx?Item=N82E16817151117
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
Has anyone read this article?
https://calomel.org/zfs_raid_speed_capacity.html

Very interesting benchmarks.
What surprised me, however, is the difference in performance between the onboard SATA on the various motherboards:

1x 2TB a single drive - 1.8 terabytes - Western Digital Black 2TB (WD2002FAEX)

Asus Sabertooth 990FX sata6 onboard ( w= 39MB/s , rw= 25MB/s , r= 91MB/s )
SuperMicro X9SRE sata3 onboard ( w= 31MB/s , rw= 22MB/s , r= 89MB/s )
LSI MegaRAID 9265-8i sata6 "JBOD" ( w=130MB/s , rw= 66MB/s , r=150MB/s )

The Asus uses an AMD SB950 controller, the Supermicro an Intel C602 chipset.
The LSI card is the only one anywhere near the technical specs for that hard drive; the onboard controllers are three times slower!
 

JimPhreak

Contributor
Joined
Sep 28, 2013
Messages
132
yes, that will do the job...

IMHO, the 400W I linked to is a better choice. It's a more efficient PSU (being Platinum rated means it's 92%+ efficient, even at 20% load) and it's fanless.


It's also twice the price and I'm already well over budget on my build.
 

JimPhreak

Contributor
Joined
Sep 28, 2013
Messages
132
I plan to run 8x3TB drives in a RAIDZ2 (so 18TB of usable space). Will 16GB suffice even if I'm using nowhere near that amount of space? Or is the RAM recommendation based on the total space of all disks in the zpool, regardless of whether you're going RAIDZ1, Z2, Z3, etc.?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It is deliberately a bit vague. An aggressive interpretation would be that you ought to have 24GB (3x8). But 16GB ought to be fine for many 18TB filers. Once you get out of the realm of way too small (4-6GB) and past pretty small (8GB), it is usually a bit less critical. It can still affect performance, but usually you have to make much larger changes (16GB->32GB) to address those sorts of perf problems.
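The rule of thumb being applied here is roughly 1GB of RAM per TB of raw pool, with an 8GB floor; a tiny sketch of that guideline (my formula, not anything FreeNAS enforces):

```python
def ram_guideline_gb(raw_pool_tb, gb_per_tb=1.0, floor_gb=8):
    """~1GB per raw TB of pool, never dropping below the 8GB minimum."""
    return max(floor_gb, raw_pool_tb * gb_per_tb)

print(ram_guideline_gb(24))  # 24.0 -> the "aggressive" reading for 8x3TB raw
print(ram_guideline_gb(4))   # 8 -> small pools still want the floor
```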
 

John M. Długosz

Contributor
Joined
Sep 22, 2013
Messages
160
… even if I'm using no where near that amount of space? Or is the RAM recommendation based on the total number of space of all disks in the zpool …

FWIW, I have a fresh build with 12TiB (10.2TB) capacity in a z2, and Plex set up. The reporting screen shows 5½GB free RAM and 9.2GB "wired", whatever that means.

I think it doesn't demand that huge amounts of overhead be used for even a single file request; rather, the advice assumes the system will be used by many users hitting different parts of the file system all at the same time.

I also conjecture that the assumptions about who uses TBs of storage in a NAS have changed recently enough that the sage wisdom, or even the design goals, haven't caught up: a small number of home users with huge files, as opposed to many users in an office or on a web site.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Except that if you read up on what "free" means, it doesn't mean RAM that isn't allocated. In Windows, there is typically only a single-digit percentage of RAM that isn't allocated. The cache will free up RAM if a program needs more than what is available, and FreeBSD does the same thing. So don't be fooled by that "free" RAM; it's somewhat lying to you.

RAM needs depend on a lot of factors; MANY factors can have drastic effects on the RAM needed to get good performance. I had a 36TB pool, and as a single user I couldn't stream a single video file to my desktop to save my life. I upgraded from 12GB to 20GB and poof, instant 1GB/sec locally. You will know when you need more RAM: pool performance will suffer, sometimes to the extreme.

The thumbrule is nothing more than a good guide for how far you should expect to be able to go with your hardware. Building a system with just 8GB of RAM as the upper limit is crazy if you plan to make a 20TB pool. But starting with 16GB of RAM, with the potential to upgrade to 32GB, on a 50TB pool isn't out of the realm of possibility, nor is it even that unreasonable. Just recognize that you might have to buy some more RAM once you get the system up and running and start putting data on your pool.
 

John M. Długosz

Contributor
Joined
Sep 22, 2013
Messages
160
I found that clicking on "Display System Processes" will pop up a terminal window that also includes memory stats more specific to ZFS.

9373M Wired, 421M Buf, 5700M Free
ARC: 8200M Total, 300 MFU, ....

So the read cache is included in "Wired".
 

James Snell

Explorer
Joined
Jul 25, 2013
Messages
50
P=I*E
I=P/E


I like the zeal here. The only thing I'll add is that this formula is incomplete. For DC systems it holds as-is, but the power supply is, in part, an AC system. Power in AC is actually P = I*E*pf (or, as I much prefer to express it, P = V*I*pf).

pf is the power factor of the power supply. It's a complicated and endless topic; go Google it if you're interested in learning more. Suffice to say, the power factor has a lot to do with how the power supply converts AC to DC. If you have a power factor of 1, the situation is ideal and P = V*I holds, as in the DC case. But apparently a pf of 1 doesn't really happen. I *believe* circa-2013 power supplies with active power factor correction can reach a pf of something like 0.9, which is getting great, but of course the pf of the PSU also varies with system load, ambient temperature, component age, etc.
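To make the formula concrete, here is the same current draw computed with and without the power-factor term (the 0.9 figure is just the rough active-PFC value mentioned above, and the 120V/2A numbers are illustrative):

```python
def ac_power_watts(volts, amps, pf=1.0):
    """Real AC power: P = V * I * pf. With pf=1 this reduces to the DC case."""
    return volts * amps * pf

print(ac_power_watts(120, 2.0))       # 240.0 -> ideal pf of 1
print(ac_power_watts(120, 2.0, 0.9))  # 216.0 -> typical active-PFC supply
```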

Friggin electricity. How the frak did Tesla figure it out!? We're all n00bs. Boo.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I have 15 years of electrical theory, so I do know about power factor and all that stuff you just mentioned. I do have a Kill A Watt meter, and every PSU I've purchased since 2008 or so has given me a power factor of 0.99 to 1.01 at the wall (basically unity). To meet any of the 80+ specifications you must have a pf better than 0.9 at 100% load. I believe silver and above also requires 0.9 or better at 20% and 50% now, but that might be gold and above. My Corsair Gold 1200W as well as my little 450W (Antec?) both show 1.0 right now at my wall.

The thing to keep in mind is that these PSU companies want to make a PSU as cheaply as they can while being reliable and achieving the highest efficiency possible. Generally, the cost difference between meeting 0.9 and 1.0 is extremely low (less than 50 cents per PSU), and since you are trying to get every single ounce of efficiency you can, you accept that very small cost increase so you can enjoy the 2-7% increase in efficiency. After all, that might make the difference between being bronze or being gold in the ratings.

So my comment to you is "yes it matters.. but thanks to 80+ certification and the economics of selling PSUs, it really doesn't matter for the end user".
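The efficiency side of the certification, separate from power factor, is what actually shows up on the meter; a hedged sketch using rough bronze-ish and gold-ish figures (the exact percentages vary by load point):

```python
def wall_watts(dc_load_w, efficiency):
    """AC draw at the wall for a given DC load and conversion efficiency."""
    return dc_load_w / efficiency

# 150W of DC load through an ~85% vs an ~90% efficient supply
print(round(wall_watts(150, 0.85), 1))  # 176.5
print(round(wall_watts(150, 0.90), 1))  # 166.7
```

That ~10W gap is the "2-7% increase in efficiency" above, expressed in watts you pay for.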

As for your comment about "friggin electricity", it's a mess. I know EEs that get confused by all this stuff despite the degree. Sometimes I wonder how the heck Tesla did the stuff he did with the technology of his time. He'd probably make a mess in his pants if he were around today to see all of the toys we have.

Edit: Here's some of the PSU's I have at home...

Corsair HX1000(80+ standard) - 0.98 to 1.0 as tested http://www.plugloadsolutions.com/psu_reports/SP131-CORSAIR-CMPSU-1000HX-Report.pdf (certified March 2008)
Corsair HX1200(80+ Gold) - 0.99 to 1.0 as tested http://www.plugloadsolutions.com/psu_reports/CORSAIR_CMPSU-1200AX_ECOS 2088_1200W_Report.pdf (certified May 2010)
CoolerMaster M600(80+ Bronze)- 0.98 to 1.0 as tested http://www.plugloadsolutions.com/psu_reports/COOLER MASTER_RS-600AMBA-D3_ECOS 2129_600W_Report_Rev 2.pdf (certified July 2010)

You can actually look up any PSU that has ever received the 80+ rating at http://www.plugloadsolutions.com/80PlusPowerSupplies.aspx . I actually look up any PSU I'm thinking about buying just because I'm nerdy like that.

So yeah, a pf of 1.0 is actually not that uncommon, even as of 5 years ago. :) You might want to read up on whatever document you're using for your info, as it's pretty out of date.
 

James Snell

Explorer
Joined
Jul 25, 2013
Messages
50
....
So my comment to you is "yes it matters.. but thanks to 80+ certification and the economics of selling PSUs, it really doesn't matter for the end user".
....


I guess my comment is more relevant to the people who buy el-cheapo, sub-bronze-rated PSUs. Maybe they don't really even exist? Granted, I recently picked up several 80+ Corsair TX950s, which were serious overkill for the project I got them for, as I wanted to be sure my PSU wasn't the issue. Suffice to say, of the four PSUs I've ever had fry, two were these. Maybe ASUS motherboards don't mix with Corsair? Why did Intel reduce their motherboard offerings!!!! WHY!!?


As for your comment about "friggin electricity", its a mess. I know EEs that get confused with all this stuff despite the degree. Sometimes I wonder how the heck Tesla did the stuff he did with the technology of his time. He's probably make a mess in his pants if he were around today to see all of the toys we have
One of my degrees is in EE. And yes, I agree it's a mess. My guess is Tesla looked at the whole system through a very different lens. Systemic issues in the self-serving industry that is education have created situations that prevent truly insightful means of teaching from getting much traction.

<story time>
My university has this math prof who has mastered explaining calculus. He managed to explain Calculus I and II in basically a smattering of pages, all hand-written. The students began photocopying his notes and passing them around. People like me, who've always struggled to conquer math, ended up passing, on account of this guy getting to the point and just explaining stuff.
Years later, I emailed this prof, asking for a PDF of his notes, as I felt like a solid review. He told me that in the years that followed my graduation, students stopped attending their calculus classes yet were still getting the grades, all because of his notes. The other profs needed to justify their value, and the fact that they take 80 hours of instruction to accomplish what this guy could do with a photocopier (and with poorer results). So the sage math notes were banned; they're not allowed anymore. People now have to go to class to learn calculus. Isn't the point to actually learn!? I guess learning calculus in this esteemed manner is really code for justifying a brutally expensive education, all while stroking a few flimsy egos and wasting people's sacred time.
Milk shot out of my nose when he told me that had happened. And I haven't drunk any milk in years!
</story time>

<rant time>
WTF! THIS IS WHY I HAVE A DEGREE IN EE AND STILL FEEL LIKE A NEWB IN EE. The best part is, I'm employed, I make electrical stuff and it delivers, so I'm relatively good at it... and I'm relatively good at it not because of my education, but because I got tired of memorizing meaningless lines on a page and started trashing all my toys and buying kits that taught me the basics by (non-fatally) electrocuting my girlfriend's ill-mannered cat. I got the job, mind you, in large part because I had a paper airplane made from parchment that reads "Degree" on it. If this is actually the real world, no wonder ZFS requires ECC memory. Oh the humanity!
If we'd solve some of these societal problems, like useless profs having to justify their existence by providing substandard education, maybe by now we'd all have mastered faster-than-light travel, anti-gravity, world hunger and how to safely store a photo of your puppy.
</rant time>

Annnyway. I read a really tasty endless explanation of power factor some months ago. It gave me massive wood. I guess if you buy a part-way decent PSU, you can go ahead and roll with P=VI (or P=I*E, if you like pastries).
 
Status
Not open for further replies.
Top