BUILD 6 x 3TB first time build

Status
Not open for further replies.

sheepdot

Dabbler
Joined
Aug 10, 2013
Messages
31
Hi, I'm hoping to get some quick feedback, as I'm unexpectedly putting money into this right away.
I'd like to run RAIDZ2. The goal is mainly backup of the household computers and media storage.
I suppose if I want to add the SSD, I'll have to get some sort of add-on SATA card, so I'm thinking I might get the M1015 and take it up to twelve 3TB drives (2 spares).


Any thoughts are appreciated.

6 x 3TB WD Red drives
1 x 64GB Intel SSD (already own it)
Rosewill rackmount chassis





SUPERMICRO MBD-X9SCL-F-O LGA 1155 Intel C202 Micro ATX Intel Xeon E3 Server Motherboard


Intel Xeon E3-1230 V2 Ivy Bridge 3.3GHz (3.7GHz Turbo) LGA 1155 69W Quad-Core Server Processor BX80637E31230V2

Kingston 32GB (4 x 8GB) 240-Pin DDR3 SDRAM DDR3 1600 ECC Unbuffered Server Memory w/TS Model KVR16E11K4/32
 

Z300M

Guru
Joined
Sep 9, 2011
Messages
882
I'm surprised nobody's responded yet. I claim no special expertise, but I pass on a couple of points that I've picked up through reading messages here:

1. If you are using the SSD for cache (or maybe for other purposes as well), you should have two of them, mirrored -- and some have suggested that they should be different brands using different technology.

2. People suggest that the best performance is achieved by using particular numbers of drives: a power of two of data disks plus parity (2^n + z). So for RAIDZ2 you'd want six drives (4+2) or ten drives (8+2), etc. Plus whatever spares you want, of course.

I recently bought an M1015 on eBay and flashed it to IT mode. So far it seems to be fine.
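The "power of two plus parity" rule of thumb above is easy to enumerate. A small sketch (the 12-drive cap matches the M1015 plan mentioned earlier; the rule itself is just the forum heuristic, not a hard ZFS requirement):

```python
# Enumerate RAIDZ vdev widths that follow the "data disks should be a
# power of two" rule of thumb: total drives = 2^n + parity.
def recommended_widths(parity, max_drives=12):
    """Return vdev sizes (power-of-two data disks + parity) up to max_drives."""
    widths = []
    n = 1
    while 2 ** n + parity <= max_drives:
        widths.append(2 ** n + parity)
        n += 1
    return widths

print(recommended_widths(2))  # RAIDZ2 -> [4, 6, 10]
```

So with 12 bays, a 10-wide RAIDZ2 vdev plus two spares fits the heuristic.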
 

sheepdot

Dabbler
Joined
Aug 10, 2013
Messages
31
Thanks for the response. How worthwhile is it to add SSD caching if I'm mainly storing/serving media files and backups of the household computers? I had planned on using it because I upgraded to a larger SSD for my desktop, but I'm not sure I want to spend money on another SSD unless the cost increase is warranted.

I pulled most of the parts out of the hardware guide, though I haven't seen anyone say anything about the Rosewill chassis.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It's not. I don't use one and very few people have a use for it. If you had read the presentation in the newbie section (stickied) you'd have known this. :P
 

goudviske

Dabbler
Joined
Aug 2, 2013
Messages
15
Seems decent, but personally I'm convinced the WD Reds aren't worth the premium over the Greens.
They're the same drives with different TLER behavior, and a tad more expensive because of it.

Personally, at the moment I'd stay away from WD altogether. I've read about so many WD hard drive failures recently. It's probably the same for all vendors, but I find it striking that WD, which used to be the reference (pre-Thailand floods), has degraded so much in 1-2 years. RAIDZ2 is certainly no luxury anymore...
 

sheepdot

Dabbler
Joined
Aug 10, 2013
Messages
31
My (possibly incorrect) understanding was that because the WD Greens are geared toward AV use, bit-level errors are tolerated; WD Reds, on the other hand, are designed for NAS use and have better error handling/checking.

In terms of comparing WD Reds versus similar drives, my feeling was that the longer warranty with the Reds was worth it in a RAIDZ2 environment that would be tolerant of having a disk or two out at a time.
 

goudviske

Dabbler
Joined
Aug 2, 2013
Messages
15
Hi,

You are right when you say the Reds are designed for NAS use.
However, in most people's understanding, a NAS is a QNAP or Synology box where you plug in hard drives and the box does some voodoo magic that makes files/volumes/services available to clients.

Warranty may be a factor in favor of the Reds, you're right there, but as far as the drives go, they are identical to their Green counterparts. The only difference is TLER, which is very beneficial, if not required, for hardware RAID.
 

sheepdot

Dabbler
Joined
Aug 10, 2013
Messages
31
So you would say that the ZFS error checking would be sufficient without the added benefit of the TLER in the reds?
 

goudviske

Dabbler
Joined
Aug 2, 2013
Messages
15
I don't believe the Red drives do any form of error checking, at least nothing the other models don't also do.
And in my opinion, the TLER on the Reds (which is the reason you would buy them) has no real added value in a software RAID implementation such as ZFS.
 

russnas

Contributor
Joined
May 31, 2013
Messages
113
WD Red NAS hard drives have been extensively tested for compatibility in 1-5 bay NAS enclosures:
http://www.wdc.com/en/products/products.aspx?id=810

From what I've gathered: lower power use, high heat tolerance, 24x7 reliability, TLER control, and 3D Active Balance technology, which reduces vibrational wear on the drives.

I've read that because the Greens have IntelliPark, it can reduce their life in servers, but it's fine in domestic situations.

You can disable the function or change the parking timer from 8 seconds to 300.

I got two Greens for my backup, so I will try modifying the parking time. I didn't choose the Reds due to the cost, only one extra year of warranty, and the fact that I'm not running 24x7.

http://forums.whirlpool.net.au/archive/1367904

http://www.ngohq.com/news/19805-critical-design-flaw-found-in-wd-caviar-green-hdds.html

http://www.synology.com/support/faq_show.php?lang=enu&q_id=407

http://en.wikipedia.org/wiki/Error_recovery_control
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Oh, my. This thread really took a turn. Guess I'll clear up a few misconceptions real quick.

TLER is important for hardware RAID controllers (and many SATA controllers). TLER prevents a hard drive from being "dropped" from a hardware RAID because of a single bad sector. This was a big deal years ago, and it's only getting worse as drives get bigger. The reason is that if you have a RAID6 with 8x4TB drives and one fails, you want to drop in a new drive and rebuild the array. Well, because of the URE (unrecoverable read error) rate, there's a very good chance that a second disk will encounter an error at some point during the rebuild. If it takes too long to do error recovery (something TLER minimizes or prevents), then you will have a second disk drop from the array. It's kind of frowned upon to have 2 disks missing from an array at the same time. ;) This scenario can and does happen with non-TLER drives. Don't like it? Buy the very expensive enterprise RAID drives. They simply report to the SATA controller that a hard drive error occurred and let the RAID controller do the data reconstruction with parity data. After all, that is what RAID with redundancy is supposed to do, right?
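To put a rough number on that rebuild risk: consumer drives are typically specced at one unrecoverable read error per 10^14 bits. A back-of-the-envelope sketch for the 8x4TB RAID6 example above (reading the 7 surviving drives in full; the URE spec is a worst-case figure, so treat this as illustrative, not a measured failure rate):

```python
import math

# Illustrative only: probability of hitting at least one URE while
# reading the 7 surviving 4TB drives during a RAID6 rebuild, assuming
# the quoted worst-case consumer URE rate of 1 error per 1e14 bits.
bits_read = 7 * 4e12 * 8           # 7 drives x 4 TB x 8 bits per byte
per_bit = 1e-14                    # unrecoverable read errors per bit
p_any_ure = -math.expm1(bits_read * math.log1p(-per_bit))
print(f"{p_any_ure:.2f}")          # roughly 0.89
```

Even if the real-world rate is far better than the spec sheet, the point stands: the more you read during a rebuild, the more a slow or dropped error-recovery cycle matters.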

With non-RAID environments, some standard SATA/SAS controllers have a built-in timeout anyway. If a drive doesn't have TLER and keeps trying to read the bad sector forever (some do), the disk is as good as useless after that point anyway, even if your SATA controller doesn't disconnect it from the system. ZFS will start racking up errors, the system may crash if your SATA controller is bargain-bin quality, etc. If you are like a lot of people, you may have purchased a RAID controller for cheap on eBay, or be reusing an old controller, but have set it to JBOD mode. Every controller I've ever worked with will still drop hard drives without TLER if problems begin, even in JBOD mode.

ZFS, being a software RAID (of sorts), has no control over if/when a drive is "disconnected" by your SATA controller. So TLER is nothing more than protection against losing more drives during the resilvering process. Some people find that the extra $10 or so gives you good peace of mind; others don't care. It's like having a seatbelt in your car. You can choose to ignore it, but that one time you need it, you'll wish you had used it. In a business environment, going with anything besides enterprise-class or NAS-class (if there is such a thing) is crazy. At home, it's all about how important your data is. If you aren't making backups religiously, then going with anything besides NAS-class is taking a risk.
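As an aside: on drives that expose it, the TLER-style timeout is the standard SCT Error Recovery Control setting, which smartmontools can read or set from the OS. A minimal sketch that only builds the smartctl command line (the device path and the 7-second, i.e. 70-decisecond, timeouts are illustrative assumptions, and not every drive supports SCT ERC):

```python
# Build (but don't run) the smartctl invocation that sets SCT Error
# Recovery Control -- the generic name for WD's TLER. Timeouts are
# given in deciseconds, so 70 means 7.0 seconds.
def scterc_command(device, read_ds=70, write_ds=70):
    """smartctl arguments to set the SCT ERC read/write timeouts."""
    return ["smartctl", "-l", f"scterc,{read_ds},{write_ds}", device]

print(" ".join(scterc_command("/dev/ada0")))
# smartctl -l scterc,70,70 /dev/ada0
```

Note the setting usually resets on power cycle, so it would have to be reapplied at boot.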

But I will tell you that when it comes time to buy drives: I have 24 WD Greens, and aside from self-inflicted failures from overheating early this summer, I've had excellent results with them. Only 1 failure in more than 3 years of uptime for my drives. If I had to build a new system today, I'd probably go with the WD Reds just for the longer warranty and the potential TLER. But WD Reds haven't been used extensively in the NAS environment yet; they are relatively new, having been on the market for less than a year.

The WD Greens' IntelliPark is easily disabled with wdidle.exe, so saying that WD Greens aren't good for NASes is rubbish. Those who know better can easily enjoy the lower-power, cooler-running WD Green series just by using wdidle.

One thing I'd absolutely never do is buy 7200RPM+ drives, except for situations with amazing cooling. They draw too much power, don't really provide much in latency savings when you consider a file server's total function, and they are more prone to overheating, which leads to premature failure. So think before you buy. Your data may rely on it.
 

russnas

Contributor
Joined
May 31, 2013
Messages
113
Thanks for that. There's been a lot of confusion since they released the Reds, and the Greens get a lot of bad rep online. The Greens are ideal for storage: low RPM, low power consumption, and low cost. I haven't had an issue with them so far, and they're usually my first choice.

Looking at the WD Red NAS site, they have tested them on those 2-5 bay NAS units. I haven't had one, so I don't know the features, data reliability, or file systems they use, but with the drives close together, a lack of airflow due to the enclosure size, and 24/7 operation, those units would suit the Reds. All my HDDs have been in desktop cases with adequate airflow.

http://www.wdc.com/en/products/products.aspx?id=810
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
Seems decent, but personally I'm convinced the WD Reds aren't worth the premium over the Greens.
They're the same drives with different TLER behavior, and a tad more expensive because of it.

A 3TB Red is heavier than a 3TB Green, suggesting different internals...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Actually, it's simply suggesting that the solid block of metal that makes up the "shell" of the hard drive has extra metal to support better heat transfer. Big whoop.
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
I'll also pitch in that I've had 6 1TB Greens on an ICH-based RAID 5 running 24/7 since March of 2010, pretty much without a hitch. I say "pretty much" because I have had 2 different drives drop out of the array. Each time a reboot cured the issue (the drives were not replaced); I'd simply push them back onto the array, it would rebuild for about 8 hours, and all was well. It's still running right now. One time a drive dropped when the system was essentially idle; the other time it had been getting hammered pretty hard for around 18 hours. They are in a 3U Norco case with 3 other drives (one a 2TB Green) with pretty good ventilation. Three of my clients also use them as backup media (transported off-site after each use) and they haven't had any issues with them.
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
Actually, it's simply suggesting that the solid block of metal that makes up the "shell" of the hard drive has extra metal to support better heat transfer. Big whoop.

Maybe, but heat is the enemy.
 

Mguilicutty

Explorer
Joined
Aug 21, 2013
Messages
52
From the WD PDF spec it's lighter.

Hmm. I could have sworn the last time I had one of each, the Red felt a bit heavier. I'll have to check again; maybe I'm just thinking backwards...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
There's a tradeoff between metal location, amount of metal, and heat transfer. You want enough metal to absorb the heat and whisk it away from critical components, but not so much that the metal's own specific heat capacity inhibits heat transfer. ;)
 