Western Digital RED


HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I'll just say "it looks promising". There isn't enough useful information yet to say whether they will work well or not, price especially. It looks like they're trying to peddle the WD RED drives with their NASware firmware (though there is a hardware compatibility list of other NAS devices). If the drives won't work well in a home-built NAS then I wouldn't recommend them.

For RAID environments, a low TLER timeout is important to keep a drive from being kicked out of the array by the RAID controller. WD used to sell drives that let you enable the setting with a DOS-based WD tool; then the drives were quietly changed and the feature removed. My guess is they wanted more people to buy the very expensive enterprise-class drives for RAID, because they pocket more money that way. I'm sure these will be more expensive than your standard desktop drive. The question is how much more. If it's more than $20-40, I bet most people who build large arrays won't be looking at them.
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
Hmm, interesting! Thanks Holy, nice to see you around again. I'll have to keep an eye on these since I'm hoping to start replacing disks in my NAS with some larger ones. We'll have to wait and see how much they want and how reliable they actually are.

@Noobsauce80,

TLER is more of an issue for hardware RAID than for software RAID like ZFS.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That's why I mentioned the RAID controller ;). I have seen that exact issue before... and it sucks when it happens to you! LOL
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Interesting prices... One difference that may not be noticed by everyone: Green drives have a 2-year warranty and Reds have a 3-year warranty.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
TLER is relevant even on "software RAID", because fileservers generally should not go catatonic. TLER is simply more important on hardware RAID, where the lack of it may cause a controller to decide the drive has failed.
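For what it's worth, you can check whether a given drive supports this before you build around it. TLER is WD's name for what the ATA spec calls SCT Error Recovery Control, and smartmontools can query it. A minimal sketch (the device name /dev/ada0 is just an example, and smartctl typically needs root):

```python
import subprocess

# Query SCT Error Recovery Control (the vendor-neutral name for TLER)
# using smartmontools. /dev/ada0 is an example FreeBSD device name.
result = subprocess.run(
    ["smartctl", "-l", "scterc", "/dev/ada0"],
    capture_output=True, text=True,
)
print(result.stdout)
# A drive with the feature reports its read/write timeouts in tenths of
# a second; a drive without it reports that SCT ERC is not supported.
```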

One interesting thing to note is that a fair number of "prosumer" or "power user" NAS boxes are entirely software driven, and WD has aimed the Red series at these as well as the slightly upscale units that might have some minimal RAID hardware. For example, units like the Synology DS1512+ have the Intel ICH10R, but for error-recovery purposes that is still essentially a software-driven unit; the drives aren't going to go offline unless the system decides to take them offline.

I've been watching the storage market for many years, and the prevailing thinking has been that "RAID" implies "high performance" and "open corporate wallet". As such, features like TLER were often restricted to ridiculously expensive, fast, power-hungry drives. The irony here is that for truly massive storage projects you don't need fast drives: when you've got 48 or 72 drives in a box, even with 10GE there are limits to how much data you can shovel around, and when you're setting up many of these boxes to provide Internet services, your bottleneck simply isn't likely to be the individual storage devices. You also don't want power-hungry drives. That has left designers of such projects with unpleasant choices to make. For example, the WD RE4 2TB model is around $250, while the Green 2TB is around $100. That makes a huge budgetary difference when you're buying two cases of drives for a single server.

Yet, and here's the interesting point, when you have even just 24 2TB drives, are you storing small files or large files? Because if you're storing small files, the difference between 5400RPM and 7200RPM may matter, but for large files it basically doesn't. If you start running 50 megabits per second of data into your 40TB array, that's about 6MB/sec, and it will take 77 *days* to fill, touching each location on the disks only once (in general). On the other hand, if you're storing small files, then seek latency comes into play, and you'll discover that you actually cannot fill the disks before they are statistically likely to fail.
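If you want to check that arithmetic yourself, a quick sketch using the figures from the paragraph above:

```python
# Back-of-the-envelope check of the fill-time figure above.
ingest_bits_per_sec = 50e6                       # 50 megabits per second
ingest_bytes_per_sec = ingest_bits_per_sec / 8   # 6.25 MB/sec
array_bytes = 40e12                              # 40 TB

days_to_fill = array_bytes / ingest_bytes_per_sec / 86400
print(f"{days_to_fill:.0f} days")  # ~74 days; rounding down to 6 MB/sec gives ~77
```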

So, from a large scale storage system designer's perspective, the "RAID edition" drive strategy from manufacturers kind of sucks. For some time now, the pragmatic design choice has been to buy "desktop" or "green" drives, because that was where 5400RPM and reasonable prices and better power budgeting was. And now it seems that WD has stuck what is essentially RAID firmware on "Green" series drives, at a modest price premium, which is an attractive option for some of us where no good options existed before.

Really, what it comes down to is that I think they finally figured out that desktop computers are slowly dying, that the new generation of NAS devices has little need for super-expensive hard drives (since they're almost universally unable to exploit the full potential of a drive anyway), and that there's some money to be made by offering a 5400RPM "RAID" firmware drive.
 

lexieb007

Dabbler
Joined
Jul 28, 2012
Messages
10
I'm looking to put these into a 12-bay rack-mounted NAS running RAIDZ2. The WD Reds are a new release, so of course they haven't been "certified" beyond certain 5-bay boxes available from various commercial "partners"...

Will there be a problem using these drives in a 12-bay "enterprise" style machine? My NAS is a commercial-grade box, but it will only be used for backup and movie streaming in a home environment, so I'm thinking these drives will be ideal: they mix WD Green traits (energy efficiency) with 24/7 operation, low heat, etc. I was going to use Hitachi Ultrastars, but 12 of those seems like overkill, expensive to leave powered on, and expensive to buy... And what about the lack of vibration protection in the WD Reds when they're used in a 12-bay enterprise machine? Problems?

I'm a newbie to FreeNAS, so I don't want too many hassles, or drive failures. Thoughts and advice, everyone?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I run 16 and 22 drives in 2 different 24-drive cases with no issues. Personally, I'd shy away from the Reds until we get more feedback on how well they work. It would suck to spend SO much money only to find they're not a good buy for FreeNAS. If Green drives work I'd expect Reds would too, but do you want to gamble that much money to find out the Reds don't work, or are unreliable with FreeNAS or your particular hardware?

I have personally bought 12 drives for a NAS only to find out the model I chose wasn't compatible. It was painful both emotionally and monetarily to have to go back and basically build a new system because of an oopsy like that. When I bought the drives I was convinced they would be compatible; others had used them with no issues. But a firmware change broke all those rules and I got boned over... badly. The drives worked great for 3 months, then they started randomly dropping out of the array due to a firmware bug. Because they weren't actually broken or failing there was no way I could RMA them; they'd pass all of their diagnostics with flying colors. I'll never forget the year I paid for my NAS twice within 5 months because of a mistake of this magnitude. After the cost involved in building a second NAS because the first one was so unreliable (not to mention the nights where I thought I'd wake up to find all of my data gone), I'll make 200% sure the drives will work before I buy them, and I'll NEVER gamble on the state of the art again. Tried and true is where I'm staying.
 

lexieb007

Dabbler
Joined
Jul 28, 2012
Messages
10

Wow. OK. Thanks for that. Is there a recommended list of tried and tested drives? What drives are you using?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
In my FreeNAS servers I'm using the following:

8x 1TB Seagate hard drives (don't remember the exact model #, but it isn't made anymore)
8x 1.5TB Seagate hard drives (don't remember the exact model # either; these aren't made anymore)
16x 2TB WD WD20EARS
6x 3TB WD WD30EZRX

I swore by Seagate for 10 years before their faulty drives bit me in the butt. I'll never go back after buying $2000 worth of hard drives only to find I couldn't do anything with them. Who wants to buy drives that are semi-unreliable? LOL. I still have a few of them that I don't think I could give away. I would never trust them to store my data, and I always tell friends that if they buy them they assume the risk.

I don't know of any particular model that is absolutely horrible at the moment, but I haven't gone shopping for new hard drives this year. I bought the 3TB drives just a week or two before the flood that caused the price rise we've been dealing with.

In my opinion your best bet is to look at people posting questions online and identify a model, size, or brand (whatever suits you) that people mention using without problems. There are lots of posts from people complaining about software issues, network issues, etc., and they often list the hard drives they're using. One of those would probably be a good choice.
 

lexieb007

Dabbler
Joined
Jul 28, 2012
Messages
10
OK. So basically what you're saying is that WD Green drives are tried and tested with FreeNAS, and have been for a few years. The Reds may be similar, with a few additional NAS-type features, but are best avoided until they're proven with FreeNAS...

But what about WD saying the Greens aren't supposed to be used in a NAS... something about a climbing load cycle count (LCC)?

Is that what you're saying? Despite this issue, still go with the WD Greens, something like the WD20EARX...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
There's a DOS program called wdidle3.exe that lets you change the idle timer behind that climbing LCC. I set mine to 5 minutes and I have no problems with it; I've had those drives for over 2 years now. Just make sure the model you buy is compatible with the program: some models will let you change it, some won't.
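If you want to see whether the timer is actually a problem on your drives, SMART attribute 193 (Load_Cycle_Count) is the number to watch before and after changing it. A rough sketch using smartmontools (/dev/ada0 is just an example device, and the parsing here is deliberately simplistic):

```python
import subprocess

# Print the Load Cycle Count (SMART attribute 193) so you can watch how
# quickly the heads are parking. /dev/ada0 is an example device name;
# smartctl typically requires root.
out = subprocess.run(["smartctl", "-A", "/dev/ada0"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if "Load_Cycle_Count" in line:
        print(line)
```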

As for your comment "but best avoid until they are proven with FreeNAS", I think it's more appropriate to say "I would not buy them until someone else has posted that they work great". It's really just my opinion. I have no evidence that they won't perform, but I'd rather assume nothing works and look for someone who has proven it does than assume the opposite. I'm conservative, and when you're dropping this kind of cash I don't like taking chances. The Red drives might be the best thing since sliced bread, but I'll let someone else drop $1k+ to prove it rather than me.

Of course, if WD were to send me 5 drives and said "have a ball", I'd probably promote them from that point on if they worked well. I'd love to list a hard drive model or two in my ZFS presentation (link is in my sig if you're interested in reading it). But if I'm going to recommend something (or argue against something) I prefer to have either personal experience or an obvious flaw in the design (like trying to put SATA drives on an IDE card). The Red drives look very promising for their low power usage, low heat, designed lifespan, and longer warranty.
 

Stephens

Patron
Joined
Jun 19, 2012
Messages
496
I know you've done some reading, but as long as you understand how the different components work, you're the best person to apply that to your situation.

http://en.wikipedia.org/wiki/Error_recovery_control
http://www.smallnetbuilder.com/nas/nas-features/31202-should-you-use-tler-drives-in-your-raid-nas

I don't care much either way about TLER because I'm using "software RAID" and I'm not worried about some RAID controller dropping my drive because it thinks it's dead when error recovery attempts take too long. If TLER isn't there, so be it; if it is, good. ZFS will probably fix what's wrong, especially since I'm running RAIDZ2. Mostly I'd just like to know when these things happen, because if your drive is consistently having trouble reading data, it's time to start considering retiring it. That's what the SMART tests will tell us.
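Along those lines, here's a minimal sketch of driving that check from a script with smartmontools (/dev/ada0 is an example device name; in practice FreeNAS can schedule these tests for you):

```python
import subprocess

# Kick off a short SMART self-test, then read back the self-test log.
# /dev/ada0 is an example device; smartctl typically needs root, and the
# short test takes a few minutes to finish before the log is updated.
subprocess.run(["smartctl", "-t", "short", "/dev/ada0"], check=True)

# ...after the test finishes:
log = subprocess.run(["smartctl", "-l", "selftest", "/dev/ada0"],
                     capture_output=True, text=True).stdout
print(log)  # a read failure at a specific LBA here is the early warning sign
```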
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Those answers open up a big can of worms. Note that what I'm about to discuss does NOT apply to SSDs, flash media, or USB thumb drives.

TLER stands for Time-Limited Error Recovery.

Here's how a standard desktop hard drive works. When your hard drive needs to perform a read operation and that sector is bad, the drive goes into error recovery mode. This mode tries repositioning the head in the hope of getting the sector to read. Some hard drives will do this for quite a while (30+ seconds), while others will keep trying indefinitely until the sector reads without error. Add a sector that only intermittently causes an error and this becomes a terrible problem to resolve. While a hard drive is in error recovery mode it will not respond to commands from the computer, such as another read or write operation, and it often makes a noise nicknamed the "click of death". After a predetermined period of time set by the manufacturer, some drives will eventually return an error, while others will keep retrying the bad sector until it reads cleanly. Sometimes the entire computer will freeze while the drive tries to recover the sector; if your drive model keeps trying forever, the only way you'll get to use the computer again is a reset.

On some SATA and RAID controllers, if a hard drive stops responding for a set time, the controller drops the drive out of the array; a plain SATA controller will assume the drive is no longer attached. This matters particularly for RAID, since you have a large number of drives, so the chance of an error goes up. Having hard drives randomly drop out of an array is not good from an administration, data reliability, or server performance standpoint. If you have a RAID1, RAID5, RAID6, RAIDZ1 or RAIDZ2 you will be fine, since your redundancy keeps the system running, but you now need to add a hard drive (either the same one or a new one). The old drive may have reallocated the sector, in which case you may be able to do a rebuild (resilvering for ZFS) and continue on your merry way. Often, though, you will no longer be able to use the drive in a RAID array, because when you hit that sector again the drive will have trouble and drop out again. You will likely not be able to RMA the old drive, because a single sector failure does not qualify for an RMA, and most people will NOT want to buy a new hard drive for their array because of one failed sector. So what should we do?

Generally, new drives have zero bad sectors (but they actually CAN ship with some, and many do). Those sectors are marked in the firmware's manufacturer defect list, so you'll generally never be able to prove there were any bad sectors anyway.

Over time sectors will begin to fail. This is completely normal and expected. Some drives start having bad sectors within days, others take months or years. In any case, when a single read error occurs you do not want the drive to drop from the array. You'd rather use parity data or a mirror and reallocate the sector immediately, because it is not imperative that the drive itself recover the data. This is where TLER steps in. TLER prevents the hard drive from spending more than a set number of seconds (typically 10 or fewer) in error recovery. If the drive can't recover the sector in that time, it returns an error, the sector is remapped, and parity data fills in the data for the missing sector. By keeping the drive from dropping out of the array and remapping the sector, you've avoided having to order a new hard drive just to keep redundancy on your array.
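On drives that expose the setting, you can even adjust the timeout at runtime through the same SCT interface smartmontools uses. A sketch, assuming a drive that accepts the command (/dev/ada0 is an example device, the values are tenths of a second, and on many drives the setting reverts at power-on):

```python
import subprocess

# Set the SCT ERC read and write timeouts to 7.0 seconds (values are in
# tenths of a second). /dev/ada0 is an example device name; many drives
# reject this command, and many that accept it forget the setting at
# power-on, so it is usually re-applied from a startup script.
subprocess.run(["smartctl", "-l", "scterc,70,70", "/dev/ada0"], check=True)
```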

For cheap RAID controllers and/or poorly manufactured hard drives this can be an ongoing and painful problem, because without TLER there's no way to predict or control when a drive will hang on a bad sector. Of course, TLER is only in enterprise-class drives (read: MUCH more expensive). WD did have a consumer hard drive on which you could enable TLER with a WD DOS tool, and a lot of people started buying those drives because you could get TLER for free. Of course, WD wised up and figured out this wasn't good for business, because WD wants you to pay a lot more money for the enterprise-class drives. So suddenly the feature disappeared and the DOS tools wouldn't work anymore. If I remember correctly the firmware version didn't even change, so you couldn't call up Newegg and verify that the firmware version supported TLER before buying. You basically had to hope the DOS tool would work.

Depending on your configuration you may or may not need enterprise-class drives. The old Areca ARC-1280ML worked very well with many drives that did not have TLER. Note that my hard drives were not specifically listed on its approved drive list, but they worked flawlessly out of the box from day one. The theory is that if you use drives from the controller manufacturer's approved list, you shouldn't run into problems with drives that are not enterprise class, because the manufacturer has verified that those drives will function. Of course, some of the less expensive companies (Highpoint... *cough* *cough*) list drives as "compatible" without putting them through the proper paces to really say so.

This is where my previous post about the 12 drives that didn't work comes into play. I had the exact firmware and model called for on the recommended list, and they were still not compatible. This was not due to TLER: the hard drives kept returning BUSY status to the RAID controller during heavy random reads with long seek times, and the controller decided that any drive returning that many BUSY responses must have failed, so it dropped them. The hard drive manufacturer didn't consider it an issue because the drive was performing exactly as designed. Highpoint didn't consider it an issue because they either didn't care or simply said my issue was "isolated". There's a 20+ page thread on the Seagate forums from 2 years ago with a number of people having the problem, and nobody wants to accept the blame or be responsible for fixing it. The hard drives were sold as "Desktop RAID" compatible, which Seagate later defined as RAID0 and RAID1 on onboard Intel RAID chipsets only; if you weren't meeting those hardware requirements, you were using the drives outside their intended function and therefore it was not their problem. If I hadn't bought that exact model of hard drive or that exact brand of RAID controller, I would never have had those problems. I made both choices by reading everyone else's opinions, with the hardware compatibility list agreeing with me. It was the most informed decision anyone could have made without inside knowledge.

So does TLER matter for ZFS? To the best of my knowledge the answer depends more on the hardware than the software. As long as your hard drives aren't being dropped out of the array, and as long as a drive won't try to read a bad sector in perpetuity, you should be okay... provided you accept that when a bad sector turns up while you're watching a movie, the movie may freeze for 30 seconds or so.

So now that I've muddied the waters really badly with this long explanation, where do we go from here? Red drives are just hitting the market. They are effectively untested for any random firmware issue, such as Seagate's BUSY fiasco, and it is unknown how well they behave with various RAID and SATA controllers, as well as what their actual typical lifespan is. It is entirely possible that Red drives will work fine; after all, they are designed for home-office and small-office NASes. But are you willing to drop $1000+ on hard drives that may give you problems in a few weeks, pass all diagnostics (so you can't RMA them), and leave you with no money to replace them? Ultimately I had no choice but to build a whole new server and copy the data from the old server in small chunks, to avoid pushing too many drives out of the array on the old machine. Note that I had to buy a whole new server and couldn't reuse ANY parts, because the old server still had to function for the data copying. Who has $3k+ to just drop on a new server without planning ahead for it? And of course, every day you delay the upgrade is another chance you'll wake up to find you don't have enough drives left in the array to read your data. Backups are always good!

I'll stick to watching what everyone else thinks about them before I spend my money on them.
 

lexieb007

Dabbler
Joined
Jul 28, 2012
Messages
10
^ Thanks for the incredible post above, noob... I learnt so much from it! Once bitten, twice shy, I guess. Which is absolutely fair enough!

When you say you were "not able to RMA" the drives (when they fail), do you mean they can't be reformatted for use somewhere else?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
What I mean is you can't RMA drives just because they keep dropping out of your array; they'll pass all diagnostics fine.

You could certainly reformat them and use them somewhere else. But let me ask you: if you had a drive that seemed flaky in your RAID, would you ever trust it with data again? I've lost data before, and as soon as a drive doesn't seem to be working right, I pull that sucker out and never use it again. The only time I use drives I don't trust is if I have to take 100GB of data to a friend's house and wouldn't care if the drive went bad (except for the trip back and forth). I won't install an OS on them, use them in any other array, or anything else after that.
 

Lucien

Dabbler
Joined
Nov 13, 2011
Messages
34
noobsauce80, thanks! That's a very informative post.

I guess the question I'm asking isn't so much whether the Reds will work well with FreeNAS - as you've pointed out nobody will know that until there's enough experience with the drives out there - but more of "is having TLER going to be a dealbreaker when it comes to using these drives in a ZFS system?" I don't know enough about RAID, FreeNAS and a feature like this to tell.

If I had to guess, I'd guess that it wouldn't matter, or would be a plus. If a drive has a bad sector, I'd have thought it replies to the controller with some sort of error or bad data instead of trying again and again to read the sector. Whatever sits on top of that then reports failure or, if redundancy is available, reconstructs the data from parity. Going with your example, I'm thinking that if we were streaming a movie off the array, having TLER would reduce the freeze period. I guess the flip side would be a growing number of bad sectors that can't be used by the drive? (And I'm wondering what happens when the drive runs out of spare sectors...)

Like I said though, I don't know enough about all this to know for sure. So pointing out if I'm wrong about any of that would be appreciated. :)
 

ProtoSD

MVP
Joined
Jul 1, 2011
Messages
3,348
TLER only matters with hardware RAID. ZFS/FreeNAS shouldn't have any problems with these drives as long as you're not using a hardware RAID controller and you let ZFS handle the RAID.
 