NAS Disk Platters

Status
Not open for further replies.

Ahjohng

Dabbler
Joined
Apr 4, 2015
Messages
34
My own experience with disk arrays has been limited to the build I have run since late 2009. It consists of 4 Hitachi 1TB Deskstar drives with 2 platters each, configured as hardware RAID 1+0. It is not even a NAS but a Windows 7 desktop running on a P55 chipset. The good news is that it is still running fine with 2TB of net capacity. Over the years, the free space has been dwindling, and that prompted me to look into building a true NAS.

At the gym, I happened to talk to a hardware manager working for Western Digital. I asked him whether hard disks with fewer platters would be more reliable in the long run. He sidestepped my question by answering that all hard disks, regardless of the number of platters, are guaranteed to work for the warranty period. Well?

Given the following Western Digital Red drives as of today (04/04/2015),

** 1TB = 1 platter
** 2TB = 2 platters
** 3TB = 3 platters
** 4TB = 4 platters

I can see a statistically lower egg rating in the Newegg procurement channel when the number of platters is higher than 2.

Hitachi Deskstar NAS drives (regardless of capacity) each come with 5 platters, and customer satisfaction seems to be about the same across these drives, as expected.

Currently, I plan to use 6 Red 2TB drives in RAID-Z2. I should be getting about 8TB of net storage, correct? My experience and my gut feeling tell me I should stay with hard disks of either 1 or 2 platters in a NAS array for better reliability. What do you experts think?
 

DrKK

FreeNAS Generalissimo
Joined
Oct 15, 2013
Messages
3,630
Hitachi Deskstar NAS drives (regardless of capacity) each come with 5 platters, and customer satisfaction seems to be about the same across these drives, as expected.

Currently, I plan to use 6 Red 2TB drives in RAID-Z2. I should be getting about 8TB of net storage, correct? My experience and my gut feeling tell me I should stay with hard disks of either 1 or 2 platters in a NAS array for better reliability. What do you experts think?

I believe you have over-estimated. 6x2TB drives in RAID-Z2 gives you 4x2TB worth of raw space, and the way these things are measured, I believe you will find that you have about 6.97TB of usable space as it would be reported in Windows. Not sure whether those are terabytes or tebibytes or whatever, but it will be 6.97TB.
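
For anyone who wants to see roughly where a number like that comes from, here is a minimal back-of-the-envelope sketch in Python. The ~4% overhead factor is an assumption for illustration only; real ZFS accounting depends on ashift, metadata, and reserved slop space.

```python
# Rough estimate of usable RAID-Z2 capacity, not exact ZFS accounting.
# Assumptions: two drives' worth of parity, a vendor "2TB" drive holds
# 2e12 bytes, and ~4% is lost to ZFS metadata/padding (illustrative).
def raidz2_usable_tib(num_drives: int, drive_tb: float, overhead: float = 0.04) -> float:
    data_drives = num_drives - 2                       # RAID-Z2 parity cost
    usable_bytes = data_drives * drive_tb * 1e12 * (1 - overhead)
    return usable_bytes / 2**40                        # bytes -> TiB

print(f"{raidz2_usable_tib(6, 2):.2f} TiB usable")     # ~6.98 TiB, in the same ballpark as 6.97TB
```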
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
yeah, take care that you understand that you should not use more than 50% of your pool if you prefer speed, or no more than 80% in any case.
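
Applied to the ~7TiB estimate above, those rules of thumb work out roughly as follows (a quick sketch; the thresholds are guidelines, not hard ZFS limits):

```python
# Usable-space guidelines for a ZFS pool (illustrative arithmetic only).
usable_tb = 6.97                      # figure from the previous post
print(f"speed-focused limit (50%): {usable_tb * 0.50:.2f} TB")
print(f"absolute ceiling (80%):    {usable_tb * 0.80:.2f} TB")
# roughly 3.5 TB and 5.6 TB respectively
```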
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
The warranty answer that person gave you is true, of course; he sounded like "Captain Obvious". There are a few common factors that go into the longevity of a hard drive: temperature, spin-up cycles, and head-loading cycles. High temperatures destroy drives fast, so it is recommended to keep them below 40C, which typically means forced air flow across the drive bays. Spin-up causes current surges, and the motor controller gets a bit of wear and tear from it. The fewer platters the better if you want your drives to sleep (read a lot on sleeping drives for FreeNAS because there are things you must do to keep routine FreeNAS activity off the drives). Head-loading cycles can also become an issue, such as in some of the WD Reds and Greens. There is a lot of data on this in this forum as well; just search for it and for how to reduce or disable the head-loading problem.
Lastly (if there is such a thing), the time it takes to resilver (replace) a failed or failing drive will depend on how much data is on the drive, meaning that a 50% full 2TB drive will take less time than a 50% full 4TB drive. So you could purchase six 2TB drives or four 4TB drives for the same capacity and redundancy, but it will take a 4TB drive twice as long to rebuild its data during a replacement. On the flip side, you would likely be saving about 10 watts of power by using two fewer drives.

Overall, I would recommend that you figure out what capacity you desire, then use the RAID calculator to see what you need to achieve that.

Check out my signature link for a RAID calculator; make sure you select RAIDZ2 and it will tell you the usable storage value. Also keep in mind that by default with FreeNAS, a storage pool has compression turned on, so depending on what type of files you are storing, you may actually be able to store more data simply because of that default setting. And as zambanini said, you should not exceed 80% capacity in a ZFS-based NAS, as things will tend to slow down. Since you are looking for a replacement NAS, you should have an idea of how much storage you need; just oversize it a little if possible. If you are looking for "High Availability" data and not just a home unit, then you must look into some other factors, but for a simple and quick home unit, just ensure you have the correct hardware for a ZFS machine. If I were to recommend a starting point, it would be 8GB of ECC RAM that could easily be upgraded to 16GB just by adding another 8GB, a Supermicro motherboard which supports IPMI, and a CPU that is not overkill. Read the forums and guides, see what others have built (read the tag lines), and ask those people in a PM whether they like what they bought or what they would do differently if they could do it all over again. We are here to help.

Now a quick comment on my WD Red drives, because I have the configuration you mentioned above. I actually prefer the 2TB size simply because of the replacement-time factor. I've had most of my drives since Oct 2012 and they have been running constantly (spinning) that entire time without an error (21,440 hours = 893 days = 2.45 years) non-stop, and they have been turned off about 62 times, most of which occurred while I was building and testing my NAS. If I were to do this over again today, I might consider five 3TB drives for my system just because cost is down, I'd save a little power, and I'd free up a SATA port which I could use for my boot device or possibly to easily replace a failing drive, assuming I stayed with the same motherboard. Of course I'd use a different motherboard too, one with IPMI support, because my NAS is in the basement. But my point here is that the WD 2TB Reds have been very reliable for me, and other than the high head-cycle-count issue we had a few years ago, I have no complaints.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
We haven't heard of any unusual failure rates for any size of WD Reds, so I'd hesitate to attribute low scores to failure rates.

More likely, more platters == more noise, heat and power.

From a high-level engineering point of view, there's no real reason for drives with more platters to be less reliable, since they have the same amount of moving parts, same "process" (1TB per platter) and the same design.
 

Ahjohng

Dabbler
Joined
Apr 4, 2015
Messages
34
Does it make sense to go with RAID-Z (instead of RAID-Z2) on 6 x 2TB WD Red drives, since these drives are expected to be more reliable with only 2.7W of power at idle?
 

Ahjohng

Dabbler
Joined
Apr 4, 2015
Messages
34
If I were to do this over again today, I might consider five 3TB drives for my system just because cost is down, I'd save a little power, and I'd free up a SATA port which I could use for my boot device or possibly to easily replace a failing drive, assuming I stayed with the same motherboard.

FreeNAS boots from a USB stick. Why do you want an internal boot device? What is the advantage of doing so?
 
Last edited by a moderator:

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Does it make sense to go with RAID-Z (instead of RAID-Z2) on 6 x 2TB WD Red drives, since these drives are expected to be more reliable with only 2.7W of power at idle?
No, going with RAID-Z2 over RAID-Z isn't about the drives being reliable. It is about the probability of hitting a read/checksum error from a drive during a resilver.
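
To put a rough number on that risk, here is a small sketch using the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read; both the spec value and the independence assumption are simplifications, so treat the output as illustrative only.

```python
import math

# Chance of hitting at least one unrecoverable read error (URE) while
# reading a whole drive end to end during a resilver, assuming the
# often-quoted consumer spec of 1 URE per 1e14 bits and independent errors.
def p_ure_full_read(drive_bytes: float, ure_per_bit: float = 1e-14) -> float:
    return 1 - math.exp(-ure_per_bit * drive_bytes * 8)

for tb in (2, 4):
    p = p_ure_full_read(tb * 1e12)
    print(f"{tb}TB drive read end to end: ~{p:.0%} chance of at least one URE")
# roughly 15% for a 2TB drive and 27% for a 4TB drive
```

With single-parity RAID-Z there is no redundancy left during a resilver, so such an error can mean lost data; with RAID-Z2 the second parity can still repair it.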
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
FreeNAS boots from a USB stick. Why do you want an internal boot device? What is the advantage of doing so?

Significantly reduced rate of "oops, I hit the USB drive and now all that's left is the PCB barely hanging on to the USB plug" situations.

There's also the question of why a SATA device instead of USB - it's simply because USB drives tend to be very unreliable. Some people have been through several USB drives since 9.3 was released and failing boot devices became easy to identify.
 

Ahjohng

Dabbler
Joined
Apr 4, 2015
Messages
34
Significantly reduced rate of "oops, I hit the USB drive and now all that's left is the PCB barely hanging on to the USB plug" situations.

After it boots up, the USB stick can be safely removed, no?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
After it boots up, the USB stick can be safely removed, no?

No, it's mounted R/W starting with 9.3. The whole OS is loaded into RAM, but there are occasional writes (config changes, for instance).

I'm not sure it'll immediately and cleanly panic, though, so it's best not to experiment on a live system.
 

fracai

Guru
Joined
Aug 22, 2012
Messages
1,212
I don't want to say that it would immediately cause problems, but it would at some point. Updating a configuration setting or a scheduled boot-volume scrub kicking off are obvious instances. I'm sure there are background tasks that will touch the boot volume without being prompted by user interaction.

Removing the stick after boot sounds like a horrible idea.

For what it's worth, I've always kept my USB drives inside the case. I just don't bring the USB header outside.
 

Ahjohng

Dabbler
Joined
Apr 4, 2015
Messages
34
Removing the stick after boot sounds like a horrible idea.

As I understand it, one rarely has to reboot the NAS. After the flash program is loaded into RAM, the flash memory should not be needed anymore, and all necessary data could be modified on the pool. Now I have learnt that data would still be written to the USB stick; it is no wonder that such a USB stick would wear out in a short time frame. Just like an SSD (which is flash), the flash memory can only sustain a read/write endurance of 100k cycles or so according to the specifications. That is microcontroller-grade flash; for commercial USB sticks or SSDs, I would expect it to be less. On top of that, SSDs have sophisticated write management while USB flash has none. Now I understand why an SSD is better for this application.

What does it take to modify the latest release with this feature? Can it theoretically be done?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,553
As I understand it, one rarely has to reboot the NAS. After the flash program is loaded into RAM, the flash memory should not be needed anymore, and all necessary data could be modified on the pool. Now I have learnt that data would still be written to the USB stick; it is no wonder that such a USB stick would wear out in a short time frame.
I'm too lazy to actually verify, but I believe the freenas-v1.db is stored on your boot device (in most cases USB). This means that data will be written when you change your config, and FreeNAS will read the boot volume when you make config changes. Additionally, /etc/local/smb4.conf and various other config files are stored on the boot volume. This means pulling the boot device will cause things to start breaking once services have to stop and restart (for instance, Samba randomly crashing and reloading your smb4.conf file). This is not the same as constantly writing to the USB stick. I'm pretty sure there aren't very many writes being performed on the boot device, but if you don't trust me you can always run "zpool iostat freenas-boot 1" and watch for writes.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
FreeNAS boots from a USB stick. Why do you want an internal boot device? What is the advantage of doing so?
I could use a SATA DOM (Disk On Module) device or even just an internal 60GB SSD (they are cheap) and place all my jails on it, or even create a new pool for stuff I reference a lot. There are a lot of things a person could do.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
I thought that the boot device is written to, or maybe it's only read about every 5 minutes. I know there have been a lot of changes to the way the boot device works since version 9.3. Either way, you cannot pull the boot device after the boot process; your system would crash.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
It seems obvious that more platters means more opportunity for failure. Where the heads meet the platters is where the problems often begin, and a disk with double the platters has double those locations.

So it seems obvious that more platters means more opportunity to fail. Also, some hardcore storage nazis refuse to buy 5-platter drives. Why? They have a higher failure rate. More platters also means more energy required to keep the same RPM, which means the drives run hotter unless you have adequate cooling. We all know that keeping drives relatively cool is a good thing.

I think it's bad to look at Newegg's reviews because it's fairly well known that as time goes on, disks get fewer platters while the model number may not change. So a review may or may not be for a drive with more platters than another sample of the same model.
 

Ahjohng

Dabbler
Joined
Apr 4, 2015
Messages
34
Either way, you cannot pull the boot device after the boot process; your system would crash.

Well, I cannot pull the boot USB stick after booting FreeNAS 9.3.x, and that is the reality for now. Since I am a perfectionist, my wish is the old-fashioned separation of program space and data space. After an operating system is able to boot from non-volatile memory, there should be no further demand on that non-volatile memory; the operating system ought to be able to write any configuration information to some other form of storage. This would free up hardware resources. It seems totally unnecessary to waste a SATA channel just for booting, when theoretically each NAS would experience only one boot in its lifetime.

Also, from the RAID Reliability Calculator, does "mean time to data loss" mean a complete (and thus catastrophic) failure of the entire RAID with unrecoverable data?
 

Ahjohng

Dabbler
Joined
Apr 4, 2015
Messages
34
It seems obvious that more platters means more opportunity for failure. I think it's bad to look at Newegg's reviews because it's fairly well known that as time goes on, disks get fewer platters while the model number may not change. So a review may or may not be for a drive with more platters than another sample of the same model.

I have personally experienced very poor satisfaction when I purchased items from Newegg with poor ratings or no rating at all, and those sour experiences still linger with me. Today, I value the comments of previous purchasers very highly.

For example, the WD Red 3TB hard disk has accumulated just 3 eggs out of 5 from 575 reviews as of today (April 7, 2015). This drive has 3 platters. Here is an example of one such gripe:

"The reliability of this drive is highly questionable... I've heard the 4 and 2 TB drives are reliable but for whatever reason, the 3TB series seems to be a dog."

Well, the 4TB has few reviews. We know the 2TB has only 2 platters, and its satisfaction, with 337 reviews so far, is much better than the 3TB's.

Then come the Hitachi Deskstar NAS drives. They all have 5 platters, but Hitachi (formerly IBM) seems to manage a failure rate of only 1+ percent versus WD's 4+%. The reviewers at Newegg rate any of these 5-platter drives as highly as the WD Red 2-platter drives.

My own experience is that I have never had an IBM/Hitachi failure, and over half of my hard drives in the past have been IBM/Hitachi. WD has failed me just once out of many, but Maxtor and Seagate failed me over 75% of the time.
 

Ahjohng

Dabbler
Joined
Apr 4, 2015
Messages
34
By chance, I found the following report. Notice that Hitachi fares much better than the others. Amazingly, the Hitachi 4TB drives have 5 platters.

https://www.backblaze.com/blog/best-hard-drive/

According to the RAID calculator's failure-rate model, the MTTDL is associated with data failure, not drive failure. As I understand it, data failure can be corrected by ZFS.

http://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/

So it comes down to the MTTF/MTBF specified by these manufacturers, and all of them brag about 1M hours, which translates to roughly 114 years. The probability of M failures out of an array of N drives can then theoretically be calculated as follows (a short script that reproduces these numbers follows the results below).

** f(N, M) = Combination(N, M) * (1 - e^(-t / T))^M * (e^(-t / T))^(N - M)

Where

** f(N, M) = probability of exactly M failures out of N drives, 0 <= M <= N
** t = time period of concern
** T = MTBF
** Combination(N, M) = N! / (M! * (N - M)!)

For N = 6,

** t = 5 years: f(6, 1) = 20.677%, f(6, 2) = 2.318%, f(6, 3) = 0.139%
** t = 10 years: f(6, 1) = 32.498%, f(6, 2) = 7.449%, f(6, 3) = 0.911%
** t = 15 years: f(6, 1) = 38.314%, f(6, 2) = 13.470%, f(6, 3) = 2.526%
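
For reference, here is a short Python script that reproduces those figures under the same assumptions: independent drives, exponentially distributed lifetimes, and a claimed MTBF of 1,000,000 hours (taken as 114 years, as above).

```python
import math

T_YEARS = 114.0      # 1,000,000 hours ~= 114 years, as quoted above

def p_failures(n: int, m: int, t_years: float, mtbf_years: float = T_YEARS) -> float:
    """Probability that exactly m of n drives fail within t_years,
    assuming independent, exponentially distributed lifetimes."""
    p = 1 - math.exp(-t_years / mtbf_years)            # per-drive failure probability
    return math.comb(n, m) * p**m * (1 - p)**(n - m)

for t in (5, 10, 15):
    row = ", ".join(f"f(6, {m}) = {p_failures(6, m, t):.3%}" for m in (1, 2, 3))
    print(f"t = {t} years: {row}")
# e.g. t = 5 years: f(6, 1) = 20.677%, f(6, 2) = 2.318%, f(6, 3) = 0.139%
```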

We all know the MTBF number claimed by each manufacturer is somewhat BS; the actual number is worse. However, an HD with fewer platters fares better, a lower operating temperature helps a lot, and holding these drives firmly in a solid mechanical structure is essential. Make sure to bolt down all four screws.

The idea is that when one drive fails, it must be replaced immediately; the probability of another failure in the meantime becomes the issue. Given the analysis, I have to agree with all of you that RAID-Z2 is the de facto best way to go.

Edit: Corrected formula
 
Last edited: