Which Enterprise Class HDD? Or not necessary?

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Hi all,

I'm curious what everyone's experience has been so far with the various HDD options under the current FreeNAS release (11.3) and with ZFS in general.

I'm genuinely curious about the following:

Enterprise class:

Ultrastar/Gold (WD, HGST, Hitachi) vs EXOS (SG) vs MG (Toshiba)
I was seeing a difference in how Seagate drives output SMART information (see the rough sketch below for how I've been comparing them). Does this really matter?
Is it truly worth using enterprise level drives in a system with mirrors (no parity based redundancy)?
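For reference, here's roughly how I've been dumping the SMART attribute tables to compare vendors side by side. This is only a rough sketch: it assumes smartmontools 7+ (for --json) on the box, and the device names are just examples. As I understand it, Seagate packs multiple counters into some raw values (e.g. Seek_Error_Rate), so their numbers look alarming next to WD/HGST output even on a healthy drive.

    # Sketch: dump each disk's SMART attribute table so vendor-to-vendor
    # differences are easy to eyeball. Assumes smartmontools 7+ for --json;
    # device names are illustrative.
    import json
    import subprocess

    DISKS = ["/dev/ada0", "/dev/ada1"]  # adjust to your controller

    for disk in DISKS:
        out = subprocess.run(["smartctl", "--json", "-A", disk],
                             capture_output=True, text=True).stdout
        table = json.loads(out).get("ata_smart_attributes", {}).get("table", [])
        print(f"== {disk} ==")
        for attr in table:
            print(f'{attr["id"]:>3} {attr["name"]:<28} raw={attr["raw"]["value"]}')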

Reliability and a lower chance of any kind of failure would be the primary attributes wanted (not speed); if that's the case, are enterprise-class drives the way to go? Or would a NAS-oriented drive be basically as good? I'm not expecting a 10-year life span; the goal is closer to 3~4 years of spin time (~36k hours).

If you had one fail, and replaced it, would you cringe to mix an Ultrastar and EXOS drive of same capacity together in a pool as replacement?

Consumer class:

All the typical desktop-targeted drives, NAS drives, etc., but avoiding SMR drives (some WD Reds, for example) like the plague. If you were going to gamble on these classes of drives (WD Blue, Green, Black, Red; white labels; Seagate Barracuda, IronWolf), which ones would you ideally gamble on? Obviously the NAS-oriented drives (if CMR) are probably more ideal, but are people really gambling on these white label shucks or standard non-NAS oriented consumer desktop drives (assuming a redundancy scheme is used)?

Renew/Refurb:

Do any of you gamble on refurbished enterprise drives? They seem to be everywhere and people do use them, but it seems really risky. Is there any way around getting one, doing in-depth testing, sector testing, etc. (which probably takes days), before trusting it with data? Any commentary from a parity-based or mirror-based redundancy point of view when using such drives? (The kind of burn-in I have in mind is sketched below.)
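For context, by "in depth testing" I mean something like the burn-in that gets recommended around the forum: a destructive badblocks pass plus a SMART long self-test, then checking the sector counters. A rough sketch only (this WILL destroy all data on the target; the device name and block size are examples, it assumes badblocks is available, and it must run as root):

    # Rough burn-in sketch for a used/refurb drive before trusting it.
    # DESTRUCTIVE: wipes everything on the target disk.
    import subprocess, sys

    disk = sys.argv[1]  # e.g. /dev/da3

    # Four-pattern destructive write/read test; takes days on a big drive.
    subprocess.run(["badblocks", "-b", "4096", "-ws", disk], check=True)

    # Kick off a SMART extended self-test (runs inside the drive firmware).
    subprocess.run(["smartctl", "-t", "long", disk], check=True)

    # Once the self-test finishes, inspect the attributes that matter:
    subprocess.run(["smartctl", "-A", disk])
    # IDs 5 (Reallocated_Sector_Ct), 197 (Current_Pending_Sector) and
    # 198 (Offline_Uncorrectable): nonzero or climbing = send it back.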

Very best,
 
Joined
Oct 18, 2018
Messages
969
Reliability and a lower chance of any kind of failure would be the primary attributes wanted (not speed); if that's the case, are enterprise-class drives the way to go? Or would a NAS-oriented drive be basically as good? I'm not expecting a 10-year life span; the goal is closer to 3~4 years of spin time (~36k hours).
Hey, so I'll start with the caveat that I have not run enough different types of drives to say with certainty how well different drives perform. There might be studies out there WRT how much longer a NAS-certified drive actually lasts compared to a consumer-grade drive. I'm sure someone on the forums has more experience than I do.

I will say, though, that one way to approach this topic is from more of a financial perspective and less of a data-integrity perspective. What I mean is that if you build a robust backup system for any data you do not want to lose, you can dramatically reduce the risks associated with a failed drive, or even a failed pool. For example, in my situation I have a cheap backup machine which is the receiving end of ZFS replication and runs constantly (roughly the setup sketched below). If something were to happen to my primary system, my backup has me covered. I also keep off-site backups in the form of rotated drives; while not perfect, it means that if my house burns down I don't lose everything. Any time I generate new, very important data, I swap the drives. My solution isn't necessarily the most robust, but it does help me think about drives more from a cost perspective and less from a data-loss perspective.
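In rough terms, the replication side of that is just a snapshot plus an incremental send to the backup box. A minimal sketch of the idea; the dataset, host, and snapshot names are made up, and a real setup would handle errors and snapshot rotation:

    # Sketch of push-style ZFS replication to a backup machine.
    import subprocess
    from datetime import date

    DATASET = "tank/data"         # source dataset (illustrative)
    TARGET = "backup/tank-data"   # dataset on the backup box
    HOST = "backupnas"            # ssh-reachable backup machine
    PREV = "auto-2020-08-01"      # last snapshot both sides share

    snap = f"{DATASET}@auto-{date.today().isoformat()}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    # zfs send -i @PREV <new snap> | ssh HOST zfs recv -F TARGET
    send = subprocess.Popen(["zfs", "send", "-i", f"@{PREV}", snap],
                            stdout=subprocess.PIPE)
    subprocess.run(["ssh", HOST, "zfs", "recv", "-F", TARGET],
                   stdin=send.stdout, check=True)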

If you have only a single NAS and a single copy of your data, disk failure will always introduce much more risk to your data. That may inform whether you invest in more expensive drives. I think there is a big difference between someone using renew/refurb drives in a single-system, single-copy environment and someone who follows the 3-2-1 rule more strictly.

All that being said, I very much share your curiosity over drive performance hours in different environments. If someone posts a study I'd love to read it. If I manage to make time and find one, I'll be sure to post it.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
are people really gambling on these white label shucks
I see plenty of folks doing this and reporting good success.

or standard non-NAS oriented consumer desktop drives
I have not seen many folks doing this and it's just asking for trouble... those disks aren't engineered to run in a chassis with more than a couple of disks and aren't intended to spin 24x7. You'll end up replacing them far more often to the point where you saved nothing on purchase price and the risks you'll be taking with a higher failure frequency are not likely worth it even with RAIDZ3.

are Enterprise class drives the way to go? Or would a NAS oriented drive be basically as good?
The quoted failure rates for enterprise drives aren't that different from NAS drives... although you may find the TBW for the drive lifetime will be much higher on enterprise drives, so for longer lifetime in high writing environments, maybe select enterprise.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
If you have only a single NAS and a single copy of your data, disk failure will always introduce much more risk to your data. That may inform whether you invest in more expensive drives. I think there is a big difference between someone using renew/refurb drives in a single-system, single-copy environment and someone who follows the 3-2-1 rule more strictly.

All that being said, I very much share your curiosity over drive performance hours in different environments. If someone posts a study I'd love to read it. If I manage to make time and find one, I'll be sure to post it.

Agreed; the philosophy and personal choice about what one wants in terms of uptime, plus a legitimate backup strategy, play into this financially, depending on the importance of the data. I think we can all agree on that, since most people run a single copy of their data on fast drives with zero redundancy on the machines they game or do general computing on. Anyone on these boards is likely looking at it from a data-integrity standpoint, with remote (or at least local) access, since they're interested in or already using FreeNAS and ZFS, regardless of whether they take a parity or mirror approach to redundancy.

That said, for the purpose of the discussion, I'm interested in the statistics, the experiences, and perhaps any good documentation regarding the integrity odds with FreeNAS, ZFS, and the various qualities of drives mentioned. I realize this is without context, and context matters, but that's where studies become more important: they cover the non-context-based decision making, if there are metrics out there to know about.

Personally, I back up my crucial data to optical media (Blu-ray, M-DISC) as my 3rd copy, and it lives in a fireproof/waterproof safe. My living data is on my client machine(s) and is the primary copy as the data is generated. My 2nd set of the same physical data is on my NAS with redundancy (mirrors; I don't use parity, just straight mirroring). That's my implementation of 3-2-1. So, for context, I already have a backup plan for important data (optical media, the 3rd physical copy). My main goal at the moment is to explore what I want to use in my NAS in the future in terms of drive quality.

I choose mirrors instead of parity-based arrays mainly because resilver/rebuild is faster upon failure, migrating pools or swapping drives is less of a chore, and I would much rather have a 1:1 copy if a failure occurs that I can literally access from another system (like spinning up Linux with the ZFS utils and accessing the data immediately on the working drive, should there be catastrophic hardware failure on the server and a drive go down with it). So instead of running a bunch of low-capacity HDDs with some parity, I would rather run a few drives of very high capacity in 1:1 mirrors for redundancy (roughly the layout sketched below). And again, that's the 2nd physical copy of the data. Since it's a mirror, each piece of data actually exists in three or four physical copies, but this is only redundancy against drive failure, not hardware write faults (though using ZFS and ECC memory should keep that from being a reality outside of truly astronomical odds).
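To make that layout concrete, here's roughly what I mean, expressed as the underlying zpool commands (a sketch only; the pool and device names are examples, and on FreeNAS you'd normally build the pool through the GUI):

    # Striped two-way mirrors: each vdev is a 1:1 copy of its partner.
    import subprocess

    subprocess.run(["zpool", "create", "tank",
                    "mirror", "/dev/da0", "/dev/da1",
                    "mirror", "/dev/da2", "/dev/da3"], check=True)

    # After a server failure, either half of a mirror can be moved to any
    # machine with ZFS and imported (read-only here) to get at the data:
    # subprocess.run(["zpool", "import", "-o", "readonly=on", "tank"])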

I see plenty of folks doing this and reporting good success.

I have not seen many folks doing this and it's just asking for trouble... those disks aren't engineered to run in a chassis with more than a couple of disks and aren't intended to spin 24x7. You'll end up replacing them far more often to the point where you saved nothing on purchase price and the risks you'll be taking with a higher failure frequency are not likely worth it even with RAIDZ3.

The quoted failure rates for enterprise drives aren't that different from NAS drives... although you may find the TBW for the drive lifetime will be much higher on enterprise drives, so for longer lifetime in high writing environments, maybe select enterprise.

That's interesting; I'm curious why some are OK with white label drives. (What are those, really? A white label is usually a binned drive from a higher tier that didn't make the cut, so I wouldn't expect it to earn a NAS/enterprise-class designation; are these white labels really just consumer desktop disks at the end of the day?) I realize there was a time when people were opening these externals and getting WD Reds, basically, but I think that's pretty much over. Now it's all just white labels and pin mods or molex adapters (scary). I'm not sure there's any data, rather than anecdotal evidence, showing whether a white label drive and a brand-new consumer-level drive are any different, but that's why I'm asking.

Personally, I seem to find more examples out there (maybe not on this particular forum) of people buying the least expensive consumer-grade desktop drives possible, putting them into a NAS with some redundancy, and being OK with that. I'm really interested in whether that practice is any different, at the end of the day, from using a white label gamble drive or a refurbished enterprise-class drive (I realize getting metrics or a study on this is virtually impossible).

Very best,
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,504
are people really gambling on these white label shucks
I am--I have about a dozen of them in my server (six each of 8 TB and 12 TB). Some of the 8TB disks were Reds rather than white label. These are not to be confused with eBay "white label," "refurbished" disks from who-knows-where, which I've done once and never will again (six of the eight disks failed within a year).
pin mods or molex adapters (scary).
Fortunately don't need to deal with either of these with my backplane, but it wouldn't be an issue if WD hadn't come up with their idiotic "standard" repurposing of a defined pin.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Hi all,

I'm curious what everyone's experience has been so far with the various HDD options under the current FreeNAS release (11.3) and with ZFS in general.

I'm genuinely curious about the following:

Enterprise class:

Ultrastar/Gold (WD, HGST, Hitachi) vs EXOS (SG) vs MG (Toshiba)
I was seeing a difference in how Seagate drives output SMART information. Does this really matter?
Is it truly worth using enterprise level drives in a system with mirrors (no parity based redundancy)?

Reliability and a lower chance of any kind of failure would be the primary attributes wanted (not speed); if that's the case, are enterprise-class drives the way to go? Or would a NAS-oriented drive be basically as good? I'm not expecting a 10-year life span; the goal is closer to 3~4 years of spin time (~36k hours).

If you had one fail, and replaced it, would you cringe to mix an Ultrastar and EXOS drive of same capacity together in a pool as replacement?

Consumer class:

All the typical desktop-targeted drives, NAS drives, etc., but avoiding SMR drives (some WD Reds, for example) like the plague. If you were going to gamble on these classes of drives (WD Blue, Green, Black, Red; white labels; Seagate Barracuda, IronWolf), which ones would you ideally gamble on? Obviously the NAS-oriented drives (if CMR) are probably more ideal, but are people really gambling on these white label shucks or standard non-NAS oriented consumer desktop drives (assuming a redundancy scheme is used)?

Renew/Refurb:

Do any of you gamble on refurbished enterprise drives? They seem to be everywhere and people do use them, but it seems really risky. Is there any way around getting one, doing in-depth testing, sector testing, etc. (which probably takes days), before trusting it with data? Any commentary from a parity-based or mirror-based redundancy point of view when using such drives?

Very best,
After being burned by Seagate with their really awful drives from circa 2011-12, I settled on HGST as my favorite manufacturer, followed by Western Digital. HGST has been acquired by WDC... so there really aren't very many players in the hard disk game. Ah, well.

At work we have several older HGST 2TB SATA 2.x (3Gb/s) drives that are over a decade old. Some have more than 50,000 hours on them. We've replaced them, but they just kept plugging away until we did.

I've come to the conclusion that hard disks are like automobile tires -- in the long run it's cheaper and safer to pay up front for quality merchandise. I buy new, enterprise-class drives with a 5 year warranty -- no refurbs, shucks, or used disks.

FreeNAS/ZFS works best with 4Kn drives. And I don't need or want the new-fangled 'Power Disable Feature'. So nowadays I like the HGST DC HC500-series disks that meet both criteria:
  • DC HC520, 12TB, part number 0F30141
  • DC HC510, 10TB, part number 0F27607
  • DC HC510, 8TB, part number 0F27613
I recently purchased a pair of the 0F30141 drives for my alternating backup schema.
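If you want to verify what you actually received, the logical block size tells you whether a drive is 4Kn or 512e. A quick sketch, assuming smartmontools 7+ for the --json output (the device name is an example):

    # Check whether a drive is 4Kn (4096-byte logical sectors) or 512e.
    import json, subprocess

    disk = "/dev/da0"
    info = json.loads(subprocess.run(
        ["smartctl", "--json", "-i", disk],
        capture_output=True, text=True).stdout)
    print(f"{disk}: logical={info.get('logical_block_size')} "
          f"physical={info.get('physical_block_size')}")
    # 4Kn  -> logical 4096 / physical 4096
    # 512e -> logical 512  / physical 4096 (emulated 512-byte sectors)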

For details, see this document on WDC's website:

 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
I am--I have about a dozen of them in my server (six each of 8 TB and 12 TB). Some of the 8TB disks were Reds rather than white label. These are not to be confused with eBay "white label," "refurbished" disks from who-knows-where, which I've done once and never will again (six of the eight disks failed within a year).

Fortunately don't need to deal with either of these with my backplane, but it wouldn't be an issue if WD hadn't come up with their idiotic "standard" repurposing of a defined pin.

Interesting, thanks; so you're comfortable with whatever the white label is, it seems. I agree they are not refurbished, or at least should not be, relative to other "white label" things out there. I just wonder what those white label disks are, since WD stopped putting higher-quality drives in there, perhaps knowing people were shucking them and cannibalizing sales of their better-binned drives. Either way, I view the white labels as basically desktop-class drives with no real expectations. Though I would love to know more about them, and whether they're still truly quality drives and not simply whatever didn't make the next bin level.

I've come to the conclusion that hard disks are like automobile tires -- in the long run it's cheaper and safer to pay up front for quality merchandise. I buy new, enterprise-class drives with a 5 year warranty -- no refurbs, shucks, or used disks.

I'm leaning this way more and more. In the past I used quality drives; then, as capacities went up, I started using less expensive drives. Most of them are still working, but not without errors or issues. The worst drives I've wasted money on were the "green" drives in the early 2000s. Since then, I've only bought NAS/enterprise-class drives and prefer getting good hardware, clear expectations, and a warranty up front. I've yet to have to invoke a warranty, but part of that is likely due to buying this class of equipment in the first place. My data is not special; I simply look at it from a convenience factor because, as mentioned in another post, our time is not worthless, and I don't want to constantly waste time fooling with low-tier drives that will be prone to error and failure at a higher rate.

-- Don't the DC HC500 series use the power disable feature, meaning they'd require the new SATA 3.3 revision or an adapter?

I'm not sure if all enterprise class drives use internal ECC for the drive memory or not, but I would like to know more about that. That makes a lot of sense after all, and it doesn't need a huge cache for it, just enough to do the throughput.

The idea of the drive itself possessing ECC drive memory, the system having ECC memory and the checksum/healing nature of ZFS makes for a pretty good pathway for data integrity that can have redundancy of various layers.

But I'm curious whether this is merely overkill outside of crucial environments, and whether non-ECC hardware and basic redundancy, so long as there's more than one physical copy of the data, is sufficient for most people and applications. I've certainly had drives fail and dumped the data. While I didn't lose anything, the inconvenience was enough to make me want to pay a bit more to lower the likelihood of it happening again. But things change.

++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++

Interesting project on enterprise drives:


Very best,
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
-- Don't the DC HC500 series use the power disable feature, meaning they'd require the new SATA 3.3 revision or an adapter?
Not all of them do -- the three models I listed above do not have the 'Power Disable Feature'.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Not all of them do -- the three models I listed above do not have the 'Power Disable Feature'.

Got it; I was just reading the WD documentation, and the model lineup includes both drives that have the feature and drives that don't. It's annoying to have to look that up when buying a drive, but better to know than not know, I suppose!

Very best,
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Got it; I was just reading the WD documentation, and the model lineup includes both drives that have the feature and drives that don't. It's annoying to have to look that up when buying a drive, but better to know than not know, I suppose!

Very best,
Yes, it is annoying. And on top of that, you have to beware of SMR drives, too!
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Yes, it is annoying. And on top of that, you have to beware of SMR drives, too!

Ugh, yes. As soon as I saw the SMR labels on some WD drives I could only shake my head... then I had to immediately start verifying various drives to avoid being duped into getting an SMR drive for this application (my crude screening approach is sketched below).
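The verification is crude, though, since drives don't advertise SMR anywhere in SMART. The best I've come up with is checking the model string against the vendors' published SMR lists, roughly like this (a sketch; the model set below holds just two WD Red models confirmed as SMR and is nowhere near complete):

    # Crude SMR screen: compare the reported model against known-SMR models.
    import json, subprocess

    KNOWN_SMR = {"WD40EFAX", "WD60EFAX"}  # illustrative, not a complete list

    disk = "/dev/da0"
    info = json.loads(subprocess.run(
        ["smartctl", "--json", "-i", disk],
        capture_output=True, text=True).stdout)
    model = info.get("model_name", "")
    if any(m in model for m in KNOWN_SMR):
        print(f"{disk}: {model} -> SMR, avoid for ZFS")
    else:
        print(f"{disk}: {model} -> not on the known-SMR list; verify anyway")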

Very best,
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's interesting; I'm curious why some are OK with white label drives. (What are those, really? A white label is usually a binned drive from a higher tier that didn't make the cut, so I wouldn't expect it to earn a NAS/enterprise-class designation; are these white labels really just consumer desktop disks at the end of the day?)

This is ridiculous on the face of it.

White label drives are the drives that are provided to OEMs, and are integrated into servers and arrays. The major differentiation is that they generally do not carry an "end user" warranty, meaning that if your Dell branded Seagate drive has a problem, you get it handled under your Dell warranty, and if you ask Seagate about it, they will say "no warranty on that." But Dell and Seagate generally have an agreement to handle batch warranty submissions on favorable terms.

White label drives may also contain custom firmware or other tweaks for OEMs. Generally speaking, the quality is going to be better than your retail drives, because the drives are headed into environments where Real Money comes into play, because Dell has to send a tech on site to do the replacement, so Dell may pay for a higher tier drive.

On the flip side, it is also common for repaired drives to be stripped of their original label and relabeled with a white label, but these should not be resold as new drives.

Observationally, a lot of us who work professionally with big cheap storage raised our eyebrows when cutting edge high platter count high margin helium drives started showing up in external USB drives some years back. What seems to have happened is that SSD has significantly eroded the low end HDD market to nearly nothing, and even high end drives are hard to sell. If a company makes too many high capacity drives, it may be more desirable to sell those at a discount in USB enclosures and recover the manufacturing cost than it is to let them rot on a shelf waiting for a buyer.

So the thing is, it's a question of whether or not the terms are suitable to you. If you don't mind paying $329 for a WD Red 12TB retail drive with a 3 year warranty, that's fine. But I'm fine getting 12TB Easystores from Best Buy at $189. I send them down to the electronics shop, where half a dozen guitar picks pull open the shell without any trace of damage, put the parts in a baggie, which goes back inside the shell, which goes back into the box, which then goes into storage for two years, just in case a drive fails - because the externals have a 2 year warranty.

So if you make an array of 12, the cost is $2268 for shucked or $3948 for retail. With the $1680 I save, I figure it's possible I might lose one or two to no-warranty but I still come out ahead.

It doesn't really matter if the drives are white label or colored. That mainly goes to warranty coverage. What matters is that you can save a lot of money if you pick carefully and test stuff thoroughly.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Observationally, a lot of us who work professionally with big cheap storage raised our eyebrows when cutting edge high platter count high margin helium drives started showing up in external USB drives some years back. What seems to have happened is that SSD has significantly eroded the low end HDD market to nearly nothing, and even high end drives are hard to sell. If a company makes too many high capacity drives, it may be more desirable to sell those at a discount in USB enclosures and recover the manufacturing cost than it is to let them rot on a shelf waiting for a buyer.

So the thing is, it's a question of whether or not the terms are suitable to you. If you don't mind paying $329 for a WD Red 12TB retail drive with a 3 year warranty, that's fine. But I'm fine getting 12TB Easystores from Best Buy at $189. I send them down to the electronics shop, where half a dozen guitar picks pull open the shell without any trace of damage, put the parts in a baggie, which goes back inside the shell, which goes back into the box, which then goes into storage for two years, just in case a drive fails - because the externals have a 2 year warranty.

So if you make an array of 12, the cost is $2268 for shucked or $3948 for retail. With the $1680 I save, I figure it's possible I might lose one or two to no-warranty but I still come out ahead.

It doesn't really matter if the drives are white label or colored. That mainly goes to warranty coverage. What matters is that you can save a lot of money if you pick carefully and test stuff thoroughly.

Thanks; these are good points to consider.

I think when you scale things up, it makes sense to gamble on the "white label" drives for the reasons you stated; that's a lot of savings, and the gamble looks to pay off there. Indeed, once you scale up to 10+ drives it starts to make a lot of sense, and with a parity setup the risk is acceptable, since the odds of losing 4+ drives at once are low.

However, if you were selecting just two or four drives total, maybe the gamble is less worth the risk? Again, I'm far more interested in not losing time or being inconvenienced by having to spend days testing cheap drives to see if they're worth using seriously; I'd rather have drives that are rated up front for a better overall life span. But maybe that's incorrect and all marketing hype?

If you don't mind sharing, how often are you having to replace any of the Easystores you've gotten? Do they report good health? Performance? I see lots of information out there about people doing it, and much less about the end results after periods of time. I respect that you have professional experience in this field and follow this practice, so I'm very interested in your thoughts on this in general. I assume the PSUs you use with these are SATA 3.3-compliant for the pin, or do you use an adapter for power? And do you have any information on the Easystore vs the Elements (the Easystore is $215 for the 12TB, same cost for the Elements 12TB, and only $259 for a 14TB Elements)?

I'm not at all interested in the Red Pro drives at their cost. If I'm going to buy a retail drive, I'm far more interested in the datacenter Ultrastar drives or a similar class from another manufacturer. Your point about SSDs affecting this is very valid, but I don't think any of us are looking at non-SSD drives in the 1TB or even 2TB capacity range these days. When affordable SSDs hit 4TB and higher, if they ever do, that will be quite a significant thing for this market and for how people build (if they're good modules and not just the cheapest capacity-per-cost stuff).

Very best,
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
However, if you were selecting just two or four drives total, maybe the gamble is less worth the risk? Again, I'm far more interested in not losing time or being inconvenienced by having to spend days testing cheap drives to see if they're worth using seriously; I'd rather have drives that are rated up front for a better overall life span. But maybe that's incorrect and all marketing hype?

So, I do this professionally and have been doing it professionally for many years; I've still got 40MB and 80MB HDDs, and a massive 800MB 8" HDD, cracked open, is a display piece on my office wall. Just to set some background here.

I had dozens of Seagate Wren-class full-height 5.25" hard drives that were simply hard to kill. Then came the 2GB 3.5" Seagate Barracudas. These things failed at an unfavorable sideways glance, but if you got them into a really good enclosure that kept them cool, they were okay. And here's the lesson: looking back on 30 years of drives, the Seagate Hawks, which ran at 5400RPM, were massively more reliable.

Doing USENET servers, there was a lot of pressure to get fast I/O systems, but I consistently found that a larger system with more slow spindles was going to be lower maintenance than a system with a smaller number of 10K or 15K (remember those?) spindles. Those drives would die, relatively quickly. I've still got some Seagate Hawk 2XL's running in a few ancient machines, which puts them at maybe 25 years old.

You don't and can't know up front which specific drives are going to be problem-free with better lifespans. These things run in cycles. IBM had the Deathstars, and their drive business was bought by Hitachi (HGST), which was in turn bought by WD -- the same WD that makes the Blacks. Seagate had a hell of a time back around 2010 with their 1.5TB and 3TB drives, which were failmagnets.

But I can tell you that the 5400RPM (5900 now sometimes) drives tend to wear better over time.

If you don't mind sharing, how often are you having to replace any of the Easystores you've gotten?

Of the 40 units in several pool sets, zero replacements. Twelve 8TB units purchased Black Friday 2018 have about 15K hours. The remainder are 12TB units with about 20 of them purchased Black Friday 2019 and the remainder since then.

Do they report good health? Performance? I see lots of information out there about people doing it, and much less about the end results after periods of time. I respect that you have professional experience in this field and follow this practice, so I'm very interested in your thoughts on this in general. I assume the PSUs you use with these are SATA 3.3-compliant for the pin, or do you use an adapter for power? And do you have any information on the Easystore vs the Elements (the Easystore is $215 for the 12TB, same cost for the Elements 12TB, and only $259 for a 14TB Elements)?

Well, professionally, here's my perspective. In most businesses, engineering sends requests to management and management pays the bill. As the owner, though, I play both roles. I do not ask for things just because I want them, because the boss, man, he's harsh and he knows when to say no.

So, for example, when SSD's became vaguely affordable back in 2010-2011, I bought a bunch of 60G's, 120G's, and 240G's on various Black Friday specials and used them in RAID1 on hypervisors. Now I *could* have tried buying "enterprise grade" or "data center" SSD's, but they were three to four times more expensive, and there's a lot of stuff we run that doesn't need that endurance. So I've been looking at actual usage patterns and fitting endurance to actual needs for years, and it has saved a huge amount on storage. But I'm also willing to put all of the storage in RAID1 so that there aren't failures, so I actually get greater reliability out of two consumer SSD's in RAID1 than one enterprise SSD by itself.

The HDD thing is similar. I had some terrible experiences with the 1.5 and 3TB Seagates, but I still bought a bunch of 4TB's and ran them out past 50K hours before I replaced them with 8TB WD's. I still win in the long run.

These are not home systems, they are going into data centers, so there are no power supply or wiring issues. The backplanes do what they are supposed to.

I don't remember what the deal with the Elements is.

I'm not at all interested in the Red Pro drives at their cost. If I'm going to buy a retail drive, I'm far more interested in the datacenter Ultrastar drives or a similar class from another manufacturer. Your point about SSDs affecting this is very valid, but I don't think any of us are looking at non-SSD drives in the 1TB or even 2TB capacity range these days. When affordable SSDs hit 4TB and higher, if they ever do, that will be quite a significant thing for this market and for how people build (if they're good modules and not just the cheapest capacity-per-cost stuff).

So for ZFS the one thing to note is that if you're going for hard drives, bigger is better. Leave lots of space free and ZFS can go fast.

I am kind of disappointed that hard drive manufacturers gave up. For a while in the mid-2010s it seemed like 2.5" was really the thing of the future; you could do SSD or HDD, pick yer poison. However, WD and Seagate both stopped innovating on 2.5" CMR HDDs. For years I was stuck buying 1TB WD Red 2.5"s if I wanted a HDD datastore in a hypervisor. Finally the 1TB SSD price got near enough in the last year that we stopped buying 2.5" HDDs.

SSD has a ways to go to compete with capacity HDD's, but that's coming too. Considering the form factor, you can pack a ton of flash in the same space as a 3.5" drive.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
a smaller number of 10K or 15K (remember those?) spindles. Those drives would die, relatively quickly.

But I can tell you that the 5400RPM (5900 now sometimes) drives tend to wear better over time.

Of the 40 units in several pool sets, zero replacements. Twelve 8TB units purchased Black Friday 2018 have about 15K hours. The remainder are 12TB units with about 20 of them purchased Black Friday 2019 and the remainder since then.

But I'm also willing to put all of the storage in RAID1 so that there aren't failures, so I actually get greater reliability out of two consumer SSD's in RAID1 than one enterprise SSD by itself.

So for ZFS the one thing to note is that if you're going for hard drives, bigger is better. Leave lots of space free and ZFS can go fast.

I am kind of disappointed that hard drive manufacturers gave up. For a while in the mid-2010s it seemed like 2.5" was really the thing of the future; you could do SSD or HDD, pick yer poison. However, WD and Seagate both stopped innovating on 2.5" CMR HDDs. For years I was stuck buying 1TB WD Red 2.5"s if I wanted a HDD datastore in a hypervisor. Finally the 1TB SSD price got near enough in the last year that we stopped buying 2.5" HDDs.

Thanks; very good information. I appreciate you taking the time to share that.

I do recall the faster drives; we used to dream of SCSI systems and 10k drives, but thankfully I never got into any of it. As you pointed out, most of that stuff is gone. The market still centers on 5.4k~5.9k and 7.2k drives, and it makes sense: things spinning faster, physically, will wear down faster too.

Also a great point that two inexpensive disks in RAID1 will be better overall than a single drive costing about the same with zero redundancy. Granted, that's the idea of RAID anyway.

I was just reading about ZFS and how it basically operates best with lots of free space on the drives. Makes sense, given how it heals and performs.

Agreed; it's funny that the 2.5" stuff basically went away beyond small externals.

I look forward to quality 4TB SSD's in the future; if they're affordable. That will make a lot of this more interesting from a hardware perspective.

I appreciate your thoughts. I will have to try some of these white label drives.

Very best,
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Oh and on that topic, there are stupid ways to take apart the Easystores and smart ways. I was just talking about this with a client...

In my opinion, you want actual legitimate guitar picks to open these enclosures. I use about eight Fender "heavy" picks on the flat side of the drive. You will often see people using the baby blue crappy cell phone triangular picks. These break easily and aren't well-designed. Use real guitar picks. I then jam two more in at the top curve, right alongside each other, and then jam something in between the two picks to "pop" the case, totally damage-free.

Then you take off the support screws, PCB, and light pipe, stick them in a Ziplock baggie, and toss it inside the enclosure.

The upside to this is that if there is a problem with the drive, you can put the drive back in the enclosure, and take it back to the store within the normal return window to exchange for a brand new drive, or RMA it out to Western Digital and probably get an old janky repaired drive back. Because even though the drive itself isn't covered under warranty, the USB-enclosed drive is warranted for two years, and Best Buy now puts the serial on the receipt so you get two years from date of purchase. But you do have to keep all the enclosures because they all have serial numbers on them.
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
so you're comfortable with whatever the white label is, it seems

I am comfortable with the HGST He8 from the 8TB Elements. The 10TB is WD Red Plus Air now (it used to be WD Red Plus He), and I think the 12TB is HGST He again, but I'm not certain.

It's definitely not "whatever". There's a degree of prep that goes into shucking drives.

For what it's worth, I use a cut-up credit card to shuck and it works just as well as guitar picks - but "proper guitar picks" is good guidance for sure.

By the time my white label He8s fail, I expect their brand-name equivalents' 5-year warranty would be over anyway. If 1 or 2 fail early, the savings will have made up for it. And white labels spin @ 5400, which reduces vibration and heat, both drive killers. I expect they may last longer than retail. I'll let you know in 10 years how that worked out.
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
Oh and on that topic, there are stupid ways to take apart the Easystores and smart ways. I was just talking about this with a client...

In my opinion, you want actual legitimate guitar picks to open these enclosures. I use about eight Fender "heavy" picks on the flat side of the drive. You will often see people using the baby blue crappy cell phone triangular picks. These break easily and aren't well-designed. Use real guitar picks. I then jam two more in at the top curve, right alongside each other, and then jam something in between the two picks to "pop" the case, totally damage-free.

Then you take off the support screws, PCB, and light pipe, stick them in a Ziplock baggie, and toss it inside the enclosure.

The upside to this is that if there is a problem with the drive, you can put the drive back in the enclosure, and take it back to the store within the normal return window to exchange for a brand new drive, or RMA it out to Western Digital and probably get an old janky repaired drive back. Because even though the drive itself isn't covered under warranty, the USB-enclosed drive is warranted for two years, and Best Buy now puts the serial on the receipt so you get two years from date of purchase. But you do have to keep all the enclosures because they all have serial numbers on them.

Thanks, will check this out.

Thinking about the 12TB options, there are the Easystore & Elements versions. My understanding is that they're all the same drive, just sold through different distributors.

12TB WD Elements (Shuck) ($215) 55.8GB/$
12TB WD Easystore (Shuck) ($218) 55.0GB/$
14TB WD Elements (Shuck) ($259) 54.1GB/$
12TB SG EXOS ($282) 42.6GB/$
14TB SG EXOS ($303) 46.2GB/$
12TB WD Ultrastar (HC520) ($320) 37.5GB/$
14TB WD Ultrastar (HC530) ($362) 38.7GB/$
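(The GB/$ figures are just decimal capacity divided by price; trivial to check:)

    # Recompute the GB-per-dollar column above (decimal GB, 1TB = 1000GB).
    prices = {"12TB WD Elements": 215, "12TB WD Easystore": 218,
              "14TB WD Elements": 259, "12TB SG EXOS": 282,
              "14TB SG EXOS": 303, "12TB WD Ultrastar HC520": 320,
              "14TB WD Ultrastar HC530": 362}
    for name, price in prices.items():
        tb = int(name.split("TB")[0])
        print(f"{name}: {tb * 1000 / price:.1f} GB/$")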

Anything cheaper that I'm finding is refurb/renew, and I'm very wary of getting a drive that has had unknown things done to it.

Very best,
 

MalVeauX

Contributor
Joined
Aug 6, 2020
Messages
110
I am comfortable with the HGST He8 from the 8TB Elements. The 10TB is WD Red Plus Air now (it used to be WD Red Plus He), and I think the 12TB is HGST He again, but I'm not certain.

It's definitely not "whatever". There's a degree of prep that goes into shucking drives.

For what it's worth, I use a cut-up credit card to shuck and it works just as well as guitar picks - but "proper guitar picks" is good guidance for sure.

By the time my white label He8s fail, I expect their brand-name equivalents' 5-year warranty would be over anyway. If 1 or 2 fail early, the savings will have made up for it. And white labels spin @ 5400, which reduces vibration and heat, both drive killers. I expect they may last longer than retail. I'll let you know in 10 years how that worked out.

Interesting thanks!

Are you certain the Elements is still dropping HGST and Red Plus drives, though? Most of what I can find points toward that having stopped last year, and that now everything is basically white label.

I would love to hear about these in 10 years, hopefully also my own experience with them too!

Very best,
 

Yorick

Wizard
Joined
Nov 4, 2018
Messages
1,912
Are you certain the Elements is still dropping HGST

For 8TB, yes, see separate thread. Just shucked another three this summer.

For 10TB I have reports of Red Plus Air and He, with Air more recent.

For 12TB I don't know. This is why you run CrystalDiskInfo first, then shuck one drive, then carefully identify it (you'll see visually whether it's He or Air, and the drive's ID is still on the label, separate from the part number), and only shuck "all of them" if the previous steps were satisfactory.
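On FreeBSD or Linux you can approximate the CrystalDiskInfo step with smartctl through the USB bridge before shucking anything, assuming the bridge passes ATA commands (-d sat), which these WD enclosures generally do. A sketch (smartmontools 7+ for --json; the device name is an example):

    # Peek at the enclosed drive's identity over USB before shucking.
    import json, subprocess

    info = json.loads(subprocess.run(
        ["smartctl", "--json", "-d", "sat", "-i", "/dev/da0"],
        capture_output=True, text=True).stdout)
    print(info.get("model_name"), "-", info.get("rotation_rate"), "rpm")
    # An HGST model (e.g. HUH72...) vs a WDC WD...EMAZ white label tells
    # you what's inside before you ever open the shell.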
 