
List of known SMR drives


Ericloewe

Not-very-passive-but-aggressive
Moderator
Joined
Feb 15, 2014
Messages
17,203
- Sequential resilver, random I/O is murder on SMR.
It's not upcoming, it's been in FreeNAS for a while now. It only gets you so far, though, and is a disaster if the drive's firmware is trash and doesn't know how to deal with unwritten sectors.

Work to make ZFS understand HA-SMR / HM-SMR would need to deal with Garbage Collection somehow, to free zones that had most of their data deleted from them. That's either full-on BPR, or an indirection layer (which would grow and grow?) like is used for device removal. Possible additional work, linked on the HiSMRfs paper's page, could be to identify "hot" data, and write that to designated "hot" zones, so that fewer zones see changes.
I'm going to throw this out there, take it for what it's worth and not as gospel:
If you're that concerned about small writes all over the place that you're thinking about garbage collection, no HDD is going to cut it and SSDs are the clear way to go. Once you take that into account, you can reduce your awareness problem to "the allocator needs to know what the layout of the disk is and be able to write to where it wants stuff to go". Since the shingled areas are a couple hundred sectors long, let's say 1 MB per shingled region, all that needs to happen to mitigate 90% of the performance cost is to allocate data with a preference for 1 MB offsets.
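The allocation preference described above could be sketched roughly like this — a toy free-extent picker, not actual ZFS allocator code, assuming the hypothetical ~1 MiB shingled-region size from the post (all names here are illustrative):

```python
SHINGLE_KIB = 1024  # hypothetical ~1 MiB shingled-region size, per the estimate above

def choose_offset(free_extents, size_kib):
    """Pick a start offset (in KiB) for an allocation, preferring free extents
    that begin on a shingled-region boundary so a later rewrite touches whole
    regions instead of straddling two of them."""
    fits = [(start, length) for start, length in free_extents if length >= size_kib]
    aligned = [ext for ext in fits if ext[0] % SHINGLE_KIB == 0]
    # fall back to any fitting extent when no aligned one exists
    candidates = aligned or fits
    return candidates[0][0] if candidates else None
```

For example, `choose_offset([(512, 2048), (2048, 4096)], 1024)` picks offset 2048 (region-aligned) even though the unaligned extent at 512 would also fit, and falls back to 512 when it is the only option.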
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,379
For me, the issue is not whether OEMs decide to reduce the cost of manufacturing disks by increasing areal density via SMR. For many desktop or archival drives, that's a perfectly good way to decrease cost while taking advantage of cache, CMR, and lastly SMR sectors - basically integrating what Apple called Fusion Drives into one tidy package. Rather, the issue is one of surreptitiously incorporating SMR into products without letting the customer know it happened, and in some cases trying to deny it.

It's perfectly evident that incorporating SMR has some very real downsides associated with it. Whether it's faulty firmware, insane timeouts (while the device transfers data from its CMR cache into SMR zones), or the like, these drives are not meant for NAS applications where reads and writes can be frequent, the files may range from tiny to huge, and the system cannot rely on the CMR cache being destaged to SMR sectors while the host/NAS is twiddling its thumbs. So while the use case for incremental archival storage may be minimally affected by SMR's drawbacks, the drawbacks are there, they are real, and the customer should be allowed to make an informed choice.
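The cache-destage cliff can be illustrated with a toy model. All sizes and rates below are made-up assumptions for illustration, not vendor specs for any actual drive:

```python
# Toy model of why sustained writes collapse on a drive-managed SMR disk.
CACHE_GB = 20      # hypothetical CMR (non-shingled) cache region
FAST_MBPS = 180    # write rate while the cache still has room
SLOW_MBPS = 10     # effective rate once writes destage straight to SMR zones

def sustained_rate_mbps(write_gb):
    """Average MB/s over one long write burst with no idle time to destage."""
    fast_gb = min(write_gb, CACHE_GB)
    slow_gb = write_gb - fast_gb
    seconds = fast_gb * 1024 / FAST_MBPS + slow_gb * 1024 / SLOW_MBPS
    return write_gb * 1024 / seconds
```

Under these assumed numbers, a 20 GB burst fits in cache and averages the full 180 MB/s, while a 100 GB burst averages roughly 12 MB/s — the kind of cliff a resilver or large sustained write runs into when the drive never gets idle time.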

I hope that affected users consider making a big stink about this, as every vendor seems to be racing to the bottom re: cost while keeping their current prices level, i.e. goosing profitability while also poisoning customer goodwill. In the short term, this makes the case stronger for re-using older, known-good drives rather than purchasing new and risking getting an SMR lemon.

This is exactly the kind of thing a perfect market should sort out (i.e. CMR drive = premium), but the assumptions behind perfect markets include the consumer having perfect information. Hence, it wouldn't surprise me if the oligopoly running this business finds itself subject to regulation at some point in the future, as more and more consumers raise a stink with their political representatives.
 
Last edited:

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,836
If you're that concerned about small writes all over the place that you're thinking about garbage collection, no HDD is going to cut it and SSDs are the clear way to go. Once you take that into account, you can reduce your awareness problem to "the allocator needs to know what the layout of the disk is and be able to write to where it wants stuff to go". Since the shingled areas are a couple hundred sectors long, let's say 1 MB per shingled region, all that needs to happen to mitigate 90% of the performance cost is to allocate data with a preference for 1 MB offsets.
That wasn't the use case I had in mind. Take a gander at this poster: https://sc17.supercomputing.org/SC17%20Archive/tech_poster/poster_files/post204s2-file2.pdf

Zone size will vary with drive, but let's go with 256MB for now. Also, quick refresher for others hopping into the thread, a zone pointer is either at the beginning of the zone (zone empty), or somewhere along the zone (zone partial), or nonexistent (zone full). Zones can be written to only sequentially.
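That write-pointer behavior can be captured in a few lines. This is a toy model of a host-managed zone for readers new to the concept, not real ZBC/ZAC code:

```python
ZONE_SIZE = 256 * 1024 * 1024  # 256MB, matching the example zone size above

class Zone:
    """Toy host-managed SMR zone: a write pointer that only moves forward."""
    def __init__(self):
        self.wp = 0  # 0 = empty; ZONE_SIZE = full; in between = partial

    @property
    def state(self):
        if self.wp == 0:
            return "empty"
        return "full" if self.wp >= ZONE_SIZE else "partial"

    def write(self, offset, length):
        if offset != self.wp:
            raise IOError("zones only accept appends at the write pointer")
        if self.wp + length > ZONE_SIZE:
            raise IOError("write would cross the zone boundary")
        self.wp += length

    def reset(self):
        self.wp = 0  # resetting the zone is the only way to reclaim its space
```

The key point for GC is in `reset()`: space inside a zone can't be freed piecemeal, so live data in a partial zone has to be copied elsewhere before the whole zone can be reset.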

The use case I am thinking of is a fusion pool with one or more raidzN of HA-SMR drives (HM-SMR, same difference just more stringent), and SSDs for metadata and small files - for some value of small, maybe anything 1MB and under, depends on workload.

Storage behavior could be:
- Write once, delete never. I don't need GC. But Lord have mercy if my storage behavior changes.
- Write once, delete once; write daily, delete never. This describes the behavior of a backup application, for example: incremental files are written once and deleted after N days; the full backup file is written daily as the incremental from day N is incorporated into it. Here, I will want to periodically / eventually do some GC. Not a LOT of GC, but I need it regardless; otherwise, all those partial 256MB zones left over from deletion / CoW will leave me with no space left after many moons of backups. Say my backup file is 8.1GB, so I'll use 33 zones. When I delete it, best case with some pre-allocation as the incremental is being written, 31 zones will be empty again and can be reclaimed right away (very simple GC), and 2 zones (start and end) will be partially full.
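A quick back-of-envelope check of that 33-zone example, assuming 256 MiB zones and an 8.1 GiB file that happens to start 100 MiB into a zone (the start offset is an arbitrary assumption):

```python
ZONE_MIB = 256  # zone size from the example above

def zone_footprint(file_mib, start_in_zone_mib):
    """Zones touched by one sequentially written file, and how many become
    fully empty (trivially reclaimable) when the file is deleted."""
    zones = partial = 0
    if start_in_zone_mib > 0:                      # leading partial zone
        head = min(file_mib, ZONE_MIB - start_in_zone_mib)
        zones += 1
        partial += 1
        file_mib -= head
    full = file_mib // ZONE_MIB                    # whole zones, reclaimable as-is
    zones += full
    if file_mib % ZONE_MIB:                        # trailing partial zone
        zones += 1
        partial += 1
    return zones, full, partial

# 8.1 GiB ≈ 8294 MiB starting 100 MiB into a zone
# → (33, 31, 2): 33 zones touched, 31 fully reclaimable, 2 left partial
```

This matches the numbers in the post: deleting the file frees 31 whole zones with no data movement, and only the two end zones would ever need real GC.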

How much GC do I actually need to do? Now that bears testing. With HA-SMR, I might be able to just rely on TRIM for this use case and have no special-case code at all. For HM-SMR, I'd need to do the whole thing manually.

With the caveat that I am no storage expert. I'm thinking out loud.
 

Newfoundland.Republic

Neophyte Sage
Joined
Jul 2, 2019
Messages
648

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,836

no_connection

Senior Member
Joined
Dec 15, 2013
Messages
466
Well, there goes any trust and respect WD ever earned. The RED drives I have are great, but this is the kind of BS that will put a company on a blacklist. Which kinda sucks, since I liked WD.
 

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,836
the kind of BS that will put a company on a blacklist
That'd be a reasonable stance to take. I'm not that absolute about things: My drives are in a Node 804, which means heat matters. 5400 rpm is better for that than 7200 rpm, or at least I think so, and that means WD Red or something shucked. At 8TB per drive, I'm still safe - for now.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,379
Ever since I started using helium drives, I no longer worry much about heat. All my HUH7210 drives are running at 29-31°C despite being the 7,200 RPM variety. Granted, I have three 120mm fans pulling air over 8 drives, but you get the idea. My older, non-helium 4TB HGSTs ran hotter.

I just bought another CMR spare, so I now have two. Given the market oligopoly in place now and the statements from WD, I doubt anything will change. The MBAs likely see the CMR / SMR split as an ideal differentiator to allow market and price segmentation. So the case for buying used, known-good CMR drives only got stronger.

While some technological advancements are still being made (HAMR, MAMR, et al) this seems to be a business that storage executives are treating as a cash cow.

High-volume buyers like Backblaze (BB) likely have already adjusted their storage pod software to better work with all the SMR-related issues. At least theoretically, a lot of drives in a server or pod would allow your server to shunt writes from “busy” drives doing GC, CMR-SMR transfers, etc. to drives that are not.

Current 4U servers at BB hold 60 drives with a high “cold” data content, so the impact of SMR at the likes of BB could be minimal even without actively querying which drive is busy and which is not. For folks running disk arrays with fewer drives, however, there is no way to avoid the impact of SMR-related slowdowns, even if the server software can query drives for their status and all that. There simply aren't enough drives around to write parity and other data to.
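The shunting idea could look something like the sketch below — a toy target picker over a pod's drives, where the field names and the notion of a per-drive "busy" flag are invented for illustration:

```python
def pick_write_targets(drives, copies):
    """Choose `copies` drives for a write, avoiding drives that report
    themselves busy with GC or cache destaging when enough idle ones exist."""
    idle = [d for d in drives if not d["busy"]]
    # with too few idle drives (small arrays), there is no slack to shunt to
    pool = idle if len(idle) >= copies else drives
    return [d["name"] for d in sorted(pool, key=lambda d: d["queue_depth"])[:copies]]
```

With 60 drives there is almost always an idle set to pick from; with an 8-drive raidz there often isn't, so the picker falls back to writing to busy drives anyway — which is the "not enough drives" problem described above.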
 

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,836
BackBlaze is transitioning to 8TB drives, aren't they? Which means they're not at the mercy of DM-SMR.

If BackBlaze wanted to use archival disks, their best bet would actually be HA-SMR. They control the software and can control the zones the drive writes to. Much more elegant than trying to guess at what DM-SMR is doing at any given moment.

HAMR is a big technological advancement. That and similar tech is what will get us beyond 20TB per drive, to 50TB by 2026 per Seagate's roadmap.

My understanding is that SMR is a way to get more density out of a drive (around 15%) without needing to change the underlying recording tech. Changing recording tech is very, very hard - it's difficult to make the heads much narrower than they already are. But the gains from SMR are limited. The expensive, hard, time-consuming R&D for better recording tech still has to happen. Which means we will always see drives with SMR, and without. Those without will use more platters for the same amount of storage.
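Rough arithmetic on that ~15% figure, using made-up per-platter numbers (real per-platter capacities vary by generation):

```python
# Illustrative numbers only; only the ~15% SMR gain comes from the post.
platters = 9
cmr_tb_per_platter = 2.0
smr_gain = 0.15  # ~15% areal-density bonus from shingling

cmr_capacity = platters * cmr_tb_per_platter        # 18.0 TB
smr_capacity = cmr_capacity * (1 + smr_gain)        # 20.7 TB
# To match SMR capacity without shingling, CMR needs extra platters:
extra_platters = smr_capacity / cmr_tb_per_platter - platters   # ~1.35 more
```

So on these assumed numbers, dropping SMR costs roughly one to two extra platters per drive at the same capacity point, which is exactly the trade-off described above.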

Thank you for that note about Helium drives, that makes sense to me. I have HGST HE8, from a shuck, and other than their seeking noises I can't hear them at all, even when I am right next to the case. They are quiet and cool, love those things.
 

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,836
I found a list of Toshiba drives that shows SMR/CMR status
Looks like the 4TB P300 desktop, and a range of 2.5" drives. Those are in the list already.
 

no_connection

Senior Member
Joined
Dec 15, 2013
Messages
466
It would maybe be a tiny bit less of a deal if SMR drives from WD came with an appropriate price decrease, but they don't.
 

Yorick

Dedicated Sage
Joined
Nov 4, 2018
Messages
1,836
came with appropriate price decrease
Pricing is set by what the market will bear. If people stop buying the WD Red 2-6TB, and prefer IronWolf and N300 instead, then WD might adjust their pricing down to entice buyers.

If they sell at their current price, why would they adjust it?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,379
Based on the hard drive life stats (see here for the current Q3 2019 data), Backblaze is phasing out 4TB drives and phasing in 12+ TB drives. 8TB drives are a relatively small cohort in the chart.
 

danb35

Wizened Sage
Joined
Aug 16, 2011
Messages
12,177

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,379
Pricing is set by what the market will bear. If people stop buying the WD Red 2-6TB, and prefer IronWolf and N300 instead, then WD might adjust their pricing down to entice buyers.
Unlikely. There is an oligopoly controlling pricing and supply. SMR will be used to squeeze more profit from the market segments where people notice the difference. The low end of the market will simply make do with a slower transfer experience.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,379
FWIW, Backblaze is purposefully staying away from SMR drives due to the issues they present in a NAS environment.

The one thing that doesn't add up in that interview is Mr. Ellis' claim that a single Tier-1 location buys 500,000x more drives than Backblaze, whose annual consumption (per that interview) is somewhere around 50-60,000 drives (four Vaults a month at 1,200 drives each).

For the 500,000x claim to be true, tens of billions of drives would have to be installed annually, yet the entire HDD industry only shipped 15MM high-capacity drives in Q1 2020; projecting forward, that gets us to 60-80MM drives in 2020 (the upper margin assumes generous growth). I'd expect Tier-1 and like facilities to mostly buy high-capacity drives due to $/TB, space, heat, power, etc. considerations.

But even if we take total drive shipments into consideration, that only brings us to ~265MM drive shipments for 2020. So each and every Tier-1 location on the planet is supposed to be buying tens of billions of drives annually? Mr. Ellis is likely off by at least four orders of magnitude, which is disappointing given how well-researched Backblaze usually is. Anyhow, the interview is still worth reading, as a lot of the qualitative information in it seems much more aligned with reality.
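The arithmetic behind that objection, using only the numbers quoted from the interview above:

```python
# Backblaze's own consumption, per the interview: four Vaults/month at 1,200 drives each
vaults_per_month = 4
drives_per_vault = 1200
backblaze_annual = vaults_per_month * drives_per_vault * 12   # 57,600 drives/yr

claimed_multiple = 500_000
implied_per_site = backblaze_annual * claimed_multiple        # 28.8 billion drives/yr

industry_annual = 265_000_000  # ~265MM total drive shipments projected for 2020
# A single Tier-1 site would out-buy the entire industry's shipments ~100x over:
ratio = implied_per_site / industry_annual                    # ≈ 109x
```

In other words, the claimed multiple implies one facility consuming roughly a hundred times the whole industry's annual output, before even asking what a plausible per-site purchase actually is.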
 

Lexx

Newbie
Joined
May 2, 2020
Messages
3
It'd be nice if there were a model list of the SMR disks on the first page (unless there is one elsewhere), as not everyone lists cache size on the hard disks being sold (eBay definitely does not show it correctly in filters).
 