WD Red SMR Drive Compatibility with ZFS

Forza

Explorer
Joined
Apr 28, 2021
Messages
81
I think that supporting zoned storage would be very useful not only for SMR, but perhaps mainly for flash, since flash is also effectively append-only at the media level. Using zoned storage reduces garbage-collection slowdowns, changes how fstrim is handled (whole zones are reset explicitly instead of individual blocks being trimmed), and reduces the need for overprovisioning. Why should ZFS not support this?
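For anyone who hasn't poked at one: a zoned device exposes its zones and write pointers directly to the host. A minimal sketch of what that looks like on Linux using util-linux's blkzone tool (the device name /dev/sdX is a placeholder; check the man page for the exact flags on your version):

# Report the zones a host-managed SMR or ZNS device exposes; each zone
# shows its start, length, write pointer and condition.
blkzone report /dev/sdX

# Writes inside a sequential-write-required zone must land at the write
# pointer. To reuse a zone you reset it as a whole, instead of issuing
# per-block TRIM/discard as you would on a conventional SSD.
blkzone reset -o 0 -c 1 /dev/sdX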
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
Mechanical HDDs will continue to exist as long as there is a significant delta between the $/TB cost of a HDD and that of a SSD. The two options have coexisted for decades now and SSDs have yet to break out and become so cheap as to generally choke off HDDs in the broader market.

Instead, we see mechanical storage becoming obsolete in mobile applications (iPod vs. iPhone), in fast storage (SSD arrays vs. 15k RPM SAS), and for aesthetic reasons (the Apple iMac has no room for a HDD). Try finding a portable camera with a HDD that is still being made today.

Fusion pools are a transitional approach that has been in use, to a greater or lesser extent, for generations. Hence the RAM buffers on all HDDs, SSD caches, as well as past OEM software efforts in this field (for example, Apple Fusion Drives, back when Apple still shipped mixed HDD/SSD hardware). As long as there is a significant $/TB cost delta between HDDs and SSDs, I expect fusion pools to persist.

As for SMR, I doubt SMR will gain "acceptance" in the ZFS world outside of host-managed or host-aware SMR drives. ZFS is nosy by design, so a drive that goes "rogue" for extended periods of time to do its own thing without notifying the host is simply not acceptable. An SSD write cache that queues up multiple sector writes ahead of time could also be interesting: it would help with bursty loads, allow better HM- and HA-SMR integration, and is potentially beneficial from a fragmentation POV. You get the benefit of SSD write speeds, and then the system files the data away in an orderly manner as the write-cache thresholds are met.

The Apple fusion pool allegedly went a step further, noting the use patterns of files and pre-positioning high-use files on the SSD for read and write operations while lower-intensity files went to the HDD. That's probably where I would take ZFS if I had my druthers re: integrated fusion pool development - note what gets used a lot automatically, move said content to the "high-use" SSD VDEV within a pool so the admin doesn't have to set aside separate Flash pools for databases and the like.

Now, while the Apple fusion pool sounds like a great idea in theory, the practice left some unhappy users in its wake. Implementation in a COW system whose user expectations re: data integrity are high is unlikely to be simple.
 
Last edited:

rvassar

Guru
Joined
May 2, 2018
Messages
971
I've been in the flash business for 14 years and disagree on three levels:

I have 25 years R&D experience, including 16 years at Sun Microsystems. I've actually wandered upstairs to ZFS creator Jeff Bonwick's office just to chat with him about a certain pre-Cretaceous R&D server we shared an interest in. This is called an "appeal to personal authority", and generally confers no weight in technical discussions.

I will further claim "disagreement inflation", specifically:

3) The hybridization of ZFS storage keeps on improving. Mixed SSD and HDD (fusion) pools are now viable in addition to standard L2ARC and SLOG functions. ZFS enables efficient replication of active flash pools to low cost HDD pools.

This is an example of ZFS adapting to new technologies. I made the statement that it needs to adapt to new technologies, you provide evidence that it indeed is adapting. Adaptation needs to be a continuous effort, but I can identify no real disagreement here.
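As an aside for anyone following along, the replication mentioned in that point is ordinary zfs send/receive. A minimal sketch, assuming a flash pool named "fast" and an HDD pool named "slow" (the pool and dataset names are hypothetical):

# Initial full replication of the active dataset to the HDD pool.
zfs snapshot fast/vms@hourly-00
zfs send fast/vms@hourly-00 | zfs receive slow/vms

# Subsequent runs only ship the blocks changed between snapshots.
zfs snapshot fast/vms@hourly-01
zfs send -i fast/vms@hourly-00 fast/vms@hourly-01 | zfs receive slow/vms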

2) ZFS works very well on flash SSDs. It provides all the snapshots, clones and replication. It manages data integrity and compression well. It's also pretty good with QLC; it aggregates writes and increases endurance.

I made no mention of any of this, and do not disagree. My point was that it does not work well with SMR, and may not work well with future advancements in magnetic recording technology (MRT) that may allow it to remain relevant vs. sub-5nm flash memory. We've already seen the MRT vendors attempt to "submarine" SMR into the NAS market. I have no confidence they won't repeat this behavior with other unsuitable technologies over the next decade.

1) 16TB SSDs cost >$5K, while equivalent HDDs cost $500. 10X is a huge cost difference. The flash vendors do not want to drop their prices and can't afford to ship the volume of bits currently shipping on HDDs. The high-capacity HDD market keeps growing (the sub-2TB HDD market is shrinking and dying). Sports car companies aren't really interested in the school bus market... it's unprofitable for them.

That's right now. I was projecting to 2030 when Seagate expects to start having trouble competing against solid-state storage technologies. I admit I'm still waiting for my flying car that folds into a briefcase. :smile:


Managing large amounts of data is complex... it needs reliability, application-specific performance, and sound economics. ZFS needs to keep evolving, but there are many willing participants who like the tools it provides.

Agreed.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Glad to hear that you are part of the community. Before flash, I spent 20 years in the network R&D business... similar, but not the same.

We seem to be in violent agreement. Agree that we need to look at SMR for the future... it does tend to reduce $ per TB by 10-15%.

One example of a technology that could help address SMR performance issues is dRAID. It takes the stress off individual drives during resilver operations and balances the load across more drives. I can imagine there will be more tweaks needed.
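For anyone who wants to experiment, a minimal sketch of creating a dRAID pool (the pool name "tank" and the disk names are placeholders; the layout shown is double parity, 8 data disks per redundancy group, 1 distributed spare, 11 child disks):

# draid2 = double parity, 8d = data disks per group, 11c = children,
# 1s = distributed spare capacity spread across all members.
zpool create tank draid2:8d:11c:1s sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk

Because the spare capacity is distributed, a rebuild reads from and writes to every member at once instead of hammering a single replacement drive, which is exactly the resilver stress relief mentioned above.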

Can we have a beer bet on Seagate and 2030? Cheers
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
Might be time to start questioning whether WD is really a safe partner to work with.
If you have anything WD, you'd better yank it from the wall while you still have time.

With more data on the issue, it seems WD is taking responsibility here, which is good news.

Sorry for the off-topic post, just wanted to update on this.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
It’s good that the company is taking ownership of the issue. Part and parcel of putting out products should be periodic updates and checks to ensure they still work as intended. Companies can and do set aside reserves for these kinds of issues.

WD may have also learned that owning the issue is a better stance in the long run than denying it, because customers only get more agitated the more a company dissembles. Streisand effect, et al.

Another factor may be that companies are becoming more careful about appearing too monopolistic in their behavior. The HDD market is arguably ripe for an anti-trust investigation.
 

rvassar

Guru
Joined
May 2, 2018
Messages
971
it does tend to reduce $ per TB by 10-15%.

One example of a technology that could help address SMR performance issues is dRAID. It takes the stress off individual drives during resilver operations and balances the load across more drives. I can imagine there will be more tweaks needed.

I've thought the same: what can be done to remediate the performance loss in a 100% write scenario? I toyed with the option of borrowing a hot spare and round-robining the write load between the drives, then merging, but that doesn't cover the existing drives during resilver or make it go any faster. It might get you to a data-safe state faster, and that's about it. The multi-actuator drives that just came on the market raise the interesting possibility of splitting the shingling between actuators, but you're still looking at writes being limited by the rotational rate. I just don't see a good SMR solution in the offing.

Can we have a beer bet on Seagate and 2030? Cheers

With one caveat... There can be no fab-disrupting wars in the western Pacific. :wink:
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
I'll agree that HDDs will gradually move to applications where performance requirements are much lower... and writes are aggregated to larger sizes.
IOPS-sensitive apps will move to SSDs... as has been the trend for 15 years. HDDs will be preferred for capacity and cost per TB. School buses can be slow as long as they are safe.

I can't agree to caveats in a beer bet! One of the reasons why flash won't win is that the fabs cost so much and are vulnerable to "disasters". It's not worthwhile to take the risk if the price of flash drops too much. It's a modest-growth business. Data keeps growing faster.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
That’s the key issue to me. To meet the 2030 goal, the rate of fab construction for flash production would have to significantly outstrip data growth for supply to meet demand down the road.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,691
Yes, but data growth is very high (20% CAGR). https://www.idc.com/getdoc.jsp?containerId=prUS47560321

In 2020: "Even though SSDs held a 28 percent unit shipment advantage in 2020, HDD shipments accounted for five times more capacity, at over 1 zettabyte compared to 207 exabytes."

Working backwards, I'd guess 2030 storage capacity to be around 6 zettabytes. Flash volumes would need to grow 30X and flash prices would need to drop by 90% or more. The economics of those two are quite difficult, since flash revenues would only increase by 3X.
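A rough sanity check on those figures, using only the numbers already in this thread (about 1 ZB shipped on HDD and 207 EB on flash in 2020, 20% CAGR, and the roughly 10X $/TB gap mentioned earlier):

1 ZB x 1.20^10 ≈ 6.2 ZB of capacity shipped in 2030
6 ZB / 207 EB ≈ 29X growth in flash bit shipments to absorb it all
closing a ~10X $/TB gap implies a ~90% flash price cut
~30X the bits at ~1/10 the price ≈ 3X the revenue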
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,828
There is no but. We are in violent agreement. :smile:

Data growth is high, so SSD capacity growth would have to be significantly higher to supplant HDDs as a destination for said data. Barring rapid commercialization of new data storage approaches, I don't see an escape from the current HDD and SSD oligopoly.
 
Joined
Jan 4, 2014
Messages
1,644
EDIT: I had three WD Red SMR disks in the mix, the last of which I replaced yesterday. It's been a painful and protracted process, with the loss of a pool along the way. WD is 'happily' replacing these drives. ... The catch is that I pay the postage to send the disks back. It's cost me AUD$57 to return disks with a total original purchase cost of between AUD$750 and AUD$900.

Once I posted the SMR drives, which by the way had to be shipped back to WD in Malaysia, the turnaround to receive replacement CMR drives was just under three weeks. I registered the replacement drives on the WD support portal and noticed warranty periods ranging from 429 to 530 days and not the standard three years (1,095 days) for new drives. I queried this with WD overnight. It turns out that the formula used for the warranty on the replacements is the remaining warranty on replaced drives plus three months (90 days).
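(For what it's worth, that formula is consistent with the 429 to 530 day range above: a replaced drive with 339 days of warranty remaining comes back with 339 + 90 = 429 days, and one with 440 days left comes back with 440 + 90 = 530 days.)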

If you have WD SMR drives that you want to have replaced and it's a matter of days or weeks till the warranty expires on any of those drives, you'll want to act promptly. You won't be able to replace any drives with an expired warranty. Act now and you'll at least get an extended three-month warranty.

While the replaced drives aren't subject to the same warranty as new drives, they look new to me rather than refurbished, which, if that's the case, is an added bonus. I'll begin burn-in testing the drives in due course.

[Attached image: PXL_20210713_110834499.jpg]
 
Last edited: