Thoughts on WD Ae "Archive" Drives?


ewhac

Contributor
Joined
Aug 20, 2013
Messages
177
http://products.wdc.com/library/SpecSheet/ENG/2879-800045.pdf

NewEgg posted a sale on 6TB Western Digital Ae drives, which I might well have jumped on had my car not thrown an expensive fault over the weekend. However, WD describes these as "archive" drives. In particular, from the somewhat off-putting footnote on the first page:
The WD Ae hard drive is best suited for cold storage, backup and data archiving where data is stored on disk but rarely if almost never read again yet may be critical at some future point, prime examples being legal data or photo backups.

While a lot of the data on my NAS is "cold" (photos, videos, etc.), the thing does indeed get used. What are these things actually doing? What is the sense of the community on using these drives on a live server?
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
The drive is rated for very low use. It looked good to me until I ran some figures on my own usage. I have one array that only gets used once a week to make a backup, and they would work for that, but not for my regular storage because I use it too much.
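The rough figuring looks something like this (a quick sketch; the 60 TB/year rating and 5 MB/s average below are placeholders, so plug in the real spec-sheet number and your own pool's write rate):

```python
# Back-of-the-envelope workload check for an "archive"-class drive.
# Both numbers below are placeholders, not taken from any spec sheet.
RATED_TB_PER_YEAR = 60      # assumed annual workload rating
avg_write_mbps = 5          # assumed average MB/s of writes hitting one disk

seconds_per_year = 365 * 24 * 3600
tb_per_year = avg_write_mbps * seconds_per_year / 1_000_000

print(f"~{tb_per_year:.0f} TB/year written vs. a {RATED_TB_PER_YEAR} TB/year rating")
# Even 5 MB/s sustained works out to ~158 TB/year, far past a low rating,
# while a once-a-week backup target would land comfortably under it.
```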

 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
However, WD describes these as "archive" drives. In particular, from the somewhat off-putting footnote on the first page:

While a lot of the data on my NAS is "cold" (photos, videos, etc.), the thing does indeed get used. What are these things actually doing? What is the sense of the community on using these drives on a live server?

These drives are Shingled Magnetic Recording (SMR). They are great for sequential writes, such as backups with little rewriting and low usage. Great for power up, backup, then power down.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It might make a good backup drive.

In my case I use a single Seagate 8TB SMR Archive drive for one rotation of my backups. Due to the shingling, that backup is pretty slow, down to 30MBps writes. I got closer to 90MBps when I first started using it, but I am guessing that fragmentation has reduced the speed. Not a problem for me, I don't really care how long the backups take, (I run backups monthly), as long as my ZFS scrubs tell me the data is safe.

So if this 6TB WD Ae drive were cost effective compared to the 8TB Seagate SMR Archive (in $ per TB), I might use a WD Ae drive as the next disk in my rotation.
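The $ per TB math is trivial, but for anyone following along (prices below are made up for illustration):

```python
# Simple cost-per-TB comparison; the prices are illustrative only.
drives = {
    "WD Ae 6TB":           (150.00, 6),
    "Seagate Archive 8TB": (180.00, 8),
}
for name, (price, tb) in drives.items():
    print(f"{name}: ${price / tb:.2f}/TB")
```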
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
These drives are Shingled Magnetic Recording (SMR). They are great for sequential writes, such as backups with little rewriting and low usage. Great for power up, backup, then power down.
Are you sure about these being shingled?

I am not doubting you, just want to understand, (since my backups might get slower over time).

If you can dig up your reference, I would appreciate it.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
I cannot find where I got my shingled info. :confused: At this point they do not look like they are. They have specs that are lower in various categories. That can work for cold-storage use with low run time.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Are you sure about these being shingled?
I am not doubting you, just want to understand, (since my backups might get slower over time).
If you can dig up your reference, I would appreciate it.
The thing you need to do with the shingled drives is delete the partition and create a new one, so the data can be written linearly instead of modifying data that is already in place on the drive. It is the modify operation that makes them slow, because of the way a rewrite is done on a shingled disk.
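A toy illustration of why the modify is the expensive part (a simplified model that assumes everything from the changed block to the end of the zone has to be re-written; real drives complicate this with caching):

```python
# Simplified model: overwriting one block inside a shingled zone forces the
# drive to read and re-write everything from that block to the end of the
# zone, while an append to a clean zone only writes the new block.
ZONE_BLOCKS = 65_536            # hypothetical zone size: 256 MiB of 4K blocks

def blocks_rewritten(offset_in_zone: int) -> int:
    """Blocks physically re-written to modify one block at this offset."""
    return ZONE_BLOCKS - offset_in_zone

print("append at end of zone:   ", blocks_rewritten(ZONE_BLOCKS - 1), "block")
print("modify block near start: ", blocks_rewritten(10), "blocks")
```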
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
The WD Ae drives are not SMR. I did some quick research on this earlier this week. But they do have some non-standard low-level formatting of the platters, which is why there are 6.1TB, 6.2TB, and 6.3TB versions all sold as 6TB. Trying to find details is very difficult.

I thought about purchasing these drives, but decided the risk was too high since the specs are rather low. The head load cycle rating is low and the MTBF is low. The drive is meant to sleep for long periods of time and not be accessed, hence the "cold storage" advertising for this drive.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
I thought about purchasing these drives, but decided the risk was too high since the specs are rather low. The head load cycle rating is low and the MTBF is low. The drive is meant to sleep for long periods of time and not be accessed, hence the "cold storage" advertising for this drive.
Same here. I looked at those low numbers and decided they wouldn't be worth the money, even as inexpensive as they are.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
The thing you need to do with the shingled drives is delete the partition and create a new one, so the data can be written linearly instead of modifying data that is already in place on the drive. It is the modify operation that makes them slow, because of the way a rewrite is done on a shingled disk.
Yes, I know. But simply deleting the partition is unlikely to clear the SMR groups. I wish there were a medium-level format routine for these drives (there may be; I've not bothered to look yet). Perhaps writing zeros to the entire drive would do it.

However, for my use case, backups, I am able to keep multiple backups on the SMR disk by simply snapshotting it after each backup. So any full-disk cleaner is of no use to me at present.

This leads back to one of my original thoughts: an internal defragment utility. Just collect the free space into one place, or a reduced number of places, use that for new writes, and occasionally defragment as needed. Oh, gee, copy-on-write!
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
Oh, gee, copy-on-write!
That is why I say deleting the partition and creating a new one should start a fresh file system: a clean volume with no data, so there would be no fragmentation and no slow performance.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
That is why I say deleting the partition and creating a new one should start a fresh file system: a clean volume with no data, so there would be no fragmentation and no slow performance.
In my case I use a single Seagate 8TB SMR Archive drive for one rotation of my backups. Due to the shingling, that backup is pretty slow, down to 30MBps writes. I got closer to 90MBps when I first started using it, but I am guessing that fragmentation has reduced the speed.
So I've done some reading on SMR and how it works, and this is by far the worst type of storage to have for frequently changed data. I'm not condemning its use if you have one already, but I would never recommend it in a NAS environment. File fragmentation is inherent to this design. The design is quite smart in order to pack more bits into a given area, but what a nightmare. If you destroy the partitions and recreate them, then for the purposes of this discussion you have effectively destroyed any previously recorded data, and all data recorded after that point is new. I say this because the FAT will have no known previous data to worry about re-recording, so it should operate nice and fast again, as it did when it was new. With that said, it won't last long. I don't think I'd destroy the partition each time I needed the drive to move faster again.

So how could we combat a slow drive and try to make it fast again? Since these SMR drives act somewhat like an SSD (in data organization), you could use a built-in garbage collector that would automatically rewrite your data in contiguous format, freeing up the unused sections into one large free space at the end of the drive for new writes. I would have expected the engineers to have thought of this already, so there must be some challenges in implementing it. One challenge could be seriously shortening the life of the drive. But something like this would need to be done internally by the drive; I don't see how it could be implemented from the outside.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Uh, deleting the partition is, in my opinion, useless for clearing an SMR drive.

Remember, the DRIVE controls where data blocks are written. It writes any newly written data anywhere it wants, simply freeing up the old data blocks. Meaning that when I write to block 10,000 over the SATA port, it is almost certainly not going to be block 10,000 internally, and it is likely to change every time block 10,000 is written. From what I understand, we can think of it like wear leveling (it's not wear leveling, it just acts like it).

Further, due to the shingle rewrites, it's possible that static, unchanging data will be moved. That may be because a shingle group has blocks freed up at the beginning, hence the need to write into that free space so that the readily writable free space ends up at the end of the shingle group.

From what I understand, my 8TB SMR drive has a 20GB un-shingled area that it uses as a write cache. I don't know if it also uses this cache as a temporary location for blocks that are being moved, but it could, much like a ZFS SLOG.
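If it helps, here is a toy model of how I picture the indirection; the sizes and behavior are my assumptions, not anything from a datasheet:

```python
# Toy drive-managed SMR model: logical LBAs go through an indirection table,
# new writes land in a small un-shingled cache region first, and the drive
# later destages them into the shingled area wherever it likes.
class ToySMRDrive:
    def __init__(self, cache_limit=4):
        self.lba_map = {}      # logical LBA -> ("cache" | "zone", physical slot)
        self.cache = []        # un-shingled staging area (like the ~20GB region)
        self.cache_limit = cache_limit
        self.zone = []         # append-only shingled area

    def write(self, lba, data):
        self.cache.append((lba, data))
        self.lba_map[lba] = ("cache", len(self.cache) - 1)
        if len(self.cache) >= self.cache_limit:
            self._destage()

    def _destage(self):
        # Flush cached blocks sequentially into the shingled area; the same
        # logical LBA can end up at a different physical spot every time.
        for lba, data in self.cache:
            self.zone.append(data)
            self.lba_map[lba] = ("zone", len(self.zone) - 1)
        self.cache.clear()

drive = ToySMRDrive()
for i in range(4):
    drive.write(10_000, f"version {i}".encode())   # same logical block each time
print(drive.lba_map[10_000])   # physical home keeps moving; old copies are garbage
```

The garbage-collection idea mentioned above would then just be the drive copying still-live blocks out of mostly dead zones so those zones can be reused for fresh sequential writes.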

All that said, I could be wrong. (I thought I was wrong once, but I was wrong...)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,996
I see what you are saying and I'm curious where this LBA map is physically stored.

From what I understand, the 20GB block of data is used the way you described.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Uh, deleting the partition is, in my opinion, useless for clearing an SMR drive.

Remember, the DRIVE controls where data blocks are written. It writes any newly written data anywhere it wants, simply freeing up the old data blocks. Meaning that when I write to block 10,000 over the SATA port, it is almost certainly not going to be block 10,000 internally, and it is likely to change every time block 10,000 is written. From what I understand, we can think of it like wear leveling (it's not wear leveling, it just acts like it).

Further, due to the shingle rewrites, it's possible that static, unchanging data will be moved. That may be because a shingle group has blocks freed up at the beginning, hence the need to write into that free space so that the readily writable free space ends up at the end of the shingle group.

From what I understand, my 8TB SMR drive has a 20GB un-shingled area that it uses as a write cache. I don't know if it also uses this cache as a temporary location for blocks that are being moved, but it could, much like a ZFS SLOG.

All that said, I could be wrong. (I thought I was wrong once, but I was wrong...)
I'm under the impression that SMR drives generally stick to their LBA mappings (caches excepted), unlike SSDs, which treat LBAs as it suits the controller.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,079
It almost makes me want to get an SMR drive to test, but I don't think the drive chooses where to put data the way an SSD does.
SSDs use that technique because of wear leveling, but SMR drives don't have the same reason for deceiving the system.

 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
You have all made good points. I doubt we will get a definitive answer anytime soon.

Unless I have a software problem, I'll continue to add (and remove, when running low on space) incremental backups to my SMR drive. But if I do have to re-create my pool on that disk, I will report any speed change.
 

farmerpling2

Patron
Joined
Mar 20, 2017
Messages
224
You have all made good points. I doubt we will get a definitive answer anytime soon.

Unless I have a software problem, I'll continue to add (and remove, when running low on space) incremental backups to my SMR drive. But if I do have to re-create my pool on that disk, I will report any speed change.

I ran across an article (can't find it now, of course :confused:) about how the ZFS filesystem's copy-on-write (COW) might cause fewer rewrites. The article was from some conference, if my memory serves me right...

Ahhh, here it is:

http://storageconference.us/2014/Presentations/Novak.pdf

and here is another:

http://open-zfs.org/w/images/2/2a/Host-Aware_SMR-Tim_Feldman.pdf
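The gist of the COW argument, as a toy contrast (my own sketch, not taken from those slides):

```python
# In-place update vs. copy-on-write on a list of "blocks".
blocks = ["A0", "B0", "C0"]

# In-place: overwrite an existing slot -- on SMR this means rewriting an
# already-shingled location.
in_place = list(blocks)
in_place[1] = "B1"

# Copy-on-write: append the new version sequentially; the old "B0" just
# becomes garbage to be freed later.
cow = list(blocks) + ["B1"]

print("in-place:", in_place)
print("COW:     ", cow)
```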

Seems like some people want to use them for production, but they are swayed by the cheap price and forget the lower durability stats...

If you had them in a RAIDZ1 for backups using snapshots, one that only came online three or four times a day and was powered down (or in its lowest-power standby) the rest of the time, they would seem to be perfect for that.

Keep in mind that the durability rating from some manufacturers is much lower for the SMR drives (~500,000 - 2,500,000 hours MTBF) than for a NAS drive (~1,200,000 - 2,500,000 hours MTBF).

That is how I would use them... I would give them a good beating to make sure they work well. Do a bunch of snapshots, then try to use them to access data from some point in the past to check performance and reliability.

You could always go RAIDZ2 if you really want to use them as reliable backups...
 