SMR Hard Drives - Do you think they are proper NAS drives?

Status
Not open for further replies.

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I'm starting this thread because I'm curious and a bit concerned about this new type of hard drive for NAS use. I invite comments, good or bad; maybe my concerns are unwarranted, and I'm sure someone will say so. These drives are already on the market, so perhaps there is nothing to worry about.

So SMR technology is out and making great strides at packing data more densely on magnetic media, but it comes with some odd trade-offs that make me think we now also have to factor in how a drive will be used. The SMR recording technique is very interesting: these drives are specifically designed for write-once-read-many or archival applications (which is how they are generally labeled). If you change data frequently, these drives are not for you, in my opinion; and to be clear, "frequently" to me would include running a jail on one of these drives. For anyone who uses them, I'd love to see some testing done to see how they function in a NAS environment. They are definitely not for normal computer system use.

The way the data is recorded, or rather rewritten, is my concern, but it's an ingenious idea and very fascinating.

I have not done any research into throughput or reliability testing of these drives, so that will be interesting to read about. Seagate has produced 5TB and 8TB models that I'm aware of.

I'm curious whether anyone has used these types of drives in a FreeNAS device and how they function.

-Cheers
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Generically, for NAS, I'd say "depends on your workload and performance requirements".
Regarding FreeNAS specifically, I'd say there's potential. Since ZFS is CoW, it can significantly reduce the performance degradation inherent in data updates. As the pool approaches full, this benefit probably tends to be lost, leaving ZFS slower than traditional file systems. Here's the interesting part: if ZFS is made aware of the blockwise nature of the medium, it can intelligently choose where to place the data. This presupposes that the drive controller isn't doing the whole process in parallel, or is at least coordinating it with ZFS.
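The CoW point can be sketched with a toy model (entirely hypothetical; this is not ZFS internals): a copy-on-write store never overwrites a live block in place, so an SMR drive is mostly asked to append into free regions rather than rewrite shingled tracks.

```python
# Hypothetical minimal copy-on-write store: updates allocate a fresh
# physical block and remap the logical id; old data is never overwritten.

class CowStore:
    def __init__(self):
        self.blocks = []   # append-only "physical" blocks
        self.table = {}    # logical id -> physical block index

    def write(self, logical_id, data):
        # Always allocate a new block; the old one simply becomes free.
        self.blocks.append(data)
        self.table[logical_id] = len(self.blocks) - 1

    def read(self, logical_id):
        return self.blocks[self.table[logical_id]]

store = CowStore()
store.write("file_a", b"v1")
store.write("file_a", b"v2")       # update: new block, old one untouched
assert store.read("file_a") == b"v2"
assert store.blocks[0] == b"v1"    # original block still physically intact
```

On an SMR drive, "the old block is never rewritten in place" is exactly the property that avoids the expensive read-modify-write of overlapping tracks, at least while free space lasts.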

I think Seagate's SMR drives use SSD-like controllers that abstract all the inner workings away and present a virtual LBA space, with the actual on-disk locations tracked by the controller. This introduces a problem similar to RAID: ZFS could be intelligently managing the entire storage subsystem top to bottom, but is instead at the mercy of sub-subsystems that, given their embedded nature, are limited in processing power and available information, and whose behavior is opaque.
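A hypothetical sketch of such a translation layer (names and structure invented for illustration): the host addresses a stable logical LBA space while the controller remaps physical placement behind it, so the filesystem can neither see nor influence where data actually lands.

```python
# Toy drive-managed translation layer: the controller chooses physical
# placement; the host only ever sees logical block addresses.

class ShingleTranslationLayer:
    def __init__(self):
        self.mapping = {}        # logical LBA -> (physical slot, data)
        self.next_physical = 0   # controller's private allocation cursor

    def host_write(self, lba, data):
        # The controller picks the physical spot; the host never learns it.
        self.mapping[lba] = (self.next_physical, data)
        self.next_physical += 1

    def host_read(self, lba):
        return self.mapping[lba][1]

stl = ShingleTranslationLayer()
stl.host_write(100, b"block")
stl.host_write(100, b"block v2")   # same LBA, new physical location
assert stl.host_read(100) == b"block v2"
assert stl.mapping[100][0] == 1    # data physically moved; host unaware
```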
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
I think SMR is pretty cool but I'm still hesitant right now, maybe because it's just too new of a technology.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I think SMR is pretty cool but I'm still hesitant right now, maybe because it's just too new of a technology.
I can imagine some growing pains with the details of drive controller design, like we saw with SSDs.

Come to think of it, SMR drives sound perfect for surveillance applications.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I have a Seagate 8TB SMR drive (likely version 2), used with my FreeNAS for backing up a 4 x 4TB RAID-Z2 pool. Because my backups are continuous writes, I get about 30MB/s write performance using rsync. Read performance can be up to 150MB/s.

For drives like these with complex firmware, ZFS checksumming gives us a chance to detect firmware issues. So I run a scrub before or after each backup. No problems so far.


My thought on the interaction between ZFS and SMR drives is that TRIM / DISCARD should be used. That way the drive's firmware can run its optimization routines and keep all the used space together. Then, when ZFS's copy-on-write nature writes new data, it would go to a free SMR track whose next (overlapping) track is also free, so there would be no need to back up that second track before writing the first.

Using that idea, we might get much higher write performance. But the drive's firmware currently does not support TRIM / DISCARD.
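The benefit described above can be put into a toy cost model (entirely hypothetical; real SMR firmware is far more involved): without TRIM, the firmware must assume every previously written track still holds live data, so writing a shingled track also means reading back and rewriting the overlapped neighbor.

```python
# Toy model: writing track i overlaps track i+1. If the firmware cannot
# tell that i+1 was freed by the filesystem, it must preserve it
# (1 read-back + 1 rewrite) before writing track i.

def write_cost(track, written, trimmed):
    """Track operations needed to write `track`."""
    nxt = track + 1
    must_preserve = nxt in written and nxt not in trimmed
    return 1 + (2 if must_preserve else 0)   # write, plus backup if needed

written = {0, 1, 2}   # tracks the drive has seen writes to
# Without TRIM, stale data on track 1 still looks live to the firmware:
assert write_cost(0, written, trimmed=set()) == 3
# With TRIM, tracks 1 and 2 are known free, so the overlap costs nothing:
assert write_cost(0, written, trimmed={1, 2}) == 1
```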

Note that from what I've read, this 8TB SMR drive has a 20GB write cache of non-SMR disk space. So for burst writes you will probably see lower latency. But once that cache is full, it has to be at least partially flushed before new writes are allowed.
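A rough back-of-envelope model (illustrative numbers only, loosely based on the rates reported in this thread) shows how a non-SMR cache like that shapes average write throughput:

```python
# Toy throughput model of a drive with a ~20 GB non-shingled write cache:
# writes run at the burst rate until the cache fills, then fall to the
# sustained shingled rate. All rates are assumptions, not measurements.

def avg_write_rate(total_gb, cache_gb=20, burst=150, sustained=30):
    """Average MB/s for one large sequential write (sizes GB, rates MB/s)."""
    fast = min(total_gb, cache_gb)       # portion absorbed by the cache
    slow = total_gb - fast               # portion written at shingled speed
    seconds = fast * 1024 / burst + slow * 1024 / sustained
    return total_gb * 1024 / seconds

# A 10 GB burst fits in the cache and sees the full burst rate:
assert round(avg_write_rate(10)) == 150
# A 200 GB backup is dominated by the sustained shingled rate:
assert round(avg_write_rate(200)) == 33
```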
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
Come to think of it, SMR drives sound perfect for surveillance applications.
No, for surveillance applications, my personal experience says these drives don't have enough continuous write speed. (Unless you use fewer sources / cameras per drive...)

Further, the irregular write performance means the source needs a buffer to compensate for delays in writing. Other drives designed for video applications have specific firmware changes to avoid long write delays.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
First, TRIM/DISCARD isn't supported on the drive, so that doesn't even warrant further discussion.

Second, someone in the forums let me do some tests on his system, which used 8TB SMR drives. The entire zpool (10 disks total?) consisted of Seagate 8TB drives. Performance of the zpool for reads and writes was very good... with one exception.

Each disk seems to have a 20GB 'scratch space' on the disk. Writes go to the scratch space immediately, and you get throughput around what you'd normally expect from spinning rust. However, once you fill that 20GB scratch space, performance takes a nosedive. Write speeds become very bursty (a few seconds of high throughput, then many seconds of almost none, strictly on the write side; reads were slower because they were competing with the writes for disk resources). Writes were only 30-40MB/sec per disk. If you have 10 disks in a RAIDZ2, you're still really looking at around 300MB/sec even after you run out of scratch space.
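For what it's worth, that figure is consistent with simple arithmetic, assuming roughly eight data disks' worth of bandwidth in a 10-disk RAIDZ2 (a simplification; real RAIDZ scaling is messier):

```python
# Back-of-envelope check of the ~300MB/sec figure (assumed numbers).

data_disks = 10 - 2   # RAIDZ2: two disks' worth of parity per stripe
low, high = 30, 40    # MB/s per disk once the scratch space is full

print(data_disks * low, data_disks * high)   # -> 240 320
```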

For backups, they should work exceptionally well.

For users that have large movie collections where they rip their DVD/Blu-Ray disks and keep them on the server for a very long time, they should work very well.

For users that want to run VMs, have very high performance needs or are involved in workloads that require high iops, I wouldn't recommend them.

I, personally, would never use these disks in a hardware RAID. If you get something like corruption of the scratch space, it may go totally unnoticed, because hardware RAID isn't as robust as ZFS. ZFS, on the other hand, will start identifying corruption all over the disk. I would expect SMART monitoring (and possibly SMART testing) to warn you, but since I don't have solid evidence of that at present, I wouldn't rely on it as much as I rely on ZFS to find corruption.
 