Seagate 8TB Archive Drive in FreeNAS?

Status
Not open for further replies.

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
If these drives are intended for archiving data but are not recommended (not even by Seagate) for attachment to a RAID controller, then isn't the only way to ensure the safety of the archived data to have two or more such drives to which the files are copied separately? (And it just occurred to me: with drives that are mirrored -- whether using FreeNAS or hardware RAID -- are there periodic checks for consistency between the two or more sets of data?)

Well, with ZFS, the whole point is that integrity is guaranteed. If you replicate it, or even if you just copy it over to a second ZFS pool (as long as the network ensures data integrity), it'll be independently guaranteed not to degrade. Performing a regular comparison between two pools is a bit redundant, but it can be done "by hand" (meaning, not handled by ZFS).
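If you do want to compare two pools by hand, a checksum-based rsync dry run is one way to go about it (the mountpoints here are just examples):

Code:
# Compare two pools by file content without copying anything:
# -r recurse, -c compare checksums instead of size/mtime,
# -n dry run, -i itemize any differences found
rsync -rcni /mnt/tank/ /mnt/backup/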
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
I have a Seagate 8TB SMR drive (v2, I'm guessing) in a FreeNAS environment. Just got it a few days ago.
My goal was backups: a separate, single-disk ZFS pool.
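(The pool itself is nothing special, just a plain single-disk pool, created roughly as below; the pool name is illustrative, ada7 is where my drive showed up, and FreeNAS users would normally do this through the GUI.)

Code:
# Create a single-disk ZFS pool named "backup" on the 8TB drive
zpool create backup /dev/ada7
zpool status backup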

Because it's not a consistent performer (it has to clean up internally), in my opinion it's not a good fit for any form of RAID-1 or higher. I guess concats, stripes and JBODs might work out okay.

Due to its unusual design, I really wanted a checksumming file system like ZFS. That way I have a chance of catching bad data caused by firmware errors. So I plan on performing scrubs before every backup.
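Roughly like this, assuming the pool is named "backup":

Code:
# Kick off a scrub of the backup pool, then check on its progress
zpool scrub backup
zpool status -v backup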

My discussion was started here:

https://forums.freenas.org/index.php?threads/seagate-archive-8tb.28416/
 

sweeze

Dabbler
Joined
Sep 23, 2013
Messages
24
Please see Robert Novak's slide deck on SMR devices with ZFS. There are other interesting presentations as well, but Novak's stands out (to me) in that it describes several use cases where ZFS may be the only file system that can leverage these devices and deliver the highest performance (with some work).

Seagate has been pretty visible and engaged at ZFS events. Seagate's Tim Feldman has an interesting deck on Host-Aware SMR as well; he presented it at the OpenZFS dev summit.
 

sweeze

Dabbler
Joined
Sep 23, 2013
Messages
24
Sure, the consensus seems to be that while we're still in drive-controlled logic space, it's a crapshoot for the brave and those who don't mind dramatically wavering performance. Host-controlled and integrated into ZFS is really what we're going to want before these can be utilized properly. (Not that I wouldn't consider laying down the cash for one as a Time Machine volume or something; they're so inexpensive it seems silly *not* to see if it works out acceptably. Faster than Google Nearline is the only yardstick it has to beat, really.)
 

centuriond

Dabbler
Joined
Nov 27, 2012
Messages
12
I was thinking more of RAID rebuild times. In the article I posted, the rebuild for a RAID 1 is something like 57 hours for the 8TB SMR drives vs. 20 hours for the non-SMR (PMR?) drives (8 TB in 57 hours works out to an average of only about 39 MB/s).
I suppose since the reads are essentially "free", the only thing that might worry me is the heavier load on the other drive(s). It's certainly worth looking into, since I'd like to halve the number of drives in my system to cut heat and power (and increase performance by moving to a RAID 10-like setup).
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
Thanks for the links, fascinating reading.

I hope I get reliable firmware updates for my SMR drive that improve performance.
 

sweeze

Dabbler
Joined
Sep 23, 2013
Messages
24
I don't know that drive firmware is going to bring you much performance enhancement. I'd be waiting for host-based improvements rather than drive-based ones: the real hold-up is that these disks are treated like any other, when they require more smarts in how the block device itself is handled. I don't know that firmware will ever address that satisfactorily, since so much depends on how the filesystem on the device is managed.
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
I don't know that drive firmware is going to bring you much performance enhancement. I'd be waiting for host-based improvements rather than drive-based ones: the real hold-up is that these disks are treated like any other, when they require more smarts in how the block device itself is handled. I don't know that firmware will ever address that satisfactorily, since so much depends on how the filesystem on the device is managed.

I wouldn't count on host-based improvements either. Unless spinny manufacturers standardize on a common shingled-drive layout, you're requiring your host to know intimate details about the specific hard drive being used. Better shingled-spinny firmware, taking cues from years of development in SSD firmware, plus some beefy caches, possibly even flaches[1], could help improve random write performance regardless of the host/filesystem on top. Most reads will end up being random IO on the platters, but that's not nearly as bad as random writes. In short, I don't think the situation is dire, but this is a relatively new technology. Give it a couple of years and I think you'll see the performance much improved.

[1] I have just coined this term, "flache", to represent a "flash-based cache". Trademark pending ;)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
If the firmware supported TRIM / DISCARD, then ZFS (or any other TRIM / DISCARD aware FS) could optimize writes. Especially ZFS and BTRFS, because their COW (Copy On Write) design frees old blocks rather than overwriting them in place.

That would mean the drive could defragment the data actually in use, consolidating free blocks into free shingle tracks or even entire zones. There would then be no need to back up the next track in a zone when writing the current track, if the next track is known to be free.

Hopefully I described that well enough.
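As a point of reference, FreeBSD's ZFS exposes its TRIM support through sysctls, so you can at least see whether TRIM is on and whether the drive accepts the requests (sysctl names as of FreeBSD 9.x/10.x; a drive whose firmware ignores TRIM just racks up "unsupported" counts):

Code:
# Is ZFS TRIM enabled? (1 = on)
sysctl vfs.zfs.trim.enabled
# TRIM statistics, including requests the drive rejected as unsupported
sysctl kstat.zfs.misc.zio_trim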

Note that Seagate's SMR drives already have a lookup table (or something similar) for the data blocks. Meaning that if I write to block 10,000, there is no certainty it's actually stored at block 10,000 on the drive. In fact, all drives (spinning or flash) have something like this for sparing out bad blocks; SMR takes it to a whole new level.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
So far, my Seagate 8TB SMR drive is working out great for backups. All of these worked without errors:
  • Full backup of my FreeNAS data pool (about 1TB at present)
  • Repeat full backup after dropping the pool version to one common to FreeNAS & ZFS on Linux
  • Incremental using rsync
  • Another incremental using rsync
I performed a ZFS scrub before, and possibly after, each backup; no errors.

Please note that speed was variable. One time rsync told me 60 MB/s; another time it was 30 MB/s. I probably should have kept track, and possibly logged the backups (like I do for my normal backups). But speed was never a real consideration. I just wanted a single device for backing up my 4 x 4TB RAID-Z2 pool. Later I will script (and log) the FreeNAS -> 8TB backups, roughly as sketched below.
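A first sketch of that script might look like this (pool and path names are examples, and a real version would want proper error handling):

Code:
#!/bin/sh
# Scrub the backup pool, wait for the scrub to finish, then rsync and log.
LOG=/var/log/backup-8tb.log
{
    echo "=== backup started: $(date) ==="
    zpool scrub backup
    # zpool scrub returns immediately, so poll until the scrub completes
    while zpool status backup | grep -q "scrub in progress"; do
        sleep 60
    done
    zpool status backup
    rsync -a --stats /mnt/tank/ /mnt/backup/
    echo "=== backup finished: $(date) ==="
} >> "$LOG" 2>&1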

Read speed is much better, and can be as high as 150 MB/s (if I recall correctly).

But, back to the original question:

Seagate 8TB Archive Drive in FreeNAS?

My limited experience says no, except for a use case like mine. I would not think RAID-1 or higher is a good fit for this SMR drive. (Who knows what they'll do in the future?)

Last, my drive does not support time-limited error recovery (SCT ERC):

Code:
# smartctl -l scterc /dev/ada7
smartctl 6.3 2014-07-26 r3976 [FreeBSD 9.3-RELEASE-p13 amd64] (local build)
Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org

SCT Commands not supported
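(For comparison, on a drive that does support SCT ERC, the usual 7-second limits can be set and read back like this; the device name is illustrative, and the setting generally does not survive a power cycle.)

Code:
# Set read/write error recovery limits to 7.0 seconds (units of 100 ms)
smartctl -l scterc,70,70 /dev/ada0
# Read back the current setting
smartctl -l scterc /dev/ada0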
 

wpirobotbuilder

Dabbler
Joined
Apr 21, 2014
Messages
16
There doesn't need to be any special support on the ZFS side -- the drive manages access to the platters, not the host (like all previous SMR drives).

As far as usage, it's designed for a very specific use case, so don't use this as a VM datastore. Good for low-frequency access, like backups or (non-critical) media libraries.

Probably should only use it in a mirrored configuration.
 

marbus90

Guru
Joined
Aug 2, 2014
Messages
818
Probably should only use it in a mirrored configuration.
BS.

The URE rate is still 10^14 -> a read error at least every 12TB or so (10^14 bits is about 12.5 TB, so reading a full 8TB drive works out to roughly an 8/12.5 ≈ 65% chance of hitting at least one URE at that rate). The chances of a mirror rebuild failing are therefore ~65%. I'd only use it in RAIDZ2 or RAIDZ3, for redundancy even during a rebuild. My design would be 4x 11-disk Z3 + 1 spare in a 45drives.com chassis -> >200TiB usable with 2 parity disks per vdev (one parity disk is needed just to be able to read all that data correctly).

7200rpm desktop Seagates also seem to be holding up better in those chassis than WD Red 5400rpm NAS drives.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
There doesn't need to be any special support on the ZFS side -- the drive manages access to the platters, not the host (like all previous SMR drives).

As far as usage, it's designed for a very specific use case, so don't use this as a VM datastore. Good for low-frequency access, like backups or (non-critical) media libraries.

Probably should only use it in a mirrored configuration.

Correct. This drive does not need any host support. ZFS in FreeNAS 9.3 or ZFS on Linux
0.6.3 works just fine with it.

My comment about not using it in a RAID was for a general-purpose NAS: write performance would be very irregular. Read performance might be just fine for something like serving multimedia.
 

Lix

Dabbler
Joined
Apr 20, 2015
Messages
27
As this drive does not support time-limited error recovery, how well will it behave in a mirror configuration? (Looking for an educated guess.) I see that a lot of people have WD Green drives in their pools; can anyone weigh in on how those behave in a similar setup?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,175
As this drive does not support time-limited error recovery, how well will it behave in a mirror configuration? (Looking for an educated guess.) I see that a lot of people have WD Green drives in their pools; can anyone weigh in on how those behave in a similar setup?
They'll work fine until they have a read error. At that point, the system will grind to a halt until either ZFS gets tired or the drive gives up. It can get messy if it's trying to read several bad sectors (not that it should ever come to that).
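If you're curious how long "the drive gives up" can take, FreeBSD's CAM layer exposes its per-command timeout via sysctl (tunable name as of FreeBSD 9.x; note the drive may keep retrying internally far longer than any single command):

Code:
# Default ATA command timeout, in seconds, in the ada(4) driver
sysctl kern.cam.ada.default_timeout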
 

Lix

Dabbler
Joined
Apr 20, 2015
Messages
27
Would a 3-way mirror help? Creating a pool per HDD would waste RAM-based cache, I guess, and the fact that you lose redundancy is sad as well.
 

robo989

Cadet
Joined
May 8, 2015
Messages
6
Rebuild times on these have been vastly exaggerated. The StorageReview numbers are off; rebuilds run at 15-20MB/s. I verified this myself with a full RAIDZ2 rebuild of an 8-drive array.
 

Lix

Dabbler
Joined
Apr 20, 2015
Messages
27
Thanks for your reply. Would you buy them again, if you had the chance to go back in time? :)

Have you had any issues?
 