Seagate Archive disks in mirror?

Status
Not open for further replies.

lukeren

Explorer
Joined
Apr 13, 2017
Messages
62
I'm currently using a 3TB SAS disk as an archive for files I seldom access and only read from.
I've tried looking into Seagate Archive disks with FreeNAS/ZFS, but I haven't been able to find much recent (this year) information.

I'm aware of SMR and the missing TLER, but what I'm wondering is whether this is an actual issue with ZFS?
My plan is to buy two 8TB drives, create a new pool with a single mirror vdev, and then dump the data from the 3TB drive onto it.

Write performance is not really a concern for this pool, as data will be dumped on it once and then read once in a blue moon.
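Concretely, I'm thinking of something along these lines (device names are just placeholders, and FreeNAS would normally do this through the GUI):

    # new pool backed by a single 2-way mirror vdev
    zpool create archive mirror /dev/da1 /dev/da2
    zfs create archive/media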
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
ZFS won't mind, but you'll still have the obvious limitations inherent to the drives.
 

lukeren

Explorer
Joined
Apr 13, 2017
Messages
62
Alright, guess there's nothing to do but try it then :)
Thanks!
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I still use my single Seagate 8TB Archive drive for backups. I've had it for 2 years or so, and it was still working as of 2 weeks ago. It's set up as a single-disk ZFS pool, since I wanted to make sure any firmware mis-reads or mis-writes would be caught (none so far). (The firmware in these drives is WAY more complex than in regular drives.)

In my humble opinion, a 2 way ZFS Mirror of these drives should work just fine.

With the caveat of less-than-ideal write and read speeds. Remember, a simple write is not guaranteed to be contiguous on these drives, regardless of what ZFS intended. So even a simple, apparently contiguous read may cause head seeks, potentially lots of them. That's why I throw in less-than-ideal read speeds.
 

lukeren

Explorer
Joined
Apr 13, 2017
Messages
62
I'll receive the drives tomorrow; I hope it won't be an issue. Data on these will very rarely be deleted, so I hope that will keep the seeks to a minimum.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
Data on these will very rarely be deleted, so I hope that will keep the seeks to a minimum.
ZFS does copy-on-write, which includes directory updates, so there will be holes in the written data.

But, if it's just archive data, then I would not worry about it. Even if they were used to store media files for playback, it should be fine.
 

lukeren

Explorer
Joined
Apr 13, 2017
Messages
62
It's archived media, actually. I'll start burning in the drives tomorrow; if I remember, I'll update this post with some experiences after I've moved my data. No promises though :)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It's archived media, actually. I'll start burning in the drives tomorrow; if I remember, I'll update this post with some experiences after I've moved my data. No promises though :)
Don't forget to set up SMART disk tests and ZFS scrubs. Even with low-use data, it's better to know when a disk actually dies than to find out when you want or need the data.
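From the command line that's roughly the following (device and pool names are examples; FreeNAS can also schedule both from its GUI):

    # periodic disk self-tests
    smartctl -t short /dev/da1   # quick test, run frequently
    smartctl -t long /dev/da1    # full surface read, run weekly or so
    # re-read and verify every checksummed block in the pool
    zpool scrub archive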
 

lukeren

Explorer
Joined
Apr 13, 2017
Messages
62
Thanks. I will.
I'm running some SMART tests right now, which I would normally follow with a run of badblocks. I'm just not sure that's a good idea on an SMR disk. Any input on that?
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Thanks. I will.
I'm running some SMART tests right now, which I would normally follow with a run of badblocks. I'm just not sure that's a good idea on an SMR disk. Any input on that?
You may want to put a time limit on the badblocks test(s), perhaps 24 hours. If it finishes sooner, great. If not, and there are no bad blocks so far, maybe stop it.
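Something along these lines would do it (a sketch, assuming the timeout utility is available alongside badblocks; the device name is an example):

    # destructive write-mode test with progress output, capped at 24 hours
    timeout 24h badblocks -b 4096 -ws /dev/da1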

My thinking is that bad-block testing may slow down after the disk has been completely written once.

Other than that, I would not worry about stressing the drive. My own drive gets pretty stressed during backups. First, I run a scrub, which reads every used block. Then the actual backup writes all new data, and copy-on-writes any directory entries that changed. So, pretty intensive drive activity. That's why my external eSATA enclosure has a fan, to keep the drive from overheating.

Note that my NAS has less than 3TB used, so copies to an 8TB backup disk seem a bit silly at present. I use snapshots on my backup drive, so I have multiple backups on it. I only started that methodology perhaps 8 months ago, so I have maybe 5 or 6 full backups through snapshots (which share any unchanged files). At present, I probably have 5.5TB in use on my Seagate 8TB SMR / Archive disk. (Oh, and I don't back up my NAS's snapshots, which give the history of my client backups...)
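The shape of each backup run is roughly the following (a sketch only; the pool and path names are hypothetical):

    # refresh the copy on the backup pool, then snapshot the backup pool,
    # so each run is kept and unchanged files are shared between snapshots
    rsync -a --delete /mnt/tank/ /mnt/Backup/tank/
    zfs snapshot Backup/tank@backup-2017-06-01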
 

lukeren

Explorer
Joined
Apr 13, 2017
Messages
62
It's been running for 9 hours now and it's 68% done with the 0xaa pattern.
I'm with you on the idea of stopping it after a while, but I think I'll let it run the full set of patterns, since the last one is 0x00. I'm hoping that will "reset" the drive to "unused", but SMR being what it is, who knows how it behaves.
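For reference, badblocks' destructive test writes 0xaa, 0x55, 0xff and finally 0x00; a single pattern can also be selected if only the zeroing pass is wanted (device name is an example):

    # run only a zeroing pass
    badblocks -b 4096 -w -t 0 -s /dev/da1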
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
It's been running for 9 hours now and it's 68% done with the 0xaa pattern.
I'm with you on the idea of stopping it after a while, but I think I'll let it run the full set of patterns, since the last one is 0x00.
...
Sounds good.
...
I'm hoping that will "reset" the drive to "unused", but SMR being what it is, who knows how it behaves.
I wish these SMR / Archive disks supported TRIM / Discard. That way the OS could tell the drive when NOT to back up adjacent tracks when overwriting.
Plus, I wish they had some kind of optimization tool, one that closed all the holes so that all the free space was contiguous.
 

lukeren

Explorer
Joined
Apr 13, 2017
Messages
62
Same here. If you could initiate some garbage collection or something, that would be great, but I guess there's a reason they're so dirt cheap :)
 

droeders

Contributor
Joined
Mar 21, 2016
Messages
179
My own drive gets pretty stressed during backups. First, I run a scrub, which reads every used block.

A bit off-topic, but why wouldn't you scrub the pool after the backup instead of before? You could be scrubbing data that will no longer exist after the backup, and not scrubbing data that will be there after.

I have a similar strategy, but I back up, run a SMART long test, and then scrub - in that order.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
Same here. If you could initiate some garbage collection or something that would be great, but I guess there's a reason they're so dirt cheap :)
Hi Lukeren,

I am using the Seagate Archive drives in the following configuration:
3 disks in RAIDZ1
I have 2 sets, which I use for offline archiving. One is on a backup computer and the other is in a safe deposit box.
I swap the sets every few weeks.

I used to do recursive replication over the LAN, but found that the drives took much longer to replicate to: with recursive mode (-R), any snapshots destroyed on the source are also destroyed on the backup, and the SMR limitations start to show.
Because of that, I created a script which does incremental replication without deleting old snapshots.
That way replication completes much more quickly.
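The core of it is just a plain incremental send, something like this (the dataset names and dates are made up):

    # no -R, so snapshots destroyed on the source stay intact on the backup
    zfs snapshot tank/media@2017-06-10
    zfs send -i tank/media@2017-06-03 tank/media@2017-06-10 | zfs receive backup/media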
Overall, I find Archive drives to be very good.
The sets are also encrypted, and they take a performance hit because my backup computer (a Core Duo) doesn't have AES-NI, so the CPU does all the work.
I have run replication and scrubs on a non-encrypted array, and the drives could easily be maxed out. I think throughput ranges from 180MB/s down to 100MB/s per drive over the entire capacity range.
SMR aside, they seem to be more responsive and have higher throughput than my 4TB Reds, though they don't have quite the same structure.
 

lukeren

Explorer
Joined
Apr 13, 2017
Messages
62
Hi Lukeren,

I am using the Seagate Archive drives in the following configuration:
3 disks in RAIDZ1
I have 2 sets, which I use for offline archiving. One is on a backup computer and the other is in a safe deposit box.
I swap the sets every few weeks.

I used to do recursive replication over the LAN, but found that the drives took much longer to replicate to: with recursive mode (-R), any snapshots destroyed on the source are also destroyed on the backup, and the SMR limitations start to show.
Because of that, I created a script which does incremental replication without deleting old snapshots.
That way replication completes much more quickly.
Overall, I find Archive drives to be very good.
The sets are also encrypted, and they take a performance hit because my backup computer (a Core Duo) doesn't have AES-NI, so the CPU does all the work.
I have run replication and scrubs on a non-encrypted array, and the drives could easily be maxed out. I think throughput ranges from 180MB/s down to 100MB/s per drive over the entire capacity range.
SMR aside, they seem to be more responsive and have higher throughput than my 4TB Reds, though they don't have quite the same structure.

Thanks for the info! Sounds good.
I plan to just dump media on these for archival purposes, so my hope is that I won't run into any issues regarding SMR.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
A bit off-topic, but why wouldn't you scrub the pool after the backup instead of before? You could be scrubbing data that will no longer exist after the backup, and not scrubbing data that will be there after.

I have a similar strategy, but I back up, run a SMART long test, and then scrub - in that order.
Uh, why would I want to run backups to a disk that's lost data?

After the scrub completes, I check it, and if there are errors, I abort before the actual backup starts.

And yes, I see that a scrub afterwards would be nice. My log file does capture the output of "zpool status Backup" after the backup, which helps if something went wrong during the backup. (Metadata has 2 copies, even on a single-disk pool, so it's recoverable. And critical metadata has 3 copies.)
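In outline, the flow looks something like this (a sketch; the pool name and log path are examples):

    # scrub first and wait for it to finish
    zpool scrub Backup
    while zpool status Backup | grep -q "scrub in progress"; do
        sleep 300
    done
    # abort before the backup if the pool isn't healthy
    zpool status -x Backup | grep -q "is healthy" || exit 1
    # ... run the actual backup here ...
    zpool status Backup >> /var/log/backup.log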
 

droeders

Contributor
Joined
Mar 21, 2016
Messages
179
Uh, why would I want to run backups to a disk that's lost data?

You wouldn't. However, I'd much rather catch any errors after my backup completes than have a false sense of security when I take it off-site.

Additionally, if I scrub it last, I will catch any original bit-rot/lost data, plus it will scrub any new data I backed up.

Different philosophies I guess...
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
The important thing is that backup disks need to be scrubbed (and SMARTed) regularly.

Are you running SMART tests on your drives? :)
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
...
Additionally, if I scrub it last, I will catch any original bit-rot/lost data ...
Actually, any data that was deleted or replaced with newer data won't be scrubbed, meaning you could have silent corruption that goes unnoticed. Obviously, if it's deleted or replaced, it was not worth anything. EXCEPT that I will have absolute proof that something is happening to my disk. I'll know and you won't :).
 