2.5" SMR hard disk for weekly backups?

ibds

Dabbler
Joined
Nov 17, 2019
Messages
14
Hi,

I have a question regarding the use of a single 2.5" SMR hard disk for a weekly backup.

I currently have several backups (cloud, plus internal and external replication), as well as a weekly backup where I mount a hard drive locally as a single-drive pool, remove the drive once the backup has completed, and store it off-site (I just like to play it safe :wink: ).
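Roughly, the weekly run boils down to something like this (just a sketch to show what I mean; the pool name "backup1", the device node and the dataset/snapshot names are placeholders):

# one-time: create a single-disk pool on the removable drive (placeholder device)
zpool create backup1 /dev/ada4

# weekly: import the pool, replicate the latest snapshot, then export and pull the drive
zpool import backup1
zfs send tank/data@weekly | zfs recv -F backup1/data
zpool export backup1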

So far I have been using 3.5" CMR disks (WDC WD40EFRX and WD80EFAX). For various reasons I am considering using 2.5" hard disks instead (e.g. Seagate Barracuda ST5000LM000). I need at least 4TB, and the problem is that 2.5" disks of that capacity are always SMR.

I am aware of the problems with ZFS and SMR disks, but I wonder whether this in principle only applies to pools consisting of several disks, or also to a pool consisting of a single disk?

No resilvering can take place, but the disk is still heavily loaded during the initial write and possibly also when later backups are created.

Is it possible to work around the problem by limiting the write rate, or by some other means? Since the backup disk usually remains in the server until the next day, it is not a problem if the backup takes more time.
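What I had in mind with "limiting the write rate" is something along these lines (only a sketch; it assumes the replication can be run as a plain send/receive pipe, that pv is installed, and the names and the 30 MB/s figure are placeholders):

# throttle the stream so the SMR drive's CMR cache zone gets time to drain
zfs send tank/data@weekly | pv -L 30M | zfs recv -F backup1/data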

I would be grateful for an answer.
Dieter
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
I suppose that using SMR to host a ZFS pool is always bound to end badly…
It might work if the hard drive is simply used to hold flattened ZFS streams as backup files (zfs send […] > /mnt/MY_SMR_DISK/backup_YYMMDD).
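Something like this, as a sketch (it assumes the SMR disk carries a plain non-ZFS filesystem mounted at /mnt/MY_SMR_DISK, and the pool/dataset names are placeholders):

# dump the snapshot as a flat stream file; ZFS never manages the SMR disk itself
zfs send -R tank@weekly > /mnt/MY_SMR_DISK/backup_YYMMDD.zfs

# restore later by feeding the file back into a receive
zfs recv -F tank/restore < /mnt/MY_SMR_DISK/backup_YYMMDD.zfs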
 

ibds

Dabbler
Joined
Nov 17, 2019
Messages
14
I actually forgot to mention: the backup is a replication task.
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Surely if you are throwing a lot of data at an SMR disk, you will outrun the cache, at which point the drive will likely have to start reading sectors in order to rewrite nearby sectors.
The OS doesn't matter: once you outrun an SMR drive, performance drops off a cliff.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
How would SMR and ZFS play nicely together? ...

Make a new pool, write little bits of data to it at a time, leaving plenty of time for the cache to flush in between (maybe something like copying a few photos or editing a few documents would seem to work fine, for a while at least).

How is it a problem when ZFS tries to work with SMR? ...

Run a job that involves writing more data than the cache can hold (maybe this starts to sound like the kind of thing a backup job might resemble)...

Then, when the cache is full, your writes start to time out and eventually the drive gets marked as non-responsive and ZFS kicks it out of the pool... quite a problem if it's the only drive in that pool.

Conclusion:

SMR disks are generally not great for ZFS, even though they may seem to work under some conditions/workloads.

SMR disks are specifically not great for (full) backups on ZFS.
 

ibds

Dabbler
Joined
Nov 17, 2019
Messages
14
Thanks for the feedback. I will stick with the 3.5" CMR disks then.

Does anyone actually have experience or reports on what data rate an SMR drive drops to once the cache is full? I can only find very vague information about this, ranging from 100 MB/s down to less than 1 MB/s. It's probably not possible to say exactly, because it depends on how many shingles (adjacent tracks) have to be rewritten, right?
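If anyone wants to measure it, I imagine something like this while a big replication is running would show the point where the drive's CMR cache zone runs out (just a sketch; "backup1" is a placeholder pool name):

# report throughput for the backup pool every 10 seconds; a sudden drop marks
# the point where the on-disk cache is exhausted and shingle rewrites begin
zpool iostat backup1 10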

If someone knows more about this, I would be glad to hear it.
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
I do use a Seagate 8TB Archive SMR disk (3.5") as one of my backup disks. It is formatted as a single-disk ZFS pool. In general, it works fine for my backup application, as I am not time-constrained in how long the backup takes.

This particular drive does not drop out of its pool during the minute or more when it becomes unavailable due to shingle rewrites. Backups to this disk do take much longer than to my 12TB CMR backup disk.

But, other than the above, I can't give you many details in regards to speed.



On the other hand, Western Digital's Red SMR drives DO exhibit a failure condition with ZFS. It's fatal and, as best I can tell, is really a firmware bug.

This is one of the things that raised serious questions and issues around WD's switch of the Red line from CMR to SMR.

Basically, the WD Red SMR drives return "sector not found" when reading a sector that has not been written. It appears that ZFS will bundle two or more reads together even if they are separated by unwritten sectors, so that a single read command can be sent to the drive, reducing overhead.

But it's a bug in the firmware, in my opinion. Any CMR drive will let you read blocks that have not been written. If the SMR drive has not allocated space on the shingles for a block, it should simply return zeros (or random data), not a "sector not found" error.
 