Building and running with degraded mirrors

Status
Not open for further replies.

johnnychicago

Dabbler
Joined
Mar 3, 2016
Messages
37
Hello --

This is not something I'd do for my main FreeNAS, and not for the backup system either (way too paranoid to do so), but for a convenience backup-of-backup in a third location. It's mostly about the ability to zfs send/receive data to a third location and have it available there without having to think much, except for the installation.

So I have a mainboard with 6 SATA ports and around 8 or 10 drives lying around, most of them of the 2 or 3 TB variety. I cannot combine these into a pool big enough to be of use.

I could do so if I were to stripe them. I realize the risks, and I am happy with them. But the inconvenience I see is that I will never be able to swap a disk out of the pool. I'd keep an eye on the SMART data, but would have to rebuild the pool if I decided to retire a potentially failing drive.

Hence the idea of creating a pool out of vdevs that are degraded mirrors. I would stick 5 drives into the box, create 5 degraded mirrors and use it that way. Should I decide that I want to remove a drive, I'll hang another drive on the sixth SATA port, attach it to the corresponding vdev, wait for the resilvering, then remove the original drive and continue running degraded again.
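
Roughly what I have in mind for each vdev (a sketch only; pool and device names are placeholders, and the sparse file stands in for the missing half of the mirror):

truncate -s 3T /tmp/sparse0                      # sparse stand-in for the absent second disk
zpool create tank mirror /dev/ada0 /tmp/sparse0  # first degraded-mirror vdev (zpool add for the rest)
zpool offline tank /tmp/sparse0                  # the vdev now runs as a degraded mirror
rm /tmp/sparse0                                  # the stand-in file is no longer needed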

Has anybody tried this? Been happy with it? Are there obvious or non-obvious things I should be aware of before I set this up?
 

snaptec

Guru
Joined
Nov 30, 2015
Messages
502
If you have striped single disks, you can always attach a mirror to a single disk.
If you see that a drive could fail, you can attach a mirror (even through the GUI) and after that detach the first drive.
No need to run in a degraded state from the beginning.
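
For example, something like this (a sketch; pool and device names are placeholders):

zpool attach tank ada0 ada5    # turn the single-disk vdev ada0 into a mirror with ada5
zpool status tank              # watch until the resilver completes
zpool detach tank ada0         # then drop the failing disk from the mirror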



Sent from iPhone using Tapatalk
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
I second that you can attach a mirror to a failing, (but not failed), striped drive.

Further, if I understand ZFS replacement correctly, a single command
would do what you want:

zpool replace POOL CUR_DISK NEW_DISK

Once the replacement is complete, the failing disk, (CUR_DISK above),
is removed from the pool automatically.
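
For example, with placeholder names, and assuming the new disk hangs on the free sixth port:

zpool replace tank ada1 ada5   # resilvers onto ada5; ada1 is detached when done
zpool status tank              # shows the resilver progress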

Please note that any lost files would have to be manually restored. ZFS does
have 2 copies for metadata, (like directory information), and 3 copies for
critical metadata. And ZFS will guarantee that those metadata copies go to 2,
(or 3), different vDevs, (in your case, disks), IF you build the pool and populate
the data with all the drives from the beginning.

So as long as a disk does not fail with too many bad blocks or completely,
you would be reasonably okay. On too many bad blocks or complete failure,
you lose the entire pool. Somewhat okay if it's a backup, and you have other
copies.

Last, if you can, leave one free disk slot to allow for easy replacement. If you
do, (like you said, use 5 SATA ports for your pool, leaving 1 SATA port free),
it's easier to replace any failing disk.
 

Jailer

Not strong, but bad
Joined
Sep 12, 2014
Messages
4,976
The only thing that would worry me about this setup is that unless you are building in a hot-swap chassis, you're going to have to power down the system to attach a replacement drive. Your plan involves doing this when a drive starts showing signs of failure. The likelihood that the drive might not power back up seems pretty high to me. But since it is just a backup of a backup, determining whether the risk is worth it is entirely up to you.
 

johnnychicago

Dabbler
Joined
Mar 3, 2016
Messages
37
Thanks a lot guys, very good tips :)

A few things I am currently wondering about - and I haven't been too explicit in my original post.

1 - If I had any corruption from a disk dying before it could be mirrored - would that iron out during scrubs and zfs replications?

2 - The point about a potentially dying drive not powering back up is a very valid one. And one I had planned to dismiss immediately, since I planned to leave one SATA channel available for the spare, and figured I could plug a drive straight in, so I could convert a stripe into a mirror when needed without having to reboot.

Then I realized this machine shouldn't run 24/7 (for cost reasons, mostly. This is not modern gear, and I read 150 watts off my meter when the hardware runs. That's 150 EUR/year).

Mmmmh.

Maybe a RAIDz1 would be the better setup here.

I'm glad I asked, and thanks for the replies :)
 

Robert Trevellyan

Pony Wrangler
Joined
May 16, 2014
Messages
3,778
the inconvenience I see is that I will never be able to change a disk out from the pool.
In case it's still unclear, if you have a spare drive port, you can replace a disk without removing the disk you're replacing. Check the documentation for replacing disks to grow a pool - the same process works for replacing a failing disk.
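
For example (a sketch with placeholder names; the autoexpand property only matters if you are growing the pool with larger disks):

zpool set autoexpand=on tank   # optional: let the pool grow once every disk in a vdev is bigger
zpool replace tank ada2 ada6   # the new disk resilvers alongside the old one
zpool status tank              # the old disk is detached automatically when the resilver finishes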
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,600
...
1 - If I had any corruption from a disk dying before it could be mirrored - would that iron out during scrubs and zfs replications?
No.

It's a purely manual operation. Specifically, zpool status -v will tell you which files have been lost. You then delete those files and run a scrub to verify you have found all the bad files, (repeat delete & re-scrub as needed). Once everything is "clean", you restore the files you lost. All manually.
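
In other words, roughly this cycle (pool name is a placeholder):

zpool status -v tank   # lists the files with unrecoverable errors
# ...delete the listed files...
zpool scrub tank       # re-check; repeat until the pool reports no errors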

Please note you may get dozens, (or hundreds, or even thousands), of "correctable" errors, even on a striped pool. This is somewhat normal, since it's metadata that has multiple copies, which are automatically repaired when detected, (either during a read or a scrub).

You can even make a ZFS dataset more resilient by having that specific dataset keep 2 copies of its actual data. NOT as good as mirroring or RAID-Zx, but good enough when most data does not need any redundancy. Data that doesn't need it can go in a different ZFS dataset without multiple copies.
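
For example (pool and dataset names are placeholders):

zfs create -o copies=2 tank/important   # blocks in this dataset are stored twice
zfs create tank/scratch                 # everything else keeps the default single copy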
...
Maybe a RAIDz1 would be the better setup here.

I'm glad I asked, and thanks for the replies :)
Yes, some protection is better than none. RAID-Z1 is not recommended for larger disks, (>1TB), due to the risk of read errors on the remaining disks during the re-silver, (aka the re-synchronization after a disk replacement).
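
For completeness, a five-disk RAID-Z1 pool would be created roughly like this (names are placeholders again):

zpool create tank raidz1 ada0 ada1 ada2 ada3 ada4   # one disk's worth of parity across the vdev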

You are welcome.
 