Convert from RAID1 to Normal?


poizun

Cadet
Joined
Nov 11, 2012
Messages
7
Good evening, all.

I have a quick question. My current setup is a 2x2TB RAID1 array and a 2x1TB RAID1 array. That's been working beautifully so far, but I've since run out of space on the 2TB volume and realized I'd benefit more from the extra space than from mirroring the 2TB disks (my important stuff actually fits on the 1TB volume).

Poor planning, I know. I pretty much want to go from the 2x2TB RAID1 to two standalone 2TB disk volumes. Is there an easy way to do that? I didn't see any way to do it without losing all the content; even though the data isn't that important, I'd rather not go through the process of copying from one drive to another, etc. Plus, I wouldn't have the space to copy everything over to the other volume anyway.

I had considered the following:
  1. Pull one of the 2TB drives and let it run in gimped mode
  2. Assuming everything runs well, format the pulled 2TB drive and copy everything over
  3. Trash the original volume and rebuild all the shares/paths using the new disks
  4. Repurpose old disk so there's 4TB of total space in the volume
I haven't done this before and I'd rather not go through it if there's an easy built-in FreeNAS way to do it without losing all of the data.
Any input is GREATLY appreciated. Thanks!
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
You can do this easily in the CLI. The command you are looking for is zpool detach:
[PANEL]zpool detach pool device

Detaches device from a mirror. The operation is refused if there are no other valid replicas of the data.[/PANEL]
You could even add the detached device as an additional vdev to the original pool -- basically converting RAID1 (mirror) to RAID0 (stripe) without losing data.
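For example, assuming the pool is called tank and the mirror members show up as ada1 and ada2 (those names are just placeholders -- pull the real pool and device names from zpool status before you touch anything), the whole thing would look roughly like this:
[PANEL]# see the current layout and the exact device names
zpool status tank

# break the mirror; the data stays on the remaining disk
zpool detach tank ada2

# optionally put the freed disk back in as a second top-level vdev (stripe)
zpool add tank ada2[/PANEL]
Keep in mind that zpool add is one-way: once the disk is in the pool as its own vdev, you can't pull it back out without destroying the pool.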
 

poizun

Cadet
Joined
Nov 11, 2012
Messages
7
You can do this easily in the CLI. The command you are looking for is zpool detach:
[PANEL]zpool detach pool device

Detaches device from a mirror. The operation is refused if there are no other valid replicas of the data.[/PANEL]
You could even add the detached device as an additional vdev to the original pool -- basically converting RAID1 (mirror) to RAID0 (stripe) without losing data.


You rock! Thanks for such a quick response! Now, this solution IS putting it into a RAID0 though, right? I mean there's no way to just have it as some extended disk? What I'm trying to get at is: if one disk fails in my RAID0, the entire volume fails, right?
 

Dusan

Guru
Joined
Jan 29, 2013
Messages
1,165
No, zpool detach will just remove the device from the mirror. You can then do whatever you want with it, including adding it back as a new vdev (RAID0). However, you can also create an entirely new pool on the detached drive.
You are right, one drive failing in RAID0 will fail the entire pool.
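If you'd rather keep the freed disk as its own separate volume, the Volume Manager in the GUI is the normal way in FreeNAS, but at the CLI it would be something along these lines (again, newvol and ada2 are just placeholder names -- check zpool status for yours):
[PANEL]# build a brand new single-disk pool on the detached drive
zpool create newvol ada2

# confirm the two pools are now independent of each other
zpool status[/PANEL]
Because newvol is a separate pool, losing that one disk only takes newvol down; the original volume keeps running. If you create it from the CLI, you may need to use Auto Import Volume in the GUI so FreeNAS picks it up.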
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
No, zpool detach will just remove the device from the mirror. You can then do whatever you want with it, including adding it back as a new vdev (RAID0). However, you can also create an entirely new pool on the detached drive.
You are right, one drive failing in RAID0 will fail the entire pool.

Just to clarify Dusan's comment. A failure of a vdev is a failure of the pool. There is no "RAID0" per se for ZFS. The closest thing is to have multiple disks that each are their own vdev. In that case, a failure of any 1 disk will fail that vdev, so the entire pool fails. And a failed pool means a complete loss of all data in the pool.
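To picture the difference: a pool built from two single-disk vdevs shows both disks at the top level in zpool status instead of nested under a mirror entry, roughly like this (pool and device names made up):
[PANEL]  pool: tank
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          ada1      ONLINE       0     0     0
          ada2      ONLINE       0     0     0[/PANEL]
In a mirror, the two disks would instead sit indented under a mirror-0 line. With the layout above, either ada1 or ada2 dying takes the whole pool, and everything in it, with it.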
 

poizun

Cadet
Joined
Nov 11, 2012
Messages
7
Just to clarify Dusan's comment. A failure of a vdev is a failure of the pool. There is no "RAID0" per se for ZFS. The closest thing is to have multiple disks that each are their own vdev. In that case, a failure of any 1 disk will fail that vdev, so the entire pool fails. And a failed pool means a complete loss of all data in the pool.

Interesting. I'd like to thank you both for your responses.

Just so I understand correctly: there's no way to 'extend' a volume with another drive without striping it, i.e. making it RAID0-ish in the sense that if a drive fails the whole pool fails (unless I add more drives for redundancy, of course). So if I want to keep failures contained while only using one extra drive, my best bet would be to add a separate pool/volume?

Thanks!
 

poizun

Cadet
Joined
Nov 11, 2012
Messages
7
Just to clarify Dusan's comment. A failure of a vdev is a failure of the pool. There is no "RAID0" per se for ZFS. The closest thing is to have multiple disks that each are their own vdev. In that case, a failure of any 1 disk will fail that vdev, so the entire pool fails. And a failed pool means a complete loss of all data in the pool.

Also, that's a solid noob guide you have in your sig. I just finished going through it and learned some more about how everything works. You can probably ignore my last post; your PPT answered my question. :)
 