HBA failure


Bhoot

Patron
Currently my MoBo has 10 SATA ports with 8 HDDs attached to it directly. The storage is approaching 80% full (according to the docs you shouldn't fill past 80% and should never go above 90%, right?), so I'm thinking about expansion. The questions that come to mind:
  • Suppose I connect (say) 24 drives with the help of RAID cards flashed to IT mode and a few expander cards. I want to get rid of the existing zpool and build 4 vdevs of 6 drives each in RAIDZ2. How can I copy the data from one pool to the other?
  • Would I need another MoBo, or a backup?
  • Can I create another zpool on the same MoBo with the existing one still attached, copy the data, destroy the first one, and then reuse the original hard disks for another vdev?
  • Say I use SFF-8087 connectors. What happens if one of the HBA cards fails? The documentation says that if one vdev fails, the entire zpool is lost.
This may sound a bit noobish, but to be honest I have no experience working with HBAs. I have seen a few videos and written guides on flashing LSI cards to IT mode, and they look quite involved in themselves. Any solutions are welcome.
 

Robert Trevellyan

Pony Wrangler
Can I create another zpool on the same MoBo with the existing one still attached, copy the data, destroy the first one, and then reuse the original hard disks for another vdev?
Yes. You can take a snapshot and use replication, which will preserve the full filesystem structure, including nested datasets, or just use rsync if you're reorganizing.
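For example, a minimal sketch of the replication route, assuming the old pool is named tank and the new one newtank (both names are just placeholders):

  # Take a recursive snapshot so nested datasets are included
  zfs snapshot -r tank@migrate
  # Send the whole tree, properties included, into the new pool
  zfs send -R tank@migrate | zfs receive -F newtank
  # Or, if you're reorganizing the layout, copy at the file level instead
  rsync -avh /mnt/tank/ /mnt/newtank/

Replication carries over snapshots and dataset properties; rsync only copies the files.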
What happens if one of the HBA cards fails? The documentation says that if one vdev fails, the entire zpool is lost.
Your pool would be unavailable until the HBA was replaced, then it should come right back.
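In practice, recovery usually amounts to a couple of commands once the replacement card is in (pool name tank is a placeholder), along the lines of:

  # List pools visible on the attached controllers
  zpool import
  # Bring the pool back online (often happens automatically at boot)
  zpool import tank
  # Confirm every vdev and disk is healthy again
  zpool status tank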
 

Bhoot

Patron

This would apply if the disks are connected to different SFF-8087s, right? Suppose I have a 4U case with 24 disks (6 SFF cables, 4 disks each). If I choose to run a vdev off a single backplane (in this example, say an 8-disk RAIDZ2 off 2 SFF-8087 backplanes), then an HBA failure would lead to vdev failure and hence zpool failure?
 

Mirfster

Doesn't know what he's talking about
This would apply if the disks are connected to different SFF-8087s, right? Suppose I have a 4U case with 24 disks (6 SFF cables, 4 disks each). If I choose to run a vdev off a single backplane (in this example, say an 8-disk RAIDZ2 off 2 SFF-8087 backplanes), then an HBA failure would lead to vdev failure and hence zpool failure?
There is a difference between drives missing as opposed to drives failing. FreeNAS is not going to wipe the drives or the pool by itself if the drive(s) are missing. Sure, it may "freak out", but once the drives are available again it will proceed as normal.

Now, if those drives had failed and were not readable (or were wiped somehow), then you've got issues... :oops:
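If the drives were only missing temporarily, then once they show up again you would typically just clear the transient errors (pool name tank is a placeholder):

  # Reset the error counters after the disks reappear
  zpool clear tank
  # Verify the pool is back to ONLINE
  zpool status tank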

TBH, in your case I would rather simply replace each of the 8 drives, one at a time, with a larger one, let it resilver, then wash and repeat until all have been replaced. The vdev will then "autoexpand" and you will have the extra space.
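One swap cycle would look roughly like this, assuming the pool is named tank and ada0 is the disk being swapped (both placeholders):

  # Let the pool grow once every disk in the vdev is bigger
  zpool set autoexpand=on tank
  # Take the old disk out of service and swap in the larger one
  zpool offline tank ada0
  # Resilver onto the new disk sitting in the same slot
  zpool replace tank ada0
  # Watch resilver progress; repeat with the next disk when it's done
  zpool status tank

After the last resilver finishes, zpool list should show the extra capacity.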
 