Degraded pool after installing RAID cards

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
I was trying to install a new RAID card to have all drives on HBAs instead of using the internal SATA ports (I did this for virtualization purposes).

But I forgot to export the pool first, and when I booted back up, two drives weren't showing up and the pool was in a degraded state. I eventually got all drives to show up, but the two drives that didn't appear earlier are now listed as UNUSED and are not part of the pool.

What is the best way to proceed? Should I rebuild, knowing that I have no redundancy whatsoever during the rebuild, then export the pool and make sure all drives show up in FreeNAS before importing it again?

Or is there something else I can do to get the drives back in the pool without doing a full rebuild?

The pool is a 10x10TB WD Red in an encrypted ZFS2 pool.

The rest of the specs are as follows:

MB: Supermicro X9SRL-F
CPU: E5-2650 v2
RAM: 128GB 1600MHz ECC DDR3
HBAs: 2x LSI 9211-8i in IT mode
Cache drive: Intel DC P3700
NIC: Intel X540-T2
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
The correct term is HBA. Do not refer to them as RAID cards, as it's confusing and misleading.
As for the disks showing up as unused, it sounds like maybe you had the onboard SATA ports in RAID mode?
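Either way, the first thing I'd do is capture the exact pool state from the shell and compare it against what the OS actually sees. A minimal check; "tank" is a placeholder for your real pool name:

```shell
# Show pool health, which vdev members are missing or UNAVAIL, and any errors.
# "tank" is a placeholder pool name.
zpool status -v tank

# List every disk the OS currently sees (FreeBSD CAM layer),
# to compare against the members listed in the pool output.
camcontrol devlist
```

If a disk shows up in `camcontrol devlist` but not in the pool, the problem is the on-disk labels rather than the hardware.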
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
By ZFS2 I am guessing you mean RAIDz2? Also what do you mean by caching drive? L2ARC or SLOG?

I would double check the SMART data on all disks before rebuild but if all is in good shape and you have backups, go for it.
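A quick way to sweep SMART health across all members from the shell. The da0–da9 device names are an assumption based on ten disks hanging off the HBAs; adjust for your layout:

```shell
# Print the overall SMART health verdict for each disk.
# da0..da9 assumes ten disks on the HBAs; adjust device names as needed.
for d in da0 da1 da2 da3 da4 da5 da6 da7 da8 da9; do
  echo "=== /dev/$d ==="
  smartctl -H "/dev/$d"
done

# For anything that doesn't say PASSED, pull the full attribute table:
# smartctl -a /dev/daX
```

Pay particular attention to reallocated and pending sector counts before you remove redundancy for a rebuild.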
 

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
By ZFS2 I am guessing you mean RAIDz2? Also what do you mean by caching drive? L2ARC or SLOG?

I would double check the SMART data on all disks before rebuild but if all is in good shape and you have backups, go for it.
Thank you, and yes, I'll stick to referring to them as HBAs. I do mean RAIDz2, and by caching drive I mean both L2ARC and SLOG (two partitions: a 20GB SLOG, overkill, I know, and a 256GB L2ARC; the rest is overprovisioned).

SMART data is good, and I have backups of all essential data.

So should I just go for the full rebuild of both disks? Is there no point in trying to mount the drives without rebuilding? The disks are intact, but they are out of sync with the rest of the pool; no data has been written to the pool since.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Thank you, and yes, I'll stick to referring to them as HBAs. I do mean RAIDz2, and by caching drive I mean both L2ARC and SLOG (two partitions: a 20GB SLOG, overkill, I know, and a 256GB L2ARC; the rest is overprovisioned).

SMART data is good, and I have backups of all essential data.

So should I just go for the full rebuild of both disks? Is there no point in trying to mount the drives without rebuilding? The disks are intact, but they are out of sync with the rest of the pool; no data has been written to the pool since.
Without digging into it, it sounds like maybe the SATA chipset was in some kind of RAID mode and wrote formatting that is only meaningful to the soft RAID (whether or not it was actually configured as a RAID). Potentially recoverable, but meh...
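Before committing to a full replace, it might be worth seeing whether ZFS will simply take a disk back. A sketch, assuming a pool named tank, with the gptid label as a stand-in for the real member label from `zpool status`; note that since your pool is GELI-encrypted, the member device is the .eli provider and the disk has to be geli-attached before ZFS can see it:

```shell
# If ZFS still recognizes the disk as the member it last saw, onlining it
# resyncs only the transactions the disk missed instead of a full resilver.
# "tank" and the gptid label are placeholders; use the labels from
# 'zpool status'. On a GELI-encrypted pool the member is the .eli provider.
zpool online tank gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.eli

# Then check whether it comes back ONLINE or is still missing:
zpool status -v tank
```

If the labels really were rewritten by the card, ZFS won't recognize the disk and you're back to a full replace.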

You could try listing the partition table with fdisk -l. Does that still work here, or are we using GPT... we're using GPT. I don't remember the exact command...
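On FreeBSD/FreeNAS the GPT-aware tools are gpart and glabel rather than fdisk. Something like this, with da0 as a placeholder device name:

```shell
# Show the GPT partition layout of one disk.
gpart show da0

# Show all partition labels (including gptids) the system currently knows,
# and which device each maps to.
glabel status
```

Comparing the gptids here against the member labels in `zpool status` should tell you whether the labels changed.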
 

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
Resilvered one drive last night (performance wasn't too bad: 18 hours for a full resilver of my RAIDz2 pool with 100TB raw space, 67TB formatted, and 35TB used). Resilvering the second drive right now.
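For anyone following along, the rebuild at the pool level looks roughly like this; since the pool is GELI-encrypted, the FreeNAS GUI replace flow is the safer route because it also sets up encryption on the replacement disk. Pool name, gptid label, and device are all placeholders:

```shell
# Replace the old member (by its gptid label from 'zpool status')
# with the new/re-added disk. All names below are placeholders;
# the FreeNAS GUI does this for you and handles GELI setup.
zpool replace tank gptid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx da5p2

# Watch resilver progress (scanned bytes, speed, and estimated time):
zpool status -v tank
```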

I believe one of the HBAs is bad. I think the HBA might have given one of the drive's partitions new gptids. The "unused" drive, which was part of the array, had a different gptid when I checked under "Status" on the pool in FreeNAS.
 