Replaced SAS/SATA expander, now boot hangs.

First off, my system:
  • Tyan S7012 motherboard
  • Dual E5630 CPUs
  • 40GB ECC RAM
  • Intel server adapter (10Gb Twinax, connected straight through to my desktop PC)
  • Onboard 1Gb NIC (connected to switch > router > WAN)
  • LSI 9260-8i RAID controller
  • Lenovo 03X3834 16-port 6Gbps SAS/SATA expander
  • Boot drive: a 150GB SSD
As for storage, I have fifteen 6TB 7200RPM drives, mostly HGST, though I've replaced a couple with WD drives of the same spec as needed. I set up seven 2-drive RAID 0s on the controller, with one drive as a global hot spare, then created a storage pool and added all seven to it (a rough sketch of how that looks to ZFS is below).
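For clarity, here's roughly what I'd expect that layout to look like from the ZFS side; the mfid* device names are my guess at how FreeBSD exposes the controller's logical disks. As I understand it, if those seven logical disks went in as plain (non-mirrored) vdevs, the pool is one big stripe with no ZFS-level redundancy, so the RAID controller is the only thing protecting it:

Code:
  pool: DataStore
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        DataStore   ONLINE       0     0     0
          mfid1     ONLINE       0     0     0
          mfid2     ONLINE       0     0     0
          mfid3     ONLINE       0     0     0
          mfid4     ONLINE       0     0     0
          mfid5     ONLINE       0     0     0
          mfid6     ONLINE       0     0     0
          mfid7     ONLINE       0     0     0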

I had an Intel RES2SV240 24-port expander in when I first made the pool and everything worked fine, but sadly, not long ago the heatsink on it just plumb fell off, so I replaced it with the Lenovo. I also had to replace a single failed drive, which had degraded my pool. I made both of these changes at the same time. The failed drive was replaced by the spare, and the new drive was set up as the new global spare.

Now when I boot, the system hangs, spamming this message:

Code:
metaslab.c:2563:metaslab_unload(): metaslab_unload: txg 199835, spa boot-pool, vdev_id 0, ms_id 95, weight 7c0000000000001, selected txg 199716 (602950 ms ago), alloc_txg 0, loaded 643613 ms ago, max_size 2147483648
It's worth noting that the txg number changes with each line, as does the ms_id; the weight changes slightly as well, with only the "c" changing to an "8".

I am very new to TrueNAS and haven't the foggiest idea what any of this means, besides that I'm likely hosed and going to be restoring a backup very soon without some help.
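In case it matters for suggestions: from what I've read, I could boot the TrueNAS installer, drop into a shell, and try a recovery-mode dry run followed by a read-only import, to see whether the data pool is still readable. This is just my understanding of the zpool flags, not something I've done yet:

Code:
# dry run: report what a recovery-mode (-F) import would do, without doing it
zpool import -F -n DataStore
# if that looks sane, import read-only under a temporary root
zpool import -o readonly=on -f -R /mnt DataStore
zpool status -v DataStore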
 
Update:
I rebooted the system, watched the boot messages a little more closely, and saw:
Code:
BIOS drive C: is disk0
BIOS drive D: is disk1
BIOS drive E: is disk2
BIOS drive F: is disk3
BIOS drive G: is disk4
BIOS drive H: is disk5
BIOS drive I: is disk6
BIOS drive J: is disk7

zio_read error: 97
zio_read error: 97
zio_read error: 97
zio_read error: 97
ZFS: i/o error - all block copies unavailable
ZFS: failed to read pool DataStore directory object

I brought the system down for repairs before the hot spare came online, so I was just down one logical disk and my ZFS pool was degraded. I installed the new expander and drive, booted into my controller's setup, fixed the broken RAID, and rebooted again to confirm the changes, and that's when the problem started.
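For reference, here's how I've been checking the controller side from a shell. I believe the 9260-8i is handled by the mfi(4) driver on TrueNAS CORE (FreeBSD), so mfiutil should see it, though that's an assumption on my part:

Code:
# logical volumes the controller exports to the OS
mfiutil show volumes
# physical drives behind them, which should include the hot spare
mfiutil show drives
# the full array configuration
mfiutil show config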
Is this situation salvageable?
If not, would it be better to let my RAID controller handle the pooling of the disks in a RAID 10, or should I blow out all the RAID config on the controller and let ZFS handle each physical disk instead of a host of virtual drives? I really just need one big iSCSI volume, which TrueNAS handles so nicely; I'd hate to go back to Windows Server, which is so much more than what I need.
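If the ZFS-native route is the right call, my rough understanding is it would look something like this from a shell once the controller is exposing raw disks (the da* names and the zvol size are placeholders, and I know the GUI is the preferred way to do all of this in TrueNAS):

Code:
# stripe of mirrors, ZFS's equivalent of RAID 10: seven pairs from 14 disks
zpool create DataStore \
  mirror da0 da1   mirror da2 da3 \
  mirror da4 da5   mirror da6 da7 \
  mirror da8 da9   mirror da10 da11 \
  mirror da12 da13
# 15th disk as a hot spare
zpool add DataStore spare da14
# sparse zvol to export over iSCSI
zfs create -s -V 10T DataStore/iscsi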
 