isopropyl
> You have the wrong firmware version. See the following resource:
> LSI 9300-xx Firmware Update (www.truenas.com): "Hey Community, If you are using an LSI 9300 HBA with FreeNAS or the soon-to-be TrueNAS CORE, you may experience some performance issues causing the controller to reset when using SATA HDDs. After working with Broadcom, we've come up with a..."

I got them directly from Supermicro's site under Firmware. Also, that states it's for an LSI 9300-xx, not my 3008..?
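For reference, here's a quick way to check what board and firmware the card itself reports (a sketch assuming sas3flash is available, as it is on TrueNAS CORE; the controller index 0 is a placeholder):

```
# List every attached Broadcom/LSI SAS3 controller with its
# firmware and BIOS versions:
sas3flash -listall

# Fuller detail (board name, chip, firmware) for controller 0:
sas3flash -c 0 -list
```

The board name line should show whether the card identifies as a 9300-xx; the 3008 is the controller chip those cards are built around, so the output should make it clear whether that firmware thread applies.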
I have been running this firmware on the HBA since release, and have had 3 of the same HBAs running it (1 in my old machine, and these 2, although one is inactive; it was previously in this system) without running into any issues. Maybe there's a performance bottleneck I'm not aware of? But as far as disconnects or anything like that, I have yet to notice any issues.
That being said, he does state in the post that it only affects SATA drives. The only SATA drive currently attached is my TrueNAS boot SSD, so I assume I'd run into major issues if the HBA were causing problems with that drive. All the pool drives are SAS HDDs (there's a quick way to double-check that below).
I'll throw a post confirming this in that discussion for the hell of it, though.
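For what it's worth, here's the quick SATA-vs-SAS check mentioned above (a sketch assuming a stock TrueNAS CORE shell, where smartctl and sysctl are available):

```
# For each da* disk, print the transport line from smartctl:
# SAS drives report "Transport protocol: SAS", while SATA drives
# report a "SATA Version is:" line instead.
for d in $(sysctl -n kern.disks); do
  case "$d" in
    da*)
      echo "== $d =="
      smartctl -i "/dev/$d" | grep -E 'Transport protocol|SATA Version'
      ;;
  esac
done
```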
> You have listed 13 out of your 16 drives.

Edited my post, thanks for pointing it out. They all seem to be there; for some reason the shell was cutting off the bottom of both outputs I pasted, because why would it not just copy everything I highlighted..
Anyway, both code blocks above now include the info that was missing from my post before.
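For next time, redirecting the command output straight to a file sidesteps the terminal copy/scrollback problem entirely (pool name "tank" is a placeholder):

```
# Write the full pool status to a file instead of copying it
# from the terminal, so nothing gets cut off:
zpool status -v tank > /tmp/zpool-status.txt
less /tmp/zpool-status.txt
```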
Are 2 of My Drives Failed? (See Edit: Moving Data Onto New Vdev, To Remove Old)
EDIT: SEE POST #106 FOR UPDATES: https://www.truenas.com/community/threads/are-2-of-my-drives-failed.111640/post-777996

> It appears that da12 and da13 are the spares and are being used to replace da9 and da14 respectively, if I'm reading the output correctly (I added the device idents and I've never paid attention to an output that had spares active). This would be drives K4K0BBJB and K4J3EXNB (I have the full multi_report output).

This is what I had gathered, but I was also unsure whether I was reading it correctly, as I find it a little confusing how it is all displayed.
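For anyone following along, this is roughly the shape of the relevant part of the output: when a hot spare kicks in, the original drive and the spare get nested together under a spare-N entry, and the spares section flips to INUSE. The excerpt below is illustrative (hand-written around the device names discussed above), not a paste of the actual output:

```
    mirror-4        ONLINE       0     0     0
      spare-0       ONLINE       0     0     0
        da9         ONLINE       0     0     0
        da12        ONLINE       0     0     0
      da10          ONLINE       0     0     0
  spares
    da12            INUSE     currently in use
    da13            INUSE     currently in use
```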
> But what I do not understand is that it looks like the spares were manually assigned: the drives they replaced are not "UNAVAIL" and the mirrors are not "DEGRADED".

That is very strange. I did not even know you could manually assign spares like that. It was not something I have done (at least to my knowledge, haha).
And exactly, that's what I did not understand: nothing showed as 'UNAVAIL', 'DEGRADED', or 'FAILED' anywhere. Normally when a drive is failing, I get an alert and e-mail about it (or SMART warnings), the pool shows as degraded, and one of the spares is automatically swapped in and resilvered. Then I would replace the drive, it would resilver, I would run a scrub, and all would show good again.
> As far as I can tell, you can manually detach the spares. HOWEVER, WAIT until someone tells you that it's safe. Don't do it because I mentioned it; I don't consider myself the expert on this topic.

That is very reassuring to hear if true; however, yes, I would really appreciate further input from someone very knowledgeable here before I go ahead and just detach them.
Also, what is the proper way of doing it?
I assume Web GUI > Pools > Pool Status?
Under the respective mirrors, I would click da12 and da13 and click Detach?
Please correct me if I am wrong.
(Also, confirmation that it is da12 and da13 would be highly appreciated, just as a sanity check that they're the correct drives. See the sketch below for what I think the CLI equivalent is.)
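For reference, my understanding is that the CLI equivalent would be a plain zpool detach against each active spare, which returns it to the AVAIL spares list (pool and device names are placeholders, and per the above I am not running anything until someone confirms):

```
# Detaching an active spare from its spare-N vdev returns it to
# the available spares list ("tank", da12, da13 are placeholders):
zpool detach tank da12
zpool detach tank da13

# Verify the spares show AVAIL again afterwards:
zpool status tank
```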