DIY all flash/SSD NAS (CrazyNAS issues)

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Hi there

Since the original thread (DIY all flash/SSD NAS - not going for practicality) is now demoted to read-only I've chosen to start a new one - hope this doesn't violate any community guidelines. If so - I'm very sorry.

Well.

I've accumulated around 100 SSDs, of which 87 are Samsung drives. I'm not some sort of brand snob, but I figured it's better to 'match' drives.

Decided to brush off the dust and take this project on again, but I've run into some issues in the meantime.

First: some hardware-pr0n (teaser only)

334539664_1147512812595901_3595459603888869998_n (1).jpg
334203899_996740498400231_7950791570023660280_n.jpg
334756774_754186149753167_6390930568432239482_n (1).jpg

But I'm having some issues and I'm concerned that I might have fried or broken something.

Started testing with one drive on each backplane. Created a pool and did some testing. Everything seemed fine, so I started to fill in the drives with all four backplanes still connected.

At some point I ran into issues and decided to reboot - nothing seemed off.

Then I powered off the system, and after moving some cables around to tidy things up, I tried to power the system back on.

Nothing. A click and nothing else.

Then I remembered: up until now, I've been using a 450 W SFF PSU. Bad decision, I think.

Tried some variations of powering off, connecting the backplanes while the system was still on, fewer disks. Nothing seemed to work reliably. Tried almost everything except installing a more powerful PSU.

Finally, did just that. But now, I'm having stability issues. Not all disks are detected. If they are, they are not brought into TrueNAS and my pools are corrupted or just not showing up.

Thought I'd fried something, and had almost given up.

With all four backplanes connected to Molex power, but not to the HBA (in essence taking the SAS-SAS expander cables out of the equation), the drives started showing up. Still throwing errors and not reliable at all.

So...

My first test was with all four backplanes connected (in pairs: two to the HBA, each with a SAS-SAS expander cable down to the next backplane in the cascade - don't know if that made any sense?)

Like so:

HBA
⮑ Backplane #1 ➞ SSD
    ⮑ Backplane #3 ➞ SSD
⮑ Backplane #2 ➞ SSD
    ⮑ Backplane #4 ➞ SSD

If I remove the SAS cable and connect them to the HBA in pairs, like so:

HBA
⮑ Backplane #1 ➞ 24x SSD
⮑ Backplane #2 ➞ 24x SSD

They both seem to work as intended using any combination of HBA -> SAS-port (of which there are three on each expander).

But, as soon as I connect them like this - in essence using any SAS-SAS cable (cascaded)

HBA
⮑ Backplane #1 ➞ 24x SSD
    ⮑ Backplane #2 ➞ 24x SSD

Or

HBA
⮑ Backplane #3 ➞ 24x SSD
    ⮑ Backplane #4 ➞ 24x SSD

Nothing works.

Did I fry something? Are the cables broken? Are the backplanes broken?

Sorry if I've upset any of you with my behaviour :oops:
To be honest, I'm quite ashamed/sorry, if I've broken this fine system with my stupid, stupid decisions :confused:

P.S.: Have attached an assortment of screenshots:
334194601_3044506339188460_8838344760285247341_n.png

334102101_751233696538562_5572361981490114207_n.png

333660725_3136968106603746_4711374805891462119_n.png

333869610_636408851640294_6434623504724067934_n.png

337159938_8955261991211942_8990451530432467475_n.png
 
Joined
Jun 15, 2022
Messages
674
Aside: Suggested Changes:
Hostname: CrazyNAS
Vdev Name: Dozer

Alternate Option: Change it to LEET:
Hostname: Cr4zyn45
Vdev Name: D0z3r
 

Arwen

MVP
Joined
May 17, 2014
Messages
3,611
Hi there

Since the original thread (DIY all flash/SSD NAS - not going for practicality) is now demoted to read-only I've chosen to start a new one - hope this doesn't violate any community guidelines. If so - I'm very sorry.

...
Solely on the subject of that old sub-forum, and your post, that had nothing to do with you.

The old General Forum was out-dated and people mis-posted in it, when better places existed. So the forum moderators made them R/O.


As for your NAS, sorry, I have no comment.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The old General Forum was out-dated and people mis-posted in it, when better places existed. So the forum moderators made them R/O.
Just to help with that, I added a post to the old thread pointing to this one.

I suppose that if we're saying 2 in parallel can run, but 2 in series fail, then power is out of the equation from a disk perspective.

Have you ever had it running successfully in series and it's just now failing? or is it something that has never worked despite @jgreco's assurance that it doesn't matter how you plug them together?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I assume each backplane works individually?

If they do, then let's move on to two backplanes in a simple chain.

For each expander card, assuming they're oriented with the words "Supermicro" or the "SuperO" logo at the top and the SAS ports facing the right, your HBA should connect to the bottom port. Then the middle port on "Backplane #1" should be connected to the bottom port on "Backplane #2"

Don't connect anything to the "top port" anywhere. Let's see if this fires it up.
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Aside: Suggested Changes:
Hostname: CrazyNAS
Vdev Name: Dozer

Alternate Option: Change it to LEET:
Hostname: Cr4zyn45
Vdev Name: D0z3r

When it's running - if I ever get to that - I'll certainly change the hostname. I assume by vdev-name you mean pool-name, right?

Solely on the subject of that old sub-forum, and your post, that had nothing to do with you.

The old General Forum was out-dated and people mis-posted in it, when better places existed. So the forum moderators made them R/O.


As for your NAS, sorry, I have no comment.

That's ok. Thanks for clarifying nonetheless

Just to help with that, I added a post to the old thread pointing to this one.

I suppose that if we're saying 2 in parallel can run, but 2 in series fail, then power is out of the equation from a disk perspective.

Have you ever had it running successfully in series and it's just now failing? or is it something that has never worked despite @jgreco's assurance that it doesn't matter how you plug them together?

That is correct. But now it seems like nothing is working. I have no idea what's going on and to what degree I screwed something up.

Yes. My initial testing - to make sure all four backplanes at least worked - was done with all four connected and by installing a single 120 GB Kingston A400 SSD on each. Creating a Z1-vdev and pool and doing some SMB and iSCSI-testing. No issues.

It was when I removed the Kingston drives, installed the Samsung ones and did a power cycle - presumably without a powerful enough PSU - that I started to run into problems.

To be honest, I'm not entirely sure about the Kingston config, but I'd like to think that I connected the HBA into port 1 (the lower one, labeled "SAS Port: PRI_J1" in the User's Manual for the backplane) on each 'primary' backplane. Then I connected each secondary backplane from port 2 (on the primary) into port 1 on the secondary.

I know there was a state / point in time where they were all just connected - and it did work, because I decided to move the drives around just for fun / testing. Just like jgreco described. But I can't remember exactly which configurations were tested. Sorry

I assume each backplane works individually?

If they do, then let's move on to two backplanes in a simple chain.

For each expander card, assuming they're oriented with the words "Supermicro" or the "SuperO" logo at the top and the SAS ports facing the right, your HBA should connect to the bottom port. Then the middle port on "Backplane #1" should be connected to the bottom port on "Backplane #2"

Don't connect anything to the "top port" anywhere. Let's see if this fires it up.

To be honest, I'm not entirely sure by now.

Tested that config yesterday. No luck.

I've begun to systematically test each config - only using the cables from the HBA.

I'll report back with my findings.

Right now I see two possible last-ditch efforts:
  • Use another RAID card, just to see if the disks pass through
  • Remove all drives - no biggie, just an annoyance - and then test with just one drive on each backplane.
I have a nagging suspicion that some of the drives might be causing problems, but I'll have to exclude variables and approach this in a systematic manner.

There are a lot of variables at play right now:
  • The backplanes didn't work properly at all, and loading them with drives revealed the issue
  • I've fried something with my power cycles and/or plugging/unplugging cables in a 'live' system
  • Twisted the cables, so that nothing works
  • Broken the connectors on the backplanes
  • Fried the HBA
  • Fried SSDs (from a power surge or whatever) causing halts
  • A boatload of other issues I'm not smart or competent enough to realize.
Stay tuned. I'm filling in the Excel spreadsheet to gain some insights, if any.
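One way to make the spreadsheet rows comparable between cabling configs is to log how many disks the HBA actually exposes after each change. On TrueNAS CORE (FreeBSD) the kernel's disk list lives in the `kern.disks` sysctl; a tiny helper (the function name is my own, not a TrueNAS tool) counts the SAS/SATA `da` devices in such a listing:

```shell
# count_da: count the 'da' (SAS/SATA) disks in a kern.disks-style listing.
# NVMe (nvd/nda) and boot SATA (ada) devices are deliberately excluded.
count_da() {
  echo "$1" | tr ' ' '\n' | grep -c '^da'
}

# On a live TrueNAS CORE box you'd feed it the real sysctl output:
#   count_da "$(sysctl -n kern.disks)"
count_da "ada0 da0 da1 da2"
```

Running it after every reboot and noting the count next to the topology in the spreadsheet makes "58 vs 59" regressions easy to spot.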
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
#UPDATE

Well, decided to skip all combinations and go back to the original configuration: all four backplanes attached. Two directly into the HBA and the secondary backplanes via the expanders.

Tested each backplane with one Kingston A400 SSD installed and with one cable directly into the HBA.

Skærmbillede 2023-03-24 kl. 15.22.21.png


Each time, one drive showed up. Good. Both "HBA"-cables seem to work.

Decided to move on and include the "expander"-cables. Expected two drives to show up.

Skærmbillede 2023-03-24 kl. 15.35.35.png


Well then. Decided to go for gold and connect all four backplanes with one SSD.

IMG_9056.jpg


The result?

Skærmbillede 2023-03-24 kl. 16.06.41.png


All four drives are detected. Success then, I guess?

I seriously have no idea what's going on. But I have some additional information:
  • At some point, when filling the backplanes with Samsung drives, the "live detection" halted and one of the backplanes' status LEDs turned red. After a minute or two, all LEDs on that particular backplane turned red. Decided to reboot, and then everything seemed normal.
  • After setting up the first two backplanes (one pool: CN-SSD, consisting of two raidz2 vdevs) I noticed a SMART error. I can't remember if that was before or after all Hell broke loose.
So. Assuming that everything now seems to be in working order (tried moving the Kingston SSDs around on each backplane) I have some thoughts:
  • Can mismatched firmware or SSD revisions/models be causing issues?
  • Can a single SSD with a SMART error be throwing everything off?
  • How can I (or should I) burn in / test the SSDs in a sane manner?
    • All SSDs have been reset / formatted / written over with zeroes using the macOS terminal diskutil zeroDisk command
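On the burn-in question: a common approach is a destructive `badblocks` write pass followed by a SMART long self-test and a `smartctl -a` review per drive. Since doing that across ~100 drives by hand is error-prone, here's a sketch (`gen_burnin` is my own helper name, and it assumes `badblocks` and `smartctl` are available on the system) that only *prints* the commands, so they can be reviewed before being piped to `sh`:

```shell
# Print one destructive burn-in pipeline per disk device name.
# Review the output first, then pipe it to sh to actually run it.
# WARNING: badblocks -w destroys all data on the drive.
gen_burnin() {
  for d in "$@"; do
    echo "badblocks -wsv -b 4096 /dev/$d && smartctl -t long /dev/$d"
  done
}

# e.g.: gen_burnin da0 da1 da2 > burnin.txt   # inspect, then: sh burnin.txt
gen_burnin da0 da1
```

A zero-fill via diskutil exercises the drive once, but the `badblocks -w` pass writes and verifies several patterns, which is closer to a proper burn-in.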
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Well. The saga continues.

I'm seriously considering scrapping the project - too much noise, too much power, too much hassle.

But, if I'm being honest. That's kinda half the fun, right... right?!

Anywho.

Have done some more testing. It bothers me like nothing else that the configuration works with all four backplanes with one SSD on each. Disks show up and all that jazz.

But the moment I begin to plug in more drives, issues arise.

But first, some pr0n:
IMG_9061.jpeg
IMG_9060.jpeg
IMG_9064.JPG
IMG_9066.jpeg
IMG_9068.jpeg
IMG_9069.jpeg

And also, a video:


The first backplane (the one where all the SSDs light up) went smoothly. I have a total of 32 drives of that particular model (MZ7LN-something): 24 filling up one backplane (the rightmost one) and another eight on another backplane (the second from the left). Those are the ones that light up.

One the same backplane - the second from the left - I have installed 7 other drives (MZ7LF-something), that also show up without any issues.

Those 32 (MZ7LN) and 7 (MZ7LF).

All along I kept one of the Kingston drives in those backplanes I hadn't yet begun to put the other SSDs into - like some sort of sanity check to keep an eye on the total number of detected drives.

Then came the last ones - MZ7TE. Those are the SSDs on the first backplane (the leftmost) and the third backplane (second one from the right).

I first made the mistake of assuming - after doing some quick SMART checks on Windows using a USB adapter and CrystalDiskInfo - that all drives were fine, and installed them in one go (system powered off), beginning from the leftmost backplane and installing the remaining ones on the third backplane.

The issues at hand:
  • Either some drives are causing issues and making boot (the "f/w initializing devices" thingie) take forever. It's almost a given that if it takes longer than usual, no drives are detected and I get an "Adapter at Baseport is not responding"
  • Perhaps it's an address-issue thing? I've decided to check if I can install the drives one by one and, by doing so, identify problematic drives and/or ports on one of the backplanes.
  • All ports seem to work. Did some testing with the Kingston drives and moved them around to see if they were detected, no matter which port they were installed in.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Just a thought - I'm still seeing the "Avago MegaRAID SAS-MFI" line in the BIOS. Are the HBAs running IR mode firmware?

Not that I expect it to be a magic bullet, but is it potentially causing something to hang up in the BIOS/boot device enumeration and might be cured with IT firmware and possibly no BIOS?
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Just a thought - I'm still seeing the "Avago MegaRAID SAS-MFI" line in the BIOS. Are the HBAs running IR mode firmware?

Not that I expect it to be a magic bullet, but is it potentially causing something to hang up in the BIOS/boot device enumeration and might be cured with IT firmware and possibly no BIOS?

Actually, I was wondering the very same thing.

It was supposed to be already flashed into IT-mode, I see no keyboard shortcut and the drives show up in TrueNAS - so I just assumed it was.

Forgive my n00b’ness, but how do I check it?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Actually, I was wondering the very same thing.

It was supposed to be already flashed into IT-mode, I see no keyboard shortcut and the drives show up in TrueNAS - so I just assumed it was.

Forgive my n00b’ness, but how do I check it?

I believe sas2flash -listall should give you the information you need.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
LSI's firmware is full of semi-arbitrary limits to the number of devices. I think it's 128 devices (including HBA and expanders) for the 9211.
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
I believe sas2flash -listall should give you the information you need.

Well. Following this guide How to flash a LSI SAS 3008 HBA (e.g. IBM M1215) to IT mode got me this result...

iKVM_capture (2).jpg


Must be doing something wrong. Someone else in the comments writes that one should use sas2flsh.efi instead. Perhaps I should give that a try.

EDIT:

Gave it a try. No success...

iKVM_capture (3).jpg


LSI's firmware is full of semi-arbitrary limits to the number of devices. I think it's 128 devices (including HBA and expanders) for the 9211.

Still, the thing works as expected without the previously mentioned drives?

Also. The HBA I use in this project is a 9340-8i. Don't know how different it is from the 9211 though? :smile:
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
The HBA I use in this project is a 9340-8i. Don't know how different it is from the 9211 though?
Different enough... that may well be one of those MegaRAID adapters with no real IT firmware available.

But also means you should be using sas3flash instead of sas2flash.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Different enough... that may well be one of those MegaRAID adapters with no real IT firmware available.
It should be possible to crossflash, since it's just a SAS3008.
But also means you should be using sas3flash
Not if it's in RAID mode, which probably means a different set of tools. Still, it should show up in sas3ircu.
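For reference, the quickest tell is usually the product/BIOS banner itself: IT/IR firmware answers to `sas3flash -list` and shows up in `sas3ircu LIST`, while MegaRAID-mode firmware announces itself with the "MegaRAID SAS-MFI" BIOS banner seen in this thread. A rough, purely illustrative classifier over such a banner string (the patterns are assumptions, not an official Broadcom mapping):

```shell
# Heuristic only: classify an LSI/Avago firmware banner or product string.
fw_mode() {
  case "$1" in
    *MegaRAID*|*MFI*) echo "RAID firmware - crossflash to IT for TrueNAS" ;;
    *IT*)             echo "IT firmware - plain HBA, what TrueNAS wants" ;;
    *IR*)             echo "IR firmware - integrated RAID" ;;
    *)                echo "unknown" ;;
  esac
}

fw_mode "AVAGO MegaRAID SAS-MFI BIOS"   # the banner from the boot screenshots
```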
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Tried updating the firmware on the problematic drives. Very cumbersome, but I was able to have six drives connected at a time, so it wasn't too lengthy a process. However, now some of the (problematic) drives show up.
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Ok. Latest update - and this is a long one - so sorry in advance.

IMG_9336.jpeg


Decided to 'start over' and remove all drives from the setup. The idea was to test each SAS/SATA port to see if any of them were causing trouble.

IMG_9337.jpeg


Started with one drive on the leftmost backplane (#0.0) and moved it 'up' the backplane (#0.1, #0.2, #0.3 and so on) to see if it was detected or not.

IMG_9338.jpeg


Decided to use the 7 'known good' drives (MZ7LF128HCHP-000L1, or just MZ-7LF's from now on).

IMG_9340.jpeg


This is the naked bunch of 32 MZ-7LN's, also 'known good' drives from the last time around.

IMG_9341.jpeg


This is the naked bunch of 30 MZ-7TE's - the ones previously named 'known bad' or 'problematic' drives. These have been flashed to the latest firmware according to the Lenovo utility.

In fact all drives have been checked and updated.

IMG_9342.jpeg


7 drives in (the MZ-7LF's) on the notepad.

IMG_9343.jpeg


Had TrueNAS running to see if the drives showed up or not. When a drive wasn't detected it was instantly removed and replaced with a 'known good' drive. The idea was to see if it was the drive itself or the port that caused issues.

Some drives in, I decided to just go for it and populate the backplanes one drive at a time.

IMG_9344.jpeg


After some time, it seemed I had found the 'bad ones' - some MZ-7LF's and eight or so MZ-7LN's. This confused me a little, since all the MZ-7LN's had previously been 'known good' - they showed up without issue. In fact, the MZ-7LN's were the first drives installed, way back, and I had the idea those were good and the MZ-7TE's were all 'bad' or something.

Perhaps the issue is something else?

IMG_9345.jpeg


It would now seem as if I have found the cause of the issue. The last backplane (#3) is the culprit. But, not so fast!

IMG_9346.jpeg


It seemed weird to me, because the middle two backplanes (#1 and #2) are the new ones and the ones to the left and right (#0 and #3) are the old ones - known to be working without issues.

IMG_9347.jpeg


I decided to try another batch of drives entirely. Have around 18 MZ-7TD's on the shelf. Those are on hold because the PCB within is 'full size'. Meaning, they're not small like all the others but take up almost the same space as the 2.5" enclosure itself. Though the drives themselves work without issue, they kinda ruin the 'aesthetic' I'm going for.

IMG_9350.jpeg


But no. The backplane would hang and light the red error LED. Meaning, something was up.

IMG_9351.jpeg


Even with the drive removed, the LED stays on and I lose all hotswap capability on the other drives.

IMG_9354.jpeg


Same issue with another 'known good' drive on the last backplane.

We are now at exactly 58 drives detected, and no matter what I do, I can't seem to get it to accept any more drives - 'known good' or not.

IMG_9353.jpeg


Tried switching them around - moving the cables only, meaning: backplane #3 is now backplane #2 in the chain, so to speak. Made no difference.

IMG_9357.jpeg


Finally, insanity check. Moved all drives from backplane #3 (former #2) to backplane #2 (now #3).

All 58 mounted drives show up just fine.

That was the hardware. Now, what about the 'software'-side of things?

Well. Got a weird error I don't quite understand. Tried to move one backplane off another and put two backplanes into the same expander.

Like this:

  • HBA
    • Port0
      • Backplane #0
        • Backplane #1
        • Backplane #2
    • Port1
      • Backplane #3
It didn't like that at all and threw this error at me:

iKVM_capture0.jpg


Ok. So 'cascading' means just that and I can't use an expander to 'split' the traffic. Sorry. Didn't mean to offend you, dear HBA.

Kept an eye on the HBA BIOS thingie from then on.

iKVM_capture2.jpg


Exactly 58 drives, no issues. The percentage thingie flies by and detects 58 drives. Move along.

iKVM_capture3.jpg


Oh. So you meant to install a 59th drive and expected me to comply? F-you and let me take forever to get there.

iKVM_capture4.jpg


Minutes later... I'm still on 100%. You think you're done? You think this is fine? No!

iKVM_capture1.jpg


Well. This I had seen before. When I somehow violate the backplanes/HBA/whatever-gods, it takes forever to load the drives (0%, 3%, 7%, and so on, slowly, until it reaches around 98% and then hangs. Finally it reaches 100% and gives me the screen you see above).

Remove the culprit/extra drive, and voila!

iKVM_capture6.jpg


iKVM_capture7.jpg


Things are back in order. You may proceed.


Soo... you've made it this far. First of all, thank you!

I still have no idea what's going on here.

Am I hitting an arbitrary 60-device limit? (58 drives + two backplanes on an expander, I figured)

To be clear. All SAS/SATA-ports seem to be working just fine. If this is the limit and I can go no further I'll accept it, but.

The spec for the 3008 controller says '1000 devices', and to the best of my understanding (not much to go by, but hey) the expanders should cascade just fine. Like:
  • HBA
    • Port0
      • Backplane #0
        • Backplane #1
          • Backplane #2
    • Port1
      • Backplane #3
In fact, this is probably the only config I haven't tried.

Anywho.

If 58 drives is the limit, then the rest of the project kinda falls apart. As it is right now, I'm at 120 W 'idle'. That's much too much to make it worthwhile.

The total spending is way beyond what one (two or perhaps even three) 8 TB SATA drives would have cost me.

Just to be clear: I'm not blaming anyone on this forum for my spending habits. I knew going in, that this was beyond practical.

But, at least I figured I'd have a... wait for it... CrazyNAS to show for it.

[ADDENDUM #1]:

Oh! Forgot to mention: all drives work fine. If I remove a drive and replace it with one from the pile of unused drives, the new drive is detected. If I keep doing this with random drives, I'll eventually have placed all drives from the 'idle pile' in the setup - showing up just fine - but I'm still unable to add extra drives. It seems I'm simply not 'permitted' to exceed 58 drives on the backplanes.

[ADDENDUM #2]:

Just tried this config:
  • HBA
    • Port0
      • Backplane #0
        • Backplane #1
          • Backplane #2
    • Port1
      • Backplane #3
Didn't work. Got the 'total number of enclosures connected has exceeded the maximum ...'-error.

Also. I can move the drives around just fine, as long as I keep it to 58 drives:

IMG_9358.jpg
 
Last edited:
Joined
Jun 15, 2022
Messages
674
Describe the problem to ChatGPT and ask it to diagnose potential causes; it will scour the web as it was as of 09/2021, which should be recent enough. If it tells you to stick a fork in the toaster....maybe skip that step. :grin:
 

Allan_M

Explorer
Joined
Jan 26, 2016
Messages
76
Describe the problem to ChatGPT and ask it to diagnose potential causes; it will scour the web as it was as of 09/2021, which should be recent enough. If it tells you to stick a fork in the toaster....maybe skip that step. :grin:

Actually, I've already done that very thing. Here's the result:

There could be several reasons why the IBM M1215 HBA is not detecting all 96 drives that should be supported by the configuration of the HBA and backplanes.​
Here are a few possible reasons:​
Firmware or driver issues: Ensure that you have installed the latest firmware and drivers for the IBM M1215 HBA. Sometimes, outdated firmware or drivers can cause issues with detecting all of the connected drives.​
System limitations: Ensure that the system you are using is capable of supporting the maximum number of drives that you are trying to connect. For example, some servers have limitations on the number of PCIe lanes available, which can limit the number of HBAs that can be installed.​
Power supply limitations: Ensure that your power supply is capable of providing enough power to support all of the connected drives. If the power supply is insufficient, some of the drives may not be detected.​
Cable or connection issues: Check that all of the cables and connections between the HBA and the backplanes are securely attached and functioning properly. Faulty cables or connections can cause issues with drive detection.​
Faulty hardware: If none of the above solutions work, it's possible that one or more of the HBAs or backplanes is faulty. Try swapping out different components to isolate the issue.​
It's worth noting that with a large number of drives, it's not uncommon to experience issues with drive detection or connectivity. Troubleshooting these issues can be complex, so it may be helpful to consult with a hardware or storage expert to help diagnose and resolve the issue.​

This is after some back and forth.

Also. I found this link: LENOVO ServeRAID M1215 Sas/SATA Controller

I don't know if it's relevant or not, but towards the bottom: "Max Storage Devices Qty 64"

64 is a somewhat familiar number - it's a power of two - but that makes 58 seem so arbitrary. Like it's 'missing' 6 of something.

Incidentally, there are exactly six empty ports on the SAS expanders. But according to ChatGPT, the expanders themselves do not count as storage devices:

In general, a SAS backplane does not count as a connected storage device against the maximum number of drives that can be supported by a RAID controller or HBA.​
The backplane is simply a device that allows multiple hard drives or solid-state drives to be connected to a single interface, and it does not itself contain any storage capacity. Each individual hard drive or SSD connected to the backplane is considered a separate storage device and will count towards the maximum number of drives that can be supported by the RAID controller or HBA.​
So, in the case of the IBM M1215 HBA connected to four Supermicro BPN-SAS2-216EL1 backplanes, the total number of storage devices supported would be 96, and this would include all the individual hard drives or SSDs connected to the backplanes. The backplanes themselves would not count towards this limit.​

I then kept asking for suggestions and it returned the list mentioned above.
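One caveat about that ChatGPT answer: each SAS expander backplane normally presents an SES enclosure-services device on the bus, and controller firmware often enumerates those (plus its own virtual/SEP devices) against its device cap, so "backplanes don't count" may well be wrong here. I can't verify the M1215's exact bookkeeping, but a purely hypothetical accounting (the per-item counts below are assumptions, not from any Broadcom/Lenovo documentation) shows how a 64-device cap could shrink to 58 usable drive slots:

```shell
# Hypothetical accounting - the subtracted counts are assumptions.
max_devices=64   # "Max Storage Devices Qty 64" from the Lenovo spec sheet
ses_devices=4    # one SES enclosure device per expander backplane (assumed)
virtual=2        # controller / SEP / virtual devices (assumed)
echo "usable drive slots: $((max_devices - ses_devices - virtual))"
```

If something like this is the real bookkeeping, no amount of re-cabling would ever get past 58 drives on this controller.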
 

Etorix

Wizard
Joined
Dec 30, 2020
Messages
2,134
If 58 drives is the limit, then the rest of the project kinda falls apart. As it is right now, I'm at 120 W 'idle'. That's much too much to make it worthwhile.
Remarkable work to try and keep this totally impractical but wonderful build going.

Screenshots show "AVAGO Mega-RAID SAS-MFI BIOS", which does not look like the recommended IT firmware. Could that be the issue?
A pesky RAID controller to be replaced by a plain 9300-8i…
 