SOLVED Two LSI cards DO work in a Supermicro X8SIE system


Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
EDIT: TL;DR - Two LSI cards do work in a Supermicro X8SIE-LN4F system!

I have discovered that a pair of LSI HBA cards does not work in a Supermicro X8SIE-LN4F system. The cards themselves work fine, but network connectivity becomes intermittent when they're both installed. I don't know why, but a reasonable explanation may be that one of the LSI cards (a Dell H200 in this case) connects to the Intel 3420 PCH along with the 4 Intel 82574L LAN chips, and this makes a hash out of things. That's my best guess so far... (The other card, an IBM M1015, connects directly to the CPU via an x16 slot - see attached diagram.)

I had hoped to modify my 'test' AiO system (for details, see 'my systems' below) to use a mirrored pair of Intel DC S3500 SSDs as the datastore for both ESXi and the FreeNAS VM. In the same way that @joeschmuck used a Dell H310 HBA, I planned to configure this RAID1 array on a Dell H200 flashed to IR mode. This test system currently boots ESXi 6 from a USB stick, with FreeNAS installed on a local SSD, and it seemed to me that the RAID1 setup, being redundant, would be safer and more robust.

I planned to configure the other LSI card (an IBM M1015 in IT mode) the same way I've used it for roughly a year, passing it through to the FreeNAS VM via VT-d.
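(Aside: before passing a card through, I like to confirm ESXi actually sees it from the shell - something like this, assuming SSH or the ESXi Shell is enabled:)

Code:
  # List PCI devices; the M1015 shows up under "LSI Logic / Symbios Logic" (SAS2008)
  lspci | grep -i lsi
  # Fuller details, including the PCI address used for VT-d passthrough
  esxcli hardware pci list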

Everything went swimmingly at first. I installed the Dell H200 into the PCIe x4 slot, attached the two SSDs and configured them in a RAID1 array, and installed ESXi 6 from an ISO image via IPMI. The system booted up and I thought I was "cookin' with gas", as we say here in the South.

But I quickly discovered that the X8SIE's network connectivity would drop out intermittently... in fact, more often than not it wouldn't work at all! This was puzzling, as I'd been successfully using all 4 NICs connected to a LAG group on a Dell PowerConnect 2816 switch for close to a year. Since using all four NICs didn't work, I tried using each NIC separately, and when that didn't work I tried pairing them in several different ways -- all without success. I experimented with BIOS settings, too, all to no avail. After a couple of hours of troubleshooting I finally gave up and restored the old hardware configuration.

I don't think this 'Two-LSI-cards-in-an-X8SIE' configuration can be made to work... but I'm posting this in the hope that one of you hardware gurus will prove me wrong! :)
[Attached: fritz-system-diagram.jpg - system block diagram]
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
I have seen in my systems where a Perc H700 and Perc H200 do not want to co-exist nicely. It would somehow reduce the actual physical RAM on the system (by ~8GB...) and throw some NVRAM errors during POST. However, an LSI 9260-8i and H200 would be fine...

Come to think of it @maglin had a similar issue with a Perc H700 and 4-Port Intel NIC...

Originally for my AiO I had two SSDs connected in a similar fashion to a Perc H700 (which would not play nicely with a Perc H200). Then I went with an LSI 9260-8i for the boot drives. Later I decided to change it to a Perc H200 in IR mode.

All of those worked fine (except the H700 + H200 combo), but since I wanted to free up a PCIe slot (I only have two to work with) I went with two 128GB Samsung mSATA drives in a "Dual mSATA SSD to 2.5-Inch SATA RAID Adapter". I have used these for years in other ESXi (5.0/5.5) boxes without issues, and it lets me use the motherboard SATA ports instead (they're only 3Gb/s, and I didn't care to pass those through to FreeNAS anymore).

Granted, it doesn't have a BBU, but neither does the H200. The LSI 9260-8i gets pretty hot, IMHO, so that was also a negative for me.

A couple of ideas/suggestions:
  1. If possible, try different PCIe slots (I don't really think it will help, though)
  2. Flash the IBM M1015 back to original and use that for the boot drives, then pass the H200 through to FreeNAS
  3. Flash the H200 back to Dell's firmware (if it is currently using LSI's) and leave the M1015 with LSI's firmware to pass through to FreeNAS
  4. Sometimes certain cards just won't work right together in the same system, no matter what. I tried just about everything with the H700 & H200 combo, flashing each card with Dell's and LSI's firmware... Heck, I even flashed the H700 to an LSI 9260-8i, all to no avail.
    • Side note: I bricked an H200 mezzanine during the process... Chose the wrong card with MegaRec and somehow changed the H200 to *think* it was a SAS2108 instead of a SAS2008... o_O
  5. If you want, I may be able to send you a few different cards to try out in combination to see what works best.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I have seen in my systems where a Perc H700 and Perc H200 do not want to co-exist nicely. It would somehow reduce the actual physical RAM on the system (by ~8GB...) and throw some NVRAM errors during POST. However, an LSI 9260-8i and H200 would be fine...

Come to think of it @maglin had a similar issue with a Perc H700 and 4-Port Intel NIC...

Originally for my AiO I had two SSDs connected in a similar fashion to a Perc H700 (which would not play nicely with a Perc H200). Then I went with an LSI 9260-8i for the boot drives. Later I decided to change it to a Perc H200 in IR mode.

All of those worked fine (except the H700 + H200 combo), but since I wanted to free up a PCIe slot (I only have two to work with) I went with two 128GB Samsung mSATA drives in a "Dual mSATA SSD to 2.5-Inch SATA RAID Adapter". I have used these for years in other ESXi (5.0/5.5) boxes without issues, and it lets me use the motherboard SATA ports instead (they're only 3Gb/s, and I didn't care to pass those through to FreeNAS anymore).

Granted, it doesn't have a BBU, but neither does the H200. The LSI 9260-8i gets pretty hot, IMHO, so that was also a negative for me.

A couple of ideas/suggestions:
  1. If possible, try different PCIe slots (I don't really think it will help, though)
  2. Flash the IBM M1015 back to original and use that for the boot drives, then pass the H200 through to FreeNAS
  3. Flash the H200 back to Dell's firmware (if it is currently using LSI's) and leave the M1015 with LSI's firmware to pass through to FreeNAS
  4. Sometimes certain cards just won't work right together in the same system, no matter what. I tried just about everything with the H700 & H200 combo, flashing each card with Dell's and LSI's firmware... Heck, I even flashed the H700 to an LSI 9260-8i, all to no avail.
    • Side note: I bricked an H200 mezzanine during the process... Chose the wrong card with MegaRec and somehow changed the H200 to *think* it was a SAS2108 instead of a SAS2008... o_O
  5. If you want, I may be able to send you a few different cards to try out in combination to see what works best.
Thanks for the suggestions! And for your kind offer of loaner cards to test.

I had already thought about switching cards between the (only) two PCIe slots, i.e., moving the Dell H200 with the RAID1 bootable datastore to the x16 slot directly connected to the CPU and moving the M1015 to the PCH-connected x4 slot. My 'theory' for why this might work? There wouldn't be any boot-ROM shenanigans going on via the PCH, just the M1015's dumb IT-mode connection to disks. This is the first thing I'll try this morning. But I'm not gonna get my hopes up...

Is there a Dell Phase 19 or later FW for the H200? I think I remember reading that ESXi supports 19 or later; I'm using LSI's P20.
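(For anyone checking their own card: sas2flash reports the phase, and it ships with FreeNAS - or you can run the DOS/EFI version from a boot disk:)

Code:
  # Lists every LSI SAS2008-family controller with its firmware phase and BIOS version
  sas2flash -listall
  # Full details for controller 0
  sas2flash -c 0 -list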

I'm intrigued by the StarTech dual mSATA SSD converter. How does ESXi see it? As a single drive? How do you configure and manage the RAID array?
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Is there a Dell Phase 19 or later FW for the H200? I think I remember reading that ESXi supports 19 or later; I'm using LSI's P20.
Will have to look at the card and see what version is on it. *Thinking* it may have 19, but with Dell's 7.x...

I'm intrigued by the StarTech dual mSATA SSD converter. How does ESXi see it? As a single drive? How do you configure and manage the RAID array?
Yeah, it is seen as a single drive. There's no real management of the RAID except setting the jumper on the card itself. Granted, I won't get alerted if a drive fails (but neither would I with a Perc H200, unless I'm watching it boot or go to more trouble), but I do back up the ESXi configuration, so I can always use that in case of emergency. That's actually what I used to restore the setup when I switched drives/controllers.
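For reference, the backup/restore is just a couple of shell commands (going from memory here, so double-check against VMware's docs):

Code:
  # Back up the host configuration; prints a URL where configBundle.tgz can be downloaded
  vim-cmd hostsvc/firmware/backup_config
  # To restore: enter maintenance mode, copy the bundle to /tmp on the host, then...
  vim-cmd hostsvc/maintenance_mode_enter
  vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz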

While it's not an "enterprise" solution, it works for my use case and gives a little bit of a warm fuzzy as far as redundancy goes. Hardware RAID with a BBU would be the best method, but that's overkill for me. The funny thing was that when I first installed ESXi on it, ESXi showed it as an SSD. When I loaded the previous configuration, it reverted to non-SSD (I'm guessing that's how the Perc H200 presented it and I didn't notice). Not a big deal; I just told ESXi that it was an SSD.
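If anyone else needs to flip that flag, it's roughly this from the ESXi shell (the naa ID below is a placeholder for your actual device):

Code:
  # Find the device ID of the volume in question
  esxcli storage core device list
  # Add a claim rule tagging it as SSD (per VMware KB 2013188)
  esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.XXXXXXXX --option="enable_ssd"
  # Reclaim the device so the rule takes effect
  esxcli storage core claiming reclaim -d naa.XXXXXXXX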

Word of warning, stay away from these (I bought 4 of each and basically they all suck):
https://www.amazon.com/gp/product/B00TWFIOF2/?tag=ozlp-20
https://www.amazon.com/gp/product/B00BGEVV2A/?tag=ozlp-20
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Will have to look at the card and see what version is on it. *Thinking* it may have 19, but with Dell's 7.x...
I flashed my H200 to LSI's version 20 in the three-step process you described here on the forum, using (mostly) your files... thanks! :)
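(For anyone who finds this thread later: the three steps are the widely-posted DOS-boot crossflash sequence, roughly as below. Exact file names depend on the package you download, and the usual "this can brick your card" warnings apply.)

Code:
  REM Step 1: wipe the Dell SBR and erase the flash (from a DOS boot disk)
  megarec -writesbr 0 sbrempty.bin
  megarec -cleanflash 0
  REM Reboot. Step 2: load Dell's 6Gbps SAS firmware as a bridge
  sas2flsh -o -f 6GBPSAS.fw
  REM Reboot. Step 3: flash LSI 9211-8i P20 firmware (2118ir.bin for IR mode, 2118it.bin for IT)
  sas2flsh -o -f 2118ir.bin
  REM Restore the SAS address from the sticker on the card
  sas2flsh -o -sasadd 500605Bxxxxxxxxx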
Yeah, it is seen as a single drive. There's no real management of the RAID except setting the jumper on the card itself. Granted, I won't get alerted if a drive fails (but neither would I with a Perc H200, unless I'm watching it boot or go to more trouble), but I do back up the ESXi configuration, so I can always use that in case of emergency. That's actually what I used to restore the setup when I switched drives/controllers.

While it's not an "enterprise" solution, it works for my use case and gives a little bit of a warm fuzzy as far as redundancy goes. Hardware RAID with a BBU would be the best method, but that's overkill for me. The funny thing was that when I first installed ESXi on it, ESXi showed it as an SSD. When I loaded the previous configuration, it reverted to non-SSD (I'm guessing that's how the Perc H200 presented it and I didn't notice). Not a big deal; I just told ESXi that it was an SSD.
Something like this would work well for my own 'non-enterprise' use-case too. It's just that I've got these H200 cards and S3500 SSDs lying around... they need to be put to good use!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Something like this would work well for my own 'non-enterprise' use-case too. It's just that I've got these H200 cards and S3500 SSDs lying around... they need to be put to good use!
Yeah, I know the feeling. Got like 6 of the S3500 160GB drives, but am favoring the S3710 200GB drives now for my SLOG devices (have like 3 of them). Not too sure what I am gonna do with them yet either, maybe just use them in my laptop or desktops. My stockpile of hardware is building up due to my hardware addiction and the fact that I keep seeing stuff on eBay and thinking "Yeah, you need that"... ;)
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Yesterday I tried swapping the two cards between the slots, without success.

I could try flashing the IBM to IR mode and flashing the Dell to IT mode to see if that makes a difference... but I'm too lazy to do that and I don't think it will work anyway.

Emailed Supermicro technical support and got a reply stating that they've never tested that configuration in their lab and asking what BIOS version I'm using... Duh, of course I'm using the latest BIOS!
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
Yeah, really not too sure it will make a difference, honestly; some hardware just will not play nicely with other stuff. The offer still stands to send you a variety of loaner cards to mess around with if desired. :)

Oh, I still owe you info on the Dell H200 Firmware I was using in IR Mode too... Let me get that going.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Yeah, really not too sure it will make a difference, honestly; some hardware just will not play nicely with other stuff. The offer still stands to send you a variety of loaner cards to mess around with if desired. :)

Oh, I still owe you info on the Dell H200 Firmware I was using in IR Mode too... Let me get that going.
Why, that's mighty nice of you, sir. I'm tempted to take you up on it, just to see if the problem is card-specific or if it's simply a glitch in this particular motherboard. What controllers do you have in mind? I'm looking for a basic RAID1 rig to install ESXi 6 and the FreeNAS VM in an AiO.
 

Mirfster

Doesn't know what he's talking about
Joined
Oct 2, 2015
Messages
3,215
K, so on the H200 it shows this for the info:
  • I know it works for ESXi 6.0 U2 since I had it in there to mirror my boot drives previously
  • SAS address was redacted
[Screenshot: H200 firmware/BIOS version details]


Off the top of my head, the cards I can send are:
  • Perc 6/IR (yeah, it's old and 3Gb/s, but it will work for drives smaller than 2TB...)
  • Perc H200 (running Dell firmware)
  • LSI 9260-8i
  • LSI 9211-8i (currently running IT firmware P20 - not sure of the exact version, but feel free to flash it if desired)
  • Perc H700 (currently in my Hyper-V server, but slated to be removed today/tomorrow if I get off my lazy butt and finish the migration...)
  • I have breakout cables for them too, so you can connect them directly to the drives if needed
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
K, so on the H200 it shows this for the info:
  • I know it works for ESXi 6.0 U2 since I had it in there to mirror my boot drives previously
  • SAS address was redacted
[Screenshot: H200 firmware/BIOS version details]

Off the top of my head, the cards I can send are:
  • Perc 6/IR (yeah, it's old and 3Gb/s, but it will work for drives smaller than 2TB...)
  • Perc H200 (running Dell firmware)
  • LSI 9260-8i
  • LSI 9211-8i (currently running IT firmware P20 - not sure of the exact version, but feel free to flash it if desired)
  • Perc H700 (currently in my Hyper-V server, but slated to be removed today/tomorrow if I get off my lazy butt and finish the migration...)
  • I have breakout cables for them too, so you can connect them directly to the drives if needed
Hate to speak too soon and jinx myself, but I may not need a loaner card 'cause I think I've found a solution!

The X8SIE-LN4F has 4 built-in Intel 82574L NICs. On a lark, I disabled NICs 3 & 4 using the appropriate mobo jumpers and - Voila! - everything works now. Go figure.

I'd been using all 4 in a LAG group, but it's no big step for me to reconfigure that to use 2 ports instead of 4. I have a 2-port LAG group for my X10SL7-based AiO, which works fine.

I'll post more details after I've had a chance to experiment...
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Turns out that there is no solution, because there is no problem... other than a classic ID10T error on my part. :oops:

Freshly-installed ESXi networking doesn't work with my Dell PowerConnect 2816 LAG group: the default vSwitch policy doesn't match a static LAG. I had to plug into a standard (non-LAG) port so I could reconfigure the networking with vSphere, and then plug all 4 NICs into the LAG ports. An ESXi wizard could probably do this from the console command line... but I ain't no wizard!
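For the record, I gather the console-commando version goes something like this - untested by me, so caveat emptor. A fresh install only assigns vmnic0 to vSwitch0, and a static LAG wants IP-hash load balancing:

Code:
  # Add the remaining uplinks to the default vSwitch0
  esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
  esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
  esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch0
  # Make all four active and use IP-hash to match the static LAG on the switch
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3 --load-balancing=iphash
  # (Port groups can override the vSwitch policy, so check those too)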

In my defense, it's been quite a while since I installed ESXi... :D

Once I figured that out, I re-enabled all 4 NICs on the motherboard and have been merrily re-configuring the AiO with its new SSD-based RAID1 ESXi boot drive & datastore for the FreeNAS VM.

It turns out that two LSI cards do work in a Supermicro X8SIE system!
 