Supermicro w/ dual LSI controllers

Status
Not open for further replies.

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
I see a lot of people here on the forums that use the Supermicro cases. We just purchased a couple 846E26-R1200B servers and absolutely love them!

I'm coming from an HP server background, and what we have done with our SQL servers is install two to three HP P810 controllers and create multiple RAID 10 volumes on HP DAS. By spreading the volumes across multiple controllers, we get better performance in SQL. I was trying to apply the same theory to the Supermicro servers using two LSI MegaRAID SAS 9361-8i controllers, but it appears that I can only use one controller to control all my drives. We are running both STEC SSDs and Samsung data center SSDs, along with some Seagate Savvio 10K drives.

I'm hoping to get enough performance out of this server to beat the HP DL580 w/ HP P810 RAID controllers.

Has anybody tried running multiple controllers in the Supermicro servers?

Cheers,
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I'm pretty sure you can use multiple controllers. Have you checked that both controllers work individually (i.e. take one out, test, swap cards, retest)?
 

EEE3

Dabbler
Joined
Nov 27, 2013
Messages
12
Assuming you have one of their dual-expander backplanes, my experience has been that you can only use two controllers if you are using SAS drives. With SATA you can only use one: if you connect both HBA ports, all drives will appear on the primary SFF-8087 port, and you can't "split" SATA devices across the two ports for more bandwidth. I emailed their support folks and they confirmed this.
 

EEE3

Dabbler
Joined
Nov 27, 2013
Messages
12
I don't mean to hijack the OP's thread, but I personally have a BPN-SAS2-836-EL2, which is the 16-bay version of what he has. I've tried connecting two different LSI controllers to PRI_J0 and SEC_J0 as that appendix you linked describes, and all the drives always showed up on the primary. I'd be thrilled if I'm wrong and there's some way to do it, though.

This is the exact response that Supermicro gave me:

"User cannot split the hard drives on backplane. All miniSAS ports on backplane are connection to 16 hard drives internally. It doesn't matter customer using single cable from raid controller to Primary port, Pri_J0/Pri_J1 or Secondary port, Sec_J0/Sec_J1, raid controller will able to detect 16 hard drives.

However, Secondary port only work on SAS hard drive which support multi I/O path."
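
For anyone who wants to see this on their own box, here's a rough sketch (assuming a FreeBSD/FreeNAS host with two LSI HBAs, mps0/mps1; device names are illustrative, not from my system):

camcontrol devlist -v
# -v groups devices under the bus/HBA they hang off. With SATA drives and both
# cables connected, expect every disk to be listed under the scbus of the HBA
# on the primary expander, and nothing but the expander itself under the other.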
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
So I found out how to fix this issue. It looks like that backplane definitely doesn't support multiple controllers; it's considered an expander backplane, meaning those extra ports are there to cascade additional JBODs. To run multiple controllers, you need either the BPN-SAS-846A backplane or the BPN-SAS-846TQ backplane. The difference is that the TQ has a discrete connector for each drive, while the A uses SFF-8087 cables, so each controller drives only the bays cabled to it (i.e. an 8i would drive 8 drives).

I've ordered the backplane and it should be here in a few days.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The E26 backplane is basically an LSI SAS expander that breaks out into HDD and SFF-8087 ports:

ses0: <LSI SAS2X36 0e0b> Fixed Enclosure Services SCSI-5 device

There are two of these expanders on the backplane. If you attach dual-ported SAS devices, one port is attached to the primary expander and one to the secondary. If you attach single-ported SAS or SATA devices, then the disk is only attached to the primary.

The backplane absolutely supports multiple controllers, but this gets into SAS design, what your OS/HBA supports, and whether or not you've got dual-ported devices. I don't have a ton of answers because I was merely interested in attaching a single M1015 to a small pile of drives. I can tell you that you can plug your SAS HBA into any one of the three primary SAS ports and it'll work just fine. The intended deployment scenario for these things is to have more downstream JBODs, but AFAIK you can build any valid SAS topology with them; the opportunity for things to not work as you'd like just increases.

So definitely the 846A (not the TQ, you don't want 24 discrete connectors) is a reasonable way to go if you want to maximize throughput and don't mind having several HBAs.
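
If you want to see the topology from the HBA's point of view, a SAS2 HBA like the M1015 can be queried with LSI's sas2ircu utility (the 9361 MegaRAID cards use storcli instead); the controller number here is just an example:

sas2ircu LIST
sas2ircu 0 DISPLAY
# DISPLAY prints the controller, each enclosure it sees (the SAS2X36 expanders
# show up here), and every attached device with its enclosure/slot.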
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
Good to know, jgreco. I totally agree with you that the TQ model is not the way to go. For the purposes of this server I'll need multiple HBAs; per the DBAs, they want one controller per RAID volume: one for the DB, one for TempDB, and one for the logs. We currently have this setup in our HP DL580 with HP P810 controllers, and I really hope these LSI 9361 controllers can beat them. We aren't running 12Gbps drives, but per LSI the latency is lowest on the 93xx series.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Per the ZFS gurus you can disregard "one controller per volume"; that makes sense for hardware RAID but for ZFS it is mostly pointless.
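
As a minimal sketch (hypothetical device names), a single pool of mirrors can span disks hanging off several HBAs, and ZFS stripes across all of the vdevs regardless of which controller each disk sits on, which is why the per-volume controller assignment buys you nothing here:

# one pool, three mirrored vdevs, disks deliberately split across HBAs
zpool create db01 mirror da0 da8 mirror da1 da9 mirror da2 da10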
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
@sheld0r Did you decide what HDDs you are going with for the SAS MPIO config?
I already have the drives, so I'm purchasing the backplane and changing that out. I'm just doing a little research now to see if this backplane will fit; nobody can really confirm it at this time, not even Supermicro. It looks like it will fit, so I think I'm going to risk it.

I'm using the following:
OS Vol = 2 - SanDisk X110 128GB SSDs in RAID 1
VOL1 = 4 - STEC 400GB SSDs in RAID 10
VOL2 = 4 - STEC 400GB SSDs in RAID 10
VOL3 = 6 - Samsung 845DC PRO 400GB SSDs in RAID 10
VOL4 = 8 - Seagate Savvio 10K 1.2TB HDDs in RAID 10

Each volume has a dedicated LSI 9361 controller, and the OS volume is booting off the mobo.
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I already have the drives, so I'm purchasing the backplane and changing that out. I'm just doing a little research now to see if this backplane will fit; nobody can really confirm it at this time, not even Supermicro. It looks like it will fit, so I think I'm going to risk it.

I'm using the following:
OS Vol = 2 - SanDisk X110 128GB SSDs in RAID 1
VOL1 = 4 - STEC 400GB SSDs in RAID 10
VOL2 = 4 - STEC 400GB SSDs in RAID 10
VOL3 = 6 - Samsung 845DC PRO 400GB SSDs in RAID 10
VOL4 = 8 - Seagate Savvio 10K 1.2TB HDDs in RAID 10

Each volume has a dedicated LSI 9361 controller, and the OS volume is booting off the mobo.
That should have some decent IOPS for a DB. Why the four controllers? You don't need but two for the MPIO o_O
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
That should have some decent IOPS for a DB. Why the four controllers? You don't need but two for the MPIO o_O
So the DBAs demand the individual controllers for performance reasons. As for MPIO, don't you think there will be some performance degradation? I can't say for sure, as I've never used it; I just did a quick Google on it.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
DBAs are used to dealing with hardware RAID controllers. ZFS is a different beast, but you cannot easily teach people who just "know" something (without really understanding it) that the new thing is not the same as what they are used to.
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
DBAs are used to dealing with hardware RAID controllers. ZFS is a different beast, but you cannot easily teach people who just "know" something (without really understanding it) that the new thing is not the same as what they are used to.
You are so dead on! And changing these old-school DBAs is almost impossible. They are so stubborn!
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
So the DBAs demand the individual controllers for performance reasons. As for MPIO, don't you think there will be some performance degradation? I can't say for sure, as I've never used it; I just did a quick Google on it.
Dual-channel SAS drives can connect to two controllers for multipathing. It shouldn't decrease throughput at all (though I remember there being some bad firmware out there); it's there for fault tolerance. I have used it in the past as a belt-and-suspenders approach for remote servers.
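
On the FreeBSD/FreeNAS side, a dual-ported disk cabled to both expanders shows up as two da devices, and something like this is how you would tie the two paths together (a minimal sketch; device names and the label are just examples):

gmultipath label -v disk1 /dev/da0 /dev/da24
gmultipath status
# 'label' writes a metadata label so both paths are treated as one disk;
# 'status' lists each multipath device and whether both paths are alive.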
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
Dual-channel SAS drives can connect to two controllers for multipathing. It shouldn't decrease throughput at all (though I remember there being some bad firmware out there); it's there for fault tolerance. I have used it in the past as a belt-and-suspenders approach for remote servers.
I see. I think this is what you're referring to? http://support.microsoft.com/kb/2744261

But good to know; I'm going to try it out in test. I have the exact same box in testing too, which is for our DR, so I can run some benchmarks there and see what I find. Any particular tools you guys like to use for benchmarks? I was thinking of using SQLIO or IOMeter. We ran SiSoftware Sandra in the past.
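
For the SQLIO runs I had something like this in mind (a sketch only; the flags are from the SQLIO readme as I remember them and the test file path is just an example, so double-check before trusting the numbers):

sqlio -kR -t8 -o8 -b8 -frandom -s120 -LS T:\testfile.dat
sqlio -kW -t4 -o8 -b64 -fsequential -s120 -LS T:\testfile.dat
# -kR/-kW read or write, -t threads, -o outstanding I/Os per thread, -b block
# size in KB, -s duration in seconds, -LS collects latency stats. Pre-create a
# test file much larger than the controller cache or the results will lie.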
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I usually go with IOMeter at first, but nothing's better than throwing some real data (a copy of a backup) on it and taking it for a spin.
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
I usually go with IOMeter at first, but nothing's better than throwing some real data (a copy of a backup) on it and taking it for a spin.
Ya, totally. I've already prepped the prod environment and will mimic it in test. Now, you said I would get decent IOPS for a DB. Only decent? :o I thought I had some solid specs in there for good IOPS. I worry now. But I guess the numbers will show where I stand.
 