Supermicro w/ dual LSI controllers

Status
Not open for further replies.

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
Ya totally, I've already prepped the prod environment and will mimic it in test. Now you said I would get decent IOPS for the db. Only decent? :eek: I thought I had some good performance specs there for some solid IOPS. I worry now. But I guess the numbers will show us where I stand.
It all depends on the workload of the db, but it should be plenty zippy :). You would get more IOPS by putting the 8 STECs in RAID 10 and dividing the storage between the two DBs (I couldn't tell if VOL1-4 were individual zpools).
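For a rough feel of why striped mirrors buy you IOPS, here's a back-of-the-envelope sketch (the per-drive number is a placeholder, not a STEC spec, and real ZFS behavior is messier than this rule of thumb):

```python
# Rule-of-thumb vdev IOPS scaling, a sketch only: random IOPS scale with
# the number of vdevs, and mirrors can serve reads from both halves.
PER_DRIVE_IOPS = 20_000  # placeholder; measure your actual SSDs

def striped_mirrors(drives):
    """Eight drives as four 2-way mirror vdevs (the 'RAID 10' layout)."""
    vdevs = drives // 2
    reads = drives * PER_DRIVE_IOPS   # both halves of each mirror serve reads
    writes = vdevs * PER_DRIVE_IOPS   # every write lands on both halves
    return reads, writes

def single_raidz(drives):
    """The same drives as one raidz vdev: roughly one drive's random IOPS."""
    return PER_DRIVE_IOPS, PER_DRIVE_IOPS

for name, layout in (("4x mirrors", striped_mirrors), ("1x raidz", single_raidz)):
    r, w = layout(8)
    print(f"{name}: ~{r:,} read IOPS, ~{w:,} write IOPS")
```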
 

bestboy

Contributor
Joined
Jun 8, 2014
Messages
198
Being just a home NAS user, I wonder what a database setup looks like. I have some general best-practice questions:
  • Given an all-SSD pool,
      • are there still any write performance gains from using an SSD-backed external intent log? Or is the ZIL on the data drives then even faster, because writes can be round-robined across multiple drives?
      • is there a notable impact on read performance due to contention between ZIL writes to the data drives, scheduled transaction group writes, and random reads? Are SSD latencies low enough to allow concurrent read and write workloads without an external intent log?
  • Given that database systems
      • come with their own in-memory cache, where should the cache ideally live? In the file system (ARC) or the application (DBS)?
      • come with their own compression capabilities, at which stage should compression be applied? In the file system or the application?
      • mainly see random reads, does it make sense to disable predictive reads/prefetching? It's nice for file servers, but how big a factor is it for database loads?
  • Can databases make use of ZVOLs, and if they can, is it any better than using their own block storage on top of datasets? (cf. ZVOLs for iSCSI)
 
Last edited:

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
I'd love to see how the new backplane works out. Please let us know....

I currently have numerous boxes that are BackBlaze knock-offs: 45 drives each, with 3 LSI 9201s on the MB and each drive having its own power and SATA connector (not using the 5-port expander backplanes). The problem is changing drives. Although I LOVE the density, having a SuperMicro chassis similar to yours (I guess the most I'll get is the 36-bay model) would be awesome, assuming I don't have to shut down the machine to swap drives.

jgreco --- "So definitely the 846A (not the TQ, you don't want 24 discrete connectors)".. why not? I admittedly don't understand.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Anyone who builds large-scale servers would agree that large numbers of cheap connectors are (sooner or later) problematic.
 

RichR

Explorer
Joined
Oct 20, 2011
Messages
77
So are you saying you would get the same performance using a single cable from an HBA to a backplane that holds 36 drives? (seriously, not being a smart-ass, more like a newb being a dumb-ass). I guess the bottom line question is, does FN run well with one of the SuperMicro enclosures that holds 24/36 drives?... I want to get away from this at some point...
 

Attachments

  • IMG_0927.jpg (28.7 KB)

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
jgreco --- "So definitely the 846A (not the TQ, you don't want 24 discrete connectors)".. why not? I admittedly don't understand.

4 SAS lanes are 4 SAS lanes, whether they're discrete or bundled.

The BPN-SAS-846TQ has 24 discrete connectors, so you need 24 single lane SAS/SATA cables. That's 24 cables that you can't afford to bump or dislodge.

The BPN-SAS-846A has 6 SFF8087 ("IPASS") connectors, each of which bundles 4 lanes, so you only need 6 SFF8087 cables, which are engineered for a much better positive-locking experience.

That's why I said that the A is better than the TQ. It's the same thing, except without 24 SATA connectors.

So are you saying you would get the same performance using a single cable from an HBA to a backplane that holds 36 drives? (seriously, not being a smart-ass, more like a newb being a dumb-ass). I guess the bottom line question is, does FN run well with one of the SuperMicro enclosures that holds 24/36 drives?... I want to get away from this at some point...

It sounds like you're confused about SFF8087, though, since the only way you could have a single cable would be if you had a SAS expander backplane like the BPN-SAS2-846EL2 (featured in chassis like the 846BE26). In that scenario, you could have one SFF8087, which has four SAS channels, each of which is 6Gbps, so the overall link is 24Gbps between the HBA and the expander. In theory you can go wider, but I had some trouble with that and didn't try real hard. But in that case, yes, you get a single SFF8087 attaching a bunch of drives.

It comes down to a matter of mathematics, two ways:

Today's spinny rust drives are capable of about 150MB/sec, though the latest 6TB drives I have peak around 200MB/sec. So if you take 24 drives * 150MB/sec * 8 bits/byte, you get 28.8Gbps. Of course there's overhead and stuff, but the takeaway is that 28.8Gbps is only slightly bigger than 24Gbps. You are not likely to run into serious contention issues, because your host probably isn't able to drive all those disks at 100% utilization all the time anyway.

Today's networks are 1GbE or 10GbE, so if your backend is 24Gbps, that's larger than the 2 x 10GbE which is probably the maximum network config most people would be looking at.
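Spelling that math out in a few lines (all numbers from the paragraphs above, rounded planning figures rather than measurements):

```python
# The SAS2 bandwidth math from this post, spelled out.
LANE_GBPS = 6             # SAS2: 6Gbps per lane
LANES_PER_SFF8087 = 4     # one SFF8087 bundles four lanes

uplink_gbps = LANE_GBPS * LANES_PER_SFF8087    # 24Gbps HBA <-> expander

drives, drive_mb_s = 24, 150                   # ~150MB/sec spinning rust
disks_gbps = drives * drive_mb_s * 8 / 1000    # 28.8Gbps if every disk streams flat out

network_gbps = 2 * 10                          # 2 x 10GbE front end

print(f"expander uplink:      {uplink_gbps} Gbps")
print(f"24 drives, flat out:  {disks_gbps} Gbps")   # only slightly over the uplink
print(f"likely network limit: {network_gbps} Gbps") # well under the uplink
```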

Anyway, the 36-drive 847BE26 actually has two expanders in it, one for the front drives and one for the rear. So from the point of view we're talking about here, the 846 and 847 are equal, except that the 847 uses two SFF8087s instead of one. So the increase from 24 to 36 drives doesn't increase contention, unless you do something dumb like daisy-chain the backplanes (which you can totally do, of course, if you don't mind the contention).

The newer SC846BE2C sports the new SAS3 expander and 12Gbps links instead of the 6Gbps of the SAS2 boards discussed herein.

There's some motivation to avoid excessive cabling in servers as your picture notes. The downside to SAS expanders is that they occasionally don't play nice with SATA drives, but the compatibility matrix is kind of a "who knows."

In the end, to repeat your question:

I guess the bottom line question is, does FN run well with one of the SuperMicro enclosures that holds 24/36 drives?...

FreeNAS has no clue about the enclosure. All it knows is the HBA and a little bit about the SAS expander. I've got an 846BE26 here with 12x Seagate desktop 4TB drives in RAIDZ3 that works just fine. Note that the BE26 is arguably useless in this config, since SATA doesn't use the secondary expander, so if you're building a dedicated FreeNAS box and not using dual-ported SAS, get the single-expander version.
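For scale, the rough usable space of that 12-drive layout works out like this (ignoring metadata, slop space, and TB-vs-TiB marketing math):

```python
# Usable space of a 12-wide RAIDZ3: three drives' worth goes to parity.
drives, drive_tb, parity = 12, 4, 3
print(f"~{(drives - parity) * drive_tb} TB usable before filesystem overhead")  # ~36 TB
```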
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
FreeNAS has no clue about the enclosure. All it knows is the HBA and a little bit about the SAS expander. I've got an 846BE26 here with 12x Seagate desktop 4TB drives in RAIDZ3 that works just fine. Note that the BE26 is arguably useless in this config, since SATA doesn't use the secondary expander, so if you're building a dedicated FreeNAS box and not using dual-ported SAS, get the single-expander version.
Wouldn't there still be some benefit to having a secondary expander just for redundancy? If one backplane expander were to fail, I still think the other expander could take over, even with SATA drives (right?). I actually wondered about the possibility of mitigating some SAS expander -> SATA issues by using two expanders with two HBAs.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you want that kind of failover you are going to have to go full SAS. SATA doesn't support failover.

Secondary expanders are pointless for SATA drives (or anywhere that has any SATA components).
 

diehard

Contributor
Joined
Mar 21, 2013
Messages
162
If you want that kind of failover you are going to have to go full SAS. SATA doesn't support failover.

Secondary expanders are pointless for SATA drives (or anywhere that has any SATA components).
Thanks for the info.

I recall something about active-active multiplexers for SATA drives that allow 2 SAS initiators to connect to one SATA drive. They are available (Supermicro AOC-LSISS9252); without looking into it any further, I'm going to assume the E26 backplane doesn't have these built in. At $50 a pop you might as well go NL-SAS.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Thanks for the info.

I recall something about active-active multiplexers for SATA drives that allow 2 SAS initiators to connect to one SATA drive. They are available (Supermicro AOC-LSISS9252); without looking into it any further, I'm going to assume the E26 backplane doesn't have these built in. At $50 a pop you might as well go NL-SAS.

That sounds like a solution looking for a problem. I had no idea such a thing was actually sold.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
If you want that kind of failover you are going to have to go full SAS. SATA doesn't support failover.

Secondary expanders are pointless for SATA drives (or anywhere that has any SATA components).

Way too absolutist, Cyberjock. You can certainly mix SATA into a SAS topology, and it doesn't magically render secondary expanders pointless. It's just not useful for attaching to the SATA or non-dual-ported SAS drives. A clever person needing to attach a lot of SATA disk shelves might even use a secondary expander purely as a way to expand SAS capacity. With a chassis like the 846BE26 and a single -8i controller, you could attach the HBA to both the primary and the secondary ports, attach 24 drives locally, and then STILL have four SFF8087s available for downstream chaining of four more 846BE16 chassis.

In reality you need the secondary expander the instant you have a SAS device you want to multipath in your chassis.

That sounds like a solution looking for a problem. I had no idea such a thing was actually sold.

They're called interposers, and they're mostly evil: they don't fit in standard trays, and you'll have better luck just buying the NL-SAS as noted. They made more sense back when SATA drives tended to be made in larger capacities than SAS.
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
I'm running HD Tune at the moment. Is anybody else using it? I was just curious what you guys set your settings to. I'm new to all this and trying to determine the performance of our NAS units. At the moment, I've just left things at the defaults.

Any thoughts?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
My thoughts are that benchmarking is a fool's errand unless you have a doctorate in ZFS, because it's a hybrid file system and is sufficiently complex that you can't benchmark "just the file system". So anyone with the knowledge to properly benchmark ZFS wouldn't come to some quaint forum to discuss an operating system that is locked down in features compared to FreeBSD.

I use HD Tune on my desktop, but I don't bother trying to benchmark ZFS. I go with a good starting point for a given setup, and if that setup can't perform, then I look at where the limitation is and how to make it stop being the limitation. People do NOT need 20k IOPS from their pool and 1GB/sec sustained to get amazing performance from a pool.
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
My thoughts are that benchmarking is a fool's errand unless you have a doctorate in ZFS, because it's a hybrid file system and is sufficiently complex that you can't benchmark "just the file system". So anyone with the knowledge to properly benchmark ZFS wouldn't come to some quaint forum to discuss an operating system that is locked down in features compared to FreeBSD.

I use HD Tune on my desktop, but I don't bother trying to benchmark ZFS. I go with a good starting point for a given setup, and if that setup can't perform, then I look at where the limitation is and how to make it stop being the limitation. People do NOT need 20k IOPS from their pool and 1GB/sec sustained to get amazing performance from a pool.
Fair enough. Might I ask what you set your HD Tune settings to on your desktop?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I stopped benchmarking my machines here at home years ago when I went full-SSD. So I don't have any settings to offer you. :(

I don't buy the story that disk throughput matters. It's the number of I/Os that really matters (and finally some websites have started to actually admit this). My system, on bootup, reads about 1GB of data to boot the OS and load all of the crap that I have auto-load. That's it. 1GB. The OS itself is only about 330MB of that. So how much faster is the SSD that does 550MB/sec than the one that does 250MB/sec? I'm still running an Intel G3 and I have zero intention of replacing it, because I already know the improvements are minuscule. The only reason I've replaced SSDs since 2009, when I paid top dollar for tiny ones, was because I needed something bigger.

The unfortunate truth is that, just like with ZFS, things aren't so clear-cut even on NTFS. Windows Vista+ caches large amounts of data in RAM (which I always have an overabundance of), so even then the benchmarking numbers, when put side by side, don't really mean that an SSD-A that's 40% faster than SSD-B will make your bootup times, gaming times, etc. faster. My system is inferior to a friend's box in every way, and he bought one of the fastest SSDs he could buy. Using my SSD in his box versus his SSD in his box only affected bootup and login times by about 1 second. That was it. So my "awfully slow" SSD didn't appreciably matter at all in everyday performance.

So I don't do benchmarks anymore (except for CPUs and GPUs). I just use the system, and when I've decided it's "too slow", then I look at what is going on. There's just no reason to get extreme and upset because your desktop loaded in 20 seconds instead of 18 and start optimizing. :P
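The arithmetic behind that boot-time claim is simple enough to sketch (pure sequential-transfer time, which is the best case for the faster drive):

```python
# Why a "faster" SSD barely moves boot time: the boot only reads ~1GB.
boot_read_mb = 1000
for seq_mb_s in (250, 550):
    print(f"{seq_mb_s} MB/sec drive: {boot_read_mb / seq_mb_s:.1f}s of pure transfer")
# ~4.0s vs ~1.8s in the best case; real boots are latency/IOPS bound,
# so the observed gap shrinks to about a second.
```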
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
I stopped benchmarking my machines here at home years ago when I went full-SSD. So I don't have any settings to offer you. :(

I don't buy the story that disk throughput matters. It's the number of I/Os that really matters (and finally some websites have started to actually admit this). My system, on bootup, reads about 1GB of data to boot the OS and load all of the crap that I have auto-load. That's it. 1GB. The OS itself is only about 330MB of that. So how much faster is the SSD that does 550MB/sec than the one that does 250MB/sec? I'm still running an Intel G3 and I have zero intention of replacing it, because I already know the improvements are minuscule. The only reason I've replaced SSDs since 2009, when I paid top dollar for tiny ones, was because I needed something bigger.

The unfortunate truth is that, just like with ZFS, things aren't so clear-cut even on NTFS. Windows Vista+ caches large amounts of data in RAM (which I always have an overabundance of), so even then the benchmarking numbers, when put side by side, don't really mean that an SSD-A that's 40% faster than SSD-B will make your bootup times, gaming times, etc. faster. My system is inferior to a friend's box in every way, and he bought one of the fastest SSDs he could buy. Using my SSD in his box versus his SSD in his box only affected bootup and login times by about 1 second. That was it. So my "awfully slow" SSD didn't appreciably matter at all in everyday performance.

So I don't do benchmarks anymore (except for CPUs and GPUs). I just use the system, and when I've decided it's "too slow", then I look at what is going on. There's just no reason to get extreme and upset because your desktop loaded in 20 seconds instead of 18 and start optimizing. :P
I totally see where you're coming from and very much agree. With the cost of RAM nowadays, it's easier to have the abundance of RAM that's truly needed for caching.

I was just hoping I could take some of this knowledge and apply it at work. We've got a Dell EqualLogic SAN that I thought would scream, which is what led me to HD Tune. With 48 600GB 15K SAS drives in RAID 10 connected to a Nexus 5548, I thought I would get better than 363 IOPS read and 1984 IOPS write for 4KB random (single). I'm pretty sure I have something misconfigured. Sorry for bringing other types of hardware into this forum, but I get some really good direction and advice here; elsewhere they just say "call tech support", and of course we don't have that, because they cut costs every which way to Sunday. This jack-of-all-trades crap sucks some days and is enjoyable on others. Looks like I have some reading and Googling to do.
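For a rough sanity check, assuming ~175 random IOPS per 15K spindle (a common planning figure, not an EqualLogic spec), the ballpark looks like this:

```python
# Ballpark for 48x 15K SAS spindles in RAID 10 (rule-of-thumb numbers only).
spindles, iops_per_15k = 48, 175

expected_read = spindles * iops_per_15k         # every spindle can serve reads
expected_write = spindles // 2 * iops_per_15k   # RAID 10: each write hits two drives

print(f"expected: ~{expected_read:,} read / ~{expected_write:,} write IOPS")
print("measured:    363 read /   1,984 write IOPS")  # an order of magnitude low on reads
```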

Cheers,
 

Mlovelace

Guru
Joined
Aug 19, 2014
Messages
1,111
I've always tended to clone production data (after burn-in) and throw it on the new rig. There's no better way to tell if it's up to par than to run actual data on it.
 

sheld0r

Dabbler
Joined
Nov 12, 2013
Messages
33
I'd love to see how the new backplane works out. Please let us know....

I currently have numerous boxes that are BackBlaze knock-offs: 45 drives each, with 3 LSI 9201s on the MB and each drive having its own power and SATA connector (not using the 5-port expander backplanes). The problem is changing drives. Although I LOVE the density, having a SuperMicro chassis similar to yours (I guess the most I'll get is the 36-bay model) would be awesome, assuming I don't have to shut down the machine to swap drives.

jgreco --- "So definitely the 846A (not the TQ, you don't want 24 discrete connectors)".. why not? I admittedly don't understand.
I forgot to get back to you on this one. I successfully swapped out the backplane and was able to run 3 LSI 9361 controllers, assigning 8 SSDs to each controller to create 3 separate volumes. I had to add 2 10mm fans pushing 37cfm each (loud mofos) to keep those cards cool. I just attached the fans to the case where the vent is; it actually looks like it was made for it. I also have a 10GbE NIC shoved in there too, so the heat just gets trapped in there, and those fans have surely helped. Thanks to modDIY for providing the custom cabling to utilize the old-school floppy power connector and extend an onboard power header that was way across the server.
 

Bryan Seitz

Cadet
Joined
Aug 17, 2015
Messages
2
4 SAS lanes are 4 SAS lanes, whether they're discrete or bundled.
It sounds like you're confused about SFF8087, though, since the only way you could have a single cable would be if you had a SAS expander backplane like the BPN-SAS2-846EL2 (featured in chassis like the 846BE26). In that scenario, you could have one SFF8087, which has four SAS channels, each of which is 6Gbps, so the overall link is 24Gbps between the HBA and the expander. In theory you can go wider, but I had some trouble with that and didn't try real hard. But in that case, yes, you get a single SFF8087 attaching a bunch of drives.

Sorry to revive an old thread, but I have the BPN-SAS2-846EL1 connected to an LSI 9200-8E via SFF8087 on the backplane -> SFF8088 external -> cable -> HBA.
I am only seeing 1x 6Gbps performance, which would indicate I am only getting one lane's worth of throughput. Any suggestions on how to improve this, or is that the limitation of the chassis/backplane?
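A quick check of how many lanes that throughput corresponds to (assuming the ~6Gbps figure is the real wire rate being observed):

```python
# Observed throughput vs. SAS2 lane capacity: looks like exactly one lane.
LANE_GBPS = 6
observed_gbps = 6
print(f"lanes in use: ~{observed_gbps / LANE_GBPS:.0f} of 4 in the SFF8087/8088 bundle")
```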
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Sounds like cabling. Can you try it with a 4/8i card and SFF8087 cabling?
 