24 bay SuperMicro server limits?

Status
Not open for further replies.

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Which do you guys think is best? I'm looking at the 9211 so far and prefer a card with rear-facing ports, as the cards I already have are rear-facing.
I'd like to get whatever would be best, then flash them to IT mode, and I'd like 6TB and 8TB drive compatibility.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Oh, thought I'd add: keep in mind that I have 9 SFF-8087 connectors to connect.
 

JayG30

Contributor
Joined
Jun 26, 2013
Messages
158
FYI, backplanes are costly. It sounds to me like what you actually ended up getting was an SC847A-R1400LPB instead of the SC847E16-R1400LP.
You didn't mention what backplane is in the back of the case for the 12 disks. If it is the BPN-826A (it should have 3 SFF-8087 connectors), then it is for sure the SC847A.

The BPN-SAS-846A backplane takes 6 iPass (SFF-8087) connections to drive the 24 disks in the front, so it provides 6 x 4 = 24 discrete channels. No expander is used in that backplane. The cabling will be more cumbersome, but you have more aggregate bandwidth than you would with an expander backplane like the E1/E2 backplanes (although it's unlikely you will need it unless you are using many 15k SAS or SSD drives). I THINK this backplane (and the BPN-826A) supports full SAS/SATA 6Gb/s, so you don't necessarily need to look at the SAS2 backplanes. I don't think they even make a SAS2 A-style backplane; if you think about it, it wouldn't make much sense. The SAS2 designation comes into play with expander backplanes only, I THINK. Check with Supermicro, however.

The issue I have with the A backplanes is that they end up requiring a lot of SFF-8087 ports, and you have to provide those connectors somehow, which typically means a more expensive motherboard with lots of PCIe slots to hold lots of HBAs. The BPN-846A has 6 SFF-8087 connectors for the front, and if you have the BPN-826A in the back, that is 3 more SFF-8087 ports, for a total of 9. That would require 5 x M1015 HBAs (minus anything you can get from your motherboard), or more expensive HBAs. And if you aren't using really fast disks, a SAS2 6Gb/s expander is enough to handle 24 disks with far simpler cabling. If I needed more bandwidth, I'd personally get a SAS3 backplane before dealing with all those cables and extra HBAs.
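
For anyone who wants to sanity-check that port math, here's a rough back-of-the-envelope sketch in Python. The 2-ports-per-HBA and 4 x 6Gb/s-per-SFF-8087 figures are just the assumptions discussed above, so treat it as an estimate rather than a spec sheet.

# Rough math for feeding the A-style backplanes directly from HBAs.
# Assumptions: 4 SAS2 lanes (6 Gb/s each) per SFF-8087, 2 SFF-8087 ports per M1015-class HBA.
FRONT_PORTS = 6        # BPN-SAS-846A (front 24 bays)
REAR_PORTS = 3         # BPN-826A (rear 12 bays)
LANES_PER_PORT = 4
GBPS_PER_LANE = 6

total_ports = FRONT_PORTS + REAR_PORTS                         # 9 SFF-8087 connectors to feed
hbas_needed = -(-total_ports // 2)                             # ceil(9 / 2) = 5 two-port HBAs
aggregate_gbps = total_ports * LANES_PER_PORT * GBPS_PER_LANE  # 9 * 4 * 6 = 216 Gb/s raw

print(f"SFF-8087 ports to feed:  {total_ports}")
print(f"Two-port HBAs required:  {hbas_needed}")
print(f"Raw aggregate bandwidth: {aggregate_gbps} Gb/s")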
 

JayG30

Contributor
Joined
Jun 26, 2013
Messages
158
Also, a solution to what I outlined above is to get yourself a SAS expander. It should be cheaper than buying all those HBAs. It might take a PCIe slot, though, unless you mount it somewhere and power it via a Molex connector.

This has been a big hit over at ServeTheHome:
Intel RES2SV240: http://www.ebay.com/itm/181566524075?rmvSB=true

People have been making offers and getting them for ~$90, some as low as $60.

Two of these and an M1015, H300, H200, or other cheap 2-port HBA should do it. You can probably get all three for under $200, and you only need 3 PCIe slots. Just an idea.
 

JayG30

Contributor
Joined
Jun 26, 2013
Messages
158
ALSO, if you're feeling frisky and want to experiment with a relatively new concept, you could consider getting a traditional high-quality RAID card instead of an HBA. There have been people doing this more and more lately. They set up the RAID card so each device is passed through as a single-disk RAID0. The advantage is that they can leverage the battery-backed cache on these cards. Sort of like a mini SLOG/ZIL, but without the SSD wear issues, since it uses DDR3.

A pretty amazing deal was mentioned at ServeTheHome.
Intel RS25SB008 RAID: http://www.ebay.com/itm/Intel-RS25S...A-MD2-New-Retail-Box-/200913678632?rmvSB=true
Again, people are snagging these at ~$150 with best offers. These things normally cost a lot more ($600-700). 1GB of DDR3 cache. The thread also mentions cheap deals on CacheVault keys.

This plus those expanders could be pretty interesting...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
ALSO, if you're feeling frisky and want to experiment with a relatively new concept, you could consider getting a traditional high-quality RAID card instead of an HBA. There have been people doing this more and more lately. They set up the RAID card so each device is passed through as a single-disk RAID0. The advantage is that they can leverage the battery-backed cache on these cards. Sort of like a mini SLOG/ZIL, but without the SSD wear issues, since it uses DDR3.

Yeah, RAID0 with each device is about the stupidest thing I've ever heard of. Do you even read the docs, stickies, etc.? We actually call out people that do RAID0 of single disks and tell them that it is not JBOD and not to even consider it.

I could say a lot more, but this advice is so incredibly stupid that I'm not even going to waste my time writing more.

Good luck to anyone that thinks this is a good idea.
 

JayG30

Contributor
Joined
Jun 26, 2013
Messages
158
Yeah, RAID0 with each device is about the stupidest thing I've ever heard of. Do you even read the docs, stickies, etc.? We actually call out people that do RAID0 of single disks and tell them that it is not JBOD and not to even consider it.

I could say a lot more, but this advice is so incredibly stupid that I'm not even going to waste my time writing more.

Good luck to anyone that thinks this is a good idea.

Well, plenty of people have been doing it, even in production. Yes, I've read all the "stickies". I know what the advice is. But I've also read well-respected ZFS developers talking about this approach, and they aren't as quick to dismiss it as you are. In fact, there is a thread in this very forum where the well-respected member and your forum buddy jgreco discusses it, again without the condemnation that you are stating.

https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
An interesting but unorthodox alternative for SLOG is to use a RAID controller with battery backed write cache, along with conventional hard disks. Normally RAID controllers are frowned upon with ZFS, but here is an opportunity to take advantage of the capabilities: Since the cache absorbs sync writes and writes them as the disk allows, rotational latency becomes a nonissue, and you gain a SLOG device that can operate at the speed the drives are capable of writing at. In the case of a LSI 2208 with 1GB cache, and a pair of slowish ~50MB/sec 2.5" hard drives, it was interesting to note that a burst of ZIL writes could be absorbed by the cache at lightning speed, and then ZIL writes would slow down to the 50MB/sec that the drives were capable of sustaining. With the nearly unlimited endurance of battery-backed RAM and conventional hard drives, this is a very promising technique.

Clearly I'm not the only person that has heard of this type of setup and thinks it has been shown to be a "VERY PROMISING TECHNIQUE" (his words, not mine).

You've had this tone with so many people around here. Don't act like people you don't know aren't smart enough to understand the mechanics of ZFS or to read a sticky. I would appreciate it if you didn't talk to me like I'm an idiot, thanks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@jgreco also doesn't talk about using it for RAID0 data disks! He's talking about it for a single disk, the SLOG/ZIL. Nothing else. NO OTHER DISKS AT ALL.

You are taking what he was talking about and twisting it into something that is totally disgusting and grossly incompetent.

There's a *big* *f*cking* *difference* between using a RAID controller to take advantage of its write cache and doing a RAID0 of a bunch of disks, then putting ZFS on top. One is a very unorthodox way to do things, but seems to work. The other is idiotic beyond words.

You don't even understand what he was talking about in that thread, and I sure hope he shows up and tells you that what you *think* he is talking about and what he actually is talking about aren't even in the same ballpark.

ALSO, if you're feeling frisky and want to experiment with a relatively new concept, you could consider getting a traditional high-quality RAID card instead of an HBA. There have been people doing this more and more lately. They set up the RAID card so each device is passed through as a single-disk RAID0. The advantage is that they can leverage the battery-backed cache on these cards. Sort of like a mini SLOG/ZIL, but without the SSD wear issues, since it uses DDR3.

^^^ That is not talking about a single disk being leveraged for its cache. You are talking about the whole pool. Night and day difference.
 

JayG30

Contributor
Joined
Jun 26, 2013
Messages
158
No, I know what he means. I also know what I wrote.

If you take it as something else, then sorry; what I had time to write after a 14-hour day wasn't sufficient for you. I assume anyone who would consider doing it would do their own research to understand it. I've done it and have been following the approach for a year now. I've read every single piece of info posted on every forum and mailing list about the topic. I know exactly what it is used for and how to implement it, as I ran it in a test environment before it was ever even posted on this forum.

Once again, you don't need to be a complete ass to everyone on these forums all the time. It really drives many good people away from this place. I've watched you blame users posting threads about FreeNAS problems as being "incompetent", only to find that the problems were bugs the FreeNAS group had overlooked. Try not acting so high and mighty all the time.

Have a nice night.
 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Guys, GUYS! Seriously, calm down. It is an experimental topic we are talking about, it's not well tested, and I'm not willing to go that far with my array. Granted, being able to utilize a RAID cache would be handy, but it is not necessary for my environment.

Besides, the attitudes and flaming that you guys were handing out to Jay are really disrespectful and unnecessary. Be adults, be human; there is no need to start trashing someone over what seems to be a misunderstanding. Maybe Jay misunderstood, maybe you guys misunderstood, maybe you guys are talking about the exact same thing from different viewpoints.
The point is that the attitudes that came across in your posts are not necessary. No one gets a chance to learn something new while being verbally assaulted, nor would they want to stick around to listen to the insults.
Discuss the experimental topic; don't trash each other just because you don't agree.

Anyhow.....

JayG30, thank you for posting and offering up the experimental idea. I would be willing to do some testing later but not on this system. I'm looking for stability and reliability.

Jay, to answer your questions from earlier: yes, I have a BPN-SAS-846A front backplane and a BPN-826A rear backplane, which I guess means I have the 847A model. It came with 5 AOC-SASLP-MV8 controllers, which have a 2.2TB limit. That's a problem, as I want to run 4TB drives and want future compatibility with larger-capacity drives.

Many have said that the Dell H200, LSI 9207-8i, LSI 9211-8i and IBM M1015 are good card replacements as they can be flashed into IT mode so as to have direct raw access to the drives.

My question to them, and anyone else who finds this post, is: if I'm looking at the LSI cards, which would be best, the 9207 or the 9211? I read a post stating that the 9207 is newer than the 9211, which doesn't make sense to me, and I was curious about the differences, because when I looked them up I didn't see any. But just because I didn't see them doesn't mean there aren't any.

On top of that, knowing that I have 9 SAS cables that need to be plugged in, are there any other and/or better cards that support more than 2 SFF-8087 connectors?

I did see that you provided me with a link to the Intel RES2SV240. My question is: is there a performance difference between running one of these plus one other card versus running 5 cards? I would think that running 5 cards would give a little more performance.

My other question, then, is: does this card support 3+TB drives?

I believe it does, though I would like to know if anyone has confirmed this yet. I looked at this card a few months ago and saw that Backblaze uses the same card in their pod builds.

These may seem like stupid questions, but since I haven't dealt with these cards and haven't spent days on end reading up on them and playing with various cards, I don't know which card is better than the next. It does look like I'll be spending between $400 and $600 on replacement cards. That's something I didn't expect, but I understand it is necessary to connect all my available bays, and I'm ready to do so.
I also know some cards have been listed as good cards to use; the question, then, is whether they are good cards to use when you have 9 SFF-8087 plugs to give homes to.


Yes, I'm asking a lot of questions, and I apologize if I seem either ignorant or dumb, but I'm trying to do what is best for this particular setup and trying to learn a few new things.
For instance, since starting this thread I've learned that some cards support IT mode, something I didn't know prior and something very handy for ZFS users.


 

Visseroth

Guru
Joined
Nov 4, 2011
Messages
546
Oh, one other thing: SuperMicro did email me back and mentioned that the AOC-SAS2LP-MV8 supports 3+TB drives, and they recommended that card as a replacement for the SASLP-MV8.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
No, I know what he means. I also know what I wrote.

Well, Cyberjock is correct in that I'm referring to attaching a typical hard disk drive to a RAID controller with a nice large cache to use as a SLOG device, which is an unorthodox solution since most people use SSD. It has virtually unlimited endurance and amazing speed compared to an SSD.

Using a RAID controller for pool data disks is generally problematic for a number of reasons, including the lack of early drive failure detection, thrashing of the controller cache, and the fact that there've been some strange problems reported with mfi by people who have tried this. The early failure detection might possibly be mitigated to some extent if one could arrange for MegaRAID Storage Manager to be properly set up and running, but this probably requires a second host and some changes to the FreeNAS firmware. ZFS will still swamp the controller cache, making it painful, and driver issues may also hurt you, especially when it comes time to replace a drive, the very time you need less stress in your life.

Since ZFS, and even moreso FreeNAS, are built around HBA-attached disk, the winning strategy is to attach disks via an HBA and not try to paddle furiously upstream against the prevailing best practice. You will of course be able to report some progress being made if you do decide to try to furiously paddle upstream with a RAID controller, but it will be hard to judge if you're actually making progress moving upstream or if the current is too strong and you're going downstream, and it makes it very hard to watch for rapids or waterfalls taking you by surprise.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Jay, to answer your questions from earlier: yes, I have a BPN-SAS-846A front backplane and a BPN-826A rear backplane, which I guess means I have the 847A model. It came with 5 AOC-SASLP-MV8 controllers, which have a 2.2TB limit. That's a problem, as I want to run 4TB drives and want future compatibility with larger-capacity drives.

Yup. That's a downside to the non-expander backplanes. Of course, the expander backplanes (or backplanes attached to expanders) have their own issues. See below.

Many have said that the Dell H200, LSI 9207-8i, LSI 9211-8i and IBM M1015 are good card replacements as they can be flashed into IT mode so as to have direct raw access to the drives.

My question to them, and anyone else who finds this post, is: if I'm looking at the LSI cards, which would be best, the 9207 or the 9211? I read a post stating that the 9207 is newer than the 9211, which doesn't make sense to me, and I was curious about the differences, because when I looked them up I didn't see any. But just because I didn't see them doesn't mean there aren't any.

Easily answered, PCIe 2.0 vs PCIe 3.0. Not worth worrying about, in most cases.

On top of that, knowing that I have 9 SAS cables that need to be plugged in, are there any other and/or better cards that support more than 2 SFF-8087 connectors?

I did see that you provided me with a link to the Intel RES2SV240. My question is: is there a performance difference between running one of these plus one other card versus running 5 cards? I would think that running 5 cards would give a little more performance.


The RES2SV240 is purely an expander; it allows you to hook several backplanes up to it and then offers an SFF-8087 out to the HBA of your choice. The performance difference is that you then have 8, 12, 16, 20, etc., drives all hanging off a single SFF-8087. This is the typical tradeoff with an expander. It isn't a problem to use an expander with 12 HDDs and 6Gbps SATA, because contemporary hard drives transfer at no more than about 200MBytes/sec (~1.6Gbps, call it 2Gbps), and 2Gbps * 12 drives = 24Gbps, the link speed of a 4-lane SFF-8087. Get that?

So the question then becomes how much contention do you get as you cram more drives on it, and is that a problem. Answer in practice is probably still "not a problem" but that's the math behind it. Only large systems (or SSD pools) are going to be able to saturate a 24Gbps SFF8087.
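
If it helps, here is the same math as a small Python sketch; the ~200MB/s-per-drive figure is just the assumption above, and real workloads sit well below it.

# Oversubscription check for hanging N spinning disks off one SFF-8087 uplink via an expander.
# Assumptions: ~200 MB/s (~1.6 Gb/s) sustained per HDD, 4 x 6 Gb/s lanes per SFF-8087.
LINK_GBPS = 4 * 6        # one SFF-8087 uplink = 24 Gb/s
PER_DRIVE_GBPS = 1.6     # ~200 MB/s sequential, the best case for a spinner

for drives in (8, 12, 16, 20, 24):
    demand = drives * PER_DRIVE_GBPS
    print(f"{drives:2d} drives: worst-case demand {demand:4.1f} Gb/s "
          f"({demand / LINK_GBPS:.2f}x of the 24 Gb/s uplink)")
# Even 24 drives all streaming sequentially only hit ~1.6x oversubscription,
# and random I/O sits far below that sequential ceiling.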

Attaching all your -A backplanes directly to HBAs means you need lots of HBAs, and each HBA soaks up about 10-12 watts, but it gives you unimpeded access to each drive without any chance of contention. That is probably overkill if you're using hard disks. For greater HBA port density you could also try the 9201-16i, which I'd expect to work, but I haven't tried it personally, so this is not a guarantee of compatibility or anything.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Just a thought: wouldn't it be better to use a RAM drive (the kind that uses DIMM sticks and plugs into a PCIe slot), as you get the same benefits as with the RAID controller solution, plus you can choose the size of the RAM, etc.?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Just a thought: wouldn't it be better to use a RAM drive (the kind that uses DIMM sticks and plugs into a PCIe slot), as you get the same benefits as with the RAID controller solution, plus you can choose the size of the RAM, etc.?

For SLOG? Yes, if it didn't result in the loss of the data on the device when power was lost. The problem with SLOG is that it has to be persistent storage. A RAID controller with BBU qualifies, but it is designed to be an intermediate step for writing out to hard disk, so it won't actually just pretend to be a very small RAM+BBU SSD all on its own.

The problem with SSD is that, especially for applications like storing backups, you can run through a LOT of SSD write cycles very quickly, and the SSD can be relatively slow. All the good SLOG SSD options are incredibly expensive, and rather than paying lots of money for a device that has a very specific purpose in life, I'd prefer to pay somewhat less money for a general purpose device that I could justify stocking spares for, since it turns out that a good RAID controller with a nice BBU cache is also a great plus for VMware.

So give me something like this

http://www.thessdreview.com/our-rev...ve-101-ramdisk-review-500k-iops-ddr3-storage/

at 1/20th the cost, and, well, that could be cool. But until something like that is readily available and rock solid, ... no.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yeah for the SLOG. In general they use a battery to avoid data loss ;)

"All the good SLOG SSD options are incredibly expensive" exactly why I proposed the RAM drive. But it seems there is no good enterprise grade RAM drive at a decent price so my idea isn't that good :P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
But it seems there is no good enterprise grade RAM drive at a decent price so my idea isn't that good :p

Well, there is, that's effectively how I settled on abusing a RAID controller. It's a relatively cheap implementation of approximately the right thing, it was just intended for a different purpose.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
Yep ;)

What's the average size of the cache on a RAID controller?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
1GB or less usually. The new LSI 12Gbps stuff has some 2GB offerings but those work poorly with the current FreeBSD driver.

Anyways, the size of the RAM turns out to be somewhat less important than the speed of the backing storage. It effectively gives you a small amount of SLOG that can absorb burst traffic, but shortly thereafter the SLOG's write capacity reduces to the sequential write speed of the underlying device. Still, that is pretty good, since the average HDD writes faster than GbE can transfer, and in theory you could use a RAID0 of several devices to enhance the backing storage speed.
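
As a toy model of that behavior (the numbers are purely illustrative assumptions: 1GB of cache, ~50MB/s backing disks, ~110MB/s of incoming sync writes over GbE):

# Toy model of a battery-backed RAID cache absorbing a burst of sync writes.
CACHE_MB = 1024       # assumed controller cache size
DRAIN_MBPS = 50       # assumed sequential write speed of the backing HDD(s)
INGEST_MBPS = 110     # assumed incoming sync-write burst (roughly GbE line rate)

fill_rate = INGEST_MBPS - DRAIN_MBPS   # cache fills at the difference between ingest and drain
burst_seconds = CACHE_MB / fill_rate   # ~17 s absorbed at full ingest speed

print(f"Burst absorbed at full speed for ~{burst_seconds:.0f} s")
print(f"After that, sustained throughput falls back to ~{DRAIN_MBPS} MB/s")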

The big trick here is that write latency on the RAID controller is heavily optimized, and while it isn't as fast as a PCIe RAM device might be, it can give better-than-SSD performance (in terms of latency) and endurance in a sub-$1000 package.

Whether or not it is a good idea for any given pool is an entirely different topic of course.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
I see, thanks for the details ;)
 