Confused about that LSI card? Join the crowd ...

Status
Not open for further replies.

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
This is terrible! I think the only thing fitting that description is "organization"; I doubt they've put much thought into it...

Can't understand why anyone would even attempt that design to start with!

It is really pretty simple. People have learned that a filesystem is something that they layer on top of a disk partition that resides on a storage subsystem, which can be a complicated abstraction involving RAID controllers or virtual disks of various sorts. They've learned to put FFS, or Ext3, or BtrFS, or NTFS, or whatever, on a virtual disk "and it works like this."

So they see "ZFS" and they think "ah, another filesystem." Except ZFS isn't a filesystem, or at least not JUST a filesystem. It combines a logical volume manager with storage controller features with a powerful filesystem, blurring the lines in the process.
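To make that line-blurring concrete, here is a minimal sketch; the pool name and device names are made up for illustration:

```shell
# One command builds the redundancy layer AND a mounted filesystem;
# there is no separate RAID setup, partitioning, or newfs step.
zpool create tank raidz /dev/da0 /dev/da1 /dev/da2

# Additional filesystems are carved out of the pool on the fly,
# with per-filesystem properties instead of fixed partitions:
zfs create tank/data
zfs set compression=lz4 tank/data
```

That is the part people miss when they treat ZFS like "just another filesystem": the volume-manager and controller-like duties are inside those same commands.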

And then one day you see them decide that they're going to sell a ZFS storage server, because they've heard all these awesome things about it. But perhaps they don't actually spend a few years learning the ins and outs of ZFS first, and treat it just the way they've treated everything else over the years. They get a "distro" and install it on standard hardware ... What could go wrong?
 

philosophish

Cadet
Joined
Nov 13, 2013
Messages
3
Hello,

First off: I don't run FreeNAS (but I've played with ZFS). Still, I currently have 6 of those LSI/SAS2008-based HBAs in use, and I'd like to share a bit of information. I have already paid dearly for the confusion mentioned in the title: I also got a 9280-8e (which is a different beast) for a couple of hundred, when what I really wanted was the 9200-8e, i.e. the external equivalent of the 9211-8i. The LSI MegaRAIDs are just not my cup of tea, mainly because of the virtual-drive nonsense, the lack of JBOD support, and the other reasons mentioned in the opening post.

Back on topic, I just want to add a few things I have come across:

1) I have one IBM M1115 crossflashed to LSI 9211-8i IT mode, and I want to warn people that there is a truly strange incompatibility with some HP workstations (and possibly other machines). It works as expected in my old XW6600, no different from any M1015 / 9211-8i I own. In both a Z800 and a Z600, however, regardless of the PCIe slot I put it in, it invariably produces 203 / 204 / 207 "incompatible memory" DIMM errors on POST. The memory is absolutely fine, though: when I remove the M1115 everything works, memtest shows no problems, and I have tried three sets of different, original HP memory in differing population patterns. The card stubbornly refuses to work in those workstations, no matter what I pull out or put in.

I believe it is similar to this (http://h30499.www3.hp.com/t5/Workst...rted-in-PCIe-slot/td-p/5695041#.UoO2XPHLTFs); maybe some vendor id/data is missing after the cross-flash (I did not try it with the original firmware). In the XW6600, however, it keeps running perfectly; I got it for only 35€ on eBay, and it is certainly worth that and more. Just take note when you see such a tempting offer: you might have a machine that does not like it.

Note: an original LSI 9211-8i and a cross-flashed M1015/9220-8i both work without problems in the computers that refuse the M1115, so there must be some difference in the build of the card (if it were just a bad sample, it would not work in the XW6600 either).

2) I can consistently reproduce (on my machines, tested with 2 LSIs and 3 M1015s) that with the attached drives (SSDs from Samsung, OCZ, Intel and Corsair), the cross-flashed M1015s produce ever so slightly better benchmark results than the original LSIs. I would have expected the reverse, and of course it's absolutely unnoticeable in production use, but it's around 3-7% in CrystalMark and AS SSD on Windows and about 2-4% in fio on Linux. Nothing big, and it might well be due to variance between batches; just saying.

3) I get exceptionally bad 4K read and write results (in benchmarks) with a Samsung 830 512GB on firmware P17 IT. I know, I know, benchmarks, who cares. But what troubles me is that I already had a huge issue in the transition from P14 to P15, when the TRIM/discard support/implementation changed abruptly and suddenly killed SSD data, i.e. ext4 filesystem corruption under Linux (glad I had those backups).
Now there is a performance drop of almost 10% going from P16 to P17 at 4K.
Which revisions are you running, and have you noticed similar things? Has any of you ever downgraded to a previous revision? What are the possible risks of doing that, if it is possible at all? To the data? To the controller?

4) Another word of caution: if you are looking for cheaper OEM versions of the LSIs, also beware that the H220 from HP apparently comes in two (or three) different versions. One is a rebadged 9207-8i (sometimes 9205-8i), which is a SAS2308 card and probably the desirable one. However, there is also a SAS2208-based one, and I think it cannot be flashed to a pure HBA. I have no idea whether they can be told apart by part number, but I've sent back one that was SAS2208-based.

That's all,

philosophish
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, the card's firmware should always match FreeNAS' driver version. FreeNAS uses the P14 driver. I wouldn't assume your performance issue is anything besides not running the right firmware with the right driver. There's a ticket I submitted about 4 months ago to get it updated, but it's effectively being ignored, IMO. It was labeled "low" priority and left with no target FreeNAS version.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Another thing to keep in mind with LSI firmware and FreeBSD/FreeNAS/etc.: FreeBSD isn't exactly an OS that LSI does a ton of testing on; if I recall correctly, they occasionally even ship firmware versions that don't have a matching FreeBSD driver at all. The driver version maintained in the FreeBSD source tree is what the majority of folks use, and it sees the most use and testing. That said, LSI seems to have a good relationship with FreeBSD and provides above-average support for it, but unless you feel the need to be part of the small group that downloads the newest LSI firmware/driver combo and runs it in production, just don't do it. If you must do it, in FreeNAS it isn't a big deal: you just put the driver in the correct location, add a tunable to load it, and off you go.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you must do it, in FreeNAS it isn't a big deal: you just put the driver in the correct location, add a tunable to load it, and off you go.

I tried to do that and I couldn't get the biznatch to play. I'm running P16 on my card because that's what I flashed it to when I switched to IT mode. Since it's worked, I've never changed it. I was just hoping to get a P16 driver for FreeNAS so I'd feel a little bit safer about what I'm doing.

I'm not normally a proponent of jumping to the latest and greatest as soon as it hits, but we're talking about a driver (and firmware) that is 3 revisions old. To me, that's a little much to swallow without arguing neglect in updating. I have reviewed the fixes and such; none seem to be "ZOMG, data loss might result" for FreeNAS. But if you are running hardware RAID (too many still do it anyway), a few of those bugs are critical.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
I tried to do that and I couldn't get the biznatch to play. I'm running P16 on my card because that's what I flashed it to when I switched to IT mode. Since it's worked, I've never changed it. I was just hoping to get a P16 driver for FreeNAS so I'd feel a little bit safer about what I'm doing.

Looks like LSI skipped doing a FreeBSD driver for P16, but they do have one for P17. I'd update to P17, which should be much easier now that you have the card in IT mode, and then download the P17 FreeBSD driver from here. Copy the driver to /boot/modules on your FreeNAS box; you will need to do a "mount -uw /" first so the boot disk is writable. Then add a tunable named mpslsi_load with a value of YES. Reboot and you should be golden.

One note: I've noticed that when putting drivers into /boot/modules, if you upgrade your FreeNAS by booting off an ISO image everything is fine, but if you update via the web GUI, it erases the modules directory and you have to put the drivers back. Not a big deal, but a pain if we get too many back-to-back FreeNAS updates.
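The steps above might look like this as a shell session. The module filename and the loader.conf.local approach are assumptions (check what the LSI download actually contains; on FreeNAS the tunable is normally added through System -> Tunables in the GUI rather than by editing files):

```shell
# Remount the root filesystem read-write (-u updates an already-mounted fs):
mount -uw /

# Copy the downloaded p17 FreeBSD driver into the module search path
# (mpslsi.ko is an assumed filename):
cp mpslsi.ko /boot/modules/

# Equivalent of adding the mpslsi_load=YES tunable via the GUI:
echo 'mpslsi_load="YES"' >> /boot/loader.conf.local

reboot
```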
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
I'm running firmware P17 on whatever driver is in 9.1.1 (P14?). No problems so far. I've got a crossflashed M1015 in each of my two FreeNAS boxes.

Hopefully there are no gotchas in running such a new firmware with an older driver.
 

hxall

Dabbler
Joined
Dec 1, 2013
Messages
15
Any recommendations for adapter property settings in the LSI 9221-8i BIOS? I have 10 3TB WD Reds in RAIDZ2, but some drives fail to mount at startup; I'm wondering if it's the spin-up time, etc. It's pretty random, and the drives are good; they were all tested. I have the firmware flashed to IT version P15.
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
In some cases, users have come to the forum inquiring about a many-port controller where some combination would do. It is worth noting: with ZFS, you do want a decent SATA connection for your drives. Motherboard ports are generally fine. The $1 PCIe add-on card in the clearance bin at the local computer store is probably dodgy. A quality HBA from LSI will be reliable. If you need 10 SATA-III ports, you do NOT NEED A 16-PORT LSI RAID CONTROLLER FOR $800. You don't even need a 16-port LSI HBA for $500. You can get away with an M1015 and two motherboard SATA ports. Really! If you need 20 ports, consider two M1015s and four motherboard SATA ports. Even a SATA-II port can go much faster than a contemporary spinny hard drive can. Yes! It works just fine. If you need 24 ports, you definitely DO NOT NEED A 24-PORT LSI MegaRAID 9280-24i4e RAID CONTROLLER FOR $1500. Get the point?


Sounds like this advice flies in the face of the admonition to keep all of your disks on one controller, doesn't it? Or am I misunderstanding something, which I am sure you will be kind enough to explain.
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
The second you do that, you can expect to give up any use of SMART testing and monitoring. The RAID card will abstract the disks into an array, and you won't be able to use the SMART testing and monitoring that is one of the most important features for identifying failing disks before they fail, as well as identifying a failed disk when it happens. Just earlier today, someone lost over 800k files from a RAIDZ2 with one bad disk and likely 2 more failing disks, and had no clue anything was wrong until it was too late. I commented in that post that if I were betting money, I'd say his SMART monitoring was disabled. He hasn't responded back as far as I know.
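To illustrate the SMART point: on a plain HBA, smartctl talks to the disk directly, while behind a RAID controller it needs a vendor-specific passthrough, if that works at all. The device names below are examples:

```shell
# Disk on an HBA such as an M1015 in IT mode: SMART comes straight through.
smartctl -a /dev/da0

# Disk hidden behind a 3ware RAID controller: you have to ask the controller
# to pass the query to physical port 0, and some setups don't support it.
smartctl -a -d 3ware,0 /dev/twe0
```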

Most controllers will still use their cache when in JBOD, from what I've seen. I know the 3ware, LSI, and Areca controllers I've played with definitely use the read and write caches in JBOD mode if they are enabled. Enabling/disabling them has a very distinct performance gain/penalty (depending on which cache you enable). In fact, I miss my Areca controller, because the read cache doubled my pool's performance! The write cache killed performance, though.

Was your Areca card able to have the read and write caches enabled/disabled independently of each other? And is this the norm on most cards, to the best of your knowledge? Just trying to learn about cards.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
The read-ahead cache and the write cache are separate. They're separate on most controllers because a write cache without a BBU is extremely dangerous (some controllers won't even let you enable the write cache without a BBU installed).
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Sounds like this advice flies in the face of the admonition to keep all of your disks on one controller, doesn't it? Or am I misunderstanding something, which I am sure you will be kind enough to explain.

Whose admonition is that? I don't suggest adding six different types of cards to your machine with one drive on each, but that is basically so you don't need to learn six sets of quirks and maybe inadvertently screw up something like SMART monitoring or power management, which often vary slightly from controller to controller.

ZFS itself is fine and dandy if it sees a raw drive on an HBA (not RAID) SATA port, whether on one controller or fifty, so the admonition probably didn't originate here, or you misunderstood the anti-RAID-controller guidance.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
All jgreco is saying is that there's no need to add unnecessarily complex hardware to your system when ZFS is supposed to do the complex lifting. There's nothing stopping you from buying a $1000 RAID controller and running it in JBOD (assuming it can do JBOD properly for ZFS). But why spend money on stuff you don't need? Buy a 24-port RAID controller for $1000, or 3 M1015s for $100 each? The answer seems obvious to me.

Some people make the mistake of buying a mini-ITX board with a single PCIe slot; then you have little choice, and if you want 24 drives you MUST buy a 24-port controller or SAS expanders.

But there's not much to be gained from a 24-port card, aside from having a single controller that could fail versus three controllers. And let's be honest with ourselves: how often has a controller failed, compared to a hard drive? Then again, I'm sure you aren't building a server with 3-disk mirrored vdevs either. So think objectively about where your failure rates are and how you are going to mitigate them. :)
 

Richman

Patron
Joined
Dec 12, 2013
Messages
233
I am almost certain it was in the FreeNAS Hardware Wiki, or at minimum somewhere on the forum, that it was stated something like, "It's best to keep your disks on one controller to eliminate issues." I figured it didn't mean that you shouldn't use more than one controller, but that you should keep arrays grouped (as in, one array on the on-board SATA and another array on the card), because it wasn't a good idea to mix them by spreading one array over numerous controllers. I don't even think it was a link to some other site, write-up, editorial, tutorial, memo, composition, extrapolation, supposition, cartoon ... or anything. I will have to look for it again and let you know when I find it.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I don't think I've ever read that anywhere in my 2 years here.
 
Joined
Dec 25, 2013
Messages
9
So I am confused... As of release 9.2.0, what is the recommended IT firmware release for the LSI 9211-8i / M1015 card?

Is P14 still recommended, or is it safe to go to P17? And if P17, are special steps to install drivers still required?
 

Z300M

Guru
Joined
Sep 9, 2011
Messages
882
So I am confused... As of release 9.2.0, what is the recommended IT firmware release for the LSI 9211-8i / M1015 card?

Is P14 still recommended, or is it safe to go to P17? And if P17, are special steps to install drivers still required?
I had already updated mine to P16, and it seems to be working fine with 9.2.0. Haven't tried P17 yet.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
People seem to be having success with versions newer than P14, but I haven't gone and looked to see what version of the driver was used to build 9.2, and the release notes are silent on the topic.
 