jgreco · Resident Grinch · Joined May 29, 2011 · Messages: 18,680
A recent post got me thinking: there's a lot of confusion regarding LSI HBA/RAID cards. If you're confused, that's understandable... it can be a bit complicated!
ZFS users on FreeNAS should avoid hardware RAID cards. ZFS and FreeNAS work best when FreeNAS manages the drive directly, including having SMART data available from it. Any use of "virtual drives" or one-drive RAID0 JBOD modes likely involves the controller writing its own proprietary configuration to the drive, which means that if you ever need to switch controllers, you're in for extra trouble: you'll have to copy the data off the old drive, not just move it to a new SATA controller port. See this linked article: What's all the noise about HBAs, and why can't I use a RAID controller?
LSI makes a lot of hardware, both HBA ("Host Bus Adapter") and RAID cards. In general, an HBA is a better choice than a RAID card. The HBA cards I've seen are most often driven by the LSI Logic Fusion-MPT2 SAS driver in FreeBSD ("mps"), though older ones may use the LSI Logic Fusion-MPT SCSI driver ("mpt"). Most current RAID cards seem to be driven by the LSI Logic MegaRAID SAS driver ("mfi"), which comes with significant caveats: you can't see your drives via the FreeBSD CAM subsystem or access their diagnostics via SMART, and all maintenance and management must be done via the "mfiutil" tool or the BIOS. If you have an older controller of some sort, one that doesn't do SATA-III, you may need other drivers and will also be limited to drives no larger than 2TB. If you're considering buying one, don't, unless you're getting it for $10 or free or something like that... such controllers include the IBM ServeRAID BR10i and Intel RAID Controller SASUC8I (LSI SAS3082E-R/LSI SAS1068E chipset), driven in FreeBSD by the LSI Logic Fusion-MPT SCSI driver ("mpt"). It is possible to crossflash these cards into a generic LSI SAS1068E card, but the silicon is still capped at 2TB drives.
A favored card in the FreeNAS community is the IBM ServeRAID M1015, a budget RAID card that can be found inexpensively (~$75) on eBay. It ships with IBM's version of the LSI RAID firmware and is basically an LSI 9240-8i. You don't want to run it that way; instead, crossflash it to an LSI 9211-8i (based on the LSI SAS2008 chipset) in IT mode, making it a basic SAS/SATA HBA. Two SFF8087 connectors provide up to 8 SAS/SATA channels directly via breakout cables, or, with a compatible SAS expander, possibly many more. The card consumes around 10 watts and should have at least some airflow to maintain proper cooling. It is important to crossflash this card! If you do, it runs under the "mps" driver and becomes a plain HBA; if you don't, it runs under the "mfi" driver and remains a RAID card. Crossflashed to IT mode, the M1015 is, in my opinion, one of the best HBA controllers available for FreeNAS. These cards often come without brackets, or with a low-profile bracket to fit 2U rackmount servers; getting a proper bracket is recommended.
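For reference, the crossflash procedure circulating in the community guides looks roughly like the following. This is a hedged sketch, not a recipe: the tool names and firmware file names (megarec, sbrempty.bin, 2118it.bin) vary by guide and firmware release, flashing the wrong image can brick the card, and you must substitute the SAS address printed on your own card's label. The dry-run wrapper below only prints each step.

```shell
# Hypothetical crossflash outline for an M1015 -> 9211-8i IT mode.
# DESTRUCTIVE on real hardware, so this sketch only PRINTS the steps.
# Tool and firmware file names are assumptions taken from community guides.
run() { echo "WOULD RUN: $*"; }

run megarec -writesbr 0 sbrempty.bin       # blank out the IBM SBR block
run megarec -cleanflash 0                  # erase the stock MegaRAID firmware
# (reboot here on real hardware before continuing)
run sas2flash -o -f 2118it.bin             # flash LSI 9211-8i IT-mode firmware
run sas2flash -o -sasadd 500605bxxxxxxxxx  # restore SAS address from the card label
run sas2flash -listall                     # verify the result
```

Remove the `run` wrapper only once you've matched the file names to a guide for your exact firmware revision.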
Configuration of LSI 6Gb/s HBA products
This is the BIOS probe of an M1015 in IT mode with 8 drives attached via a single SFF8087 and an LSI SAS expander (Supermicro 24-drive SAS backplane). Directly attached drives look just about the same, only fewer of them.
You can hit control-C during that to get to the card configuration utility.
The configuration utility shows you the adapter type and firmware revision; note firmware version 15 and IT mode. You can look at your attached disks under SAS Topology, but there's really not a lot to see in IT mode.
Within FreeBSD, that'll probe as a controller serviced by the mps driver, and the drives will appear as normal "daX" devices and show up in "camcontrol devlist", so things are pretty straightforward there. A "dmesg | grep mps" will show the controller probe and firmware version.
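As a quick sanity check you can count the CAM disk devices the mps controller attached. Since I can't paste your system's output, the snippet below runs the grep against a canned sample in which every device name and address is made up for illustration; on a live box you'd pipe real `dmesg` output instead.

```shell
# Sample dmesg lines (hypothetical values) standing in for real output.
# On a live system:  dmesg | grep -E '^(mps|da)'  and  camcontrol devlist
sample='mps0: <LSI SAS2008> port 0xe000-0xe0ff irq 16 at device 0.0 on pci1
mps0: Firmware: 15.00.00.00
da0 at mps0 bus 0 scbus0 target 9 lun 0
da1 at mps0 bus 0 scbus0 target 10 lun 0'

# Count the daX devices the controller attached:
printf '%s\n' "$sample" | grep -c '^da'
```

If the count matches the number of drives you cabled up, the HBA and expander are doing their jobs.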
And so far, things like hot-adding a drive seem to work fine. It has been a very pleasant and flexible controller to work with; we use them with ESXi in -IR mode as well. I expect other LSI SAS 6Gb/s HBAs would closely resemble the above; if you run into something different, please let me know.
On to LSI 6Gb/s RAID:
The higher-end LSI RAID products are a bit different. Aimed at high-performance and high-availability applications, they tend to have more ports, cache, perhaps a battery for the cache, and options for managing and configuring the array. These devices are intended to abstract away the physical storage and present the system with virtual devices. That gives them an incredibly flexible and rich feature set, with the ability to attach lots of drives both directly and indirectly in many RAID levels, which is great for Windows but not so great for FreeNAS and ZFS, where the operating system is quite capable and competent at dealing with the raw disk devices itself. A hardware RAID card introduces numerous additional failure points: write caching complications, device drivers that aren't up to snuff, encapsulation inside a controller-provided partition, etc. It's usually easy to make a RAID card work, but that's "work" in the sense that you got it to run, not that it runs correctly or under adverse conditions -- things you definitely should demand from your HBA. Therefore you should avoid high-end RAID cards and use a true HBA. A hardware RAID card that merely claims to offer a JBOD or HBA mode isn't sufficient.
Users sometimes come to the forum asking about a many-port controller when some combination of cheaper parts would do. It's worth noting: with ZFS, you do want a decent SATA connection for your drives. Motherboard ports are generally fine. The $1 PCIe add-on card in the clearance bin at the local computer store is probably dodgy. A quality HBA from LSI will be reliable. If you need 10 SATA-III ports, you do NOT NEED A 16-PORT LSI RAID CONTROLLER FOR $800. You don't even need a 16-port LSI HBA for $500. You can get away with an M1015 and two motherboard SATA ports. Really! If you need 20 ports, consider two M1015's and four motherboard SATA ports. Even a SATA-II port can move data much faster than a contemporary spinning hard drive can. Yes! It works just fine. If you need 24 ports, you definitely DO NOT NEED A 24-PORT LSI MegaRAID 9280-24i4e RAID CONTROLLER FOR $1500. Get the point?
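To put numbers on the SATA-II claim: the link signals at 3 Gb/s, and after 8b/10b line encoding roughly 8 of every 10 bits are payload, leaving about 300 MB/s per port -- comfortably above the 150-200 MB/s or so a spinning disk sustains sequentially. A quick back-of-envelope check:

```shell
# SATA-II: 3 Gb/s line rate; 8b/10b encoding means 8/10 of the bits are payload.
line_rate_mbps=3000                            # megabits per second on the wire
payload_mbps=$(( line_rate_mbps * 8 / 10 ))    # payload megabits per second
payload_mbytes=$(( payload_mbps / 8 ))         # convert bits to bytes
echo "SATA-II usable bandwidth: ~${payload_mbytes} MB/s"
# A typical 7200rpm drive sustains well under that, so the port isn't the bottleneck.
```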
If you haven't figured it out, the point is to steer away from port-dense and extremely pricey RAID controllers. Don't let me stop you from spending $1500, but if you're going to spend $1500, buy a nice Supermicro chassis with a built-in SAS expander backplane like the SC846BE26-R920B along with an M1015 and cables; it's a better choice for FreeNAS.
But if you're stuck with an LSI RAID controller, here's some hopefully helpful information because you're probably sitting there wondering "what now."
NOTE: In the many years since this was written, there's been lots of evidence that you do NOT want to do this. Please see the linked article: What's all the noise about HBAs, and why can't I use a RAID controller?
We have a Supermicro LSI 2208 built into the mainboard on this server. We're planning to use it for ESXi, so yes, we needed a RAID controller, but right now it's running FreeNAS for some fun testing. The "Driver for new LSI card" thread came up, so I went looking to see what the deal was.
This is how you can identify an LSI BIOS that uses the MFI-based drivers: when you're booting, it'll say "SAS-MFI BIOS" and talk about "WebBIOS", as our Supermicro LSI MegaRAID 2208 does.
If you hit control-H to go into the WebBIOS (yay for consistency), it'll give you tools to configure your controller and the attached drives. They try to make it look Windows-y and while it's possible to control from the keyboard (use ALT plus the highlighted characters, and ENTER to select), it helps to have a mouse. The controller selection screen there will tell you the make and firmware of your controller.
LSI's intention is for individual drives to be aggregated into virtual drives: for example, a virtual drive 0 that is a RAID1 of two SSDs, a common config here for our ESXi hosts. But that's not really what you want for FreeNAS. If you just plug drives into the LSI, they'll show up as "Unconfigured Good" drives.
So at this point you know the controller and the drives are talking. The question becomes: what then? Apparently it is possible to configure the LSI controllers to pass unconfigured-good drives through to the underlying OS, but they don't do it by default, and I don't see an immediately obvious way to set that. It's made more difficult because the disks don't show up in "camcontrol devlist". A "dmesg | grep mfi" from the FreeNAS CLI shows the RAID1 virtual device happily appearing as mfid0, but no mfid1. You can do a "mfiutil show drives" to see the attached drives and "mfiutil show volumes" to see the available volumes. So what you probably want to do is run "mfiutil show drives", take note of the number (first column) of each "UNCONFIGURED GOOD" drive, then run "mfiutil create jbod NUMBER" for each of those numbers. Be warned that creating a JBOD almost certainly overwrites what is on the disk. I don't think the MegaRAID Firmware Interface system has any way to act as a dumb SATA/SAS controller, so you will not be able to migrate disks back and forth between an LSI MFI controller and other random types of controller. You're locked in to LSI if you have to use MFI, at least as far as I can tell. You also lose the ability to talk to the drive via SMART to look at its diagnostics directly.
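The per-drive JBOD dance can be scripted. Since I can't assume your controller layout, the sketch below parses a canned sample of "mfiutil show drives" output (format approximated from memory, values hypothetical) and only prints the "mfiutil create jbod" command for each UNCONFIGURED GOOD drive; pipe the real output in and drop the dry-run at your own risk, remembering that each command overwrites that disk.

```shell
# Sample 'mfiutil show drives' output (format approximated; values hypothetical):
sample=' 0 (  931G) ONLINE <WDC WD1002F serial> SATA E1:S0
 1 (  931G) UNCONFIGURED GOOD <WDC WD1002F serial> SATA E1:S1
 2 (  931G) UNCONFIGURED GOOD <WDC WD1002F serial> SATA E1:S2'

# For every UNCONFIGURED GOOD drive, print the command that would wrap it in a
# single-drive JBOD (destructive on real hardware, hence print-only here):
printf '%s\n' "$sample" | awk '/UNCONFIGURED GOOD/ { print "mfiutil create jbod " $1 }'
```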
Once you have used mfiutil (or the BIOS) to create your virtual devices on the RAID controller, they will be visible to FreeNAS and should be available in the volume manager. FreeNAS will not see MFI-attached devices that are "UNCONFIGURED GOOD", you must configure them and make them into virtual devices for the controller, at which point they'll be presented to FreeNAS. However, even once that happens, they will not show up as FreeBSD CAM devices, so if you're used to being able to do "camcontrol devlist" or other camcontrol ops, they won't be there for management in that manner.
The takeaway from all of this? Avoid MFI/MRSAS based (or other high end RAID) controllers if you plan to use ZFS. They're just about hopeless, and the inexpensive HBA models are much more appropriate and better suited to the task. The LSI HBA drivers have literally billions of aggregate problem-free runtime hours under FreeNAS and are trusted to the task.