LSI MegaRAID 9265-8i raid controller: All disks set to Raid-0


ran

Dabbler
Joined
Jan 30, 2014
Messages
18
I've been doing more research on motherboards and secondary controller cards. I've been thinking about getting the SuperMicro X10SL7-F, with its 8 SAS2 ports and 6 SATA ports. However, after reading some details on Calomel.org, I'm wondering if I should get a different motherboard and purchase an LSI MegaRAID card instead. The section titled "All SATA controllers are NOT created equal" is very compelling. I'll cut and paste below:

1x 2TB a single drive - 1.8 terabytes - Western Digital Black 2TB (WD2002FAEX)

Asus Sabertooth 990FX sata6 onboard ( w= 39MB/s , rw= 25MB/s , r= 91MB/s )
SuperMicro X9SRE sata3 onboard ( w= 31MB/s , rw= 22MB/s , r= 89MB/s )
LSI MegaRAID 9265-8i sata6 "JBOD" ( w=130MB/s , rw= 66MB/s , r=150MB/s )

1x 256GB a single drive - 232 gigabytes - Samsung 840 PRO 256GB (MZ-7PD256BW)
Asus Sabertooth 990FX sata6 onboard ( w=242MB/s , rw=158MB/s , r=533MB/s )
LSI MegaRAID 9265-8i sata6 "JBOD" ( w=438MB/s , rw=233MB/s , r=514MB/s )
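For what it's worth, numbers in that ballpark are the kind of thing a plain sequential dd run will show. Calomel's exact benchmark method isn't quoted above, so the following is only a sketch of that sort of test (the /mnt/tank/testfile path and the 32GB size are placeholders I picked, not theirs):

# Sequential write, then sequential read, of a scratch file on the pool.
# Use a count large enough that the file is bigger than RAM, otherwise
# ZFS's ARC cache will inflate the read number; if compression is enabled
# on the dataset, writing zeroes will also inflate the write number.
dd if=/dev/zero of=/mnt/tank/testfile bs=1M count=32768
dd if=/mnt/tank/testfile of=/dev/null bs=1M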

It appears the way they have configured their LSI MegaRAID card is to set up each drive as its own RAID-0 array and then use ZFS on top to create raidz or mirrors[1]. The reading I did here on the FreeNAS forums made it seem that it was better to flash something like the M1015 card over to IT-mode firmware instead. Are the people at Calomel on to something, or have they munged something up? Or do I just not understand this at all? If throughput really is better on the LSI MegaRAID than on the motherboard controller, and it's a simple task to set all the drives up as RAID-0, then this would seem the better option, since the card in question says it can handle up to 256 disks[2]. The 9211-8i card appears to retail for around $220, which is similar to the M1015. I'm assuming the RAID-0 trick would work on the 9211-8i too, but I have no idea. Maybe it works on the M1015 as well? Maybe the folks at Calomel would get even better performance if they reflashed their firmware in a similar fashion to the M1015 process?

I'm lost.


[1] The LSI MegaRAID native JBOD mode does not work very well and we do not recommend using it. If you use LSI JBOD mode then all of the caching algorithms on the raid card are disabled and, for some reason, the drive devices are not exported to FreeBSD. The working solution is to set up all of the individual drives as separate RAID0 (raid zero) arrays and bind them all together using ZFS. We are currently using raids in this setup in live production and they work without issue.
For this example we are going to configure 12 RAID-0 LDs, each consisting of a single disk, and then use ZFS to make the raid-z2 (RAID6) volume. The LSI setup will be as close to JBOD mode as we can get, with the advantage that this mode still allows the caching and optimization algorithms on the raid card to be used.
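To make that concrete, the general shape of what they describe would look something like this on FreeBSD (a sketch only; the enclosure:slot numbers, adapter index, and pool/device names below are placeholders I made up, not values from their article):

# One single-drive RAID-0 logical drive per physical disk.
# Enclosure 252, slots 0 and 1 are examples; MegaCli -PDList -aALL shows the real IDs.
MegaCli -CfgLdAdd -r0 [252:0] -a0
MegaCli -CfgLdAdd -r0 [252:1] -a0
# ...repeat for the remaining slots...
MegaCli -LDInfo -Lall -aAll   # each LD then appears in FreeBSD as /dev/mfidN

# Redundancy is handled entirely by ZFS across the twelve logical drives.
zpool create tank raidz2 mfid0 mfid1 mfid2 mfid3 mfid4 mfid5 mfid6 mfid7 mfid8 mfid9 mfid10 mfid11

On a stock FreeNAS install you would normally build the pool through the GUI rather than with zpool create directly, but the idea is the same: one logical drive per disk, with ZFS providing the redundancy.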

[2] How the hell can it handle 256 disks, as per the documentation, when it only has 2 mini-SAS connectors? Is it just a matter of getting a connector that splits each one out to 128? Granted, I really only want to drive 15 disks in total, but I'm a bit lost on that one. Below is the direct marketing text:

The LSI SAS 9211-8i host bus adapter provides the greatest available throughput to internal server storage arrays through eight internal 6Gb/s ports, driving up to 256 SAS and SATA physical devices. This HBA offers dynamic SAS functionality including dual-port drive redundancy and SATA compatibility. Utilizing two internal x4 SFF8087 Mini-SAS connectors, the low-profile SAS 9211-8i is an excellent fit for 1U and 2U servers.


 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
I've seen a LOT of bad advice on other forums and blog posts that goes uncorrected. From what I've read from the forum experts here, stay as close to bare drives as possible. A setup like that will probably work fine (right up until it destroys itself with no warning), and maybe it will benchmark faster, but once you've saturated your network connection (probably 1G), the extra speed doesn't matter.
 

ran

Dabbler
Joined
Jan 30, 2014
Messages
18
I'm not sure network connection would come into play, right? Maybe I'm missing something though.
 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
I'll put it this way: my 6x3TB setup, which is probably skinny on RAM, benchmarks at 400MB/s write and 600MB/s read. All fine and good until I'm trying to access that data from another PC (which, it being a NAS, is pretty much ALL I'm doing with it). On gigabit ethernet the maximum possible is about 120MB/s.
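Rough arithmetic behind that 120MB/s figure (the ~6% framing overhead here is a ballpark assumption on my part, not a measured value):

# Gigabit line rate in MB/s, minus rough Ethernet/IP/TCP header overhead.
raw=$(( 1000 / 8 ))                                      # 125 MB/s raw line rate
usable=$(awk "BEGIN { printf \"%.0f\", $raw * 0.94 }")   # ~118 MB/s after framing
echo "gigabit: ${raw} MB/s raw, roughly ${usable} MB/s usable"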
 

ran

Dabbler
Joined
Jan 30, 2014
Messages
18
I appreciate your comments; I just wasn't clear on why you brought up the network connection in relation to my question. If the disk activity came only over the network, I can see where you're coming from, but for my use case there will be other, non-network disk activity as well.

The question would still be if what they're doing is viable. Calomel is not some random site; they've been around for a few years doling out good advice on various *nix-related things. But this is the first time I've heard anyone suggest setting the disks up as individual RAID-0 arrays to get the benefits of the raid card.
 

joelmusicman

Patron
Joined
Feb 20, 2014
Messages
249
The question would still be if what they're doing is viable.

Ultimately, that's the question we face with every choice, whether we ask it or not. Just like improperly virtualizing FreeNAS, this solution might work great right up until the zpool won't mount, with zero warning. If you're willing to risk your data, it's your choice, but I doubt you'd find much support from the forum experts if something goes wrong.
 