Confused about that LSI card? Join the crowd ...


9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
Do you think using an AMD APU will still work when cross-flashing?

Thanks

I JUST saw this. Since the problem seems to happen on Intel chipsets, I would have to assume the odds of success would be a bit higher on an AMD platform. In other words, plug it in and roll the dice ;) I notice you got it working, so it's a moot point anyhow LOL
 

tmacka88

Patron
Joined
Jul 5, 2011
Messages
268
I JUST saw this. Since the problem seems to happen on Intel chipsets, I would have to assume the odds of success would be a bit higher on an AMD platform. In other words, plug it in and roll the dice ;) I notice you got it working, so it's a moot point anyhow LOL

Yeah, I did get it working, but I actually had to do it on my Intel MB. It's an older board with an Intel socket 775 Core 2 Duo, which may be why it worked. I read somewhere that the older Intel ones seem to work.
 

9C1 Newbee

Patron
Joined
Oct 9, 2012
Messages
485
I believe my computer is also a socket 775 Core 2 Duo from 2009. It also ended up being the solution.
 

Posttime

Cadet
Joined
Oct 2, 2013
Messages
3
I am not sure if this link has been posted here before, but I found it most helpful in dealing with the cross-flash issue where the EFI shell method is needed, like on my SM X10SLH-F board. There is also info regarding X9 BIOS installs as part of the post; just ignore it if you have no need of it.

http://lime-technology.com/forum/index.php?topic=26774.msg234515#msg234515
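For anyone who doesn't want to dig through that link right away: the EFI-shell part of the cross-flash generally boils down to a sequence like the one below. Treat it as a rough sketch; the firmware and boot-ROM file names (2118it.bin / mptsas2.rom are the usual ones when taking an M1015 to 9211-8i IT mode) and the SAS address come from your own card and firmware download, so substitute accordingly, and don't reboot between the erase and the flash.

sas2flash.efi -listall                          # confirm the controller is visible
sas2flash.efi -o -e 6                           # erase the existing flash (do NOT reboot at this point)
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom   # write the IT firmware and (optionally) the boot ROM
sas2flash.efi -o -sasadd 500605bxxxxxxxxx       # restore the SAS address printed on the card's sticker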

Cheers!
 

steveh7

Cadet
Joined
Oct 31, 2013
Messages
1
Hi - I posted this on another forum and was directed to this thread - sorry for just jumping in!

I can get an HP SC08e for a reasonably good price - about a third of the price of a new LSI 9200-8e, of which, by all accounts, it is a rebadge.

Does anyone know if it will be compatible or not with non-HP hardware, or is it "locked" somehow?

I intend to use it as JBOD with ZFS, like an external version of the venerable M1015.

http://h18004.www1.hp.com/products/s...08e/index.html
http://www.lsi.com/products/host-bus...s-9200-8e.aspx
 
Joined
Sep 23, 2013
Messages
35
Here's a snippet from a ZFS best practices document that sums it up well:



So based on that, 11 disks in a RAIDZ3 plus a spare is probably the best layout for a 12-disk setup, though I'd benchmark it against a pool made of two 6-disk RAIDZ2 vdevs. Also, if you have a failure, the pair of RAIDZ2s will rebuild much quicker, since each vdev has fewer disks to pull data from for the rebuild, and performance-wise it might win out as well. ZFS works very well with multiple vdevs in a pool, but it's best not to add a vdev to a pool that already holds data, because ZFS won't re-stripe the existing data across the new vdev.
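For illustration, assuming twelve disks named da0 through da11 (placeholder device names; substitute your own), the two layouts described above would be created roughly like this:

zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 spare da11    # 11-disk RAIDZ3 plus a hot spare
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 raidz2 da6 da7 da8 da9 da10 da11   # two 6-disk RAIDZ2 vdevs striped in one pool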

I got a second card today to connect the last 4 drives, so physically I'm going to have 8 drives on one card and 4 on another.
How would you do it in this 8+4 scenario? (I know it's JBOD, so ZFS will see them all together; however, I want the maximum space available and performance.)
I believe the best performance can be achieved by keeping each hard drive in the same group as the other drives connected to the same card.
 
Joined
Sep 23, 2013
Messages
35
OK, so it's 11-disk RAIDZ3 + 1 spare, or 6-disk RAIDZ2 + 6-disk RAIDZ2. Either way I lose 25% of my hard drives, which is a lot! I need more ideas T_T. I thought I could make a 4+8 split with less hard drive loss, as I need more space. I've got way too many USB drives with data to move onto this central storage.

Can I perhaps do a 9-disk RAIDZ1 + a 3-disk RAIDZ1? (That way I only lose 2 drives from the whole thing.)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Can I perhaps do a 9-disk RAIDZ1 + a 3-disk RAIDZ1? (That way I only lose 2 drives from the whole thing.)

"can" is extremely relative. I "can" go jump off a bridge and hope I don't die when I hit the cement below. Does that make it remotely smart? Nope.

You "can" do what you want to do, but you will be told that it is so incredibly stupid you might as well just delete your data. Read that link in my signature about RAIDZ1 being "dead". Then you'll understand why UREs have killed so many people's data. About 90% of all users that lose their data in these forums used RAIDZ1. If you want to take that risk, feel free. But be warned that as soon as you have a single disk failure you can pretty much expect data loss because of UREs and other things.

So "Good Luck" because you are going to need it with RAIDZ1. And luck always runs out at some point...

This thread has gone beyond the original topic of LSI cards, so if you were planning to post here again, please start a new thread instead. This is getting very off-topic.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
jgreco: what about using the drives with the LSI card, making each drive a single-drive RAID0 array (not JBOD)?

A few articles suggest this approach to maximise transfer speed, helped by the hardware controller and its extra cache.
The benchmarks found here, https://calomel.org/zfs_raid_speed_capacity.html, certainly seem to confirm that point. They get nearly 3 times the transfer speed compared to the onboard SATA controller (comparing the LSI card vs onboard SATA on a Supermicro X9 motherboard).
Though I can't tell from that benchmark whether it's just that the onboard SATA is terrible...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
jgreco: what about using the drives with the LSI card, making each drive a single-drive RAID0 array (not JBOD)?

The second you do that, you can expect to give up any use of SMART testing and monitoring. The RAID card will abstract the disk into an array, and you won't be able to use the SMART testing and monitoring that is one of the most important tools for identifying failing disks before they fail, as well as identifying a failed disk when it happens. Just earlier today, someone had over 800k files on a RAIDZ2 with one bad disk and likely 2 more failing disks, and had no clue anything was wrong until it was too late. I commented in that post that if I were betting money, I'd say his SMART monitoring was disabled. He hasn't responded back as far as I know.

Most controllers will still use their cache when in JBOD, from what I've seen. I know the 3ware, LSI, and Areca cards I've played with definitely use the read and write cache in JBOD mode if they are enabled. Enabling/disabling them has a very distinct performance gain/penalty (depending on which cache you enable). In fact, I miss my Areca controller because the read cache doubled my pool's performance! The write cache killed performance, though.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
You can access the SMART data for drives behind most RAID cards; you just need to provide the right arguments to smartctl and tell it the type of device to use.

LSI card:
http://wiki.hetzner.de/index.php/LS...to_read_the_SMART_values_of_the_drive_in_RAID

3ware:
http://www.cyberciti.biz/faq/unix-linux-freebsd-3w-9xxx-smartctl-check-hard-disk-command/

I had never heard of Areca before, but the smartctl man page indicates that to read the SMART data from a drive behind an Areca controller you do something like:
smartctl -a -d areca,2 /dev/sg2
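For completeness, the equivalent invocations for the other two controller families look roughly like this; the drive index and the device node are placeholders and depend on the OS and on how the controller enumerates the drives:

smartctl -a -d megaraid,0 /dev/sda    # LSI MegaRAID, Linux-style device node
smartctl -a -d 3ware,0 /dev/twa0      # 3ware 9xxx series on FreeBSD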
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You are sorta right...

For the LSI, some can. But they aren't supported in FreeNAS for SMART monitoring or testing. You also have other issues, because that kind of abstraction is specifically called out as not recommended with ZFS. You lose the ability to easily (if at all) match the device name to the gptid for disk replacement when you have a failing disk.

For 3ware, the command line works for some controllers, but not all. One I had would work, but the other wouldn't. Note that, just like the LSI, it's not supported. And again, just like the LSI, the abstraction of the disk isn't recommended for ZFS, and you potentially lose the ability to match a gptid with a disk serial number.

Areca support was added by me in a ticket months ago. It works for my controller in JBOD only. As soon as you go to RAID, the only options are the web GUI (if your card supports it) or the areca-cli. The areca-cli is included in FreeNAS, but it doesn't support SMART monitoring or testing. I falsely thought that it did, and had 2 disks failing for at least 2 weeks with no indication of it during that time.

Highpoint really screws you over. It's supposed to support SMART, but doesn't. I got a ticket in with the smartmontools guys and they couldn't figure it out. Highpoint provided no help in fixing it since it's proprietary. There's a sticky on Highpoints if you want to see how screwed up their stuff is.

And if you do set up arrays, you are forcing yourself to buy that same controller again if it fails, since the RAID information is stored at the beginning of the disk instead of in the partition table, etc. Most people deliberately go to ZFS to avoid the requirement to have a RAID controller (or to replace it at a high premium) if it fails.

The manual has warnings that you shouldn't do that; plenty of people have tried and many have been sorry. Just last week there was a guy in IRC with me who spent almost a week trying to figure out what was wrong with his FreeNAS server. It turned out his RAID controller was masking disk errors. Then he couldn't identify the bad disk, because he had no ability to run SMART tests or monitor the disks, nor could he even get their serial numbers. And since he had a RAIDZ1, pulling the wrong disk could have meant the end of his data. After a week of fighting his system he finally went to an M1015 so that he could use the features that are preferred.

What makes me sick is that pretty much everything I've just spent time writing up is already discussed in this thread. So I'll just stop here and let you read that stuff if you feel motivated. Anyway, to make a long story short, it's really a bad idea. Plenty of users have learned the hard way. The manual, FAQ, and stickies should be enough to deter you from thinking this is a good idea in any way, shape, or form.


Edit: Here's someone who enabled email alerts after a year and found out within a few minutes that he had 2 failing disks in a RAIDZ2: http://forums.freenas.org/threads/zfs-raidz2-issues-with-2-drives.16035/. So never underestimate the importance of SMART monitoring/testing and those emails.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Adding a crappy little RAID controller (LSI, etc) to your massive big RAID controller (ZFS) creates a very bad layering. There are some potential benefits but also significant downsides. If you can sufficiently address all the downsides, and cyberjock has discussed at least some of them, then you can do as you please, but with the caveat that "you've been warned."
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
Adding a crappy little RAID controller (LSI, etc) to your massive big RAID controller (ZFS) creates a very bad layering. There are some potential benefits but also significant downsides. If you can sufficiently address all the downsides, and cyberjock has discussed at least some of them, then you can do as you please, but with the caveat that "you've been warned."


I never thought that my post would imply that I supported that configuration. It was more a matter of curiosity, seeing that it's a configuration chosen by an organisation that obviously put a lot of thought into it... You would have noticed that I addressed the post to you directly, seeing that you are obviously very experienced on this matter, and not just on a theoretical basis...

To me, the main factor in *not* using a hardware RAID card and using a simple HBA instead is that, more often than not, you're stuck with a particular vendor, as they all have their proprietary ways of doing things.
I remember a few years back, our main server (ZFS) was being backed up to a mirror machine using zpool export/import. The mirror didn't have enough onboard SATA, so I had used an 8-port Highpoint RocketRAID, because they were cheap enough and they were supported by FreeBSD without messing around (unlike 3ware or LSI, which always required rebuilding the kernel from source).

We had a hardware failure on multiple drives at once that made the zpool unrecoverable. I couldn't use the mirror hardware as it didn't have the capabilities to handle the load, so I just took the drives out of the mirror machine, put them in the primary server, and hoped that was the end of it...
Too bad: the drives had been used as JBOD on the RocketRAID and wouldn't work with the onboard SATA of the main server, and there was no spare slot for the RocketRAID...

So I had to purchase new drives and perform another zpool export/import across a gigabit link, which caused 2 days of downtime...

I won't ever go through that again...

As for my answer to cyberjock, I was only replying to the content of his message: that SMART couldn't be used once the drives were connected to a hardware card. That is just not correct, and personally I've never had any issues monitoring the disks' health status, either via smartctl or the proprietary RAID controller utility.
Prior to the port of ZFS to FreeBSD, my controller of choice was a 3ware RAID controller: the 3ware utility on FreeBSD comes with a little daemon that provides a web interface, where you can set various alerts, including emails for when a particular SMART register reaches a defined value.
This has always worked just fine for my use.
And all the RAID cards I've ever used (I'm talking real hardware RAID cards, not onboard Intel software RAID and the like) have provided one way or another to check the SMART status. But I only ever used hardware RAID controllers that I knew beforehand were perfectly supported by the OS they were going to be used with.

When I posted my question to you, I was entirely focusing on the performance side of things... Only to get an answer about SMART, which once again triggered a long lecture about items I already entirely agree with and that you mentioned in your first post...

I don't want to enter into another long, fruitless argument about points that are ultimately agreed upon, just because it started from a misunderstanding or because technical terms were used that one party or the other was unfamiliar with.

I have chosen the Supermicro X10SL7-F motherboard, which comes with its own onboard LSI controller, for my latest system only because I have confirmed that it can be flashed with alternative firmware that makes it a plain HBA. And that's how it will be used.

It's easier to set up and source than getting a second-hand IBM card off eBay that may or may not be available (plus, as a rule, I never buy any computer gear second-hand anyway).
The card I usually use otherwise, when there aren't enough onboard ports and performance isn't too critical, is the LSI SAS 9211-4i.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I agree that 3ware's utility gives you access to the web interface (if you want to use it). But it's not included in FreeNAS, so that's kind of a non-starter, except via JBOD. I used a 9550SE-24M8, and thanks to WG a patch was included in FreeNAS to make it work in JBOD.

Are you actually using the X10SL7-F now? Did it reflash and work with SMART as built into FreeNAS? Someone in IRC the other day said he reflashed it to HBA mode and it still wouldn't work; SMART was a no-go for him. If you do get it up and running, I'd like to know whether it works. I had eyeballed that board a week ago for a friend, but we couldn't find a solid answer on whether the reflashed HBA actually worked with FreeNAS.

As for sourcing the M1015, you can get them on eBay, Amazon, and a few other places for cheap in both "new" and "used" condition. Mine was brand new and I paid $103 with shipping and everything. It's been a great card for me. One of the main reasons I recommend it is that so many people use it that if there's a problem, you can bet the M1015 will be made to work. This issue comes to mind as an excellent example. That would have been a showstopper for MANY users (including myself). As for my Areca, 3ware, and Highpoint cards, support is virtually nonexistent. If it doesn't work, you are kind of on your own to figure it out. I got the Areca and 3ware to work, but sadly Highpoint will probably never play ball correctly.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
I agree that 3ware's utility gives you access to the web interface (if you want to use it). But it's not included in FreeNAS, so that's kind of a non-starter, except via JBOD. I used a 9550SE-24M8, and thanks to WG a patch was included in FreeNAS to make it work in JBOD.

I haven't played with FreeNAS since it was restarted by iXsystems, only before that (FreeNAS 0.7, I think it was called)... And I always compiled the 3ware utility (or any other utilities, for that matter) from the ports system. I thought you could build FreeBSD ports within FreeNAS? (I've seen a few guides describing how to.)

Are you actually using the X10SL7-F now? Did it reflash and work with SMART as built into FreeNAS? Someone in IRC the other day said he reflashed it to HBA mode and it still wouldn't work; SMART was a no-go for him. If you do get it up and running, I'd like to know whether it works. I had eyeballed that board a week ago for a friend, but we couldn't find a solid answer on whether the reflashed HBA actually worked with FreeNAS.

No, I haven't received it yet... I ordered two, and one was on 2-week back-order. I couldn't be bothered to pick up only half of the system, so I'm waiting for the lot to arrive. Hopefully by next Monday...
I will report back then for sure... If not, I'll find a way to make it work :)
I certainly didn't for a second consider that not being able to read the SMART data could ever come into the picture. It hasn't in all those years :(

As for sourcing the M1015, you can get them on eBay, Amazon, and a few other places for cheap in both "new" and "used" condition. Mine was brand new and I paid $103 with shipping and everything. It's been a great card for me. One of the main reasons I recommend it is that so many people use it that if there's a problem, you can bet the M1015 will be made to work. This issue comes to mind as an excellent example. That would have been a showstopper for MANY users (including myself). As for my Areca, 3ware, and Highpoint cards, support is virtually nonexistent. If it doesn't work, you are kind of on your own to figure it out. I got the Areca and 3ware to work, but sadly Highpoint will probably never play ball correctly.


A new M1015 here in Oz starts at $208, and no one has stock; Amazon M1015 retailers do not ship here either... Hence why I mentioned the LSI above: it's only $25 more, lets you connect up to 256 SATA devices should you ever want to push it, and is also fully supported in FreeBSD.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
A new M1015 here in Oz starts at $208, and no one has stock; Amazon M1015 retailers do not ship here either... Hence why I mentioned the LSI above: it's only $25 more, lets you connect up to 256 SATA devices should you ever want to push it, and is also fully supported in FreeBSD.

ZOMG! Remind me to never move to Oz! That's double the price. :(

The M1015 is limited to 32 devices, but has 2x SFF-8087. So you get up to 8 disks on the controller directly, but are limited to 32 total, versus the LSI 9211-4i, which is 4 disks directly and 256 total. I'd say that for 99% of users, though, 32 devices is enough, and most people would rather have 8 ports available versus 4. I can only think of one person off the top of my head who has more than 32 disks, and he has multiple controllers for performance reasons.

Interesting comparison in price though. Really shocked at how expensive stuff is in the land down under. I need to buy 100 M1015s and resell them for a profit. :P
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I never thought that my post would imply that I supported that configuration. It was more a matter of curiosity, seeing that it's a configuration chosen by an organisation that obviously put a lot of thought into it... You would have noticed that I addressed the post to you directly, seeing that you are obviously very experienced on this matter, and not just on a theoretical basis...

This forum largely ends up serving home enthusiasts, and while some of us come here from the business side of things, the ability to "fix" deficiencies by throwing lots of money at a problem is rare for enthusiasts. Telling people what is known to work is partially a side effect of that.

Besides, "organization who obviously put a lot of thoughts into it" is a potential red herring. There are examples of allegedly professionally designed storage appliances by vendors who jammed ZFS on top of a RAID controller which proceeded to eventually implode. So my opinion on this has evolved over time. In part because FreeBSD is missing bits necessary to make a more resilient error detection and correction system (the famous "zfsd", etc) and because it is very difficult to actually test what happens to a given controller in the face of a developing partial failure, we are VERY strongly in favor of a particular methodology that has been shown to work.

This does not in any way mean that other avenues cannot be made to work. Possibly even well. Maybe even safely. But there's a lot more work to do there than your average enthusiast is usually willing or able to do.

When I posted my question to you, I was entirely focusing on the performance side of things... Only to get an answer about SMART, which once again triggered a long lecture about items I already entirely agree with and that you mentioned in your first post...

You might not get much else here in the forums because most of the users here will have limited experience with alternative configurations. Cyberjock comes to us having some experience with that; his old (Areca?) controller apparently performed very well with ZFS in part because it had some significant amount of cache onboard. I've used 3Ware, LSI, and other controllers. There are ups and downs, and it turns into a complicated topic for a forum.

But generally speaking, I will note that the design of ZFS is oriented at avoiding those extra hardware layers if possible. ZFS maintains, for example, a massive "write cache" (up to 1/8th of your system memory by default!), so you can actually cause problems by adding a hardware controller in front, whose write cache will simply be flooded by ZFS pounding out a large transaction group.
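For the curious: on the FreeBSD releases of that era the 1/8th fraction was governed by a loader tunable along the lines of vfs.zfs.write_limit_shift (a shift of 3 means 1/8th of RAM); later ZFS versions replaced that mechanism with the vfs.zfs.dirty_data_max family. The exact tunable names vary between versions, so treat these as illustrative and check what your own system actually exposes, e.g.:

sysctl -a | egrep 'vfs.zfs.(write_limit|dirty_data)'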

It is hard at first for most people to grasp this so I usually describe it as putting a RAID controller (hardware) in front of your big RAID controller (ZFS), and then sometimes they kind of get it.
 

jyavenard

Patron
Joined
Oct 16, 2013
Messages
361
Besides, "organization who obviously put a lot of thoughts into it" is a potential red herring. There are examples of allegedly professionally designed storage appliances by vendors who jammed ZFS on top of a RAID controller which proceeded to eventually implode.

This is terrible! I think the only part of that description that fits is "organisation"; I doubt they put much thought into it...

Can't understand why anyone would even attempt that design to start with!
 