3ware 9500S-12

Joined
Nov 21, 2011
Messages
6
I'm installing FreeNAS 8.0.2. I have a Gigabyte M68MT-2SP motherboard with an AMD Phenom X2 3.0 GHz CPU and 4 GB of RAM. I have five Seagate 2 TB drives: four in a RAID 5 array and one as a hot spare. When I try to install FreeNAS, the install stops and gives me the following error:

twa0: Warning (0x15: 0x1010): Driver firmware mismatch.

I called 3ware; they said the firmware is the latest there is. Anyone have any thoughts on this?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I don't think I have any more of the 9500's in stock, we mostly use the 9550's these days for PCI. The 9500's used to work really nicely with FreeBSD.

If you're planning to use ZFS (which you admittedly haven't stated), why are you using the 3Ware's RAID function? Don't do it. Use RAIDZ2 instead. When you get that far, that is. :smile: You might find that going to JBOD or passthru mode "fixes" whatever the problem is, anyways, so it's worth a shot going that route.
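
If it helps once you do get that far, here's a minimal sketch of building a RAIDZ2 pool from the FreeBSD shell. The pool name "tank" and the ada0-ada4 device names are assumptions, so substitute whatever your disks actually show up as (the FreeNAS volume manager does the equivalent from the GUI):

# Create a RAIDZ2 pool from five whole disks, then check its health.
# "tank" and ada0-ada4 are placeholders for your pool name and devices.
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4
zpool status tank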
 
Joined
Nov 21, 2011
Messages
6
9500s or ZFS

My original intention was to run hardware RAID. However, after fighting the 9500S card, I gave up, did some reading, and decided on ZFS RAID-Z2. I tried using the 9500S-12MI as JBOD, but I still get the error on boot and FreeNAS quits loading.

I took the 9500S out and started experimenting with ZFS using the four SATA ports on the motherboard. I also ordered another non-RAID SATA controller so I can add more drives. I'm fairly certain that ZFS is the way to go. I'd like to end up around 10 TB. I also have two 5-bay hot-swap caddies for this server.

I was also considering a 12 port 9650 controller.

Thanks for the tip. I'm just really disappointed at the issues I've had with the 9500S; I've always liked 3ware gear and never had problems with it in the past.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Five 2 TB drives, in either your RAID5-plus-spare setup or a RAIDZ2, will only get you 6 TB of usable space.
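
(The arithmetic is the same either way; a minimal back-of-the-envelope check, assuming the drives are 2 x 10^12 bytes each:)

# Either layout costs you two drives' worth of capacity:
# 5 drives - 2 (RAIDZ2 parity, or RAID5 parity + hot spare) = 3 data drives
echo "3 * 2" | bc                            # 6 TB of raw usable space
echo "scale=2; 3 * 2 * 10^12 / 2^40" | bc    # roughly 5.45 TiB as the OS reports it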

I don't have a 9500 to test with. We run a lot of traffic through 9550's though. So I, too, am a little disappointed to hear there's trouble with the 9500's.
 
Joined
Nov 21, 2011
Messages
6

Are you in the US? I can send you a 9500S-12MI to test with, as it's fairly useless to me now. Tell me where to send it.

I'm on a fairly tight budget, so as soon as I get a non-RAID SATA controller working and have 6-8 more ports, I'll order some more drives.

I'll probably have to make a second machine identical to this one for backups too. 10TB is a lot of data to back up. (rsync?, snapshot?, hmmm... more challenges!) Thanks for the insights.
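
One possible shape for that backup, just as a sketch: ZFS snapshots plus zfs send/receive to the second machine. The pool/dataset names and the host "backuphost" below are placeholders, not anything from this thread:

# Snapshot the dataset, then replicate it to the backup box over ssh.
zfs snapshot tank/data@2011-11-21
zfs send tank/data@2011-11-21 | ssh backuphost zfs receive -F backup/data

# Later snapshots can be sent incrementally, far cheaper than rsync-ing 10 TB:
zfs snapshot tank/data@2011-11-28
zfs send -i tank/data@2011-11-21 tank/data@2011-11-28 | ssh backuphost zfs receive -F backup/data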
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Thanks, but no, I'd never end up doing anything meaningful with it, even if I could dig up appropriate cabling. Sorry.

Sold a bunch of those, though, we made heavy use of them in a 24-drive chassis design that a bunch of Usenet providers adopted as the cheapest way to store large quantities of data. Awesome to see three racks of eleven 4U servers each, each sporting 24 250GB drives (200TB). The competition then began... but it's entirely possible I haven't actually touched a 9500s-12mi since then. We adopted the 9550's in late 2005 when support in FreeBSD was still very sketchy, 6.1B4 IIRC. Wow, makes me feel old.

Anyways, it's kind of unfortunate because the writeback cache on these would be nice for a little ZFS acceleration, but you'll probably be delighted even using just a plain SATA controller.
 
Joined
Nov 21, 2011
Messages
6
I did try rolling back to OF 2.3 to see if maybe the 3ware kernel driver had been updated and was causing the problem. No joy. I guess I could go back even further and see if *that* was the issue, but I'm thinking, what's the point? If I can have multiple drives and multiple cheap non-RAID SATA cards with ZFS, *and* get the same reliability, why bother sticking a new $300-$600 3ware card in there?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
That's kind of the idea. As someone who's been watching acceleration technologies and remembers things like PrestoServe with annoyance and disgust, I have to say there's something nice about just throwing RAM, SSD, and cheap disk at the issue and watching it go away...
 

gooch

Cadet
Joined
Nov 21, 2011
Messages
7
I'm using FreeNAS 8.0.2 and running firmware version FE9X 2.08.00.003 on a 9500S-4LP in an Intel server board without issue. Have you updated all the firmware for your system? I've had several PCI cards not work and throw odd errors when their drivers attempt to load, only to have it all corrected by updating the system BIOS or BMC.
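
If you have 3ware's command-line tool installed (tw_cli), something along these lines should report the controller's firmware and BIOS levels; the /c0 controller ID is an assumption and may differ on your system:

# List controllers, then pull the firmware/BIOS versions for controller 0.
tw_cli show
tw_cli /c0 show all | grep -i -e firmware -e bios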
 
Joined
Nov 21, 2011
Messages
6
The update is a great idea. I just checked, and I'm running the latest motherboard BIOS version. Right now I'm using the four SATA ports on my motherboard plus a four-port Rosewill RC222 PCI SATA RAID controller, which I'm using as a plain controller without the RAID functions. I'm doing ZFS and it works well, but I need four more ports. I haven't been able to find an eight-port Rosewill to replace the RC222. I have two PCIe x1 slots and one PCIe x16 slot open, and I was considering adding another four-port PCIe Rosewill. This is really looking like it would be a big kludge: the motherboard controller plus two additional, but different, controllers. I'm guessing this will cause some problems later on down the road? I might have to just give up and get a 12-port 3ware 9550, 9650, or 9750, but I'm concerned that if the 12-port 9500S isn't compatible with this motherboard, maybe a new one won't be either.
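
For what it's worth, if you do end up spreading disks across the motherboard ports and a couple of add-in cards, FreeBSD will tell you which controller each disk hangs off, which makes the layout easier to keep track of. A quick check from the FreeNAS shell (device names will differ on your box):

# List every disk along with the bus/controller it is attached to.
camcontrol devlist
# ZFS itself doesn't care which controller a disk sits on; the pool is built from
# whatever device nodes you hand it, wherever they live.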
 

gooch

Cadet
Joined
Nov 21, 2011
Messages
7
I wouldn't build a software array from disks connected to various controllers. You'll get unpredictable performance as blocks are read from different disks behind different caches. The cleanest install would be to get a controller that supports the number of disks you'd like, and to make sure all the spindles are the same make/model/type/speed.

As far as it not working, just make sure you buy it from a place with a good return policy. :)

/gg
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
and to make sure all the spindles are the same make/model/type/speed.

That's horrifying advice. Do NOT do this if you value your data.

Hard disks are unreliable. We pair them up, make arrays out of them, etc., as a result. Some failures are random. Some are the result of design flaws. So when you make all the spindles "the same make/model/type/speed", you are betting your data on your hard drive vendor's design and manufacturing capabilities. History shows this to be a foolish bet.

Talk to any storage admin who's managed more than a handful of identical drives, and you will find stories about how several drives from the same batch failed within days of each other... sometimes after running for years. You really do NOT want this. Best reliability? Mix drives, mix manufacturers.
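
Practically speaking, it's easy to check whether you've landed on a single batch: the model, serial number, and firmware revision are all in the drive's SMART identity data. A quick look on FreeNAS (the ada device names are just examples):

# Print model, serial number, and firmware revision for one drive...
smartctl -i /dev/ada0
# ...or loop over all of them and compare.
for d in /dev/ada?; do smartctl -i "$d" | grep -E 'Device Model|Serial Number|Firmware'; done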

Once you lose fascination with the idea of having all of your disks be the same, ZFS allows you to lose fascination with having all the same controller as well, because the compelling reason for THAT has always been to allow a RAID array to be built. You probably don't want to hodgepodge it together unnecessarily, but that doesn't preclude you from intelligently using the resources available.
 

gooch

Cadet
Joined
Nov 21, 2011
Messages
7
I think you've grossly read into that statement. I was merely stating that you wouldn't want to mix 5400 RPM and 7200 RPM drives in the same zpool, or mix SCSI and SATA; nor, personally, would I mix 750 GB and 1.5 TB disks. ZFS is very similar to NetApp WAFL as a COW file system, and in an ONTAP system you keep all the disks in an aggregate the same speed/type/size/etc.

Mixing any of these types is not considered best practice.
 

gooch

Cadet
Joined
Nov 21, 2011
Messages
7
I forgot to mention that I still maintain that make and model should be the same as well. It makes it worlds easier to support: no matter which bay the failed drive comes from, there's the same number to call for a replacement. Why complicate things? In fact, when you purchase an enterprise array, whether it's a small single shelf of disks or several racks' worth, the drives are very often the same make/model (all Seagate, for example) once you remove the EMC, NetApp, or EqualLogic faceplate.

I do see your point regarding the mean-time-to-failure of sequentially factory built disks being similar, but that's what parity and hot spares are for. If it's uber-critical and it can stand the performance hit of double parity calculations, use RAID-6, or simply use RAID 10.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yes, when you purchase an enterprise array and you allow the vendor to define what gets put into it, that's what you get. I suppose you hadn't noticed that they also push really heavily for you to buy backup systems and second sets of storage and all sorts of things if you want "additional reliability."

You know what they say... "There's a sucker born every minute." Your vendor thanks you. Your competitors thank you too.

The Google study showed a REALLY impressive correlation for drives from certain batches failing in groups; you can of course ignore that and you're probably okay, unless you happen upon one of those bad batches, in which case you are totally screwed. This is not new or unusual advice. It's mentioned all the time. See

http://serverfault.com/questions/15531/raid-hard-drive-preventive-replacement

for example. Or look for the ZFS references to it. Please don't suggest "ease" as an excuse for bad practices. Yes, it's easier. But I don't get paid to sit on my rear and take it easy. I get paid for results. I get paid to make sure things work the way they're supposed to, and a storage system is supposed to be reliable and available. This is a simple thing to do to encourage reliability and availability. This is not the age of SCSI busses where you had incompatible firmware on different drives on the same bus and you'd get lockups if you mixed them. This is the new and improved age of SATA/SAS, and quite frankly it's Really Hard(tm) to tell the difference between two drives with similar characteristics from different mfrs. So take advantage of the modern era, make your data safe, use a heterogeneous mix of drives if you want to improve reliability.
 

gooch

Cadet
Joined
Nov 21, 2011
Messages
7
You have a rather combative attitude, referring to me as a sucker and my advice as horrifying. Personally, I think you're taking a rather statistically improbable scenario and blowing it way out of proportion to sound as though you're more informed than you actually are.

Truth is, whether you, I, or anyone likes it - these shelves of disk that are installed by major array vendors (sometimes many racks' worth) are usually supplied through a single disk vendor's run of disks from a particular batch. That is simply how it is in the industry and you can't get around it. When I discuss with my customers how their arrays have been operating a year to 3 years after the initial install, they have always reported that only a disk or two has failed. I have never heard of a case, within or outside of my customer base, where rampant failures have occurred. I would love to hear that this happened to one of my competitors, but nothing such as this has ever come my way.

Major array vendors stand to lose not only money but consumer confidence if their arrays are constantly losing customer data, as your opinion would suggest (since they're using disks from a single run). They are not. The bottom line is, whatever mathematical possibility exists with same-run disks, it can easily be combated with parity and hot spares. That is what these technologies are for, and they work excellently for data protection.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have a "combative attitude"? Maybe, if you find yourself on the side of being wrong.

I'm trying to sound "more informed than I actually am"?

Well. Let's see. I've been doing this big storage stuff like forever. I'm well known in the FreeBSD community for disk related stuff. As a FreeBSD developer, see http://www.freebsd.org/doc/en/articles/contributors/contrib-develalumni.html, I contributed things such as the first implementation of the noatime flag on FreeBSD, did extensive testing and debugging of many things related to huge DAS arrays on FreeBSD, well documented on the mailing lists, and contributed tons of performance tuning advice to the community, much of which remains the gold standard, such as the comments in ccd regarding interleave. I'm well known as the primary developer behind Diablo, the Usenet news package used by numerous service providers, who compete with each other on how many petabytes of storage they currently offer. It's not too hard to check my references.

So here are the facts.

1) Shelves of disk installed by major array vendors will come populated with a single manufacturer's drives, if you allow them to do that to you. Now, if you're small, they may simply choose to ignore your business if you insist on heterogeneous storage. But big customers can and do do it. Even if you don't, it isn't that unusual to find that your storage vendor was decent enough to use drives sourced from two or more batches in building your array, which would appear to be tacit acknowledgment that building an array entirely from a single batch is a really bad idea.

2) As I previously said, no, homogeneous arrays do not "constantly lose customer data as my opinion would suggest", which I didn't actually suggest. The industry *knows* that there are high failure rates among certain models and vintages of hard drives. I can point to numerous discussions of this. If this year's Barracuda has some bad runs, your array built out of WD Black's isn't going to be more unreliable because of it. But out of the population of arrays that are made out of Barracudas, you are likely to have an artificially high rate of data loss, because some sets of drives are statistically more prone to fail.

3) Even the storage vendors have researched this. Consider for example, a commonly quoted authority on the subject, NetApp, who submitted the following a while back. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1285441

4) We're talking about FreeNAS best practices, here, not vendor-supplied arrays. For a FreeNAS deployment, the majority of users are free to pick and choose their hardware. Many of them cannot afford to be building excessive amounts of backup and redundancy into their systems, and have come to FreeNAS in order to protect their data with ZFS. While it is very unlikely that a user who has a small quantity of drives in RAIDZ2 will lose enough drives to render the pool useless, the fact of the matter is that you can reduce that risk further through careful selection of hardware. Some of these guys are buying 24-drive chassis like the Norco, and at that point, it's definitely worth looking at doing anything and everything you can to protect your data.

5) FreeNAS users themselves report a lot of familiarity with certain brands/batches of drives failing. I mean, geez, read the forums. Check out http://forums.freenas.org/showthread.php?577-And-another-2TB-drive-dies-Samsung-F4-this-time for example.

Or perhaps I'm uninformed, Network Appliance's uninformed, Google's uninformed, FreeNAS users are uninformed, and you're here to enlighten us as to how our ways are stupid and we're wrong to buy a mix of drives so that it's easier for us to .. uh, how did you put it? "There's the same number to call for a replacement."

I guess I'm stupid as I can't even begin to figure out how that's meaningful. I would have thought that the user would pull the drive, look at the manufacturer's name, go to their web site, and fill out an RMA request. Since you're not RMA'ing the *other* drives in the array, the utility of having the "same number to call" for all your drives seems irrelevant. Please enlighten us as to how this is relevant.

(well, hey, you did accuse me of a combative attitude, I aim to please.)

But seriously, now, my point is to avoid giving people advice that's commonly understood to be bad. Telling someone to buy drives of all the same kind because it's *easier* (your word, not mine), well, it'll work out often enough, it's a minor risk, except for when you actually do experience a multiple failure. I mean, the number of times my life has been saved by a seatbelt in my car is zero. Therefore, I don't need to wear a seatbelt, right? It's that same sort of thing. You can likely survive just fine buying homogeneous arrays of matched drives. But I'm going to advise people that this is risky, and I'm certainly going to speak up when someone actually suggests it as though it's a best practice.
 
Joined
Nov 21, 2011
Messages
6
Wow... I was just trying to work within my budget and make what I had work, not start a war. You've both provided good advice in helping me through this process. Originally, I couldn't get FreeNAS 8.0.2 to work with my 3ware 9500S-12MI. So I tried a couple of things, got some advice, and learned some things. I'm still working the problem. It was suggested that I might have a BIOS issue that was reporting something funny to FreeNAS and causing the problem. This turned out to be the case. I got rid of my new Phenom II motherboard and went back to an older board with an AMD Sempron 3000 CPU and 768 MB of RAM. Obviously not enough RAM to run ZFS. So I configured the 3ware card for hardware RAID 5 (4 x 2 TB plus one 2 TB hot spare), fired up FreeNAS, and created a 5 TB UFS volume. I'm going to work with this for a while, and if it works, I'll add four more drives (all the same size, but from a single vendor) and create a second array and another UFS volume.

If I don't like it, I'll go back to the Phenom II motherboard and CPU, create a ZFS array consisting of four drives on one controller, and create a second consisting of four drives on another controller. I think if I keep each ZFS volume on its own controller, things will be fine. So... I've learned from both of you, I'm making progress, and I'm moving forward.
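
Just as a sketch of that plan (pool and device names here are placeholders, not anything from this thread), the two layouts look like this; the first is two independent pools, the second is one pool striped across two RAIDZ vdevs:

# Option A: two separate pools, one per controller.
zpool create pool1 raidz ada0 ada1 ada2 ada3    # four drives on controller 1
zpool create pool2 raidz ada4 ada5 ada6 ada7    # four drives on controller 2

# Option B: a single pool built from two RAIDZ vdevs, striped together.
zpool create tank raidz ada0 ada1 ada2 ada3 raidz ada4 ada5 ada6 ada7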
I'll keep you posted.
 