I think it's a combination of people buying enterprise-class hardware because it's "enterprise class", people being ignorant of ZFS (I had heard about it but didn't know about all of the advantages until early 2012), and the fact that FreeBSD really isn't that popular compared to Microsoft. Too many people know MS stuff and assume there can't possibly be anything better when a multi-billion-dollar worldwide software developer is making the software.
A slightly different perspective: there is a demand for cheap hardware, driven by cheap consumers who want to get the most for the least. From this pool spawns the kinda-works stuff like Realtek ethernet chipsets. There is a demand for quality hardware, driven by businesses who just want to get on with getting business done, and are willing to pay a reasonable premium to obtain it. From this pool spawns the reliably-works stuff like Intel ethernet chipsets. There may be a third pool, the prosumer, a little bit in the middle, but probably leaning towards business/enterprise.
So consider storage:
- Your average consumer stores their photos on the cheapest PoS external drive they can get their hands on at WalMart, fully expecting that it will be with them for the rest of their lives. The slightly more "power user" types will get the drives built into their PC by the local PC shop, maybe even with the crappy, chintzy software RAID written by the SATA chipset manufacturer's intern monkeys, which keeps the data in RAID right up until one of the drives dies and the recovery software turns out not to work quite right.
- Your average business just wants their data to Be There(tm). So years ago, long before ZFS, it became common for businesses to use RAID and "enterprise grade" hard drives to build fast, supposedly resilient storage systems, and for the manufacturers of these to charge a premium for things like RAID and BBU because the market was real tolerant of that sort of thing. Businesses also needed to store massive amounts of data, so technologies exist to attach dozens or hundreds of drives for exactly that. Data protection for such large stores of data is usually considered mandatory! And as long as it all worked with NetWare, Windows NT, and later things like ESXi, it was all good stuff, but basically the hardware was targeted at those use models.
Now along comes ZFS, a revolutionary storage technology, championed by Sun, who recognized that building specialized silicon for storage was extremely expensive and rapidly outdated every few years, while CPU advances (multiple cores, multiple busses) meant you could potentially run your RAID code in kernel-land on the host system. And Sun had a history with similar products, particularly Online: DiskSuite (later renamed Solstice DiskSuite). Attaching the disks directly and letting the host CPU deal with it gave them a unique edge in a variety of ways, not to mention the opportunity to sell massive-CPU, massive-memory servers.
Next, along comes FreeBSD on Intel and integrates ZFS. And now you wonder why there aren't cheaper HBAs. Well, there are, but for the most part the "HBAs" you will find are the ones that let you add two or four SATA ports to a system. If they have a decent chipset on them, they probably even work fine for ZFS. But they lack port density. So then we have to look up-market. There are HBAs that support larger numbers of drives, but they're often the same or a similar platform as RAID controllers from the same manufacturer. Look at the LSI SAS2008, which can be flashed to either IR (Integrated RAID) or IT (initiator-target, plain passthrough HBA) firmware.
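For the curious, you can see which firmware a SAS2008-based card is running with LSI's sas2flash utility. A rough sketch follows; the firmware file names are just examples from an LSI firmware bundle, and the exact crossflashing procedure varies by card, so follow a proper guide before reflashing a card you care about:

  # List adapters along with their firmware version and mode (IR or IT)
  sas2flash -listall

  # Flash IT firmware and the boot BIOS onto controller 0
  sas2flash -o -c 0 -f 2118it.bin -b mptsas2.rom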
So this is where the PC world is completely awesome: many businesses work on three-year cycles, and many manufacturers include things in base packages that aren't needed. So you can look both at gear being retired as "out of date" and at stuff that's current but being pulled out for an upgrade. I don't remember offhand the story of where the flood of M1015s is coming from, but you know that for every M1015 you see on eBay for $75, some business probably paid more than twice that for it. And the M1015 is built on a generic platform LSI designed to be able to build a range of storage controllers, both HBA and RAID. So you can get your cheap multiport HBA for ZFS off eBay, but it'll also be capable of being a RAID card, because designing it to support both possibilities means more sales of a single product, which is in turn easier to support.
Now let me flip this on its ear, because fair is fair, and while I agree ZFS has great advantages, it also sucks, at least right now, in some ways, on FreeBSD.
ZFS won't let you know when a drive fails, at least not right away. By way of comparison, when a drive in a RAID array dies on one of our ESXi hosts, we get an instant flag and alert.
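To be fair, you can approximate the alerting with a few minutes of scripting. A minimal sketch, assuming a stock FreeBSD install with outbound mail working (the pool-agnostic check and the address are placeholders; run it from cron every few minutes):

  #!/bin/sh
  # 'zpool status -x' prints "all pools are healthy" when everything is fine,
  # and the details of any degraded/faulted pool otherwise.
  STATUS=$(zpool status -x)
  if [ "$STATUS" != "all pools are healthy" ]; then
      echo "$STATUS" | mail -s "ZFS pool problem on $(hostname)" admin@example.com
  fi

But that's a band-aid you have to remember to apply yourself; the RAID controller's vendor tools do it for you out of the box.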
ZFS needs some significant manual puttering to swap in a new drive. By way of comparison, when one of our RAID drives dies, most modern RAID gear begins the rebuild without further prompting as soon as a replacement drive is inserted.
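For reference, the manual puttering looks something like this (device names are examples; here da3 is the dead disk and da4 is its replacement in pool "tank"):

  # Take the failed disk out of service, physically swap it, then resilver
  zpool offline tank da3
  zpool replace tank da3 da4

  # Watch the resilver progress
  zpool status tank

There is an autoreplace pool property (zpool set autoreplace=on tank) that's meant to kick off the rebuild automatically when a new disk appears in the same slot, but as I understand it, support for that on FreeBSD is spotty right now.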
etc. I fully expect ZFS to be able to do those things in the near future. But right now? The RAID controllers still have some advantages if you need simple and reliable.