Even with a battery or flash-backed cache on the RAID card there's still a window of opportunity for things to break.
Let's say ZFS issues a "flush cache" against a disk that's actually a RAID volume. Your controller card absorbs the 1GB or so of data into its protected RAM and starts spooling it out to disk.
Then your motherboard decides it's had enough, and goes poof. Your RAID card says "Hey, I lost power" and dumps the data to its internal NAND. That 1GB of data is trapped in your RAID controller card, and you likely won't be able to bring your pool back online without it. Hope there are no errors in the NAND flash on that card, or you'll commit corrupted data to your pool when it's restored.
With an HBA, all of the data is on the disks themselves. HBA dead? Swap it out, import the pool, you're up and running. Motherboard goes up in smoke? Pull all the drives and put them in another server, import the pool ... you get the idea. Having a RAID card involved ties the drives and controller together, and you lose portability as well as risking your pool being in a nasty, inconsistent state - which ZFS by its very nature was designed to avoid.
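Here's roughly what that recovery looks like in practice. The pool name `tank` is assumed for illustration; `-d /dev/disk/by-id` is the Linux convention for scanning stable device paths, and your platform's paths may differ:

```
# Drives moved to a new server (or a dead HBA swapped out):
# scan for importable pools on the new host
zpool import

# Import the pool by name, scanning stable device paths
# (pool name "tank" is assumed)
zpool import -d /dev/disk/by-id tank

# If the pool wasn't cleanly exported from the old machine,
# force the import
zpool import -f tank
```

That's the whole procedure. There's no controller-specific metadata to migrate, because there's no controller in the data path worth speaking of.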
There are plenty of other cases beyond "lost features," such as "lost performance."
Let's say your RAID controller decides that it's time for a patrol read of your array. ZFS is trying to queue up I/O to disks that should be idle and doing nothing, but in reality they're thrashing, trying to keep up with the RAID controller's patrol read on top of the workload ZFS is sending them. With an HBA, ZFS goes "time for a scrub" and has visibility into how much I/O is hitting each device, so if there's a period of low activity, it will sneak a few extra read I/Os in there for scrubbing, with thresholds based on tunables.
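Those tunables are real knobs you can look at. On Linux OpenZFS they're exposed as module parameters; a minimal sketch, again assuming a pool named `tank`, with the written value being illustrative rather than a recommendation:

```
# Kick off a scrub by hand
zpool scrub tank

# Per-vdev concurrent scrub I/O limits (Linux OpenZFS module
# parameters; min/max queued scrub I/Os per device)
cat /sys/module/zfs/parameters/zfs_vdev_scrub_min_active
cat /sys/module/zfs/parameters/zfs_vdev_scrub_max_active

# Example: let scrubs push a little harder during idle periods
# (illustrative value, not a recommendation)
echo 3 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
```

A patrol read gives you no equivalent lever, and no coordination with the filesystem's own I/O scheduler.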
Even better, let's say you have an older RAID card like the Dell PERC H700 series with a battery-backed write cache. Everything's working fine, the write cache is able to accelerate your disks. And then one day, write performance slams face-first into the ground. You look at the OS and try to figure out what's going on. Nothing. It just seems like your disks have decided they can only deliver a tenth of their previous speed. You pull logs. Analyze. Try to decide if it's worth rebooting and causing downtime. Then suddenly things are back to normal. What the hell caused it? You're never able to sort it out. And then it happens again. Oh no, you think, here we go again. You vent about it on a forum, like this one.
And someone goes "Oh, that's the 90-day battery learning cycle. It puts your drives into write-through mode. Performance will suck until it's done. Gotta upgrade to the newer model to fix that."
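If you're stuck with one of these cards, you can at least see the learn cycle coming. A sketch using LSI's MegaCli (PERC cards are rebadged LSI controllers); the tool name and exact output fields vary by generation, so treat this as an assumption to verify against your hardware:

```
# Query BBU state on controller 0; look for "Learn Cycle Active"
# and the next scheduled learn time in the output
MegaCli64 -AdpBbuCmd -GetBbuStatus -a0

# Check whether the virtual drives have dropped to write-through
# (the "Current Cache Policy" line will show it during the cycle)
MegaCli64 -LDInfo -Lall -a0
```

With an HBA, there's simply no battery and no hidden cache state for ZFS to trip over.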
Now that's for end-users with RAID cards.
You're looking at layering ZFS on top of a LUN presented from a storage array that presumably addresses these issues, obeys flush commands, etc. So what do you lose versus XFS/ext4? A little bit of performance overhead, perhaps. Some peculiarities that come with a copy-on-write filesystem (which might be magnified at your array if it also uses copy-on-write or redirect-on-write). If the array only presents a single LUN, ZFS can detect corruption, but it can't correct it - so you lose that major bonus. If your array can detect and correct it, then it's fixed there.
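To make "detect but not correct" concrete: on a single-LUN pool, a scrub will verify every block's checksum, but there's no redundant copy to repair from. A sketch, with a hypothetical multipath device name standing in for your array's LUN:

```
# Single LUN from the array presented as one "disk"
# (device name is hypothetical)
zpool create tank /dev/mapper/mpatha

# A scrub reads and verifies the checksum of every block...
zpool scrub tank

# ...and anything that fails verification shows up in the CKSUM
# column, but with no redundancy it's detected, not repaired
zpool status -v tank
```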
Basically, if you're willing to check and confirm that your array fills in the gaps that ZFS can't cover in this configuration - you won't lose anything. A commercial SAN array often can; a single HW RAID controller presenting virtual disks often can't.