ZFS has zpool import -F, which reverts to a previous uberblock; it seems designed for exactly this situation. Along with zpool scrub, one can be sure that the file system is still consistent.
This is just FALSE. It's like saying you can fix gangrene by cutting off your leg. Which is true, but really not a generally applicable fix.
A scrub does not detect, and cannot repair, any damage beyond simple corruption of a data block that is detectable via a checksum error. If an inconsistency is committed because your hardware or system has broken ZFS's rules, such as flushing data to the drives incorrectly, you may end up with an irretrievable block: the data is LOST (on read, ZFS sees the bad checksum, cannot rebuild it from redundancy, and so returns zeroes). There are also other edge cases the designers did not anticipate, because they expected direct and reliable access to the disks.
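To make the scope of a scrub concrete, here is roughly what the check looks like from the command line. This is a sketch, not a recovery recipe: the pool name "tank" is an assumption, and these commands require a system with ZFS and an imported pool.

```shell
# Assumes an imported pool named "tank" on a system running ZFS.
# A scrub walks every allocated block and verifies its checksum
# against the block pointer. Repair is only possible where
# redundancy (mirror/raidz, or copies>1) provides a good copy;
# it says nothing about whether the metadata STRUCTURE makes sense.
zpool scrub tank

# Inspect the result: the CKSUM column counts checksum failures
# per vdev, and the "errors:" section lists files with permanent
# (unrecoverable) errors, i.e. blocks whose data is simply lost.
zpool status -v tank
```

Note that a clean scrub only tells you every block matches its checksum; it does not prove the pool's structure is consistent.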
The write cache in a typical RAID controller may hold many megabytes of data; some of the more recent LSI controllers have 8GB or more of read and write cache. Losing some or all of that is very bad.
Rolling back to a particular uberblock is really only a viable fix if the issue is detected and mitigated very soon after the error is introduced, such as within seconds. That might help if the system crashes, since the crash gives you the needed pause in your I/O; but if you trash your pool and then write tens of thousands of additional transaction groups, you are going to lose all of that more recent data. That's why I used the carefully selected words:
"no 'fsck'/'chkdsk' type tools to fix your pool." Those tools are designed to work weeks or months later, and they are intended to validate the STRUCTURE of the metadata, not the consistency of the block checksums (which is what a scrub does). ZFS lacks tools to validate or repair the structure of the filesystem metadata.
Therefore I would have to submit that zpool import -F is not "designed for exactly this situation".
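For completeness, here is roughly what the recovery import under discussion looks like. Again the pool name "tank" is an assumption, and this requires a faulted, exported pool on a system with ZFS:

```shell
# Assumes an exported pool named "tank" that fails a normal import.
# Dry run: with -n, ZFS only reports whether discarding the last
# few transaction groups would make the pool importable; it does
# not actually perform the rollback.
zpool import -F -n tank

# Actual recovery import: rewinds to a recent, still-valid
# uberblock, permanently discarding the last few txgs. Only a
# handful of recent uberblocks are candidates, which is why this
# only helps immediately after the damage occurred, not weeks
# of writes later.
zpool import -F tank
```

Everything written after the uberblock you rewind to is gone, which is exactly the "cutting off your leg" trade-off described above.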
Meanwhile, there is no way to know if fsck fixed the file system or broke it further by dropping inconsistent parts.
Well, that's certainly true, but it would also be true for ZFS if ZFS had some hypothetical fsck. Once blocks are corrupt in ZFS, they typically read back as zero-filled blocks, which means it is not too hard to end up with large amounts of stranded data floating around on a pool.
Basically, this all boils down to ZFS being a particular way of thinking about storing data. If you are not interested in taking advantage of the hard compsci that went into the design, by all means, go use ext4 or whatever alternative you prefer. ZFS isn't for everyone, and those of us who are willing to discuss it honestly will also concede that it has weak points, such as its reliance on pool-wide integrity and its lack of an fsck. You can address these issues "the ZFS way" and get a workable solution. Or you don't have to, and then you get what you get.