Okay then. Don't panic. Well, probably don't panic.
First, welcome to the forums, and sorry you're having problems.
Please help us help you better. The Forum Rules, conveniently linked at the top of every page in red, provide guidelines for presenting a more cogent summary of your situation, including details such as your hardware platform, your pool setup, etc.
If you have redundancy in your pool, such as RAIDZ1 or mirrors, it is unlikely you have done any permanent damage. If you have RAIDZ2, you're almost certainly just fine.
I found a few blog posts that suggested that I use dd to zero out the blocks so that zfs would reallocate them somewhere else.
dd if=/dev/zero of=/dev/ada5 bs=512 count=1 seek=560920
Assuming this is a 512-byte sector drive, that is probably fine. I do this from time to time, but I typically do a read first, to verify that the disk actually has a problem with the sector I'm about to zero:
dd if=/dev/ada5 of=/dev/null bs=512 count=1 skip=560920
Note the changes: /dev/zero becomes /dev/null, seek= becomes skip=, and the device moves from of= to if=.
If I get a read error, I typically do the overwrite. You've lost the data there anyway.
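If you want to be methodical about it, the whole check-then-zero sequence looks roughly like this. This is purely a sketch of what I described above, assuming FreeBSD/FreeNAS; the device and LBA are just the ones from your post, and diskinfo is how I'd confirm the sector size before trusting bs=512:

# rough sketch only -- adjust device and sector for your situation
DISK=/dev/ada5          # the suspect drive (from your post)
LBA=560920              # the suspect sector (from your post)

diskinfo -v $DISK | grep sectorsize     # confirm it really is a 512-byte sector drive

# read the sector first; only zero it if the read actually fails
if ! dd if=$DISK of=/dev/null bs=512 count=1 skip=$LBA; then
    dd if=/dev/zero of=$DISK bs=512 count=1 seek=$LBA
fi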
I've seen in some forums dd referred to as disk destroyer.
It can certainly be a terrible disk destroyer. It is a loaded gun. Used carefully and properly, it is safe. In this case, the fully automatic capabilities of the dd machine gun are held at bay with "count=1". You have shot at most one sector dead.

That is well within ZFS's self-healing capabilities as long as you've got redundancy in your pool.
Now I'm concerned that I've corrupted some files in my pool or maybe zeroed out the wrong ones.
Well, it could be. But probably not. Without a description of your pool and what kind of vdevs it has, it is hard to offer an authoritative opinion.
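If you post the output of something like the following, we can be a lot more specific ("tank" is just a stand-in for whatever your pool is called):

zpool status tank        # shows the vdev layout and how much redundancy you actually have
camcontrol devlist       # shows what drives the system sees, handy for the hardware summary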
A scrub showed that it corrected some errors but I was unable to find a log of what it actually did.
ZFS typically does not report the specific corrections. This magic all happens internally to the filesystem, which is designed on the assumption that occasional corrections are part of normal operation; imagine the amount of log chatter you'd get from a pool of a thousand drives. It will, however, log corrupted files. If you completed a scrub and "zpool status" does not report any problems, you should be fine.
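For reference, the whole after-the-fact check amounts to something like this (again, "tank" is a placeholder for your pool name):

zpool scrub tank          # kick off (or re-run) a scrub
zpool status -v tank      # watch progress; -v will list any files with permanent errors
zpool clear tank          # once it reads "No known data errors", reset the error counters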
I can't find a way to tell what file occupies a specific block.
There isn't a way, or at least not an easy way, without going into zdb and learning all about ZFS metadata and structures.
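If you're curious enough to dig in anyway, zdb will at least show you the mapping in the other direction, file to blocks; going from a raw LBA back to a file is the part with no easy answer. A rough illustration, with a made-up path, dataset name, and object number:

ls -i /mnt/tank/some/file             # on ZFS the inode number is the object number (path is hypothetical)
zdb -ddddd tank/dataset 12345         # dump that object, including its block pointers (DVAs)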