I do have to wonder what wiped out the GPT partition and what would have happened if the swapspace had been disabled. Lots of people choose to set the swapspace to 0, and we aren't really sure how much space was wiped when whatever went wrong did whatever it did.
I'd be curious to see what a 100% full zpool would do if you scrubbed it after this accident. I'd assume that if anything more than the GPT were corrupted the zpool would have a problem. Depending on the corruption it could range from a few corrupted files to the pool being unmounted.
Good points.
If the ZFS areas were badly damaged then whether RAID-Z[1-3] redundancy buys you anything depends on whether the corruption landed in the same locations on each disc. At least with the ZFS checksums you will be told whether things were actually OK.
All I can say with certainty about this particular corruption is that it affected somewhere between 0 and 16KB at the start of Thomas' discs. Beyond that, who knows. The ZFS signature at 2GB was intact, but random corrupted sectors could have reached beyond that.
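If anyone wants to eyeball the damaged region themselves, something like this would do it (a sketch only - /dev/ada0 is a placeholder, substitute the actual affected disc):

```shell
# Dump the first 16 KiB of the disc -- the region where the GPT lives
# and where this corruption is known to have landed.
dd if=/dev/ada0 bs=512 count=32 2>/dev/null | hexdump -C | less
```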
Having thought about it, I'm happier with the idea of "whole disc" ZFS than when I started out. I did it originally for cross-system compatibility, but now value the fact that ZFS occupies the entire raw block device. ZFS actually seems to have better structural redundancy than GPT: four copies of a 256KB label, two at the head of the disc and two at the tail. A GPT corruption doesn't actually lose your data, but it will hide it until you recreate things. Whole-disc ZFS may avoid this - but I haven't run an experiment to prove how easy or transparent recovery is.
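For anyone curious where those four labels actually sit, the geometry is easy to sketch (rough sketch only - `zfs_label_offsets` is my own hypothetical helper, not a ZFS API, and I'm going from my understanding that the tail pair is placed relative to the device size aligned down to the label size):

```python
LABEL_SIZE = 256 * 1024  # each ZFS vdev label is 256 KiB

def zfs_label_offsets(device_size):
    """Approximate byte offsets of the four ZFS vdev labels (L0-L3):
    two at the head of the device, two at the (aligned) tail."""
    tail = (device_size // LABEL_SIZE) * LABEL_SIZE  # align down to 256 KiB
    return [0, LABEL_SIZE, tail - 2 * LABEL_SIZE, tail - LABEL_SIZE]

# e.g. a 1 TiB disc: head labels at 0 and 256 KiB,
# tail labels in the last 512 KiB of the device.
offsets = zfs_label_offsets(1 << 40)
```

So a clobber at the start of the disc has to chew through 512KB before both head labels are gone, and even then the tail pair survives.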
A potential downside is that you have only 512KB rather than 2GB of sacrificial buffer at the start of the disc to absorb whatever clobbering problem first caused this for Thomas. We don't know whether the corruption starts at zero, extends sequentially or strikes randomly, so it's difficult to say.
Thomas: can you post "zpool status" output after you've copied off the data and/or run a scrub? Also, how full is the zpool?
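For reference, something along these lines would cover all three questions (assuming the pool is named "tank" - substitute the real pool name):

```shell
zpool scrub tank       # verify every checksummed block in the pool
zpool status -v tank   # scrub progress, plus any files with errors
zpool list tank        # the CAP column shows how full the pool is
```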