The design of modern file systems has become more aggressive over time. In the old days, storage was very expensive and files were small, so it was common to see systems tuned with their free space reservations cut to the minimum (which meant severely reduced write performance, since an allocator has a much harder time finding contiguous space on a nearly full disk). It is rather different today. I mean, really, a terabyte of disk space? You're not filling that with mostly small files, are you?
So one of the things to note is that modern filesystems like ZFS tend to be written, and increasingly optimized, for handling large files. That brings some hidden assumptions along with it: first, that big files are not commonly written randomly; second, that the workloads which violate that assumption (databases, for instance) frequently don't require sequential read access anyway, so fragmentation is less of an issue for them; and third, that sites running such workloads can address performance problems with other ZFS features, such as the L2ARC, which neatly covers both the sequential and the random read penalties that fragmentation causes.
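To make the fragmentation mechanics concrete, here's a toy model in Python. This is an illustration of the general copy-on-write behavior, not ZFS's actual allocator: a file starts out physically contiguous, and every random overwrite relocates one block to a fresh physical address (new data never lands in place), so a later sequential read of the file turns into seeks.

```python
import random

FILE_BLOCKS = 10_000  # logical size of one "big file", in blocks

def seeks(layout):
    """Seeks a sequential read must make: one for every pair of
    logically adjacent blocks that aren't physically adjacent."""
    return sum(1 for a, b in zip(layout, layout[1:]) if b != a + 1)

random.seed(42)
layout = list(range(FILE_BLOCKS))  # logical block i starts at physical block i
next_free = FILE_BLOCKS            # naive bump allocator for relocated blocks

done = 0
for target in (0, 1_000, 5_000, 20_000):
    while done < target:
        # Copy-on-write: an overwrite never lands in place; the block
        # is rewritten at the next fresh physical address instead.
        layout[random.randrange(FILE_BLOCKS)] = next_free
        next_free += 1
        done += 1
    print(f"{target:>6} random overwrites -> {seeks(layout):>5} seeks per sequential read")
```

The file stays logically intact while its physical locality evaporates, which is exactly the situation where a big read cache like the L2ARC earns its keep.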
On the other hand, ever-larger free space reservations are becoming commonplace in write-heavy environments (good anti-fragmentation policy in any case). Usage patterns are changing as well: a great deal of data is written once and then left untouched for extremely long periods, so even if it ends up somewhat fragmented, that may never become a serious problem.
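Why a healthy free space reserve is anti-fragmentation policy is easy to show with another toy model (again just an illustration, not any real allocator): when free blocks are scarce and scattered, the largest contiguous run an allocator can find shrinks dramatically, so new writes get chopped into small extents.

```python
import random

DISK_BLOCKS = 100_000

def largest_free_run(free_fraction, rng):
    """Scatter free blocks at the given fraction and return the
    longest contiguous free run available to a new write."""
    free = [rng.random() < free_fraction for _ in range(DISK_BLOCKS)]
    best = run = 0
    for f in free:
        run = run + 1 if f else 0
        best = max(best, run)
    return best

rng = random.Random(1)
for pct in (50, 20, 10, 5, 1):
    run = largest_free_run(pct / 100, rng)
    print(f"{pct:>2}% free -> largest contiguous extent: {run} blocks")
```

Real allocators try much harder than random scattering, of course, but the trend holds: the less free space, the smaller the contiguous extents, and the more fragmented every subsequent write.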
This turns out to be bad for small-scale iSCSI users, though, who get the random writes and the fragmentation but don't have an L2ARC to help "fix" the situation. There are probably other use cases that break as well. My suspicion, however, is that we're not going to see a "defrag" tool anytime soon. Those with the most pressing need for such a thing also have other solutions to their performance issues.