You might want to think about how you are using your storage. While mirrors are quick like a bunny, they aren't efficient in terms of storage space.
You would never use RAIDZ for a production VMware backing datastore seeing typical usage; I have no idea why he thinks this is a good idea. RAIDZ is basically evil for this sort of workload, because a RAIDZ vdev delivers roughly the random IOPS of a single disk, so IOPS become a significant bottleneck. Also be aware of the ZFS RAIDZ variable space allocation issue: with a poorly chosen combination of pool geometry and ZVOL block size, you can end up eating massive amounts of space, far more than mirrors would.
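To make the space allocation issue concrete, here's a rough sketch. The arithmetic assumes ashift=12 (4 KiB sectors) on a RAIDZ2 pool, and the pool/zvol name is made up for illustration:

```
# On RAIDZ2 with ashift=12, an 8 KiB volblock is written as
# 2 data sectors + 2 parity sectors = 4 sectors, then padded up to a
# multiple of (parity + 1) = 3 sectors, so 6 sectors = 24 KiB on disk.
# That's 3x the logical size -- worse than a 2-way mirror's 2x.
#
# You can see the inflation by comparing logical vs. physical usage
# ("tank/vm-zvol" is a placeholder; use your own dataset name):
zfs get volblocksize,logicalused,used tank/vm-zvol
```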
It's definitely a best practice to put all your ISOs, backup tarballs, and other big sequential files on RAIDZ in a separate pool, though you can store those on mirror space as well, as long as you don't mind the wasted space. Can't always win that.
More curious is that you've been able to make a single large ZVOL work at all. Be careful, and experiment, before you add more disk space. VMware limits the queue depth per LUN, among other things, so maybe take a look-see at stuff like
http://www.pearsonitcertification.com/articles/article.aspx?p=2240989&seqNum=4
which is just sort of randomly picked because I'm late and I've gotta run.
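If you want to see what you're actually dealing with on the ESXi side, something like the following should show the per-device queue limits (I'm going from memory here, so treat the exact field names as an assumption; output varies by ESXi release):

```
# List storage devices and their queue-depth-related fields on an
# ESXi host (field names may differ slightly between releases):
esxcli storage core device list | grep -iE "Display Name|Queue Depth"
```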
I would be very interested to hear what @jgreco has to say about mixed drive sizes. I mean, I would be all for going with 4TB or 6TB drives for the next 12, but is that going to cause me other issues?
If you replace existing drives in a vdev (a mirror pair, in this case), it will not do anything for you until all drives in that vdev (both, in your case) have been upgraded to the larger size.
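In command form, that looks roughly like this (pool and device names are placeholders; check yours with zpool status):

```
zpool set autoexpand=on tank     # so capacity grows automatically
zpool replace tank ada2 ada8     # swap the first old disk for a bigger one
zpool status tank                # wait for the resilver to complete
zpool replace tank ada3 ada9     # then its mirror partner; resilver again
# If autoexpand was off during the swaps, you can expand in place:
zpool online -e tank ada8
zpool online -e tank ada9
```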
When the size of a vdev is increased, there will be a strong tendency to favor that one vdev for writes for a while, as it appears to have (and does have) significantly more free space than the other vdevs. ZFS does not "stripe" in the way people mistakenly describe; it opportunistically allocates new blocks, strongly preferring the least-full vdev.
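You can watch that imbalance yourself; per-vdev allocation is visible with zpool list ("tank" is a placeholder):

```
# Shows SIZE/ALLOC/FREE per vdev; new writes will strongly favor
# whichever vdev shows the most free space until things even out.
zpool list -v tank
```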
For an array of mirrors, you can bump the size of the array by picking a victim^Wvdev, attaching a third (new) larger drive to the mirror, letting it resilver, detaching one of the old drives, attaching another large drive, letting that resilver, and then detaching the remaining old drive, leaving you with a mirror vdev increased in size. It will see heavier traffic for a while. Make sure the autoexpand property is set before you begin.
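The same dance, sketched as commands (again, pool and device names are made up; ada0/ada1 are the old small disks, ada8/ada9 the new large ones):

```
zpool set autoexpand=on tank    # set this BEFORE you begin
zpool attach tank ada0 ada8     # new large disk becomes a 3rd mirror side
zpool status tank               # wait for the resilver to finish
zpool detach tank ada0          # drop old disk #1
zpool attach tank ada1 ada9     # second large disk; resilver again
zpool detach tank ada1          # drop old disk #2; the vdev grows
```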
You can also add additional vdevs, which appears to be the original suggestion, and you will gain additional IOPS as the number of vdevs increases. Again, ZFS will favor the new drives, even more aggressively than in the replacement scenario, since a brand-new vdev is completely empty and looks maximally attractive to the allocator. If you add a whole bunch at once, the new writes spread across all of them, so this is unlikely to be a problem.
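Adding a vdev is a one-liner (hypothetical names again; double-check before you run it, because vdev additions are hard to undo):

```
zpool add tank mirror ada10 ada11   # new mirror vdev joins the pool
zpool list -v tank                  # the new vdev shows up nearly 100% free
```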