So, you are expecting all data to be rewritten at each step? The efficiency gain would then be "original width compared to final" plus "original-plus-one width compared to final" plus "original-plus-two width compared to final", and so on?
I am saying "compared to final" because in your model data are rewritten multiple times, which gets a bit wild to track. So for a 4-to-10 expansion, it would be: 4 disks' worth of data at 4-wide compared to 10-wide, plus 1 disk's worth at 5-wide compared to 10-wide, plus 1 disk's worth at 6-wide compared to 10-wide, and so on.
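The arithmetic above can be sketched in a few lines. This assumes single-parity (RAIDZ1) and that exactly one disk's worth of new data is written at each intermediate width; both are illustrative assumptions, not something established in this thread, and the real numbers depend on the user's fill pattern and parity level.

```python
# Illustrative sketch: capacity lost by expanding one disk at a time
# (4-wide -> 10-wide) and filling as you go, versus writing everything
# at the final width. Assumes RAIDZ1 (single parity); all names are
# hypothetical.

PARITY = 1  # assumed single-parity vdev


def efficiency(width: int) -> float:
    """Fraction of raw capacity usable at a given stripe width."""
    return (width - PARITY) / width


def stepwise_loss(start: int, final: int) -> float:
    """Capacity lost (in units of one disk) versus the final width,
    under the 'expand one by one while continuing to fill, never
    rewrite' model described above."""
    final_eff = efficiency(final)
    # `start` disks' worth of data written at the starting width...
    loss = start * (final_eff - efficiency(start))
    # ...then one more disk's worth at each intermediate width.
    for width in range(start + 1, final):
        loss += final_eff - efficiency(width)
    return loss


for width in range(4, 11):
    print(f"{width}-wide efficiency: {efficiency(width):.1%}")
print(f"total loss vs final width (in disks): {stepwise_loss(4, 10):.2f}")
```

The bulk of the loss comes from the original 4 disks written at 75% efficiency instead of 90%; each later step contributes progressively less, which is why the granularity question below matters mostly for the early steps.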
The question I have is how useful that degree of granularity is, simply because rewriting data after expansion is part of the expansion: while the layout is not optimal during use, it becomes optimal at the end. I guess you are comparing against "don't rewrite at all and expand one by one while continuing to fill", whereas I am comparing against "don't rewrite at all and expand to final width before writing anything more". I can see where your model could come in handy as well, for users who need to decide whether to rewrite and who will expand their vdev very slowly.