Your disks just seem to be sitting idle during heavy reads. They're not delivering much bandwidth each, but they aren't taking much time to do it either; if it were heavy fragmentation, I'd expect higher ms/read latencies.
What does the ARC hit rate look like during an svMotion operation? I know prefetching has a big impact on svMotion over NFS because it can accurately pick up on the read pattern from the .vmdk file, but iSCSI is block-level, so the read pattern is more scattered.
What HBA and drives are being used in the host?
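For reference, here's one way to watch the ARC hit rate while the svMotion is running. The sysctl names below assume FreeNAS/FreeBSD (on Linux, the same counters live in /proc/spl/kstat/zfs/arcstats), and the sample numbers are made up for illustration:

```shell
# During the svMotion, sample the raw ARC counters. On FreeNAS/FreeBSD
# they're sysctls (on Linux: /proc/spl/kstat/zfs/arcstats):
#   sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# Take two samples a few seconds apart and plug the deltas in below.
# Example deltas (made up):
hits=94213; misses=11892
awk -v h="$hits" -v m="$misses" 'BEGIN{printf "ARC hit rate: %.1f%%\n", 100*h/(h+m)}'
```

If your build ships an `arcstat` script, that will give you the same hit/miss numbers live, one line per second, which is easier to eyeball mid-transfer.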
Attached are screenshots of svMotion from the iSCSI SAN to local M.2. I am svMotioning my vCenter VM, which is about 100 GB right now. The first screenshots are from the iSCSI mirror pool to local M.2.
I then tested svMotioning from local M.2 to a test Samsung 850 Pro SSD shared via iSCSI. This was weird: it transferred at only about 50 MB/sec to the test iSCSI share. Not sure if this 850 Pro is a failing drive or what (it used to be my L2ARC). I then svMotioned back to local M.2 and transfers were about 150-300 MB/sec.
I then svMotioned vCenter back to the standard iSCSI SAN mirror pool, and transfers were anywhere from 300-700 MB/sec.
As for the drives: I know you are instantly going to say that's my problem, and I'm sure it's contributing somewhat, but I don't think it's the full issue.
I have a mix of drives.
I have a Seagate Barracuda - ST2000DM006-2DM164
A couple of WD Greens - WD20EARS
And some other Barracudas - ST2000DM001-9YN164
The drives are older, and I plan to replace them in the future with shucked 8 TB Elements. I'm currently using them because they were all free. I have replaced every drive showing early signs of failure, and I've had no problems resilvering. I was initially running 2x6 RAIDZ2s but switched to mirrors to try to get extra performance; I didn't really get much.