Just read through that HardForum thread. You've definitely been at this for ages, and been learning along the way. Good stuff. :)
One thing that stands out to me is that, in your screenshots over time, it seems you're not taking into account the type of physical slot you're putting the cards into (e.g. x4, x8, x16 PCIe slots).
From memory - as it's been a while for me too - the MHGH28-XTC cards are designed for PCIe 2.0 x16. So they'll be able to use their full bandwidth (eg 1GB/s) in an x16 slot. Put them in an x8 slot and they'll technically still work... but your maximum throughput drops, as there are fewer lanes on the slot for them to move data through. Not sure if the cards will even work in x4 slots. If they do, they'll likely be even more bandwidth constrained.
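To put rough numbers on the lane scaling (back-of-envelope from the PCIe 2.0 spec, not from the card's datasheet, so treat it as illustrative only):

```python
# Rough PCIe slot bandwidth math. PCIe 2.0 runs at 5 GT/s per lane with
# 8b/10b encoding, which works out to ~500 MB/s usable per lane, per
# direction. Real-world numbers will be lower due to protocol overhead.
PCIE2_MB_PER_LANE = 500

def slot_bandwidth_mb(lanes):
    """Approximate one-direction bandwidth of a PCIe 2.0 slot."""
    return lanes * PCIE2_MB_PER_LANE

for lanes in (16, 8, 4):
    print(f"x{lanes}: ~{slot_bandwidth_mb(lanes)} MB/s")
```

So even if the card itself can't saturate an x16 slot, dropping to x4 cuts the ceiling to a quarter of what the connector could otherwise do.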
> Mellanox 2.9.1000
Ahhh, yeah. That's what's on most of mine.
> I was told 2.9.1200 enables something that makes network performance better
Sounds like RDMA mode, which (under Windows) allows for much higher performance. The Windows SMB stack is reported to be able to use RDMA (the feature is called SMB Direct) when talking to other Windows boxes. Not something I've played around with, but if you can get it running then why not? :)
I did experiment with getting the 2.9.1200 drivers onto some of my cards a while back too, just to see how it goes, and got it working. It required pulling apart the driver packages from HP's downloads, as they include some extra files that allow regenerating the firmware with new settings.
Got it working eventually, but it was a real time sink: searching the HP(E?) website for Mellanox drivers, downloading them, installing and extracting bits, just to check whether each driver package had the right pieces.
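If anyone repeats that search, something like this quick script is how I'd triage a pile of extracted packages now. Note the file patterns here are guesses from memory of what the firmware-regeneration bits looked like, not a definitive list:

```python
import fnmatch
import os

# Patterns that *might* indicate a package contains the firmware pieces.
# These are guesses from memory - adjust to whatever you actually find.
INTERESTING = ["*.mlx", "*.ini", "flint*", "mlxburn*"]

def find_firmware_bits(root):
    """Walk an extracted driver package, return paths matching any pattern."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(fnmatch.fnmatch(name.lower(), pat) for pat in INTERESTING):
                hits.append(os.path.join(dirpath, name))
    return hits

# Usage: point it at each extracted package directory in turn
# for path in find_firmware_bits("extracted/some_hp_package"):
#     print(path)
```

Beats clicking through each download by hand, anyway.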
Looking through my archives... I didn't keep any of the end results. Dammit. I only have some of the driver packages I downloaded, and they're not even sorted into good/bad.
That's kinda weird, as I normally archive the end result of things too, just so I don't lose work. :(