I'm sure you want a yes/no answer. And the bottom line is that you can't expect a yes/no answer to your question. It takes more brains than just a blind yes/no. iperf isn't the end-all you're trying to argue against (it has its own limitations, which require you, the admin, to actually know something about networking, which sadly most people don't). But it's a
damn good starting point for seeing what is going on without relying on VERY variable and unreliable things such as "throughput of CIFS". You might not remember CIFS from the XP days, but getting over about 40MB/sec on a single link was pretty much impossible because of how CIFS worked. Luckily for all of us, CIFS was updated to the SMB 2.0 spec, which took a protocol originally designed in the 1980s, when high latency and low throughput were standard (and which was designed for exactly those conditions), and made it work well on modern networks (low latency/high throughput). So yeah, using a file-sharing protocol as a way to benchmark your network performance is a horrible way to do business. Just check out this link to a fairly technical article on the "complete redesign" of CIFS
here. There's plenty of testing people have done comparing SMB1 and SMB2. Some of it shows more than a tenfold increase in performance between the two. That clearly demonstrates that CIFS should NEVER be used as a benchmarking tool if a mere protocol change can produce that kind of fluctuation in your test results.
Actual load on the network can differ between the moment you run iperf and the moment you do a file transfer, and that alone can skew the comparison. There are dozens of factors that can affect both file-transfer throughput and iperf. iperf is nice because it can tell you that something is wrong.
In this case, the fact that it was only hitting about half of the theoretical maximum tells me something is VERY wrong with the network setup. If it had been 850Mb/sec, I would have dismissed the network setup as the problem. Not surprisingly, the iperf result and the speeds the OP is getting are pretty much in line with each other. Most people who complain about network performance never see real network transfer speeds that exceed iperf (usually they are exactly the same as iperf or lower, because the bottleneck isn't the network). This tends to validate iperf as a useful tool for diagnostic purposes.
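The reasoning above boils down to a simple sanity check: measured iperf throughput far below the link's theoretical capacity points at the network itself. Here's a minimal sketch of that rule of thumb; the function name, the 85% threshold, and the example numbers are my own illustrative assumptions, not anything from iperf itself.

```python
def link_looks_healthy(measured_mbps: float, link_mbps: float = 1000.0,
                       threshold: float = 0.85) -> bool:
    """Rule-of-thumb check (hypothetical threshold): a gigabit link with
    normal TCP/IP overhead should sustain roughly 900+ Mb/sec, so a
    result well below ~85% of line rate suggests a network-side problem."""
    return measured_mbps >= threshold * link_mbps

print(link_looks_healthy(850))  # True  -- close to line rate, network is probably fine
print(link_looks_healthy(500))  # False -- half of gigabit, something is VERY wrong
```

The exact threshold is a judgment call, but the point stands: iperf gives you a number you can compare directly against the physical link speed, which a file copy never cleanly does.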
The problem with using throughput from a file transfer is that its behavior is slightly different from an iperf test. For one, a pool that is the limiting factor can give a falsely low (or high) value. Protocol compression combined with transmitting large quantities of zeros can show you 1GB/sec+ despite the fact that we know better.
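You can see why zero-filled test files lie about throughput with a quick compression experiment. This is just a sketch using Python's zlib as a stand-in for whatever compression a transport might apply; actual protocols differ, but the ratio tells the story:

```python
import os
import zlib

zeros = bytes(1_000_000)             # 1 MB of zeros, like a sparse test file
random_data = os.urandom(1_000_000)  # 1 MB of incompressible data

# A compressing transport barely moves any real bytes for the zeros,
# so the "transfer speed" for such a file is fiction.
zeros_compressed = len(zlib.compress(zeros))
random_compressed = len(zlib.compress(random_data))

print(zeros_compressed)   # on the order of a kilobyte: ~1000x "free" speedup
print(random_compressed)  # roughly 1 MB: no free lunch with real data
```

iperf sidesteps this entire class of problems because it measures the wire, not the payload handling above it.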
Additionally, iperf can give very bizarre results if you do things like improperly set up jumbo frames. We've seen people who would get just 100Mb/sec for one second, then 800Mb/sec for the next. A file transfer that just averages that out to 400Mb/sec isn't too useful. But the bizarre iperf results are VERY useful!
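A tiny arithmetic sketch of why the per-interval readout matters (the sample values are hypothetical, echoing the oscillation described above):

```python
# Ten one-second intervals oscillating between 100 and 800 Mb/sec,
# as you might see in iperf's per-interval output with broken jumbo frames.
samples = [100, 800] * 5

# A file transfer only ever shows you this single number...
average = sum(samples) / len(samples)
print(average)  # 450.0 -- looks like a mediocre-but-steady link

# ...while the interval-by-interval swing is the actual diagnostic clue.
spread = max(samples) - min(samples)
print(spread)   # 700 -- a wild oscillation no average will reveal
```

That's the whole point: iperf's second-by-second output exposes the pattern, while any averaged throughput number buries it.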
So yes, I stand behind iperf as a troubleshooting and diagnostic tool, just as most everyone else who does network troubleshooting does. There's a reason it's considered an industry standard, is supported by a lot of network diagnostic equipment, and comes built in with FreeNAS, Linux, FreeBSD, ESXi, and even specialized networking OSes such as m0n0wall and pfSense. It really is an amazing tool. Interpreting the results is just as important as understanding how the test works. If you don't want to use it or rely on it, that's totally your prerogative. But there's no other tool out there that benchmarks the network subsystem and only the network subsystem (which is extremely important when troubleshooting and benchmarking) any better (and there quite possibly never will be).
The only widely used OS I know of that doesn't ship iperf is Windows. And I won't even go there, because Windows really is a bloated piece of sh*t that needs to die with fire.