- Jul 10, 2014
I still don't see how this suboptimal IO pipeline can give such bad overall results. I mean, yes, the IO performance may be bad considering the array, but the network performance is just abysmal. He can read at 1200 MB/s from ARC, yet gets only 300 MB/s on the wire...
In order to make sure 10 GbE transfer rates can be met, I'd (rough command sketches below):
- iperf that sucker really hard
- then try a test with smaller files that would be read entirely from ARC, to eliminate the IO pipeline from the equation
I wouldn't think about getting a new HBA before those tests are green. The current rig really should be able to pass these simple tests.
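Something along these lines (iperf 2 syntax; hostname and paths are placeholders, and on FreeBSD dd wants bs=1m instead of bs=1M):

    # raw TCP throughput, leaving disks and ZFS out of it entirely
    iperf -s                           # on the NAS
    iperf -c nas.example -t 30 -P 4    # on the client: 30 seconds, 4 parallel streams

    # prime the ARC with a file small enough to fit in RAM...
    dd if=/tank/test/smallfile of=/dev/null bs=1M       # locally on the NAS
    # ...then read it over NFS; the second read should come straight
    # from ARC, so the disk pipeline plays no part in the result
    dd if=/mnt/nfs/smallfile of=/dev/null bs=1M         # on the client

If iperf can't saturate the link, no new HBA will fix the NFS numbers.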
BTW: Maybe this DTrace script can provide some insights into the IO problem. It wasn't designed for SSDs, but I'd still give it a try. You know, just to make sure that the latencies are as low as expected.
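In the same vein, even a one-liner built on the io provider shows the latency distribution per I/O; this is the classic illumos-style pattern (FreeBSD's io provider exposes slightly different probe arguments, so take it as a sketch):

    dtrace -n 'io:::start { ts[arg0] = timestamp; }
               io:::done /ts[arg0]/ {
                   /* key each outstanding I/O by its buf pointer and
                      aggregate completion latency into a power-of-two histogram */
                   @["I/O latency (ns)"] = quantize(timestamp - ts[arg0]);
                   ts[arg0] = 0;
               }'

With healthy SSDs you'd expect the bulk of the distribution well below a millisecond.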
Thanks a ton for these suggestions! I did a dd test while forcing a sync write, and the speeds were nearly as fast as without the forced sync write, so it seems that is not the issue. (Since NFS forces sync writes, this seemed like a valid test.)
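For anyone wanting to reproduce it, something like this works (tank/test is a placeholder; setting sync=always on the dataset is one way to force the sync writes):

    zfs set sync=always tank/test                        # force every write to be synchronous
    dd if=/dev/zero of=/tank/test/tf bs=1M count=8192    # write 8 GiB and note the rate
    zfs inherit sync tank/test                           # restore the inherited default

Comparing that rate against a run with the default sync=standard is what told me sync handling isn't the bottleneck here.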
I'll give iperf a go and see what happens there. If I *can* narrow it down to a piece of hardware, that would be nice, as this is driving me crazy :)