Hello
I have FreeNAS 9.2.1.5 running in an ESXi 5.1 VM with 12 drives.
When NFS traffic is flowing, zpool iostat reports around 100 MB/s of read bandwidth, yet the network throughput is only around 10 MB/s, which seems odd to me.
This is what zpool iostat shows:

               capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
pool1        15.1T  6.32T    899      0   111M      0
pool1        15.1T  6.32T    957     74   118M  2.43M
pool1        15.1T  6.32T    875     73   108M  1.78M
pool1        15.1T  6.32T    774    109  95.6M  4.34M
pool1        15.1T  6.32T    720     73  88.8M  2.37M
pool1        15.1T  6.32T    866     69   107M  2.31M
pool1        15.1T  6.32T    762    119  94.3M  5.64M
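To put a number on the mismatch, here is a quick back-of-envelope calculation from the iostat samples above and the ~9 MB/s NIC figure (just my own arithmetic, not anything the system produced):

```python
# Read bandwidth samples from the zpool iostat output above (MB/s)
iostat_read_mbps = [111, 118, 108, 95.6, 88.8, 107, 94.3]
# NIC TX is roughly 70 Mbps, i.e. about 9 MB/s
nic_tx_mbps = 9

avg_read = sum(iostat_read_mbps) / len(iostat_read_mbps)
amplification = avg_read / nic_tx_mbps

print(f"avg pool read: {avg_read:.1f} MB/s")                   # ~103.2 MB/s
print(f"read amplification vs NIC TX: {amplification:.1f}x")   # ~11.5x
```

So the pool is reading on the order of 11x more data than ever leaves the box.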
You will see in the attached graph of interface traffic that the TX on the NIC is only around 70 Mbps (about 9 MB/s).
That is a wild difference, and I have checked that prefetching is disabled.
The output from arcstat is as follows:

    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
05:21:52     0     0      0     0    0     0    0     0    0    11G   11G
05:21:53  1.7K   769     44    42    4   727  100    10    1    11G   11G
05:21:54  2.2K  1.2K     51    29    2  1.1K  100    12    1    11G   11G
05:21:55  2.2K  1.0K     46    16    1  1.0K  100     9    1    11G   11G
05:21:56  1.3K   645     47    29    3   616   99     5    1    11G   11G
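For what it's worth, averaging those samples suggests roughly half of the ARC lookups miss and go to disk (a quick sketch of my own, reading "K" as thousands):

```python
# (total reads, misses) per second, taken from the read/miss columns above
samples = [
    (1700, 769),
    (2200, 1200),
    (2200, 1000),
    (1300, 645),
]
reads = sum(r for r, _ in samples)
misses = sum(m for _, m in samples)
print(f"overall ARC miss rate: {100 * misses / reads:.0f}%")  # ~49%
```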
And this is my zpool config:

pool1                                          ONLINE  0  0  0
  raidz2-0                                     ONLINE  0  0  0
    gptid/2ab18178-7c00-11e2-95eb-000c2976f274 ONLINE  0  0  0
    gptid/25cf095d-8620-11e2-a143-000c2976f274 ONLINE  0  0  0
    gptid/2badff45-7c00-11e2-95eb-000c2976f274 ONLINE  0  0  0
    gptid/2c2f92ad-7c00-11e2-95eb-000c2976f274 ONLINE  0  0  0
    gptid/2c771923-7c00-11e2-95eb-000c2976f274 ONLINE  0  0  0
    gptid/2cbf1e95-7c00-11e2-95eb-000c2976f274 ONLINE  0  0  0
  raidz2-1                                     ONLINE  0  0  0
    gptid/5cd38309-89e7-11e3-a2b6-000c2976f274 ONLINE  0  0  0
    gptid/79e65895-a14e-11e2-a143-000c2976f274 ONLINE  0  0  0
    gptid/00d79ab4-7c01-11e2-95eb-000c2976f274 ONLINE  0  0  0
    gptid/01505fb7-7c01-11e2-95eb-000c2976f274 ONLINE  0  0  0
    gptid/01c84423-7c01-11e2-95eb-000c2976f274 ONLINE  0  0  0
    gptid/0240dbec-7c01-11e2-95eb-000c2976f274 ONLINE  0  0  0
Is what I am seeing normal? I have checked in VMware, and it also shows around 100 MB/s hitting the disks, which backs up the zpool iostat output.
It appears the system is reading far more data than it is sending to the NFS client.
Does anyone have any ideas?