I'm trying to tune my small NAS: an Atom D525 board, 4 GB RAM, and 4x2TB disks in RAID-Z1.
Network speed measured with iperf or dd|nc is over 100 MB/s in both directions; the ZFS pool reads at 260 MB/s and writes at 160 MB/s.
Everything seems perfect, but when I simultaneously read from disk and send to the LAN, network speed drops to ~25 MB/s.
At first I thought it was a PCIe problem, but it is not: while iperf or dd|nc is sending data over the network, even a simple dd if=/dev/zero of=/dev/null drops LAN throughput to 25% of its normal speed.
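For reference, these are the kinds of commands I ran (the receiver address and port are just examples; adjust for your setup):

```shell
# Raw network test: the receiver runs `nc -l 5001 > /dev/null`,
# then the NAS streams zeros to it (address/port are examples):
#   dd if=/dev/zero bs=1M | nc 192.168.1.100 5001

# The purely local zero->null copy that is enough to cut LAN
# throughput to ~25% while a network transfer is running:
dd if=/dev/zero of=/dev/null bs=1M count=1024
```

Running the local dd on its own is fine; the slowdown only appears when it runs concurrently with an outbound transfer.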
In both cases (disk read or zero->null copy) the CPU is about 55% busy, and the interrupt rate shown by top is about 7-8%.
When the NAS is receiving data from the network, running dd from disk or /dev/zero does not cause such a big slowdown - LAN speed only drops from 100 MB/s to 85 MB/s.
In real use this phenomenon shows up as writes being much faster than reads over NFS or SMB (70 MB/s write, 30 MB/s read).
Tests were made with FreeNAS 8.0.4-BETA1 and FreeNAS 0.7.5.9496 (which is even worse than v8).
Does anyone have an idea what's going on? Help!