Average performance with good hardware

Status: Not open for further replies.

mullepol (Dabbler; joined Oct 12, 2011; 10 messages)
I've finally set up my FreeNAS 8 build, and after a few hitches with USB 3 and Realtek 8111E incompatibilities it is up and running. I'm a bit disappointed with the throughput, though, especially compared to my previous Linux-based file server.

I'm getting good numbers internally on my 5 drive raidz1:
dd if=/dev/zero of=tmp.dat bs=2048k count=50k
107374182400 bytes transferred in 420.021247 secs (255639883 bytes/sec)
dd if=tmp.dat of=/dev/null bs=2048k count=50k
107374182400 bytes transferred in 579.030657 secs (185437819 bytes/sec)
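As a quick sanity check, the rates dd reports are just bytes divided by seconds; converting them to MiB/s (a sketch, using the figures from the output above). One caveat worth noting: /dev/zero produces all-zero, highly compressible data, so if compression is ever enabled on the dataset, a dd write test like this will overstate real throughput.

```shell
# Recompute the dd throughput figures in MiB/s from bytes and seconds.
# Figures are copied from the dd output above.
awk 'BEGIN {
    printf "write: %.0f MiB/s\n", 107374182400 / 420.021247 / 1048576
    printf "read:  %.0f MiB/s\n", 107374182400 / 579.030657 / 1048576
}'
# → write: 244 MiB/s
# → read:  177 MiB/s
```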

I'm a bit confused by the inverted results, with writing much faster than reading.

The problem, however, is remote throughput. Using iperf with various window sizes I get roughly 400 Mbit/s with FreeNAS as the client and 300 Mbit/s with it as the server, to a single client. With two clients I get roughly 500 Mbit/s in total.
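The window size matters here because a single TCP stream can never exceed window / RTT. A rough bandwidth-delay-product calculation, assuming a 1 Gbit/s link and a 1 ms round-trip time (the RTT is an assumed ballpark for a LAN, not a measurement from this setup):

```shell
# Bandwidth-delay product: the TCP window needed to keep the link full.
# Assumed values: 1 Gbit/s link, 1 ms round-trip time.
awk 'BEGIN {
    bw_bits = 1000000000   # link speed in bits/s
    rtt_s   = 0.001        # round-trip time in seconds
    bdp = bw_bits / 8 * rtt_s
    printf "window needed: %.0f KiB\n", bdp / 1024
}'
# → window needed: 122 KiB
```

By the same formula, a 64 KiB window at 1 ms RTT caps a single stream at about 524 Mbit/s (65536 × 8 / 0.001), which is in the same ballpark as the figures seen here.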

I've tried FTP and CIFS with multiple clients, all with similar results. CrystalDiskMark over CIFS gives:
Sequential Read : 27.160 MB/s
Sequential Write : 52.057 MB/s
Random Read 512KB : 25.480 MB/s
Random Write 512KB : 50.648 MB/s
Random Read 4KB (QD=1) : 8.861 MB/s [ 2163.2 IOPS]
Random Write 4KB (QD=1) : 8.582 MB/s [ 2095.1 IOPS]
Random Read 4KB (QD=32) : 34.891 MB/s [ 8518.3 IOPS]
Random Write 4KB (QD=32) : 50.974 MB/s [ 12444.8 IOPS]

I can't find the bottleneck; none of CPU, memory, or network is saturated. Any suggestions would be helpful.

My setup is:
FreeNAS-8.0.1-RELEASE-amd64 (8081)
CPU: Intel Pentium G620 @ 2.60GHz
RAM: 8161MB
MB: Gigabyte GA-PA65-UD3-B3 (Intel H61)
NIC: Intel PRO/1000 GT
Drives: raidz1 with 1 x WDC WD20EARS, 4 x WDC WD20EARX
 

mullepol (Dabbler; joined Oct 12, 2011; 10 messages)
I've swapped back to the onboard NIC (Realtek 8111E) from the Intel PRO/1000 GT (PCI), since the upgrade from 8.0 to 8.0.1 added support for it. To my surprise I got significantly better results:

Sequential reads over CIFS: 48 MB/s
Sequential writes over CIFS: 80 MB/s
iperf with server on Windows 7 (64k window): 600 Mb/s
iperf with server on FreeNAS (64k window): 630 Mb/s

Still not all the way there, but much, much better now. I still don't understand why the CIFS read is so low, though. An FTP transfer with Windows Explorer as the client gives roughly the same result.
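Slow CIFS reads sometimes respond to Samba tuning. The snippet below is a sketch of standard smb.conf parameters one could experiment with, not a confirmed fix; the defaults in the Samba build shipped with FreeNAS 8 may already be sensible, and results will vary per client.

```ini
[global]
    # Larger socket buffers and no Nagle delay for bulk transfers
    socket options = TCP_NODELAY SO_RCVBUF=131072 SO_SNDBUF=131072
    # Allow large raw reads/writes and zero-copy sends where supported
    read raw = yes
    write raw = yes
    use sendfile = yes
    max xmit = 65535
```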
 

mullepol (Dabbler; joined Oct 12, 2011; 10 messages)
I've installed Ubuntu 11.10 on the client, with significantly different results.

iperf server on FreeNAS
Code:
choy@artur:~$ iperf -t 100 -i 10 -c 192.168.0.44
------------------------------------------------------------
Client connecting to 192.168.0.44, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.3 port 50101 connected with 192.168.0.44 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   936 Mbits/sec
[  3] 10.0-20.0 sec  1.09 GBytes   936 Mbits/sec
[  3] 20.0-30.0 sec  1.09 GBytes   935 Mbits/sec
[  3] 30.0-40.0 sec  1.09 GBytes   934 Mbits/sec

iperf server on ubuntu client
Code:
------------------------------------------------------------
Client connecting to 192.168.0.3, TCP port 5001
TCP window size: 32.5 KByte (default)
------------------------------------------------------------
[  3] local 192.168.0.44 port 39593 connected with 192.168.0.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   918 MBytes   770 Mbits/sec
[  3] 10.0-20.0 sec   918 MBytes   770 Mbits/sec
[  3] 20.0-30.0 sec   918 MBytes   770 Mbits/sec

Writing to and reading from FreeNAS over CIFS
Code:
choy@artur:/mnt/pub2$ dd if=/dev/zero of=TV/ddtest bs=1024k count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 9.50766 s, 113 MB/s
choy@artur:/mnt/pub2$ dd if=TV/ddtest of=/dev/null bs=1024k count=1k
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 24.0196 s, 44.7 MB/s


If I interpret this correctly, writing to FreeNAS now runs at full gigabit speed (it would stay network-limited even if disk speed increased), while reading is still slow.
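Converting the dd results above to line rate supports that reading: the write path is essentially saturating gigabit Ethernet, the read path is nowhere near it.

```shell
# Convert the CIFS dd results above to Mbit/s on the wire.
awk 'BEGIN {
    printf "write: %.0f Mbit/s\n", 1073741824 / 9.50766 * 8 / 1000000
    printf "read:  %.0f Mbit/s\n", 1073741824 / 24.0196 * 8 / 1000000
}'
# → write: 903 Mbit/s
# → read:  358 Mbit/s
```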
 