Hi All,
I have built my new FreeNAS Box and wanted to run a few standard performance tests before I put it into operation. Here are the specs of interest:
FreeNAS 9.1 Release x64:
Supermicro X9SCL+-F Motherboard
Intel Core i3 3240
Amicroe DDR3-1333 ECC UDIMM 8GB x 2
WD30EFRX 3TB Red SATA3 64MB x 6
Sandisk 8GB Cruzer
PC Windows 8 x64:
Gigabyte X79-UD3 Motherboard
Intel Core i7-3820 @ 4.00GHz
Kingston DDR1666 8GB x 2
SanDisk 240GB SSD - O/S disk
Seagate 2TB ST2000DM001 - Data Disk
Network:
Billion BiPAC 7800N
All devices are connected directly to the switch
All parts on the FreeNAS box are brand new.
I started by running badblocks on each of the 6 disks individually, using 2 passes with random patterns, followed by a SMART long self-test on each disk.
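For anyone wanting to repeat the burn-in, the sequence above can be sketched as a simple loop. The device names (ada0..ada5) and the exact badblocks flags are assumptions on my part, so check them against your own system; the echo prefixes make this safe to paste as a dry run.

```shell
# Dry-run sketch of the burn-in loop. Device names ada0..ada5 are an
# assumption -- confirm yours first (e.g. with "camcontrol devlist").
# Remove the echo prefixes to actually run the commands.
# WARNING: badblocks -w destroys all data on the disk.
for disk in ada0 ada1 ada2 ada3 ada4 ada5; do
  echo "badblocks -ws -t random -p 2 /dev/$disk"  # destructive random-pattern test, 2 passes
  echo "smartctl -t long /dev/$disk"              # then queue a SMART long self-test
done
```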
The pool configuration is as follows:
Code:
[root@freenas] ~# zpool status
  pool: tank
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/10e54aee-04bf-11e3-9d21-0025907c4554  ONLINE       0     0     0
            gptid/11513510-04bf-11e3-9d21-0025907c4554  ONLINE       0     0     0
            gptid/11bcd8f8-04bf-11e3-9d21-0025907c4554  ONLINE       0     0     0
            gptid/122a390e-04bf-11e3-9d21-0025907c4554  ONLINE       0     0     0
            gptid/1297c1b4-04bf-11e3-9d21-0025907c4554  ONLINE       0     0     0
            gptid/13043d6b-04bf-11e3-9d21-0025907c4554  ONLINE       0     0     0

errors: No known data errors

[root@freenas] ~# zfs get compression
NAME  PROPERTY     VALUE  SOURCE
tank  compression  lz4    local
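As a quick sanity check on the pool geometry: a 6-disk RAIDZ2 leaves 4 data disks' worth of usable space, before ZFS metadata and slop overhead. A one-liner for the rough numbers:

```shell
# Rough capacity of a 6-disk RAIDZ2 of 3 TB drives:
# usable = (disks - 2 parity) * size, ignoring ZFS overhead.
awk 'BEGIN { disks=6; parity=2; tb=3
             printf "%d TB raw, ~%d TB usable\n", disks*tb, (disks-parity)*tb }'
# prints: 18 TB raw, ~12 TB usable
```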
First, I ran some iperf tests using the default window size of 64KB:
Code:
[root@freenas] ~# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.200 port 5001 connected with 192.168.1.2 port 50960
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   842 MBytes   706 Mbits/sec
Code:
T:\Downloads\iperf-2.0.5-2-win32>iperf.exe -c freenas -p 5001 -f m
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[  3] local 192.168.1.2 port 50960 connected with 192.168.1.200 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   842 MBytes   707 Mbits/sec
I then ran some iperf tests using a window size of 128KB:
Code:
[root@freenas] ~# iperf -s -w 128k
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte
------------------------------------------------------------
[  4] local 192.168.1.200 port 5001 connected with 192.168.1.2 port 51119
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.09 GBytes   937 Mbits/sec
Code:
T:\Downloads\iperf-2.0.5-2-win32>iperf.exe -c freenas -p 5001 -f m -w 128k
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.12 MByte
------------------------------------------------------------
[  3] local 192.168.1.2 port 51119 connected with 192.168.1.200 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1118 MBytes   938 Mbits/sec
As you can see, a window size of 128K pretty much saturates my gigabit link. To confirm, I pushed more parallel threads through iperf to see whether I could get any more bandwidth:
Code:
[root@freenas] ~# iperf -s -w 128k
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 128 KByte
------------------------------------------------------------
[  4] local 192.168.1.200 port 5001 connected with 192.168.1.2 port 51406
[  5] local 192.168.1.200 port 5001 connected with 192.168.1.2 port 51407
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   562 MBytes   471 Mbits/sec
[  5]  0.0-10.0 sec   562 MBytes   471 Mbits/sec
[SUM]  0.0-10.0 sec  1.10 GBytes   942 Mbits/sec
Code:
T:\Downloads\iperf-2.0.5-2-win32>iperf.exe -c freenas -p 5001 -f m -w 128k -P 2
------------------------------------------------------------
Client connecting to freenas, TCP port 5001
TCP window size: 0.12 MByte
------------------------------------------------------------
[  4] local 192.168.1.2 port 51407 connected with 192.168.1.200 port 5001
[  3] local 192.168.1.2 port 51406 connected with 192.168.1.200 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   562 MBytes   471 Mbits/sec
[  3]  0.0-10.0 sec   562 MBytes   471 Mbits/sec
[SUM]  0.0-10.0 sec  1124 MBytes   942 Mbits/sec
As you can see, the [SUM] above is roughly the same, so my network throughput is essentially at its optimum. I ran most of the above tests multiple times and they produced the same results with little variance.
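This matches what the bandwidth-delay product predicts: to fill a link you need a window of at least rate x RTT. Assuming a LAN round-trip time of around 1 ms (my estimate, not measured above), that works out to roughly 122 KB, which is why a 64KB window caps out below line rate while 128KB saturates it:

```shell
# Bandwidth-delay product: window (bytes) needed to keep a link full.
# rate = 1 Gbit/s; rtt = 1 ms is an assumed LAN round-trip time.
awk 'BEGIN { rate=1e9; rtt=0.001
             printf "BDP = %.0f KB\n", rate/8*rtt/1024 }'
# prints: BDP = 122 KB
```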
Next, I decided to start off with some base dd benchmarks.
Code:
[root@freenas] ~# dd if=/dev/zero of=/mnt/tank/testfile bs=4M count=10000
10000+0 records in
10000+0 records out
41943040000 bytes transferred in 20.669191 secs (2029254060 bytes/sec)
[root@freenas] ~# dd if=/dev/zero of=/mnt/tank/testfile2 bs=1048576
500854+0 records in
500853+0 records out
525182435328 bytes transferred in 229.041710 secs (2292955440 bytes/sec)
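In more familiar units, those two runs come out to roughly 1.9 and 2.1 GiB/s. Bear in mind that /dev/zero data compresses to almost nothing under lz4, so figures this high largely reflect the CPU/compression path rather than raw disk throughput:

```shell
# Convert dd's bytes/sec figures from the two runs above to GiB/s.
awk 'BEGIN { printf "run1: %.2f GiB/s, run2: %.2f GiB/s\n",
             2029254060/2^30, 2292955440/2^30 }'
# prints: run1: 1.89 GiB/s, run2: 2.14 GiB/s
```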
The second dd pass had no count specified and ran for just under four minutes. I'm open to running more tests if anyone has ideas or wants me to benchmark the system in any other way. Once I get past these standard tests, I'll run performance tests against my real workload, which is mostly CIFS from my PC.
Cheers,
Tabmow