Some Interesting Performance Numbers on a New FreeNAS Server


bestboy (Contributor)
...and just for completeness, the possible raidz pools with 10 drives (a command-line sketch for one of these layouts follows the list):

d) raidz10: 3 raidz1 (3x (2+1), using 9 of the 10 drives)
  • read bandwidth of 6 drives
  • write bandwidth of 6 drives
  • read IOPS of 3 drives
  • write IOPS of 6 drives
  • 1 redundant drive per raidz1 group
e) raidz10: 2 raidz1 (2x (4+1))
  • read bandwidth of 8 drives
  • write bandwidth of 8 drives
  • read IOPS of 2 drives
  • write IOPS of 8 drives
  • 1 redundant drive per raidz1 group
f) raidz20: 2 raidz2 (2x (3+2))
  • read bandwidth of 6 drives
  • write bandwidth of 6 drives
  • read IOPS of 2 drives
  • write IOPS of 6 drives
  • 2 redundant drives per raidz2 group
g) raidz30: 2 raidz3 (2x (2+3))
  • read bandwidth of 4 drives
  • write bandwidth of 4 drives
  • read IOPS of 2 drives
  • write IOPS of 4 drives
  • 3 redundant drives per raidz3 group
h) raidz3 (7+3)
  • read bandwidth of 7 drives
  • write bandwidth of 7 drives
  • read IOPS of 1 drive
  • write IOPS of 7 drives
  • 3 redundant drives
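
For reference, a layout like option d) can also be built straight from the command line. A minimal sketch, assuming the ten disks show up as da0-da9 and using raw device names rather than the gptid labels the FreeNAS GUI normally uses (da9 would be left over as a spare):

Code:
# option d): three raidz1 vdevs of three disks each, striped into one pool
zpool create tank \
    raidz1 da0 da1 da2 \
    raidz1 da3 da4 da5 \
    raidz1 da6 da7 da8
zpool status tank    # should list three raidz1 vdevs under the pool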
 

HeloJunkie (Patron)
Hi Bestboy -

Here is how I created my pool of striped mirror vdevs with 10 drives (or in this case only eight, since I have pulled two for a different set of tests):

15520300958_2f67bbab59_o.png


15086340873_620924ba3c_o.png
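
For anyone who prefers the shell to the GUI, the same striped-mirror layout can be sketched roughly like this, again assuming raw device names da0-da7 instead of the gptid labels FreeNAS uses under the hood:

Code:
# striped mirrors: four 2-way mirror vdevs in one pool
zpool create tank \
    mirror da0 da1 \
    mirror da2 da3 \
    mirror da4 da5 \
    mirror da6 da7
zpool status tank    # should show four mirror vdevs in one pool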
 

HeloJunkie (Patron)
OK, so here is the first set of performance test numbers after installing a new IBM ServerRAID M1015 controller. The controller was reflashed to IT mode and installed in the server. Since the M1015 only supports 8 drives without an expander, I only used 8 drives for these tests. This should not change the performance of the drives themselves, but it may be interesting to see whether we lose performance from having two fewer spindles when we run the tests against the ZFS file systems. I am hoping to get some of those tests done tonight and uploaded soon.

So first, as before, the initial test was the "Initial Serial Array Read" to establish a baseline for the server. My particular system's average speed was 150.099 MB/second per disk. This compares to the Intel SATA controller on the motherboard, which clocked in at 150.109 MB/second per disk; in this case they are almost identical.

The following commands were run for each disk, one at a time (da0-da7):
Code:
iostat -c 2 -d -w 60 da0                  # two device-only reports, 60 seconds apart
dd if=/dev/da0 of=/dev/null bs=1048576    # sequential read of the whole disk, 1 MiB blocks


This command was wrapped in a for loop that tested each disk one at a time for only 60 seconds with a 5 second wait between each disk test.
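
Roughly, the wrapper would look like the sketch below (not the exact script used; the disk names are assumed to be da0-da7):

Code:
#!/bin/sh
# For each disk: start a full-disk sequential read in the background,
# let iostat watch it for ~60 seconds, then stop the read and pause 5 seconds.
for disk in da0 da1 da2 da3 da4 da5 da6 da7; do
    dd if=/dev/${disk} of=/dev/null bs=1048576 &
    ddpid=$!
    iostat -c 2 -d -w 60 ${disk}    # the second report is the 60-second average
    kill ${ddpid}
    sleep 5
done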

The test provided this output:
Array's average speed is 150.099 MB/sec per disk

Disk Disk Size MB/sec %ofAvg
------- ---------- ------ ------
da0 3815447MB 156 104
da1 3815447MB 151 100
da2 3815447MB 148 99
da3 3815447MB 150 100
da4 3815447MB 154 103
da5 3815447MB 140 94
da6 3815447MB 150 100
da7 3815447MB 151 101

Which appears like this when graphed:

15085698994_91bfdce818_z.jpg


The next test was titled "Initial Parallel Array Read" and was run with the same command as the serial test, only this time all disks were read at the same time:

Code:
dd if=/dev/da0 of=/dev/null bs=1048576


Again, eight copies of dd were launched simultaneously, one copy of the command against each disk. Each copy read the entire disk to /dev/null and recorded the time it took to do so.
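
In script form the parallel pass is just the same dd started in the background for every disk at once, something like this sketch:

Code:
#!/bin/sh
# Launch one full-disk read per drive simultaneously and wait for all of them;
# each dd prints bytes transferred and elapsed seconds when it finishes.
for disk in da0 da1 da2 da3 da4 da5 da6 da7; do
    dd if=/dev/${disk} of=/dev/null bs=1048576 &
done
wait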

This test provided this output:

Performing initial parallel array read
Wed Oct 29 18:37:48 PDT 2014
The disk da0 appears to be 3815447 MB.
Disk is reading at about 157 MB/sec
This suggests that this pass may take around 406 minutes

Serial Parall % of
Disk Disk Size MB/sec MB/sec Serial
------- ---------- ------ ------ ------
da0 3815447MB 156 157 100
da1 3815447MB 151 151 100
da2 3815447MB 148 149 100
da3 3815447MB 150 150 100
da4 3815447MB 154 154 100
da5 3815447MB 140 141 100
da6 3815447MB 150 151 100
da7 3815447MB 151 151 100

And when graphed looks like this:

15520459937_5d5e7f5022_z.jpg


Once all of the hard drives had been read to /dev/null, we received this data:

Completed: initial parallel array read

Disk's average time is 34111 seconds per disk

Disk Bytes Transferred Seconds %ofAvg
------- ----------------- ------- ------
da0 4000787030016 32691 96
da1 4000787030016 33986 100
da2 4000787030016 34220 100
da3 4000787030016 34064 100
da4 4000787030016 33725 99
da5 4000787030016 36062 106
da6 4000787030016 34462 101
da7 4000787030016 33675 99

Which when graphed looks like this:

15703747841_7ec7806351_z.jpg
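
Dividing the bytes transferred by the seconds in the table above works out to roughly 117 MB/sec per disk averaged over the whole platter, noticeably lower than the ~150 MB/sec seen at the start of the disks, which is expected since transfer rate drops toward the inner tracks. A quick sketch of that arithmetic:

Code:
# average MB/sec per disk over the full pass (bytes / seconds from the table)
awk 'BEGIN {
    bytes = 4000787030016
    split("32691 33986 34220 34064 33725 36062 34462 33675", secs)
    for (i = 1; i <= 8; i++)
        printf "da%d: %.0f MB/sec\n", i - 1, bytes / secs[i] / 1e6
}'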



Soooooo... it appears that the IBM ServerRAID M1015 controller card did nothing to boost the overall performance of the drives. The M1015 and the onboard Intel SATA controller were neck and neck in these tests. However, it will be interesting to see how the next set of tests goes.

Here is what the drives looked like for these tests:
15520816340_eca1a0253d_o.png

15681929636_c3754e648e_o.png

15520816370_0f3e1f28ed_o.png





Stay tuned....
 

HeloJunkie (Patron)
OK, so here is the final round of tests. I ran the following dd tests against Z3, Z2, Z1, RAID0, RAID10 and RAID60:

No compression, no load, no traffic.

Code:
dd if=/dev/zero of=/mnt/vol1/testfile bs=8M count=32000
dd if=/mnt/vol1/testfile of=/dev/null bs=8M count=32000
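
On FreeBSD, dd prints bytes transferred, elapsed seconds, and bytes/sec on stderr when it finishes, so the throughput for each pool can be read straight off that summary line. A sketch of capturing it, assuming the pool under test is mounted at /mnt/vol1:

Code:
#!/bin/sh
# 32000 x 8 MiB = 256000 MiB (~268 GB) per pass; keep dd's summary lines
dd if=/dev/zero of=/mnt/vol1/testfile bs=8M count=32000 2> write.log
dd if=/mnt/vol1/testfile of=/dev/null bs=8M count=32000 2> read.log
tail -n 1 write.log read.log    # throughput summary for the write and read passes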


This is the same test I ran on the eight drives using only the onboard motherboard controller. I graphed them against each other here:

15521482307_2cd2b06a35_c.jpg


In my particular case, with the motherboard I happen to be running, it appears that there is no reason (that I can see) to pay $150.00 for the M1015! I plan on running Z3 or Z2 with eight drives, and frankly the M1015 does not do much for me in either (or any) of these cases. Very interesting.

Here is the disk I/O during these tests:

Z3, Z2 & Z1

15520966518_a0d4ce27a0_z.jpg

15707946712_cfcb5379a9_z.jpg

15520966548_1c9a79509c_o.png


And now for RAID0, RAID10 & RAID60:

15086715674_86bd00f20b_z.jpg

15682944016_dfc1749b41_z.jpg

15521261278_cb257740d3_o.png
 