Some Interesting Performance Numbers on a new Freenas Server

Status
Not open for further replies.

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
OK, so using your testing recommendations, cyberjock, I reran all of the performance tests:

RAIDZ3 - 8 Drives, No Compression
RAIDZ2 - 8 Drives, No Compression
RAIDZ1 - 8 Drives, No Compression
RAID0 - 8 Drives, No Compression
RAID10 - 8 Drives, No Compression
RAID60 - 8 Drives, No Compression

All drives were connected to the motherboard SATA ports. Non-production system, no data, no traffic, no load. I used the following commands:

Code:
dd if=/dev/zero of=/mnt/vol1/testfile bs=8M count=32000
dd if=/mnt/vol1/testfile of=/dev/null bs=8M count=32000


So first the performance graph:

[Graph: write/read throughput for each pool type]


From my reading about the various RAID options, this looks pretty much how I would expect it to look...the more parity, the slower the system, except that RAID10 appears to be neck-and-neck with RAIDZ3.

Now for the individual hard drive graphs. As before, ada3 and ada7 show greater read speeds than the rest of the drives:

[Graphs: individual disk read speeds, ada0-ada7]


But as I continued to run the various tests, I saw that those read speeds only appear when testing against a RAIDZ3 pool, not against any of the other pools:

[Graphs: individual disk activity during the remaining pool tests]


This is (left-to-right) RAIDZ2, RAIDZ1, RAID0 and the first part of the RAID10 write test.
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
OK, so right after running the tests above, I downloaded your script, jgreco, and started to run it. It looks like it will take a while, but here is what I have so far:

Performing initial serial array read (baseline speeds)
Tue Oct 21 10:05:06 PDT 2014
Tue Oct 21 10:23:06 PDT 2014
Completed: initial serial array read (baseline speeds)

Array's average speed is 150.051 MB/sec per disk

Disk Disk Size MB/sec %ofAvg
------- ---------- ------ ------
ada0 3815447MB 151 101
ada1 3815447MB 150 100
ada2 3815447MB 140 93
ada3 3815447MB 154 103
ada4 3815447MB 150 100
ada5 3815447MB 148 99
ada6 3815447MB 151 101
ada7 3815447MB 156 104

Performing initial parallel array read
Tue Oct 21 10:23:06 PDT 2014
The disk ada0 appears to be 3815447 MB.
Disk is reading at about 152 MB/sec
This suggests that this pass may take around 419 minutes

Serial Parall % of
Disk Disk Size MB/sec MB/sec Serial
------- ---------- ------ ------ ------
ada0 3815447MB 151 151 100
ada1 3815447MB 150 151 100
ada2 3815447MB 140 140 100
ada3 3815447MB 154 154 100
ada4 3815447MB 150 150 100
ada5 3815447MB 148 149 100
ada6 3815447MB 151 151 100
ada7 3815447MB 156 157 100


I will post more information once I get it. So far it "looks like" the motherboard is supporting the full transfer rate of the drives, at least on reads. If that is the case, I would not expect to see any better performance with the IBM controller.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@HeloJunkie

Yeah, so I owe you an apology. With a block size of 10MB and a count of 30000, that's 300,000 MB, or 300GB. That's big enough to exceed your ARC size. I swear I saw 3000, which was only 30GB. So yeah, your size is fine. ;)
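
For anyone else sizing their dd test file: a quick way to check how big the ARC actually is (my suggestion, not something from the script) is to look at the ZFS sysctls and make sure the test file is comfortably larger than the reported size.

Code:
# Current ARC size and configured maximum, in bytes (FreeBSD sysctls):
sysctl kstat.zfs.misc.arcstats.size
sysctl vfs.zfs.arc_max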

Numbers are very reasonable. Nice to see the results. Almost want to make a locked thread with your results for others to see. :p
 
Last edited:

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Cyberjock - No problem at all...I am new to FreeNAS and want to learn as much as I can about it (and my system) before putting it into production. All of this testing is good for me and helps me understand more and more about it. My background is networks, not storage. You have given me a lot of good info to work with on the performance testing.

Of course, after all of this I have to fire up iperf and test out the networking side as well...someone said there was a problem with the 10g drivers...it will be fun to check since I have a 6509 with 10g connections!
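
(For reference, the basic iperf invocation for that kind of test looks something like the below - iperf2 syntax, with the hostname and stream count just placeholders.)

Code:
# On the FreeNAS box:
iperf -s
# On a client, run 4 parallel streams for 30 seconds:
iperf -c freenas01 -t 30 -P 4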
 

titan_rw

Guru
Joined
Sep 1, 2012
Messages
586
...the more parity, the slower the system, except that RAID10 appears to be neck-and-neck with RAIDZ3.

For sequential, absolutely. 8 disks in Z3 should have the sequential throughput of 5 disks, assuming the CPU can keep up with parity calculations.

Random I/O is where striped mirrors will win. A single vdev of raidz-anything will give you the random I/O of roughly one disk, whereas 8 disks in striped mirrors will give you the random IOPS of at least 4 disks, and theoretically the IOPS of all 8, since ZFS can load-balance reads across the mirrors too. Random writes are normally streamed out as sequential writes, so they are not an issue; when that data is read back, though, the reads become random.
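
As a rough back-of-envelope illustration (my numbers, assuming 8 disks and the ~150 MB/sec per-disk baseline measured later in this thread):

Code:
# Sequential throughput scales with the number of data disks;
# random IOPS scale with the number of vdevs (reads can use both sides of a mirror).
DISK_MBS=150    # assumed per-disk sequential rate
echo "raidz3, 8 disks:  ~$((5 * DISK_MBS)) MB/s sequential (5 data disks), ~1 disk of random IOPS"
echo "mirrors, 4x2:     ~$((4 * DISK_MBS)) MB/s sequential write, up to $((8 * DISK_MBS)) MB/s read"
echo "                  ~4 disks of random write IOPS, up to 8 disks of random read IOPS"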
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
So I think we should take all this info, make a pretty PDF and put it in the thread with my noobie guide so people can compare for themselves. ;)
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
I think that is a great idea. I would love to work on something like that. I want to run the other tests that jgreco has the script for against just the standard dd test, and then we can package it all together as a cool PDF.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
@HeloJunkie

Sounds like a plan. I like your graphs, so if you want to make more graphs I can draft up the PDF and all that jazz. If you are cool with it I'll put your name on the PDF as the person that lent the hardware for testing. ;)
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Sounds like a winner...I will create graphs for all of the various tests I run and post them here!
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
OK, so the script that jgreco built is STILL running, spitting out all kinds of data. I am going to start graphing and posting the info here as the tests complete. (jgreco - thank you for the script!)

The first test was the "Initial Serial Array Read", which establishes a per-disk baseline. My particular system's average speed was 150.109 MB/sec per disk.

The following commands were run for each disk, one at a time (ada0-ada7):
Code:
iostat -c 2 -d -w 60 ada0
dd if=/dev/ada0 of=/dev/null bs=1048576


These commands were wrapped in a for loop that tested each disk, one at a time, for 60 seconds, with a 5-second wait between disks.
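
Roughly, the loop looks something like this (a minimal sketch of the idea, not jgreco's actual script):

Code:
#!/bin/sh
# Read each disk sequentially for ~60 seconds while iostat reports its throughput,
# then pause 5 seconds before moving on to the next disk.
for disk in ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7; do
    dd if=/dev/${disk} of=/dev/null bs=1048576 &   # background sequential read
    iostat -c 2 -d -w 60 ${disk}                   # second report covers the 60-second window
    kill $! 2>/dev/null                            # stop the background dd
    sleep 5
done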

The test provided this output:
Array's average speed is 150.109 MB/sec per disk

Disk Disk Size MB/sec %ofAvg
------- ---------- ------ ------
ada0 3815447MB 151 101
ada1 3815447MB 150 100
ada2 3815447MB 140 93
ada3 3815447MB 154 103
ada4 3815447MB 150 100
ada5 3815447MB 148 99
ada6 3815447MB 151 101
ada7 3815447MB 156 104
Which appears like this when graphed:

[Graph: baseline serial read speed per disk]


This seems to be in line with what Western Digital says my drive is capable of doing:

From the Western Digital Website:
Data transfer rate (max)
Buffer to host: 6 Gb/s
Host to/from drive (sustained): 150 MB/s


The next test was titled "Initial Parallel Array Read" and was run with the same command as the serial test, only this time all disks were read at the same time:

Code:
dd if=/dev/ada0 of=/dev/null bs=1048576


Eight copies of dd were launched simultaneously, one against each disk. Each copy read the entire disk into /dev/null, and the script recorded the time it took to do so.
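
In shell terms, the parallel launch amounts to something like this (again just a sketch of the idea, not the actual script):

Code:
#!/bin/sh
# Start one full-disk read per drive in the background, then wait for all of them to finish.
for disk in ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7; do
    dd if=/dev/${disk} of=/dev/null bs=1048576 &
done
wait    # blocks until every background dd has completed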

This test provided this output:
Performing initial parallel array read
Tue Oct 21 17:40:41 PDT 2014
The disk ada0 appears to be 3815447 MB.
Disk is reading at about 152 MB/sec
This suggests that this pass may take around 419 minutes

Serial Parall % of
Disk Disk Size MB/sec MB/sec Serial
ada0 3815GB 151 151 100
ada1 3815GB 150 150 100
ada2 3815GB 140 140 100
ada3 3815GB 154 154 100
ada4 3815GB 150 150 100
ada5 3815GB 149 149 100
ada6 3815GB 151 151 100
ada7 3815GB 157 157 100

And when graphed looks like this:

[Graph: parallel vs. serial read speed per disk]


Notice (in my case) that the parallel reads were as fast as the serial reads.

Once all of the hard drives had been read to /dev/null, the script reported this data:

Completed: initial parallel array read

Disk's average time is 34133 seconds per disk

Disk Bytes Transferred Seconds %ofAvg
------- ----------------- ------- ------
ada0 4000787030016 33698 99
ada1 4000787030016 34484 101
ada2 4000787030016 36086 106
ada3 4000787030016 33746 99
ada4 4000787030016 34086 100
ada5 4000787030016 34241 100
ada6 4000787030016 34007 100
ada7 4000787030016 32712 96

Which when graphed looks like this:

[Graph: seconds to read each disk during the parallel pass]


So in this case, using the average of 34,133 seconds per disk to copy 4,000,787,030,016 bytes from each hard drive to /dev/null, we averaged roughly 117 MB/sec per disk while the system was reading from all drives at the same time.
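
For anyone checking the arithmetic (my calculation, using decimal megabytes):

Code:
# bytes per disk / average seconds per disk / 1,000,000 = MB/sec per disk
echo $((4000787030016 / 34133 / 1000000))    # prints 117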

As of right now, I am waiting for the system to complete the parallel seek-stress array read test. Once that is done I will edit this thread and add those results.

In the meantime I redid the graphs for all of the tests done against the pools.

Read & Write tests against various pool types. These tests were 8 drives, no compression, idle system, no load, no production data, no network traffic:

[Graph: read & write throughput by pool type, 8 drives]


Here is RAIDZ3 (8 Drives) against RAIDZ2 (8 Drives) against RAIDZ2 (6 Drives) - again no compression and no system load.

[Graph: RAIDZ3 (8 drives) vs. RAIDZ2 (8 drives) vs. RAIDZ2 (6 drives)]




Here are the FreeNAS reports for the hard drives for the testing period. It will be interesting to see what these look like once I install the IBM ServerRAID M1015 controller card that I have ordered!

[FreeNAS disk reports for the testing period]
 
Last edited:

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Nice job. And don't panic if the parallel-seek test is a lot noisier in its results; that's pretty much just the way it sometimes works out, and I've seen nothing to indicate that noisy results there (alone) indicate any sort of problem. That test is really more about exercising the stepper and making sure that it is generally able to settle on a track without lots of problems, something a purely sequential test can miss. You will see evidence of trouble in S.M.A.R.T. data if there's actual cause for concern.

Everyone else: doing this kind of testing for a while on your FreeNAS hardware is the kind of thing that you should do to gain confidence that you do not have stupid hardware gremlins anxiously waiting for you to fill your pool with Precious Data(tm) before they shred it for you. No amount of testing can ever guarantee proper operation, but identifying bad foo during testing is really a great idea.

People seem to like the script and seem to be able to comprehend the data being provided. I suppose I should take it and go further with it, yes?
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I certainly enjoyed dissecting it. Thought the grep option in the menu was brilliant. If you have more, I'd say it would be met with open arms. I've always thought something that would auto-test a pool for noobs would be cool: e.g. ensure compression is off, check that the file size will blow out the ARC, etc., so we'd have sane comparisons. The current magic suggests one could document the drives and pool config, test the drives individually, then as configured, check for SMART issues, and wrap it all in a pretty bow. Very inspirational, kind sir.
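
Something along those lines, maybe (purely illustrative pre-flight checks; "vol1" is just this thread's pool name):

Code:
#!/bin/sh
# Sanity checks before benchmarking a pool:
zfs get compression vol1     # make sure compression is off for the test dataset
zpool status vol1            # document the pool layout being tested
camcontrol devlist           # document the drives behind the pool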

Great pictures btw, HeloJunkie.
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
I agree with mjws00, great script...I was dding by hand before you posted your link!

And thanks mjws00!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I certainly enjoyed dissecting it. Thought the grep option in the menu was brilliant.

You misspelled "lazy". ;-) But that's really the wonderful power of shell script.

The big problem is that there are substantial differences between the technical details 15 years ago and today. For example, in granddad's day you'd be twiddling things with the scsi or camcontrol modepage commands. What I should be looking at is extending this with something to look at SMART data.
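
Pulling a few key SMART attributes per disk could be as simple as something like this (just a sketch; smartmontools ships with FreeNAS):

Code:
#!/bin/sh
# Report the SMART attributes that most often flag a failing drive.
for disk in ada0 ada1 ada2 ada3 ada4 ada5 ada6 ada7; do
    echo "=== ${disk} ==="
    smartctl -A /dev/${disk} | egrep 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'
done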
 

bestboy

Contributor
Joined
Jun 8, 2014
Messages
198
I have 2 questions about the pool setups

a) When you say "raid60" does that mean striped raidz2 (or raidz20 if you will)? Or is that real hardware raid60 with the Areca controller?
b) How are the 8 drives distributed in case of the raid10? Is the striping set up as 4x a mirror of 2 vdevs or is it 2x a mirror of 4 vdevs?
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
Hi bestboy -

I did not use the controller in anything but passthrough mode. All configurations were done on the FreeNAS box.

When I say RAID60, I mean striped RAIDZ2 (RAID6). In this case I now have ten drives, but when I had eight drives I had RAIDZ2 sets of four drives each (two data, two parity):
Code:
[root@freenas01] ~# zpool status -v
  pool: vol1
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE     READ WRITE CKSUM
        vol1                                            ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/60502487-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
            gptid/60aeeaaa-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
            gptid/61100182-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
            gptid/61ea42ce-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
            gptid/62ce93a5-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/63a758c3-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
            gptid/64834afd-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
            gptid/655b1ff3-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
            gptid/66385396-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0
            gptid/670d433e-5d67-11e4-bedf-002590f9649c  ONLINE       0     0     0

errors: No known data errors
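
For reference, a pool laid out like that could be created from the command line along these lines (purely illustrative; the FreeNAS GUI does the equivalent and uses gptid partitions rather than raw device names):

Code:
# Two 5-disk raidz2 vdevs striped together into one pool ("RAID60"):
zpool create vol1 raidz2 ada0 ada1 ada2 ada3 ada4 raidz2 ada5 ada6 ada7 ada8 ada9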
 

HeloJunkie

Patron
Joined
Oct 15, 2014
Messages
300
OK cyberjock - as promised, here are the test results using TEN Western Digital 4TB Red NAS drives on the Supermicro X10DRi-T motherboard, using all ten built-in SATA ports. I have ordered the IBM M1015, and I will rerun all of these tests (albeit with eight drives only) with that card and again post those results here.

RAIDZ3 - 10 Drives, No Compression
RAIDZ2 - 10 Drives, No Compression
RAIDZ1 - 10 Drives, No Compression
RAID0 - 10 Drives, No Compression
RAID10 - 10 Drives, No Compression
RAID60 - 10 Drives, No Compression

All drives were connected to the motherboard SATA ports. Non-production system, no data, no traffic, no load. I used the following commands:

Code:
dd if=/dev/zero of=/mnt/vol1/testfile bs=8M count=32000
dd if=/mnt/vol1/testfile of=/dev/null bs=8M count=32000


Here is the performance graph:
[Graph: write/read throughput by pool type, 10 drives]


Here are the FreeNAS graphs showing I/O performance. The first set of four graphs covers RAIDZ3, RAIDZ2, RAIDZ1 and RAID0:

[FreeNAS I/O graphs: RAIDZ3, RAIDZ2, RAIDZ1, RAID0]



This next set of graphs is RAID0, RAID10 and RAID60:


[FreeNAS I/O graphs: RAID0, RAID10, RAID60]




Stay tuned for the performance numbers with the IBM ServerRAID M1015!
 

bestboy

Contributor
Joined
Jun 8, 2014
Messages
198
Thank you HeloJunkie for doing these lengthy tests and the nice graphs.

Also, can you be more specific about the raid10 pool you tested? Just saying "raid10" is a bit vague, because with 10 disks at your disposal you could come up with at least 3 viable raid10 pools to test (sketched in zpool terms after the list):

a) raid10: 5 2-way mirrors (5x2)
  • read bandwidth of 10 drives
  • write bandwidth of 5 drives
  • read IOPS of 10 drives
  • write IOPS of 5 drives
  • 1 redundant drive per mirror
b) raid10: 2 5-way mirrors (2x5)
  • read bandwidth of 10 drives
  • write bandwidth of 2 drives
  • read IOPS of 10 drives
  • write IOPS of 2 drives
  • 4 redundant drives per mirror
c) raid10: 3 3-way mirrors (3x3)
  • read bandwidth of 9 drives
  • write bandwidth of 3 drives
  • read IOPS of 9 drives
  • write IOPS of 3 drives
  • 2 redundant drives per mirror
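
In zpool terms the three layouts would be specified roughly like this (illustrative pool and device names; layout c uses only 9 of the 10 disks):

Code:
# a) 5 two-way mirrors (5x2)
zpool create tank mirror ada0 ada1 mirror ada2 ada3 mirror ada4 ada5 mirror ada6 ada7 mirror ada8 ada9
# b) 2 five-way mirrors (2x5)
zpool create tank mirror ada0 ada1 ada2 ada3 ada4 mirror ada5 ada6 ada7 ada8 ada9
# c) 3 three-way mirrors (3x3)
zpool create tank mirror ada0 ada1 ada2 mirror ada3 ada4 ada5 mirror ada6 ada7 ada8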
 
Last edited:

leenux_tux

Patron
Joined
Sep 3, 2011
Messages
238
Positives....
Really interesting information, and by the looks of it a lot of time dedicated to the testing and data collection/collation. Great reading.

Negatives....
Oh cr?p. I've got two M1015s in my server. If performance tanks compared to the motherboard, I'm going to be rethinking my config!
 