HeloJunkie
Patron
- Joined: Oct 15, 2014
- Messages: 300
OK, so after a bunch of reading and some suggestions from a couple of folks I decided to put my new freenas box through a series of tests using dd. I wanted to answer some questions about performance on the hard drives with various RAID configurations (both optimal and non-optimal) as well as how well the onboard Intel based sata ports were working on my new motherboard.
Here is my server configuration:
Supermicro Superserver 6028R-TRT
1 x X10DRi-T Supermicro Motherboard
1 x Intel Xeon E5-2650 V3 LGA2011-3 Haswell 10 Core 2.3GHz 25MB 5.0GT/s
4 x 16GB PC4-17000 DDR4 2133MHz Registered ECC Dual-Rank 1.2V Memory
9 x 4TB Western Digital WD40EFRX Red NAS SATA Hard Drives
Dual 740 Watt Platinum Power Supplies
Dual APC 1500 UPSs (one for each power supply)
8GB USB Thumb Drive for booting
The X10DRi-T motherboard has two Intel X540-based 10Gb Ethernet adapters along with an Intel C612 Express chipset and ten SATA3 ports.
I wanted to test the read and write performance of various configurations of my server. I wasted a bunch of time doing this with IOzone but forgot to turn compression off, so I threw those numbers out and started fresh with dd and compression disabled.
There were some suggestions to use a dedicated SATA controller card instead of the onboard SATA ports (my motherboard has ten of them), on the theory that a card would be faster. So I wanted to test not only my various configurations (RAIDZ3 with 8 drives (considered non-optimal), RAIDZ2 with 8 drives (considered non-optimal), RAIDZ2 with 6 drives (considered optimal), and RAIDZ3 with 7 drives (considered optimal)) but also the performance of the onboard SATA ports.
One very interesting thing I found was how the hard drives responded individually during the tests. Ericloewe suggested here that it was possible that some games were being played with the onboard SATA ports and their total capabilities.
Ericloewe:
However, there's talk the last four ports on the PCH may be obtained via port multipliers internally or that they're otherwise shady. There is no real data on this yet.
I submit he may be right, at least on my motherboard. Here is why I think that might be the case: while running my tests, I noticed that some hard drives appeared to show different performance values. This happened only on READs, not WRITEs, and only on two of the ports. Swapping hard drives produced exactly the same results, so it appears (given my limited knowledge of these Intel SATA ports) that something odd is going on with the onboard SATA ports.
I was running the following commands on an 8-drive RAIDZ3 volume with no compression:
Code:
dd if=/dev/zero of=/mnt/vol1/testfile1 bs=10M count=30000   # write (not shown on graph)
dd of=/dev/zero if=/mnt/vol1/testfile1 bs=10M count=30000   # read
dd of=/dev/zero if=/mnt/vol1/testfile1 bs=10M count=30000   # read
dd if=/dev/zero of=/mnt/vol1/testfile1 bs=10M count=30000   # write

At the exact same time, here is another disk on another port:

I noticed there is a significant difference. I see this behavior on two of the ten ports, and switching drives did not change the results.
So that is the first thing I found interesting. Maybe the motherboard is playing games with eight of the ten ports, or maybe with just two of them. Someone with a much better understanding of the hardware side of the house might be able to explain why the motherboard is acting this way.
OK, so I scratched my head but moved forward with my testing, using the following commands (suggested here):
Code:
dd if=/dev/zero of=/mnt/vol1/testfile bs=4M count=10000
and
Code:
dd of=/dev/zero if=/mnt/vol1/testfile bs=4M count=10000
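For anyone who wants to try the same pattern without committing a 40GB test file, here is a scaled-down, runnable version of the write-then-read pair (the /tmp path and the tiny sizes are placeholders, not the parameters I actually used):

```shell
# Scaled-down sketch of the dd write/read pair (path and sizes are illustrative)
f=/tmp/dd_testfile

# Sequential write: stream zeros into a file
dd if=/dev/zero of="$f" bs=1M count=64 2>/dev/null

# Sequential read: stream the file back into a throwaway sink
dd if="$f" of=/dev/null bs=1M 2>/dev/null

ls -l "$f"
rm -f "$f"
```

Note that my read command above uses of=/dev/zero as the sink; writing to /dev/zero discards the data just like /dev/null does, but /dev/null is the more conventional choice.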
I ran these tests three times each. The read results were off the charts. I later increased the block size from bs=4M to bs=10M; someone had suggested that with 64GB of RAM things were somehow getting cached. I am not sure, but the later tests tended to report more predictable read results.
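To put numbers on the caching concern: the test file needs to be much larger than RAM, or the reads can come back from the ARC instead of the disks. A quick back-of-the-envelope check, using the values from this post:

```shell
# Is the dd test file big enough to defeat the ARC? (numbers from this post)
bs_mb=10       # bs=10M
count=30000    # count=30000
ram_gb=64      # 4 x 16GB DIMMs

file_gb=$(( bs_mb * count / 1024 ))
echo "test file: ${file_gb} GB vs ${ram_gb} GB RAM"   # prints "test file: 292 GB vs 64 GB RAM"

# Rule of thumb: a file several times the size of RAM
if [ "$file_gb" -ge $(( ram_gb * 2 )) ]; then
  echo "file is >=2x RAM, so cached reads should not dominate"
fi
```

At bs=4M and count=10000 the file is only ~39GB, smaller than would be ideal against 64GB of RAM, which would fit with the suspiciously high early read numbers.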
I used the onboard SATA ports for one set of tests, then switched to an Areca 1222 SATA controller card that we had lying around for another set. The 1222 did not see my 4TB drives right away, but after a firmware upgrade on the card it worked fine. I toyed with the idea of using this card, but it has horrible reviews, so this was just a test. What I should really do is buy the IBM M1015, but I am still not convinced the onboard ports won't work for me.
All compression was turned off for these tests, and I ran them against the following configurations, all with eight drives: RAID10, RAIDZ1, RAIDZ2, RAIDZ3, and RAID60. Here were the results:

Well, that card is pretty much worthless I would say, but in all fairness it was an old card!
Since my reads seemed WAY off the charts, I decided to change how I was testing. The next series of tests was run as follows:
Code:
dd if=/dev/zero of=/mnt/vol1/testfile1 bs=10M count=30000   # write
dd of=/dev/zero if=/mnt/vol1/testfile1 bs=10M count=30000   # read
These tests resulted in much more reasonable read speeds. I didn't bother with the SATA controller card this time, opting instead to stick with my onboard ports.
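Since each test was run three times, the repeats are easy to script. A minimal sketch (again with a placeholder path and a deliberately small size; the timing here has whole-second granularity, so very fast runs are rough):

```shell
# Run the write test three times and report rough MB/s per pass
# (placeholder path and small size; my real runs used bs=10M count=30000)
f=/tmp/dd_bench
size_mb=128

for pass in 1 2 3; do
  start=$(date +%s)
  dd if=/dev/zero of="$f" bs=1M count="$size_mb" 2>/dev/null
  end=$(date +%s)
  secs=$(( end - start ))
  [ "$secs" -eq 0 ] && secs=1   # avoid divide-by-zero on sub-second runs
  echo "pass $pass: ~$(( size_mb / secs )) MB/s"
done
rm -f "$f"
```

dd itself also prints a bytes/sec figure on stderr when it finishes, which is what I actually recorded; the loop above just shows the shape of the repeats.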
This test was run with no compression, 8 drives, bs=10M, and a count of 30000. I only ran RAIDZ2 and RAIDZ3 this round, as those are really what I am looking at running:

And finally, just so I could see for myself whether the "optimal" (power of two plus parity) layout was really more optimal (at least under dd), I ran one more test: RAIDZ2, 6 drives, no compression. The results above show that the optimal configuration (in my case, 4 data drives + 2 parity drives for a total of six) actually seems to perform worse than a RAIDZ2 with 8 drives!
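For what it's worth, the "power of two plus parity" rule comes from how evenly a ZFS record splits across the data drives. A quick check of the split for the widths I tested (this assumes the default 128K recordsize and 4K sectors, i.e. ashift=12, which may not match every pool):

```shell
# How evenly a 128K record divides across the data drives
# (assumes 128K recordsize and 4K sectors; data drives = total minus parity)
record_sectors=32   # 128K / 4K

for cfg in "RAIDZ2/6:4" "RAIDZ2/8:6" "RAIDZ3/7:4" "RAIDZ3/8:5"; do
  name=${cfg%%:*}   # e.g. RAIDZ2/6 (layout / total drives)
  data=${cfg##*:}   # number of data drives
  echo "$name: $(( record_sectors / data )) sectors per data drive," \
       "remainder $(( record_sectors % data ))"
done
```

A remainder of 0 is what makes a width "optimal" on paper; as my results show, the real-world difference can still go the other way.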
So after all of this testing, it looks like the motherboard provides pretty good performance, but I still think I may go with the IBM M1015 and rerun these tests to see what difference I might find.
I am very interested to know whether you FreeNAS gurus think these tests are reasonable and whether dd is the best tool for this type of performance testing.
Thanks!