So I'm confused.
I understand your first test (the normal test). I understand the group of tests that gives results between 25 and 39 seconds.
All the answers to your questions are in my previous post...
What does the -n 10000 do? What is this dd2? Is it a script or an executable? What is supposed to make dd2 better/different than dd for testing?
As written: "I wrote a little utility that writes random data (almost perfectly random, it can't be compressed) and works in a similar fashion to dd."
I changed the names of the arguments (an example invocation follows the list):
dd -> dd2
bs -> -b
count -> -n
if -> -i
of -> -o
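So, for example, a dd invocation like dd of=/dev/null bs=1048576 count=10000 becomes ./dd2 -o /dev/null -b 1048576 -n 10000 (the same flags as in the reference line further down); -i presumably only matters if you want dd2 to copy from an existing input instead of generating its own random data.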
The difference is that dd2 can generate incompressible random data at a very fast rate (2.4GB/s on this machine, or 6.03GB/s using the SSE-accelerated code; the latter is less random, but I'm fairly certain that doesn't matter as far as ZFS realtime compression goes)
which is why I provided as a reference:
./dd2 -b 1048576 -n 10000 -o /dev/null : wrote 10485760000 bytes in 4.254453s: 2464655104.00 bytes/sec
which shows the speed at which random numbers are generated: 2464655104.00 bytes/sec -> 2.46GB/s (or 2.29GiB/s, using the IEC binary prefixes), and how long it took: about 4.25s to generate roughly 10GB of data. So 4.25s should be subtracted from the results here to get the purely disk-related figures.
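So if, say, a test run takes 30s of wall-clock time, only about 25.75s of that is actual disk I/O, and the effective write rate is roughly 10485760000 bytes / 25.75s ≈ 407MB/s, rather than the ~350MB/s you would get from dividing by the raw 30s.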
But where does the random come in? As I discussed before, using random devices is pointless (it's not even a good test if you are using compression) because there's no fast way to generate large amounts of data with low CPU usage. As soon as you start causing CPU usage that isn't directly related to the testing criteria, you invalidate the test. If you really need to test a compressed pool you need to create a dataset that is uncompressed and use /dev/zero to write to that dataset. Read this sticky:
http://forums.freenas.org/threads/notes-on-performance-benchmarks-and-cache.981/
No, what it means is that you can't use /dev/random (and I *never* said you could), which I'm *not* using. And yes, you can generate large amounts of random data with relatively low CPU usage, faster than any mechanical drive could sustain. The quality of the randomness isn't as good as /dev/random obviously, but it is sufficient as far as speed testing goes and for ensuring the data is incompressible (which is all we care about).
2.4GB/s is more than fast enough to test the performance here. The SSE2-accelerated random generator can reach 6GB/s on that machine. Testing very fast SSDs in a stripe could be an issue, but that's not the case here...
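To give an idea of how cheap this kind of non-cryptographic generation is, here's a minimal sketch of the general idea: a plain xorshift-style generator streaming 1MiB blocks to stdout. It's only an illustration of the technique, not dd2's actual code (and the SSE path is different again):

/* fastrand.c - stream incompressible pseudo-random data to stdout.
 * Illustration only; not dd2's code.
 * Build: cc -O2 fastrand.c -o fastrand
 * Usage: ./fastrand <num_1MiB_blocks> > /dev/null   (or > testfile)
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

#define BLOCK (1024 * 1024)   /* 1 MiB per write, like bs=1m */

static uint64_t state = 0x9e3779b97f4a7c15ULL;

/* xorshift64*: a handful of cheap ops per 8 bytes, so the CPU cost is
 * tiny compared to disk speed, and the output doesn't compress. */
static uint64_t next(void)
{
    uint64_t x = state;
    x ^= x >> 12;
    x ^= x << 25;
    x ^= x >> 27;
    state = x;
    return x * 0x2545f4914f6cdd1dULL;
}

int main(int argc, char **argv)
{
    long blocks = (argc > 1) ? atol(argv[1]) : 10000;
    uint64_t *buf = malloc(BLOCK);
    if (buf == NULL)
        return 1;

    for (long i = 0; i < blocks; i++) {
        /* fill the 1 MiB buffer with pseudo-random 64-bit words */
        for (size_t j = 0; j < BLOCK / sizeof(uint64_t); j++)
            buf[j] = next();
        if (fwrite(buf, 1, BLOCK, stdout) != BLOCK)
            break;
    }
    free(buf);
    return 0;
}

Timing that with the output going to /dev/null gives you the raw generation rate, same idea as the dd2 reference line above.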
The aim was that you wouldn't have to worry about compression or caching... they're not going to help the numbers, quite the opposite.
If you want the source code of dd2, let me know...
Not sure what your configuration is, but if you are running FreeNAS under ESXi and have more vCPUs allocated than you have physical cores you can expect weird performance results. I know this has confused quite a few people.
I'm not using ESXi; plain FreeNAS. The system is an E3-1220V3 on an X10SL7-F motherboard with 32GB of RAM. 8 disks are on the LSI, 4 are on the Intel (2 SATA3 and 2 SATA2).
I just did a dd if=/dev/zero of=/mnt/encrypted/testfile bs=1m count=100000 and got consistent values, within 2 seconds of each other, over 5 tests (with ESXi but no other VMs running).
And that was precisely the point of my post: with this new encryption code, results fluctuate greatly...
I'm going to try the beta, which I believe doesn't have the new code... and see if that great speed fluctuation is still there.