My ASRock C2750 dd results

Status
Not open for further replies.

Oded

Explorer
Joined
Apr 20, 2014
Messages
66
I'd appreciate it if anyone could comment on whether these are considered good or not. I ran a few dd tests on my RAIDZ2 configuration of 6x4TB WD Red drives.

The /dev/zero tests seem great, the /dev/random tests seem terrible (not surprising; it's very CPU-intensive, and I don't think it's multithreaded, so it can't use my 8 cores), and the real-world file copy tests are so-so. I guessed how to perform a read-only test, so please let me know if it's correct (it's the last one).

What I really want to know is: given these results, will I see any real-world difference if I move to a non-optimal 8-disk RAIDZ2 configuration? I mainly use my NAS as a... well, NAS :). Meaning I'm limited by the speed of gigabit Ethernet, so I wonder how much of a difference, if any, I'd actually get.

Results below.

Thanks!
Code:
 
[root@freenas] ~# dd if=/dev/zero of=/mnt/tank/Media/TestFiles/zero1024File bs=1024k count=20000
20000+0 records in
20000+0 records out
20971520000 bytes transferred in 17.248849 secs (1215821421 bytes/sec)
 
 
[root@freenas] ~# dd if=/dev/zero of=/mnt/tank/Media/TestFiles/zero2048File bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 17.525977 secs (1196596335 bytes/sec)
 
//THE FOLLOWING TWO TESTS USED A REAL-WORLD RIPPED BLURAY OF QUEEN LIVE PERFORMANCE, AN M2TS FILE OF ABOUT 21GB.
[root@freenas] ~# dd if=/mnt/tank/Media/Performances/queen/BDMV/STREAM/00000.m2ts of=/mnt/tank/Media/TestFiles/QueenTestFile2048 bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 107.674757 secs (194767284 bytes/sec)
 
 
[root@freenas] ~# dd if=/mnt/tank/Media/Performances/queen/BDMV/STREAM/00000.m2ts of=/mnt/tank/Media/TestFiles/QueenTestFile1024 bs=1024k count=20000
20000+0 records in
20000+0 records out
20971520000 bytes transferred in 110.090797 secs (190492943 bytes/sec)
 
 
//OUCH:
[root@freenas] ~# dd if=/dev/random of=/mnt/tank/Media/TestFiles/randim2048File bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 433.587790 secs (48367414 bytes/sec)
 
//Read-only test (I think that's the best way to test such a thing, no?)
[root@freenas] ~# dd if=/mnt/tank/Media/Performances/queen/BDMV/STREAM/00000.m2ts of=/dev/null bs=1024k count=20000
20000+0 records in
20000+0 records out
20971520000 bytes transferred in 33.054869 secs (634445712 bytes/sec)
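
(If it's useful to anyone repeating these tests: the usual way around the /dev/random bottleneck is to pay the RNG cost once, up front, then time copies of the pre-generated file. A sketch with illustrative sizes; the pool path is hypothetical.)

```shell
# Generate incompressible data once (this step is CPU-bound and not timed),
# then reuse the file so later dd runs measure the pool, not the RNG.
dd if=/dev/urandom of=/tmp/random.bin bs=1048576 count=16 2>/dev/null
# A write test against the pool would then look like (path hypothetical):
#   dd if=/tmp/random.bin of=/mnt/tank/Media/TestFiles/randomCopy bs=1048576
wc -c /tmp/random.bin
```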
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
There's a reason why /dev/random is never recommended around here. That's all I'll say about that.

/dev/zero is absurdly high because you have compression enabled on your zpool. So the numbers are meaningless. Do you really think you get 1.2GB/sec like the dd tests claim? ;)

Your real-world test seems pretty fair for your hardware though.
 

Oded

Explorer
Joined
Apr 20, 2014
Messages
66
It was clear to me that the numbers were too high, but I didn't know why. I'll disable compression and check again.
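
(For reference, compression is a per-dataset ZFS property, so it can be toggled for the test dataset without touching the rest of the pool. A sketch only, run as root; dataset name is from my pool, adjust to taste.)

```shell
# Turn compression off for the test dataset, verify, run the dd tests,
# then re-enable lz4 afterwards.
zfs set compression=off tank/Media
zfs get compression tank/Media
# ... run dd tests here ...
zfs set compression=lz4 tank/Media
```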
 

Oded

Explorer
Joined
Apr 20, 2014
Messages
66
Updated /dev/zero results:

Code:
[root@freenas] ~# dd if=/dev/zero of=/mnt/tank/Media/TestFiles/zero1024FileNoCompression bs=1024k count=20000                         
20000+0 records in
20000+0 records out
20971520000 bytes transferred in 49.933635 secs (419987850 bytes/sec)
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And those numbers are exactly what I'd expect for your hardware. :D
 

Oded

Explorer
Joined
Apr 20, 2014
Messages
66
OK, I decided to do the next-best thing in my situation. Since I don't have an 8-disk setup, I created a 5-disk one. It's also non-optimal, and 8 disks would be a little faster (more spindles), so I figure that if I see only around a 10% penalty, going with 8 instead of 6 is a good choice.

Results:

/dev/zero with compression gives the same absurd 1.2GB/sec value :).

/dev/zero without compression:

Code:
[root@freenas] /mnt/tank/Media# dd if=/dev/zero of=/mnt/tank/Media/TestFiles/zero2048File bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 67.702856 secs (309758277 bytes/sec)


Same Queen file copy task (with much more free space on the pool, by the way). Compression is enabled, just like in the original test, but that's fine since it simulates a real-world scenario:

Code:
[root@freenas] /mnt/tank/Media# dd if=/mnt/tank/Media/00000.m2ts of=/mnt/tank/Media/TestFiles/QueenTestFile2048 bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 137.998722 secs (151968944 bytes/sec)



Both tests show a clear performance decrease in the non-optimal configuration: roughly 26% on the /dev/zero test and about 22% on the file copy. That's really bad and makes me think a 5-disk setup is a really bad idea.
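
Pinning the numbers down, using the two no-compression /dev/zero figures dd reported above:

```shell
# Throughput drop from the 6-disk to the 5-disk RAIDZ2 run, bytes/sec
# taken straight from the dd output quoted earlier in the thread.
awk 'BEGIN {
  six = 419987850; five = 309758277
  printf "%.1f%% slower\n", (six - five) / six * 100
}'
```

which prints `26.2% slower`; the same arithmetic on the two file-copy runs works out to about 22%.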

Just for the heck of it, I did an old-school test: copying the same file from my MacBook Pro over Ethernet to the drives, timed with a stopwatch. Results:

Code:
Copy from Mac, 5 disks, RAIDZ2, no compression: 4:44
Copy from Mac, 5 disks, RAIDZ2, lz4 compression: 4:35
 
Copy from Mac, 6 disks, RAIDZ2, lz4 compression: 4:35


The file is 28GB. A quick calculation, 28GB divided by 275 seconds, tells me I'm fully saturating my gigabit Ethernet (104MB/second, to be exact). So the real conclusion is that in real-world scenarios, that throughput penalty is irrelevant for my tasks. I would rarely copy files directly on the server via SSH, and of course scrub operations would take longer, but 90% of the time I doubt it will make a difference.

Now, these tests were on a fairly empty array. What happens when the pool is at 80% capacity? Unknown to me. Would it impact my network transfers? I doubt it, but I would appreciate being corrected if needed.
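
As a sanity check on the saturation claim (binary units assumed, i.e. treating 28GB as 28x1024 MB):

```shell
# 28 GB copied in 4:35 (275 s), compared with gigabit Ethernet's
# roughly 110-118 MB/s practical ceiling after protocol overhead.
awk 'BEGIN { printf "%.0f MB/s\n", 28 * 1024 / (4 * 60 + 35) }'
```

which prints `104 MB/s`, so the link, not the pool, is the bottleneck.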

Thanks.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
You are missing out on key facts.

The ideal-width guideline isn't just about performance. You also end up with unused space on the disks, because block sizes don't divide evenly across the data disks. This loss is invisible: it's incurred on every block write that doesn't align, so you won't see the space missing when you create the pool, but if you follow your disk writes closely it adds up over time. You can lose 20% or more of your pool to this slack space, which stings when you consider the cost of another disk, and the fact that at an optimal width you'd get that disk's capacity plus the lost slack back!

As for speed, virtually everyone is limited by their network rather than their disks when it comes to throughput. The space issue above is why the ideal widths matter most to home users trying to maximize disk space while minimizing cost.
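
(The slack described above can be sketched numerically. This uses a commonly cited model of RAIDZ allocation, where each block's data plus parity sectors is rounded up to a multiple of parity+1; real pools vary with recordsize and ashift, so treat it as an illustration, not gospel.)

```shell
# Model of RAIDZ2 allocation: data sectors plus parity per row, with the
# total rounded up to a multiple of parity+1 (the hidden padding).
raidz_sectors() {
  # args: data_sectors ndisks parity
  d=$1; n=$2; p=$3
  rows=$(( (d + (n - p) - 1) / (n - p) ))   # ceil(data / data_disks)
  total=$(( d + rows * p ))                 # data + parity sectors
  m=$(( p + 1 ))
  echo $(( (total + m - 1) / m * m ))       # round up to multiple of p+1
}

# One 128K record with 4K sectors (ashift=12) = 32 data sectors:
for n in 5 6 8; do
  echo "$n disks: $(raidz_sectors 32 "$n" 2) sectors used for 32 data sectors"
done
```

Under this model a 6-disk RAIDZ2 hits its ideal 2/6 parity fraction exactly (48 sectors, 33.3% overhead), while 5 disks uses 54 and 8 disks uses 45, each giving up a few extra percent to padding on top of the parity cost.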
 

Oded

Explorer
Joined
Apr 20, 2014
Messages
66
Wow. I didn't even consider the alignment issue; I thought it was just computational overhead and that's that. A 20% loss of disk space is indeed unacceptable and defeats the whole point of adding more disks in the first place.

I wish I had known that before destroying my pool; it would have saved me two days of copying everything back from the old NAS again :D.

OK, I finally have a decision: use 6 disks for now. If I need more space, I'll set up a separate 2-disk RAID0 pool for streaming purposes only, holding data I don't mind losing. When that isn't enough either, I'll move on to 6TB drives or whatever the future brings.

Thanks, cyberjock, for your help on these forums. It's been a great help.
 

NineFingers

Explorer
Joined
Aug 21, 2014
Messages
77
After having used your setup for a while, how do you like it?
What is your memory configuration (mfg and model numbers)?
 