
Am I reading this correctly?


JBelthoff

Neophyte
Joined
Nov 19, 2015
Messages
4
Greetings,

I just set up a TrueNAS system using four 4 TB magnetic disks in two mirrored vdevs.

After doing an initial test my question is this:

Am I correct that this system is getting 5.2 GB/s writes and 10 GB/s reads? (see below) Am I reading those numbers correctly?

Code:
Write: Zeros
dd if=/dev/zero of=/mnt/tank/JBelthoff/ddfile bs=2048k count=20000
41943040000 bytes transferred in 7.953296 secs (5273667825 bytes/sec)
41943040000 bytes transferred in 7.849670 secs (5343287117 bytes/sec)
41943040000 bytes transferred in 8.232694 secs (5094692116 bytes/sec)

Read: Zeros
dd of=/dev/null if=/mnt/tank/JBelthoff/ddfile bs=2048k count=20000
41943040000 bytes transferred in 4.047308 secs (10363195584 bytes/sec)
41943040000 bytes transferred in 4.013428 secs (10450675817 bytes/sec)
41943040000 bytes transferred in 4.031919 secs (10402749097 bytes/sec)
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,708
Yes, it's compressing very efficiently. If you had a faster CPU, you would see even better speeds.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,384
Use a dataset that has compression disabled
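A minimal sketch of that advice (the dataset name tank/bench is hypothetical, and this needs a live ZFS pool, so treat it as illustration only):

```shell
# Create a scratch dataset with compression turned off
# ("tank/bench" is a placeholder; use your own pool/dataset names)
zfs create -o compression=off tank/bench

# Re-run the same dd test against it; now the zeros actually hit the disks
dd if=/dev/zero of=/mnt/tank/bench/ddfile bs=2048k count=20000

# Clean up when finished
zfs destroy -r tank/bench
```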
 

JBelthoff

Neophyte
Joined
Nov 19, 2015
Messages
4
Thanks, I guess I have some reading to do. I didn't think SATA3 magnetic disks could read and write at those speeds. Even assuming the mirrored vdevs quadruple performance, I would have expected 1.2 GB/s or 2.4 GB/s, which would already max out the bandwidth.
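For what it's worth, those expected ceilings fall out of simple arithmetic, assuming the ~600 MB/s SATA3 link (not the platters) is the limit: writes scale with the number of vdevs, reads with the number of disks.

```shell
SATA3_MBPS=600   # theoretical SATA3 link rate, MB/s (assumed ceiling)
VDEVS=2          # two mirrored vdevs: writes stripe across vdevs
DISKS=4          # four disks total: mirror reads can use every disk

echo "write ceiling: $(( SATA3_MBPS * VDEVS )) MB/s"   # 1200 MB/s = 1.2 GB/s
echo "read ceiling:  $(( SATA3_MBPS * DISKS )) MB/s"   # 2400 MB/s = 2.4 GB/s
```

Real spinning disks top out well below the SATA3 link rate, so actual ceilings would be lower still.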

A "dataset that has compression disabled" is certainly my next venture.

Thanks again! :smile:
 

winnielinnie

Senior Member
Joined
Oct 22, 2019
Messages
420
You can also use /dev/urandom instead of /dev/zero, to take compression out of play, more or less.
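A scaled-down sketch of that idea (the /tmp path and 64 MiB size are placeholders; for a real run, point it at the pool and use the OP's bs=2048k count=20000):

```shell
# Write incompressible data from the kernel RNG (64 MiB demo size)
dd if=/dev/urandom of=/tmp/ddrand bs=1M count=64 2>/dev/null

# Read it back; compression can't inflate this number
dd if=/tmp/ddrand of=/dev/null bs=1M 2>/dev/null
```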
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,384
You can also use /dev/urandom instead of /dev/zero, to take compression out of play, more or less.
No, you can't use urandom. That requires too much CPU overhead; you won't be measuring disk performance, you'll be measuring CPU performance.
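One common middle ground (my suggestion, not something proposed in this thread): pay the RNG cost once up front by pre-generating a random file, then reuse that file as the dd input, so the disks rather than the CPU become the bottleneck. Paths and sizes below are placeholders for a small demo.

```shell
# Generate the random source once (CPU-bound, but only once)
dd if=/dev/urandom of=/tmp/randsrc bs=1M count=64 2>/dev/null

# Reuse it as the write source; rereading a cached file is cheap
dd if=/tmp/randsrc of=/tmp/writetest bs=1M 2>/dev/null
```

For a real benchmark, the second command's output would go to a dataset on the pool instead of /tmp.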
 

wdp

Junior Member
Joined
Apr 16, 2021
Messages
19
From everything I've read, dd is a bust for most real-world testing. It's best to use fio.
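For anyone who wants to try fio, a minimal sequential-write sketch (it requires fio installed and a live pool, and the directory path is a placeholder, so this is illustration only):

```shell
# Sequential 1 MiB writes, roughly comparable to the dd test above.
# end_fsync makes fio flush at the end so buffered writes are counted.
fio --name=seq-write --rw=write --bs=1M --size=4g \
    --directory=/mnt/tank/fio-test --end_fsync=1
```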
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,384
From everything I've read, dd is a bust for most real-world testing. It's best to use fio.
dd is going to give you a simple "is my pool working?" statistic. It's a best-case throughput test, nothing like the real world, and people usually run it on an empty pool, so the numbers come out higher than they would on a pool that's 40% full.

Still a decent test because it's just so simple. Tools like fio and iozone are much more complicated, and their output takes real effort to analyze and interpret.
 

JBelthoff

Neophyte
Joined
Nov 19, 2015
Messages
4
Yes, I understand, thank you. I was just surprised by the initial results. Certainly my test above isn't indicative of the actual performance of a working system, particularly once you factor in networking bottlenecks and all the other fun stuff that gets in the way. Once I've had the system in production for a spell, I'll run some real-world scenarios.
 

winnielinnie

Senior Member
Joined
Oct 22, 2019
Messages
420
No, you can't use urandom. That requires too much CPU overhead; you won't be measuring disk performance, you'll be measuring CPU performance.
I was referencing "taking compression out of play" in regards to,

Code:
dd of=/dev/null if=/mnt/tank/JBelthoff/ddfile bs=2048k count=20000
41943040000 bytes transferred in 4.047308 secs (10363195584 bytes/sec)


In the OP's example, the file /mnt/tank/JBelthoff/ddfile is a very "large" file that contains nothing but zeros, which means it compresses extremely efficiently and thus yields crazy read speeds. (Hence the double quotes around "large".)

If they had initially created the same "very large file" from /dev/urandom instead of /dev/zero, the resulting file would barely compress (if at all), and so it should yield a more realistic read speed. (This doesn't account for caching effects.)

I doubt they would get 10GB/s read speeds if they initially created the file with /dev/urandom instead of /dev/zero.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,384
I was referencing "taking compression out of play" in regards to,

Code:
dd of=/dev/null if=/mnt/tank/JBelthoff/ddfile bs=2048k count=20000
41943040000 bytes transferred in 4.047308 secs (10363195584 bytes/sec)


In the OP's example, the file /mnt/tank/JBelthoff/ddfile is a very "large" file that contains nothing but zeros, which means it compresses extremely efficiently and thus yields crazy read speeds. (Hence the double quotes around "large".)

If they had initially created the same "very large file" from /dev/urandom instead of /dev/zero, the resulting file would barely compress (if at all), and so it should yield a more realistic read speed. (This doesn't account for caching effects.)

I doubt they would get 10GB/s read speeds if they initially created the file with /dev/urandom instead of /dev/zero.
This works for read testing only, not write testing.
 

jgreco

Resident Grinch
Moderator
Joined
May 29, 2011
Messages
13,708
Basically you need to disable compression completely if you want to test actual disk speeds. Then you can do it with dd and /dev/zero and it'll do what you thought it was doing.
 