Raidz6 write performance faster than read?


sojoknuckle

Cadet
Joined
Feb 3, 2016
Messages
6
Very new to FreeNAS and ZFS, using a FreeNAS Mini and gigabit Ethernet.

I have configured the unit for RAIDZ2 (using 4 x 4TB Western Digital Red drives).

What I am trying to understand is whether the performance numbers I am seeing make sense.

What I am finding is about 90 MB/s write speed, which I am not complaining about. The part I am not sure about is the read speed, which I am finding to be closer to 65 MB/s.

Do these numbers seem correct, or could I be doing something wrong? Given that I am new to ZFS and RAIDZ, I understand RAIDZ2 has a significant performance hit, but I was expecting write to be the slower of the two and read to be the faster; from my testing I am seeing the opposite.

Thoughts?

Thanks in advance for any help in understanding this.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
I'm guessing you mean RAIDZ2 (not 6, which doesn't exist, AFAIK).

What was your test methodology?
 

sojoknuckle

Cadet
Joined
Feb 3, 2016
Messages
6
First off, yes, RAIDZ2 not RAIDZ6; that was a typo from being new to ZFS :) I am still thinking in terms of hardware RAID while using ZFS (yes, I understand hardware RAID is bad) :)

As far as testing method, I used the approach that seems to be suggested on the forums, using the dd command.

A bit more on the testing setup:

The FreeNAS Mini I have has 32 GB of RAM, but does not have a dedicated SSD for ZIL or L2ARC.
The test was conducted on a machine running Mac OS X (El Capitan) over a dedicated gigabit network
(only the Mac and the FreeNAS Mini are on this network).
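For reference, since the ARC in RAM is the only read cache in this setup, its size can be checked from the FreeNAS shell with the standard FreeBSD sysctls (a rough sketch, nothing FreeNAS-specific):

Current ARC size in bytes:
# sysctl -n kstat.zfs.misc.arcstats.size
Maximum ARC size the system will use:
# sysctl -n vfs.zfs.arc_max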

Write test, run multiple times, always around the same result:
# time sh -c "dd if=/dev/zero of=ddfile2 bs=1m count=49152 && sync"
49152+0 records in
49152+0 records out
51539607552 bytes transferred in 570.797227 secs (90294075 bytes/sec)

Read test, run multiple times, always around the same result:
# time dd if=ddfile2 of=/dev/null bs=1m
49152+0 records in
49152+0 records out
51539607552 bytes transferred in 669.563815 secs (76974900 bytes/sec)

The performance meets my needs; I am just trying to understand the technology, and to be honest storage in general, so please forgive me if the question seems out of left field. I have found a lot of articles on RAIDZ write performance but none that talk about read. It still seems strange that the tests I have run always show write performance noticeably higher than read.
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Client machine specs? What NIC does it have? What does iperf show between the two hosts?
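If you haven't used it before, a basic run looks roughly like this (the NAS IP below is a placeholder; start the server side on the FreeNAS box first):

On the FreeNAS box:
# iperf -s -p 5001

On the Mac:
# iperf -c <nas-ip> -p 5001 -n 1024M -i 1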
 

sojoknuckle

Cadet
Joined
Feb 3, 2016
Messages
6
Client machine specs:
Mac Pro (Early 2009)
2 x 2.26 GHz Quad-Core Intel Xeon
24 GB 1066 MHz DDR3 ECC
The NIC is the built-in one for this model Mac Pro
(this model has 2 NICs, but my setup is only using one of them)
Intel 82574L

Results from iperf (10.0.2.3 is the IP of the FreeNAS Mini):
./iperf -n 1024M -i 1 -c 10.0.2.3 -p 5001

------------------------------------------------------------
Client connecting to 10.0.2.3, TCP port 5001
TCP window size: 129 KByte (default)
------------------------------------------------------------
[ 4] local 10.0.2.2 port 64196 connected with 10.0.2.3 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0- 1.0 sec 113 MBytes 948 Mbits/sec
[ 4] 1.0- 2.0 sec 112 MBytes 941 Mbits/sec
[ 4] 2.0- 3.0 sec 112 MBytes 942 Mbits/sec
[ 4] 3.0- 4.0 sec 112 MBytes 941 Mbits/sec
[ 4] 4.0- 5.0 sec 112 MBytes 942 Mbits/sec
[ 4] 5.0- 6.0 sec 112 MBytes 941 Mbits/sec
[ 4] 6.0- 7.0 sec 112 MBytes 942 Mbits/sec
[ 4] 7.0- 8.0 sec 112 MBytes 941 Mbits/sec
[ 4] 8.0- 9.0 sec 112 MBytes 942 Mbits/sec
[ 4] 0.0- 9.1 sec 1.00 GBytes 942 Mbits/sec

Network performance seems to be what I would expect.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
It looks like you are running that dd command locally, correct?

As for the dd command, for a /dev/zero write to be realistic, you need to ensure compression is turned off for the target dataset; otherwise it can artificially inflate the write speeds.
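As a rough sketch (the pool/dataset name below is just an example, adjust to match yours), from the FreeNAS shell:

Check the current setting:
# zfs get compression tank/Media

Turn it off for the test:
# zfs set compression=off tank/Media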

And I would think that a 4-disk RAIDZ2 should be able to saturate a streaming 1 Gbps connection (meaning something else might be wrong with the dd command), but I could be wrong.
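(Rough back-of-envelope, assuming ballpark WD Red streaming speeds of roughly 100-150 MB/s per disk: a 4-wide RAIDZ2 stripe has 2 data disks, so sequential throughput somewhere in the 200-300 MB/s range, comfortably above the ~112 MB/s that gigabit Ethernet tops out at.)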
 

SweetAndLow

Sweet'NASty
Joined
Nov 6, 2013
Messages
6,421
Do iperf in both directions. With the dd command, turn off compression on the dataset and use a 128k block size.
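Roughly, with placeholder IPs (swap the iperf roles so the Mac runs the server and the NAS runs the client for the reverse direction):

On the Mac:
# iperf -s -p 5001
From the FreeNAS shell:
# iperf -c <mac-ip> -p 5001 -n 1024M -i 1

And the dd test with a 128k block size (compression off on the dataset):
# dd if=/dev/zero of=ddfile bs=128k count=49152 && sync
# dd if=ddfile of=/dev/null bs=128k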
 

sojoknuckle

Cadet
Joined
Feb 3, 2016
Messages
6
OK, I did not think about how the compression on the dataset would have impacted that; good to know.
I turned off compression on the dataset (not needed anyway, given the data content is already compressed media).

The test is being run on the client machine, connecting to the FreeNAS Mini over gigabit Ethernet using an AFP share.

The local path at time of execution is /Volumes/Media/speedtest,
where /Volumes/Media is the AFP share mapped to the Media dataset and speedtest is just a folder in that dataset (not a child dataset).
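(Side note: the mount type can be double-checked on the Mac with a plain mount listing; the grep pattern below is just my share name, and an AFP mount shows up as afpfs.)

# mount | grep Media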

Write Test
# time sh -c "dd if=/dev/zero of=ddfile3 bs=1m count=49152 && sync"
49152+0 records in
49152+0 records out
51539607552 bytes transferred in 635.337978 secs (81121559 bytes/sec)

Read Test
# time dd if=ddfile3 of=/dev/null bs=1m
49152+0 records in
49152+0 records out
51539607552 bytes transferred in 602.606757 secs (85527762 bytes/sec)

Once I disabled the compression on the dataset, I see read and write in the same ballpark (or at least close enough for my use case).

The question I have: on a RAIDZ2 config, should read and write be around the same, or should the read speed be noticeably faster?
 

sojoknuckle

Cadet
Joined
Feb 3, 2016
Messages
6
iperf from the NAS to the client shows the same performance as the test from the client to the NAS:

[ 6] local 10.0.2.3 port 59069 connected with 10.0.2.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0- 1.0 sec 113 MBytes 946 Mbits/sec
[ 6] 1.0- 2.0 sec 112 MBytes 938 Mbits/sec
[ 6] 2.0- 3.0 sec 112 MBytes 937 Mbits/sec
[ 6] 3.0- 4.0 sec 112 MBytes 938 Mbits/sec
[ 6] 4.0- 5.0 sec 112 MBytes 936 Mbits/sec
[ 6] 5.0- 6.0 sec 112 MBytes 937 Mbits/sec
[ 6] 6.0- 7.0 sec 112 MBytes 937 Mbits/sec
[ 6] 7.0- 8.0 sec 112 MBytes 937 Mbits/sec
[ 6] 8.0- 9.0 sec 112 MBytes 938 Mbits/sec
[ 6] 0.0- 9.2 sec 1.00 GBytes 939 Mbits/sec

When I run the dd command changing the block size from 1m to 128k, I see the read/write performance gap again (dataset has compression disabled):

# time sh -c "dd if=/dev/zero of=ddfile4 bs=128k count=49152 && sync"
49152+0 records in
49152+0 records out
6442450944 bytes transferred in 67.379480 secs (95614435 bytes/sec)

# time dd if=ddfile4 of=/dev/null bs=128k
49152+0 records in
49152+0 records out
6442450944 bytes transferred in 81.036344 secs (79500760 bytes/sec)

As I said, I am trying to better understand this. Thank you both; so far I have learned a lot already.
 