Read slower than write in raidz2?


Stanri010

Explorer
Joined
Apr 15, 2014
Messages
81
I've got 6x 4TB WD Red drives hooked up to an HBA, powered by a Xeon E3-1245 v3 with 32 GB of ECC RAM. I ran a performance test in the system options and the output file looked like this:
Iozone: Performance Test of File I/O
Version $Revision: 3.420 $
Compiled for 64 bit mode.
Build: freebsd

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
Vangel Bojaxhi, Ben England, Vikentsi Lapa.

Run began: Wed Feb 11 21:09:15 2015

Record Size 128 KB
File size set to 41943040 KB
Command line used: /usr/local/bin/iozone -r 128 -s 41943040k -i 0 -i 1
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
        KB  reclen   write  rewrite    read  reread
  41943040     128  433565   464058  211254  210979

iozone test complete.

[Attached screenshot of the read/write graphs: Screen Shot 2015-02-11 at 9.23.42 PM.png]

It looks like it's saying my write speeds were around 450 MB/s while my read speeds were around 210 MB/s. Is this to be expected with a raidz2 configuration? The graphs seem to correlate with what the output file shows. What kind of performance bump could I expect if I were to drop down to a raidz1 config? What can be done to bring that performance up enough to saturate 10 GbE?
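For reference, the same kind of test can also be run from the shell, and a multi-stream run would show whether reads scale with parallelism. The thread count and per-thread sizes below are just examples; the total file size needs to stay well above the 32 GB of RAM so the ARC doesn't serve the reads from cache:

# single-stream sequential write/read, same record size as the GUI test
iozone -r 128 -s 41943040k -i 0 -i 1

# 4 parallel streams of 12 GB each (48 GB total, i.e. more than RAM)
iozone -r 128 -s 12g -t 4 -i 0 -i 1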

*Thinking about a hypothetical 10 GbE setup and my theoretical max*
 

Stanri010

Explorer
Joined
Apr 15, 2014
Messages
81
[Attached screenshot of CPU usage during the test: Screen Shot 2015-02-11 at 9.32.40 PM.png]

The CPU didn't seem to be bothered by the workload. I'm assuming it's just bottlenecked by the actual drives?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I have a somewhat similar build and my throughput is pretty much on par.
For quite some time I have been curious about what is limiting the throughput on my system, and I have made several configuration changes to experiment with different settings.
As of last night I have reconnected one of my SSDs as an L2ARC, now that the FreeNAS GUI can actually provide feedback.
It maxes out at about 100 MB/s. I have also experimented with that same SSD mounted as a zvol and got similar results.
I am using gstat to look at the activity of the drives; the SSD was maxed out while there was still headroom on the RAIDZ2 drives.
In the past I did try two of the SSDs in RAID0 and got close to 190 MB/s or so. These Kingston V200+ SSDs are rated at 400 MB/s+, yet I cannot achieve that on FreeNAS.
I bought a 6TB WD GREEN as an external backup drive, and during replication I would steadily get 150 MB/s+ at the start of the platter, dropping to 100 MB/s+ at 90%+ capacity.
This tends to indicate that my SSDs sustain lower steady throughput than my 6TB GREEN, and that the SSD is the limiting factor.
I don't have enough hard drives available to perform a proper test.
When I was performing burn-in of the 6TB GREEN, I could see the drive's throughput max out at 180 MB/s+.
I have a spare hardware RAID card, but for some reason I cannot plug it into my X10 board, as it prevents the system from booting. An interrupt conflict, I guess.
It would have been nice if I could have connected a few low-end drives in a RAID0 config.
As you pointed out, CPU utilization is barely visible, but I don't know if it reflects a single core or all of the threads.
A sure thing would be to populate two hardware RAID cards with enough drives to guarantee a minimum throughput and have them mounted as two independent zvols; FreeNAS would then see only one drive on each card.
Then we could perform read and write tests and ZFS replication across the volumes.
It has been said that the way to achieve high throughput is to split the drives into multiple vdevs, but for a 6-disk setup this is not very appealing, at least not while keeping redundancy and capacity.
The highest read throughput would come from a 6-way mirror, which is terrible for capacity; switching to a beefy SSD setup may be more interesting.
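For watching a test like this live, something along these lines should work on FreeBSD/FreeNAS (the pool name "tank" is only a placeholder):

# per-disk busy % and throughput, refreshed every second (physical providers only)
gstat -p -I 1s

# per-vdev and per-disk throughput for the pool, one-second intervals
zpool iostat -v tank 1

# per-CPU usage, to see whether a single core is pegged while the rest sit idle
top -P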
 

Stanri010

Explorer
Joined
Apr 15, 2014
Messages
81
What's the expected performance boost if I were to add 6 more identical drives in a new raidz2 vdev, so I'd have two identical 6-drive vdevs?
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
What's the expected performance boost if I were to add 6 more identical drives in a new raidz2 vdev, so I'd have two identical 6-drive vdevs?
At best twice the throughput seems a fair assessment; however, I suspect that will not be the case.
With FreeBSD, I find it difficult to profile the system in order to locate bottlenecks.
I wish there were a way to create virtual pools in RAM to test the computational throughput, so we would know what limit to expect on the processing side.
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I am going to be looking into it.
However, mkfile is not part of the FreeNAS install, and there seems to be a package available for FreeBSD.
The question, then, is: where should I install the package? Should it be done in a jail, since the FreeNAS OS is not writable?
In a jail it will, or should, have limited access to the system; in this particular case, would that defeat the purpose of the jail, or would it be enough to run the test?
 

Bidule0hm

Server Electronics Sorcerer
Joined
Aug 5, 2013
Messages
3,710
It's just an example, don't install anything for that. You can use dd, as the files you need to create aren't that big since you create them in RAM, and I assume you don't have 1 TB of RAM... For example, dd if=/dev/zero of=/yourRamDisk/yourFile bs=1m count=2000 for a 2 GB file.

But first you need to create a RAM disk, and I don't know how to do that except that you need to use mdmfs: https://www.freebsd.org/cgi/man.cgi?query=mdmfs&sektion=8

Edit: apparently it's not that complicated: https://forums.freenas.org/index.php?threads/yet-another-zfs-tuning-thread.10140/page-2#post-45000 and https://forums.freenas.org/index.ph...rted-or-not-supported.17691/page-2#post-96489 and https://forums.freenas.org/index.ph...rted-or-not-supported.17691/page-2#post-96508 for example. Actually, I think you don't need to create the files, as you can specify the size directly for the disk; just create a few RAM disks and use them to create a pool. Be careful to leave some RAM for FreeNAS and ZFS...
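As a rough sketch of that idea (device names, sizes and the pool name are only examples, and /dev/zero is highly compressible, so compression should be turned off to get an honest number):

# create three 2 GB memory-backed disks
mdconfig -a -t malloc -s 2g    # -> md0
mdconfig -a -t malloc -s 2g    # -> md1
mdconfig -a -t malloc -s 2g    # -> md2

# build a throwaway pool on them and disable compression for the test
zpool create ramtest raidz md0 md1 md2
zfs set compression=off ramtest

# write then read back a test file to gauge the non-disk limit
dd if=/dev/zero of=/ramtest/testfile bs=1m count=1500
dd if=/ramtest/testfile of=/dev/null bs=1m

# clean up
zpool destroy ramtest
mdconfig -d -u 0 && mdconfig -d -u 1 && mdconfig -d -u 2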

But maybe you should wait for an answer from a more experienced member to avoid making a mistake.
 