How I test the read performance of my disks

Status
Not open for further replies.
Joined: Nov 20, 2011 · Messages: 8
My setup has a whole variety of disks, some attached to the system via SATA and some via USB.

To test performance I set up my zpools as I'd like to deploy them and loaded them up with spare data that was headed for the bit bucket anyway.

With a bunch of data on the disks, I SSH into the host and initiate a "zpool scrub <poolname>" for each pool. Then I run "zpool iostat 5" and watch.

Here is a snapshot of what I received:

Code:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
fgh         7.57G   288G    590      0  73.6M      0
onetbstripe  15.1G  1.80T    390      0  48.6M      0
raptor      36.1G   238G  1.71K     13   218M  19.3K
twotbdisk   7.57G  1.81T    923     11   115M  14.9K
----------  -----  -----  -----  -----  -----  -----
fgh         7.57G   288G    575      0  71.6M      0
onetbstripe  15.1G  1.80T    397      0  49.4M      0
raptor      36.1G   238G  1.71K      0   217M      0
twotbdisk   7.57G  1.81T    944      0   118M      0


- fgh is an older SATA 1.x drive off its own controller.
- onetbstripe is a pair of striped 1 TB drives. One is off USB and the other off the motherboard SATA.
- raptor is a pair of 15K RPM Raptors off the motherboard SATA controllers.
- twotbdisk is a new SATA 1.x 2 TB disk off the motherboard.
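If you want a single number out of that output, the read-bandwidth column can be summed with a short awk helper. This is just a sketch I'd use for eyeballing totals: sum_read_bw is a made-up name, and the suffix handling assumes iostat's K/M/G are binary multiples.

```shell
# Hypothetical helper: sum the per-pool read-bandwidth column of
# 'zpool iostat' output to get aggregate read throughput for an interval.
sum_read_bw() {
  awk '
    # Convert values like 73.6M or 19.3K to bytes (assumes binary suffixes).
    function tonum(s,  n, u) {
      n = s + 0
      u = substr(s, length(s), 1)
      if (u == "K")      n *= 1024
      else if (u == "M") n *= 1024 * 1024
      else if (u == "G") n *= 1024 * 1024 * 1024
      return n
    }
    # Data rows have 7 fields; skip the header and separator lines.
    # Field 6 is the read-bandwidth column.
    NF == 7 && $1 != "pool" && $1 !~ /^-/ { total += tonum($6) }
    END { printf "%.1f MB/s aggregate read\n", total / (1024 * 1024) }
  '
}

# Fed the first interval from the output above:
sum_read_bw <<'EOF'
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
fgh         7.57G   288G    590      0  73.6M      0
onetbstripe  15.1G  1.80T    390      0  48.6M      0
raptor      36.1G   238G  1.71K     13   218M  19.3K
twotbdisk   7.57G  1.81T    923     11   115M  14.9K
EOF
```

For that interval it prints 455.2 MB/s aggregate read (73.6 + 48.6 + 218 + 115).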

Doing this will give you a really good idea of the maximum read performance of your zpools. In my experience with ZFS on Solaris hosts with high-end storage, a scrub operation will max out those storage devices.

You can also initiate a zpool scrub to help load up your FreeNAS box during acceptance testing, before deployment and before putting valuable data on it. During any burn-in testing, just fire off scrubs at regular intervals.
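On a FreeBSD-based FreeNAS box, a cron entry is a simple way to fire those off; the pool name tank and the 02:00 schedule below are just examples.

```shell
# /etc/crontab entry (example schedule and pool name):
# start a scrub every night at 02:00 during the burn-in period
0  2  *  *  *  root  /sbin/zpool scrub tank
```

Remove the entry once burn-in is done. Note that cron only kicks the scrub off; if a previous scrub is still running, zpool scrub will refuse to start another.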

I would not do scrubs on any SSD drives because of the wear.

Enjoy!
 

Milhouse · Guru · Joined: Jun 1, 2011 · Messages: 564
Doing this will give you a really good idea of the maximum read performance of your zpools. In my experience with ZFS on Solaris hosts with high-end storage, a scrub operation will max out those storage devices.

It would be interesting to see how your iostat results compare with a simple dd.

I would not do scrubs on any SSD drives because of the wear.

What wear? Unless you have data that has bit rotted, the scrub will be mostly reads as it is verifying the data. There will be some writes, but not enough to worry about.
 
Joined: Nov 20, 2011 · Messages: 8
It would be interesting to see how your iostat results compare with a simple dd.
Code:
[root@fatman] /mnt/raptor# zpool iostat raptor 10
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raptor      74.5G   200G  1.55K      0   198M      0
raptor      74.5G   200G  1.55K      0   197M      0
raptor      74.5G   200G  1.52K      0   193M      0


Code:
[root@fatman] /mnt/raptor# ls -alh rap.tar
-rw-r--r--  1 root  wheel    23G Nov 20 22:01 rap.tar
[root@fatman] /mnt/raptor# dd if=rap.tar of=/dev/null bs=2048k count=50k
12019+0 records in
12019+0 records out
25205669888 bytes transferred in 122.872515 secs (205136762 bytes/sec)
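For an apples-to-apples comparison with the iostat numbers (which use binary suffixes, so M is MiB), dd's bytes-per-second figure can be converted:

```shell
# dd reported 205136762 bytes/sec; convert to the binary megabytes
# that zpool iostat displays.
awk 'BEGIN { printf "%.0f MB/s\n", 205136762 / (1024 * 1024) }'
```

That comes out to about 196 MB/s, right in line with the ~193-198M read figures iostat showed while the dd was running.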


What wear? Unless you have data that has bit rotted, the scrub will be mostly reads as it is verifying the data. There will be some writes, but not enough to worry about.

I was incorrect in that statement about wear.
 