Performance Testing with DD


sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
Thanks for the extra info on the block sizes and the way ZFS writes to the disks.

Maybe you could answer some questions I have about zpools and vdevs.

Is my assumption correct that a zpool with 1 vdev has the same throughput (measured with dd) as a zpool with the same number of disks but using 2 or more vdevs?

If my assumption on throughput is correct and more vdevs means more IOPS, what is the benefit of more IOPS? Is it mainly for systems with a lot of concurrent users? Does iSCSI performance increase with more IOPS?
And how can you measure the IOPS of a zpool?
 
Joined
May 27, 2011
Messages
566

Yeah, the performance would be about the same (minus the effect of having fewer data disks). If you compare a single vdev of 8 disks in RAIDZ2 against 2 vdevs of 4 disks each in RAIDZ2, the single vdev is better because it has more data disks (6 vs 2x2 = 4). But if you compare the single 8-disk vdev against 2 vdevs of 5 disks each, they are relatively even (6 vs 2x3 = 6).

IOPS is Input/Output Operations Per Second; it is how snappy or responsive the pool is. It matters for things like databases and other workloads that need fast access rather than high bandwidth. You get more IOPS from multiple vdevs because each vdev can do something different at the same time. Most importantly, one can read while the other writes; ZFS will sort it out for you, so if one vdev is busy reading, the other will be used to write.
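
As for actually measuring the IOPS of a pool (an aside, not covered above): one way is to watch zpool iostat while a workload is running, since it breaks operations per second down per vdev. A minimal sketch, assuming a pool named tank (substitute your own pool name):

Code:
freenas# zpool iostat -v tank 5

The read/write "operations" columns are the IOPS for the pool and for each vdev; run your usual workload or a dd test in another shell while it samples every 5 seconds.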
 

sjieke

Contributor
Joined
Jun 7, 2011
Messages
125
Thanks again for the info. It all starts to make sense now :)
For my moderate home use, a RAIDZ2 of 6 drives would be better than 2 RAIDZ1 vdevs of 3 drives each.
I would have similar performance (mostly media streaming and backup) but better redundancy :)

Originally I was going for 4 drives in a RAIDZ array, but after reading your posts about failing drives I'm considering RAIDZ2 with 6 drives. The only problem is that I only have 4 SATA ports on the motherboard and a free PCI slot. So I'm wondering if adding a PCI controller card with 2 SATA ports would hurt my overall performance, since PCI is pretty old and not very fast.
 

Bohs Hansen

Guest
Looking for some suggestions on what configuration might be wrong for me. I have 4 drives of 2 TB each in RAIDZ1, formatted with 4K sectors, so that part is OK. The controller reports back properly as SATA 3, and so do all the drives. I know mechanical drives run at around SATA 1 speed, but still.

I only get around 55 MB/s write speed with a dd test of 20 GB, which seems a bit low to me even though they are mechanical drives. The read speed of 260 MB/s looks more right.

Code:
freenas# dd if=/dev/zero of=/mnt/jcube1/ddtestfile bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 381.726183 secs (54938647 bytes/sec)
freenas# dd of=/dev/zero if=/mnt/jcube1/ddtestfile bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 80.876339 secs (259303528 bytes/sec)


Using beta3 (but the same with beta 1 & 2). Initially, with the release version, I remember I had better results, around 130 MB/s.
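
One general caveat on dd testing (an aside, assuming the pool is named jcube1 and mounted at /mnt/jcube1 as in the output above): writes from /dev/zero can be inflated if compression is enabled on the dataset, and reads can be served straight from the ARC cache if the test file is smaller than RAM. A quick sanity check plus a read test that discards to /dev/null might look like this:

Code:
freenas# zfs get compression jcube1
freenas# dd if=/dev/zero of=/mnt/jcube1/ddtestfile bs=2048k count=10000
freenas# dd if=/mnt/jcube1/ddtestfile of=/dev/null bs=2048k

Keeping the test file well above the system's RAM size (20 GB here) helps keep the read numbers honest.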
 