RAID 10 pool


KMR

Contributor
Hey guys,

I'm back again. I have updated my system with an M1015 and four 1TB 7200 RPM drives configured in a RAID 10 pool. I have this pool configured as an iSCSI device extent for ESXi. I also purchased a Dell X3959 dual-port NIC for dedicated iSCSI use; MPIO is working well as far as I can tell. I was wondering if I could get some pointers on benchmarking this new pool so I can be sure everything is working well before I remove all local storage from my ESXi host. I will update this thread with the results of dd commands as I complete them.
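
In case it's useful, here is roughly how the paths can be sanity-checked from the ESXi shell (assuming ESXi 5.x esxcli syntax; exact output will vary):

Code:
# the iSCSI LUN should show one path per NIC port, and the round-robin
# policy (VMW_PSP_RR) if that's what is configured
esxcli storage nmp device list
esxcli storage core path list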

Thanks,
 

KMR

Contributor
New RAID 10 pool:

Code:
[root@freenas] /mnt/vm# dd if=/dev/zero of=/mnt/vm/tmp.dat bs=2048k count=10k
10240+0 records in
10240+0 records out
21474836480 bytes transferred in 136.804241 secs (156974932 bytes/sec)
[root@freenas] /mnt/vm# dd if=/mnt/vm/tmp.dat of=/dev/null bs=2048k count=10k
10240+0 records in
10240+0 records out
21474836480 bytes transferred in 67.561841 secs (317854519 bytes/sec)


RAIDZ2 pool for comparison:
Code:
[root@freenas] /mnt/vm# dd if=/dev/zero of=/mnt/volume/tmp.dat bs=2048k count=10k
10240+0 records in
10240+0 records out
21474836480 bytes transferred in 45.080932 secs (476361855 bytes/sec)
[root@freenas] /mnt/vm# dd if=/mnt/volume/tmp.dat of=/dev/null bs=2048k count=10k
10240+0 records in
10240+0 records out
21474836480 bytes transferred in 35.903472 secs (598127010 bytes/sec)
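
One caveat worth checking: since the writes come from /dev/zero, compression on the dataset would inflate the numbers. A quick sanity check, assuming the pools are actually named vm and volume to match the mount points:

Code:
# make sure compression isn't inflating the /dev/zero write figures
zfs get compression vm volume
# and, in a second session while dd runs, watch per-vdev throughput
zpool iostat -v vm 1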
 

survive

Behold the Wumpus
Moderator
Hi KMR,

I don't think you can ask for much better than that, well done!

-Will
 

KMR

Contributor
I was actually thinking those were pretty low considering the performance of the RAIDZ2 pool I have.
 

survive

Behold the Wumpus
Moderator
Hi KMR,

Actually, yes... I was thinking you were "dd"-ing from a client connected via iSCSI, not doing it locally. Upon closer inspection I see my assumption was incorrect.

-Will
 

KMR

Contributor
Can anyone provide some insight on why I am seeing this sort of performance problem? I was under the (perhaps mistaken?) impression that a RAID 10 pool would give me better performance than my RAIDZ2 pool for my ESXi datastore. Now that I have been reading more about this and the issues associated with ZFS and iSCSI, can anyone give some advice on this configuration? I'm wondering if I should take ZFS out of the equation or attempt to fix my disk subsystem performance problems.

I should mention that two of the disks are new WD Blue 1TB 7200 RPM drives and two are old (maybe really old?) 1TB Seagate 7200 RPM drives that I had lying around. I made sure to put one Seagate and one WD drive in each mirror vdev so I wouldn't end up with a vdev containing two old drives.
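
To rule out a single slow spindle, one thing I may try is testing each disk on its own; a rough sketch, assuming the drives show up as da0-da3 behind the M1015 (camcontrol devlist would confirm the actual names):

Code:
# read-only per-disk speed check, outside of ZFS
camcontrol devlist
for d in da0 da1 da2 da3; do
  echo "== $d =="
  diskinfo -t /dev/$d
done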
 

KMR

Contributor
I'm thinking about blowing this pool away and setting it up as a RAIDZ2 pool and seeing what I get for performance. Any thoughts here? Any suggestions on tests I can run before I destroy the pool?
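
Before I destroy anything, I'm also considering running the same kind of dd test from inside a VM that lives on the iSCSI datastore, so the network, MPIO and the ESXi storage stack are all in the path; roughly this, assuming a Linux guest with GNU dd:

Code:
# inside a Linux VM whose virtual disk sits on the iSCSI datastore
dd if=/dev/zero of=tmp.dat bs=1M count=8192 oflag=direct   # write path, bypasses guest cache
dd if=tmp.dat of=/dev/null bs=1M iflag=direct              # read path
rm tmp.dat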
 

cyberjock

Inactive Account
Here's some stuff to think about...

RAID10 stores two copies of your data, so all reads and writes go, at best, at the speed of a RAID0 of 2 disks. It's a little more complex than that, but bear with me. If each disk can do a theoretical 75MB/sec then you aren't going to beat 150MB/sec. I believe that's why your RAID10 dd numbers were so low.
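
Back-of-the-envelope, using that per-disk figure (which is just a ballpark guess for mixed 1TB 7200rpm drives):

Code:
# 2 mirror vdevs striped, each mirror writing at roughly one disk's speed
echo $((2 * 75))   # ~150 MB/sec ceiling; your measured write was ~157 MB/sec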

RAIDZ2 of 4 drives, on the other hand, works differently. Your data is striped together along with parity data. At first you'd assume you'd still be limited to roughly the same 150MB/sec, but that's actually not true (and your test has proven this). Each disk will be reading parity data along with the actual data, and ZFS' prefetching will cause your disks to read the parity data as well as "read ahead" some sectors that will contain actual data. So you do kinda-sorta get all 4 spindles' worth of bandwidth. I say kinda-sorta because there is some added complexity, so it's not completely true, but I think you get the idea.
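
If you want to see that in action, watch the individual disks while your RAIDZ2 read test runs; all four spindles should be busy. On FreeBSD, something like:

Code:
# GEOM-level per-disk I/O, refreshed every second, run alongside the dd read
gstat -I 1s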

Now, when you start doing stuff with iSCSI you are probably going to have latency issues. It seems that everyone does, because nobody reads up on the topic beforehand; ZFS + iSCSI is non-trivial to get good performance with (kudos for already figuring this out). For iSCSI on ZFS, a RAID10 should provide some relief on latency over RAIDZ2 for disk writes, since there is no need to read/calculate/write parity data. Your G2020 isn't exactly a high-performing CPU, so you may see additional latency from parity calculations with that CPU and a RAIDZ2, whereas a RAID10 has no parity calculation penalty but is instead limited by the number of spindles you have.
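
When you get to that point, the latency numbers worth watching live on the ESXi side (field names from memory, so double-check against your esxtop version):

Code:
# on the ESXi host: run esxtop, press 'u' for the device view ('d' for adapters).
# DAVG/cmd = latency at the array, KAVG/cmd = time spent in the ESXi kernel,
# GAVG/cmd = what the guest sees (roughly DAVG + KAVG), all in milliseconds.
esxtop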

If you've fallen asleep already, you need to keep this in mind: the benchmarks you did show the theoretical maximum speed for sequential reads and writes in a non-networked environment. The load you are going to get from iSCSI is nothing like a dd command. ZFS is a CoW file system, which (along with other things) makes the benchmark you did somewhat useless for iSCSI purposes. What you really need to do is try one configuration and, if you have problems with it, try the other. Or look at adding more hardware (RAM, ZIL, L2ARC, Intel NIC, etc.) or switching to a UFS RAID (I believe those are supported in FreeNAS; I'm not a UFS user myself). Don't let some hypothetical benchmark values be your deciding factor, because those aren't necessarily going to reflect reality. The game isn't to have a big epeen because you have awesome benchmark values. The game is for the FreeNAS server to perform well enough for you to be happy.
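
Before throwing more hardware at it, it's also worth a look at how the ARC is doing under your real VM load; on a FreeBSD-based FreeNAS the counters are exposed via sysctl (names as I remember them):

Code:
# ARC size plus hit/miss counters; a poor hit ratio under real load is a hint
# that more RAM or an L2ARC device could actually help
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses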

My guess is that a RAID10 is more likely to work for you for iSCSI than a RAIDZ2, despite the fact that the benchmarks make a RAIDZ2 seem like the logical choice on the surface. All you can do is test, test, test and then test some more. This is why a lot of people hate ZFS, and why one of the more popular ZFS tuning guides is called the "ZFS Evil Tuning Guide". There are lots of tradeoffs between bandwidth, latency, available CPU horsepower, ZFS efficiency, and more. If you go outside of a small range of uses, you will be forced to learn how all these tradeoffs work and figure out how to strike a happy medium between them. It might take a few simple tweaks to fix it for you, or it might require all-new, more powerful hardware. Each admin's needs are slightly different, which is why some admins get paid a lot of money to figure all this stuff out.

Good luck!
 

KMR

Contributor
Thanks for the info as always, Cyberjock. I think I will do some more real-world testing with ESXi and my VMs and see if I am happy with the configuration. I'll post my results here.
 

cyberjock

Inactive Account
Sounds like a great plan!
 