Dell PERC5i very slow? iSCSI

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
Hi All,

I have a PowerEdge 2900 server with a PERC5i; there's a battery on the RAID card. We have 8x 10,000rpm SAS drives at 300GB each in a RAID 50. The performance compared to our PE 2950 server with 6x 7,200rpm SAS drives is not as good for some reason :\ Read speeds are OK and the write speeds are terrible. Here is what iozone throws at me:

PE 2900:

iozone -R -l 5 -u 5 -r 4k -s 100m
Iozone: Performance Test of File I/O
Version $Revision: 3.397 $
Compiled for 64 bit mode.
Build: freebsd

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
Ben England.

Run began: Wed Dec 5 09:19:23 2012

Excel chart generation enabled
Record Size 4 KB
File size set to 102400 KB
Command line used: iozone -R -l 5 -u 5 -r 4k -s 100m
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 5
Max process = 5
Throughput test with 5 processes
Each process writes a 102400 Kbyte file in 4 Kbyte records

Children see throughput for 5 initial writers = 478166.58 KB/sec
Parent sees throughput for 5 initial writers = 168072.82 KB/sec
Min throughput per process = 91902.90 KB/sec
Max throughput per process = 99366.44 KB/sec
Avg throughput per process = 95633.32 KB/sec
Min xfer = 94712.00 KB

Children see throughput for 5 rewriters = 540908.53 KB/sec
Parent sees throughput for 5 rewriters = 177572.35 KB/sec
Min throughput per process = 104873.34 KB/sec
Max throughput per process = 109998.59 KB/sec
Avg throughput per process = 108181.71 KB/sec
Min xfer = 97700.00 KB

Children see throughput for 5 readers = 1437709.94 KB/sec
Parent sees throughput for 5 readers = 1423870.18 KB/sec
Min throughput per process = 284323.56 KB/sec
Max throughput per process = 291174.59 KB/sec
Avg throughput per process = 287541.99 KB/sec
Min xfer = 99892.00 KB

Children see throughput for 5 re-readers = 1426285.03 KB/sec
Parent sees throughput for 5 re-readers = 1408223.75 KB/sec
Min throughput per process = 278020.78 KB/sec
Max throughput per process = 292840.84 KB/sec
Avg throughput per process = 285257.01 KB/sec
Min xfer = 97220.00 KB

Children see throughput for 5 reverse readers = 1273106.03 KB/sec
Parent sees throughput for 5 reverse readers = 1265005.06 KB/sec
Min throughput per process = 249602.03 KB/sec
Max throughput per process = 258248.09 KB/sec
Avg throughput per process = 254621.21 KB/sec
Min xfer = 98928.00 KB

Children see throughput for 5 stride readers = 1203686.48 KB/sec
Parent sees throughput for 5 stride readers = 1194334.26 KB/sec
Min throughput per process = 226798.36 KB/sec
Max throughput per process = 260433.83 KB/sec
Avg throughput per process = 240737.30 KB/sec
Min xfer = 88868.00 KB

Children see throughput for 5 random readers = 1088267.34 KB/sec
Parent sees throughput for 5 random readers = 1079029.60 KB/sec
Min throughput per process = 214190.69 KB/sec
Max throughput per process = 225236.17 KB/sec
Avg throughput per process = 217653.47 KB/sec
Min xfer = 97376.00 KB

Children see throughput for 5 mixed workload = 769717.92 KB/sec
Parent sees throughput for 5 mixed workload = 305395.73 KB/sec
Min throughput per process = 126605.52 KB/sec
Max throughput per process = 175081.62 KB/sec
Avg throughput per process = 153943.58 KB/sec
Min xfer = 74076.00 KB

Children see throughput for 5 random writers = 533008.86 KB/sec
Parent sees throughput for 5 random writers = 158078.97 KB/sec
Min throughput per process = 101488.95 KB/sec
Max throughput per process = 111429.24 KB/sec
Avg throughput per process = 106601.77 KB/sec
Min xfer = 93136.00 KB

Children see throughput for 5 pwrite writers = 498567.07 KB/sec
Parent sees throughput for 5 pwrite writers = 82874.70 KB/sec
Min throughput per process = 98310.04 KB/sec
Max throughput per process = 100731.76 KB/sec
Avg throughput per process = 99713.41 KB/sec
Min xfer = 100076.00 KB

Children see throughput for 5 pread readers = 1307074.14 KB/sec
Parent sees throughput for 5 pread readers = 1296484.05 KB/sec
Min throughput per process = 254374.42 KB/sec
Max throughput per process = 266300.59 KB/sec
Avg throughput per process = 261414.83 KB/sec
Min xfer = 97912.00 KB



"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4 Kbytes "
"Output is in Kbytes/sec"

" Initial write " 478166.58

" Rewrite " 540908.53

" Read " 1437709.94

" Re-read " 1426285.03

" Reverse Read " 1273106.03

" Stride read " 1203686.48

" Random read " 1088267.34

" Mixed workload " 769717.92

" Random write " 533008.86

" Pwrite " 498567.07

" Pread " 1307074.14


iozone test complete.

PE 2950:

Is this OK, or am I not doing something right here?

I have a ZFS setup with 4k blocks forced, as this seems to show the best results. Anybody have any ideas as to why it's slow?
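(In case it's useful to anyone reading along: something like the following should confirm whether the 4k setting actually took. 'tank' is just a placeholder pool/dataset name, so adjust to suit.)

Code:
# Rough sketch - 'tank' is a placeholder pool/dataset name.
# If zdb complains about the cache file, point it at the one FreeNAS uses with -U.
zdb -C tank | grep ashift     # ashift=12 means the vdevs were created for 4k sectors
zfs get recordsize tank       # dataset record size (128K by default)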

Thanks!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well, for starters, there's a bunch of reasons why the manual says....

NOTE: instead of mixing ZFS RAID with hardware RAID, it is recommended that you place your hardware RAID controller in JBOD mode and let ZFS handle the RAID.

That may not be your entire problem; there are a lot of factors that affect ZFS performance. But I'd definitely start there.
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
Hi,

I don't have the ZFS volume in a RAID setup. It's just a ZFS volume. The results aren't much different if I just use the drive as a device extent in iSCSI as opposed to a file extent.

Not sure what the issue is..
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So wait, this is unclear... where does iSCSI come into this, then? Where does the RAID50 come into this?

Assuming this is a system you can play around with, you'd be well advised to start at the beginning.

If you're using RAID50 on the PERC5/i, don't. Do what noobsauce80 quoted above. Complex pseudo-RAID levels are often not implemented fully in silicon and can be stressy on the poor little CPUs in cheap RAID controllers. If they ARE implemented in silicon, then they're craptacular and have strange performance characteristics resulting from teeny stripe sizes and other tradeoffs commonly made in low-end RAID controllers.

Have the controller export the disks to the host as JBOD. Then start there; measure your performance to each disk individually and in aggregate.

Do this BEFORE you create any filesystems, because while the read test is not destructive, the write test is data destructive.

Code:
[jgreco@storageX] /# camcontrol devlist
<NECVMWar VMware IDE CDR10 1.00>   at scbus1 target 0 lun 0 (pass0,cd0)
<VMware Virtual disk 1.0>          at scbus2 target 0 lun 0 (pass1,da0)
<OCZ-AGILITY3 2.15>                at scbus4 target 0 lun 0 (pass2,ada0)
<ST3000DM001-1CH166 CC43>          at scbus5 target 0 lun 0 (pass3,ada1)
<ST3000DM001-1CH166 CC43>          at scbus6 target 0 lun 0 (pass4,ada2)
<ST3000DM001-9YN166 CC4H>          at scbus7 target 0 lun 0 (pass5,ada3)
<ST3000DM001-9YN166 CC4H>          at scbus8 target 0 lun 0 (pass6,ada4)
[jgreco@storageX] /# dd if=/dev/ada1 of=/dev/null bs=1048576
^C3853+0 records in
3853+0 records out
4040163328 bytes transferred in 24.200801 secs (166943371 bytes/sec)
[jgreco@storageX] /# fg %1
dd if=/dev/ada1 of=/dev/null bs=1048576
^C14278+0 records in
14278+0 records out
14971568128 bytes transferred in 84.456721 secs (177269114 bytes/sec)
[jgreco@storageX] /# fg %2
dd if=/dev/ada2 of=/dev/null bs=1048576
^C14917+0 records in
14917+0 records out
15641608192 bytes transferred in 87.686877 secs (178380264 bytes/sec)
[jgreco@storageX] /# fg %3
dd if=/dev/ada3 of=/dev/null bs=1048576
^C15829+0 records in
15829+0 records out
16597909504 bytes transferred in 91.159528 secs (182075422 bytes/sec)
[jgreco@storageX] /# fg %4
dd if=/dev/ada4 of=/dev/null bs=1048576
^C16728+0 records in
16728+0 records out
17540579328 bytes transferred in 94.785611 secs (185055296 bytes/sec)
[jgreco@storageX] /#


I've only let each one run a minute or two because I have other things to do today, but you should ideally let them play out and note the speeds, which should all be very similar if you have the same model of disk. Then you do a write test, using "if=/dev/zero of=/dev/ada${foo}" as appropriate. I'm not doing that because I don't have a spare system up and running at the moment.
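A minimal sketch of that write test, assuming the disks show up as ada1, ada2 and so on like the list above; this WILL destroy whatever is on those disks:

Code:
# DESTRUCTIVE - overwrites the raw disks, so only run it on drives with nothing you care about.
# count=16384 bounds each run to roughly 16GB; drop it to write the whole drive.
dd if=/dev/zero of=/dev/ada1 bs=1048576 count=16384
dd if=/dev/zero of=/dev/ada2 bs=1048576 count=16384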

People are always anxious to skip this step because they think it's just fine and dandy to do it on top of a filesystem layered on top of the disks, but this step can identify poorly performing disk devices (possible future fail!) or bad cabling or troubled controller ports or PCI bus bottlenecks or any number of other things that are WICKED HARD to debug when all you know is that your two-dozen-disk filesystem is performing pretty poorly and you don't know why.

Note that testing one drive individually and then all of them in parallel are not the same test. The speeds you get from one disk and from all disks at the same time should be very similar, or else you've got bottlenecks. Important test.
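Something along these lines should do for the everything-at-once read pass (same assumed ada1-ada4 names as above; add a count= if you don't want to wait for whole disks):

Code:
# Read all the disks at the same time, then compare each rate to its single-disk number.
dd if=/dev/ada1 of=/dev/null bs=1048576 &
dd if=/dev/ada2 of=/dev/null bs=1048576 &
dd if=/dev/ada3 of=/dev/null bs=1048576 &
dd if=/dev/ada4 of=/dev/null bs=1048576 &
wait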

Then you can create your proposed filesystem and repeat the single file dd test.

Code:
[jgreco@storageX] /mnt/storageX# dd if=/dev/zero of=testfile bs=1048576 count=65536
65536+0 records in
65536+0 records out
68719476736 bytes transferred in 613.280034 secs (112052363 bytes/sec)
[jgreco@storageX] /mnt/storageX# dd if=testfile of=/dev/null bs=1048576
65536+0 records in
65536+0 records out
68719476736 bytes transferred in 240.292488 secs (285982626 bytes/sec)


Now unfortunately this shows slower write speeds than FreeNAS would offer by default; the side effect of the responsiveness tweaking indicated by bug 1531 is lower write performance - but the way this system has been tuned, you can run a scrub, a large sequential read, and a large sequential write on this server at the same time without losing responsiveness.

Once you've gotten to this point and you're happy, THEN - and only then - should you layer iSCSI on top of it. iSCSI adds yet another layer of complexity.
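When you do get to that layer, it's worth repeating the same sort of raw dd read from the initiator side before blaming the array. A hypothetical example from a Linux client, assuming the LUN shows up there as /dev/sdb:

Code:
# Hypothetical initiator-side check (Linux, GNU dd); /dev/sdb is an assumed device name.
dd if=/dev/sdb of=/dev/null bs=1M count=4096 iflag=direct   # O_DIRECT keeps the client page cache out of it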

Not sure what the issue is..

If you just look at the end result of throwing something together and then it doesn't perform the way you'd like, you kind of go "hrm" and then waffle about playing find-the-issue. Build it in layers and test each layer.
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
Brilliant. Thanks again, jgreco!

It is not being used at the moment, so I shall try to JBOD the disks. Although apparently the card must have each disk set up as RAID 0, so in my case 8x RAID 0, one per disk. Weird. I shall have a tinker :) and then get back to you. Any ideas in the meantime, please fire away!
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
So the RAID card required each drive to be in a RAID 0 so that each disk would be "seen"; apparently this is by design.

The same test as above (but this time with a FreeNAS RAID-Z2) gave:

iozone -R -l 5 -u 5 -r 4k -s 100m
Iozone: Performance Test of File I/O
Version $Revision: 3.397 $
Compiled for 64 bit mode.
Build: freebsd

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
Ben England.

Run began: Wed Dec 5 15:51:54 2012

Excel chart generation enabled
Record Size 4 KB
File size set to 102400 KB
Command line used: iozone -R -l 5 -u 5 -r 4k -s 100m
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 5
Max process = 5
Throughput test with 5 processes
Each process writes a 102400 Kbyte file in 4 Kbyte records

Children see throughput for 5 initial writers = 492558.01 KB/sec
Parent sees throughput for 5 initial writers = 101916.13 KB/sec
Min throughput per process = 96392.04 KB/sec
Max throughput per process = 100090.70 KB/sec
Avg throughput per process = 98511.60 KB/sec
Min xfer = 98616.00 KB

Children see throughput for 5 rewriters = 537143.09 KB/sec
Parent sees throughput for 5 rewriters = 72468.78 KB/sec
Min throughput per process = 102936.26 KB/sec
Max throughput per process = 111440.63 KB/sec
Avg throughput per process = 107428.62 KB/sec
Min xfer = 94552.00 KB

Children see throughput for 5 readers = 1249029.50 KB/sec
Parent sees throughput for 5 readers = 1174607.04 KB/sec
Min throughput per process = 188388.02 KB/sec
Max throughput per process = 279932.88 KB/sec
Avg throughput per process = 249805.90 KB/sec
Min xfer = 68972.00 KB

Children see throughput for 5 re-readers = 1275443.09 KB/sec
Parent sees throughput for 5 re-readers = 1263833.71 KB/sec
Min throughput per process = 241081.39 KB/sec
Max throughput per process = 273248.78 KB/sec
Avg throughput per process = 255088.62 KB/sec
Min xfer = 90324.00 KB

Children see throughput for 5 reverse readers = 1238370.75 KB/sec
Parent sees throughput for 5 reverse readers = 1225202.85 KB/sec
Min throughput per process = 235173.36 KB/sec
Max throughput per process = 262758.12 KB/sec
Avg throughput per process = 247674.15 KB/sec
Min xfer = 91656.00 KB

Children see throughput for 5 stride readers = 1164267.42 KB/sec
Parent sees throughput for 5 stride readers = 1149815.07 KB/sec
Min throughput per process = 224601.50 KB/sec
Max throughput per process = 241926.55 KB/sec
Avg throughput per process = 232853.48 KB/sec
Min xfer = 95148.00 KB

Children see throughput for 5 random readers = 1078737.38 KB/sec
Parent sees throughput for 5 random readers = 1069475.34 KB/sec
Min throughput per process = 207075.55 KB/sec
Max throughput per process = 223333.05 KB/sec
Avg throughput per process = 215747.48 KB/sec
Min xfer = 94788.00 KB

Children see throughput for 5 mixed workload = 751472.26 KB/sec
Parent sees throughput for 5 mixed workload = 197017.26 KB/sec
Min throughput per process = 118584.65 KB/sec
Max throughput per process = 168166.31 KB/sec
Avg throughput per process = 150294.45 KB/sec
Min xfer = 72092.00 KB

Children see throughput for 5 random writers = 525531.80 KB/sec
Parent sees throughput for 5 random writers = 103116.85 KB/sec
Min throughput per process = 102272.97 KB/sec
Max throughput per process = 106994.12 KB/sec
Avg throughput per process = 105106.36 KB/sec
Min xfer = 97708.00 KB

Children see throughput for 5 pwrite writers = 343191.22 KB/sec
Parent sees throughput for 5 pwrite writers = 76012.75 KB/sec
Min throughput per process = 52550.70 KB/sec
Max throughput per process = 77775.55 KB/sec
Avg throughput per process = 68638.24 KB/sec
Min xfer = 90192.00 KB

Children see throughput for 5 pread readers = 1338728.94 KB/sec
Parent sees throughput for 5 pread readers = 1327999.28 KB/sec
Min throughput per process = 264179.72 KB/sec
Max throughput per process = 277587.91 KB/sec
Avg throughput per process = 267745.79 KB/sec
Min xfer = 97048.00 KB



"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4 Kbytes "
"Output is in Kbytes/sec"

" Initial write " 492558.01

" Rewrite " 537143.09

" Read " 1249029.50

" Re-read " 1275443.09

" Reverse Read " 1238370.75

" Stride read " 1164267.42

" Random read " 1078737.38

" Mixed workload " 751472.26

" Random write " 525531.80

" Pwrite " 343191.22

" Pread " 1338728.94


iozone test complete.

Gonna try a few more things too with iSCSI next :)
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
I have 2 PERC5s in my FreeNAS box. I did not have to set them up with a RAID 0 on each disk; I just plugged them in and made sure there were no VDs in the controller BIOS.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
To the OP -

I found that with a 3ware controller I could use the CLI to force direct hard drive access. I don't think the PERC5i is a rebranded 3ware, but you can check it out. But I will say that doing a bunch of RAID0s is not an ideal solution for FreeNAS. It may not let you run SMART diagnostics, give you serial numbers, etc.
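One way to find out whether SMART is even reachable through those per-disk RAID0 volumes; device names below are only examples, and whether passthrough works at all depends on the controller and driver:

Code:
# Names are examples - run camcontrol devlist first to see what you actually have.
camcontrol devlist
smartctl -i /dev/pass0    # should show the drive model and serial number if passthrough works
smartctl -a /dev/pass0    # full SMART report, if the controller lets it through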
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
Just finished a benchmark on one single drive on a RAID 0. Same, if not better, results than all 8 on the RAID-Z2 config. I read this here: http://en.community.dell.com/support-forums/servers/f/906/p/19339835/19724785.aspx - the config for JBOD is RAID 0, apparently :\

I have 6 drives at 7,200rpm on a 2950, but that server has a PERC 6i, and apparently I have read that it uses the LSI 2008, which is why the speed is 6-7x more than the 5i speeds I'm getting. I'm looking into firmware upgrades for the PERC5i, but it may be worth me upgrading the RAID card.

Thoughts?
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
That's odd. My disks are most certainly not configured in the controller's BIOS as RAID 0. I simply plugged them in and turned on FreeNAS, then checked the PERC BIOS and verified that my disks are not configured in any RAID... I have 2 cards running 8 disks each.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
The Dell PERC5 is an LSI 1068e.

Just for completeness, the PERC5i is based off the LSI MegaRAID SAS 8408E, with a few minor differences. My guess would be that it'd be likely to work best with LSI's firmware.
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
I tried that, but FreeNAS didn't see the disks at all. I created VDs and FreeNAS saw them.

I'm going to try a few more setups, but I'm getting rather impatient, as I have some clustering to set up :\
 

pirateghost

Unintelligible Geek
Joined
Feb 29, 2012
Messages
4,219
Hmmmm... I wonder if I have a PERC6i instead of a 5i in my server. I have a spare PERC card sitting here and it is definitely a 6 with an LSI 1068e... I am willing to bet that what's in my server is a 6.
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
It could well be, as the PERC6i has the LSI 2008 chip that supports JBOD. I have a spare 2950 with a 6i, I think, so I may switch the card over and see the results.
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
I have just finished putting a PERC 6/iR card in and the results are the same as the PERC5/i. Very, very sad moment :(

I can't easily remove a 6i card from one of our servers as it hosts test VMs. Cry.
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
Just put a 6i card in; again, the disks have to be set to RAID 0 and then RAID-Z2 in FreeNAS. Considering these disks are faster than the ones in my other server, which also uses a PERC6i card, and the performance this time is EXACTLY the same as with the 6/iR and the 5 card, I can only assume it's FreeNAS causing the crap performance.

Getting slightly annoyed with this now... :\
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Perhaps you should check to see what you're getting out of the I/O subsystem without layering ZFS, RAIDZ, and other stuff on top of it.

Generally speaking, debugging complex systems becomes easier if you try to compartmentalize your tests and qualify subsystems before you treat the thing as a whole. I tried to suggest that earlier in the thread.
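For instance, something like this against whatever the controller actually exports; on a PERC with the mfi driver the per-disk RAID0 volumes usually show up as mfid0, mfid1 and so on, so adjust the name to match:

Code:
# Non-destructive raw read, straight off the device with no ZFS in the path.
# Ctrl-T prints progress on FreeBSD; Ctrl-C when you've seen enough.
dd if=/dev/mfid0 of=/dev/null bs=1048576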
 

wedge1988

Dabbler
Joined
Dec 2, 2012
Messages
19
I'm using IOzone to test it, though. I can't use it on the physical disk as there are no writable areas for testing; the only place I can run IOzone is /mnt/x/.

I've tried 3 RAID cards and multiple RAID setups, and turned off the RAID BIOS. Added tune-ups too. The difference between this machine and the other is that the other has prefetch disabled, as it's got 4GB RAM; this one has 8GB RAM with prefetch enabled. The RAID card has writeback enabled by default.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You say that as though IOZone is some great test. It isn't. Yes, it can give you some idea of how applications are going to interact with your filesystem. But you have to know how to use it, and it's pretty clear you don't.

Specifically, you start this thread out by showing us a test for iozone on a "problematic" server, don't tell us anything about the sizing of the server, set the file size to 100MB, and then seem to think that performance results in the hundreds-of-megabytes to gigabytes-per-second range are "bad" - EACH of those points in turn is ridiculous, and taken as a group, mind-numbing. Your results are all screwed up because the system is caching a huge amount of the data involved in the test. Most people on these forums would be VERY pleased to get a pool that reads at "1423870.18 KB/sec" (that's 1.4 GIGAbytes per second) and write speeds of "478166.58 KB/sec" (that's almost 500 MEGAbytes per second). Many people wouldn't be able to get that even if they striped a couple of SSDs together.
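For the numbers to mean anything, the working set has to be a lot bigger than RAM so the ARC can't hide the disks; something along these lines, with sizes picked for an 8GB box, would be a saner starting point:

Code:
# Sketch: 5 processes x 4GB files = roughly 20GB touched, comfortably past 8GB of RAM.
iozone -R -l 5 -u 5 -r 4k -s 4g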

So here's the deal.

You don't understand the tests you're running, and your results are correspondingly bull$#!+.

You've been largely ignored by a bunch of people who might help you, first because you kind of came in here and displayed some obviously silly results, and probably later because when you were given some suggestions for useful tests to run, you disregarded them and continued as you were.

Personally, I'm going to spell this out for you and then bow out, because there aren't enough hours in the day. You're testing an essentially random selection of disk speed, controller speed, memory bus speed, and CPU speed, in an uncontrolled and useless manner. Your server is MUCH slower than you think it is (but that doesn't mean there's anything wrong).

You would be well-served to go inspect some at least theoretically valid tests and their reasonable results such as some of the ones I posted in this thread, including the results for the 60GB SSD which represent very reasonable numbers for a relatively fast single disk I/O subsystem. A pile of hot SAS drives in the right RAIDZ configuration ought to be able to beat those numbers for sequential read, but to start some real testing and get to useful results it is very important to know what the underlying performance characteristics are, and you won't tell me. I don't have your RAID controller and I don't have your disks, so I can't run the tests for you.

So I'm signing off this thread. Lesson for the day: if you come somewhere asking for help, it's a good idea to be cooperative.
 