Hi All,
I have a PowerEdge 2900 server with a PERC 5/i, which has a battery on the RAID card. We have 8x 10,000 RPM SAS drives at 300 GB each in a RAID 50. For some reason the performance is worse than our PE 2950 server with 6x 7,200 RPM SAS drives :\ Read speeds are OK, but the write speeds are terrible. Here is what iozone throws at me:
PE 2900:
iozone -R -l 5 -u 5 -r 4k -s 100m
Iozone: Performance Test of File I/O
Version $Revision: 3.397 $
Compiled for 64 bit mode.
Build: freebsd
Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer.
Ben England.
Run began: Wed Dec 5 09:19:23 2012
Excel chart generation enabled
Record Size 4 KB
File size set to 102400 KB
Command line used: iozone -R -l 5 -u 5 -r 4k -s 100m
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Min process = 5
Max process = 5
Throughput test with 5 processes
Each process writes a 102400 Kbyte file in 4 Kbyte records
Children see throughput for 5 initial writers = 478166.58 KB/sec
Parent sees throughput for 5 initial writers = 168072.82 KB/sec
Min throughput per process = 91902.90 KB/sec
Max throughput per process = 99366.44 KB/sec
Avg throughput per process = 95633.32 KB/sec
Min xfer = 94712.00 KB
Children see throughput for 5 rewriters = 540908.53 KB/sec
Parent sees throughput for 5 rewriters = 177572.35 KB/sec
Min throughput per process = 104873.34 KB/sec
Max throughput per process = 109998.59 KB/sec
Avg throughput per process = 108181.71 KB/sec
Min xfer = 97700.00 KB
Children see throughput for 5 readers = 1437709.94 KB/sec
Parent sees throughput for 5 readers = 1423870.18 KB/sec
Min throughput per process = 284323.56 KB/sec
Max throughput per process = 291174.59 KB/sec
Avg throughput per process = 287541.99 KB/sec
Min xfer = 99892.00 KB
Children see throughput for 5 re-readers = 1426285.03 KB/sec
Parent sees throughput for 5 re-readers = 1408223.75 KB/sec
Min throughput per process = 278020.78 KB/sec
Max throughput per process = 292840.84 KB/sec
Avg throughput per process = 285257.01 KB/sec
Min xfer = 97220.00 KB
Children see throughput for 5 reverse readers = 1273106.03 KB/sec
Parent sees throughput for 5 reverse readers = 1265005.06 KB/sec
Min throughput per process = 249602.03 KB/sec
Max throughput per process = 258248.09 KB/sec
Avg throughput per process = 254621.21 KB/sec
Min xfer = 98928.00 KB
Children see throughput for 5 stride readers = 1203686.48 KB/sec
Parent sees throughput for 5 stride readers = 1194334.26 KB/sec
Min throughput per process = 226798.36 KB/sec
Max throughput per process = 260433.83 KB/sec
Avg throughput per process = 240737.30 KB/sec
Min xfer = 88868.00 KB
Children see throughput for 5 random readers = 1088267.34 KB/sec
Parent sees throughput for 5 random readers = 1079029.60 KB/sec
Min throughput per process = 214190.69 KB/sec
Max throughput per process = 225236.17 KB/sec
Avg throughput per process = 217653.47 KB/sec
Min xfer = 97376.00 KB
Children see throughput for 5 mixed workload = 769717.92 KB/sec
Parent sees throughput for 5 mixed workload = 305395.73 KB/sec
Min throughput per process = 126605.52 KB/sec
Max throughput per process = 175081.62 KB/sec
Avg throughput per process = 153943.58 KB/sec
Min xfer = 74076.00 KB
Children see throughput for 5 random writers = 533008.86 KB/sec
Parent sees throughput for 5 random writers = 158078.97 KB/sec
Min throughput per process = 101488.95 KB/sec
Max throughput per process = 111429.24 KB/sec
Avg throughput per process = 106601.77 KB/sec
Min xfer = 93136.00 KB
Children see throughput for 5 pwrite writers = 498567.07 KB/sec
Parent sees throughput for 5 pwrite writers = 82874.70 KB/sec
Min throughput per process = 98310.04 KB/sec
Max throughput per process = 100731.76 KB/sec
Avg throughput per process = 99713.41 KB/sec
Min xfer = 100076.00 KB
Children see throughput for 5 pread readers = 1307074.14 KB/sec
Parent sees throughput for 5 pread readers = 1296484.05 KB/sec
Min throughput per process = 254374.42 KB/sec
Max throughput per process = 266300.59 KB/sec
Avg throughput per process = 261414.83 KB/sec
Min xfer = 97912.00 KB
"Throughput report Y-axis is type of test X-axis is number of processes"
"Record size = 4 Kbytes "
"Output is in Kbytes/sec"
" Initial write " 478166.58
" Rewrite " 540908.53
" Read " 1437709.94
" Re-read " 1426285.03
" Reverse Read " 1273106.03
" Stride read " 1203686.48
" Random read " 1088267.34
" Mixed workload " 769717.92
" Random write " 533008.86
" Pwrite " 498567.07
" Pread " 1307074.14
iozone test complete.
PE 2950:
Is this OK, or am I not doing something right here?
I have ZFS set up with a 4K block size forced, as this seems to give the best results. Does anybody have any ideas as to why it's slow?
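For reference, this is roughly how the 4K blocks were forced (the usual gnop trick on FreeBSD; the pool and device names below are just examples, not my exact layout):

```shell
# Create a transient gnop provider that reports 4K sectors on top of the
# PERC virtual disk, so ZFS picks ashift=12 at pool creation time.
# (mfid0 and "tank" are example names.)
gnop create -S 4096 /dev/mfid0
zpool create tank /dev/mfid0.nop

# The .nop device disappears after a reboot, but the pool keeps ashift=12.
# Verify which ashift the pool actually ended up with:
zdb -C tank | grep ashift    # should report ashift: 12
```

If the pool reports ashift 9 instead, it was created on the raw 512-byte device and every 4K write turns into read-modify-write, which would explain bad write numbers on its own.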
Thanks!