Hi guys, I was hoping to get some insight on how to interpret some tests I've been running.
After recently migrating from an old FreeNAS box to a newer, better server (see sig), everything is fine except for one thing: on XenServer, when I install a new VM from a virtual CD (served by FreeNAS over NFS) onto a new virtual disk (iSCSI from FreeNAS), the install process is slow and jerky.
This could very well be an issue with XenServer or something else (I'm leaning that way), but after extensive testing I'm not sure how to interpret the specific tests below, or where to look next.
All of my testing (iperf, dd, IOzone, copying files manually, etc.) pegs out my gigabit network at around 112 MB/s.
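For what it's worth, some back-of-the-envelope math of my own (the MTU and overhead figures are my assumptions, not measured): 112 MB/s is essentially line rate for gigabit Ethernet once frame overhead is subtracted, so the network itself looks healthy.

```python
# Rough ceiling for TCP payload throughput on gigabit Ethernet.
# Assumption: standard 1500-byte MTU, no jumbo frames.
link_bps = 1_000_000_000  # 1 Gbit/s

# Per 1500-byte frame: 1460 bytes of TCP payload, but 1538 bytes on
# the wire (14 Ethernet header + 4 FCS + 20 IP + 20 TCP + 12 preamble
# and inter-frame gap).
payload = 1460
wire = 1538

max_mb_s = link_bps / 8 * (payload / wire) / 1e6
print(round(max_mb_s, 1))  # ~118.7 MB/s theoretical ceiling
```

So 112 MB/s is about 94% of the theoretical maximum, which is about as good as a single gigabit link gets.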
The one exception was the Citrix performance VM: it would peg out the network on both the 8x WD Red RAIDZ2 pool and the mirrored 850 Pro SSDs on every test except 4k read. I thought that could maybe be the cause, but now I'm thinking the problem is really somewhere from XenServer up; the network and FreeNAS feel pretty solid. I might just throw in the towel and move over to VMware.
So I ran the IOzone tests below. They were all performed from a fresh Ubuntu test VM with two extra virtual drives mounted (one on the spinning-disk pool, one on the SSD pool) over MPIO iSCSI to the FreeNAS box.
Am I right to assume that, since the mirrored SSDs score so close to the RAIDZ2 spinners, this is mostly hitting ARC? And if so, shouldn't the results be higher? Is 11086 kB/s the expected result for 4k random reads from ARC? If this is nominal, great; I'll look further into the XenServer side of it. If it's not normal, where should I look for the bottleneck? I don't see anything obvious like high CPU load on the VM, dom0, or FreeNAS.
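One way I tried to sanity-check that 11086 number (my own reasoning, assuming a single-threaded iozone run where each 4k read is a synchronous round trip to the storage):

```python
# Convert the 4k random-read result into IOPS and implied per-IO latency.
# Assumption: iozone issues these reads one at a time (queue depth 1),
# so throughput = record_size / per_read_latency.
throughput_kb_s = 11086  # SSD 4k random-read result below
record_kb = 4

iops = throughput_kb_s / record_kb        # ~2772 IOPS
latency_us = 1_000_000 / iops             # ~361 microseconds per read

print(round(iops), round(latency_us))
```

If that reasoning holds, ~360 µs per read is roughly what a VM -> iSCSI -> network round trip costs, so even reads served entirely from ARC would be capped near this figure at queue depth 1; a parallel run (e.g. iozone's -t throughput mode) should scale much higher if per-IO latency is the limiter.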
Code:
SSD

Run began: Wed Dec 2 08:23:10 2015
    File size set to 2097152 kB
    Record Size 4 kB
    Setting no_unlink
    Command line used: iozone -s 2048m -r 4k -i 0 -w -f /mnt/localssd/test
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.

              kB  reclen    write  rewrite
         2097152       4   110908   128934

Run began: Wed Dec 2 08:26:50 2015
    Command line used: iozone -s 2048m -r 4k -i 2 -w -f /mnt/localssd/test
    (same settings as above)

                                  random   random
              kB  reclen            read    write
         2097152       4           11086   119903

Spinners

Run began: Wed Dec 2 08:31:59 2015
    Command line used: iozone -s 2048m -r 4k -i 0 -w -f /mnt/local/test
    (same settings as above)

              kB  reclen    write  rewrite
         2097152       4   129071   128993

Run began: Wed Dec 2 08:33:23 2015
    Command line used: iozone -s 2048m -r 4k -i 2 -w -f /mnt/local/test
    (same settings as above)

                                  random   random
              kB  reclen            read    write
         2097152       4            9984   119395