What tools/packages are people using to do benchmarking on Linux? I've searched a little and haven't seen anything obvious other than NFSometer and SPEC (which is expensive).
I am also interested in measuring the difference between RAIDZ1/2 with and without encryption. I know that just about any config can saturate a gigabit link on an Avoton C2750, but I have 4 gigabit ports and want to see what my maximum throughput is if I decide to put different loads on different ports.
Right now I am using dd with various block sizes, writing various amounts of zero data (with compression disabled, of course) and comparing the results, but I was wondering whether there are any best practices or good tools people use for random read/write testing.
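For random read/write testing, fio is the tool most people seem to reach for. A minimal sketch of a job file is below; the dataset path is an assumption, so point `directory` at a mountpoint on the pool you actually want to test. Note that fio fills its write buffers with random data by default, so compression won't inflate the numbers the way zeroed dd output can.

```ini
; random-rw.fio -- minimal sketch; path and sizes are assumptions, tune for your pool
[global]
ioengine=posixaio
; direct I/O is not reliably supported on ZFS, so use buffered I/O
direct=0
size=4g
runtime=60
time_based
; hypothetical mountpoint -- replace with a dataset on the pool under test
directory=/mnt/tank/benchtest

[randread]
rw=randread
bs=4k
iodepth=16

[randwrite]
; stonewall makes this job wait until randread finishes
stonewall
rw=randwrite
bs=4k
iodepth=16
```

Run it with `fio random-rw.fio` and compare the IOPS/bandwidth lines across your pool configurations.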
My $0.02: don't bother benchmarking RAIDZ1, because you shouldn't be considering it in the first place. Beyond that, performance bottlenecks depend heavily on your workload, so test with as realistic a workload as possible. You may hit an IOPS bottleneck before you hit the CPU, in which case the fix is more vdevs, not turning off features like encryption.
I've seen mixed commentary on RAIDZ1/2. I understand the unaligned-write issue (with certain drive counts), and also that statistically, as the number of drives goes up, the chance of a second failure during resilvering rises. However, I have a 4-drive FreeNAS Mini style system, and many people seem to be using RAIDZ1 successfully in that scenario. With only 4 drives, the likelihood of a second failure should be relatively low, no?
If your drives are larger than 1TB, avoid RAID5/RAIDZ1; a quick Google search will turn up plenty on why. I have personally seen the aftermath of a multi-drive failure in RAID5 (5 × 500GB drives) without adequate backups. The machine wasn't my responsibility, but it was not an experience I'd care to repeat.