I'm currently running a single vdev mirror using two 4TB drives. My use case is mainly media serving and moving files around, which is essentially a queue-depth-1 workload. Access is mainly over SMB, plus one or two iSCSI targets.
Previously I was maxing out my 1 Gbps network, and have since upgraded to a point-to-point 10 Gbps link between the server and my workstation. This works great when reading files cached in ARC, but my use case doesn't see much benefit from that most of the time. Perhaps it's my imagination, but I still seem to be limited to the read performance of a single drive: I see no more than 100-130 MB/s when reading large (hopefully sequentially written) files. The pool is less than 50% full. Perhaps I should be glad I have this level of performance at all. :)
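For what it's worth, a local sequential read straight off the pool (bypassing SMB and the network) should show whether the ~130 MB/s ceiling is the disks or the sharing layer. Something along these lines, where the path is just an example and a repeat run will of course be served from ARC:

    # first read of a large, not-yet-cached file; rerunning it measures ARC, not the disks
    dd if=/mnt/tank/media/large-file.mkv of=/dev/null bs=1m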
I now have the option to move four 640 GB drives to the server, and would like to set these up in the best way possible for my situation.
Would a single four-drive RAIDZ1 vdev, or two striped two-drive mirrors, give me the better performance, both short and long term? The expected space requirement for this pool is under 800 GB, which would eventually translate into ~50% used space with the RAIDZ1 option, and perhaps uncomfortably close to 80% used space with the striped-mirrors option.
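For reference, the two layouts I'm weighing would be created roughly like this (pool and device names are placeholders, capacities are raw before overhead):

    # Option A: single four-drive RAIDZ1 vdev, ~1.9 TB raw usable
    zpool create newpool raidz1 ada1 ada2 ada3 ada4

    # Option B: two two-drive mirrors striped together, ~1.3 TB raw usable
    zpool create newpool mirror ada1 ada2 mirror ada3 ada4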
I may also consider skipping ZFS completely for this specific pool, if I can find a decent alternative.
Are there other options for tuning I should look at?
I've tried modest tuning of "vfs.zfs.vdev.cache.bshift" and Samba's "aio read size", with little noticeable effect.
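Concretely, the changes were along these lines; the values shown are only examples of the range I tried, not recommendations:

    # /boot/loader.conf -- block-size shift for the per-vdev read cache (2^17 = 128 KiB)
    vfs.zfs.vdev.cache.bshift=17

    # smb4.conf, [global] section -- reads larger than this use Samba's async I/O path
    aio read size = 65536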