Hello,
I am still trying to build a FreeNAS box that will ultimately serve as the datastore for an ESXi cluster. Since earlier attempts did not yield the desired/expected performance, this time I am taking it slow and trying to establish solid performance at each layer. Unfortunately, I hit a snag right at the beginning: pool performance.
Now, I realize my use case is special, since I am looking for good write performance at 1 job, QD1, and that is a rarely considered workload.
[Actually, I assume that use case is the most common one in homelabs, but most home users have neither my expectation level nor the will to throw a ton of hardware at it. And the enterprise world never looks into this or provides numbers for it, since it is not the typical business use case.]
Now, the question I am looking to get answered is based on the "general knowledge" (plus articles like https://www.ixsystems.com/blog/zfs-pool-performance-1/) that a pool's write performance increases with the number of vdevs in the pool. So if a single vdev is capable of 300 MB/s [streaming writes at, let's say, 128K], a second vdev [stripe, second mirror pair, or second RAID-Z vdev] should increase that to [theoretically] 600 MB/s. Allowing for some overhead, my expectation would be maybe ~500 MB/s.
The question now is: how would one expect this to scale? Realistically, I'd assume diminishing returns with each new vdev due to increased overhead, so that at some point it's not worth adding further vdevs - the exact number of vdevs probably depends on the type of drive being used.
(Of course, it will also depend on the other hardware in the box, especially CPU single-thread performance, drive attachment options [SATA, SAS, NVMe], etc.)
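To make the expectation above concrete, here is a minimal sketch of a scaling model with diminishing returns. The 300 MB/s baseline and the 0.85 per-vdev efficiency factor are purely illustrative assumptions chosen to match the ~500 MB/s two-vdev guess, not measured values:

```python
# Hypothetical scaling model: each vdev adds its full streaming write
# bandwidth, but overall efficiency is assumed to drop by a fixed factor
# for every vdev beyond the first. Both numbers below are assumptions
# for illustration, not benchmark results.

def expected_throughput(vdevs: int,
                        per_vdev_mbps: float = 300.0,
                        efficiency_per_extra_vdev: float = 0.85) -> float:
    """Modeled pool write throughput in MB/s for n identical vdevs."""
    ideal = vdevs * per_vdev_mbps
    # Compound the assumed overhead once per vdev beyond the first.
    return ideal * efficiency_per_extra_vdev ** (vdevs - 1)

if __name__ == "__main__":
    for n in (1, 2, 4, 8):
        print(f"{n} vdev(s): ~{expected_throughput(n):.0f} MB/s")
```

With these made-up parameters the model gives ~510 MB/s at two vdevs and a shrinking gain for each further vdev, which is the kind of curve I would like to compare my actual fio results against.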
It's not scaling well in my tests.
Looking forward to hearing feedback; I will provide test results later (after we have come to a common understanding of expectations ;))
edit: Changed the thread title since we didn't do any expectation management at all ;)