GeorgePatches
Dabbler
- Joined: Dec 10, 2019
- Messages: 39
So this is a work problem that has me a little baffled. We use a TrueNAS X10 with 10 drives in a RaidZ1, with SMB as the storage target for our Veeam backups. We recently updated from Veeam v10 to v11, and our active full backup times have increased ~30% while our incremental times have increased ~200%. Incrementals now take nearly an entire work day, whereas before they started around midnight and ran until about 7am.

I'm working with Veeam support on the issue, and one of the things they had me do is run Microsoft's diskspd.exe disk benchmarking tool (instructions here: https://www.veeam.com/kb2014; command-line parameters: https://github.com/Microsoft/diskspd/wiki/Command-line-and-parameters). With a single thread I see around 2Gbit transfer speeds, and with 2 threads I max out the single-threaded SMB service around 3Gbit. However, when I use the -Sh flag (as instructed by Veeam) to disable caching, performance plummets to 160Mbit. This seems to explain our performance issue, as one of the "features" of Veeam v11 is that it disables all the OS-level caching so that fancy storage arrays can better use their own caching schemes.

My question to everyone is: why does disabling caching cause SMB performance to drop off such a cliff? I get that not using caching could decrease performance, but a 90% hit seems excessive.
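One way to sanity-check that 160Mbit number: with caching disabled, each write has to complete a full round trip (SMB request, ZFS commit, acknowledgement) before the next one is issued, so throughput becomes latency-bound rather than bandwidth-bound. A quick back-of-the-envelope calculation (the 512 KiB block size is an assumption based on Veeam's KB2014 diskspd parameters; adjust to whatever your actual run used):

```python
# Rough calc: what per-I/O latency would explain ~160 Mbit/s of
# uncached sequential writes, assuming one outstanding I/O at a time?

BLOCK_BYTES = 512 * 1024        # assumed diskspd block size (512 KiB)
throughput_bits = 160e6         # observed uncached throughput, 160 Mbit/s
throughput_bytes = throughput_bits / 8

ios_per_sec = throughput_bytes / BLOCK_BYTES
latency_ms = 1000 / ios_per_sec

print(f"~{ios_per_sec:.0f} IOs/s, ~{latency_ms:.1f} ms per I/O")
```

That works out to roughly 26 ms per I/O, which is in the ballpark of a synchronous write landing on spinning disks plus a network round trip, rather than anything being "broken" outright. If that reading is right, the cached runs were just measuring how fast the client could fill RAM.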
The pool again is a 10 HDD RaidZ1, around 60% full, running on a TrueNAS X10. Anything I should double check? I'm stumped right now.