So here's an odd one for you guys. I'm playing around with a Server 2019 VM running on ESXi 6.7. The storage is provided via iSCSI in the usual AIO (all-in-one) manner. The zvol is sparse/thin, and the VM itself uses a thin-provisioned VMDK. This is a new/pristine VM on a new/pristine VMFS 6 datastore residing on a new/pristine zvol in a new/pristine zpool. No other processes or VMs are present.
When I run CrystalDiskMark with a `zpool iostat` loop running in the background, I see capacity alloc suddenly increase when the benchmark run begins. No surprise there. What amazes me is watching capacity alloc slowly drift back down to its original number after the benchmark finishes and CrystalDiskMark cleans up after itself. The only explanation I can think of is that Server 2019 is pushing UNMAP/TRIM commands down to storage in the background after a file has been deleted.
Thoughts?
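For anyone who wants to confirm (or rule out) the UNMAP path end to end, here's a rough sketch of the checks I'd run at each layer. Note that `naa.xxxx`, `DatastoreName`, and the pool name `tank` are placeholders; substitute your own device ID, datastore label, and pool.

```shell
# On the Windows Server 2019 guest: check whether delete notifications
# (TRIM/UNMAP) are enabled for NTFS. A result of 0 means enabled
# (the default on Server 2019).
fsutil behavior query DisableDeleteNotify

# On the ESXi host: confirm the iSCSI device advertises UNMAP support
# (the "Delete Status" field under VAAI). Replace naa.xxxx with your
# device identifier.
esxcli storage core device vaai status get -d naa.xxxx

# VMFS 6 datastores can issue automatic space reclamation in the
# background; check the reclaim settings for the datastore:
esxcli storage vmfs reclaim config get -l DatastoreName

# On the storage side: watch allocation drift in (near) real time
# while the guest deletes files. Replace tank with your pool name.
zpool iostat -v tank 5
```

If `DisableDeleteNotify` is 0, VAAI shows Delete supported, and VMFS reclaim is enabled, then the drift you're seeing is exactly the guest's TRIMs being translated into SCSI UNMAPs and punching holes back out of the sparse zvol.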