Hi All,
Long-time lurker here, running the following configuration:
Dell R720 with 256GB RAM
FreeNAS-11.3-U4.1
Dual controller 8 Gbps link to enclosure containing pool disks
I am seeing an issue when I migrate virtual machine storage off of this platform and onto another (presented via iSCSI over 10 Gbps with jumbo frames end-to-end): the device drops a significant amount of ARC/L2ARC cache data, including cached data for neighboring VMs still on the storage. What is more curious is that this occurs at the end of the "copy" operation, on the final delete of the source VM files. Read latency increases significantly at that point for all virtual machines running on this pool as the common cache data is flushed. Sync writes are enabled. The pool is around 60% used and presented to the hypervisor via iSCSI.
My pool is configured with the following:
17x mirrored vdevs, each containing 2x 4TB disks
2x 780GB cache SSDs geared towards read-intensive workloads
2x ZeusRAM devices configured as SLOG
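For reference, the layout above is roughly equivalent to the following (device names are placeholders, not my actual disks):

```shell
# Sketch of the pool topology -- 17 two-way mirror vdevs of 4TB disks,
# two read-optimized SSDs as L2ARC, and a mirrored ZeusRAM SLOG.
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3
  # ...repeated through the 17th mirror pair
zpool add tank cache nvd0 nvd1
zpool add tank log mirror nvd2 nvd3
```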
Detailed below -- each dip corresponds with a virtual machine being migrated off of the storage platform, and occurs immediately after the delete operation once the storage migration is completed.
Thoughts or ideas here? I haven't had throughput issues, but the latency is a killer when the ARC/L2ARC take these hits. It appears that far more cached data is released on a delete than I am used to seeing on Oracle ZFS.
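In case the raw numbers help: here is a minimal sketch of how the ARC/L2ARC counters could be captured around one of these migrations (sysctl names are the FreeBSD kstat path on FreeNAS 11.3; the exact timing of the snapshots is up to you):

```shell
# Snapshot ARC/L2ARC size and hit counters before the migration
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.l2_size \
       kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses > arc_before.txt

# ...perform the storage migration and the final source-VM delete...

# Snapshot again immediately after the delete completes
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.l2_size \
       kstat.zfs.misc.arcstats.hits \
       kstat.zfs.misc.arcstats.misses > arc_after.txt

# Compare to quantify how much cached data was evicted by the delete
diff arc_before.txt arc_after.txt
```

Comparing the before/after `arcstats.size` and `l2_size` values should show whether the latency spike lines up with a large one-shot eviction.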