Folks,
I've been studying the trials and tribulations of ZFS for a solid week and think I have my game plan worked out for my new FreeNAS box. The one thing I wanted to run by folks with a lot more experience than me is the idea of using a common set of physical SSDs to provide L2ARC for multiple pools, and partitioning an Optane card to do the same for multiple pools' SLOGs. My thinking on the L2ARC is that a wider stripe of IOPS and bandwidth would be available to service bursts if I took, for example, a 50GB partition on each of 6 SSDs to build my L2ARC instead of dedicating a single 400GB SSD. The same group of 6 SSDs could then service multiple pools, with each pool getting a much more robust L2ARC, albeit with IOPS shared across pools. It would also keep my L2ARC sizes controlled and proportional to my system DRAM (96GB), since L2ARC headers consume ARC memory.
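Concretely, this is the kind of thing I had in mind. Device names (da1-da6) and pool names (tank, backup) are just placeholders for illustration; the partitions get GPT labels so device renumbering after a reboot can't silently swap cache devices between pools:

```shell
# Sketch only: da1..da6 stand in for the six SSDs, "tank" and "backup"
# stand in for two of the pools.
for disk in da1 da2 da3 da4 da5 da6; do
  gpart create -s gpt ${disk}
  # One 50G slice per pool on every SSD, labeled by pool and disk.
  gpart add -t freebsd-zfs -a 4k -s 50G -l l2arc-tank-${disk}   ${disk}
  gpart add -t freebsd-zfs -a 4k -s 50G -l l2arc-backup-${disk} ${disk}
done

# Each pool then gets a 6-way striped L2ARC built from its own partitions.
zpool add tank cache \
  gpt/l2arc-tank-da1 gpt/l2arc-tank-da2 gpt/l2arc-tank-da3 \
  gpt/l2arc-tank-da4 gpt/l2arc-tank-da5 gpt/l2arc-tank-da6
zpool add backup cache \
  gpt/l2arc-backup-da1 gpt/l2arc-backup-da2 gpt/l2arc-backup-da3 \
  gpt/l2arc-backup-da4 gpt/l2arc-backup-da5 gpt/l2arc-backup-da6
```

Cache vdevs are always striped and losing one is harmless (L2ARC is read-only cache), so there's no redundancy concern in sharing the disks this way.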
Along similar lines, the capacity and IOPS of the Optane card far exceed what any single pool in my environment would consume for SLOG, so I was thinking of overprovisioning it for improved lifespan and then carving the usable space into multiple partitions so it can serve as SLOG for multiple pools. The FreeNAS box will be attached via 4x 8Gb FC (target mode) and 2x 10Gb Ethernet.
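For the SLOG side, something like the following, where nvd0 and the 16G sizing are placeholder assumptions; the overprovisioning is done simply by never allocating the remainder of the card:

```shell
# Sketch only: nvd0 stands in for the Optane card. A SLOG only ever needs
# to hold a few seconds of sync writes, so small partitions are plenty.
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -a 4k -s 16G -l slog-tank   nvd0
gpart add -t freebsd-zfs -a 4k -s 16G -l slog-backup nvd0
# Everything past these partitions is deliberately left unallocated
# as overprovisioning headroom.

zpool add tank   log gpt/slog-tank
zpool add backup log gpt/slog-backup
```

One thing worth flagging against my own plan: a single card means every pool's SLOG shares one point of failure, and an unmirrored SLOG puts in-flight sync writes at risk if the card dies during a crash.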
At no point will I be mixing L2ARC and SLOG on common devices. The back-end spinning rust varies by pool and is a hodgepodge of vdev/pool strategies: some mirrored/striped for performance, some RAIDZ3 for data archival.
Does this seem like a reasonable approach, or does anyone see where I'm missing a critical piece of design logic?