Virtualization pool (mirror)

CaraquetFlyer

Dabbler
Joined
Feb 6, 2019
Messages
13
Hello,

We run our pool in a mirror configuration since it will be used as a datastore for ESXi, and we'd like to know whether ZIL (SLOG) or L2ARC devices are required for our setup.

Dell R720
LSI 9207-8i
16 x 300GB SSD - Pool in mirror (disk 0-1, disk 2-3, disk 4-5, disk 6-7, disk 8-9, disk 10-11, disk 12-13, disk 14-15)
Connected to ESXi as iSCSI

Thanks for any input!

JB
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Will you be using iSCSI? If so, your write speed will be low without a SLOG. Make sure you get enterprise NVMe drives and mirror two of them.
Max out system RAM (256GB wouldn't be overkill for a production system) before looking into an L2ARC.
 

CaraquetFlyer

Dabbler
Joined
Feb 6, 2019
Messages
13
Thanks for the answers Jessep!

Correct, the datastore will be connected via iSCSI on a dedicated 10G NIC from FreeNAS to ESXi. For memory, we have 192GB ECC installed, so we can hold off on L2ARC?

We are looking at an Intel SSD DC P3700 Series as a SLOG. Was just waiting for more input.

I've run a few tests with zilstat, so I need to make sure I understand the numbers. We will be running around 60 VMs.
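For reference, this is the kind of run I've been doing. zilstat ships with FreeNAS and takes an interval and a count (pool name here is just an example):

```shell
# Sample ZIL write traffic once per second for 60 seconds
# from the FreeNAS shell while the VMs are under load
zilstat 1 60
```

The ops/sec and bytes/sec columns during peak load are what I'm trying to interpret.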

JB
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hello there.

L2ARC is intended to be "bigger than RAM, but faster than your pool" - if you're using SSDs in the pool, the value of L2ARC is significantly reduced, since you don't have the worries of spindle contention or coping with slow spinning disks as vdevs.

The P3700 is a very good SLOG device, second only to the Optane drives in the prosumer space. I would suggest using the Intel SSD Data Center Tool (isdct) to change the sector size to a native 4KB, and limiting the presented free space to well under the full 400GB, as the default tunables limit SLOG size to 4GB per pool.
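The underprovisioning step can be done with a small partition rather than using the whole device. A sketch, assuming the P3700 shows up as nvd0 and the pool is named tank (substitute your own device and pool names):

```shell
# Create a GPT table and a small, aligned partition for the SLOG,
# leaving the rest of the 400GB unallocated as overprovisioned space
gpart create -s gpt nvd0
gpart add -t freebsd-zfs -s 16G -a 1m nvd0

# Attach the partition to the pool as a log device
zpool add tank log nvd0p1
```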

https://www.intel.com/content/www/u...6238/memory-and-storage/data-center-ssds.html

Additional notes:

Since you intend to use iSCSI, you will have to manually set sync=always on the ZVOLs being presented.
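From the shell, that looks like the following, assuming the pool is tank and the ZVOL backing the iSCSI extent is named vmstore (use your own dataset names):

```shell
# Force synchronous writes on the ZVOL presented over iSCSI
zfs set sync=always tank/vmstore

# Confirm the property took effect
zfs get sync tank/vmstore
```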

Consider setting the following advanced tunables at the system level, which benefit all-flash systems (don't set these if there are spinning disks anywhere in a pool!)
vfs.zfs.metaslab.lba_weighting_enabled: 0 (Disables LBA weighting, all LBAs on NAND are equally fast)
vfs.zfs.metaslab.fragmentation_factor_enabled: 0 (Disables the preference for less fragmented metaslabs; dirty pages are the problem on NAND, not fragmentation)
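To make these persistent, add them under System -> Tunables (type "sysctl") in the FreeNAS UI; to try them live from the shell first:

```shell
# All-flash pools only: disable LBA weighting and the
# fragmentation preference in the metaslab allocator
sysctl vfs.zfs.metaslab.lba_weighting_enabled=0
sysctl vfs.zfs.metaslab.fragmentation_factor_enabled=0
```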
 