Guys, in our TESTING lab: 28GB physical RAM (soon to be expanded to 64GB), 2x dual-port 10Gbit Brocade BR1020 NICs in Ethernet mode, each interface with an IP on a different TCP/IP subnet. iSCSI to five ESXi 6.0 hosts, load-balanced with MPIO, FreeNAS 9.3.1 fully updated. Six 3TB Hitachi 7200rpm drives (three RAID1 mirrors combined into a single volume because of VAAI; yes, I know this is not best for redundancy; default LZ4 compression, atime off, no dedup). No NFS at all. Asus X99-S motherboard, i7-5820K. At the moment I'm booting from SSD, but I'll change that to mirrored USB sticks.
I have two SSDs, a 256GB Samsung 850 Pro and a 256GB Crucial BX100, which I'd like to use as L2ARC. The Samsung punches 90,000+ random 4kB read IOPS and, with 25% overprovisioning, 40,000 random 4kB write IOPS. I'm not worried about endurance. I know this is not PCIe NVMe performance, but that level of performance would be excellent for our usage.
Soon we will be running virtual machines with a primary focus on VMware infrastructure things - this is our VMware lab, so a random I/O pattern. No SQL databases, no big data, no movies, nothing sequential. I don't expect it to be too write-intensive, just regular stuff. VMs will be backed up to a 6TB disk in an external box every night, so I'm not too worried about losing the pool - if it happens, yeah, it happens. No customers, no SLAs/contracts. A break-it-and-fix-it lab.
My intention is to accelerate the HDD layer, which obviously is sssslow by default. I have six disks, each providing 100 random IOPS at best. Our working-set size is unknown at this moment because we are just building this, but it is definitely much more than the 64GB of RAM I will have. That drives me to L2ARC.
My humble question for now is L2ARC sizing. I believe I saw the "5x RAM = L2ARC max" rule, so in my case that would be around 320GB. Can I break that rule? My idea is: let's sacrifice SOME more RAM for a much bigger SSD caching tier (2x 256GB if possible). I know RAM is the primary performance factor, but our stuff won't fit in it, so I'd be more than happy to use a 250GB SSD read cache, possibly both SSDs, so 2x 250GB. I believe I can use two physical SSDs as L2ARC for the HDD tier - can I? [RTFM, ZFS Primer: yes I can; the question now is the capacity, which would be 500GB]
Calculation: iSCSI, 4kB blocks, 180 bytes of RAM used per block (saw it somewhere here on the forum). If I use 250GB as L2ARC, I'd consume approx. 11.25GB of RAM for the L2ARC index. Is this calculation correct? With 64GB RAM, I will gladly kill 11.25GB of RAM to have a 250GB read-accelerated SSD tier.
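For anyone wanting to redo the math, here is the calculation as a quick sketch. The ~180 bytes is a forum rule of thumb for the per-block ARC header, not an exact figure (the real overhead varies by ZFS version), and it uses decimal GB/kB as in my numbers above:

```python
HEADER_BYTES = 180      # assumed per-block L2ARC header size (forum rule of thumb)
BLOCK_BYTES = 4 * 1000  # 4 kB iSCSI block size, decimal

def l2arc_index_ram_gb(l2arc_gb: float) -> float:
    """RAM consumed by the L2ARC index for a device of l2arc_gb GB."""
    blocks = l2arc_gb * 1e9 / BLOCK_BYTES
    return blocks * HEADER_BYTES / 1e9

print(l2arc_index_ram_gb(250))  # -> 11.25 (GB of RAM per 250 GB L2ARC device)
```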
I would HAPPILY consume an additional 11.25GB of RAM if I can add a second 250GB SSD read cache and double the total L2ARC capacity from 250GB to 500GB (2x 250GB), so we will have to go to the spinning disks less and less. That might be a huge benefit for us performance-wise. If my numbers are right, out of 64GB RAM, approx. 2GB is used by the system and 2x 11.25GB by the L2ARC index, leaving about 39GB of RAM for ARC.
Scenario       | Total RAM | L2ARC size | Expected max. ARC size
no L2ARC       | 64GB      | 0GB        | ~62GB
1x 250GB L2ARC | 64GB      | ~250GB     | ~50GB
2x 250GB L2ARC | 64GB      | ~500GB     | ~39GB  <- my preferred variant: less ARC, but as much SSD L2ARC as I possibly can get.
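The table numbers can be sanity-checked with a short sketch (assumptions: ~2GB reserved for the system, the same ~180 bytes per 4kB block for the index, and the rest of RAM available to ARC):

```python
SYSTEM_GB = 2.0  # assumed OS/system overhead
TOTAL_GB = 64.0
# Index RAM per 250 GB L2ARC device: blocks * 180 B per block
INDEX_GB_PER_DEVICE = (250e9 / 4000) * 180 / 1e9  # ~11.25 GB

for devices in (0, 1, 2):
    l2arc_gb = devices * 250
    arc_gb = TOTAL_GB - SYSTEM_GB - devices * INDEX_GB_PER_DEVICE
    print(f"{devices}x 250GB L2ARC -> {l2arc_gb:>3} GB cache, ~{arc_gb:.1f} GB left for ARC")
```

That reproduces the ~62 / ~50 / ~39GB column above (62.0, 50.75, and 39.5GB, respectively).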
Be gentle with me, please; I'm a FreeNAS greenhorn. Are my expectations totally off?