Unable to Create Cache Vdev

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
Unable to Create Cache Vdev

I would also like to assign the second NVMe as a single-device pool for apps/VMs. Is this possible?

TIA
 

Attachments

  • cache.JPG
Joined
Oct 22, 2019
Messages
3,641
You're trying to create a pool named "Cache", without adding any required data vdevs...?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
You're running a ~900GB SSD as an L2ARC device. You have previously indicated that you are running TrueNAS SCALE as a VM, if I recall. Are you aware that you should not exceed a 10:1 L2ARC-to-ARC ratio? This means you should have at least 90GB of ARC (probably 96GB) and since you're running SCALE that means you should have at least 192GB of RAM dedicated to the VM. If you're not doing this, you are either wasting an overly large SSD or stressing out the ZFS ARC (or quite possibly both).
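As a rough sketch of the arithmetic behind that rule of thumb (the 10:1 ratio and the roughly-half-of-RAM ARC on SCALE are taken from the post above; the function name and example sizes are mine):

```python
# Back-of-the-envelope sizing from the 10:1 L2ARC-to-ARC rule of thumb,
# assuming ARC is roughly half of system RAM on SCALE (as described above).
def min_ram_for_l2arc(l2arc_gb: float, ratio: float = 10.0, arc_fraction: float = 0.5) -> float:
    """Minimum system RAM (GB) suggested for a given L2ARC size."""
    min_arc_gb = l2arc_gb / ratio        # e.g. 960 / 10 = 96 GB of ARC
    return min_arc_gb / arc_fraction     # e.g. 96 / 0.5 = 192 GB of RAM

if __name__ == "__main__":
    for l2arc_gb in (240, 480, 960):
        print(f"{l2arc_gb} GB L2ARC -> at least {min_ram_for_l2arc(l2arc_gb):.0f} GB RAM")
```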
 

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
You're running a ~900GB SSD as an L2ARC device. You have previously indicated that you are running TrueNAS SCALE as a VM, if I recall. Are you aware that you should not exceed a 10:1 L2ARC-to-ARC ratio? This means you should have at least 90GB of ARC (probably 96GB) and since you're running SCALE that means you should have at least 192GB of RAM dedicated to the VM. If you're not doing this, you are either wasting an overly large SSD or stressing out the ZFS ARC (or quite possibly both).
Hello,

Sorry if the config I mentioned before was not clear. I am running TrueNAS SCALE on bare metal. I initially assumed that 256GB of memory was adequate for caching, but decided to add a 960GB NVMe as a cache drive anyway. The second 960GB NVMe I will use for apps/VMs. Please let me know if my configuration is not optimal. I just did a file transfer and it didn't seem to utilize the NVMe cache drive.

Lenovo P720
256GB RAM
2 x 256GB NVMe RAID1 (boot)
4 x 10TB RAIDZ (1 x 960GB NVMe cache)
1 x 960GB NVMe for apps/VMs

I don't plan on running many VMs on this machine, though maybe later. The main use will be network storage.
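For what it's worth, one way to check whether the cache drive is actually being used after a transfer is to read the ARC statistics the kernel exposes. A minimal sketch, assuming ZFS on Linux / SCALE (where the arcstats kstat lives at the path below); the script is only an illustration, not a TrueNAS tool:

```python
# Minimal sketch: read the L2ARC counters from the kernel's arcstats kstat
# (ZFS on Linux / SCALE) to see whether anything is landing in, or being
# served from, the cache device.
def read_arcstats(path: str = "/proc/spl/kstat/zfs/arcstats") -> dict:
    stats = {}
    with open(path) as f:
        for line in f.readlines()[2:]:       # skip the two kstat header lines
            name, _kind, value = line.split()
            stats[name] = int(value)
    return stats

if __name__ == "__main__":
    s = read_arcstats()
    gib = 1024 ** 3
    print(f"L2ARC size:   {s['l2_size'] / gib:.1f} GiB")
    print(f"L2ARC hits:   {s['l2_hits']}")
    print(f"L2ARC misses: {s['l2_misses']}")
```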
 

Attachments

  • cache3.JPG

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
initially assumed having 256GB of memory was adequate for caching

That gives you 128GB of ARC. At that size, you're fine. The concern was that you could be sabotaging yourself. The ARC works by keeping track of statistics such as "Most Frequently Used" and "Most Recently Used". If you had, for example, just 100 megabytes of ARC, which would only be enough ARC space to hold very few blocks of data, the mere act of accessing a moderately full directory would likely overrun the ARC with read requests for all the metadata blocks. Those would immediately get knocked out when you tried to access a file in there, and then when you accessed another directory, THOSE would quickly get knocked out too. This is referred to as thrashing. The MFU/MRU mechanism REQUIRES that blocks stick around for a while so that ZFS can observe which ones gain multiple access requests. If that fails to happen, ZFS cannot make good decisions about which ones to shove out to the L2ARC SSD, and essentially pushes random crap out instead.

The goal, ideally, is to cache blocks (such as directories you frequently visit) and hold them in ARC. So you might have some blocks that you've accessed 50 times, others you've only accessed 10 times, some that have only been accessed 2-5 times, and then the ones that have been accessed only once. The ones we want evicted to L2ARC are the ones that have only been accessed 2-5 times. The 50x and 10x ones are really doing something significant for you, while the ones that have only been accessed once are doing nothing for you. See how that works?
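A toy sketch of that idea, purely to illustrate the shape of the decision; this is not ZFS's actual ARC/L2ARC code, and the thresholds are arbitrary:

```python
# Toy illustration of the MFU/MRU reasoning above -- NOT ZFS's real eviction
# logic. Track per-block access counts; very hot blocks stay in "ARC", warm
# blocks are the worthwhile "L2ARC" candidates, one-hit blocks aren't worth caching.
from collections import Counter

accesses: Counter = Counter()

def record_access(block_id: str) -> None:
    accesses[block_id] += 1

def classify(hot_threshold: int = 10) -> dict:
    keep, demote, skip = [], [], []
    for block, count in accesses.items():
        if count >= hot_threshold:
            keep.append(block)      # accessed many times: keep in RAM
        elif count >= 2:
            demote.append(block)    # warm: a good candidate for the L2ARC SSD
        else:
            skip.append(block)      # touched once: caching it buys nothing
    return {"arc": keep, "l2arc": demote, "uncached": skip}
```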
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
YMMV when it comes to the usefulness of an L2ARC. I have 128 gigs of ARC and 224 gigs of L2ARC, and with my workload the mean L2ARC hit ratio is only about 3%. My workload just doesn't benefit from having it.
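For reference, that hit ratio can be computed directly from the arcstats counters; a one-off sketch assuming ZFS on Linux:

```python
# One-off sketch: compute the L2ARC hit ratio from the kernel's arcstats
# counters (ZFS on Linux): l2_hits / (l2_hits + l2_misses).
with open("/proc/spl/kstat/zfs/arcstats") as f:
    stats = {fields[0]: int(fields[2])
             for fields in (line.split() for line in f.readlines()[2:])}

total = stats["l2_hits"] + stats["l2_misses"]
ratio = stats["l2_hits"] / total if total else 0.0
print(f"L2ARC hit ratio: {ratio:.1%}")
```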

 

MotorN

Dabbler
Joined
Mar 30, 2021
Messages
20
I deleted my storage and recreated it: a RAIDZ2 with 4 HDDs and no cache, plus a RAID1 mirror of 2 x 960GB NVMe for VMs/apps. I entered the min/max ARC commands in an init script. Speeds have increased a little, but not as much as I had hoped. I do get bursts of 550-600MB/s, but it still sits mostly around 300MB/s, which is close to my HDD speeds. I opted out of setting the record size to 1M, going with 512K instead; I'm not sure how much of an impact that has.
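For reference, on ZFS on Linux the "min/max ARC commands" typically boil down to writing byte values to the zfs_arc_max / zfs_arc_min module parameters; a minimal sketch of such an init-script step, with placeholder values rather than a recommendation:

```python
# Minimal sketch of an init-script style ARC tuning step on ZFS on Linux:
# write byte values to the zfs_arc_max / zfs_arc_min module parameters.
# The 200 GiB / 16 GiB figures are placeholders, not a recommendation.
GIB = 1024 ** 3

def set_arc_limits(arc_max_bytes: int, arc_min_bytes: int) -> None:
    for param, value in (("zfs_arc_max", arc_max_bytes),
                         ("zfs_arc_min", arc_min_bytes)):
        with open(f"/sys/module/zfs/parameters/{param}", "w") as f:
            f.write(str(value))

if __name__ == "__main__":
    set_arc_limits(arc_max_bytes=200 * GIB, arc_min_bytes=16 * GIB)
```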

I'm not going to bother installing more RAM. An L2ARC is something I can consider adding in the future.
 

Attachments

  • cache4.JPG