The SSD pool won't benefit much from a small-file VDEV, but using it with the HDD pool has its merits.
A single VDEV can't be in two different pools.
You can set L2ARC as metadata-only. It's a different thing from how fusion pools work. It also generally needs at least 64GB of RAM to be beneficial rather than harm performance.
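For example, this is controlled by the secondarycache property, per pool or per dataset; a minimal sketch, with "tank" standing in for your pool name:

# Limit what the L2ARC caches for this pool to metadata only
# (valid values are all, none, metadata).
zfs set secondarycache=metadata tank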
L2ARC is a cache; it doesn't protect against corruption. That work is done at the block level.
You should read the following resources.
Introduction to ZFS (www.truenas.com)
Hardware Recommendations Guide (www.truenas.com)
The small-file vdev was already abandoned for the SSD pool after the earlier posts and is only being considered for the HDD pool. Thanks.
Yes, I agree an L2ARC alone does not help with corruption. But if an L2ARC were chosen instead of a special metadata/small-file vdev, the possibility of that special vdev on NVMe being corrupted (taking down the HDD pool with it) is eliminated, because the special vdev never existed in the first place.
Going back to my earlier thinking, not mentioned here yet: L2ARC did not impress me as a great solution. It doesn't focus on specific problem areas so much as on what was most recently used, meaning a large media file could start pushing metadata and small-file data out of the cache. A special metadata/small-file vdev directly attacks two significant problems, metadata and small files, but it is risky to implement, especially on "commercial" NVMe drives, because the main pool does not keep duplicate copies of the data on the special vdev, so corruption of either takes out both. With an L2ARC, per the earlier paragraph, I believe the original copies of the metadata and small files would remain in the HDD pool, lowering and maybe eliminating the chance that losing the L2ARC loses any data.
I have 64GB of RAM with a potential total of 128GB; L2ARC really seems to be for memory-strapped systems. If it appears that the media files are disrupting the caching of metadata and small files, the easiest way to create the effect of a metadata/small-file cache is to increase the RAM to 128GB before considering either an L2ARC or a special metadata/small-file vdev. The extra 64GB of RAM uses the same most-recently-used replacement and is faster than any L2ARC could ever be.
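Before spending anything, I assume I could verify whether that disruption is actually happening just by watching the ARC statistics (assuming the standard OpenZFS tools are present on my TrueNAS version):

# One-shot summary of ARC size, hit ratios, and metadata usage.
arc_summary
# Or sample ARC hits/misses every 5 seconds while media is streaming.
arcstat 5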
(zdb -LbbbA -U ...)
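For anyone repeating this, the full form of that command was along these lines; the cachefile path is the usual TrueNAS location on my system and "hddpool" is a stand-in for my actual pool name:

# -L skips leak detection, -bbb prints the detailed block-size histogram,
# -A relaxes assertion checking, -U points zdb at the pool cachefile.
zdb -LbbbA -U /data/zfs/zpool.cache hddpool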
The distribution of my file sizes in the first set of columns for the HDD pool indicates I have 1TB of data at the 32K block size and just short of 6TB at 128K, out of 8TB total. (As an estimate, I expect the 32K range could expand upward considerably with videos.) In the small-file range there are about 250K files each at 2K or less (0.5G) and at 4K (1G), another 400K at 8K, and 100K files (2G) at 16K. The total small-file space up to 16K is about 8GB, and the total in the last column is 8TB. Generations of backups, perhaps via the Asigra plugin, for 3 desktops (500GB) will be added. More videos will probably push the 32K usage up to match or exceed the 128K range, and those videos will also increase the small-file usage for video metadata.
The small files, even well beyond the cutoff level I considered, total only 8GB; 64GB of RAM serving four users will probably not evict that data often.
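For reference, if I did go the special vdev route after all, the small-file cutoff is a per-dataset property; a sketch using a placeholder dataset name and the 16K level suggested by my histogram:

# Route blocks of 16K or smaller to the special vdev for this dataset;
# metadata goes to the special vdev regardless of this setting.
zfs set special_small_blocks=16K hddpool/data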
The Hardware Recommendations Guide told me, if I read it correctly, that none of the hardware I found was appropriate -- no acceptable NVMe drives.
The Quick Hardware Recommendations Guide said to press an orange button that I never found.
In the end, let's consider two questions:
Can I improve the jails and Jellyfin metadata in what I have called fpool with, say, a mirrored pair of two "commercial" NVMe drives, given that the system is attached to a UPS for orderly shutdown, or is there still a corruption risk for that data? (A sketch of what I mean follows below.) If there is any risk, then just the following:
Given the other concerns, perhaps maxing out the memory, when required, is the easiest path, and then there are no SSDs with the potential to corrupt data.
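To make question one concrete, the kind of change I have in mind is adding the two NVMe drives as a mirrored special vdev on fpool; a rough sketch only, with placeholder pool and device names, and whether the mirror plus UPS actually removes the corruption risk is exactly what I am asking:

# Attach a mirrored special (metadata) vdev to the existing pool.
# Losing this vdev still means losing the pool, hence the mirror.
zpool add fpool special mirror nvd0 nvd1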
I may just be too old-fashioned, trying to spread disk access across devices to eliminate bottlenecks, when the HDD pool already does some of that.