CORE Hardware Guide
Describes the hardware specifications and system component recommendations for custom TrueNAS CORE deployment.
If you have any suggestions for this guide, then please comment here.
I feel "dirty" making this suggestion as I am not the person to write it...)
A SLOG device need not be large as it only needs to service five seconds of data writes delivered by the network or a local application. A high-endurance, low-latency device between 8 GB and 32 GB in size is adequate for most modern networks, and multiple devices can be striped or mirrored for either performance or redundancy. Paying attention to the published endurance claims of the device is imperative, since a SLOG will be the funnel point for a majority of the writes made to the system.
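To put rough numbers on that (illustrative figures for a single 10GbE link at line rate, not anything taken from the guide itself):

# 10 Gbit/s is roughly 1.25 GB/s; five seconds of writes at line rate:
echo $(( 10 * 5 / 8 ))       # => 6 (GB of in-flight sync data - comfortably inside an 8-32 GB SLOG)
echo $(( 2 * 10 * 5 / 8 ))   # => 12 (GB for dual 10GbE, still inside a 16-32 GB device)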
It is also vital that a SLOG device has power protection. The purpose of the ZFS intent log (ZIL), and thus the SLOG, is to keep sync writes safe in the event of a crash or power failure. If the SLOG isn’t power protected and its data is lost after a power failure, it defeats the purpose of using a SLOG in the first place! Check the manufacturer’s specifications to ensure the SLOG device is power safe or has power loss/failure protection.
The amount of dirty data ZFS will buffer before forcing writes out is capped by the tunable
sysctl vfs.zfs.dirty_data_max
which defaults to either 1/10th of your physical RAM or 4GB, whichever is smaller. So without adjustment, no one will use more than 4GB of their SLOG device - although the GUI might still be creating a large swap partition on log vdevs, which is a separate issue (I believe there is a bug open in Jira for it). Adjustments do exist, but are likely outside the scope of a general "hardware guide."

Some wording about the general relationship between SSD size and performance would be beneficial as well - as size goes up, speed tends to as well, but finding a device that's purpose-built for a write-intensive environment is best. A small 100GB "write intensive" SSD will likely crush a 512GB "general purpose" or "consumer" one.
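For anyone who wants to see where their own system sits, the cap and its ceiling are visible from the shell on a FreeBSD-based CORE box (sysctl names as I understand them - verify on your own install):

sysctl vfs.zfs.dirty_data_max       # current cap, in bytes (min of RAM/10 and 4GB by default)
sysctl vfs.zfs.dirty_data_max_max   # hard ceiling; raising the cap past this needs a boot-time tunable
sysctl hw.physmem                   # physical RAM, to compare against the 1/10th figure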
"benchmark" results as well as real-world SLOG speeds.diskinfo -wS
Building the diskinfo -wS benchmark into the FreeNAS webUI might be beneficial; call it a "simple SLOG benchmark," give the red-screen warning that it will destroy the data on the drive, and provide the results from 4K-128K in a bar or line chart.

Keep in mind that for every data block in the L2ARC, the primary ARC needs an 88 byte entry; this can cause the ARC to fill up unexpectedly and actually reduce performance in a poorly-designed system. For example, a 480GB L2ARC filled with 4KiB blocks will need more than 10GiB of metadata storage in the primary ARC!
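The arithmetic behind that figure, for anyone who wants to adapt it to a different device or block size (it assumes the 88-byte header quoted above and a 480 GiB device):

# 480 GiB of 4 KiB blocks = ~125.8 million L2ARC headers at 88 bytes each
echo $(( 480 * 1024 * 1024 * 1024 / 4096 * 88 ))                          # => ~11.1 billion bytes
echo $(( 480 * 1024 * 1024 * 1024 / 4096 * 88 / 1024 / 1024 / 1024 ))     # => 10 (GiB of primary ARC)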
I just realised that there is nothing here on backup. While not technically part of a "Hardware Guide," there needs to be some type of reference around the need to back up outside of FreeNAS. "RAID isn't backup!" :) Maybe a guide on backup (I feel "dirty" making this suggestion as I am not the person to write it...)
I don't want to discount the need for a backup - it is one of the first things on my list - but is that part of a "Hardware Guide" document?
A "User Guide" or "Administrator Guide" might be a better place for it, although some discussion of the hardware involved would belong in the hardware guide.
Don't the L2ARC headers also benefit from compressed ARC? I swear I've seen lower usage than the quoted 88 bytes/record.

That doesn't seem right. The blocks exist in ARC as they do on disk, which is what makes the whole thing viable in the first place, with no additional compression. Might you be looking at memory compression by the OS (which is fairly popular these days)?
No, I mean the behavior of ZFS to compress its own metadata (including the L2ARC headers) with LZ4, which would further reduce the memory overhead.
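One way to settle it is to read the counters off a live system rather than guess; these are the arcstats sysctls I'd expect on a FreeBSD/CORE box, so treat the exact names as an assumption:

sysctl kstat.zfs.misc.arcstats.l2_hdr_size   # bytes of primary ARC consumed by L2ARC headers
sysctl kstat.zfs.misc.arcstats.l2_size       # logical size of data cached in L2ARC
sysctl kstat.zfs.misc.arcstats.l2_asize      # allocated size on the L2ARC device

Dividing l2_hdr_size by the approximate number of cached blocks gives the effective per-record overhead actually in play.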
But for large L2ARC sizes you should consider the option of just building an all-flash pool.