Howto / docu for myself - setting up system for write performance

flashdrive · Patron · Joined: Apr 2, 2021 · Messages: 264
Reminder: in the end TrueNAS is aimed at enterprise usage - no standby, no spinning down disks. Use another OS for pure home usage.


recordsize = 1M (maximum)
compression = LZ4 (the default; it aborts the attempt early if a block does not compress well enough to be worth it)

Tested: turning compression off (together with sync writes = off) gave no performance gain - leave compression on!
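For reference, these dataset settings can be applied from the shell; the pool/dataset name "tank/data" below is a placeholder - substitute your own:

```shell
# Set the maximum recordsize (1M) and LZ4 compression (the default).
# "tank/data" is a placeholder pool/dataset name.
zfs set recordsize=1M tank/data
zfs set compression=lz4 tank/data

# Verify the current values:
zfs get recordsize,compression tank/data
```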

in detail
RAM Caching and Sync Writes

In the event you do not have a SLOG device providing a dedicated ZIL for your zpool, and you have a substantial amount of memory, you can disable sync writes on the pool. This drastically speeds up write operations, as they are buffered in RAM.


Disabling sync on your zpool, dataset, or zvol tells the client application that all writes have been completed and committed to disk (HDD or SSD) before they actually have been. This allows the system to cache writes in system memory.


In the event of a power loss, crash, or freeze, this data will be lost and/or possibly result in corruption.


You would only want to do this if you need fast storage where data loss is acceptable (such as video editing, a VDI clone desktop pool, etc.).


Utilizing a SLOG for the ZIL is much better (and safer) than this method; however, I still wanted to include it for informational purposes, as it does apply to some use cases.
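The sync behaviour is controlled per pool, dataset, or zvol via the `sync` property; a sketch (the dataset name "tank/scratch" is a placeholder):

```shell
# WARNING: sync=disabled acknowledges writes before they reach stable
# storage - data in flight is lost on power failure, crash, or freeze.
zfs set sync=disabled tank/scratch

# Safer default: honour the application's sync requests (sync=standard).
zfs set sync=standard tank/scratch

# Check the current setting:
zfs get sync tank/scratch
```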


ashift=12 = 4K = 4096-byte sectors

ashift explanation:


GUI usage for editing:

12 / 4K should be default as per


check with terminal

zdb -U /data/zfs/zpool.cache shows all pools except the boot pool.
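To confirm the ashift value of each vdev, the zdb output can be filtered (the grep filter is an assumption about the output format, which prints lines like "ashift: 12"):

```shell
# List all pools from the cache file and filter for ashift;
# ashift: 12 means 2^12 = 4096-byte sectors.
zdb -U /data/zfs/zpool.cache | grep ashift
```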



atime = off (disables access-time updates)
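Applying this from the shell is straightforward; "tank" is a placeholder pool name:

```shell
# Disable access-time updates (avoids a metadata write on every read).
zfs set atime=off tank

# Verify (child datasets inherit the value unless overridden):
zfs get atime tank
```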

Jumbo Frames:
  • 9000 or 9014 for the HP NIC - open question, see forum
  • check switches
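A sketch for testing jumbo frames on TrueNAS CORE (FreeBSD); the interface name `igb0` and the target IP are placeholders:

```shell
# Set the MTU on the NIC (persist it via the GUI under Network > Interfaces).
ifconfig igb0 mtu 9000

# Verify end-to-end: 8972 = 9000 minus 20 (IP) and 8 (ICMP) header bytes.
# -D sets the don't-fragment bit, so the ping fails if any hop in the path
# (NIC, switch, target) does not support jumbo frames.
ping -D -s 8972 192.168.1.1
```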


Deduplication? no, too much RAM usage

in detail
Deduplication is the process of not creating duplicate copies of data in order to save space. Depending upon the amount of duplicate data, deduplication can improve storage capacity as less data is written and stored. However, the process of deduplication is RAM intensive and a general rule of thumb is 5 GB RAM per TB of storage to be deduplicated. In most cases, using compression instead of deduplication will provide a comparable storage gain with less impact on performance.


In FreeNAS®, deduplication can be enabled during dataset creation. Be forewarned that there is no way to undedup the data within a dataset once deduplication is enabled as disabling deduplication has NO EFFECT on existing data. The more data you write to a deduplicated dataset, the more RAM it requires and when the system starts storing the DDTs (dedup tables) on disk because they no longer fit into RAM, performance craters. Furthermore, importing an unclean pool can require between 3-5 GB of RAM per TB of deduped data, and if the system doesn’t have the needed RAM it will panic, with the only solution being to add more RAM or to recreate the pool. Think carefully before enabling dedup! This article provides a good description of the value versus cost considerations for deduplication.


Unless you have a lot of RAM and a lot of duplicate data, do not change the default deduplication setting of “Off”. For performance reasons, consider using compression rather than turning this option on.


If deduplication is changed to On, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files. If deduplication is changed to Verify, ZFS will do a byte-to-byte comparison when two blocks have the same signature to make sure that the block contents are identical. Since hash collisions are extremely rare, Verify is usually not worth the performance hit.


Note
once deduplication is enabled, the only way to disable it is to use the zfs set dedup=off dataset_name command from Shell. However, any data that is already stored as deduplicated will not be un-deduplicated as only newly stored data after the property change will not be deduplicated. The only way to remove existing deduplicated data is to copy all of the data off of the dataset, set the property to off, then copy the data back in again. Alternately, create a new dataset with “ZFS Deduplication” left as disabled, copy the data to the new dataset, and destroy the original dataset.
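To check whether dedup is active and what it is actually saving (pool and dataset names are placeholders):

```shell
# The DEDUP column of zpool list shows the dedup ratio (1.00x = no savings).
zpool list tank

# Per-dataset dedup setting (off by default):
zfs get dedup tank/data
```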

  • pool itself will be encrypted, including the datasets
  • System > General: Disable crash reporting and Usage collection
  • System > System Dataset - change the pool to Boot Pool (USB) - check for better HDD Spindown
 

todo:

 