reminder: in the end, TrueNAS is meant for enterprise usage - no standby, no disk spin-down. For pure home usage, consider another OS.
record size = max = 1 MiB
compression = LZ4 (the default; it aborts the attempt early if the data is not worth compressing)
Tested: turning compression off and setting sync writes = off gave no performance gain - leave both at their defaults!
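For reference, the same properties can be set from the shell - a minimal sketch, assuming a hypothetical dataset tank/media:

zfs set recordsize=1M tank/media    # large records suit big sequential files
zfs set compression=lz4 tank/media  # lz4 is cheap and bails out early on incompressible data
zfs get recordsize,compression tank/media    # verify the current values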
in detail
RAM Caching and Sync Writes
In the event you do not have a SLOG device to provide a ZIL for your zpool, and you have a substantial amount of memory, you can disable sync writes on the pool, which will drastically speed up write operations because they are buffered in RAM.
Disabling sync on your zpool, dataset, or zvol tells the client application that all writes have been completed and committed to disk (HD or SSD) before they actually have been. This allows the system to cache writes in system memory.
In the event of a power loss, crash, or freeze, this data will be lost and/or may result in corruption.
You would only want to do this if you need fast storage where data loss is acceptable (such as video editing, a VDI clone desktop pool, etc.).
Utilizing a SLOG for the ZIL is much better (and safer) than this method; however, I still wanted to provide this for informational purposes as it does apply to some use cases.
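If you do want to try it on a throwaway dataset, the property can be toggled per dataset - a sketch only, with a hypothetical tank/scratch dataset (anything in flight is lost on a crash while sync is disabled):

zfs set sync=disabled tank/scratch   # acknowledge writes as soon as they hit RAM (unsafe)
zfs set sync=standard tank/scratch   # back to the default behaviour
zfs get sync tank/scratch            # confirm the current setting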
ashift=12 = 4K = 4096 bytes
ashift explanation:
Moving Boot from USB to SSD
Hi My setup currently relies on a 32Gb USB stick to boot from and I would like to know how feasible it is to switch the boot to the SSD that's already part of the setup. The setup is: 2 mirrored HDD which hold the 'data' pools. plus one SSD which houses the System dataset pool. Is there a...

GUI usage for editing:
Manually set ashift=9 on a new pool with 512b drives
Setup: Dell R510 TrueNAS-12.0-U2.1 (Virtualized in Proxmox 6.3-2) H200 in IT mode (fully passed through to the VM) 10x Seagate Constellation ES 3.5" ST2000NM0001 2TB 7.2K SAS, Sector size 512 From everything I've read, TrueNAS should just see the 512b drives, and automatically use ashift 9, but...

12 / 4K should be default as per
4k physical emulating 512 byte sector, Ironwolf drives
How do I use 4k instead of "emulated 512 byte sectors" on my four drives for a ZFS volume?

check with terminal
zdb -U /data/zfs/zpool.cache shows all pools except the boot pool.
SOLVED - How to force FreeNAS to use 4K sectors
I have a vdev made up of 6 4TB WD Red WD40EFRX drives that have a sector size of logical 512 physical 4096. I'm going to destroy the vdev and create a new one with 8 x 4TB drives all of the same type. How do i force 4K sector size? When I run zdb | grep ashift I get ashift: 9. There is no option...
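A sketch of how the check and the forced-4K workaround from the threads above can look from the shell - the pool name tank and the da0/da1 device names are placeholders, and the sysctl applies to the FreeBSD-based CORE line:

zdb -U /data/zfs/zpool.cache | grep ashift     # should report ashift: 12 for 4K-sector drives
sysctl vfs.zfs.min_auto_ashift=12              # make newly created vdevs default to 4K sectors
zpool create -o ashift=12 tank mirror da0 da1  # or force it explicitly at pool creation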

atime = off (do not update access times on every read)
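Can also be set per dataset from the shell - a sketch with a hypothetical tank/media dataset:

zfs set atime=off tank/media   # stop writing an access-time update on every read
zfs get atime tank/media       # verify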
Jumbo Frames:
- MTU 9000 or 9014 on the HP NIC - open question, see forum
- check that the switches support it (a quick end-to-end check is sketched below)
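One way to sanity-check an MTU 9000 path - a sketch with placeholder interface and host names; 8972 = 9000 minus 28 bytes of IP/ICMP headers:

ifconfig igb0 mtu 9000            # FreeBSD/CORE: enable jumbo frames on the NIC
ping -D -s 8972 192.168.1.10      # don't-fragment ping that only succeeds if jumbo frames work end to end
# on Linux/SCALE the equivalent would be: ping -M do -s 8972 192.168.1.10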
Deduplication? no, too much RAM usage
8. Storage — FreeNAS User Guide 9.3 Table of Contents
in detail
Deduplication is the process of not creating duplicate copies of data in order to save space. Depending upon the amount of duplicate data, deduplication can improve storage capacity as less data is written and stored. However, the process of deduplication is RAM intensive and a general rule of thumb is 5 GB RAM per TB of storage to be deduplicated. In most cases, using compression instead of deduplication will provide a comparable storage gain with less impact on performance.
In FreeNAS®, deduplication can be enabled during dataset creation. Be forewarned that there is no way to undedup the data within a dataset once deduplication is enabled as disabling deduplication has NO EFFECT on existing data. The more data you write to a deduplicated dataset, the more RAM it requires and when the system starts storing the DDTs (dedup tables) on disk because they no longer fit into RAM, performance craters. Furthermore, importing an unclean pool can require between 3-5 GB of RAM per TB of deduped data, and if the system doesn’t have the needed RAM it will panic, with the only solution being to add more RAM or to recreate the pool. Think carefully before enabling dedup! This article provides a good description of the value versus cost considerations for deduplication.
Unless you have a lot of RAM and a lot of duplicate data, do not change the default deduplication setting of “Off”. For performance reasons, consider using compression rather than turning this option on.
If deduplication is changed to On, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files. If deduplication is changed to Verify, ZFS will do a byte-to-byte comparison when two blocks have the same signature to make sure that the block contents are identical. Since hash collisions are extremely rare, Verify is usually not worth the performance hit.
Note
Once deduplication is enabled, the only way to disable it is to use the zfs set dedup=off dataset_name command from the Shell. However, any data that has already been stored deduplicated will not be un-deduplicated; only data written after the property change is stored without deduplication. The only way to remove existing deduplicated data is to copy all of the data off of the dataset, set the property to off, then copy the data back in again. Alternately, create a new dataset with "ZFS Deduplication" left disabled, copy the data to the new dataset, and destroy the original dataset.
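The copy-off procedure from that note, as shell commands - a sketch with hypothetical tank/data and tank/data_new dataset names and the usual /mnt mount path:

zfs set dedup=off tank/data                    # stops dedup for new writes only
zfs create -o dedup=off tank/data_new          # fresh dataset with dedup disabled
rsync -a /mnt/tank/data/ /mnt/tank/data_new/   # copy everything off the deduped dataset
zfs destroy -r tank/data                       # destroy the original only after verifying the copy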
- the pool itself will be encrypted, including its datasets
- System > General: disable crash reporting and usage collection
- System > System Dataset: move the system dataset to the boot pool (USB) - check whether this allows better HDD spin-down