This content follows the TrueNAS SCALE 23.10 (Cobia) releases.
Dataset
Last Modified 2024-03-19 08:38 EDT
The SCALE CLI guide is a work in progress! New namespace and command documentation is continually added and maintained, so check back here often to see what is new!
The dataset namespace contains one child namespace, user_prop, and 22 commands, and is based on dataset creation and management functions found in the SCALE API and web UI. It provides access to storage dataset methods through the dataset commands. Do not use the user_prop commands.
The following dataset commands allow you to create new and manage existing datasets.
You can enter commands from the main CLI prompt or from the dataset namespace prompt.
Enter the
--
flag following any CLI command to open the interactive arguments editor text-based user interface (TUI).
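For example, entering the following opens the interactive editor for the update command (an illustration of the pattern described above):
storage dataset update --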
The attachments
command lists services dependent on the dataset matching the ID entered.
Use the storage dataset query
or storage dataset details
command to obtain dataset IDs.
The attachments
command has one required property, id
.
id
is the ID found in the output of the storage dataset query
command.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns a table with type, service, and attachments for the specified dataset ID.
From the CLI prompt, enter:
storage dataset attachments id=tank
Where tank is the ID assigned to the dataset by the system.
storage dataset attachments id=tank
+---------------+------------+-----------------------+
| type | service | attachments |
+---------------+------------+-----------------------+
| Snapshot Task | <null> | tank/minio |
| | | tank/minio |
| | | tank/snapshots |
| NFS Share | nfs | /mnt/tank/shares/nfs |
| | | /mnt/tank/shares/nfs2 |
| | | /mnt/tank/shares/nfs3 |
| | | /mnt/tank/shares/nfs4 |
| Rsync Task | <null> | /mnt/tank/minio |
| Kubernetes | kubernetes | tank |
+---------------+------------+-----------------------+
Use the change_key
command to change the encryption key properties for the dataset matching the ID entered.
The TrueNAS CLI guide for SCALE is a work in progress! This command has not been fully tested and validated. Full documentation is still being developed. Check back for updated information.
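As an untested sketch only, and assuming change_key_options accepts the same passphrase and pbkdf2iters properties shown for the create command encryption_options, a command string might look like:
storage dataset change_key id="tank/tank-e" change_key_options={"passphrase":"newpassphrase1234","pbkdf2iters":"350000"}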
The checksum_choices
command lists checksums supported for the ZFS dataset.
The checksum_choices
command does not require entering a property argument.
Enter the command then press Enter.
The command returns a table with a list of checksums supported by ZFS.
Checksums are ON, FLETCHER2, FLETCHER4, SHA256, SHA512, SKEIN, and EDONR.
From the CLI prompt, enter:
storage dataset checksum_choices
storage dataset checksum_choices
+-----------+-----------+
| ON | ON |
| FLETCHER2 | FLETCHER2 |
| FLETCHER4 | FLETCHER4 |
| SHA256 | SHA256 |
| SHA512 | SHA512 |
| SKEIN | SKEIN |
| EDONR | EDONR |
+-----------+-----------+
The compression_choices
command lists compression algorithms supported by ZFS.
The compression_choices
command does not require entering a property argument.
Enter the command then press Enter.
The command returns a table listing compression algorithms supported by ZFS.
From the CLI prompt, enter:
storage dataset compression_choices
storage dataset compression_choices
+----------------+----------------+
| OFF | OFF |
| LZ4 | LZ4 |
| GZIP | GZIP |
| GZIP-1 | GZIP-1 |
| GZIP-9 | GZIP-9 |
| ZSTD | ZSTD |
| ZSTD-FAST | ZSTD-FAST |
| ZLE | ZLE |
| LZJB | LZJB |
| ZSTD-1 | ZSTD-1 |
| ZSTD-2 | ZSTD-2 |
| ZSTD-3 | ZSTD-3 |
| ZSTD-4 | ZSTD-4 |
| ZSTD-5 | ZSTD-5 |
| ZSTD-6 | ZSTD-6 |
| ZSTD-7 | ZSTD-7 |
| ZSTD-8 | ZSTD-8 |
| ZSTD-9 | ZSTD-9 |
| ZSTD-10 | ZSTD-10 |
| ZSTD-11 | ZSTD-11 |
| ZSTD-12 | ZSTD-12 |
| ZSTD-13 | ZSTD-13 |
| ZSTD-14 | ZSTD-14 |
| ZSTD-15 | ZSTD-15 |
| ZSTD-16 | ZSTD-16 |
| ZSTD-17 | ZSTD-17 |
| ZSTD-18 | ZSTD-18 |
| ZSTD-19 | ZSTD-19 |
| ZSTD-FAST-1 | ZSTD-FAST-1 |
| ZSTD-FAST-2 | ZSTD-FAST-2 |
| ZSTD-FAST-3 | ZSTD-FAST-3 |
| ZSTD-FAST-4 | ZSTD-FAST-4 |
| ZSTD-FAST-5 | ZSTD-FAST-5 |
| ZSTD-FAST-6 | ZSTD-FAST-6 |
| ZSTD-FAST-7 | ZSTD-FAST-7 |
| ZSTD-FAST-8 | ZSTD-FAST-8 |
| ZSTD-FAST-9 | ZSTD-FAST-9 |
| ZSTD-FAST-10 | ZSTD-FAST-10 |
| ZSTD-FAST-20 | ZSTD-FAST-20 |
| ZSTD-FAST-30 | ZSTD-FAST-30 |
| ZSTD-FAST-40 | ZSTD-FAST-40 |
| ZSTD-FAST-50 | ZSTD-FAST-50 |
| ZSTD-FAST-60 | ZSTD-FAST-60 |
| ZSTD-FAST-70 | ZSTD-FAST-70 |
| ZSTD-FAST-80 | ZSTD-FAST-80 |
| ZSTD-FAST-90 | ZSTD-FAST-90 |
| ZSTD-FAST-100 | ZSTD-FAST-100 |
| ZSTD-FAST-500 | ZSTD-FAST-500 |
| ZSTD-FAST-1000 | ZSTD-FAST-1000 |
+----------------+----------------+
Use the create
command to create datasets or zvols.
The create
command has one required property and 38 optional properties.
Of these, set share_type
and casesensitivity
as these cannot be changed after creating a new dataset.
See Create Properties below for details.
The create
command is a complex command.
Enter the storage dataset create --
command string to open the interactive argument editor/text user interface (TUI) and make configuring a dataset or zvol easier.
Enter the CLI command string then press Enter.
The command creates a new dataset and returns an empty line.
Enter property arguments using the =
delimiter to separate property and value. Double-quote values that include special characters.
Property arguments enclosed in curly brackets {}
have double-quoted properties and values separated by the :
delimiter, and separate multiple property arguments with a comma. For example:
create name="tank/tank-e" type=FILESYSTEM share_type=GENERIC inherit_encryption=false encryption=true encryption_options= {"pbkdf2iters":"350000","passphrase":"abcd1234"}
Property | Required | Description | Syntax Example |
---|---|---|---|
name | Yes | Enter the full name for the dataset as pool/dataset. Enter the value in double quotes. | name="tank/dataset" |
type | No | Enter FILESYSTEM to create a dataset or VOLUME to create a zvol. Include the volsize property argument if using VOLUME . | type=FILESYSTEM or type=VOLUME |
volsize | Yes* | *Required if setting type=VOLUME . Enter the value which is a multiple of the block size. Options are 512 , 512B , 1K , 2K , 4K , 8K , 16K , 32K , 64K , 128K . | volsize=8k |
volblocksize | No | Only used when setting type=VOLUME . Enter the block size for the zvol. For example, 10GiB. Use the recommended_zvol_blocksize command to get a blocksize value. | volblocksize=10GiB |
sparse | No | Only used when setting type=VOLUME . Enter true or false . | sparse=true or sparse=false |
force_size | No | Only used when setting type=VOLUME . The system restricts creating a ZVol that brings a pool to over 80% capacity. Enter true to force creating of a zvol in this case (not recommended). Default is false . | force_size=false or force_size=true |
comments | No | Enter comments using upper and lowercase alphanumeric and special characters as a description about this dataset. Enclose value in double quotes. | comments="my comments" |
sync | No | Enter the option for the desired sync setting: STANDARD to use the standard sync settings requested by the client software; ALWAYS to wait for data writes to complete; DISABLED to never wait for writes to complete. | sync=STANDARD, sync=ALWAYS, or sync=DISABLED |
snapdev | No | Enter the option to set whether the volume snapshot devices under /dev/zvol/poolname are HIDDEN or VISIBLE . Default inherits HIDDEN . | snapdev=HIDDEN or snapdev=VISIBLE |
compression | No | Enter the compression level to use from the available ZFS supported options. Use storage dataset compression_choices to list ZFS supported compression algorithms. Enter the value in double quotes. | compression=OFF |
atime | No | Set the access time for the dataset. Options are: ON updates the access time for files when they are read; OFF disables creating log traffic when reading files to maximize performance. | atime=ON or atime=OFF |
exec | No | Enter ON to allow executing processes from within the dataset or OFF to prevent executing processes from within the dataset. We recommend setting this to ON . | exec=ON or exec=OFF |
managedby | No | Not used. The query command includes a reference to the router/switch by default. | N/A |
quota | No | Enter a value to define the maximum overall allowed space for the dataset and the dataset descendants. Default 0 disables quotas. Default is Null . | quota=Null |
quota_warning | No | Enter a percentage value that when reached or exceeded generates a warning alert or enter Null . | quota_warning=Null |
quota_critical | No | Enter a percentage value that when reached or exceeded generates a critical alert or enter Null . | quota_critical=Null |
refquota | No | Enter a value to define the maximum allowed space for just the dataset. Default 0 disables quotas. Default is Null . | refquota=Null |
refquota_warning | No | Enter a percentage value that when reached or exceeded generates a warning alert or enter Null . | refquota_warning=Null |
refquota_critical | No | Enter a percentage value that when reached or exceeded generates a critical alert or enter Null . | refquota_critical=Null |
reservation | No | Enter a value to reserve additional space for this dataset and the dataset descendants. 0 is unlimited. | reservation=0 |
refreservation | No | Enter a value to reserve additional space for just this dataset. 0 is unlimited. | refreservation=0 |
special_small_block_size | No | Enter the threshold block size for including small file blocks into the special allocation class fusion pool. Blocks smaller than or equal to this value are assigned to the special allocation class while greater blocks are assigned to the regular class. Valid values are zero or a power of two from 512B up to 1M. Default is 0 which means no small file blocks are allocated in the special class. Add a special class VDev to the pool before setting this value. | special_small_block_size=0 |
copies | No | Enter a number for allowed duplicates of ZFS user data stored on this dataset. | copies=2 |
snapdir | No | Enter the visibility of the .zfs directory on the dataset as HIDDEN or VISIBLE . | snapdir=HIDDEN or snapdir=VISIBLE |
deduplication | No | Enter the option to transparently reuse a single copy of duplicated data to save space. Options are: ON to use deduplication; VERIFY to do a byte-to-byte comparison when two blocks have the same signature to verify the block contents are identical; OFF to not use deduplication. | deduplication=OFF |
checksum | No | Enter the checksum to use from the options: ON , OFF , FLETCHER2 , FLETCHER4 , SHA256 , SHA512 , SKEIN , or EDONR . | checksum=OFF |
readonly | No | Enter ON to make the dataset readonly, or OFF to allow write access. | readonly=ON |
recordsize | No | Set the logical block size in the dataset matching the fixed size of data, as in a database. This can result in better performance. Use the recordsize_choices command to return a list of options to use with this command. | recordsize=Null |
casesensitivity | No | Enter SENSITIVE to assume file names are case sensitive or INSENSITIVE for mixed case or case-insensitivity. You cannot change case sensitivity after saving the dataset. Default is INSENSITIVE . | casesensitivity=INSENSITIVE |
aclmode | No | Enter the option that determines how chmod behaves when adjusting file permissions. See the zfs(8) aclmode property for more information. Options are: PASSTHROUGH only updates ACL entries that are related to the file or directory mode. RESTRICTED does not allow chmod to make changes to files or directories with a non-trivial ACL. A trivial ACL can be fully expressed as a file mode without losing any access rules. Use this to optimize a dataset for SMB sharing. DISCARD. The acl_type setting determines the aclmode options available in the UI. | aclmode=PASSTHROUGH |
acltype | No | acltype is inherited from the parent or root dataset. Enter the access control type from these options: OFF specifies neither NFSV4 nor POSIX protocols. NFSV4 is used to cleanly migrate Windows-style ACLs across Active Directory domains (or stand-alone servers) that use ACL models richer than POSIX. Use to maintain compatibility with TrueNAS CORE, FreeBSD, or other non-Linux ZFS implementations. POSIX use when an organization data backup target does not support native NFSV4 ACLs. Linux platforms use POSIX and many backup products that access the server outside the SMB protocol cannot understand or preserve native NFSV4 ACLs. Datasets with share_type set to GENERIC or APPS have POSIX ACL types. | acltype=POSIX |
share_type | Yes | Enter the option to define the type of data sharing the dataset uses to optimize the dataset for that sharing protocol. Options are:GENERIC to use for all datasets except those using SMB shares.SMB for datasets using SMB shares.APPS for datasets created to use with applications and to optimize the dataset for use by any application. | share_type=GENERIC |
xattr | No | Set SA to store extended attributes as System Attributes. This allows storing of tiny xattrs (~100 bytes) with the dnode and storing up to 64k of xattrs in the spill block. This results in fewer IO requests when extended attributes are in use. Set ON to store extended attributes in hidden subdirectories but this can require multiple lookups when accessing a file. | xattr=SA |
encryption_options | *No | Use to specify the type of encryption, hex-encoded key, or passphrase. Enter the property arguments that apply: generate_key enter true to have the system generate a hex-encoded key; default is false. Use key to enter a hex-encoded key of your choice. key_file enter true to use a key file for key encryption; default is false if not using an uploaded key file. pbkdf2iters enter the number of password-based key derivation function 2 (PBKDF2) iterations to use for reducing vulnerability to brute-force attacks. Enter a value greater than 100000 or use the default value 350000. passphrase enter the double-quoted password of your choice. Must be specified to use passphrase encryption. Default value is Null, or use any string of alphanumeric and special characters of your choice. key enter the hex-encoded key of your choice. Default is Null. | encryption_options={"generate_key":"false","key":"my_hex_encoded_string"} |
encryption | No | Enter true to encrypt the dataset. Default is false if the parent dataset is not encrypted. You must enter inherit_encryption=false to change encryption for a child of an unencrypted dataset and if changing from key to passphrase encryption. | encryption=true or encryption=false |
inherit_encryption | *No | Required if encrypting a dataset that is a child of an unencrypted dataset. Enter true to inherit encryption from the parent dataset or false to encrypt a dataset that is a child of an unencrypted dataset or if changing from key to passphrase encryption. You cannot create an unencrypted child dataset of an encrypted parent dataset. | inherit_encryption=true or inherit_encryption=false |
user_properties | No | Do not use. | N/A |
create_ancestors | No | Enter true to create ancestors. Default is false . | create_ancestors=true or create_ancestors=false |
From the CLI prompt, enter:
storage dataset create name=pool/dataset_name share_type=GENERIC
Where:
- pool/dataset_name is the full name (including root/parent) for the dataset.
- GENERIC is the share type for the dataset.
storage dataset create name=tank/apps share_type=GENERIC
Use the delete
command to delete a dataset or zvol matching the ID entered.
The delete
command has one required property, id
, and one optional property, dataset_delete
.
id
is the identifier found in the output of the storage dataset query
command.
Enter the property argument using the =
delimiter to separate property and double-quoted value.
Enter the command string then press Enter.
The command returns an empty line.
From the CLI prompt, enter:
storage dataset delete id="tank/tank-e3"
Where tank/tank-e3 is the identifier for the dataset.
storage dataset delete id="tank/tank-e3"
Use the destroy_snapshots
command to destroy snapshots for the dataset matching the ID entered.
Use the storage snapshot query
command to obtain a list of snapshots on the system.
If the system is performing a snapshot task for the dataset specified, the command returns an error stating the dataset is busy.
The destroy_snapshots
command has two required properties, name
and snapshots
.
name
is the dataset name found in the output of the storage dataset query
command.
snapshots
has four optional properties. See Snapshots Properties below for details.
Use the default {}
value to destroy all snapshots for the dataset matching the name entered.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns validation progress as a percentage and then the name of each destroyed snapshot.
Enter snapshots
optional property arguments inside the curly brackets {}
, where the properties and values are double-quoted and separated by the :
delimiter, and with each argument separated with a comma.
Use the default value snapshots={}
without specifying any optional property to destroy all snapshots for the specified dataset.
Property | Description | Syntax Example |
---|---|---|
start | Enter the start date and time for the snapshot range. | "start":"snapshot_start" |
end | Enter the end date and time for the snapshot range. | "end":"snapshot_end" |
snapshot_spec | Enter the start and ending date and time range in an object array. | {"start":"snapshot_start","end":"snapshot_end"} |
snapshot_name | Enter the name of the snapshot as found in the output of the storage snapshot query command. | "snapshot_name":"snapshotname" |
From the CLI prompt, enter:
storage dataset destroy_snapshots name="tank/snapshots" snapshots={}
Where tank/snapshots is the name of the dataset.
storage dataset destroy_snapshots name="tank/snapshots" snapshots={}
[20%] Initial validation complete...
[100%] Initial validation complete...
tank/snapshots@auto-2023-09-05_08-35
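To destroy a single snapshot by name instead of all snapshots, pass the snapshot_name property from the table above (an illustration only; substitute a snapshot name returned by the storage snapshot query command):
storage dataset destroy_snapshots name="tank/snapshots" snapshots={"snapshot_name":"tank/snapshots@auto-2023-09-05_08-35"}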
Use the details
command to list all datasets on the system and the services or tasks that might be consuming them.
The details
command does not require entering a property argument.
Enter the command then press Enter.
The command returns a table with the same information found in the query
command output and any services consuming the dataset.
From the CLI prompt, enter:
storage dataset details
storage dataset details
+--------------------------+------------+--------------------------+-------+-----------+-----------------+------------+--------------+----------------+-------------+-------------+---------------+-------------------------------+--------+-------------+--------+-------------+-------------+-------------+----------------+-------------+------------+----------------------+--------+----------------+---------------+-----------------+-----------+-----------------+--------+-------+---------------+----------+-------------------+--------------+--------------+--------------+--------------+--------------+-------------------------+----------------------+-----------------------+-------------------+
| id | type | name | pool | encrypted | encryption_root | key_loaded | children | snapshot_count | comments | managedby | deduplication | mountpoint | sync | compression | origin | quota | refquota | reservation | refreservation | volsize | key_format | encryption_algorithm | used | usedbychildren | usedbydataset | usedbysnapshots | available | user_properties | locked | atime | casesensitive | readonly | thick_provisioned | nfs_shares | smb_shares | iscsi_shares | vms | apps | replication_tasks_count | snapshot_tasks_count | cloudsync_tasks_count | rsync_tasks_count |
+--------------------------+------------+--------------------------+-------+-----------+-----------------+------------+--------------+----------------+-------------+-------------+---------------+-------------------------------+--------+-------------+--------+-------------+-------------+-------------+----------------+-------------+------------+----------------------+--------+----------------+---------------+-----------------+-----------+-----------------+--------+-------+---------------+----------+-------------------+--------------+--------------+--------------+--------------+--------------+-------------------------+----------------------+-----------------------+-------------------+
| tank | FILESYSTEM | tank | tank | false | <null> | false | <list> | 0 | <undefined> | <undefined> | <dict> | /mnt/tank | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <undefined> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | false | false | true | false | false | <empty list> | <empty list> | <empty list> | <empty list> | <list> | 0 | 0 | 0 | 0 |
| tank/zvols | FILESYSTEM | tank/zvols | tank | false | <null> | false | <list> | 0 | <dict> | <dict> | <dict> | /mnt/tank/zvols | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <undefined> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | false | false | true | false | false | <empty list> | <empty list> | <empty list> | <empty list> | <empty list> | 0 | 0 | 0 | 0 |
| tank/zvols/zvol1 | VOLUME | tank/zvols/zvol1 | tank | false | <null> | false | <empty list> | 0 | <dict> | <dict> | <dict> | <null> | <dict> | <dict> | <dict> | <undefined> | <undefined> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | false | true | true | false | true | <empty list> | <empty list> | <empty list> | <empty list> | <empty list> | 0 | 0 | 0 | 0 |
| tank/ix-applications | FILESYSTEM | tank/ix-applications | tank | false | <null> | false | <empty list> | 0 | <undefined> | <undefined> | <dict> | /mnt/tank/ix-applications | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <undefined> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | false | false | true | false | false | <empty list> | <empty list> | <empty list> | <empty list> | <empty list> | 0 | 0 | 0 | 0 |
+--------------------------+------------+--------------------------+-------+-----------+-----------------+------------+--------------+----------------+-------------+-------------+---------------+-------------------------------+--------+-------------+--------+-------------+-------------+-------------+----------------+-------------+------------+----------------------+--------+----------------+---------------+-----------------+-----------+-----------------+--------+-------+---------------+----------+-------------------+--------------+--------------+--------------+--------------+--------------+-------------------------+----------------------+-----------------------+-------------------+
Use the encryption_algorithm_choices
command to list encryption algorithms supported by ZFS.
The encryption_algorithm_choices
command does not require entering a property argument.
Enter the command then press Enter.
The command returns a list of ZFS-supported encryption algorithms.
From the CLI prompt, enter:
storage dataset encryption_algorithm_choices
storage dataset encryption_algorithm_choices
+-------------+-------------+
| AES-128-CCM | AES-128-CCM |
| AES-192-CCM | AES-192-CCM |
| AES-256-CCM | AES-256-CCM |
| AES-128-GCM | AES-128-GCM |
| AES-192-GCM | AES-192-GCM |
| AES-256-GCM | AES-256-GCM |
+-------------+-------------+
Use the encryption_summary
command to retrieve a summary of all encrypted root datasets under the entered ID.
The encryption_summary
command has one required property, id
.
id
is the identifier for the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns progress in percentage complete followed by the encryption root datasets under the identifier entered or (empty list)
if none exist.
From the CLI prompt, enter:
storage dataset encryption_summary id="tank"
Where tank is the identifier for the dataset.
storage dataset encryption_summary id="tank"
[0%] ...
[100%] ...
+--------------+------------+-------------------------+-----------+--------+--------------+-------------------+
| name | key_format | key_present_in_database | valid_key | locked | unlock_error | unlock_successful |
+--------------+------------+-------------------------+-----------+--------+--------------+-------------------+
| tank/tank-e2 | PASSPHRASE | false | false | false | <null> | true |
| tank/tank-e | PASSPHRASE | false | false | false | <null> | true |
+--------------+------------+-------------------------+-----------+--------+--------------+-------------------+
Use the export_key
command to export the encryption key for the dataset matching the ID entered.
Use with storage dataset encryption_summary
to identify dataset encryption types for datasets on the system.
The export_key
command has one required property, id
.
id
is the identifier for the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns the encryption key for the dataset matching the id entered.
From the CLI prompt, enter:
storage dataset export_key id="tank/tank-e"
Where tank/tank-e is the identifier for the dataset.
storage dataset export_key id="tank/tank-e"
[0%] ...
[100%] ...
abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234
Use the export_keys
command to export keys for the ID entered and all children of it stored in the system.
The TrueNAS CLI guide for SCALE is a work in progress! This command has not been fully tested and validated. Full documentation is still being developed. Check back for updated information.
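As an untested sketch following the id pattern used by the export_key command, the command string might look like:
storage dataset export_keys id="tank/tank-e"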
Use the get_instance
command to list details for the dataset matching the ID entered.
The get_instance
command has one required property, id
.
id
is the identifier for the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns the query information for the dataset matching the id entered.
From the CLI prompt, enter:
storage dataset get_instance id="tank/tank2"
Where tank/tank2 is the id for the dataset.
storage dataset get_instance id="tank2"
+--------------------------+--------------+
| id | tank2 |
| type | FILESYSTEM |
| name | tank2 |
| pool | tank2 |
| encrypted | false |
| encryption_root | <null> |
| key_loaded | false |
| children | <empty list> |
| deduplication | <dict> |
| mountpoint | /mnt/tank2 |
| aclmode | <dict> |
| acltype | <dict> |
| xattr | <dict> |
| atime | <dict> |
| casesensitivity | <dict> |
| checksum | <dict> |
| exec | <dict> |
| sync | <dict> |
| compression | <dict> |
| compressratio | <dict> |
| origin | <dict> |
| quota | <dict> |
| refquota | <dict> |
| reservation | <dict> |
| refreservation | <dict> |
| copies | <dict> |
| snapdir | <dict> |
| readonly | <dict> |
| recordsize | <dict> |
| key_format | <dict> |
| encryption_algorithm | <dict> |
| used | <dict> |
| usedbychildren | <dict> |
| usedbydataset | <dict> |
| usedbyrefreservation | <dict> |
| usedbysnapshots | <dict> |
| available | <dict> |
| special_small_block_size | <dict> |
| pbkdf2iters | <dict> |
| creation | <dict> |
| snapdev | <dict> |
| user_properties | <dict> |
| locked | false |
+--------------------------+--------------+
Use the get_quota
command to return a list of quotas of the specified quota_type on the ZFS dataset entered as ds
.
The get_quota
command has two required properties, ds
and quota_type
.
ds
is the identifier for the dataset found in the output of the storage dataset query
.
quota_type
has four options: USER
, GROUP
, DATASET
, or PROJECT
.
PROJECT
displays quotas on each user in the specified filesystem, snapshot, or path, and space consumed by the file system.
If specifying a path, the file system containing that path is used.
This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
Enter the property arguments using the =
delimiter to separate properties and values.
Enter the command string then press Enter.
The command returns quota type details for the dataset matching the id entered.
Details include the quota type, ID, name for the dataset, the quota, refquota, and bytes used values.
If entering quota_type=PROJECT
, information returned is the quota type and ID entered, the bytes used, and number of user objects.
From the CLI prompt, enter:
storage dataset get_quota ds="tank" quota_type=DATASET
Where:
- tank is the identifier for the dataset.
- DATASET is the quota type to return details on.
storage dataset get_quota ds="tank" quota_type=DATASET
+------------+------+------+-------+----------+-------------+
| quota_type | id | name | quota | refquota | used_bytes |
+------------+------+------+-------+----------+-------------+
| DATASET | tank | tank | 0 | 0 | 26452549632 |
+------------+------+------+-------+----------+-------------+
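To list user quotas instead, change the quota_type value. For example:
storage dataset get_quota ds="tank" quota_type=USER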
The inherit_parent_encryption_properties
command allows the dataset matching the ID entered to inherit the parent dataset encryption root, disregarding the current encryption settings.
Use only when the specified dataset ID is an encrypted parent and ID itself is an encryption root (parent to encrypted child datasets).
The inherit_parent_encryption_properties
command has one required property, id
.
id
is the identifier for the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns an empty line.
From the CLI prompt, enter:
storage dataset inherit_parent_encryption_properties id="tank/tank-e/child-k"
Where tank/tank-e/child-k specifies the encrypted child dataset that is a root (parent) dataset to other encrypted datasets.
storage dataset inherit_parent_encryption_properties id="tank/tank-e/child-k"
Use the lock
command to lock the dataset matching the ID entered.
Only works with datasets using passphrase encryption. Datasets with key encryption return an error.
The TrueNAS CLI guide for SCALE is a work in progress! This command has not been fully tested and validated. Full documentation is still being developed. Check back for updated information.
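As an untested sketch following the id pattern used by the other dataset commands, locking a passphrase-encrypted dataset might look like:
storage dataset lock id="tank/tank-e"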
Use the mountpoint
command to obtain the mountpoint for the dataset matching the ID entered.
The mountpoint
command has one required property, id
, and one optional property, raise
.
id
is the identifier for the dataset found in the output of the storage dataset query
.
raise
default value is true
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns the mount path for the dataset identifier entered.
From the CLI prompt, enter:
storage dataset mountpoint id="tank/minio"
Where tank/minio is the identifier for the dataset.
storage dataset mountpoint id="tank/minio"
/mnt/tank/minio
Use the permission
command to set the owner and group, and other dataset permission options (i.e., recursive, traverse, etc.) for the dataset matching the ID entered.
The permission
command is complex. Use either the UI Edit ACL screen or the interactive arguments editor/text user interface (TUI) to configure ACL permissions.
The permission
command has two required properties, id
and pool_dataset_permissions
.
See Pool_Dataset_Permissions Properties below for details.
id
is the identifier for the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns a table with user and options for the specified dataset identifier.
Permissions are specified as either a POSIX or NFSV4 acl. This method is a wrapper around filesystem.setperm, filesystem.setacl, and filesystem.chown.
Enter the pool_dataset_permissions
property arguments inside the curly brackets {}
, use the :
delimiter to separate double-quoted properties and values, and separate each argument with a comma and a space. For example:
pool_dataset_permissions={"user":"admin", "group":"admin"}
Enter pool_dataset_permissions={} with the property arguments described below.
Property | Required | Description | Syntax Example |
---|---|---|---|
user | *Yes | *Must enter user but can enter both user and group . Enter the name of the user (owner) of permissions for the dataset matching the id entered. | {"user":"admin"} |
group | No | Enter the name of the group (owner) of permissions for the dataset matching the id entered. | {"group":"admin"} |
mode | No | Enter the ACL mode from these options: INHERIT, RESTRICTED, or PASSTHROUGH. If specified, filesystem.setperm is called. If neither mode nor acl is specified, filesystem.chown is called. | {"mode":"RESTRICTED"} |
From the CLI prompt, enter:
storage dataset permission id="tank/tank-e" pool_dataset_permission={"user":"admin"}
Where:
- tank/tank-e is the identifier for the dataset.
- admin is the user owner for the dataset permissions.
storage dataset permission id="tank/tank-e" pool_dataset_permission={"user":"admin"}
[100%] ...
+---------+--------+
| user | admin |
| options | <dict> |
+---------+--------+
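To also set the group owner and an ACL mode from the options in the table above, a command string might look like the following (an illustration; adjust the names and mode for your system):
storage dataset permission id="tank/tank-e" pool_dataset_permission={"user":"admin", "group":"admin", "mode":"RESTRICTED"}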
Use the processes
command to list the processes using the dataset matching the ID entered.
The processes
command has one required property, id
.
id
is the identifier for the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns (empty list)
if no processes are using the dataset matching the id
entered.
From the CLI prompt, enter:
storage dataset processes id="tank/ix-applications"
Where tank/ix-applications is the identifier for the dataset.
storage dataset processes id="tank/ix-applications"
(empty list)
Use the promote
command to promote the cloned dataset matching the ID entered.
Use the storage snapshot query
command to list snapshots on the system.
The promote
command has one required property, id
.
id
is the identifier for the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns an empty line.
From the CLI prompt, enter:
storage dataset promote id="tank/minio-miniosnaps-clone"
Where tank/minio-miniosnaps-clone is the identifier for the dataset.
storage dataset promote id="tank/minio-miniosnaps-clone"
Use the query
command to list all configured datasets by entering storage dataset query
.
Information provided includes id (name), type, name, pool, encryption settings, child datasets, comments, ACL mode and type, checksum, compression settings, quota settings, and other settings found on the Dataset add and edit screens in the UI.
To include the services consuming the dataset, use the storage dataset details
command.
The query
command does not require entering a property argument.
Enter with id
or any other option to refine the output to the information requested.
Enter the command then press Enter.
The command returns a table with information for all datasets on the system.
From the CLI prompt, enter:
storage dataset query
storage dataset query
+----------------------+------------+----------------------+-------+-----------+-----------------+------------+--------------+-------------+-------------+---------------+-----------------------+---------+-------------+-------------+-------------+-----------------+----------+-------------+--------+-------------+---------------+--------+-------------+-------------+-------------+----------------+--------+-------------+----------+-------------+--------------+-------------+------------+----------------------+--------+----------------+---------------+----------------------+-----------------+-----------+--------------------------+-------------+----------+---------+-----------------+--------+
| id | type | name | pool | encrypted | encryption_root | key_loaded | children | comments | managedby | deduplication | mountpoint | aclmode | acltype | xattr | atime | casesensitivity | checksum | exec | sync | compression | compressratio | origin | quota | refquota | reservation | refreservation | copies | snapdir | readonly | volsize | volblocksize | recordsize | key_format | encryption_algorithm | used | usedbychildren | usedbydataset | usedbyrefreservation | usedbysnapshots | available | special_small_block_size | pbkdf2iters | creation | snapdev | user_properties | locked |
+----------------------+------------+----------------------+-------+-----------+-----------------+------------+--------------+-------------+-------------+---------------+-----------------------+---------+-------------+-------------+-------------+-----------------+----------+-------------+--------+-------------+---------------+--------+-------------+-------------+-------------+----------------+--------+-------------+----------+-------------+--------------+-------------+------------+----------------------+--------+----------------+---------------+----------------------+-----------------+-----------+--------------------------+-------------+----------+---------+-----------------+--------+
| tank | FILESYSTEM | tank | tank | false | <null> | false | <list> | <undefined> | <undefined> | <dict> | /mnt/tank | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <undefined> | <undefined> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | false |
| tank/reptest1 | FILESYSTEM | tank/reptest1 | tank | false | <null> | false | <empty list> | <dict> | <dict> | <dict> | /mnt/tank/reptest1 | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <undefined> | <undefined> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | false |
| tank/zvols | FILESYSTEM | tank/zvols | tank | false | <null> | false | <list> | <dict> | <dict> | <dict> | /mnt/tank/zvols | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <undefined> | <undefined> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | <dict> | false |
+----------------------+------------+----------------------+-------+-----------+-----------------+------------+--------------+-------------+-------------+---------------+-----------------------+---------+-------------+-------------+-------------+-----------------+----------+-------------+--------+-------------+---------------+--------+-------------+-------------+-------------+----------------+--------+-------------+----------+-------------+--------------+-------------+------------+----------------------+--------+----------------+---------------+----------------------+-----------------+-----------+--------------------------+-------------+----------+---------+-----------------+--------+
The recommended_zvol_blocksize
command is a helper method to get the recommended block size for a new zvol (dataset of type VOLUME).
Use the returned value with the storage dataset create
command volblocksize
property argument when creating a zvol.
The recommended_zvol_blocksize
command has one required property, pool
.
pool
is the name of the pool found in the output of the storage pool query
or storage dataset query id
commands.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns a blocksize recommendation.
From the CLI prompt, enter:
storage dataset recommended_zvol_blocksize pool="tank"
Where tank is the name of the pool.
storage dataset recommended_zvol_blocksize pool="tank"
16K
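A follow-up sketch that uses this recommendation when creating a zvol, assuming the CLI accepts a suffixed volsize value such as 10G (if it does not, enter a byte count that is a multiple of the block size):
storage dataset create name="tank/zvols/zvol2" type=VOLUME volsize=10G volblocksize=16K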
The recordsize_choices
command lists record size options to use with the create or update command recordsize property argument.
The recordsize_choices
command does not require entering a property argument.
Enter the command then press Enter.
The command returns a list of record sizes.
From the CLI prompt, enter:
storage dataset recordsize_choices
storage dataset recordsize_choices
512
512B
1K
2K
4K
8K
16K
32K
64K
128K
256K
512K
1M
2M
4M
8M
16M
Use the set_quota
command to set quotas for the dataset matching the identifier specified.
There are three over-arching types of quotas for ZFS datasets:
- Dataset quotas and refquotas. If specifying a DATASET quota type, then the command acts as a wrapper for pool.dataset.update.
- User and group quotas. These limit the amount of disk space consumed by files that are owned by the specified users or groups. If an object quota type is specified, then the quota limits the number of objects the specified user or group can own.
- Project quotas. These limit the amount of disk space consumed by files that are owned by the specified project. Project quotas are not yet implemented.
This command allows users to set multiple quotas simultaneously by submitting a list of quotas. The list can contain all supported quota types.
Use the account user query
command or the UI to obtain the UID for the user entered into the command string.
The set_quota
command has two required properties, ds
and quotas
.
ds
is the name of the target ZFS dataset found in the output of the storage dataset query
.
See Quota Properties below for details on entering quota properties.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns an empty line.
quotas
specifies three required properties to apply to the dataset.
Enter property arguments inside curly brackets {}
, using the :
to separate double-quoted property and values, and separating with a comma and space. The quota_value
property value does not require double quotes.
Enter the entire string inside square brackets []
. For example:
quotas=[{"quota_type": "USER", "id": "3000", "quota_value": 0}]
Property | Description |
---|---|
quota_type | Enter the type of quota to apply to the dataset. Options are: USER, GROUP, USEROBJ, GROUPOBJ, or DATASET. The USEROBJ and GROUPOBJ types limit the number of objects consumed by the specified user or group. |
id | Enter the uid, gid, or name to apply the quota to. If quota_type is DATASET , then id must be either QUOTA or REFQUOTA . Only the root user can specify 0 as the id value. |
quota_value | Enter the quota size in bytes. Setting a value of 0 removes the user or group quota. |
From the CLI prompt, enter:
storage dataset set_quota ds="tank/zvols" quotas=[{"quota_type": "USER", "id": "3000", "quota_value": 0}]
Where:
- USER is the quota type.
- 3000 is the UID for the user.
- 0 is the quota value.
storage dataset set_quota ds="tank/zvols" quotas=[{"quota_type": "USER", "id": "3000", "quota_value": 0}]
The snapshot_count
command lists the snapshot count for the dataset matching the name entered.
The snapshot_count
command has one required property, dataset
.
dataset
is the name of the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns the number of snapshots for the dataset specified.
From the CLI prompt, enter:
storage dataset snapshot_count dataset="tank/snapshots"
Where tank/snapshots is the full name of the dataset.
storage dataset snapshot_count dataset="tank/snapshots"
1
Use the unlock
command to unlock the dataset or zvol matching the ID entered.
This command only works with datasets locked with a password.
The TrueNAS CLI guide for SCALE is a work in progress! This command has not been fully tested and validated. Full documentation is still being developed. Check back for updated information.
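As an untested sketch, assuming unlock_options accepts a datasets list containing the dataset name and passphrase as in the SCALE API, the command string might look like:
storage dataset unlock id="tank/tank-e" unlock_options={"datasets":[{"name":"tank/tank-e","passphrase":"abcd1234"}]}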
Use the unlock_services_restart_choices
command to get mapping of services identifiers and labels that can be restarted on dataset unlock.
The TrueNAS CLI guide for SCALE is a work in progress! This command has not been fully tested and validated. Full documentation is still being developed. Check back for updated information.
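As an untested sketch following the id pattern used by the other dataset commands:
storage dataset unlock_services_restart_choices id="tank/tank-e"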
Use the update
command to update settings for the dataset or zvol matching the ID entered.
The update
command has one required property, id
.
id
is the identifier for the dataset found in the output of the storage dataset query
.
Enter the property argument using the =
delimiter to separate property and value.
Enter the command string then press Enter.
The command returns an empty line.
Enter property arguments using the =
delimiter to separate property and value. Double-quote values that include special characters.
Property arguments enclosed in curly brackets {}
have double-quoted properties and values separated by the :
delimiter, and separate multiple property arguments with a comma. For example:
update id="tank/tank-e" sync=ALWAYS
Property | Description | Syntax Example | |
---|---|---|---|
volsize | *Required if setting type=VOLUME . Enter the value which is a multiple of the block size. Options are 512 , 512B , 1K , 2K , 4K , 8K , 16K , 32K , 64K , 128K . | volsize=8k | |
force_size | Only used when setting type=VOLUME . The system restricts creating a zvol that brings a pool to over 80% capacity. Enter true to force creating of a zvol in this case (not recommended). Default is false . | force_size=false or force_size=true | |
comments | Enter comments using upper and lowercase alphanumeric and special characters as a description about this dataset. Enclose value in double quotes. | comments="my comments" | |
sync | Enter the option for the desired sync setting: STANDARD to use the standard sync settings requested by the client software; ALWAYS to wait for data writes to complete; DISABLED to never wait for writes to complete. | sync=STANDARD, sync=ALWAYS, or sync=DISABLED | |
snapdev | Enter the option to set whether the volume snapshot devices under /dev/zvol/poolname are HIDDEN or VISIBLE . Default inherits HIDDEN . | snapdev=HIDDEN or snapdev=VISIBLE | |
compression | Enter the compression level to use from the available ZFS supported options. Use storage dataset compression_choices to list ZFS supported compression algorithms. Enter the value in double quotes. | compression=OFF | |
atime | Set the access time for the dataset. Options are: ON updates the access time for files when they are read; OFF disables creating log traffic when reading files to maximize performance. | atime=ON or atime=OFF | |
exec | Enter ON to allow executing processes from within the dataset or OFF to prevent executing processes from within the dataset. We recommend setting this to ON . | exec=ON or exec=OFF | |
managedby | Not used. The query command includes a reference to the router/switch by default. | N/A | |
quota | Enter a value to define the maximum overall allowed space for the dataset and the dataset descendants. Default 0 disables quotas. Default is Null . | quota=Null | |
quota_warning | Enter a percentage value that when reached or exceeded generates a warning alert or enter Null . | quota_warning=Null | |
quota_critical | Enter a percentage value that when reached or exceeded generates a critical alert or enter Null . | quota_critical=Null | |
refquota | Enter a value to define the maximum allowed space for just the dataset. Default 0 disables quotas. Default is Null . | refquota=Null | |
refquota_warning | Enter a percentage value that when reached or exceeded generates a warning alert or enter Null . | refquota_warning=Null | |
refquota_critical | Enter a percentage value that when reached or exceeded generates a critical alert or enter Null . | refquota_critical=Null | |
reservation | Enter a value to reserve additional space for this dataset and the dataset descendants. 0 is unlimited. | reservation=0 | |
refreservation | Enter a value to reserve additional space for just this dataset. 0 is unlimited. | refreservation=0 | |
special_small_block_size | Enter the threshold block size for including small file blocks into the special allocation class fusion pool. Blocks smaller than or equal to this value are assigned to the special allocation class while greater blocks are assigned to the regular class. Valid values are zero or a power of two from 512B up to 1M. Default is 0 which means no small file blocks are allocated in the special class. Add a special class VDev to the pool before setting this value. | special_small_block_size=0 | |
copies | Enter a number for allowed duplicates of ZFS user data stored on this dataset. | copies=2 | |
snapdir | Enter the visibility of the .zfs directory on the dataset as HIDDEN or VISIBLE . | snapdir=HIDDEN or snapdir=VISIBLE | |
deduplication | Enter the option to transparently reuse a single copy of duplicated data to save space. Options are: ON to use deduplication; VERIFY to do a byte-to-byte comparison when two blocks have the same signature to verify the block contents are identical; OFF to not use deduplication. | deduplication=OFF | |
checksum | Enter the checksum to use from the options: ON , OFF , FLETCHER2 , FLETCHER4 , SHA256 , SHA512 , SKEIN , or EDONR . | checksum=OFF | |
readonly | Enter ON to make the dataset readonly, or OFF to allow write access. | readonly=ON | |
recordsize | Set the logical block size in the dataset matching the fixed size of data, as in a database. This can result in better performance. Use the recordsize_choices command to return a list of options to use with this command. | recordsize=Null | |
aclmode | Enter the option that determines how chmod behaves when adjusting file permissions. See the zfs(8) aclmode property for more information. Options are: PASSTHROUGH only updates ACL entries that are related to the file or directory mode. RESTRICTED does not allow chmod to make changes to files or directories with a non-trivial ACL. A trivial ACL can be fully expressed as a file mode without losing any access rules. Use this to optimize a dataset for SMB sharing. DISCARD. The acl_type setting determines the aclmode options available in the UI. | aclmode=PASSTHROUGH | |
acltype | acltype is inherited from the parent or root dataset. Enter the access control type from these options: OFF specifies neither NFSV4 nor POSIX protocols. NFSV4 is used to cleanly migrate Windows-style ACLs across Active Directory domains (or stand-alone servers) that use ACL models richer than POSIX. Use to maintain compatibility with TrueNAS CORE, FreeBSD, or other non-Linux ZFS implementations. POSIX use when an organization data backup target does not support native NFSV4 ACLs. Linux platforms use POSIX and many backup products that access the server outside the SMB protocol cannot understand or preserve native NFSV4 ACLs. Datasets with share_type set to GENERIC or APPS have POSIX ACL types. | acltype=POSIX | |
xattr | Set SA to store extended attributes as System Attributes. This allows storing of tiny xattrs (~100 bytes) with the dnode and storing up to 64k of xattrs in the spill block. This results in fewer IO requests when extended attributes are in use. Set ON to store extended attributes in hidden subdirectories but this can require multiple lookups when accessing a file. | xattr=SA | |
user_properties | Do not use. | N/A | |
create_ancestors | Enter true to create ancestors. Default is false . | create_ancestors=true or create_ancestors=false | |
user_properties_update | Do not use. | N/A |
From the CLI prompt, enter:
storage dataset update id="tank/shares" property=value
Where:
- tank/shares is the identifier for the dataset.
- property is a property option and value is the new value for this property.
storage dataset update id="tank/zvols" sync= ALWAYS
Related CLI Storage Articles
Related Dataset Articles
- Adding and Managing Datasets
- Advanced Settings Screen
- Capacity Settings Screen
- Managing User or Group Quotas
- Snapshots Screen
- User and Group Quota Screens
- Snapshot
- Encryption Settings
- Storage Encryption
Related Snapshot Articles
Have more questions or want to discuss your specific configuration? For further discussion or assistance, see these resources:
- TrueNAS Community Forum
- TrueNAS Community Discord
- iXsystems Enterprise Support (requires paid support contract)