This guide collects various how-tos for both simple and complex tasks using primarily the TrueNAS web interface.
Tutorials are organized parallel to the TrueNAS web interface structure and grouped by topic.
Tutorials are living articles, continually updated with new content and additional in-depth tutorials that guide you in unlocking the full potential of TrueNAS.
To display all tutorials in a linear HTML format, export them to PDF, or physically print them, select ⎙ Download or Print.
TrueNAS SCALE Tutorials
⎙ Download or Print: View all TrueNAS SCALE Tutorials as a single page for download or print.
Top Toolbar: Tutorials about options available from the TrueNAS SCALE top toolbar.
Managing API Keys: This tutorial shows how to add, create, or edit an API key in TrueNAS SCALE.
Dashboard: Tutorials related to using the TrueNAS SCALE Dashboard.
Synchronizing System and SCALE Time: Provides instructions on synchronizing the system server and TrueNAS SCALE time when both are out of alignment with each other.
Storage: Tutorials for configuring the various features contained within the Storage area of the TrueNAS SCALE web interface.
Import Pool: Provides information on ZFS importing for storage pools in TrueNAS SCALE. It also addresses GELI-encrypted pools.
Disks: Articles with instructions for managing, replacing, and wiping disks.
Create Pool: Provides background considerations and a simple tutorial on creating storage pools in TrueNAS SCALE.
Fusion Pools: Provides information on setting up and using fusion pools.
Managing Pools: Provides instructions on managing storage pools, VDEVs, and disks in TrueNAS SCALE.
Datasets: Tutorials for creating and managing datasets in TrueNAS SCALE.
Creating Snapshots: Provides instructions on creating ZFS snapshots in TrueNAS SCALE.
Managing Snapshots: Provides instructions on managing ZFS snapshots in TrueNAS SCALE.
Storage Encryption: Provides information on SCALE storage encryption for pools, root datasets, datasets, and zvols.
Setting Up Permissions: Provides instructions on editing and viewing ACL permissions, using the ACL editor screens, and general information on ACLs.
Shares: Tutorials for configuring the various data sharing features in TrueNAS SCALE.
AFP Migration: Provides information on migrating AFP shares from CORE to SCALE.
Block Shares (iSCSI): Describes the iSCSI protocol and has tutorials for various configuration scenarios.
Adding NFS Shares: Provides instructions on adding NFS shares, starting NFS service, and accessing the share.
Multiprotocol Shares: Provides instructions on setting up SMB and NFSv4 mixed-mode shares.
Windows Shares (SMB): Provides information on SMB shares and instructions for creating a basic share and setting up various specific configurations of SMB shares.
Data Protection: Tutorials related to configuring data backup features in TrueNAS SCALE.
Creating VMWare Snapshots: Provides instructions for creating ZFS snapshots when using TrueNAS as a VMWare datastore.
Managing S.M.A.R.T. Tests: Provides instructions on running S.M.A.R.T. tests manually or automatically, using Shell to view the list of tests, and configuring the S.M.A.R.T. test service.
Replication Tasks: Tutorials for configuring ZFS snapshot replication tasks in TrueNAS SCALE.
Network: Tutorials for configuring network interfaces and connections in TrueNAS SCALE.
Interface Configurations: Tutorials about configuring the various types of network interfaces available in TrueNAS SCALE.
Adding Network Settings: Provides instructions on adding network settings during initial SCALE installation or after a clean install of SCALE.
Configuring Static Routes: Provides instructions on configuring a static route using the SCALE web UI.
Setting Up IPMI: Guides you through setting up Intelligent Platform Management Interface (IPMI) on TrueNAS SCALE.
Credentials: Tutorials for configuring the different credentials needed for TrueNAS SCALE features.
Using Administrator Logins: Explains role-based administrator logins and related functions. Provides instructions on properly configuring SSH and working with the admin and root user passwords.
Managing Users: Provides instructions on adding and managing administrator and local user accounts.
Accessing NAS From a VM: Provides instructions on how to create a bridge interface for the VM and provides Linux and Windows examples.
Apps: Expanding TrueNAS SCALE functionality with additional applications.
Using SCALE Catalogs: Provides basic information on adding or managing application catalogs in TrueNAS SCALE.
Using Install Custom App: Provides information on using Install Custom App to configure custom or third-party applications in TrueNAS SCALE.
Securing Apps: Securing TrueNAS applications with VPNs and Zero Trust.
Community Apps: Notes about community applications and individual tutorials for applications.
Enterprise Applications: Tutorials for using TrueNAS SCALE applications in an Enterprise-licensed deployment.
Sandboxes (Jail-like Containers): Provides advanced users information on deploying custom FreeBSD jail-like containers in SCALE.
Reporting: Provides information on changing settings that control how SCALE displays report graphs, how to interact with graphs, and configuring reporting exporters.
System Settings: Tutorials for configuring the system management options in the System Settings area of the TrueNAS SCALE web interface.
Updating SCALE: Provides instructions on updating SCALE releases in the UI.
Using Shell: Provides information on using the TrueNAS SCALE Shell.
Audit Logs: Provides information on using the System and SMB Share auditing screens and functions in TrueNAS.
1 - Top Toolbar
Tutorials about options available from the TrueNAS SCALE top toolbar.
The SCALE top toolbar provides access to functional areas of the UI that you might want to access directly while on other screens in the UI.
Icon buttons provide quick access to dropdown lists of options, dropdown panels with information on system alerts or tasks, and can include access to other information or configuration screens.
It also shows the name of the admin user currently logged in to the system, to the left of the Settings and Power icons.
You can also collapse or expand the main function menu on the left side of the screen.
The iXsystems logo opens the iXsystems home page where users can find information about iXsystems storage and server systems.
Users can also use the iXsystems home page to access their customer portal and the community section for support.
Send Feedback
The Send Feedback icon opens a feedback window.
Alternatively, go to System Settings > General, find the Support widget, and click File Ticket to see the feedback window.
The feedback window allows users to send page ratings, comments, and report issues or suggest improvements directly to the TrueNAS development team.
Submitting a bug report requires a free Atlassian account.
Click between the tabs at the top of the window to see options for your specific feedback.
Rate this page
Use the Rate this page tab to quickly review and provide comments on the currently active TrueNAS user interface screen.
You can include a screenshot of the current page and/or upload additional images with your comments.
Report a bug
Use the Report a bug tab to notify the development team when a TrueNAS screen or feature is not working as intended.
For example, report a bug when a middleware error and traceback appears while saving a configuration change.
Enter a descriptive summary in the Subject.
TrueNAS can show a list of existing Jira tickets with similar summaries.
When there is an existing ticket about the issue, consider clicking on that ticket and leaving a comment instead of creating a new one.
Duplicate tickets are closed in favor of consolidating feedback into one report.
Enter details about the issue in the Message.
Keep the details concise and focused on how to reproduce the issue, what the expected result of the action is, and what the actual result of the action was.
This helps ensure a speedy ticket resolution.
Including system debug files and screenshots also speeds up issue resolution.
Bug Reports from Enterprise Licensed Systems
TrueNAS Enterprise
When an Enterprise license is applied to the system, the Report a bug tab has additional environment and contact information fields for sending bug reports directly to iXsystems.
Click on History to open the Tasks screen with lists of all successful, active, and failed jobs.
Click on the All, Active, or Failed button at the top of the screen to show the log of jobs that fit that classification.
Click View next to a task to see the log information and error message for that task.
The Alerts (notifications) icon displays a list of current alert notifications.
To remove an alert notification, click Dismiss below it or use Dismiss All Alerts to remove all notifications from the list.
Use the settings icon to display the Alerts dropdown list with two options: Alert Settings and Email.
Select Alert Settings to add or edit existing system alert services and configure alert options such as the warning level and frequency and how the system notifies you.
See Alerts Settings Screens for more information.
TrueNAS Enterprise
The Alert Settings Screens article includes information about the SCALE Enterprise high availability (HA) alert settings.
Select Email to configure the method for the system to send email reports and alerts.
See Setting Up System Email for information about configuring the system email service and alert emails.
Settings
The Settings (account_circle) icon opens a dropdown list of options for passwords, API keys, and TrueNAS information.
Change Password
Click on the Change Password (dialpad) icon button to display the change password dialog where you can enter a new password for the currently logged-in user.
Click on the visibility_off icon to display entered passwords.
To stop displaying the password, click on the visibility icon.
API Keys
Click on API Keys (laptop) to add an API key.
API keys identify an outside resource or application without a principal.
For example, when adding a new system to TrueCommand, you might be required to add an API key to authenticate the system.
Use this function to create an API key for that purpose.
Click API Docs to access the API documentation portal with information on TrueNAS SCALE API commands.
See API Keys for more information on adding or managing API keys.
Guide and About
Click on Guide (library_books) to open the TrueNAS Documentation Hub in a new tab.
Click on About to display the information window with links to the TrueNAS Documentation Hub, TrueNAS Community Forums, FreeNAS Open Source Storage Appliance GitHub repository, and iXsystems home page.
Click the Power (power_settings_new) button to open the dropdown list of power options.
Log Out logs you out of the SCALE UI but does not power off the system.
Restart logs you out of the SCALE UI and restarts the server.
Shut Down logs you out of the SCALE UI and powers off the system as though you pressed the power button on the physical server.
With the implementation of administrator roles, the power options displayed can be locked based on the privilege level of the administrator role.
The local administrator has access to all three power options.
The Read-Only and Sharing Manager admin roles only have access to the Log Out option.
The other power options display with a lock icon indicating the function is not permitted.
Content
Managing API Keys: This tutorial shows how to add, create, or edit an API key in TrueNAS SCALE.
1.1 - Managing API Keys
This tutorial shows how to add, create, or edit an API key in TrueNAS SCALE.
The API Keys option on the top right toolbar Settings (user icon) dropdown menu displays the API Keys screen.
This screen displays a list of API keys added to your system and allows you to add, edit, or delete keys.
Select Reset to remove the existing API key and generate a new random key. The dialog displays the new key and the Copy to Clipboard option to copy the key to the clipboard.
Always back up and secure keys. The key string displays only one time, at creation!
To delete an API key, select Confirm on the delete dialog to activate the Delete button.
Click API Docs to access API documentation that is built into the system.
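As a concrete illustration, an API key authenticates REST calls against the TrueNAS `/api/v2.0` endpoint using a bearer-token header. The host name and key value below are placeholders, and the sketch only prints the curl command rather than executing it:

```shell
# Build (but do not run) a curl command that authenticates with an API key.
# "truenas.local" and the key value are placeholders for your own system.
API_KEY="1-abcdEXAMPLEKEY"
NAS_HOST="truenas.local"

build_api_request() {
    # $1 is the API endpoint path, for example system/info
    echo "curl -s -H \"Authorization: Bearer ${API_KEY}\" https://${NAS_HOST}/api/v2.0/$1"
}

build_api_request system/info
```

Run the printed command against your own system to confirm the key works before wiring it into TrueCommand or scripts.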
1.2 - Shutting Down SCALE Enterprise HA
Describes the procedure for shutting down an Enterprise HA system in TrueNAS SCALE.
TrueNAS Enterprise
This procedure applies to SCALE Enterprise High Availability (HA) systems only.
If you need to power down your SCALE Enterprise system with HA enabled, follow this procedure:
Shut Down From the SCALE Web UI
While logged into the SCALE Web UI using the virtual IP (VIP), click the power button in the top right corner of the screen.
Select Shut Down from the dropdown list.
This shuts down the active controller.
The system fails over to the standby controller.
When the SCALE Web UI login screen displays, log back in to the system. This logs you in to the standby controller.
Click the power button in the top right corner of the screen.
Select Shut Down from the dropdown list.
This shuts down the standby controller.
2 - Dashboard
Tutorials related to using the TrueNAS SCALE Dashboard.
This section contains tutorials involving the SCALE Dashboard.
Contents
Synchronizing System and SCALE Time: Provides instructions on synchronizing the system server and TrueNAS SCALE time when both are out of alignment with each other.
2.1 - Synchronizing System and SCALE Time
Provides instructions on synchronizing the system server and TrueNAS SCALE time when both are out of alignment with each other.
TrueNAS SCALE allows users to synchronize SCALE and system server time when they get out of sync.
This function does not correct time differences of more than 30 days.
The System Information widget on the Dashboard displays a message and provides an icon button that executes the time-synchronization operation only when SCALE detects a discrepancy between SCALE and system server time.
Click the Synchronize Time (loop) icon button to initiate the time-synchronization operation.
What if my time is off by more than 30 days?
If your time is off by more than 30 days, TrueNAS SCALE does not allow you to sync since the system probably has one of the following underlying issues:
The BIOS timezone is incorrect
The motherboard CMOS battery is failing
To check the BIOS timezone, reboot your system. During boot, press the indicated key that takes you to the BIOS setup screen. The key varies by manufacturer (F2, Delete, Esc, etc.). If you do not know which key to use, check the manufacturer documentation for your server.
After you enter the BIOS setup, ensure the timezone is UTC. If not, set it to UTC, save the configuration changes, and reboot the system.
A dying motherboard CMOS battery can also cause the system clock to be incorrect. If you intend to replace your CMOS, be sure to follow electrostatic discharge (ESD) safety measures.
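The 30-day cutoff can also be checked by hand from the command line. A minimal shell sketch, assuming you have a trusted reference timestamp to compare against; the helper function is hypothetical, and only the 30-day threshold comes from this article:

```shell
# Hypothetical helper: report whether two Unix timestamps differ by more
# than the 30-day limit that blocks the Synchronize Time button.
drift_exceeds_limit() {
    local t1=$1 t2=$2
    local limit=$((30 * 24 * 60 * 60))    # 30 days in seconds
    local diff=$((t1 - t2))
    [ "$diff" -lt 0 ] && diff=$((-diff))  # absolute value
    [ "$diff" -gt "$limit" ]
}

# Example: compare the local clock to a reference timestamp.
# (Both values are the local clock here, so the drift is effectively zero.)
if drift_exceeds_limit "$(date +%s)" "$(date +%s)"; then
    echo "drift exceeds 30 days: fix BIOS/CMOS first"
else
    echo "within 30 days: Synchronize Time can correct it"
fi
```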
I corrected my system time issue. Why won't my apps start?
If you corrected your system time issues (changed BIOS time, replaced the CMOS battery, etc.) and your apps do not start, ensure all apps have their timezone set to UTC.
3 - Storage
Tutorials for configuring the various features contained within the Storage area of the TrueNAS SCALE web interface.
The SCALE Storage section has controls for pools, snapshots, and disk management.
This section also provides access to datasets, zvols, quotas, and permissions.
Use the Import Pool button to reconnect pools exported/disconnected from the current system or created on another system.
This also reconnects pools after users reinstall or upgrade the TrueNAS system.
Use the Disks button to manage, wipe, and import storage disks that TrueNAS uses for ZFS data storage.
Use the Create Pool button to create ZFS data storage “pools” from physical disks. Pools efficiently store and protect data.
The Storage screen displays all the pools added to the system.
Each pool shows statistics and status, along with buttons to manage the different elements of the pool.
The articles in this section offer specific guidance for the different storage management options.
Storage Articles
Import Pool: Provides information on ZFS importing for storage pools in TrueNAS SCALE. It also addresses GELI-encrypted pools.
Disks: Articles with instructions for managing, replacing, and wiping disks.
Replacing Disks: Provides disk replacement instructions that include taking a failed disk offline and replacing a disk in an existing VDEV. The replacement process automatically triggers a pool resilver.
Wiping a Disk: Provides instructions for wiping a disk.
SLOG Over-Provisioning: Provides information on the disk_resize command in TrueNAS SCALE.
Managing Self-Encrypting Drives (SED): Covers self-encrypting drives including supported specifications, implementing and managing SEDs in TrueNAS, and managing SED passwords and data.
Create Pool: Provides background considerations and a simple tutorial on creating storage pools in TrueNAS SCALE.
Fusion Pools: Provides information on setting up and using fusion pools.
Managing Pools: Provides instructions on managing storage pools, VDEVs, and disks in TrueNAS SCALE.
3.1 - Import Pool
Provides information on ZFS importing for storage pools in TrueNAS SCALE. It also addresses GELI-encrypted pools.
ZFS pool importing works for pools exported or disconnected from the current system, those created on another system, and for pools you reconnect after reinstalling or upgrading the TrueNAS system.
The import procedure only applies to disks with a ZFS storage pool.
Do I need to do anything different with disks installed on a different system?
When physically installing ZFS pool disks from another system, use the zpool export poolname command in the Linux command line or a web interface equivalent to export the pool on that system.
Shut down that system and move the drives to the TrueNAS system.
Shutting down the original system prevents an in use by another machine error during the TrueNAS import.
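The move sequence above can be summarized as commands. They are shown here with echo as a dry run so nothing executes, and "tank" is a placeholder pool name:

```shell
# Dry run of the pool-move sequence; echo prints each command instead of running it.
POOL=tank   # placeholder pool name

echo "zpool export ${POOL}"   # on the original system, before removing the drives
echo "zpool import"           # on TrueNAS, list pools available for import
echo "zpool import ${POOL}"   # import from the CLI, or use Storage > Import Pool in the UI
```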
To import a pool, go to the Storage Dashboard and click Import Pool at the top of the screen.
TrueNAS detects the pools that are present but not connected and adds them to the Pools dropdown list.
Select a pool from the Pool dropdown list, then click Import.
Can I import GELI-encrypted pools?
GELI encryption is specific to FreeBSD so TrueNAS SCALE cannot import GELI-encrypted pools.
See the GELI Pool Migrations section in the CORE Storage Encryption article.
The Preparing to Migrate article provides information on what you can and cannot migrate and a checklist of actions to take before migrating from CORE with GELI-encrypted pools to SCALE.
3.2 - Disks
Articles with instructions for managing, replacing, and wiping disks.
To manage disks, go to Storage and click Disks on the top right of the screen to display the Storage Disks screen.
Select the disk on the list, then select Edit.
The Disks page lets users edit disks, perform manual tests, and view S.M.A.R.T. test results. Users may also delete obsolete data off an unused disk.
Performing Manual S.M.A.R.T. Testing
Select the disk(s) you want to perform a S.M.A.R.T. test on and click Manual Test.
Long runs the SMART Extended Self-Test. This scans the entire disk surface and can take many hours on large-capacity disks.
Short runs the SMART Short Self-Test (usually under ten minutes). These are basic disk tests that vary by manufacturer.
Conveyance runs a SMART Conveyance Self Test.
This self-test routine is intended to identify damage incurred during transporting of the device.
This self-test routine requires only minutes to complete.
Offline runs the SMART Immediate Offline Test.
The effects of this test are visible only in that it updates the SMART Attribute values, and if the test finds errors, they appear in the SMART error log.
Click Start to begin the test. Depending on the test type you choose, the test can take some time to complete. TrueNAS generates alerts when tests discover issues.
For information on automated S.M.A.R.T. testing, see the S.M.A.R.T. tests article.
S.M.A.R.T. Test Results
To review test results, expand the disk and click S.M.A.R.T. Test Results.
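Results are also visible from the Shell with smartctl (`smartctl -l selftest /dev/sdX`). A hypothetical helper that counts failed entries in that self-test log; the sample rows below are illustrative, not from a real drive:

```shell
# Count self-test log entries that did not complete without error.
# Input follows the `smartctl -l selftest` table format ("# 1  ..." rows).
count_smart_failures() {
    awk '/^# / && !/Completed without error/ && !/in progress/ { n++ } END { print n + 0 }'
}

# Sample log: one clean short test, one extended test with a read failure.
count_smart_failures <<'EOF'
# 1  Short offline       Completed without error       00%      1412         -
# 2  Extended offline    Completed: read failure       90%      1400         123456
EOF
```

For the sample above this prints 1, matching the single failed extended test.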
Replacing Disks: Provides disk replacement instructions that include taking a failed disk offline and replacing a disk in an existing VDEV. The replacement process automatically triggers a pool resilver.
Wiping a Disk: Provides instructions for wiping a disk.
SLOG Over-Provisioning: Provides information on the disk_resize command in TrueNAS SCALE.
Managing Self-Encrypting Drives (SED): Covers self-encrypting drives including supported specifications, implementing and managing SEDs in TrueNAS, and managing SED passwords and data.
3.2.1 - Replacing Disks
Provides disk replacement instructions that include taking a failed disk offline and replacing a disk in an existing VDEV. The replacement process automatically triggers a pool resilver.
Hard drives and solid-state drives (SSDs) have a finite lifetime and can fail unexpectedly.
When a disk fails in a Stripe (RAID0) pool, you must recreate the entire pool and restore all data backups.
We always recommend creating non-stripe storage pools that have disk redundancy.
To prevent further redundancy loss or eventual data loss, always replace a failed disk as soon as possible!
TrueNAS integrates new disks into a pool to restore it to full functionality.
TrueNAS requires you to replace a disk with another disk of the same or greater capacity than the failed disk.
You must install the disk in the TrueNAS system.
It should not be part of an existing storage pool.
TrueNAS wipes the data on the replacement disk as part of the process.
Disk replacement automatically triggers a pool resilver.
Replacing a Failed Disk
If you configure your main SCALE Dashboard to include individual Pool widgets or the Storage widget, they show the status of your system pools as online or offline, degraded, or in an error condition.
From the main Dashboard, click on either the Pool or Storage widget to go to the Storage Dashboard screen, or click Storage on the main navigation menu to open the Storage Dashboard and locate the pool in the degraded state.
My disk is faulted. Should I replace it?
If a disk shows a faulted state, TrueNAS has detected an issue with that disk and you should replace it.
To replace a failed disk:
1. Locate the failed drive.
a. Go to the Storage Dashboard and click Manage Devices on the Topology widget for the degraded pool to open the Devices screen for that pool.
b. Click anywhere on the VDEV to expand it and look for the drive with the Offline status.
2. Replace the disk.
a. Click Replace on the Disk Info widget on the Devices screen for the disk you off-lined.
b. Select the new drive from the Member Disk dropdown list on the Replacing disk diskname dialog.
3. Add the new disk to the existing VDEV. Click Replace Disk to add the new disk to the VDEV and bring it online.
Disk replacement fails when the selected disk has partitions or data present.
To destroy any data on the replacement disk and allow the replacement to continue, select the Force option.
When the disk wipe completes, TrueNAS starts replacing the failed disk.
TrueNAS resilvers the pool during the replacement process.
For pools with large amounts of data, this can take a long time.
When the resilver process completes, the pool status returns to Online status on the Devices screen.
Taking a Disk Offline
We recommend users off-line a disk before starting the physical disk replacement.
Off-lining a disk removes the device from the pool and can prevent swap issues.
Can I use a disk that is failing but still active?
There are situations where you can leave a disk that has not completely failed online to provide additional redundancy during the replacement procedure.
We do not recommend leaving failed disks online unless you know the exact condition of the failing disk.
Attempting to replace a heavily degraded disk without off-lining it significantly slows down the replacement process.
What if the offline operation fails?
If the off-line operation fails with a Disk offline failed - no valid replicas message, go to Storage Dashboard and click Scrub on the ZFS Health widget for the pool with the degraded disk. The Scrub Pool confirmation dialog opens. Select Confirm and then click Start Scrub.
When the scrub operation finishes, return to the Devices screen, click on the VDEV and then the disk, and try to off-line it again.
Click on Manage Devices to open the Devices screen, then click anywhere on the VDEV to expand it and show the drives in the VDEV.
Click Offline on the ZFS Info widget. A confirmation dialog displays. Click Confirm and then Offline.
The system begins the process to take the disk offline. When complete, the status of the failed disk displays as Offline.
The button toggles to Online.
You can physically remove the disk from the system when the disk status is Offline.
If the replacement disk is not already physically installed in the system, install it now.
Use Replace to bring the new disk online in the same VDEV.
Restoring the Hot Spare
After a disk fails, the hot spare takes over. To restore the hot spare to waiting status after replacing the failed drive, remove the hot spare from the pool, then re-add it to the pool as a new hot spare.
3.2.2 - Wiping a Disk
Provides instructions for wiping a disk.
The disk wipe option deletes obsolete data from an unused disk.
Wipe is a destructive action and results in permanent data loss!
Back up any critical data before wiping a disk.
TrueNAS only shows the Wipe option for unused disks.
Ensure you have backed-up all data and are no longer using the disk.
Triple check that you have selected the correct disk for the wipe.
Recovering data from a wiped disk is usually impossible.
Click Wipe to open a dialog with additional options:
Quick erases only the partitioning information on a disk without clearing other old data, making it easy to reuse. Quick wipes take only a few seconds.
Full with zeros overwrites the entire disk with zeros and can take several hours to complete.
Full with random overwrites the entire disk with random binary code and takes even longer than the Full with zeros operation to complete.
After selecting the appropriate method, click Wipe and confirm the action. A Confirmation dialog opens.
Verify the name to ensure you have chosen the correct disk. When satisfied you can wipe the disk, set Confirm and click Continue.
Continue starts the disk wipe process and opens a progress dialog with the Abort button.
Abort stops the disk wipe process. At the end of the disk wipe process a success dialog displays.
Close closes the dialog and returns you to the Disks screen.
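The Full with zeros method is conceptually the same as overwriting the device with zeros from /dev/zero. A safe demonstration on a throwaway scratch file rather than a real disk:

```shell
# Demonstration on a temporary file (never point dd at a real disk by accident).
scratch=$(mktemp)
printf 'old partition data' > "$scratch"

# Overwrite the file with zeros, as "Full with zeros" does to an entire disk.
dd if=/dev/zero of="$scratch" bs=512 count=1 conv=notrunc 2>/dev/null

# Verify nothing but zero bytes remain (-v prints every line, no "*" collapsing).
if od -v -A n -t x1 "$scratch" | grep -qv '^[ 0]*$'; then
    echo "non-zero bytes remain"
else
    echo "file is all zeros"
fi
rm -f "$scratch"
```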
3.2.3 - SLOG Over-Provisioning
Provides information on the disk_resize command in TrueNAS SCALE.
For more general information on SLOG disks, see SLOG Devices.
Because this is a potentially disruptive procedure, contact iXsystems Support to review your overprovisioning needs and schedule a maintenance window.
Customers who purchase iXsystems hardware or who want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
3.2.4 - Managing Self-Encrypting Drives (SED)
TCG Pyrite Version 1 and Version 2 are similar to Opalite, but with hardware encryption removed.
Pyrite provides a logical equivalent of the legacy ATA security for non-ATA devices. Only the drive firmware protects the device.
Pyrite Version 1 SEDs do not have PSID support and can become unusable if the password is lost.
TCG Enterprise is designed for systems with many data disks.
These SEDs cannot unlock before the operating system boots.
See this Trusted Computing Group and NVM Express® joint white paper for more details about these specifications.
TrueNAS Implementation
TrueNAS implements the security capabilities of camcontrol for legacy devices and sedutil-cli for TCG devices.
When managing a SED from the command line, it is recommended to use the sedhelper wrapper script for sedutil-cli to ease SED administration and unlock the full capabilities of the device. See provided examples of using these commands to identify and deploy SEDs below.
You can configure a SED before or after assigning the device to a pool.
By default, SEDs are not locked until the administrator takes ownership of them. Ownership is taken by explicitly configuring a global or per-device password in the web interface and adding the password to the SEDs. Adding SED passwords in the web interface also allows TrueNAS to automatically unlock SEDs.
A password-protected SED protects the data stored on the device when the device is physically removed from the system. This allows secure disposal of the device without having to first wipe the contents. Repurposing a SED on another system requires the SED password.
For TrueNAS High Availability (HA) systems, SED drives only unlock on the active controller!
Deploying SEDs
Enter command sedutil-cli --scan in the Shell to detect and list devices. The second column of the results identifies the drive type:
Character: Standard
no: non-SED device
1: Opal V1
2: Opal V2
E: Enterprise
L: Opalite
p: Pyrite V1
P: Pyrite V2
r: Ruby
Example:
root@truenas1:~ # sedutil-cli --scan
Scanning for Opal compliant disks
/dev/ada0 No 32GB SATA Flash Drive SFDK003L
/dev/ada1 No 32GB SATA Flash Drive SFDK003L
/dev/da0 No HGST HUS726020AL4210 A7J0
/dev/da1 No HGST HUS726020AL4210 A7J0
/dev/da10 E WDC WUSTR1519ASS201 B925
/dev/da11 E WDC WUSTR1519ASS201 B925
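The scan output can be classified mechanically using the character table above. A hypothetical awk helper that maps the second column to the standard names, fed the sample output rather than a live scan:

```shell
# Map the second column of `sedutil-cli --scan` output to the SED standard name.
classify_sed() {
    awk '$1 ~ /^\/dev\// {
        type = "unknown"
        if ($2 == "No" || $2 == "no") type = "non-SED"
        else if ($2 == "1")           type = "Opal V1"
        else if ($2 == "2")           type = "Opal V2"
        else if ($2 == "E")           type = "Enterprise"
        else if ($2 == "L")           type = "Opalite"
        else if ($2 == "p")           type = "Pyrite V1"
        else if ($2 == "P")           type = "Pyrite V2"
        else if ($2 == "r")           type = "Ruby"
        print $1, type
    }'
}

# Feed it two rows from the sample scan results in this article:
classify_sed <<'EOF'
/dev/da0  No   HGST HUS726020AL4210  A7J0
/dev/da10 E    WDC WUSTR1519ASS201   B925
EOF
```

For the sample rows this labels /dev/da0 as non-SED and /dev/da10 as Enterprise.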
TrueNAS supports setting a global password for all detected SEDs or setting individual passwords for each SED. Using a global password for all SEDs is strongly recommended to simplify deployment and avoid maintaining separate passwords for each SED.
Setting a Global Password for SEDs
Go to System Settings > Advanced > Self-Encrypting Drive and click Configure. A warning displays stating Changing Advanced settings can be dangerous when done incorrectly. Please use caution before saving. Click Close to display the settings form. Enter the password in SED Password and Confirm SED Password and click Save.
Record this password and store it in a safe place!
Now configure the SEDs with this password. Go to the Shell and enter command sedhelper setup <password>, where <password> is the global password entered in System > Advanced > SED Password.
sedhelper ensures that all detected SEDs are properly configured to use the provided password:
Rerun command sedhelper setup <password> every time a new SED is placed in the system to apply the global password to the new SED.
Creating Separate Passwords for Each SED
Go to Storage, click the Disks dropdown in the top right of the screen, and select Disks. From the Disks screen, click the expand_more icon for the confirmed SED, then Edit. Enter and confirm the password in the SED Password fields to override the global SED password.
You must configure the SED to use the new password. Go to the Shell and enter command sedhelper setup --disk <da1> <password>, where <da1> is the SED to configure and <password> is the created password from Storage > Disks > Edit Disks > SED Password.
Repeat this process for each SED and any SEDs added to the system in the future.
Remember SED passwords! If you lose the SED password, you cannot unlock SEDs or access their data.
After configuring or modifying SED passwords, always record and store them in a secure place!
Check SED Functionality
When SED devices are detected during system boot, TrueNAS checks for configured global and device-specific passwords.
Unlocking SEDs allows a pool to contain a mix of SED and non-SED devices. Devices with individual passwords are unlocked with their password. Devices without a device-specific password are unlocked using the global password.
To verify SED locking is working correctly, go to the Shell. Enter command sedutil-cli --listLockingRange 0 <password> <dev/da1>, where <dev/da1> is the SED and <password> is the global or individual password for that SED. The command returns ReadLockEnabled: 1, WriteLockEnabled: 1, and LockOnReset: 1 for drives with locking enabled:
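A hypothetical wrapper around that verification step, fed a sample of the expected output rather than a live drive, confirms all three flags are set to 1:

```shell
# Check that all three locking flags appear enabled in
# `sedutil-cli --listLockingRange` output read from stdin.
locking_enabled() {
    local flags
    flags=$(grep -Eo '(ReadLockEnabled|WriteLockEnabled|LockOnReset):[[:space:]]*1' | wc -l)
    [ "$flags" -eq 3 ]
}

# Sample output resembling a drive with locking fully enabled:
if locking_enabled <<'EOF'
LR0 Begin 0 End 0
    ReadLockEnabled: 1
    WriteLockEnabled: 1
    LockOnReset: 1
EOF
then
    echo "locking enabled"
else
    echo "locking NOT fully enabled"
fi
```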
This section contains command line instructions to manage SED passwords and data. The command used is sedutil-cli(8).
Most SEDs are TCG-E (Enterprise) or TCG-Opal (Opal v2.0).
Commands differ between the two drive types, so the first step is to identify the type in use.
These commands can be destructive to data and passwords. Keep backups and use the commands with caution.
Check SED version on a single drive, /dev/da0 in this example:
root@truenas:~ # sedutil-cli --isValidSED /dev/da0
/dev/da0 SED --E--- Micron_5N/A U402
To check all connected disks at once:
root@truenas:~ # sedutil-cli --scan
Scanning for Opal compliant disks
/dev/ada0 No 32GB SATA Flash Drive SFDK003L
/dev/ada1 No 32GB SATA Flash Drive SFDK003L
/dev/da0 E Micron_5N/A U402
/dev/da1 E Micron_5N/A U402
/dev/da12 E SEAGATE XS3840TE70014 0103
/dev/da13 E SEAGATE XS3840TE70014 0103
/dev/da14 E SEAGATE XS3840TE70014 0103
/dev/da2 E Micron_5N/A U402
/dev/da3 E Micron_5N/A U402
/dev/da4 E Micron_5N/A U402
/dev/da5 E Micron_5N/A U402
/dev/da6 E Micron_5N/A U402
/dev/da9 E Micron_5N/A U402
No more disks present ending scan
root@truenas:~ #
Instructions for Specific Drives
TCG-Opal Instructions
Reset the password without losing data with command:
Wipe data and reset password using the PSID with this command:
sedutil-cli --yesIreallywanttoERASEALLmydatausingthePSID <PSINODASHED> </dev/device>, where <PSINODASHED> is the PSID printed on the physical drive, entered without dashes (-).
TCG-E Instructions
Changing or Resetting the Password without Destroying Data
Run these commands for every LockingRange or band on the drive.
To determine the number of bands on a drive, use command sedutil-cli -v --listLockingRanges </dev/device>.
Increment the BandMaster number and rerun the command with --setPassword for every band that exists.
Use all of these commands to reset the password without losing data:
3.3 - Create Pool
Provides background considerations and a simple tutorial on creating storage pools in TrueNAS SCALE.
TrueNAS uses ZFS data storage pools to efficiently store and protect data.
What is a pool?
Storage pools attach drives organized into virtual devices called VDEVs.
Drives arranged inside VDEVs provide varying amounts of redundancy and performance.
ZFS and VDEVs combined create high-performance pools that maximize data lifetime.
ZFS and TrueNAS periodically review the pool and heal data when a bad block is discovered.
Reviewing Storage Needs
We strongly recommend that you review your available system resources and plan your storage use case before creating a storage pool. Consider the following:
Allocating more drives to a pool increases redundancy when storing critical information.
Maximizing total available storage at the expense of redundancy or performance entails allocating large-volume disks and configuring a pool for minimal redundancy.
Maximizing pool performance entails installing and allocating high-speed SSD drives to a pool.
Security requirements can mean the pool must be created with ZFS encryption.
RAIDZ pool layouts are well-suited for general use cases, especially smaller (<10 disk) data VDEVs or storage scenarios that involve storing multitudes of small data blocks.
dRAID pool layouts are useful in specific situations where large disk count (>100) arrays need improved resilver times due to increased disk failure rates and the array is intended to store large data blocks.
TrueNAS recommends defaulting to a RAIDZ layout generally and whenever a dRAID vdev would have fewer than 10 data storage devices.
Determining your specific storage requirements is a critical step before creating a pool.
The ZFS and dRAID primers provide a starting point to learn about the strengths and costs of different storage pool layouts.
You can also use the ZFS Capacity Calculator and ZFS Capacity Graph to compare configuration options.
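As a rough cross-check before opening the calculators, the simplified arithmetic behind RAIDZ capacity can be sketched as follows. This ignores padding, metadata, and slop space, so treat the result as an upper bound rather than what ZFS reports.

```python
def raidz_usable_tib(disks, disk_size_tib, parity):
    """Simplified RAIDZ usable capacity: (width - parity) * disk size.

    parity is 1, 2, or 3 for RAIDZ1/2/3; that many disks can fail
    without data loss. Ignores padding, metadata, and slop space.
    """
    if parity not in (1, 2, 3):
        raise ValueError("RAIDZ parity level must be 1, 2, or 3")
    if disks <= parity:
        raise ValueError("need more disks than parity devices")
    return (disks - parity) * disk_size_tib

# Six 4 TiB disks in RAIDZ2: two disks' worth of parity,
# roughly 16 TiB usable, and two disks can fail without data loss.
print(raidz_usable_tib(6, 4, 2))  # 16
```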
Creating a Pool
Go to the Storage Dashboard and click Create Pool to open the Pool Creation Wizard.
Pool Creation Wizard Fields (Click to expand)
This wizard screen lets you configure a VDEV using the Automated Disk Selection fields.
To individually find and select disks for a VDEV, click Manual Disk Selection in the Advanced Options area.
Choosing a dRAID VDEV layout removes the Manual Disk Selection button and adds different options to the Automated Disk Selection area.
It also removes the Spare VDEV section from the pool creation wizard and replaces it with the Distributed Hot Spares option in the Data VDEV section.
Stripe
Designates that each disk is used sequentially in the VDEV.
Requires at least one disk and has no redundancy.
A data VDEV with a stripe layout irretrievably loses all stored data if a single disk in the VDEV fails.
Not recommended for data VDEVs storing critical data.
Mirror
Denotes that each disk in the VDEV stores an exact data copy.
Requires at least 2 disks in the VDEV.
Storage capacity is the size of a single disk in the VDEV.
RAIDZ and dRAID
Each of these layouts has 1, 2, and 3 options.
The options indicate the number of disks reserved for data parity and the number of disks that can fail in the VDEV without data loss to the pool.
For example, a RAIDZ2 layout reserves two additional disks for parity, and two disks can fail without data loss.
Automated Disk Selection - Stripe, Mirror, and RAIDZ layouts
Disk Size
Select the disk size from the list that displays. The list shows disks by size in GiB and type (SSD or HDD).
Treat Disk Size as Minimum
Select to use disks of the size selected in Disk Size or larger. If not selected, only disks of the size selected in Disk Size are used.
Width
Select the number of disks from the options provided on the dropdown list.
Number of VDEVs
Select the number of VDEVs from the options provided on the dropdown list.
Automated Disk Selection - dRAID layouts
Similar to RAIDZ, dRAID layout numbers (1, 2, or 3) indicate the parity level and how many disks can fail without data loss to the pool.
TrueNAS defaults to a minimum of 10 disks in a dRAID VDEV, set in the Children field.
If creating a data VDEV with fewer than 10 disks, using a RAIDZ layout is strongly recommended for better performance and capacity optimization.
Disk Size
Select the disk size from the list that displays. The list shows disks by size in GiB and type (SSD or HDD).
Treat Disk Size as Minimum
Select to use disks of the size selected in Disk Size or larger. If not selected, only disks of the size selected in Disk Size are used.
Data Devices
Data stripe width for the VDEV. Select the number of disks from the options provided on the dropdown list. TrueNAS recommends that dRAID layouts have data devices allocated in multiples of 2.
Distributed Hot Spares
Number of disk areas to actively provide spare capacity to the entire VDEV. These areas are active within the pool and function in place of adding a Spare VDEV to the pool. It is recommended to set this to at least 1. The Distributed Hot Spares number cannot be modified after the pool is created.
Children
The total number of disks to allocate in the dRAID VDEV. The field selection and options update dynamically based on the chosen dRAID Layout, Disk Size, Data Devices, and Distributed Hot Spares. Increasing the number of Children in the dRAID VDEV can reduce the options for Number of VDEVs.
Number of VDEVs
Select the number of VDEVs from the options provided on the dropdown list. Options are populated dynamically depending on the selections made in all the other fields.
Enter a name of up to 50 lowercase alpha-numeric characters.
Use only the permitted special characters that conform to ZFS naming conventions.
The pool name contributes to the maximum character length for datasets, so it is limited to 50 characters.
You cannot change the pool name after creation.
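As an illustration of the naming rules above, a quick client-side check might look like this. The character set is an assumption based on common ZFS naming conventions (alphanumerics plus underscore, hyphen, colon, and period, starting with a letter) combined with this guide's lowercase and 50-character guidance; the web interface performs its own authoritative validation.

```python
import re

# Sketch of a pool-name check: up to 50 lowercase alphanumeric
# characters plus the assumed ZFS-permitted specials _ - : . and
# starting with a letter. Illustrative, not the authoritative
# ZFS/TrueNAS validator.
NAME_RE = re.compile(r"^[a-z][a-z0-9_\-:.]{0,49}$")

def valid_pool_name(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name))

print(valid_pool_name("tank"))     # True
print(valid_pool_name("2fast"))    # False: must start with a letter
print(valid_pool_name("my pool"))  # False: spaces not permitted
```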
Create the required data VDEV.
Select the layout from the Layout dropdown list, then either use the Automated Disk Selection fields to select and add the disks, or click Manual Disk Selection to add specific disks to the chosen Layout.
dRAID layouts do not show the Manual Disk Selection button but do show additional Automated Disk Selection fields.
When configuring a dRAID data VDEV, first choose a Disk Size then select a Data Devices number.
The remaining fields update based on the Data Devices and dRAID layout selections.
Click Save And Go To Review if you do not want to add other VDEV types to the pool, or click Next to move to the next wizard screens.
Add any other optional VDEVs as determined by your specific storage redundancy and performance requirements.
Click Create Pool on the Review wizard screen to add the pool.
3.4 - Fusion Pools
Provides information on setting up and using fusion pools.
Fusion Pools are also known as ZFS allocation classes, ZFS special vdevs, and metadata vdevs (the Metadata VDEV type on the Pool Manager screen).
What's a special VDEV?
A special VDEV can store metadata such as file locations and allocation tables.
The allocations in the special class are dedicated to specific block types.
By default, this includes all metadata, the indirect blocks of user data, and any deduplication tables.
The class can also be provisioned to accept small file blocks.
This is a great use case for high-performance but smaller-sized solid-state storage.
Using a special vdev drastically speeds up random I/O and cuts the average spinning-disk I/Os needed to find and access a file by up to half.
Creating a Fusion Pool
Go to the Storage Dashboard and click Create Pool.
A pool must always have one normal (non-dedup/special) VDEV before you assign other devices to the special class.
Enter a name for the pool using up to 50 lowercase alpha-numeric and permitted special characters that conform to ZFS naming conventions.
The pool name contributes to the maximum character length for datasets, so it is limited to 50 characters.
Click ADD VDEV and select Metadata to add the VDEV to the pool layout.
Add disks to the primary Data VDEVs, then to the Metadata VDEV.
Add SSDs to the new Metadata VDEV and select the same layout as the Data VDEVs.
Metadata VDEVs are critical for pool operation and data integrity. Protect them with redundancy measures such as mirroring, and optionally hot spare(s) for additional fault tolerance. It is suggested to use an equal or greater level of failure tolerance in each of your metadata VDEVs; for example, if your data VDEVs are configured as RAIDZ2, consider the use of 3-way mirrors for your metadata VDEVs.
UPS Recommendation
When using SSDs with an internal cache, add an uninterruptible power supply (UPS) to the system to help minimize the risk from power loss.
Using special VDEVs identical to the data VDEVs (so they can use the same hot spares) is recommended, but for performance reasons, you can make a different type of VDEV (like a mirror of SSDs).
In that case, you must provide hot spare(s) for that drive type as well. Otherwise, if the special VDEV fails and there is no redundancy, the pool becomes corrupted and prevents access to stored data.
While the metadata VDEV can be adjusted after its addition by attaching or detaching drives, the entire metadata VDEV itself can only be removed from the pool when the pool data VDEVs are mirrors. If the pool uses RAIDZ data VDEVs, a metadata VDEV is a permanent addition to the pool and cannot be removed.
When more than one metadata VDEV is created, then allocations are load-balanced between all these devices.
If the special class becomes full, then allocations spill back into the normal class.
Deduplication table data is placed first on a dedicated Dedup VDEV if one exists, then on a Metadata VDEV, and finally on the data VDEVs if neither exists.
After creating a fusion pool, the pool Status shows a Special section with the metadata SSDs.
3.5 - Managing Pools
Provides instructions on managing storage pools, VDEVs, and disks in TrueNAS SCALE.
The Storage Dashboard widgets provide access to pool management options to keep the pool and disks healthy, upgrade pools and VDEVs, open datasets, snapshots, data protection screens, and manage S.M.A.R.T. tests.
This article provides instructions on pool management functions available in the SCALE UI.
Select Storage on the main navigation panel to open the Storage Dashboard.
Locate the ZFS Health widget for the pool, then click Edit next to Auto TRIM. The Pool Options for poolname dialog opens.
With Auto TRIM selected and active, TrueNAS periodically checks the pool disks for storage blocks it can reclaim.
Auto TRIM can impact pool performance, so the default setting is disabled.
For more details about TRIM in ZFS, see the autotrim property description in zpool(8).
Exporting/Disconnecting or Deleting a Pool
Use the Export/Disconnect button to disconnect a pool and transfer drives to a new system where you can import the pool.
It also lets you completely delete the pool and any data stored on it.
Click on Export/Disconnect on the Storage Dashboard.
A dialog displays showing any system services affected by exporting the pool, and options based on services configured on the system.
To delete the pool and erase all the data on the pool, select Destroy data on this pool.
Enter the pool name in the field shown at the bottom of the window.
Do not select this option if only exporting the pool.
Select Delete saved configurations from TrueNAS? to delete shares and saved configurations on this pool.
Select Confirm, then click Export/Disconnect. A confirmation dialog displays when the export/disconnect completes.
Adding a VDEV to a Pool
ZFS supports adding VDEVs to an existing ZFS pool to increase the capacity or performance of the pool.
You cannot change the original encryption or data VDEV configuration.
To add a VDEV to a pool:
Click Manage Devices on the Topology widget to open the Devices screen.
Click Add VDEV on the Devices screen to open the Add Vdevs to Pool screen.
Adding a vdev to an existing pool follows the same process as documented in Create Pool.
Click on the type of vdev you want to add. For example, to add a spare, click Spare to show the spare vdev options.
To use the automated option, select the disk size from the Automated Disk Selection > Disk Size dropdown list, then select the number of vdevs to add from the Width dropdown.
To add the vdev manually, click Manual Disk Selection to open the Manual Selection screen.
Click Add to show the vdev options available for the vdev type.
The example image shows adding a stripe vdev for the spare.
Vdev options are limited by the number of available disks in your system and the configuration of any existing vdevs of that type in the pool.
Drag the disk icon to the stripe vdev, then click Save Selection.
You can accept the change, click Edit Manual Disk Selection to change the disk added to the stripe vdev for the spare, or click Reset Step to clear the stripe vdev from the spare completely.
Click either Next or a numbered item to add another type of vdev to this pool.
Repeat the same process above for each type of vdev to add.
Click Save and Go to Review to go to the Review screen when ready to save your changes.
To make changes, click either Back or the vdev option (Log, Cache, etc.) to return to the settings for that vdev.
To clear all changes, click Start Over.
Select Confirm then click Start Over to clear all changes.
To save changes click Update Pool.
Extending a Vdev
You cannot add more drives to an existing data VDEV, but you can stripe a new VDEV of the same type to increase the overall pool size.
To extend a pool, you must add a data VDEV of the same type as existing VDEVs.
For example, create another mirror, then stripe the new mirror VDEV to the existing mirror VDEV.
While on the Devices screen, click on the data vdev, then click Extend.
Extending VDEV Examples
To make a striped mirror, add the same number of drives to extend a ZFS mirror.
For example, you start with ten available drives. Begin by creating a mirror of two drives, and then extending the mirror by adding another mirror of two drives. Repeat this three more times until you add all ten drives.
To make a stripe of two RAIDZ1 VDEVs (similar to RAID 50 on a hardware controller), add another three drives to extend the three-drive RAIDZ1.
To make a stripe of RAIDZ2 VDEVs (similar to RAID 60 on a hardware controller), add another four drives to extend the four-drive RAIDZ2.
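The capacity arithmetic behind the striped-mirror example can be sketched as follows (simplified; ignores metadata and slop space):

```python
def striped_mirror_capacity_tib(mirror_vdevs, disks_per_mirror, disk_size_tib):
    """Raw and usable capacity of a pool of striped mirrors.

    Simplified arithmetic for the example above: each mirror vdev
    contributes only a single disk's worth of usable space, while
    raw capacity counts every disk. Ignores metadata overhead.
    """
    raw = mirror_vdevs * disks_per_mirror * disk_size_tib
    usable = mirror_vdevs * disk_size_tib
    return raw, usable

# Ten 4 TiB drives arranged as five 2-way mirrors, added one
# mirror at a time as described above: 20 TiB usable from 40 TiB raw.
print(striped_mirror_capacity_tib(5, 2, 4))  # (40, 20)
```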
Removing VDEVs
You can always remove the L2ARC (cache) and SLOG (log) VDEVs from an existing pool, regardless of topology or VDEV type.
Removing these devices does not impact data integrity, but can significantly impact performance for reads and writes.
In addition, you can remove a data VDEV from an existing pool under specific circumstances.
This process preserves data integrity but has multiple requirements:
The pool must be upgraded to a ZFS version that includes the device_removal feature flag.
The system shows the Upgrade button after upgrading SCALE when new ZFS feature flags are available.
All top-level VDEVs in the pool must be only mirrors or stripes.
Special VDEVs cannot be removed when RAIDZ data VDEVs are present.
All top-level VDEVs in the pool must use the same basic allocation unit size (ashift).
The remaining data VDEVs must contain sufficient free space to hold all of the data from the removed VDEV.
When a RAIDZ data VDEV is present, it is generally not possible to remove a device.
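The preconditions above can be summarized as a checklist. The sketch below is illustrative only; the parameter names are assumptions made for this example, and ZFS itself performs the real checks.

```python
def data_vdev_removable(topologies, ashifts, has_device_removal_flag,
                        free_bytes_remaining, vdev_used_bytes):
    """Sketch of the data-VDEV removal preconditions listed above.

    topologies: layout of each top-level data vdev in the pool.
    ashifts: allocation unit size (ashift) of each top-level vdev.
    Illustrative only -- not how ZFS evaluates removal internally.
    """
    if not has_device_removal_flag:
        return False  # pool needs the device_removal feature flag
    if any(t not in ("mirror", "stripe") for t in topologies):
        return False  # RAIDZ/dRAID top-level vdevs block removal
    if len(set(ashifts)) != 1:
        return False  # all top-level vdevs must share one ashift
    # remaining vdevs must have room for the removed vdev's data
    return free_bytes_remaining >= vdev_used_bytes

print(data_vdev_removable(["mirror", "mirror"], [12, 12], True, 10**12, 10**11))  # True
print(data_vdev_removable(["raidz2", "mirror"], [12, 12], True, 10**12, 10**11))  # False
```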
To remove a VDEV from a pool:
Click Manage Devices on the Topology widget to open the Devices screen.
Click the device or drive to remove, then click the Remove button in the ZFS Info pane.
If the Remove button is not visible, check that all conditions for VDEV removal listed above are correct.
Confirm the removal operation and click the Remove button.
The VDEV removal process status shows in the Task Manager (or alternatively with the zpool status command).
Avoid physically removing or attempting to wipe the disks until the removal operation completes.
Running a Pool Data Integrity Check (Scrub)
Use Scrub on the ZFS Health pool widget to start a pool data integrity check.
Click Scrub to open the Scrub Pool dialog.
Select Confirm, then click Start Scrub.
If TrueNAS detects problems during the scrub operation, it either corrects them or generates an alert in the web interface.
By default, TrueNAS automatically checks every pool on a recurring scrub schedule.
The ZFS Health widget displays the state of the last scrub and of the disks in the pool.
To view scheduled scrub tasks, click View all Scrub Tasks on the ZFS Health widget.
Managing Pool Disks
The Storage Dashboard screen Disks button and the Manage Disks button on the Disk Health widget both open the Disks screen.
Manage Devices on the Topology widget opens the Devices screen.
To manage disks in a pool, click on the VDEV to expand it and show the disks in that VDEV.
Click on a disk to see the devices widgets for that disk.
Use the options on the disk widgets to take a disk offline, detach it, replace it, manage the SED encryption password, or perform other disk management tasks.
See Replacing Disks for more information on the Offline, Replace, and Online options.
Expanding a Pool
Click Expand on the Storage Dashboard to increase the pool size to match all available disk space.
One example is expanding a pool after resizing virtual disks outside of TrueNAS.
Upgrading a Pool
Storage pool upgrades are typically not required unless the new OpenZFS feature flags are deemed necessary for required or improved system operation.
Do not do a pool-wide ZFS upgrade until you are ready to commit to this SCALE major version and lose the ability to roll back to an earlier major version!
The Upgrade button displays on the Storage Dashboard for existing pools after an upgrade to a new TrueNAS major version that includes new OpenZFS feature flags.
Newly created pools are always up to date with the OpenZFS feature flags available in the installed TrueNAS version.
The upgrade itself only takes a few seconds and is non-disruptive.
It is not necessary to stop any sharing services to upgrade the pool.
However, the best practice is to upgrade when the pool is not in heavy use.
The upgrade process suspends I/O for a short period but is nearly instantaneous on a quiet pool.
4 - Datasets
Tutorials for creating and managing datasets in TrueNAS SCALE.
This section has several tutorials about dataset configuration and management.
Creating Snapshots: Provides instructions on creating ZFS snapshots in TrueNAS SCALE.
Managing Snapshots: Provides instructions on managing ZFS snapshots in TrueNAS SCALE.
Storage Encryption: Provides information on SCALE storage encryption for pools, root datasets, datasets, and zvols.
Setting Up Permissions: Provides instructions on editing and viewing ACL permissions, using the ACL editor screens, and general information on ACLs.
4.1 - Adding and Managing Datasets
Provides instructions on creating and managing datasets.
A TrueNAS dataset is a file system within a data storage pool.
Datasets can contain files, directories, and child datasets, and have individual permissions or flags.
Datasets can also be encrypted.
TrueNAS automatically encrypts datasets created in encrypted pools, but you can change the encryption type from key to passphrase.
You can create an encrypted dataset if the pool is not encrypted and set the type as either key or passphrase.
We recommend organizing your pool with datasets before configuring data sharing, as this allows for more fine-tuning of access permissions and using different sharing protocols.
Creating a Dataset
To create a basic dataset, go to Datasets.
Default settings include those inherited from the parent dataset.
Select a dataset (root, parent, or child), then click Add Dataset.
Select the Dataset Preset option you want to use. Options are:
Generic for non-SMB share datasets such as iSCSI and NFS share datasets or datasets not associated with application storage.
Multiprotocol for datasets optimized for SMB and NFS multi-mode shares or to create a dataset for NFS shares.
SMB for datasets optimized for SMB shares.
Apps for datasets optimized for application storage.
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset.
If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators.
Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
ACL Settings for Dataset Presets
Preset          ACL Type   ACL Mode      Case Sensitivity   Enable atime
Generic         POSIX      n/a           Sensitive          Inherit
SMB             NFSv4      Restricted    Insensitive        On
Apps            NFSv4      Passthrough   Sensitive          Off
Multiprotocol   NFSv4      Passthrough   Sensitive          Off
If creating an SMB or multi-protocol (SMB and NFS) share, the dataset Name value auto-populates the share Name field.
If you plan to deploy container applications, the system automatically creates the ix-applications dataset, but this dataset is not used for application data storage.
If you want to store data by application, create the dataset(s) first, then deploy your application.
When creating a dataset for an application, select Apps as the Dataset Preset. This optimizes the dataset for use by an application.
If you want to configure advanced setting options, click Advanced Options.
For the Sync option, we recommend production systems with critical data use the default Standard choice or increase to Always.
Choosing Disabled is only suitable in situations where data loss from system crashes or power loss is acceptable.
Select either Sensitive or Insensitive from the Case Sensitivity dropdown.
The Case Sensitivity setting is found under Advanced Options and is not editable after saving the dataset.
Click Save.
Review the Dataset Preset and Case Sensitivity under Advanced Options on the Add Dataset screen before clicking Save.
You cannot change these or the Name setting after clicking Save.
Setting Dataset Compression Levels
Compression encodes information in less space than the original data occupies.
We recommend choosing a compression algorithm that balances disk performance with the amount of saved space.
Select the compression algorithm that best suits your needs from the Compression dropdown list of options.
LZ4 maximizes performance and dynamically identifies the best files to compress. LZ4 provides lightning-fast compression/decompression speeds and comes coupled with a high-speed decoder. This makes it one of the best Linux compression tools for enterprise customers.
ZSTD offers highly configurable compression speeds, with a very fast decoder.
Gzip is a standard UNIX compression tool widely used on Linux. It is compatible with all GNU software, which makes it a good tool for remote engineers and seasoned Linux users. It offers the maximum compression with the greatest performance impact. The higher the compression level, the greater the impact on CPU usage. Use with caution, especially at higher levels.
ZLE, or Zero Length Encoding, leaves normal data alone and only compresses continuous runs of zeros.
LZJB compresses crash dumps and data in ZFS. LZJB is optimized for performance while providing decent compression. LZ4 compresses roughly 50% faster than LZJB when operating on compressible data, and is more than three times faster for incompressible data. LZJB was the original algorithm used by ZFS, but it is now deprecated.
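To illustrate two of the points above, the following uses Python's zlib module, which implements the same DEFLATE algorithm as gzip. This demonstrates the general concepts only, not ZFS's in-kernel compression: higher levels cost more CPU for modest gains, and runs of zeros compress almost for free, which is the behavior ZLE exploits.

```python
import os
import zlib

# 1 MiB of zeros (highly compressible) vs 1 MiB of random bytes
# (effectively incompressible).
zeros = bytes(1024 * 1024)
random_data = os.urandom(1024 * 1024)

for name, data in (("zeros", zeros), ("random", random_data)):
    fast = len(zlib.compress(data, 1))  # low level: fast, larger output
    best = len(zlib.compress(data, 9))  # high level: slow, smaller output
    print(f"{name}: level 1 -> {fast} bytes, level 9 -> {best} bytes")
```

Running this shows the zero run shrinking to a few kilobytes at any level, while the random data does not shrink at all; only the compressible input benefits from a higher level.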
Setting Dataset Quotas
You can set dataset quotas while adding datasets using the quota management options in the Add Dataset screen under Advanced Options.
You can also add or edit quotas for an existing dataset, by clicking Edit on the Dataset Space Management widget to open the Capacity Settings screen.
Setting a quota defines the maximum allowed space for the dataset.
You can also reserve a defined amount of pool space to prevent automatically generated data like system logs from consuming all of the dataset space.
You can configure quotas for only the new dataset or both the new dataset and any child datasets of the new dataset.
Define the maximum allowed space for the dataset in either the Quota for this dataset or Quota for this dataset and all children field.
Enter 0 to disable quotas.
Dataset quota alerts are based on the percentage of storage used.
To set up a quota warning alert, enter a percentage value in Quota warning alert at, %.
When consumed space reaches the defined percentage, TrueNAS sends the alert.
To set up the quota critical level alerts, enter the percentage value in Quota critical alert at, %.
When setting quotas or changing the alert percentages for both the parent dataset and all child datasets, use the fields under This Dataset and Child Datasets.
Enter a value in Reserved space for this dataset to set aside additional space for datasets that contain logs, which could eventually take all available free space.
Enter 0 for unlimited.
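The alert behavior described above reduces to a percentage comparison. The sketch below is illustrative; the function name and the default thresholds are assumptions for this example, not TrueNAS internals.

```python
def quota_alert_level(used_bytes, quota_bytes, warn_pct=80, crit_pct=95):
    """Return the dataset quota alert level based on percent used.

    Mirrors the behavior described above: a quota of 0 disables
    quota alerts entirely. warn_pct and crit_pct stand in for the
    'Quota warning alert at, %' and 'Quota critical alert at, %'
    fields; the defaults here are illustrative.
    """
    if quota_bytes == 0:
        return "none"  # quotas disabled
    pct_used = used_bytes / quota_bytes * 100
    if pct_used >= crit_pct:
        return "critical"
    if pct_used >= warn_pct:
        return "warning"
    return "none"

GIB = 1024 ** 3
print(quota_alert_level(85 * GIB, 100 * GIB))  # warning
print(quota_alert_level(96 * GIB, 100 * GIB))  # critical
```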
By default, many dataset options inherit their values from the parent dataset.
When settings on the Advanced Options screen are set to Inherit, the dataset uses the setting from the parent dataset.
For example, the Encryption or ACL Type settings.
To change any setting that datasets inherit from the parent, select an available option other than Inherit.
Select the root dataset of the pool (with the metadata VDEV), then click Add Dataset to add the dataset.
Click Advanced Options. Enter the dataset name, select the Dataset Preset, then scroll down to Metadata (Special) Small Block Size setting to set a threshold block size for including small file blocks into the special allocation class (fusion pools).
Blocks smaller than or equal to this value are assigned to the special allocation class while greater blocks are assigned to the regular class.
Valid values are zero or a power of two from 512B up to 1M.
The default size 0 means no small file blocks are allocated in the special class.
Enter a threshold block size for including small file blocks into the special allocation class (fusion pools).
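The valid-values rule above (zero, or a power of two from 512 bytes to 1 MiB) can be checked with a few lines:

```python
def valid_small_block_size(value: int) -> bool:
    """Check a Metadata (Special) Small Block Size value.

    Valid values, per the rule above: 0 (no small-file blocks are
    allocated in the special class) or any power of two from
    512 bytes to 1 MiB.
    """
    if value == 0:
        return True
    is_power_of_two = value > 0 and (value & (value - 1)) == 0
    return is_power_of_two and 512 <= value <= 1024 * 1024

print(valid_small_block_size(0))        # True: default, disabled
print(valid_small_block_size(32768))    # True: 32 KiB
print(valid_small_block_size(1000))     # False: not a power of two
print(valid_small_block_size(2097152))  # False: above 1 MiB
```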
Managing Datasets
After creating a dataset, users can manage additional options from the Datasets screen.
Select the dataset, then click Edit on the dataset widget for the function you want to manage.
The Datasets Screen article describes each option in detail.
Editing a Dataset
Select the dataset on the tree table, then click Edit on the Dataset Details widget to open the Edit Dataset screen and change the dataset configuration settings. You can change all settings except Name, Case Sensitivity, and Dataset Preset.
Editing Dataset Permissions
To edit the dataset ACL permissions, click Edit on the Permissions widget.
If the ACL type is NFSv4, the Permissions widget shows ACE entries for the dataset.
Each entry opens a checklist of flag options you can select/deselect without opening the Edit ACL screen.
To modify ownership, configure new or change existing ACL entries, click Edit to open the ACL Editor screen.
To edit a POSIX ACL type, click Edit on the Permissions widget to open the Unix Permissions Editor screen.
To access the Edit ACL screen for POSIX ACLs, select Create a custom ACL on the Select a preset ACL window.
Deleting a Dataset
Select the dataset on the tree table, then click Delete on the Dataset Details widget.
This opens a delete window where you enter the dataset path (root/parent/child) and select Confirm to delete the dataset, all stored data, and any snapshots from TrueNAS.
To delete a root dataset, use the Export/Disconnect option on the Storage Dashboard screen to delete the pool.
Deleting datasets can result in unrecoverable data loss!
Move any critical data stored on the dataset to a backup, or verify the data is obsolete, before performing the delete operation.
4.2 - Adding and Managing Zvols
Provides instructions on creating, editing, and managing zvols.
A ZFS Volume (zvol) is a dataset that represents a block device or virtual disk drive.
TrueNAS requires a zvol when configuring iSCSI Shares.
Adding a virtual machine also creates a zvol to use for storage.
Storage space you allocate to a zvol is only used by that volume; it is not reallocated back to the total storage capacity of the pool or dataset where you create the zvol if it goes unused.
Plan your anticipated storage needs before you create the zvol to avoid allocating more space than the volume requires.
Do not assign capacity that exceeds what is required for SCALE to operate properly. See the SCALE Hardware Guide for CPU, memory, and storage capacity information.
Adding a Zvol
To create a zvol, go to Datasets.
Select the root or non-root parent dataset where you want to add the zvol, and then click Add Zvol.
To create a basic zvol with default options, enter a name and a value in Size for the zvol, then click Save.
Managing Zvols
Options to manage a zvol are on the zvol widgets shown on the Dataset screen when you select the zvol on the dataset tree table.
Delete Zvol removes the zvol from TrueNAS.
Deleting a zvol also deletes all snapshots of that zvol. Click Delete on the Zvol Details widget.
Deleting zvols can result in unrecoverable data loss!
Remove critical data from the zvol or verify it is obsolete before deleting a zvol.
Edit on the Zvol Details widget opens the Edit Zvol screen where you can change settings. Name is read-only and you cannot change it.
To create a snapshot, click Create Snapshot on the Data Protection widget.
Cloning a Zvol from a Snapshot
To clone a zvol from an existing snapshot, select the zvol on the Datasets tree table, then click Manage Snapshots on the Data Protection widget to open the Snapshots screen.
You can also access the Snapshots screen from the Periodic Snapshot Tasks widget on the Data Protection screen.
Click Snapshots to open the Snapshots screen.
Click on the snapshot you want to clone and click Clone to New Dataset.
Enter a name for the new dataset or accept the one provided, then click Clone.
Managing User or Group Quotas
Provides information on managing user and group quotas.
TrueNAS allows setting data or object quotas for user accounts and groups cached on or connected to the system.
You can use the quota settings in the Advanced Options on the Add Dataset or Edit Dataset configuration screens to set up alarms and reserve additional space in a dataset.
See Adding and Managing Datasets for more information.
To manage the dataset overall capacity, use Edit on the Dataset Space Management widget to open the Capacity Settings screen.
Configuring User Quotas
To view and edit user quotas, go to Datasets and click Manage User Quotas on the Dataset Space Management widget to open the User Quotas screen.
Click Add to open the Add User Quota screen.
Click in the field to view a list of system users including any users from a directory server that is properly connected to TrueNAS.
Begin typing a user name to filter all users on the system to find the desired user, then click on the user to add the name.
Add additional users by repeating the same process. A warning dialog displays if there are no matches found.
To edit individual user quotas, click anywhere on a user row to open the Edit User Quota screen where you can edit the User Data Quota and User Object Quota values.
User Data Quota is the amount of disk space that selected users can use. User Object Quota is the number of objects selected users can own.
Configuring Group Quotas
To view and edit group quotas, go to Datasets and click Manage Group Quotas on the Dataset Space Management widget to open the Group Quotas screen.
Click Add to open the Add Group Quota screen.
Click in the Group field to view a list of groups on the system.
Begin typing a name to filter the groups and find the desired group, then click on the group to add the name.
Add additional groups by repeating the same process. A warning dialog displays if no matches are found.
To edit individual group quotas, click anywhere on a group name to open the Edit Group Quota screen where you can edit the Group Data Quota and Group Object Quota values.
Group Data Quota is the amount of disk space that the selected group can use. Group Object Quota is the number of objects the selected group can own.
4.4 - Creating Snapshots
Provides instructions on creating ZFS snapshots in TrueNAS SCALE.
Snapshots are one of the most powerful features of ZFS.
A snapshot provides a read-only, point-in-time copy of a file system or volume.
The copy initially consumes no extra space in the ZFS pool.
As data is modified, the snapshot records only the differences between the old and new storage block references.
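For intuition about why a snapshot starts out "free", here is a toy copy-on-write model (a deliberate simplification for illustration, not ZFS internals): taking a snapshot copies only the table of block references, and extra space is consumed only when live data diverges from what the snapshot pins.

```python
# Toy copy-on-write model illustrating why snapshots initially consume no space.
# A simplification for intuition only -- not how ZFS is actually implemented.

class ToyDataset:
    def __init__(self):
        self.blocks = {}      # block id -> data (simulated pool storage)
        self.live = {}        # file name -> block id (live file system)
        self.snapshots = {}   # snapshot name -> {file name: block id}
        self._next = 0

    def write(self, name, data):
        self.blocks[self._next] = data   # allocate a fresh block (copy-on-write)
        self.live[name] = self._next     # point the live file at the new block
        self._next += 1
        self._free_unreferenced()

    def snapshot(self, snap):
        # A snapshot copies only the reference table -- no data blocks.
        self.snapshots[snap] = dict(self.live)

    def _free_unreferenced(self):
        refs = set(self.live.values())
        for table in self.snapshots.values():
            refs.update(table.values())
        self.blocks = {b: d for b, d in self.blocks.items() if b in refs}

ds = ToyDataset()
ds.write("report.txt", "v1")
ds.snapshot("before-edit")
used_at_snapshot = len(ds.blocks)   # snapshot itself allocated nothing
ds.write("report.txt", "v2")        # old block stays alive, pinned by the snapshot
print(used_at_snapshot, len(ds.blocks))                          # 1 2
print(ds.blocks[ds.snapshots["before-edit"]["report.txt"]])      # v1
```

Deleting the snapshot would drop the last reference to the "v1" block, letting the model free it, which mirrors why snapshot deletion requires checking which blocks are still referenced.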
Why do I want to keep snapshots?
Snapshots keep a history of files and provide a way to recover older or even deleted files.
For this reason, many administrators take regular snapshots, store them for some time, and copy them to a different system.
This strategy allows an administrator to roll the system data back to a specific point in time.
In the event of catastrophic system or disk failure, off-site snapshots can restore data up to the most recent snapshot.
Taking snapshots requires the system have all pools, datasets, and zvols already configured.
Creating a Snapshot
Consider making a Periodic Snapshot Task to save time and create regular, fresh snapshots.
There are two ways to access snapshot creation:
From the Data Protection Screen
To access the Snapshots screen, go to Data Protection > Periodic Snapshot Tasks and click the Snapshots button in the lower right corner of the widget.
If you click Create Snapshot, the Snapshots screen opens filtered for the selected dataset.
Clear the dataset from the search field to see all snapshots.
You can also click the Manage Snapshots link on the Data Protection widget to open the Snapshots screen.
Click Add at the top right of the screen to open the Add Snapshot screen.
Select a dataset or zvol from the Dataset dropdown list.
Accept the name suggested by the TrueNAS software in the Name field or enter any custom string to override the suggested name.
(Optional) Select an option from the Naming Schema dropdown list that the TrueNAS software populated with existing periodic snapshot task schemas.
If you select an option, TrueNAS generates the snapshot name using the naming schema from the selected periodic snapshot task, which allows replication tasks to recognize and replicate the snapshot.
You cannot enter values in both Name and Naming Schema; selecting a Naming Schema populates the Name field.
(Optional) Select Recursive to include child datasets with the snapshot.
Click Save to create the snapshot.
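Naming schemas are strftime-style patterns, so you can predict the name a schema generates for a given snapshot time. A minimal sketch (the schema string below is an illustrative example, not a TrueNAS-mandated default):

```python
from datetime import datetime

# Naming schemas use strftime-style placeholders. The schema below is an
# illustrative example, not necessarily the exact default TrueNAS suggests.
schema = "manual-%Y-%m-%d_%H-%M"
taken_at = datetime(2024, 5, 17, 14, 30)

print(taken_at.strftime(schema))   # manual-2024-05-17_14-30
```

Because the timestamp is embedded in the name, replication tasks that match on a schema can order and pair snapshots between systems.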
4.5 - Managing Snapshots
Provides instructions on managing ZFS snapshots in TrueNAS SCALE.
Viewing the List of Snapshots
File Explorer limits the number of snapshots Windows presents to users. If TrueNAS responds with more than the File Explorer limit, File Explorer shows no available snapshots.
TrueNAS displays a dialog stating the dataset snapshot count has more snapshots than recommended and states performance or functionality might degrade.
There are two ways to view the list of snapshots:
Go to Datasets, select a dataset, then click the Manage Snapshots link on the Data Protection widget to open the Snapshots screen.
Go to Data Protection > Periodic Snapshot Tasks and click the Snapshots button on the widget.
The Snapshots screen displays a list of snapshots on the system. Use the search bar at the top to narrow the selection. Clear the search bar to list all snapshots.
Use the Clone to New Dataset button to create a clone of the snapshot.
The clone appears directly beneath the parent dataset in the dataset tree table on the Datasets screen.
Click Clone to New Dataset to open a clone confirmation dialog.
The Delete option destroys the snapshot.
You must delete child clones before you can delete their parent snapshot.
While creating a snapshot is instantaneous, deleting one is I/O intensive and can take a long time, especially when deduplication is enabled.
Why?
ZFS has to review all allocated blocks before deletion to see if another process is using each block; if a block is not used elsewhere, ZFS can free it.
Click the Delete button. A confirmation dialog displays. Select Confirm to activate the Delete button.
Deleting with Batch Operations
To delete multiple snapshots, select the left column checkbox for each snapshot to include, then click the Delete button that displays.
To search through the snapshots list by name, type matching criteria into the Filter Snapshots text field.
The list now displays only the snapshot names that match the filter text.
Selecting Confirm activates the Delete button. If the snapshot has the Hold option selected, an error displays to prevent you from deleting that snapshot.
Using Rollback to Revert a Dataset
The Rollback option reverts the dataset to the point in time saved by the snapshot.
Rollback is a dangerous operation that causes any configured replication tasks to fail.
Replications use the existing snapshot when doing an incremental backup, and rolling back can put the snapshots out of order.
A less disruptive method to restore data from a point in time is to clone a specific snapshot as a new dataset:
Clone the desired snapshot.
Share the clone with the share type or service running on the TrueNAS system.
Allow users to recover their needed data.
Delete the clone from Datasets.
This approach does not destroy any on-disk data or disrupt automated replication tasks.
TrueNAS asks for confirmation before rolling back to the chosen snapshot state.
Select the radio button for how you want the rollback to operate.
All dataset snapshots are accessible as an ordinary hierarchical file system through a hidden .zfs directory located at the root of every dataset.
A snapshot and any files it contains are not accessible or searchable if the snapshot mount path is longer than 88 characters.
The data within the snapshot is safe, but to make the snapshot accessible again, shorten the mount path.
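Snapshot files appear under the dataset mountpoint at .zfs/snapshot/<snapshot name>, so you can check a snapshot's mount path against the 88-character limit before relying on browsing it. A small sketch with hypothetical example paths:

```python
import os.path

# Snapshots whose mount path exceeds this length are not browsable.
ZFS_SNAPDIR_LIMIT = 88

def snapshot_mount_path(dataset_mountpoint: str, snapshot_name: str) -> str:
    """Build the hidden path where a ZFS snapshot's files appear."""
    return os.path.join(dataset_mountpoint, ".zfs", "snapshot", snapshot_name)

def is_browsable(dataset_mountpoint: str, snapshot_name: str) -> bool:
    return len(snapshot_mount_path(dataset_mountpoint, snapshot_name)) <= ZFS_SNAPDIR_LIMIT

# Hypothetical example paths, not paths from a real system:
print(snapshot_mount_path("/mnt/tank/docs", "manual-2024-05-17"))
print(is_browsable("/mnt/tank/docs", "manual-2024-05-17"))        # True
print(is_browsable("/mnt/tank/" + "deeply-nested/" * 8, "auto"))  # False
```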
A user with permission to access the dataset contents can view the list of snapshots by going to the dataset .zfs directory from a share, like SMB, NFS, and iSCSI, or in the TrueNAS SCALE CLI.
Users can browse and search any files they have permission to access throughout the entire dataset snapshot collection.
When creating a snapshot, permissions or ACLs set on files within that snapshot might limit access to the files.
Snapshots are read-only, so users cannot modify a snapshot or its files, even if they had write permissions when the snapshot was created.
From the Datasets screen, select the dataset and click Edit on the Dataset Details widget.
Click Advanced Options and set Snapshot Directory to Visible.
To access snapshots:
Using a share, configure the client system to view hidden files.
For example, in a Windows SMB share, enable Show hidden files, folders, and drives in Folder Options.
From the dataset root folder, open the .zfs directory and navigate to the snapshot.
Using the TrueNAS SCALE CLI, enter storage filesystem listdir path="/PATH/TO/DATASET/.zfs/PATH/TO/SNAPSHOT" to view snapshot contents.
4.6 - Storage Encryption
Provides information on SCALE storage encryption for pools, root datasets, datasets, and zvols.
TrueNAS SCALE offers ZFS encryption for your sensitive data in pools, datasets, and zvols.
Users are responsible for backing up and securing encryption keys and passphrases!
Losing the ability to decrypt data is similar to a catastrophic data loss.
The local TrueNAS system manages keys for data-at-rest.
Users are responsible for storing and securing their keys.
TrueNAS SCALE includes support for the Key Management Interoperability Protocol (KMIP).
Pool and Dataset Encryption
Encryption is for users storing sensitive data.
Pool-level encryption does not apply to the storage pool or the disks in the pool.
It only applies to the root dataset that shares the same name as the pool.
Child datasets or zvols inherit encryption from the parent dataset.
TrueNAS automatically generates a root dataset when you create a pool.
This root dataset inherits the encryption state of the pool through the Encryption option on the Pool Creation Wizard screen when you create the pool.
Because encryption is inherited from the parent, all data within that pool is encrypted.
Selecting the Encryption option for the pool (root dataset) forces encryption for all datasets and zvols created within the root dataset.
You cannot create an unencrypted dataset within an encrypted pool or dataset.
This change does not affect existing datasets created in earlier releases of SCALE but does affect new datasets created in 22.12.3 and later releases.
Leave the Encryption option on the Pool Creation Wizard screen cleared to create an unencrypted pool.
You can create both unencrypted and encrypted datasets within an unencrypted pool (root dataset).
If you create an encrypted dataset within an unencrypted dataset, all datasets or zvols created within that encrypted dataset are automatically encrypted.
If you have only one pool on your system, do not select the Encryption option for this pool.
Can I change dataset encryption?
Before you save a new dataset, you can change its encryption from key type to passphrase type.
After you save a dataset with encryption applied, you cannot change the dataset to unencrypted.
After saving a dataset with encryption, if the encryption type is set to passphrase you can change it to key type, but you cannot change from key type to passphrase.
Can I unencrypt my data?
Yes, you can move encrypted data to an unencrypted pool or dataset using either rsync or replication.
You can also move data from an unencrypted pool or dataset to an encrypted dataset using rsync or replication.
If your system loses power or you reboot the system, the datasets, zvols, and all data in an encrypted pool automatically lock to protect the data in that encrypted pool.
Encryption Visual Cues
SCALE uses lock icons to indicate the encryption state of a root, parent, or child dataset in the tree table on the Datasets screen.
Each icon shows a text label with the state of the dataset when you hover the mouse over the icon.
The Datasets tree table includes lock icons and descriptions that indicate the encryption state of datasets.
State                | Description
Locked               | Displays for locked encrypted root, non-root parent, and child datasets.
Unlocked             | Displays for unlocked encrypted root, non-root parent, and child datasets.
Locked by ancestor   | Displays for locked datasets that inherit encryption properties from the parent.
Unlocked by ancestor | Displays for unlocked datasets that inherit encryption properties from the parent.
A dataset that inherits encryption shows the mouse hover-over label Locked by ancestor or Unlocked by ancestor.
Select an encrypted dataset to see the ZFS Encryption widget on the Datasets screen.
The dataset encryption state is unlocked until you lock it using the Lock button on the ZFS Encryption widget.
After locking the dataset, the icon on the tree table changes to locked, and the Unlock button appears on the ZFS Encryption widget.
Implementing Encryption
Before creating a pool with encryption decide if you want to encrypt all datasets, zvols, and data stored on the pool.
You cannot change a pool from encrypted to non-encrypted. You can only change the dataset encryption type (key or passphrase) for the encrypted pool.
If your system does not have enough disks to allow you to create a second storage pool, we recommend that you not use encryption at the pool level. Instead, apply encryption at the dataset level to non-root parent or child datasets.
You can mix encrypted and unencrypted datasets on an unencrypted pool.
All pool-level encryption is key-based encryption. When prompted, download the encryption key and keep it stored in a safe place where you can back up the file.
You cannot use passphrase encryption at the pool level.
Adding Encryption to a New Pool
Go to Storage and click Create Pool on the Storage Dashboard screen.
You can also click Add to Pool on the Unassigned Disks widget and select Add to New to open the Pool Creation Wizard.
Enter a name for the pool, select Encryption next to Name, then select the layout for the data VDEV and add the disks.
A warning dialog displays after selecting Encryption.
Read the warning, select Confirm, and then click I UNDERSTAND.
A second dialog opens; click Download Encryption Key to download the pool encryption key.
Click Done to close the window.
Move the encryption key to a safe location where you can back up the file.
Add any other VDEVs you want to include in the pool, then click Save to create the pool with encryption.
Adding Encryption to a New Dataset
To add an encrypted dataset, go to Datasets.
Select the dataset on the tree table where you want to add a new dataset.
The default dataset selected when you open the Datasets screen is the root dataset of the first pool on the tree table list.
If you have more than one pool and want to create a dataset in a pool other than the default, select the root dataset for that pool or any dataset under the root where you want to add the new dataset.
Click Add Dataset to open the Add Dataset screen, then click Advanced Options.
Enter a value in Name.
Select the Dataset Preset option you want to use. Options are:
Generic for non-SMB share datasets such as iSCSI and NFS share datasets or datasets not associated with application storage.
Multiprotocol for datasets optimized for SMB and NFS multi-mode shares or to create a dataset for NFS shares.
SMB for datasets optimized for SMB shares.
Apps for datasets optimized for application storage.
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset.
If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators.
Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
ACL Settings for Dataset Presets
Dataset Preset | ACL Type | ACL Mode    | Case Sensitivity | Enable atime
Generic        | POSIX    | n/a         | Sensitive        | Inherit
SMB            | NFSv4    | Restricted  | Insensitive      | On
Apps           | NFSv4    | Passthrough | Sensitive        | Off
Multiprotocol  | NFSv4    | Passthrough | Sensitive        | Off
To add encryption to a dataset, scroll down to Encryption Options and clear the Inherit checkbox.
If the parent dataset is unencrypted and you want to encrypt the dataset, clearing the checkmark shows the Encryption option.
If the parent dataset is encrypted and you want to change the type, clearing the checkmark shows the other encryption options.
To keep the encryption settings from the parent dataset, leave Inherit selected.
Decide if you want to use the default key type encryption and if you want to let the system generate the encryption key.
To use key encryption and your own key, clear the Generate key checkbox to display the Key field. Enter your key in this field.
You can select the encryption algorithm to use from the Encryption Standard dropdown list of options or use the recommended default.
Leave the default selection if you do not have a particular encryption standard you want to use. What are these options?
TrueNAS supports AES Galois Counter Mode (GCM) and Counter with CBC-MAC (CCM) algorithms for encryption.
These algorithms provide authenticated encryption with block ciphers.
The passphrase must be between eight and 512 characters.
Keep encryption keys and/or passphrases safeguarded in a secure and protected place.
Losing encryption keys or passphrases can result in permanent data loss!
Changing Dataset (or Zvol) Encryption
You cannot add encryption to an existing dataset.
You can change the encryption type for an already encrypted dataset using the Edit option on the ZFS Encryption widget for the dataset.
Save any change to the encryption key or passphrase, update your saved passcodes and keys file, and then back up that file.
To change the encryption type, go to Datasets:
Select the encrypted dataset on the tree table, then click Edit on the ZFS Encryption widget.
The Edit Encryption Options dialog for the selected dataset displays.
You must unlock a locked encrypted dataset before you can make changes.
If the dataset inherits encryption settings from a parent dataset, clear the Inherit encryption properties from parent checkbox to display the encryption setting options.
If the encryption type is set to passphrase, you can change the passphrase, or change Encryption Type to key.
You cannot change a dataset created with a key as the encryption type to passphrase.
For key-type encryption, leave Generate Key selected to have the system create a new key, or clear it to display the Key field and enter your own key.
Use a complex passphrase that is not easy to guess, and store it in a secure location that you back up regularly.
Leave the other settings at default, then click Confirm to activate Save.
Click Save to close the window and update the ZFS Encryption widget to reflect the changes made.
Locking and Unlocking Datasets
You can only lock and unlock an encrypted dataset if it is secured with a passphrase instead of a key file.
Before locking a dataset, verify that it is not currently in use.
Locking a Dataset
Select the encrypted dataset on the tree table, then click Lock on the ZFS Encryption widget to open the Lock Dataset dialog with the dataset full path name.
Use the Force unmount option only if you are certain no one is currently accessing the dataset.
Force unmount disconnects anyone using the dataset (for example, someone accessing a share) so you can lock it.
Click Confirm to activate Lock, then click Lock.
You cannot use locked datasets.
Unlocking a Dataset
To unlock a dataset, go to Datasets then select the locked dataset on the tree table.
Click Unlock on the ZFS Encryption widget to open the Unlock Dataset screen.
Enter the key if the dataset is key-encrypted, or enter the passphrase in Dataset Passphrase, then click Save.
Select Unlock Child Encrypted Roots to unlock all locked child datasets if they use the same passphrase.
If the dataset mount path exists but is not empty, the unlock operation fails.
Select Force to allow the system to rename the existing directory and files where the dataset should mount, which prevents the mount operation from failing.
A confirmation dialog displays.
Click CONTINUE to confirm you want to unlock the datasets. Click CLOSE to exit and keep the datasets locked.
A second confirmation dialog opens confirming the datasets unlocked.
Click CLOSE.
TrueNAS displays the dataset with the unlocked icon.
Encrypting a Zvol
Encryption is for securing sensitive data.
You can only encrypt a zvol if you create it within an encrypted dataset.
Users are responsible for backing up and securing encryption keys and passphrases!
Losing the ability to decrypt data is similar to a catastrophic data loss.
Zvols inherit encryption settings from the parent dataset.
To encrypt a Zvol, select a dataset configured with encryption and then create a new Zvol.
If you do not see the ZFS Encryption widget, you created the Zvol from an unencrypted dataset. Delete the Zvol and start over.
The Zvol is encrypted with settings inherited from the parent dataset.
To change inherited encryption properties from passphrase to key, or enter a new key or passphrase, select the zvol, then click Edit on the ZFS Encryption widget.
If Encryption Type is set to Key, type an encryption key into the Key field or select Generate Key.
If using Passphrase, enter a passphrase of eight to 512 characters. Use a passphrase complex enough that it cannot be easily guessed.
After making any changes, select Confirm, and then click Save.
Save any change to the encryption key or passphrase, update your saved passcodes and keys file, and back up the file.
Managing Encryption Credentials
There are two ways to manage the encryption credentials, with a key file or passphrase.
Creating a new encrypted pool automatically generates a new key file and prompts users to download it.
Always back up the key file to a safe and secure location.
To manually back up a root dataset key file, click Export Key on the ZFS Encryption widget.
A passphrase is a user-defined string of eight to 512 characters that is required to decrypt the dataset.
The pbkdf2iters value is the number of password-based key derivation function 2 (PBKDF2) iterations used to reduce vulnerability to brute-force attacks. Enter a number greater than 100000.
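To see what the iteration count buys you, here is a demonstration using Python's hashlib PBKDF2 implementation (illustrative only; the exact KDF parameters ZFS uses internally may differ): each additional iteration makes every passphrase guess proportionally more expensive for an attacker.

```python
import hashlib

# Demonstration of PBKDF2 iteration counts using Python's hashlib.
# Illustrative only -- not the exact KDF parameters ZFS uses internally.
passphrase = b"correct horse battery staple"  # example passphrase
salt = b"example-salt"                        # ZFS stores its own per-dataset salt

key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations=350000)
print(len(key), key.hex()[:16])

# The same passphrase, salt, and iteration count always derive the same key;
# changing the iteration count changes the derived key (and the attacker's cost).
key2 = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations=350000)
key3 = hashlib.pbkdf2_hmac("sha256", passphrase, salt, iterations=100001)
print(key == key2, key == key3)   # True False
```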
Unlocking a Replicated Encrypted Dataset or Zvol Without a Passphrase
TrueNAS SCALE users should either replicate the dataset or zvol without properties, to disable encryption on the remote end, or construct a special JSON manifest to unlock each child dataset or zvol with a unique key.
Method 1: Construct JSON Manifest.
Replicate every encrypted dataset you want to replicate with properties.
Export key for every child dataset that has a unique key.
For each child dataset, construct a proper JSON object with the poolname/datasetname of the destination system and the key from the source system, like this:
{"tank/share01": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b"}
Save this file with the extension .json.
On the remote system, unlock the dataset(s) using the properly constructed JSON files.
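A sketch of generating such a manifest programmatically, reusing the example dataset name and key from this section (the unlock-keys.json filename is an arbitrary choice):

```python
import json

# Build the unlock manifest for Method 1. The destination dataset name and the
# exported key below reuse the example values from this section.
keys = {
    "tank/share01": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b",
    # Add one "pool/dataset": "exported key" entry per child dataset
    # that has a unique key.
}

with open("unlock-keys.json", "w") as f:
    json.dump(keys, f)

# Sanity-check that the file parses back to the same mapping.
with open("unlock-keys.json") as f:
    print(json.load(f) == keys)   # True
```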
Method 2: Replicate Encrypted Dataset/zvol Without Properties.
Uncheck properties when replicating so that the destination dataset is not encrypted on the remote side and does not require a key to unlock.
Go to Data Protection and click ADD in the Replication Tasks window.
Click Advanced Replication Creation.
Fill out the form as needed and make sure Include Dataset Properties is NOT checked.
Click Save.
Method 3: Replicate Key Encrypted Dataset/zvol.
Go to Datasets on the system you are replicating from.
Select the dataset encrypted with a key, then click Export Key on the ZFS Encryption widget to export the key for the dataset.
Apply the JSON key file or key code to the dataset on the system you replicated the dataset to.
Option 1: Download the key file and open it in a text editor. Change the pool name/dataset part of the string to the pool name/dataset for the receiving system. For example, when replicating from tank1/dataset1 on the source system to tank2/dataset2 on the destination system, change tank1/dataset1 to tank2/dataset2 in the key file.
Option 2: Copy the key code provided in the Key for dataset window.
On the system receiving the replicated pool/dataset, select the receiving dataset and click Unlock.
Unlock the dataset.
Either clear the Unlock with Key file checkbox and paste the key code into the Dataset Key field (if there is a space character at the end of the key, delete it), or select the downloaded key file that you edited.
Click Save.
Click Continue.
4.7 - Setting Up Permissions
Provides instructions on editing and viewing ACL permissions, using the ACL editor screens, and general information on ACLs.
TrueNAS SCALE provides basic permissions settings and an access control list (ACL) editor to define dataset permissions.
ACL permissions control the actions users can perform on dataset contents and shares.
An Access Control List (ACL) is a set of account permissions associated with a dataset that applies to directories or files within that dataset.
TrueNAS uses ACLs to manage user interactions with shared datasets and creates them when users add a dataset to a pool.
ACL Types in SCALE
TrueNAS SCALE offers two ACL types: POSIX and NFSv4.
For a more in-depth explanation of ACLs and configurations in TrueNAS SCALE, see our ACL Primer.
The Dataset Preset setting on the Add Dataset screen determines the type of ACL for the dataset.
To see the ACL type, click Edit on the Dataset Details widget to open the Edit Dataset screen.
Click Advanced Options and scroll down to the ACL Type field.
Preset options are:
Generic for non-SMB share datasets such as iSCSI and NFS share datasets or datasets not associated with application storage.
Multiprotocol for datasets optimized for SMB and NFS multi-mode shares or to create a dataset for NFS shares.
SMB for datasets optimized for SMB shares.
Apps for datasets optimized for application storage.
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset.
If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators.
Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
ACL Settings for Dataset Presets
Dataset Preset | ACL Type | ACL Mode    | Case Sensitivity | Enable atime
Generic        | POSIX    | n/a         | Sensitive        | Inherit
SMB            | NFSv4    | Restricted  | Insensitive      | On
Apps           | NFSv4    | Passthrough | Sensitive        | Off
Multiprotocol  | NFSv4    | Passthrough | Sensitive        | Off
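As a quick illustration of the Unix permissions the Generic preset is equivalent to, Python's stat module can decompose mode 755 into its permission bits (a demonstration of standard Unix modes, not a TrueNAS-specific API):

```python
import stat

# Decompose mode 755 -- the Unix permissions the Generic preset is equivalent to.
# A demonstration of standard permission bits, not a TrueNAS-specific API.
mode = 0o755

print(stat.filemode(mode | stat.S_IFDIR))  # drwxr-xr-x
print(bool(mode & stat.S_IWUSR))           # True:  owner can write
print(bool(mode & stat.S_IWGRP))           # False: group cannot write
print(bool(mode & stat.S_IXOTH))           # True:  others can execute/traverse
```

The execute bit on a directory is what permits traversal, which is why removing Execute from a parent dataset cuts off access to everything beneath it.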
The POSIX and NFSv4 ACL types show different options on the ACL Editor screen.
Both the POSIX and NFSv4 ACL Editor screens allow you to define the owner user and group, and to add ACL entries (ACEs) for individual user accounts or groups to customize the permissions for the selected dataset.
The owner user and group should remain set to either root or the admin account with full privileges.
Add ACE items for other users, groups, or other Who options to grant access permissions to the dataset.
Click in the Who field and select an item (such as User or Group) to display the User or Group field, where you select the user or group accounts.
Viewing Permissions
Basic ACL permissions are viewable and configurable from the Datasets screen.
Select a dataset, then scroll down to the Permissions widget to view owner and individual ACL entry permissions.
To view the Edit ACL screen, either select the dataset and click Edit on the Permissions widget or go to Sharing and click on the share widget header to open the list of shares. Select the share, then click the options icon and select Edit Filesystem ACL.
If the dataset has an NFSv4 ACL, the Edit ACL screen opens.
Enter or select the Owner user from the User dropdown list, then set the read/write/execute permissions, and select Apply User to confirm changes.
User options include users created manually or imported from a directory service.
Repeat for the Group field.
Select the group name from the dropdown list, set the read/write/execute permissions, and then select Apply Group to confirm the changes.
To prevent errors, TrueNAS only submits changes after the apply option is selected.
A common misconfiguration is removing (or failing to add) the Execute permission on a dataset that is a parent to other child datasets.
Removing this permission results in lost access to the path.
To apply ACL settings to all child datasets, select Apply permissions recursively.
Change the default settings to your preferred primary account and group and select Apply permissions recursively before saving any changes.
See Edit ACL Screen for information on the ACL editor screens and setting options.
Adding a New Preset (POSIX ACL)
From the Unix Permissions Editor screen:
Click Set ACL.
The Select a preset ACL dialog opens.
Select Select a preset ACL to use a pre-configured set of permissions.
Select the preset to use from the Default ACL Options dropdown list, or click Create a custom ACL to configure your own set of permissions.
Click Continue.
Each default preset loads different permissions to the Edit ACL screen.
The Create a custom preset option opens the Edit ACL screen with no default permission settings.
Enter the ACL owner user and group, and add new ACEs for the users, groups, and other entries you want to grant access permissions to for this dataset.
Select or enter the administrative user name in Owner, then click Apply Owner.
The owner controls which TrueNAS user and group has full control of the dataset.
You can leave this set to root but we recommend changing this to the admin user with the Full Control role.
Repeat for the Owner Group, then click Apply Group.
Select the ACE entry on the Access Control List on the left of the screen, just below Owner and Owner Group.
If adding a new entry, click Add Item.
Click on Who and select the value from the dropdown list.
The selection in Who highlights the matching Access Control List entry on the left side of the screen.
If you select User, the User field displays below the Who field; the same applies for Group.
Select a name from the dropdown list of options in the User (or Group) field or begin typing the name to see a narrowed list of options to select from.
Select the Read, Modify, and/or Execute permissions.
(Optional) Select Apply permissions recursively, below the list of access control entries, to apply these permissions to all child datasets.
(Optional) Click Use Preset to display the ACL presets window and select a predefined set of permissions from the list of presets.
See Using Preset ACL Entries (POSIX ACL) for the list of presets.
Click Save as Preset to add this to the list of ACL presets. Click Save Access Control List to save the changes made to the ACL.
Configuring an ACL (NFSv4 ACL)
An NFS4 ACL preset loads pre-configured permissions to match general permissions situations.
Changing the ACL type affects how TrueNAS writes and reads on-disk ZFS ACL.
When the ACL type changes from POSIX to NFSv4, internal ZFS ACLs do not migrate by default, and access ACLs encoded in posix1e extended attributes convert to native ZFS ACLs.
When the ACL type changes from NFSv4 to POSIX, native ZFS ACLs do not convert to posix1e extended attributes, but ZFS uses the native ACL for access checks.
To prevent unexpected permissions behavior, you must manually set new dataset ACLs recursively after changing the ACL type.
Setting new ACLs recursively is destructive.
We suggest creating a ZFS snapshot of the dataset before changing the ACL type or modifying permissions.
To change NFSv4 ACL permissions:
Go to Datasets, select the dataset, scroll down to the Permissions widget, and click Edit. The Edit ACL screen opens.
Select or enter the administrative user name in Owner, then click Apply Owner.
The owner controls which TrueNAS user and group has full control of the dataset.
You can leave this set to root but we recommend changing the owner user and group to the admin user with the Full Control role.
Select or enter the group name in Owner Group, then click Apply Group.
Select the ACE entry in the Access Control List on the left of the screen, below Owner and Owner Group.
If adding a new entry, click Add Item.
Click Who and select a value from the dropdown list.
If you select User, the User field displays below Who; likewise, selecting Group displays the Group field.
Select a name from the dropdown list of options, or begin typing the name to narrow the list of options.
The selection in Who highlights the corresponding Access Control List entry on the left side of the screen.
Select permission type from the Permissions dropdown list.
If Basic is selected, the list displays four options: Read, Modify, Traverse, and Full Control.
Basic flags enable or disable ACE inheritance.
Select Advanced to select more granular controls from the options displayed.
Advanced flags allow further control of how the ACE applies to files and directories in the dataset.
(Optional) Select Apply permissions recursively, below the list of access control entries, to apply this preset to all child datasets.
This is not generally recommended as recursive changes often cause permissions issues (see the warning at the top of this section).
(Optional) Click Use Preset to display the ACL presets window to select a predefined set of permissions from the list of presets.
See Using Preset ACL Entries (NFS ACL).
(Optional) Click Save as Preset to add this to the list of ACL presets.
Click Save Access Control List to save the changes for the user or group selected.
Using Preset ACL Entries (NFSv4 ACL)
To rewrite the current ACL with a standardized preset, follow the steps above in Configuring an ACL through the step where you click Use Preset, then select an option:
NFS4_OPEN gives the owner and group full dataset control. All other accounts can modify the dataset contents.
NFS4_RESTRICTED gives the owner full dataset control. The group can modify the dataset contents.
NFS4_HOME gives the owner full dataset control. The group can modify the dataset contents. All other accounts can navigate the dataset.
NFS4_DOMAIN_HOME gives the owner full dataset control. The group can modify the dataset contents. All other accounts can navigate the dataset.
NFS4_ADMIN gives the admin user and builtin_administrators group full dataset control. All other accounts can navigate the dataset.
Click Save Access Control List to add this ACE entry to the Access Control List.
Using Preset ACL Entries (POSIX ACL)
If the file system uses a POSIX ACL, the first option presented is to select an existing preset or the option to create a custom preset.
To rewrite the current ACL with a standardized preset, click Use Preset and then select an option:
POSIX_OPEN gives the owner and group full dataset control. All other accounts can modify the dataset contents.
POSIX_RESTRICTED gives the owner full dataset control. The group can modify the dataset contents.
POSIX_HOME gives the owner full dataset control. The group can modify the dataset contents. All other accounts can navigate the dataset.
POSIX_ADMIN gives the admin user and builtin_administrators group full dataset control. All other accounts can navigate the dataset.
If creating a custom preset, a POSIX-based Edit ACL screen opens.
Follow the steps in Adding a New Preset (POSIX ACL) to set the owner and owner group, then the ACL entries (user, group) and permissions from the options shown.
5 - Shares
Tutorials for configuring the various data sharing features in TrueNAS SCALE.
File sharing is one of the primary benefits of a NAS. TrueNAS helps foster collaboration between users through network shares. TrueNAS SCALE allows users to create and configure Windows (SMB) shares, Unix (NFS) shares, and block (iSCSI) share targets.
When creating zvols for shares, avoid giving them names with capital letters or spaces since they can cause problems and failures with iSCSI and NFS shares.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
Contents
AFP Migration: Provides information on migrating AFP shares from CORE to SCALE.
Block Shares (iSCSI): Describes the iSCSI protocol and has tutorials for various configuration scenarios.
Adding iSCSI Block Shares: Provides instructions on setting up iSCSI block shares manually or using the wizard and starting the service.
Using an iSCSI Share: Provides information on setting up a Linux or Windows system to use a TrueNAS-configured iSCSI block share.
Adding NFS Shares: Provides instructions on adding NFS shares, starting NFS service, and accessing the share.
Multiprotocol Shares: Provides instructions on setting up SMB and NFSv4 mixed-mode shares.
Windows Shares (SMB): Provides information on SMB shares and instructions on creating a basic share and setting up various specific configurations of SMB shares.
Managing SMB Shares: Provides instructions on managing existing SMB shares and dataset ACL permissions.
Using SMB Shadow Copy: Provides information on SMB share shadow copies, enabling shadow copies, and resolving an issue with Microsoft Windows 10 v2004 release.
Setting Up SMB Home Shares: Provides instructions on setting up private SMB datasets and shares as an alternative to legacy SMB home shares.
5.1 - AFP Migration
Provides information on migrating AFP shares from CORE to SCALE.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
Since the Apple Filing Protocol (AFP) for shares is deprecated and no longer receives updates, it is not in TrueNAS SCALE.
However, users can sidegrade a TrueNAS CORE configuration into SCALE, and during that process TrueNAS SCALE migrates previously saved AFP configurations into SMB configurations.
Migrating AFP Shares
To prevent data corruption that could result from the sidegrade operation, in TrueNAS SCALE, go to Windows (SMB) Shares, click the more_vert icon for the share, then select Edit to open the Edit SMB screen.
Click Advanced Options and scroll down to the Other Options section.
Select Legacy AFP Compatibility to enable compatibility for AFP shares migrated to SMB shares.
Do not select this option if you want a pure SMB share with no AFP relation.
Netatalk service is no longer in SCALE as of version 21.06.
AFP shares automatically migrate to SMB shares with the Legacy AFP Compatibility option enabled.
Do not clear the Legacy AFP Compatibility checkbox, as it impacts how data is written to and read from shares.
Any other shares created to access these paths after the migration must also have Legacy AFP Compatibility selected.
Once you have sidegraded from CORE to SCALE, you can find your migrated AFP configuration in Shares > Windows (SMB) Shares with the prefix AFP_.
To make the migrated AFP share accessible, start the SMB service.
Connecting Migrated Shares
Since AFP shares migrate to SMB in SCALE, you must use SMB syntax to mount them.
On your Apple system, press Command+K or go to Go > Connect to Server….
Enter smb://ipaddress/mnt/pool/dataset, where:
ipaddress is your TrueNAS IP address,
pool is the name of the pool, and
dataset is the name of the shared dataset.
5.2 - Block Shares (iSCSI)
Describes the iSCSI protocol and has tutorials for various configuration scenarios.
About the Block (iSCSI) Sharing Protocol
Internet Small Computer Systems Interface (iSCSI) is a set of standards that uses Internet-based protocols to link aggregations of binary data storage devices.
IBM and Cisco submitted the draft standards in March 2000. Since then, iSCSI has seen widespread adoption into enterprise IT environments.
iSCSI functions through encapsulation. The Open Systems Interconnection Model (OSI) encapsulates SCSI commands and storage data within the session stack. The OSI further encapsulates the session stack within the transport stack, the transport stack within the network stack, and the network stack within the data-link stack.
Transmitting data this way permits block-level access to storage devices over LANs, WANs, and even the Internet itself (although performance may suffer if your data traffic is traversing the Internet).
The table below shows where iSCSI sits in the OSI network stack:
Layer 7, Application: An application tells the CPU that it needs to write data to non-volatile storage.
Layer 6, Presentation: OSI creates a SCSI command, SCSI response, or SCSI data payload to hold the application data and communicate it to non-volatile storage.
Layer 5, Session: Communication between the source and the destination devices begins. This communication establishes when the conversation starts, what it talks about, and when the conversation ends. This entire dialogue represents the session. OSI encapsulates the SCSI command, SCSI response, or SCSI data payload containing the application data within an iSCSI Protocol Data Unit (PDU).
Layer 4, Transport: OSI encapsulates the iSCSI PDU within a TCP segment.
Layer 3, Network: OSI encapsulates the TCP segment within an IP packet.
Layer 2, Data Link: OSI encapsulates the IP packet within the Ethernet frame.
Layer 1, Physical: The Ethernet frame transmits as bits (zeros and ones).
Unlike other sharing protocols on TrueNAS, an iSCSI share allows block sharing and file sharing.
Block sharing provides the benefit of block-level access to data on the TrueNAS.
iSCSI exports disk devices (zvols on TrueNAS) over a network that other iSCSI clients (initiators) can attach to and mount.
iSCSI Terminology
Challenge-Handshake Authentication Protocol (CHAP): an authentication method that uses a shared secret and three-way authentication to determine if a system is authorized to access the storage device. It also periodically confirms that the session has not been hijacked by another system. In iSCSI, the client (initiator) performs the CHAP authentication.
Mutual CHAP: a CHAP type in which both ends of the communication authenticate to each other.
Internet Storage Name Service (iSNS): protocol for the automated discovery of iSCSI devices on a TCP/IP network.
Extent: the storage unit to be shared. It can either be a file or a device.
Portal: indicates which IP addresses and ports to listen on for connection requests.
Initiators and Targets: iSCSI introduces the concept of initiators and targets which act as sources and destinations respectively. iSCSI initiators and targets follow a client/server model. Below is a diagram of a typical iSCSI network. The TrueNAS storage array acts as the iSCSI target and can be accessed by many of the different iSCSI initiator types, including software and hardware-accelerated initiators.
The iSCSI protocol standards require that iSCSI initiators and targets be represented as iSCSI nodes. They also require that each node be given a unique iSCSI name. To represent these unique nodes by name, iSCSI requires the use of one of two naming conventions and formats, IQN or EUI. iSCSI also allows the use of iSCSI aliases, which are not required to be unique and can help manage nodes.
Logical Unit Number (LUN): LUN represents a logical SCSI device. An initiator negotiates with a target to establish connectivity to a LUN. The result is an iSCSI connection that emulates a connection to a SCSI hard disk. Initiators treat iSCSI LUNs as if they were a raw SCSI or SATA hard drive. Rather than mounting remote directories, initiators format and directly manage filesystems on iSCSI LUNs. When configuring multiple iSCSI LUNs, create a new target for each LUN. Since iSCSI multiplexes a target with multiple LUNs over the same TCP connection, there can be TCP contention when more than one target accesses the same LUN. TrueNAS supports up to 1024 LUNs.
Jumbo Frames: Jumbo frames are Ethernet frames that exceed the default 1500 byte size. This setting is typically referred to as the maximum transmission unit (MTU). An MTU that exceeds the default 1500 bytes requires that all devices transmitting Ethernet frames between the source and destination support the specific jumbo frame MTU setting, which means that NICs, dependent hardware iSCSI, independent hardware iSCSI cards, ingress and egress Ethernet switch ports, and the NICs of the storage array must all support the same jumbo frame MTU value. So, how does one decide whether to use jumbo frames?
Administrative time is consumed configuring jumbo frames and troubleshooting if/when things go sideways. Some network switches might have ASICs optimized for processing MTU 1500 frames while others might be optimized for larger frames. Systems administrators should also account for the impact on host CPU utilization. Although jumbo frames are designed to increase data throughput, they may measurably increase latency (as is the case with some un-optimized switch ASICs); latency is typically more important than throughput in a VMware environment. Some iSCSI applications might see a net benefit running jumbo frames despite possible increased latency. Systems administrators should test jumbo frames on their workload with lab infrastructure as much as possible before updating the MTU on their production network.
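To make the throughput trade-off concrete, the following sketch compares the rough wire efficiency of standard and jumbo frames for TCP/IPv4 traffic. The overhead figures (38 bytes of Ethernet framing, 40 bytes of IP and TCP headers) are illustrative assumptions, not values from this guide:

```shell
# Rough wire efficiency: TCP payload bytes per bytes on the wire.
eth=38     # assumed Ethernet framing overhead per frame
hdrs=40    # assumed IP + TCP header bytes per frame
std_mtu=1500
jumbo_mtu=9000
std_eff=$(( (std_mtu - hdrs) * 100 / (std_mtu + eth) ))
jumbo_eff=$(( (jumbo_mtu - hdrs) * 100 / (jumbo_mtu + eth) ))
echo "standard: ${std_eff}%  jumbo: ${jumbo_eff}%"   # prints: standard: 94%  jumbo: 99%
```

The roughly five-point efficiency gain is why jumbo frames can raise throughput; whether it outweighs the added latency and administrative cost is workload-dependent, as described above.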
TrueNAS Enterprise
Asymmetric Logical Unit Access (ALUA): ALUA allows a client computer to discover the best path to the storage on a TrueNAS system.
HA storage clusters can provide multiple paths to the same storage.
For example, the disks are directly connected to the primary computer and provide high speed and bandwidth when accessed through that primary computer.
The same disks are also available through the secondary computer, but speed and bandwidth are restricted.
With ALUA, clients automatically ask for and use the best path to the storage.
If one of the TrueNAS HA computers becomes inaccessible, the clients automatically switch to the next best alternate path to the storage.
When a better path becomes available, as when the primary host becomes available again, the clients automatically switch back to that better path to the storage.
Do not enable ALUA on TrueNAS unless it is also supported by and enabled on the client computers. ALUA only works when enabled on both the client and server.
iSCSI Configuration Methods
There are a few different approaches for configuring and managing iSCSI-shared data:
TrueNAS Enterprise
TrueNAS Enterprise customers that use vCenter to manage their systems can use the TrueNAS vCenter Plugin to connect their TrueNAS systems to vCenter and create and share iSCSI datastores.
This is all managed through the vCenter web interface.
TrueNAS CORE web interface: the TrueNAS web interface is fully capable of configuring iSCSI shares. This requires creating and populating zvol block devices with data, then setting up the iSCSI Share. TrueNAS Enterprise licensed customers also have additional options to configure the share with Fibre Channel.
TrueNAS SCALE web interface: TrueNAS SCALE offers a similar experience to TrueNAS CORE for managing data with iSCSI; create and populate the block storage, then configure the iSCSI share.
Contents
Adding iSCSI Block Shares: Provides instructions on setting up iSCSI block shares manually or using the wizard and starting the service.
Using an iSCSI Share: Provides information on setting up a Linux or Windows system to use a TrueNAS-configured iSCSI block share.
5.2.1 - Adding iSCSI Block Shares
Provides instructions on setting up iSCSI block shares manually or using the wizard and starting the service.
SCALE has implemented administrator roles to further comply with FIPS security hardening standards.
The Sharing Admin role allows the user to create new shares and datasets, modify dataset ACL permissions, and start or restart the sharing service, but does not permit modifying user accounts or granting the Sharing Admin role to new or existing users.
Full Admin users retain full access control over shares and creating/modifying user accounts.
Adding an iSCSI Block Share
TrueNAS SCALE offers two methods to add an iSCSI block share: the setup wizard or the manual steps using the screen tabs.
Both methods cover the same basic steps but have some differences.
The setup wizard requires you to enter some settings before you can move on to the next screen or step in the setup process.
It is designed to ensure you configure the iSCSI share completely, so it can be used immediately.
The manual process has more configuration screens than the wizard and allows you to configure the block share in any order.
Use this process to customize your share for special use cases.
It is designed to give you additional flexibility to build or tune a share to your exact requirements.
Before you Begin
Have the following ready before you begin adding your iSCSI block share:
Storage pool and dataset.
A path to a Device (zvol or file) that doesn’t use capital letters or spaces.
iSCSI Wizard
This section walks you through the setup process using the wizard screens.
To use the setup wizard:
Add the block device.
a. Enter a name using all lowercase alphanumeric characters plus a dot (.), dash (-), or colon (:). We recommend keeping it short, at most 63 characters.
b. Choose the Extent Type. You can select either Device or File.
If you select Device, select the zvol to share from the Device dropdown list.
If you select File, file settings display. Browse to the file location to populate the path, then enter the size in Filesize. Enter 0 to use the actual existing file size.
c. Select the type of platform using the share. For example, if you use an updated Linux OS, choose Modern OS.
d. Click Next.
Add the portal
Now you either create a new portal or select an existing one from the dropdown list.
If you create a new portal, select a Discovery Authentication Method from the dropdown list.
If you select None, you can leave Discovery Authentication Group empty.
If you select either CHAP or MUTUAL CHAP, you must also select a Discovery Authentication Group from the dropdown list.
If no group exists, click Create New and enter a value in Group ID, User, and Secret.
Select 0.0.0.0 or :: from the IP Address dropdown list. 0.0.0.0 listens on all IPv4 addresses and :: listens on all IPv6 addresses.
Click Next.
Add the Initiator. After adding the portal, set up the initiators that use the iSCSI share.
Decide which initiators can use the iSCSI share.
Leave the list empty to allow all initiators, or add entries to the list to limit access to those systems.
Confirm the iSCSI setup. Review your settings.
If you need to change any setting, click Back until you reach the wizard screen with that setting.
When finished, click Save.
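The device name rules above (lowercase alphanumerics plus dot, dash, or colon; at most 63 characters) can be sanity-checked before entering a name in the wizard. This is an illustrative sketch; the sample name is a hypothetical placeholder:

```shell
# Check a proposed block device name against the wizard rules:
# lowercase alphanumerics plus dot (.), dash (-), or colon (:),
# at most 63 characters. The sample name is hypothetical.
name="iscsi-store01"
if [ "${#name}" -le 63 ] && echo "$name" | grep -Eq '^[a-z0-9.:-]+$'; then
  echo "valid"
else
  echo "invalid"
fi
```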
iSCSI Manual Setup
This procedure walks you through adding each configuration setting on the seven configuration tab screens. While the procedure places each tab screen in order, you can select the tab screen to add settings in any order.
Configure share settings that apply to all iSCSI shares.
a. Click Configure on the main Block (iSCSI) Share Targets widget.
The Target Global Configuration tab screen opens.
b. Enter a name using lowercase alphanumeric characters plus dot (.), dash (-), and colon (:) in Base Name.
Use the iqn.format for the name. See the “Constructing iSCSI names using the iqn.format” section of RFC3721.
c. Enter the host names or IP address of the ISNS servers to register with the iSCSI targets and portals of the system. Separate entries by pressing Enter.
d. The value in Pool Available Space Threshold generates an alert when the pool has this percentage of space remaining.
This is typically configured at the pool level when using zvols or at the extent level for both file and device-based extents.
e. Enter the iSCSI listen port. Add the TCP port used to access the iSCSI target. The default is 3260.
f. (Optional, Enterprise-licensed systems only) Select Asymmetric Logical Unit Access (ALUA) to enable it.
Only enable ALUA if it is supported by and enabled on both the client and server systems.
g. Click Save.
Add portals. Click Portals tab.
a. Click Add at the top right of the screen to open the Add Portal screen.
b. (Optional) Enter a description. Portals are automatically assigned a numeric group.
c. Select the Discovery Authentication Method from the dropdown list.
None allows anonymous discovery and does not require you to select a Discovery Authentication Group.
CHAP and Mutual CHAP require authentication and require you to select a group ID in Discovery Authentication Group.
d. (Optional) Based on your Discovery Authentication Method, select a group in Discovery Authentication Group.
e. Click Add to select an IP Address the portal listens on from the dropdown list. 0.0.0.0 listens on all IPv4 addresses and :: listens on all IPv6 addresses.
f. Click Save.
Add initiators groups to create authorized access client groups. Click on the Initiators Groups tab to open the screen.
a. Click Add at the top right of the screen to open the SHARING > ISCSI > INITIATORS > Add screen.
b. Select Allow All Initiators or configure your own allowed initiators.
Enter the iSCSI Qualified Name (IQN) in Allowed Initiators (IQN) and click + to add it to the list. Example: iqn.1994-09.org.freebsd:freenas.local.
c. Click Save.
Add network authorized access. Click on the Authorized Access tab to open the screen.
a. Click Add at the top right of the screen to open the Add Authorized Access screen.
b. Enter a number in Group ID. Each group ID allows configuring different groups with different authentication profiles.
Example: all users with a group ID of 1 inherit the authentication profile associated with Group 1.
c. Enter a user name in User to create for CHAP authentication with the user on the remote system. Consider using the initiator name as the user name.
d. Enter a user password of at least 12 and no more than 16 characters in Secret and Secret (Confirm).
e. (Optional) Enter peer user details in Peer User and Peer Secret and Peer Secret (Confirm).
Peer user is only entered when configuring mutual CHAP and is usually the same value as User. The password must be different from the one entered in Secret.
f. Click Save.
Create storage resources. Click Targets tab.
a. Click Add at the top right of the screen to open the Add iSCSI Target screen.
b. Enter a name in Target Name. Use lowercase alphanumeric characters plus dot (.), dash (-), and colon (:) in the iqn.format.
See the “Constructing iSCSI names using the iqn.format” section of RFC3721.
c. (Optional) Enter a user-friendly name in Target Alias.
d. Click Add next to Authorized Networks to enter IP address information.
e. Click Add under iSCSI Group to display the group settings.
f. Select the group ID from the Portal Group ID dropdown.
g. (Optional) Select the group ID in Initiator Group ID or leave it set to None.
h. (Optional) Select the Authentication Method from the dropdown list of options.
i. (Optional) Select the Authentication Group Number from the dropdown list. This value represents the number of existing authorized accesses.
j. Click Save.
Add new share storage units (extents). Click the Extents tab.
a. Click Add at the top right of the screen to open the Add Extent screen.
b. Enter a name for the extent. If the extent size is not 0, it cannot be an existing file within the pool or dataset.
c. Leave Enabled selected.
d. In the Compatibility section, the Enable TPC checkbox is selected by default. This allows an initiator to bypass normal access control and access any scannable target.
e. Xen initiator compat mode is disabled by default. Select when using Xen as the iSCSI initiator.
f. Do not change LUN RPM when using Windows as the initiator. Only change LUN RPM in environments where you need accurate reporting statistics for devices that use a specific RPM.
g. Read-only is disabled by default. Select to prevent the initiator from initializing this LUN.
h. In the Type section, select the extent type from the Extent Type dropdown.
Device provides virtual storage access to zvols, zvol snapshots, or physical devices.
File provides virtual storage access to a single file.
i. (Optional) Select the option from the Device dropdown. This field only displays when Extent Type is set to Device.
Select the path when Extent Type is set to File. Browse to the location.
Create a new file by browsing to a dataset and appending /{filename.ext} to the path. Enter the size in Filesize.
j. Select the Logical Block Size from the dropdown list. Leave at the default of 512 unless the initiator requires a different block size.
k. Select Disable Physical Block Size Reporting if the initiator does not support physical block size values over 4K (MS SQL).
l. Click Save.
Associate targets with extents. Click the Associated Targets tab.
a. Click Add at the top right of the screen to open the Add Associated Target screen.
b. Select the target from the Target dropdown list.
c. Select or enter a LUN ID. The first LUN on SCALE must be zero (0). If adding additional LUNs, enter or select a value between 1 and 1023 for those additional LUNs.
Some initiators expect a value below 256. Leave LUN ID blank to automatically assign the next available ID.
d. Select an existing extent from the Extent dropdown.
e. Click Save.
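The manual steps above impose two format rules worth checking before saving: base and target names follow the iqn. naming convention, and CHAP secrets must be 12 to 16 characters long. A sketch of both checks, using hypothetical placeholder values:

```shell
# Hypothetical values for illustration only.
target="iqn.2005-10.org.freenas.ctl:target0"   # iqn.<yyyy-mm>.<reversed domain>[:identifier]
secret="s3cr3tpassw0rd"                        # must be 12-16 characters

# Rough shape of an iqn.-style name: date stamp, then lowercase
# alphanumerics plus dot, dash, and colon.
echo "$target" | grep -Eq '^iqn\.[0-9]{4}-[0-9]{2}\.[a-z0-9.:-]+$' \
  && echo "target name ok"

len=${#secret}
if [ "$len" -ge 12 ] && [ "$len" -le 16 ]; then
  echo "secret length ok ($len characters)"
else
  echo "secret length invalid ($len characters)"
fi
```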
Creating a Quick iSCSI Target
TrueNAS SCALE allows users to add iSCSI targets without having to set up another share.
Go to Shares and click the Block (iSCSI) Shares Targets widget.
a. Click Add at the top right of the screen to open the Add iSCSI Target screen.
b. Enter a name in Target Name. Use lowercase alphanumeric characters plus dot (.), dash (-), and colon (:) in the iqn.format.
See the “Constructing iSCSI names using the iqn.format” section of RFC3721.
c. (Optional) Enter a user-friendly name in Target Alias.
d. Click Add next to Authorized Networks to enter IP address information.
e. Click Add under iSCSI Group to display the group settings.
f. Select the group ID from the Portal Group ID dropdown.
g. (Optional) Select the group ID in Initiator Group ID or leave it set to None.
h. (Optional) Select the Authentication Method from the dropdown list of options.
i. (Optional) Select the Authentication Group Number from the dropdown list. This value represents the number of existing authorized accesses.
j. Click Save.
Starting the iSCSI Service
When adding an iSCSI share, the system prompts you to start, or restart, the service. You can also do this by clicking the more_vert icon on the Block (iSCSI) Shares Targets widget and selecting Turn On Service.
You can also go to System Settings > Services and locate iSCSI on the list and click the Running toggle to start the service.
To set iSCSI to start when TrueNAS boots, go to System Settings > Services and locate iSCSI on the list. Select Start Automatically.
Clicking the edit icon returns you to the options in Shares > Block (iSCSI) Shares Targets.
5.2.2 - Using an iSCSI Share
Provides information on setting up a Linux or Windows system to use a TrueNAS-configured iSCSI block share.
Connecting to and using an iSCSI share can differ between operating systems.
This article provides instructions on setting up a Linux and Windows system to use the TrueNAS iSCSI block share.
Using Linux iSCSI Utilities and Service
In this section, you start the iSCSI service, log in to the share, and obtain the configured basename and target. You also partition the iSCSI disk, make a file system for the share, mount it, and share data.
Before you begin, open the command line and ensure you have installed the open-iscsi utility.
To install the utility on an Ubuntu/Debian distribution, enter command sudo apt update && sudo apt install open-iscsi.
After the installation completes, ensure the iscsid service is running using the sudo service iscsid start command.
First, with the iscsid service started, run the iscsiadm command with the discovery arguments and get the necessary information to connect to the share.
Next, discover and log into the iSCSI share.
Run the command sudo iscsiadm --mode discovery --type sendtargets --portal {IPADDRESS}.
The output provides the basename and target name that TrueNAS configured.
Alternatively, enter sudo iscsiadm -m discovery -t st -p {IPADDRESS} to get the same output.
Note the basename and target name given in the output. You need them to log in to the iSCSI share.
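The discovery output takes the form {IPADDRESS}:{PORT},{TPGT} {BASENAME}:{TARGETNAME}. A sketch of pulling the portal and target fields out of one such line, for use in the login command below; the sample line is a hypothetical example, not real discovery output:

```shell
# Hypothetical discovery output line; real output comes from iscsiadm.
line="192.168.1.100:3260,1 iqn.2005-10.org.freenas.ctl:target0"

portal=${line%%,*}   # everything before the first comma: ip:port
target=${line#* }    # everything after the first space: basename:targetname

echo "portal=$portal"
echo "target=$target"
```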
When the Portal Discovery Authentication Method is CHAP, add the following three lines to /etc/iscsi/iscsid.conf.
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = user
discovery.sendtargets.auth.password = secret
The user for discovery.sendtargets.auth.username is set in the Authorized Access used by the iSCSI share Portal.
Likewise, the password to use for discovery.sendtargets.auth.password is the Authorized Access secret.
Without those lines, iscsiadm cannot discover the portal using the CHAP authentication method.
Enter command sudo iscsiadm --mode node --targetname {BASENAME}:{TARGETNAME} --portal {IPADDRESS} --login,
where {BASENAME} and {TARGETNAME} are the values from the discovery command output.
Now you partition an iSCSI disk.
When the iSCSI share login succeeds, the device shared through iSCSI shows on the Linux system as an iSCSI Disk.
To view a list of connected disks in Linux, enter command sudo fdisk -l.
Because the connected iSCSI disk is raw, you must partition it.
Identify the iSCSI device in the list and enter sudo fdisk {/PATH/TO/iSCSIDEVICE}.
Use the fdisk command defaults when partitioning the disk.
Remember to type w when finished partitioning the disk.
The w command tells fdisk to save any changes before quitting.
After creating the partition on the iSCSI disk, a partition slice displays appended to the device name, for example, /dev/sdb1.
Enter fdisk -l to see the new partition slice.
Next, make a file system on the iSCSI disk.
Finally, use mkfs to make a file system on the new partition slice.
To create the default file system (ext2), enter sudo mkfs {/PATH/TO/iSCSIDEVICEPARTITIONSLICE}.
Mount the iSCSI device and share the data.
Enter sudo mount {/PATH/TO/iSCSIDEVICEPARTITIONSLICE} {/MOUNTPOINT}.
For example, sudo mount /dev/sdb1 /mnt mounts the iSCSI device /dev/sdb1 to the /mnt directory.
Using the iSCSI Share with Windows
This section provides instructions on setting up Windows iSCSI Initiator Client to work with TrueNAS iSCSI shares.
To access the data on the iSCSI share, clients need to use iSCSI Initiator software. An iSCSI Initiator client is pre-installed in Windows 7 through Windows 10 Pro and in Windows Server 2008, 2012, and 2019. Windows Professional Edition is usually required.
First, click the Start Menu and search for the iSCSI Initiator application.
Next, go to the Configuration tab and click Change to replace the iSCSI initiator with the name created earlier. Click OK.
Next, switch to the Discovery Tab, click Discover Portal, and type in the TrueNAS IP address.
If TrueNAS changed the port number from the default 3260, enter the new port number.
If you set up CHAP when creating the iSCSI share, click Advanced…, set Enable CHAP log on, and enter the initiator name and the same target/secret set earlier in TrueNAS.
Click OK.
Go to the Targets tab, highlight the iSCSI target, and click Connect.
After Windows connects to the iSCSI target, you can partition the drive.
Search for and open the Disk Management app.
The current state of your drive should be unallocated. Right-click the drive and click New Simple Volume….
Complete the wizard to format the drive and assign a drive letter and name.
Finally, go to This PC or My Computer in File Explorer. The new iSCSI volume should display under the list of drives. You should now be able to add, delete, and modify files and folders on your iSCSI drive.
5.2.3 - Increasing iSCSI Available Storage
Provides information on increasing available storage in zvols and file LUNs for iSCSI block shares.
Expanding LUNs
TrueNAS lets users expand Zvol and file-based LUNs to increase the available storage in an iSCSI share.
Zvol LUNs
To expand a Zvol LUN, go to Datasets and click the Zvol LUN name. The Zvol Details widget displays. Click the Edit button.
TrueNAS prevents data loss by not allowing users to reduce the Zvol size.
TrueNAS also does not allow users to increase the Zvol size past 80% of the pool size.
File LUNs
Go to Shares and click Configure in the Block (iSCSI) Shares Targets screen, then select the Extents tab.
Enter a new size in Filesize.
Enter the new size as an integer that is larger than the current file size by one or more multiples of the logical block size (default 512).
Click Save.
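Block-size alignment is easy to check with shell arithmetic. A minimal sketch, rounding an example target size up to the next multiple of the block size (the target value is only an example):

```shell
BLOCK=512           # logical block size of the extent (default 512)
TARGET=2000000001   # desired new file size in bytes (example value)
# Round the target up to the next multiple of the block size:
NEW=$(( (TARGET + BLOCK - 1) / BLOCK * BLOCK ))
echo "$NEW"         # aligned size to enter in Filesize
```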
5.3 - Adding NFS Shares
Provides instructions on adding NFS shares, starting NFS service, and accessing the share.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
About UNIX (NFS) Shares
Creating a Network File System (NFS) share on TrueNAS makes data available to anyone with access to the share.
Depending on the share configuration, users can be limited to read-only access or granted write privileges.
NFS treats each dataset as its own file system. When creating the NFS share on the server, the specified dataset is the location that client accesses. If you choose a parent dataset as the NFS file share location, the client cannot access any nested or child datasets beneath the parent.
If you need to create shares that include child datasets, SMB sharing is an option. Note that Windows NFS Client versions currently support only NFSv2 and NFSv3.
The UDP protocol is deprecated and not supported with NFS. It is disabled by default in the Linux kernel.
Using UDP over NFS on modern networks (1Gb+) can lead to data corruption caused by fragmentation during high loads.
Sharing Administrator Access
SCALE has implemented administrator roles to further comply with FIPS security hardening standards.
The Sharing Admin role allows the user to create new shares and datasets, modify dataset ACL permissions, and start or restart the sharing service. It does not permit modifying user accounts, so a sharing administrator cannot grant the role to new or existing users.
Full Admin users retain full control over shares and over creating and modifying user accounts.
Creating an NFS Share and Dataset
It is best practice to use a dataset instead of a full pool for SMB and/or NFS shares.
Sharing an entire pool makes it more difficult to later restrict access if needed.
You have the option to create the share and dataset at the same time from either the Add Dataset or Add NFS screens.
If creating a dataset and share from the Add Dataset screen, we recommend creating a new dataset with the Dataset Preset set to Generic for the new NFS share. Or you can set it to Multiprotocol and select only the NFS share type.
Creating a Dataset Using Add Dataset
To create a basic dataset, go to Datasets.
Default settings include those inherited from the parent dataset.
Select a dataset (root, parent, or child), then click Add Dataset.
Select the Dataset Preset option you want to use. Options are:
Generic for non-SMB share datasets such as iSCSI and NFS share datasets or datasets not associated with application storage.
Multiprotocol for datasets optimized for SMB and NFS multi-mode shares or to create a dataset for NFS shares.
SMB for datasets optimized for SMB shares.
Apps for datasets optimized for application storage.
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset.
If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators.
Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
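The Unix 755 mode mentioned above can be illustrated on any Linux host. This sketch only demonstrates the mode bits in a throwaway directory; it does not touch TrueNAS datasets:

```shell
# Create a throwaway directory and apply mode 755:
d=$(mktemp -d)
chmod 755 "$d"
# Owner gets rwx; group and other get r-x:
mode=$(stat -c '%A' "$d")
echo "$mode"     # prints drwxr-xr-x
rmdir "$d"
```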
ACL Settings for Dataset Presets
Preset          ACL Type   ACL Mode      Case Sensitivity   Enable atime
Generic         POSIX      n/a           Sensitive          Inherit
SMB             NFSv4      Restricted    Insensitive        On
Apps            NFSv4      Passthrough   Sensitive          Off
Multiprotocol   NFSv4      Passthrough   Sensitive          Off
If creating an SMB or multiprotocol (SMB and NFS) share, the dataset name auto-populates the share name field.
If you plan to deploy container applications, the system automatically creates the ix-applications dataset, but this dataset is not used for application data storage.
If you want to store data by application, create the dataset(s) first, then deploy your application.
When creating a dataset for an application, select Apps as the Dataset Preset. This optimizes the dataset for use by an application.
If you want to configure advanced setting options, click Advanced Options.
For the Sync option, we recommend production systems with critical data use the default Standard choice or increase to Always.
Choosing Disabled is only suitable in situations where data loss from system crashes or power loss is acceptable.
Select either Sensitive or Insensitive from the Case Sensitivity dropdown.
The Case Sensitivity setting is found under Advanced Options and is not editable after saving the dataset.
Click Save.
Review the Dataset Preset and Case Sensitivity under Advanced Options on the Add Dataset screen before clicking Save.
You cannot change these or the Name setting after clicking Save.
To create the share and dataset from the Add NFS Share screen:
Go to Shares > Unix (NFS) Shares and click Add to open the Add NFS Share configuration screen.
Enter the path or use the arrow_right icon to the left of the /mnt folder to browse to the dataset and populate the path.
Browsing to select a path
Click the arrow to the left of the folder icon to expand that folder and show any child datasets and directories.
A solid folder icon shows for datasets and an outlined folder for directories.
A selected dataset or directory folder and name shows in blue.
Click Create Dataset, enter a name for the dataset and click Create.
The system creates the dataset optimized for an NFS share, populates the share Name field, and updates the Path with the dataset name.
The dataset name is the share name.
Enter text to help identify the share in Description.
Enable Service turns the NFS service on and changes the toolbar status to Running.
If you wish to create the share without immediately enabling it, select Cancel.
Adding NFS Share Networks and Hosts
If you want to enter allowed networks, click Add to the right of Networks.
Enter an IP address in Network and select the mask CIDR notation.
Click Add for each network address and CIDR you want to define as an authorized network.
Defining an authorized network restricts access to all other networks. Leave empty to allow all networks.
If you want to enter allowed systems, click Add to the right of Hosts.
Enter a host name or IP address to allow that system access to the NFS share.
Click Add for each allowed system you want to define.
Defining authorized systems restricts access to all other systems.
Press the X to delete the field and allow all systems access to the share.
Adjusting Access Permissions
If you want to tune the NFS share access permissions or define authorized networks, click Advanced Options.
Select Read-Only to prohibit writing to the share.
To map user permissions to the root user, enter a string or select the user from the Maproot User dropdown list.
To map the user permissions to all clients, enter a string or select the user from the Mapall User dropdown list.
To map group permissions to the root user, enter a string or select the group from the Maproot Group dropdown list.
To map the group permissions to all clients, enter a string or select the group from the Mapall Group dropdown list.
Select an option from the Security dropdown. If you select KRB5 security, you can use a Kerberos ticket. Otherwise, everything is based on IDs.
Security Types
SYS: Uses locally acquired UIDs and GIDs. No cryptographic security.
KRB5: Uses Kerberos for authentication.
KRB5I: Uses Kerberos for authentication and includes a hash with each transaction to ensure integrity.
KRB5P: Uses Kerberos for authentication and encrypts all traffic between the client and server. KRB5P is the most secure but also incurs the most load.
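On a Linux client, the chosen security flavor maps to the sec= mount option. For example, a Kerberos-authenticated NFSv4 mount, where the address and paths are examples only:

```shell
sudo mount -t nfs4 -o sec=krb5 10.239.15.110:/mnt/Pool1/NFS_Share /mnt
```

Use sec=krb5i or sec=krb5p on the client to match the KRB5I or KRB5P settings.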
Editing an NFS Share
To edit an existing NFS share, go to Shares > Unix Shares (NFS) and click the share you want to edit.
The Edit NFS screen settings are identical to the share creation options, but you cannot create a new dataset.
Starting the NFS Service
To begin sharing, click the more_vert on the toolbar and select Turn On Service. Turn Off Service displays if NFS is on. Turn On Service displays if NFS is off.
Or you can go to System Settings > Services, locate NFS, and click the toggle to running.
Select Start Automatically if you want NFS to activate when TrueNAS boots.
The NFS service does not automatically start on boot if all NFS shares are encrypted and locked.
Configuring NFS Service
You can configure the NFS service from either the System Settings > Services or the Shares > Unix Shares (NFS) widget.
To configure NFS service settings from the Services screen, click edit on the System Settings > Services screen to open the NFS service screen.
To configure NFS service settings from the Shares > Unix Shares (NFS) widget, click Config Service on the more_vert dropdown menu in the widget header to open the NFS service screen.
Unless you need specific settings, we recommend using the default NFS settings.
When TrueNAS is already connected to Active Directory, setting NFSv4 and Require Kerberos for NFSv4 also requires a Kerberos Keytab.
Connecting to the NFS Share
Although you can connect to an NFS share with various operating systems, we recommend using a Linux/Unix OS.
First, download the nfs-common kernel module.
You can do this using the installed distribution package manager.
For example, on Ubuntu/Debian, enter command sudo apt-get install nfs-common in the terminal.
After installing the module, connect to an NFS share by entering sudo mount -t nfs {IPaddressOfTrueNASsystem}:{path/to/nfsShare} {localMountPoint}.
Where {IPaddressOfTrueNASsystem} is the remote TrueNAS system IP address that contains the NFS share, {path/to/nfsShare} is the path to the NFS share on the TrueNAS system, and {localMountPoint} is a local directory on the host system configured for the mounted NFS share.
For example, sudo mount -t nfs 10.239.15.110:/mnt/Pool1/NFS_Share /mnt mounts the NFS share NFS_Share to the local directory /mnt.
You can also use the Linux nconnect mount option to let your NFS mount support multiple TCP connections.
To enable nconnect, enter sudo mount -t nfs -o rw,nconnect=16 {IPaddressOfTrueNASsystem}:{path/to/nfsShare} {localMountPoint}.
Where {IPaddressOfTrueNASsystem}, {path/to/nfsShare}, and {localMountPoint} are the same ones you used when connecting to the share.
For example, sudo mount -t nfs -o rw,nconnect=16 10.239.15.110:/mnt/Pool1/NFS_Share /mnt.
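To make the mount persist across client reboots, the same share can be added to /etc/fstab. A sketch using the example address and paths above (adjust for your system; nconnect requires a reasonably recent Linux kernel):

```
# /etc/fstab entry for the example NFS share
10.239.15.110:/mnt/Pool1/NFS_Share  /mnt  nfs  rw,nconnect=16  0  0
```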
By default, anyone that connects to the NFS share only has read permission.
To change the default permissions, edit the share, open the Advanced Options, and change the Access settings.
You must have ESXi 6.7 or later for read/write functionality with NFSv4 shares.
5.4 - Multiprotocol Shares
Provides instructions on setting up SMB and NFSv4 mixed-mode shares.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
About Multiprotocol Shares
A multiprotocol or mixed-mode NFS and SMB share supports both NFS and SMB protocols for sharing data.
Multiprotocol shares allow clients to use either protocol to access the same data.
This can be useful in environments with a mix of Windows systems and Unix-like systems, especially if some clients lack an SMB client.
Carefully consider your environment and access requirements before configuring a multiprotocol share.
For many applications, a single protocol SMB share provides better user experience and ease of administration.
Linux clients can access SMB shares using mount.cifs.
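As a reference point, a Linux client can reach the SMB side of a multiprotocol share with mount.cifs from the cifs-utils package. The address, share name, and user name below are examples only:

```shell
sudo apt-get install cifs-utils   # Debian/Ubuntu package that provides mount.cifs
sudo mount -t cifs //10.239.15.110/mp_share /mnt -o username=smbuser
```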
It is important to properly configure permissions and access controls to ensure security and data integrity when using mixed-mode sharing.
To maximize security on the NFS side of the multiprotocol share, we recommend using NFSv4 and Active Directory (AD) for Kerberos authentication.
It is also important that NFS clients preserve extended attributes when copying files, or SMB metadata could be discarded in the copy.
First Steps
Before adding a multiprotocol SMB and NFS share to your system:
Configure and start the SMB and NFS services.
Configure the NFS service to require Kerberos authentication.
Join the TrueNAS server to an existing Active Directory domain.
Configure a container, Kerberos admin, and user accounts in AD.
Before joining AD and creating a dataset for the share to use, start both the SMB and NFS services and configure the NFS service for Kerberos authentication.
Configure the NFS service before joining AD for simpler Kerberos credential creation.
You can either use the Shares screen Configure Service option on both the Windows (SMB) Share and the UNIX (NFS) Shares widgets, or go to System Settings > Services and select the Edit option on the SMB and NFS services.
Unless you need a specific setting or are configuring a unique network environment, we recommend using the default SMB service settings.
After configuring the share services, start the services.
From the Sharing screen, click the Windows (SMB) Shares more_vert to display the service options: Turn Off Service if the service is running or Turn On Service if it is not.
After adding a share, use the toggle to enable or disable the service for that share.
To enable the service from the System Settings > Services screen, click the toggle for the service and set Start Automatically if you want the service to activate when TrueNAS boots.
Configuring and Starting the NFS Service
Open the NFS service screen, then select only NFSv4 on the Enabled Protocols dropdown list.
For security hardening, we recommend disabling the NFSv3 protocol.
Select Require Kerberos for NFSv4 to enable using a Kerberos ticket.
If the TrueNAS server is already joined to Active Directory, click Save and then reopen the NFS service screen.
Click Add SPN to open the Add Kerberos SPN Entry dialog.
Click Yes when prompted to add a Service Principal Name (SPN) entry.
Enter the AD domain administrator user name and password in Name and Password.
TrueNAS SCALE automatically applies SPN credentials if the NFS service is enabled with Require Kerberos for NFSv4 selected before joining Active Directory.
Click Save again, then start the NFS service.
From the Sharing screen, click the Unix Shares (NFS) more_vert to display the service options: Turn Off Service if the service is running or Turn On Service if it is not.
Each NFS share on the list also has a toggle to enable or disable the service for that share.
To enable the service from the System Settings > Services screen, click the toggle for the service and set Start Automatically if you want the service to activate when TrueNAS boots.
The NFS service does not automatically start on boot if all NFS shares are encrypted and locked.
Joining Active Directory
Mixed-mode SMB and NFS shares greatly simplify data access for clients running a range of operating systems.
They also require careful attention to security complexities not present in standard SMB shares.
NFS shares do not respect permissions set in the SMB Share ACL.
Protect the NFS export with proper authentication and authorization controls to prevent unauthorized access by NFS clients.
We recommend using Active Directory to enable Kerberos security for the NFS share.
Configure a container (group or organizational unit), Kerberos admin, and user accounts in AD.
Creating a Multiprotocol Share Dataset
You can create the dataset and add a multiprotocol (SMB and NFS) share using the Add Dataset screen.
It is best practice to use a dataset instead of a full pool for SMB and/or NFS shares.
Sharing an entire pool makes it more difficult to later restrict access if needed.
Select the dataset you want to be the parent of the multimode dataset, then click Add Dataset.
Enter a name for the dataset. The dataset name populates the SMB Name field and becomes the name of the SMB and NFS shares.
Select Multiprotocol from the Dataset Preset dropdown. The share configuration options display with Create NFS Share and Create SMB Share preselected.
(Optional) Click Advanced Options to customize other dataset settings such as quotas, compression level, encryption, and case sensitivity.
See Creating Datasets for more information on adding and customizing datasets.
Click Save. TrueNAS creates the dataset and the SMB and NFS shares. Next edit both shares.
After editing the shares, edit the dataset ACL.
Editing the SMB Share
After creating the multimode share on the Add Dataset screen, go to Shares and edit the SMB share.
Select the share on the Windows Shares (SMB) widget and then click Edit.
The Edit SMB screen opens showing the Basic Options settings.
Select Multi-protocol (NFSv4/SMB) shares from the Purpose dropdown list to apply pre-determined Advanced Options settings for the share.
Enable Kerberos security: click Advanced Options, then select KRB5 from the Security dropdown to use the Kerberos ticket generated when you joined Active Directory.
If needed, select Read-Only to prohibit writing to the share.
Click Save.
Restart the service when prompted.
Adjusting the Dataset ACL
After joining AD, creating a multimode dataset and the SMB and NFS shares, adjust the dataset/file system ACL to match the container and users configured in AD.
You can modify dataset permissions from the Shares screen using the security Edit Filesystem ACL icon to open the Edit ACL screen for each share (SMB and NFS).
Using this method, you select the share on the Windows (SMB) Share widget and click the icon to edit the dataset properties for the SMB share, but you must repeat the process for the NFS share.
Or you can go to Datasets, select the name of the dataset created for the multiprotocol share to use and scroll down to the Permissions widget for the dataset.
Click Edit to open the Edit ACL screen.
Check the Access Control List to see if the AD group you created is on the list and has the correct permissions.
If not, add this Access Control Entry (ACE) item on the Edit ACL screen for the multimode dataset (or each share).
Enter Group in the Who field or use the dropdown list to select Group.
Type or select the appropriate group in the Group field.
Verify Full Control displays in Permissions. If not, select it from the dropdown list.
Click Save Access Control List to add the ACE item or save changes.
See Permissions for more information on editing dataset permissions.
After setting the dataset permission, connect to the share.
Connecting to a Multiprotocol Share
After creating and configuring the shares, connect to the multiprotocol share using either the SMB or NFS protocol from a variety of client operating systems, including Windows, Apple, FreeBSD, and Linux/Unix systems.
Provides information on SMB shares and instructions for creating a basic share and setting up various specific SMB share configurations.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
About Windows (SMB) Shares
SMB (also known as CIFS) is the native file-sharing system in Windows.
SMB shares can connect to most operating systems, including Windows, MacOS, and Linux.
TrueNAS can use SMB to share files among single or multiple users or devices.
SMB supports a wide range of permissions, security settings, and advanced permissions (ACLs) on Windows and other systems, as well as Windows Alternate Streams and Extended Metadata.
SMB is suitable for managing and administering large or small pools of data.
TrueNAS uses Samba to provide SMB services.
The SMB protocol has multiple versions. An SMB client typically negotiates the highest supported SMB protocol during SMB session negotiation.
Industry-wide, SMB1 protocol (sometimes referred to as NT1) usage is deprecated for security reasons.
As of SCALE 22.12 (Bluefin) and later, TrueNAS does not support SMB client operating systems that are labeled by their vendor as End of Life or End of Support.
This means MS-DOS (including Windows 98) clients, among others, cannot connect to TrueNAS SCALE SMB servers.
The upstream Samba project that TrueNAS uses for SMB features notes in the 4.11 release that the SMB1 protocol is deprecated and warns portions of the protocol might be further removed in future releases.
Administrators should work to phase out any clients using the SMB1 protocol from their environments.
However, most SMB clients support SMB 2 or 3 protocols, even when not default.
Legacy SMB clients rely on NetBIOS name resolution to discover SMB servers on a network.
TrueNAS disables the NetBIOS Name Server (nmbd) by default. Enable it on the Network > Global Settings screen if you require this functionality.
MacOS clients use mDNS to discover SMB servers present on the network. TrueNAS enables the mDNS server (avahi) by default.
Windows clients use WS-Discovery to discover SMB servers, but network discovery can be disabled by default depending on the Windows client version.
Discoverability through broadcast protocols is a convenience feature and is not required to access an SMB server.
Sharing Administrator Access
SCALE has implemented administrator roles to further comply with FIPS security hardening standards.
The Sharing Admin role allows the user to create new shares and datasets, modify dataset ACL permissions, and start or restart the sharing service. It does not permit modifying user accounts, so a sharing administrator cannot grant the role to new or existing users.
Full Admin users retain full control over shares and over creating and modifying user accounts.
How do I add an SMB Share?
Verify Active Directory connections are working and error free before adding an SMB share.
If configured but not working or in an error state, AD cannot bind and prevents starting the SMB service.
Adding an SMB share to your system involves several steps to create the share and get it working.
Create the SMB share user account.
You can also use directory services like Active Directory or LDAP to provide additional user accounts.
If setting up an external SMB share, we recommend using Active Directory or LDAP, or at a minimum synchronizing the user accounts between systems.
TrueNAS allows creating the dataset and share at the same time from either the Add Dataset screen or the Add SMB share screen.
Use either option to create a basic SMB share, but when customizing share presets use the Add SMB screen to create the share and dataset.
The procedure in this article provides the instructions to add the dataset while adding the share using the Add SMB screen.
Modify the share permissions.
After adding or modifying the user account for the share, edit the dataset permissions.
TrueNAS must be joined to Active Directory or have at least one local SMB user before creating an SMB share. When creating an SMB user, ensure that Samba Authentication is enabled.
You cannot access SMB shares using the root user, TrueNAS built-in user accounts, or those without Samba Authentication selected.
To add users or edit users, go to Credentials > Users to add or edit the SMB share user(s).
Click Add and create as many new user accounts as needed.
If joined to Active Directory, Active Directory can create the TrueNAS accounts.
Enter the values in each required field, verify Samba Authentication is selected, then click Save.
For more information on the fields and adding users, see Creating User Accounts.
By default, all new local users are members of a built-in group called builtin_users.
You can use a group to grant access to all local users on the server or add more groups to fine-tune permissions for large numbers of users.
Why not just allow anonymous access to the share?
Anonymous or guest access to the share is possible, but it is a security vulnerability and not recommended for Enterprise or systems with more than one SMB share administrator account.
Using a guest account also increases the likelihood of unauthorized users gaining access to your data.
Major SMB client vendors are deprecating guest access, partly because signing and encryption are impossible for guest sessions.
What about LDAP users?
If you want LDAP server users to access the SMB share, go to Credentials > Directory Services.
If you configured an LDAP server, select the server and click Edit to display the LDAP configuration screen.
If not configured, click Configure LDAP to display the LDAP configuration screen.
Click Advanced Options and select Samba Schema (DEPRECATED - see the help text).
Only enable LDAP authentication for the SMB share if you require it. Your LDAP server must have Samba attributes.
Support for Samba Schema is officially deprecated in Samba 4.13.
Samba Schema is no longer in Samba after 4.14.
Users should begin upgrading legacy Samba domains to Samba AD domains.
Local TrueNAS user accounts can no longer access the share.
Adding an SMB Share and Dataset
You can create an SMB share while creating a dataset on the Add Dataset screen or create the dataset while creating the share on the Add SMB Share screen.
This article covers adding the dataset on the Add SMB Share screen.
It is best practice to use a dataset instead of a full pool for SMB and/or NFS shares.
Sharing an entire pool makes it more difficult to later restrict access if needed.
What are ZFS dataset setting defaults?
TrueNAS creates the ZFS dataset with these settings:
ACL Mode set to Restricted
The ACL Type influences the ACL Mode setting. When ACL Type is set to Inherit, you cannot change the ACL Mode setting.
When ACL Type is set to NFSv4, you can change the ACL Mode to Restricted.
Case Sensitivity set to Insensitive
TrueNAS also applies a default access control list to the dataset.
This default ACL is restrictive and only grants access to the dataset owner and group.
You can modify the ACL later according to your use case.
To create a basic Windows SMB share and a dataset, go to Shares, then click Add on the Windows Shares (SMB) widget to open the Add Share screen.
Enter or browse to select the SMB share mount path (the parent dataset where you want to add a dataset for this share) to populate the Path field.
The Path is the directory tree on the local file system that TrueNAS exports over the SMB protocol.
Browsing to select a path
Click the arrow to the left of the folder icon to expand that folder and show any child datasets and directories.
A solid folder icon shows for datasets and an outlined folder for directories.
A selected dataset or directory folder and name shows in blue.
Click Create Dataset. Enter the name for the share dataset in the Create Dataset dialog, then click Create.
The system creates the new dataset.
Name populates with the dataset name entered and becomes the SMB share name.
This forms part of the share pathname when SMB clients perform an SMB tree connect.
Because of how the SMB protocol uses the name, it must be less than or equal to 80 characters.
Do not use invalid characters as specified in Microsoft documentation MS-FSCC section 2.1.6.
If you change the name, follow the naming conventions for:
If creating an external SMB share, enter the hostname or IP address of the system hosting the SMB share and the name of the share on that system.
Enter as EXTERNAL:ip address\sharename in Path, then change Name to EXTERNAL with no special characters.
(Optional) Select a preset from the Purpose dropdown list to apply.
The selected preset locks or unlocks pre-determined Advanced Options settings for the share.
To retain control over all the share Advanced Options settings, select No presets or Default share parameters.
To create an alternative to Home Shares, select Private SMB Datasets and Shares.
See Setting Up SMB Home Shares for more information on replacing this legacy feature with private SMB shares and datasets.
SMB Purpose Options
No presets: Retains control over all Advanced Options settings. This option gives users the flexibility to manually configure SMB parameters.
Default share parameters: The default option when you open the Add SMB screen, and the one to use for any basic SMB share. These settings provide a baseline configuration that ensures compatibility and functionality, and allow users to set up shares with commonly implemented options and behaviors.
Basic time machine share: Sets up a basic time machine share. This provides a centralized location for users to store and manage system backups.
Multi-User time machine: Sets up a multi-user time machine share. This option allows multiple users to use TrueNAS as a centralized backup solution while ensuring that the backups each user makes are kept separate and secure from one another.
Multi-Protocol (NFSv3/SMB) shares: Sets up a multi-protocol (NFSv3/SMB) share. Choosing this option allows NFS and SMB users to access TrueNAS at the same time.
Private SMB Datasets and Shares: Creates a share that maps to a path determined by the username of the authenticated user. TrueNAS creates a unique, private dataset matching the username.
SMB WORM. Files become read-only via SMB after 5 minutes: The SMB WORM preset only impacts writes over the SMB protocol. Before deploying this option in a production environment, determine whether the feature meets your requirements. Employing this option ensures data written to the share cannot be modified or deleted, increasing overall data integrity and security.
(Optional) Enter a Description to help explain the share purpose.
Select Enabled to allow sharing of this path when the SMB service is activated.
Leave it cleared to disable the share without deleting the configuration.
(Optional) Click Advanced Options to configure audit logging or other advanced configuration settings such as changing Case Sensitivity.
Click Save to create the share and add it to the Shares > Windows (SMB) Shares list.
Enable the SMB service when prompted.
Configuring Share Advanced Options Settings
For a basic SMB share, using the Advanced Options settings is not required, but if you set Purpose to No Presets, click Advanced Options to finish customizing the SMB share for your use case.
The following are possible use cases. See SMB Shares Screens for all settings and other possible use cases.
Setting Up Guest Access
Not a recommended configuration and adds security vulnerabilities!
To allow guest access to the share, select Allow Guest Access.
The privileges are the same as the guest account.
Windows 10 version 1709 and Windows Server version 1903 disable guest access by default.
Additional client-side configuration is required to provide guest access to these clients.
macOS clients: Attempting to connect as a user that does not exist in TrueNAS does not automatically connect as the guest account.
To log in as the guest account from macOS, specifically choose Connect As: Guest.
See the Apple documentation for more details.
To prohibit writes to the share, select Export Read-Only.
To restrict share visibility to users with read or write access to the share, select Access Based Share Enumeration.
See the smb.conf manual page.
Setting Up Hosts Allow and Hosts Deny
Use the Hosts Allow and Hosts Deny options to allow or deny specific host names and IP addresses.
Use the Hosts Allow field to enter a list of allowed host names or IP addresses.
Separate entries by pressing Enter.
Entering values in Hosts Allow restricts access to only the addresses entered in this list!
Using this list can break share access for all other IP addresses or host names.
You can find a more detailed description with examples here.
Use the Hosts Deny field to enter a list of denied host names or IP addresses. Separate entries by pressing Enter.
Hosts Allow and Hosts Deny work together to produce different situations:
Leaving both Hosts Allow and Hosts Deny free of entries allows any host to access the SMB share.
Adding entries to the Hosts Allow list but not the Hosts Deny list allows only the hosts on the Hosts Allow list to access the share.
Adding entries to the Hosts Deny list but not Hosts Allow list allows all hosts not on the Hosts Deny list to access the share.
Adding entries to both the Hosts Allow and Hosts Deny lists allows hosts on the Hosts Allow list to access the share, and also allows hosts not on either list to access the share.
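As an illustration, the underlying smb.conf syntax behind these fields accepts host names, complete IP addresses, and network prefixes. All entries below are hypothetical:

```
hosts allow = 192.168.1. fileserver.example.com 10.0.0.5
hosts deny  = 10.0.0.
```

With this hypothetical configuration, any host in 192.168.1.0/24, the host fileserver.example.com, and 10.0.0.5 can access the share; the rest of the 10.0.0.0/24 network is denied; hosts matching neither list are allowed.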
Apple Filing Protocol (AFP) Compatibility
AFP shares are deprecated and not available in TrueNAS.
To customize your SMB share to work with a migrated AFP share or with macOS clients, use the Advanced Options settings provided for these use cases:
Legacy AFP Compatibility controls how the SMB share reads and writes data.
Leave unset for the share to behave like a standard SMB share.
Only set this when the share originated as an AFP sharing configuration.
Pure SMB shares or macOS SMB clients do not require legacy compatibility.
Use Apple-style Character Encoding converts NTFS illegal characters in the same manner as macOS SMB clients.
By default, Samba uses a hashing algorithm for NTFS illegal characters.
Private SMB Datasets and Shares
Use to set up an alternative to the legacy Home Shares function.
This option allows adding private datasets and shares for individual users, and is a useful alternative to creating home shares for each user.
See Setting Up SMB Home Shares for more information.
Enabling SMB Audit Logging
To enable SMB audit logging, from either the Add SMB or Edit SMB screens, click Advanced Options, scroll down to Audit Logging and select Enable.
Enabling ACL Support
To add ACL support to the share, select Enable ACL under Advanced Options on either the Add SMB or Edit SMB screens.
See Managing SMB Shares for more on configuring permissions for the share and the file system.
Tuning ACLs for SMB Shares
There are two levels for setting SMB share permissions: at the share, or on the dataset associated with the share.
See Managing SMB Shares for more information on these options.
See Permissions for more information on dataset permissions.
Tuning the Share ACL
You cannot access SMB shares with the root user. Change the SMB dataset ownership to the admin user (Full Admin user).
Using the Edit Share ACL option configures the permissions for just the share, but not the dataset the share uses.
The permissions apply at the SMB share level for the selected share.
They do not apply to other file sharing protocol clients, other SMB shares that export the same share path (i.e., /poolname/shares specified in Path), or to the dataset the share uses.
After creating the share and dataset, modify the share permissions to grant user or group access.
To modify permissions at the share level, click the share Edit Share ACL icon to open the Edit Share ACL screen.
Select User in Who, select the user name in User, then set the permission level using Permissions and Type.
(Optional) Click Add then select Group, the group name, and then set the group permissions.
Click Save.
See Permissions for more information on setting user and group settings.
Tuning the Dataset (Filesystem) Permissions
You cannot access SMB shares with the root user. Change the SMB dataset ownership to the admin user (Full Admin user).
To configure share owner, user and group permissions for the dataset Access Control List (ACL), use the Edit Filesystem ACL option.
This modifies the ACL for the SMB share path (defined in Path) at the dataset level.
To customize permissions, add Access Control Entries (ACEs) for users or groups.
To access the dataset (filesystem) permissions, click the security Edit Filesystem ACL icon on the share row to open the Edit ACL screen for the dataset the share uses.
You can also go to Datasets, select the dataset the share uses (same name as the share), then click Edit on the Permissions widget to open the Edit ACL screen.
Samba Authentication is selected by default when SMB share users are created or added to TrueNAS SCALE, either manually or through a directory service, and these users are automatically added to the builtin_users group.
Users in this group can add or modify files and directories in the share.
The share dataset ACL includes an ACE for the builtin_users group, and the @owner and @group are set to root by default.
Change the @owner and @group values to the admin (Full admin) user and click Apply under each.
To restrict or grant additional file permissions for some or all share users, do not modify the builtin_users group entry.
Best practice is to create a new group for the share users that need different permissions, reassign those users to the new group, and remove them from the builtin_users group.
Next, edit the ACL by adding a new ACE entry for the new group, then modify the permissions of that group.
Home users can modify the builtin_users group ACE entry to grant FULL_CONTROL.
If you need to restrict or increase permissions for some share users, create a new group and add an ACE entry with the modified permissions.
Changing the builtin_users Group Permissions
To change permissions for the builtin_users group, go to Datasets, select the share dataset, and scroll down to the Permissions widget.
Click Edit to open the Edit ACL screen.
Locate the ACE entry for the builtin-users group and click on it.
Check the Access Control List area to see if the permissions are correct.
Begin typing builtin_users in the Group field until it displays, then click on it to populate the field.
Select Basic in the Permissions area then select the level of access you want to assign in the Permissions field.
For more granular control, select Advanced, then select each permission option to include.
Click Save Access Control List to add the ACE item or save changes.
Adding a New Share Group
To change the permission level for some share users, add a new group, reassign the user(s) to the new group, then modify the share dataset ACL to include this new group and the desired permissions.
Go to Local Groups, click Add and create the new group.
Go to Local Users, select a user, click Edit, remove the builtin_users entry from Auxiliary Groups, and add the new group.
Click Save. Repeat this step for each user or change the group assignment in the directory server to the new group.
Edit the filesystem (dataset) permissions. Use one of the methods to access the Edit ACL screen for the share dataset.
Add a new ACE entry for the new group. Click Add Item.
Select Group in the Who field, type the name into the Group field, then set the permission level.
Select Basic in the Permissions area then select the level of access you want to assign in the Permissions field.
For more granular control, select Advanced then select on each permission option to include.
Click Save Access Control List.
If restricting this group to read-only access and the share dataset is nested under parent datasets, edit the ACL of each parent dataset.
Add an ACE entry for the new group, and select Traverse.
Keep the parent dataset permission set to either FULL_CONTROL or MODIFY, but select Traverse.
Using the Traverse Permission
If a share dataset is nested under other datasets (parents), you must add the ACL Traverse permission at the parent dataset level(s) to allow read-only users to move through directories within an SMB share.
After adding the group and assigning it to the user(s), next modify the dataset ACLs for each dataset in the path (parent datasets and the share dataset).
Add the new group to the share ACL. Use one of the methods to access the Edit ACL screen for the share dataset.
Add a new ACE entry for the new group. Click Add Item to create an ACE for the new group.
Select Group in the Who field, type the name into the Group field, then set the permission level.
Click Save Access Control List.
Return to the Datasets screen, locate the parent dataset for the share dataset, use one of the methods to access the Edit ACL screen for the parent dataset.
Add a new ACE entry for the new group. Click Add Item to create an ACE for the new group.
Select Group in the Who field, type the name into the Group field, then select Traverse.
Click Save Access Control List.
Repeat for each parent dataset in the path. This allows the restricted share group to navigate through the directories in the path to the share dataset.
Starting the SMB Service
To connect to an SMB share, start the SMB service.
After adding a new share, TrueNAS prompts you to either start or restart the SMB service.
You can also start the service from the Windows (SMB) Share widget or on the System Settings > Services screen from the SMB service row.
Starting the Service Using the Windows SMB Share
From the Sharing screen, click more_vert on the Windows (SMB) Shares widget to display the service options: Turn Off Service if the service is running, or Turn On Service if it is not.
Each SMB share on the list also has a toggle to enable or disable the service for that share.
Starting the Service Using System Settings
To make the SMB share available on the network, go to System Settings > Services and click the toggle for SMB.
Set Start Automatically if you want the service to activate when TrueNAS boots.
Configuring the SMB Service
Configure the SMB service by clicking Config Service from the more_vert dropdown menu on the Windows (SMB) Shares widget header or by clicking edit on the Services screen.
Unless you need a specific setting or are configuring a unique network environment, we recommend using the default settings.
Mounting the SMB Share
The instructions in this section cover mounting the SMB share on Linux, Windows, Apple, and FreeBSD systems.
Mounting on a Linux System
Verify that your Linux distribution has the required CIFS packages installed.
Create a mount point with the sudo mkdir /mnt/smb_share command.
Mount the volume with the sudo mount -t cifs //computer_name/share_name /mnt/smb_share command.
If your share requires user credentials, add the -o username= option, followed by the TrueNAS username, between cifs and the share address.
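The steps above can be sketched as a short script. The server name, share name, and username below are all hypothetical placeholders for your own values; the script assembles the mount command for review rather than running it:

```shell
# All names here are hypothetical -- substitute your own before running.
SERVER="truenas.local"    # TrueNAS host name or IP address
SHARE="smb_share"         # SMB share name
SMB_USER="tom"            # TrueNAS user with access to the share
MNT="/mnt/smb_share"      # client-side mount point

# Assemble the mount command; mount prompts for the password at run time.
MOUNT_CMD="sudo mount -t cifs -o username=$SMB_USER //$SERVER/$SHARE $MNT"
echo "$MOUNT_CMD"
```

Create the mount point first (sudo mkdir -p /mnt/smb_share), then run the printed command. Passing a credentials file with -o credentials=/path/to/file avoids typing the password interactively.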
Mounting on a Windows System
To permanently mount the SMB share in Windows, map a drive letter in the computer for the user to the TrueNAS IP and share name.
Select a drive letter from the bottom of the alphabet rather than from the top to avoid conflicting with drive letters already assigned to other devices.
The example below uses Z.
Open the command line and run the following command with the appropriate drive letter, TrueNAS system name or IP address, and the share name.
net use Z: \\TrueNAS_name\share_name /PERSISTENT:YES
Where:
Z is the drive letter to map to TrueNAS and the share
TrueNAS_name is either the host name or system IP address
share_name is the name given to the SMB share
To temporarily connect to a share, open a Windows File Explorer window, type \\TrueNAS_name\share_name in the address bar, then enter the user credentials to authenticate.
Windows remembers the credentials and reuses them each time you connect until you reboot the system, after which you are prompted to authenticate again.
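To later remove a persistent mapping created with net use, the standard Windows command is:

```
net use Z: /delete
```

Replace Z with whichever drive letter you mapped to the share.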
Mounting on an Apple System
Have the username and password for the user assigned to the pool or for the guest if the share has guest access ready before you begin.
Open Finder > Go > Connect To Server
Enter the SMB address as follows: smb://192.168.1.111.
Input the username and password for the user assigned to that pool or guest if the share has guest access.
Mounting on a FreeBSD System
Mounting on a FreeBSD system involves creating the mount point, then mounting the volume.
Create a mount point using the sudo mkdir /mnt/smb_share command.
Mount the volume using the sudo mount_smbfs -I computer_name //user@computer_name/share_name /mnt/smb_share command, where user is a TrueNAS user with access to the share.
Setting up an External SMB Share
External SMB shares are essentially redirects to shares on other systems.
Administrators might want to use this when managing multiple TrueNAS systems with SMB shares and if they do not want to keep track of which shares live on which boxes for clients.
This feature allows admins to connect to any of the TrueNAS systems with external shares set up, and to see them all.
Create the SMB share on another SCALE server (for example, system1), as described in Adding an SMB Share above.
We recommend using Active Directory or LDAP when creating user accounts, but at a minimum synchronize user accounts between the system with the share (system1) and on the TrueNAS SCALE system where you set up the external share (for example, system2).
On system2, enter the host name or IP address of the system hosting the SMB share (system1) and the name given to the share on that system as EXTERNAL:ip address\sharename in Path, then change Name to EXTERNAL with no special characters.
Leave Purpose set to Default share parameters, leave Enabled selected, then click Save to add the share redirect.
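For example, with a hypothetical system1 at 192.168.1.50 exporting a share named backups, the Path field on system2 would be entered as:

```
EXTERNAL:192.168.1.50\backups
```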
Repeat the system2 instructions above to add an external redirect (share) on system1 so each system can see the other's SMB shares.
Repeat for each TrueNAS system with SMB shares to add as an external redirect.
Change the auto-populated name to EXTERNAL2, or another name that distinguishes it from the SMB shares on the local system (system1 in this case) and from any other external shares added.
SMB Shares Contents
These tutorials describe creating and managing various specific configurations of SMB shares.
Managing SMB Shares: Provides instructions on managing existing SMB share and dataset ACL permissions.
Using SMB Shadow Copy: Provides information on SMB share shadow copies, enabling shadow copies, and resolving an issue with Microsoft Windows 10 v2004 release.
Setting Up SMB Home Shares: Provides instructions on setting up private SMB datasets and shares as an alternative to legacy SMB home shares.
Provides instructions on managing existing SMB share and dataset ACL permissions.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
To access SMB share management options, go to Shares screen with the Windows (SMB) Shares widget.
The widget lists SMB shares configured on the system but is not the full list.
Each share listed includes four icons that open other screens or dialogs that provide access to share settings.
To see a full list of shares, click on Windows (SMB) Shares launch to open the Sharing > SMB screen.
Each share row on this screen provides access to the other screens or dialogs with share settings.
Sharing Administrator Access
SCALE has implemented administrator roles to further comply with FIPS security hardening standards.
The Sharing Admin role allows the user to create new shares and datasets, modify dataset ACL permissions, and start or restart the sharing service, but it does not permit modifying user accounts or granting the sharing administrator role to new or existing users.
Full Admin users retain full access control over shares and creating/modifying user accounts.
Managing SMB Shares
To manage an SMB share, click the icons on the widget or use the more_vert options on the Sharing > SMB details screen for the share you want to manage. Options are:
Edit opens the Edit SMB screen where you can change settings for the share.
Edit Filesystem ACL opens the Edit ACL screen where you can edit the dataset permissions for the share.
The Dataset Preset option determines the ACL type and therefore the ACL Editor screen that opens.
Delete opens a delete confirmation dialog. Use this to delete the share and remove it from the system. Delete does not affect shared data.
Modifying ACL Permissions for SMB Shares
You have two options that modify ACL permissions for SMB shares:
Edit Share ACL where you modify ACL permissions applying to the entire SMB share.
Edit Filesystem ACL where you modify ACL permissions at the shared dataset level.
See the ACL Primer for general information on Access Control Lists (ACLs), the Permissions article for more details on configuring ACLs, and Edit ACL Screen for more information on the dataset ACL editor screens and setting options.
Configuring the SMB Share ACL
You cannot access SMB shares with the root user. Change the SMB dataset ownership to the admin user (Full Admin user).
Using the Edit Share ACL option configures the permissions for just the share, but not the dataset the share uses.
The permissions apply at the SMB share level for the selected share.
They do not apply to other file sharing protocol clients, other SMB shares that export the same share path (i.e., /poolname/shares specified in Path), or to the dataset the share uses.
After creating the share and dataset, modify the share permissions to grant user or group access.
To modify permissions at the share level, click the share Edit Share ACL icon to open the Edit Share ACL screen.
Select User in Who, select the user name in User, then set the permission level using Permissions and Type.
(Optional) Click Add then select Group, the group name, and then set the group permissions.
Click Save.
See Permissions for more information on setting user and group settings.
Configuring Dataset File System ACL
You cannot access SMB shares with the root user. Change the SMB dataset ownership to the admin user (Full Admin user).
To configure share owner, user and group permissions for the dataset Access Control List (ACL), use the Edit Filesystem ACL option.
This modifies the ACL for the SMB share path (defined in Path) at the dataset level.
To customize permissions, add Access Control Entries (ACEs) for users or groups.
To access the dataset (filesystem) permissions, click the security Edit Filesystem ACL icon on the share row to open the Edit ACL screen for the dataset the share uses.
You can also go to Datasets, select the dataset the share uses (same name as the share), then click Edit on the Permissions widget to open the Edit ACL screen.
Samba Authentication is selected by default when SMB share users are created or added to TrueNAS SCALE, either manually or through a directory service, and these users are automatically added to the builtin_users group.
Users in this group can add or modify files and directories in the share.
The share dataset ACL includes an ACE for the builtin_users group, and the @owner and @group are set to root by default.
Change the @owner and @group values to the admin (Full admin) user and click Apply under each.
To restrict or grant additional file permissions for some or all share users, do not modify the builtin_users group entry.
Best practice is to create a new group for the share users that need different permissions, reassign those users to the new group, and remove them from the builtin_users group.
Next, edit the ACL by adding a new ACE entry for the new group, then modify the permissions of that group.
Home users can modify the builtin_users group ACE entry to grant FULL_CONTROL.
If you need to restrict or increase permissions for some share users, create a new group and add an ACE entry with the modified permissions.
Changing the builtin_users Group Permissions
To change permissions for the builtin_users group, go to Datasets, select the share dataset, and scroll down to the Permissions widget.
Click Edit to open the Edit ACL screen.
Locate the ACE entry for the builtin-users group and click on it.
Check the Access Control List area to see if the permissions are correct.
Begin typing builtin_users in the Group field until it displays, then click on it to populate the field.
Select Basic in the Permissions area then select the level of access you want to assign in the Permissions field.
For more granular control, select Advanced, then select each permission option to include.
Click Save Access Control List to add the ACE item or save changes.
Adding a New Share Group
To change the permission level for some share users, add a new group, reassign the user(s) to the new group, then modify the share dataset ACL to include this new group and the desired permissions.
Go to Local Groups, click Add and create the new group.
Go to Local Users, select a user, click Edit, remove the builtin_users entry from Auxiliary Groups, and add the new group.
Click Save. Repeat this step for each user or change the group assignment in the directory server to the new group.
Edit the filesystem (dataset) permissions. Use one of the methods to access the Edit ACL screen for the share dataset.
Add a new ACE entry for the new group. Click Add Item.
Select Group in the Who field, type the name into the Group field, then set the permission level.
Select Basic in the Permissions area then select the level of access you want to assign in the Permissions field.
For more granular control, select Advanced then select on each permission option to include.
Click Save Access Control List.
If restricting this group to read-only access and the share dataset is nested under parent datasets, edit the ACL of each parent dataset.
Add an ACE entry for the new group, and select Traverse.
Keep the parent dataset permission set to either FULL_CONTROL or MODIFY, but select Traverse.
Using the Traverse Permission
If a share dataset is nested under other datasets (parents), you must add the ACL Traverse permission at the parent dataset level(s) to allow read-only users to move through directories within an SMB share.
After adding the group and assigning it to the user(s), next modify the dataset ACLs for each dataset in the path (parent datasets and the share dataset).
Add the new group to the share ACL. Use one of the methods to access the Edit ACL screen for the share dataset.
Add a new ACE entry for the new group. Click Add Item to create an ACE for the new group.
Select Group in the Who field, type the name into the Group field, then set the permission level.
Click Save Access Control List.
Return to the Datasets screen, locate the parent dataset for the share dataset, use one of the methods to access the Edit ACL screen for the parent dataset.
Add a new ACE entry for the new group. Click Add Item to create an ACE for the new group.
Select Group in the Who field, type the name into the Group field, then select Traverse.
Click Save Access Control List.
Repeat for each parent dataset in the path. This allows the restricted share group to navigate through the directories in the path to the share dataset.
5.5.2 - Adding a Basic Time Machine SMB Share
Provides instructions for adding an SMB share and enabling the basic time machine feature.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
SCALE uses predefined setting options to establish an SMB share that fits a predefined purpose, such as a basic time machine share.
Setting Up a Basic Time Machine SMB Share
To set up a basic time machine share:
Create the user(s) for this SMB share.
Go to Credentials > Local User and click Add.
You can either create the dataset for the share in advance on the Add Dataset screen, or create the dataset while adding the share on the Add SMB screen.
If you want to customize the dataset, use the Add Dataset screen.
To create a basic dataset, go to Datasets.
Default settings include those inherited from the parent dataset.
Select a dataset (root, parent, or child), then click Add Dataset.
Select the Dataset Preset option you want to use. Options are:
Generic for non-SMB share datasets such as iSCSI and NFS share datasets or datasets not associated with application storage.
Multiprotocol for datasets optimized for SMB and NFS multi-mode shares or to create a dataset for NFS shares.
SMB for datasets optimized for SMB shares.
Apps for datasets optimized for application storage.
Generic sets ACL permissions equivalent to Unix permissions 755, granting the owner full control and the group and other users read and execute privileges.
SMB, Apps, and Multiprotocol inherit ACL permissions based on the parent dataset.
If there is no ACL to inherit, one is calculated granting full control to the owner@, group@, members of the builtin_administrators group, and domain administrators.
Modify control is granted to other members of the builtin_users group and directory services domain users.
Apps includes an additional entry granting modify control to group 568 (Apps).
ACL Settings for Dataset Presets:

  Dataset Preset   ACL Type   ACL Mode      Case Sensitivity   Enable atime
  Generic          POSIX      n/a           Sensitive          Inherit
  SMB              NFSv4      Restricted    Insensitive        On
  Apps             NFSv4      Passthrough   Sensitive          Off
  Multiprotocol    NFSv4      Passthrough   Sensitive          Off
If creating an SMB or multi-protocol (SMB and NFS) share, the dataset name auto-populates the share Name field.
If you plan to deploy container applications, the system automatically creates the ix-applications dataset, but this dataset is not used for application data storage.
If you want to store data by application, create the dataset(s) first, then deploy your application.
When creating a dataset for an application, select Apps as the Dataset Preset. This optimizes the dataset for use by an application.
If you want to configure advanced setting options, click Advanced Options.
For the Sync option, we recommend production systems with critical data use the default Standard choice or increase to Always.
Choosing Disabled is only suitable in situations where data loss from system crashes or power loss is acceptable.
Select either Sensitive or Insensitive from the Case Sensitivity dropdown.
The Case Sensitivity setting is found under Advanced Options and is not editable after saving the dataset.
Click Save.
Review the Dataset Preset and Case Sensitivity under Advanced Options on the Add Dataset screen before clicking Save.
You cannot change these or the Name setting after clicking Save.
To use the Add SMB screen, click Add on the Windows (SMB) Shares widget to open the screen.
Set the Path to the existing dataset created for the share, or to where you want to add the dataset, then click Create Dataset.
Enter a name for the dataset and click Create Dataset.
The dataset name populates the share Name field and updates the Path automatically.
The dataset name becomes the share name.
Leave this as the default.
If you change the name, follow standard SMB share naming conventions.
Select Enabled to allow sharing of this path when the SMB service is activated.
Leave it cleared if you want to disable the share without deleting the configuration.
Finish customizing the share, then click Save.
Do not start the SMB service when prompted; start it after configuring the SMB service.
Modifying the SMB Service
Click more_vert on the Windows (SMB) Shares widget, then click Configure Service to open the SMB Service screen.
You can also go to System Settings > Services and scroll down to SMB.
If using the Services screen, click the toggle to turn off the SMB service if it is running, then click the edit Configure icon to open the SMB Service settings screen.
Click Advanced Settings.
Verify or select Enable Apple SMB2/3 Protocol Extension to enable it, then click Save.
Restart the SMB service.
5.5.3 - Using SMB Shadow Copy
Provides information on SMB share shadow copies, enabling shadow copies, and resolving an issue with Microsoft Windows 10 v2004 release.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
Enable Shadow Copies exports ZFS snapshots as Shadow Copies for Microsoft Volume Shadow Copy Service (VSS) clients.
About SMB Shadow Copies
Shadow Copies, also known as the Volume Shadow Copy Service (VSS) or Previous Versions, is a Microsoft service for creating volume snapshots.
You can use shadow copies to restore previous versions of files from within Windows Explorer.
By default, all ZFS snapshots for a dataset underlying an SMB share path are presented to SMB clients through the volume shadow copy service, and are accessible directly over SMB when the hidden ZFS snapshot directory is within the SMB share path.
Before you activate Shadow Copies in TrueNAS, there are a few caveats:
Shadow Copies might not work if you have not updated the Windows system to the latest service pack.
If previous versions of files to restore are not visible, use Windows Update to ensure the system is fully up-to-date.
Shadow Copies support only works for ZFS pools or datasets.
You must configure SMB share dataset or pool permissions appropriately.
Enabling Shadow Copies
To enable shadow copies, go to Shares > Windows (SMB) Shares and locate the share.
If listed on the widget, select the Edit option for the share.
If not listed, click Windows (SMB) Shares launch to open the Sharing > SMB list-view screen.
Select the share, then click the more_vert for the share, then click Edit to open the Edit SMB screen.
Click Advanced Options, scroll down to Other Options, and then select Enable Shadow Copies.
Click Save.
Windows 10 v2004 Issue
Some users might experience issues in the Windows 10 v2004 release where they cannot access network shares.
The problem appears to come from a bug in gpedit.msc, the Local Group Policy Editor.
Unfortunately, setting the Allow insecure guest logon flag value to Enabled in Computer Configuration > Administrative Templates > Network > Lanman Workstation in Windows does not affect the configuration.
To work around this issue, edit the Windows registry.
Use Regedit and go to HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters.
The DWORD AllowInsecureGuestAuth has an incorrect value of 0x00000000. Change this value to 0x00000001 (hexadecimal 1) to allow adjusting the settings in gpedit.msc.
You can use a Group Policy Update to apply the edit to a fleet of Windows machines.
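The same registry change can be scripted from an elevated Command Prompt, using the key path and value name from the workaround above:

```
reg add HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters /v AllowInsecureGuestAuth /t REG_DWORD /d 1 /f
```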
Deleting Shadow Copies
Users with an SMB client cannot delete shadow copies.
Instead, an administrator uses the TrueNAS web interface to remove snapshots.
To disable shadow copies for an SMB share, clear the Enable Shadow Copies checkbox, found under Other Options in the Advanced Options settings on the Edit SMB screen.
Disabling does not prevent access to the hidden .zfs/snapshot directory for a ZFS dataset when it is within the path for an SMB share.
5.5.4 - Setting Up SMB Home Shares
Provides instructions on setting up private SMB datasets and shares as an alternative to legacy SMB home shares.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
SMB Home Shares are a legacy feature for organizations looking to maintain existing SMB configurations.
They are not recommended for new deployments.
Future TrueNAS SCALE releases can introduce instability or require configuration changes affecting this legacy feature.
Replacing SMB Home Shares
TrueNAS does not recommend setting up home shares with the Use as Home Share option, found in the Add SMB and Edit SMB screen Advanced Options settings, in the Other Options section.
This option is for organizations still using the legacy home shares option of adding a single SMB share to provide a personal directory for every user account.
Users wanting to create the equivalent of home shares should use the instructions in the Adding Private SMB Datasets and Shares section below, which covers the recommended method for creating private shares and datasets.
The legacy home shares provide each user a personal home directory when connecting to the share.
These home directories are not accessible by other users.
You can use only one share as the home share, but you can create as many non-home shares as you need or want.
Other options for configuring individual user directories include:
Configure a single share on TrueNAS and provision individual user directories on the client OS.
Create a single SMB share and configure the ACL so that users can create individual directories on the share that inherit write access for the user and grant read access to the administrator.
Create an SMB share using the Private SMB datasets and shares preset, and then create per-user datasets under the umbrella of a single share when users access the share.
Creating an SMB home share requires configuring the system storage and provisioning local users or joining Active Directory.
Adding Private SMB Datasets and Shares
This option allows creating private shares and datasets for the users that require the equivalent of the legacy home share.
It is not intended for every user on the system.
Setting up private SMB shares and datasets prevents the system from showing these to all users with access to the root level of the share.
Examples of private SMB shares are those for backups, system configuration, and users or departments that need to keep information private from other users.
Before setting up SMB shares, check system alerts to verify there are no errors related to connections to Active Directory.
Resolve any issues with Active Directory before proceeding. If Active Directory cannot bind with TrueNAS, you cannot start the SMB service after making changes.
To add private shares and datasets for users that require home directories:
Create the share using the Private SMB Datasets and Shares preset.
Configure the share dataset ACL to use the NFSv4_HOME preset.
Create users either manually or through Active Directory.
Creating the Share and Dataset
TrueNAS must be joined to Active Directory or have at least one local SMB user before creating an SMB share. When creating an SMB user, ensure that Samba Authentication is enabled.
You cannot access SMB shares using the root user, TrueNAS built-in user accounts, or those without Samba Authentication selected.
You can use an existing dataset for the share or create a new dataset.
You can either add a share when you create the dataset for the share on the Add Dataset screen, or create the dataset when you add the share on the Add SMB screen.
If creating a simple SMB share and dataset, use either method; if customizing the dataset, use the Add Dataset screen to access the advanced dataset setting options.
To configure a customized SMB share, use the Add SMB share option that provides access to the advanced setting options for shares.
This procedure covers creating the share and dataset from the Add Share screen.
To create an alternative to the legacy SMB home share:
Go to Shares, click Add on the Windows (SMB) Shares widget to open the Add SMB screen.
If you created the dataset already, you can add the share with the correct share preset on this screen.
If you are creating the share and dataset together you can create both using the correct share preset on this screen.
Browse to or enter the location of an existing dataset or path to where you want to create the dataset to populate the Path for the share.
To add a dataset, click Create Dataset, enter a name for the dataset, then click Create Dataset.
For example, you might create a share and dataset named private.
By default, the dataset name populates the share Name field and becomes the share name; the share and dataset must have the same name. It also updates the Path automatically.
Set Purpose to the Private SMB Dataset and Share preset and click Advanced Options to show the additional settings.
Configure the options you want to use.
Scroll down to Other Options and select Export Recycle Bin to allow moving files deleted in the share to a recycle bin in that dataset.
Files are renamed to a per-user subdirectory within the .recycle directory at the root of the SMB share if the path is on the same dataset as the share.
If the dataset has nested datasets, the directory is at the root of the current dataset. In this case, there is no automatic deletion based on file size.
Click Save.
Enable or restart the SMB service when prompted and make the share available on your network.
After saving, set the ACL permissions if not already configured for the dataset.
Setting Dataset ACL Permissions
After creating the share and dataset, edit ACL permissions.
You can access the Edit ACL screen either from the Datasets or the Shares screens.
If starting on the Datasets screen, select the dataset row, then click Edit on the Permissions widget to open the Edit ACL screen.
See Setting Up Permissions for more information on editing dataset permissions.
If starting on the Shares screen, select the share on the Windows (SMB) Share widget, then click Edit Filesystem ACL to open the Edit ACL screen.
Select the option to edit the file system ACL, not the share permissions.
See SMB Shares for detailed information on editing the share dataset permissions.
To set the permissions for the private dataset and share (the home share alternative scenario), select the HOME (for a POSIX ACL) or NFSv4_HOME (for an NFSv4 ACL) preset option to correctly configure dataset permissions.
Click the Owner dropdown and select the administration user with full control, then repeat for Group.
You can set the owning group to your Active Directory domain admins. Click Apply Owner and Apply Group.
Next, click Use Preset and choose NFS4_HOME. If the dataset has a POSIX ACL the preset is HOME.
Click Continue, then click Save Access Control List.
Next, add the users that need a private dataset and share.
As of SCALE 22.12 (Bluefin) and later, TrueNAS does not support SMB client operating systems that are labeled by their vendor as End of Life or End of Support.
This means MS-DOS (including Windows 98) clients, among others, cannot connect to TrueNAS SCALE SMB servers.
The upstream Samba project that TrueNAS uses for SMB features notes in the 4.11 release that the SMB1 protocol is deprecated and warns portions of the protocol might be further removed in future releases.
Administrators should work to phase out any clients using the SMB1 protocol from their environments.
Adding Local Share Users
Go to Credentials > Users and click Add.
Create a new user name and password. For home directories, make the username all lowercase.
Configure permissions for the user the private share is for, allowing login access to the share and the ability to see a folder matching their username.
By default, the user Home Directory is set to /var/empty.
You must change this to the path for the new parent dataset created for home directories.
Select the path /mnt/poolname/datasetname/username where poolname is the name of the pool where you added the share dataset, datasetname is the name of the dataset associated with the share, and username is the username (all lowercase) and is also the name of the home directory for that username.
Select Create Home Directory.
Click Save. TrueNAS adds the user and creates the home directory for the user.
If existing users require access to a home share, go to Credentials > Users, select the user, click Edit and add the home directory as described above.
SCALE 24.04 changes the default user home directory location from /nonexistent to /var/empty.
This new directory is an immutable directory shared by service accounts and accounts that should not have a full home directory.
The 24.04.01 maintenance release introduces automated migration to force home directories of existing SMB users from /nonexistent to /var/empty.
Why the change?
TrueNAS uses the pam_mkhomedir PAM module in the pam_open_session configuration file to automatically create user home directories if they do not exist.
pam_mkhomedir returns PAM_PERM_DENIED if it fails to create a home directory for a user, which eventually turns into a pam_open_session() failure.
This does not impact other PAM API calls, for example, pam_authenticate().
TrueNAS 24.04 (or newer) does not include the customized version of pam_mkhomedir used in TrueNAS 13.0 and earlier or 13.3 releases.
This version of pam_mkhomedir specifically avoided trying to create the /nonexistent directory.
This led to some circumstances where users could create the /nonexistent directory on TrueNAS versions before 24.04.
Starting in TrueNAS 24.04 (Dragonfish), the root file system of TrueNAS is read-only, which prevents pam_mkhomedir from creating the /nonexistent directory in cases where it previously did.
This results in a permissions error if pam_open_session() is called by an application for a user account that has Home Directory set to /nonexistent.
Adding Share Users with Directory Services
You can use Active Directory or LDAP to create share users.
If not already created, add a pool, then join Active Directory.
When creating the share for this dataset, use the SMB preset for the dataset but do not add the share from the Add Dataset screen.
Do not share the root directory!
Go to Shares and follow the instructions listed above using the Private SMB Dataset and Share preset, and then modifying the file system permissions of the dataset to use the NFSv4_HOME ACL preset.
5.5.5 - SMB Share MacOS Client Limitations When Using Decomposed Unicode Characters
Provides information on SMB share MacOS client limitations when using decomposed unicode characters.
There are two normalization forms for a unicode character with diacritical marks: decomposed (NFD) and pre-composed (NFC).
Take for example the character ä (a + umlaut) and the encoding differences between NFC (b'\xc3\xa4') and NFD (b'a\xcc\x88').
The MacOS SMB client historically and at present forces normalization of unicode strings to NFC prior to generating network traffic to the remote SMB server.
The practical impact of this is that a file that contains NFD diacritics on a remote SMB server (TrueNAS, Windows, etc.) might be visible in the directory listing in the MacOS SMB client and thereby Finder, but any operations on the file (edits, deletions, etc.) have undefined behaviors since a file with NFC diacritics does not exist on the remote server.
>>> os.listdir(".")
['220118_M_HAN_MGK_X_4_Entwässerung.pdf']
>>> os.unlink('220118_M_HAN_MGK_X_4_Entwässerung.pdf')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
FileNotFoundError: [Errno 2] No such file or directory: '220118_M_HAN_MGK_X_4_Entwässerung.pdf'
>>> os.listdir(".")
['220118_M_HAN_MGK_X_4_Entwässerung.pdf']
Above is a short example of a MacOS SMB client attempting to delete a file with NFD normalization on a remote Windows server.
Short of Apple providing a fix, the only strategy for an administrator to address these issues is to rename the files to the pre-composed (NFC) form. Unfortunately, normalization is not guaranteed to be lossless.
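The normalization difference, and a bulk rename to NFC, can be sketched with Python's standard unicodedata module. The rename helper is illustrative only; test it against a copy of the data first:

```python
import os
import unicodedata

# The same character in both normalization forms:
nfc = unicodedata.normalize("NFC", "ä")   # pre-composed
nfd = unicodedata.normalize("NFD", "ä")   # decomposed (a + combining umlaut)
print(nfc.encode("utf-8"))  # b'\xc3\xa4'
print(nfd.encode("utf-8"))  # b'a\xcc\x88'

def rename_to_nfc(directory):
    """Rename files whose names are not in NFC form to the NFC equivalent.
    Note: normalization is not guaranteed to be lossless for every name."""
    for name in os.listdir(directory):
        normalized = unicodedata.normalize("NFC", name)
        if normalized != name:
            os.rename(os.path.join(directory, name),
                      os.path.join(directory, normalized))
```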
5.5.6 - Setting Up SMB Multichannel
Provides information on setting up SMB multichannel.
When creating a share, do not attempt to set up the root or pool-level dataset for the share.
Instead, create a new dataset under the pool-level dataset for the share.
Setting up a share using the root dataset leads to storage configuration issues.
SMB multichannel allows servers to use multiple network connections simultaneously by combining the bandwidth of several network interface cards (NICs) for better performance.
SMB multichannel does not function if you combine NICs into a LAGG.
Activating Multichannel in TrueNAS SCALE
If you already have clients connected to SMB shares, disconnect them before activating multichannel.
Go to System Settings > Services and click the edit edit icon for the SMB service.
Click Advanced Settings, then enable Multichannel.
Save and restart the SMB service, then reconnect all clients to their SMB Shares.
Validating Multichannel Activated In Windows
After you connect a client to their SMB share, open PowerShell as an administrator on the client, then enter Get-SmbMultichannelConnection. The terminal should list multiple server IPs.
6 - Data Protection
Tutorials related to configuring data backup features in TrueNAS SCALE.
The Data Protection section allows users to set up multiple redundant tasks that protect or back up data in case of drive failure.
Scrub Tasks and S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) Tests can provide early disk failure alerts by identifying data integrity problems and detecting various indicators of drive reliability.
Cloud Sync, Periodic Snapshot, Rsync, and Replication tasks provide backup storage for data and allow users to revert the system to a previous configuration or point in time.
Cloud Sync Tasks: Tutorials for configuring and managing data backups between TrueNAS and various third-party cloud service providers.
Backing Up Google Drive to TrueNAS SCALE: Provides instructions on adding Google Drive cloud credentials using the Add Cloud Credentials and Add Cloud Sync Task screens. It also provides information on working with Google-created content.
Adding a Storj Cloud Sync Task: Provides instructions on how to set up a Storj cloud sync task and how to configure a Storj-TrueNAS account to work with SCALE cloud credentials and cloud sync tasks.
Creating VMWare Snapshots: Provides instructions for creating ZFS snapshots when using TrueNAS as a VMWare datastore.
Managing S.M.A.R.T. Tests: Provides instructions on running S.M.A.R.T. tests manually or automatically, using Shell to view the list of tests, and configuring the S.M.A.R.T. test service.
Replication Tasks: Tutorials for configuring ZFS snapshot replication tasks in TrueNAS SCALE.
Setting Up a Local Replication Task: Provides instructions on adding a replication task using different pools or datasets on the same TrueNAS system.
Advanced Replication Tasks: Provides instructions on using Advanced Replication and lists other tutorials for configuring advanced ZFS snapshot replication tasks in TrueNAS SCALE.
6.1 - Managing Scrub Tasks
Provides instructions on running scrub and resilver tasks.
When TrueNAS performs a scrub, ZFS scans the data on a pool.
Scrubs identify data integrity problems, detect silent data corruptions caused by transient hardware issues, and provide early disk failure alerts.
Default Scrub Tasks
TrueNAS generates a default scrub task when you create a new pool and sets it to run every Sunday at 12:00 AM.
Adjusting Scrub/Resilver Priority
Resilvering is the process of copying data to a replacement disk, and completing it as quickly as possible is important.
Resilvering is a high-priority task.
It can run in the background while the system performs other functions, but this can put a higher demand on system resources.
Increasing the priority of resilver tasks helps them finish faster because the system runs tasks with a higher priority ranking first.
Use the Resilver Priority screen to schedule a time when a resilver task can become a higher priority for the system and when the additional I/O or CPU use does not affect normal usage.
Select Enabled, then use the dropdown lists to select a start time in Begin and time to finish in End to define a priority period for the resilver.
To select the day(s) to run the resilver, use the Days of the Week dropdown to select when the task can run with the priority given.
A resilver process running during the time frame defined between the beginning and end times likely runs faster than during times when demand on system resources is higher.
We advise you to avoid putting the system under any intensive activity or heavy loads (replications, SMB transfers, NFS transfers, Rsync transfers, S.M.A.R.T. tests, pool scrubs, etc.) during a resilver process.
Creating New Scrub Tasks
TrueNAS needs at least one data pool to create a scrub task.
To create a scrub task for a pool, go to Data Protection and click ADD in the Scrub Tasks window.
Select a preset schedule from the dropdown list or click Custom to create a new schedule for when to run a scrub task. Custom opens the Advanced Scheduler window.
Advanced Scheduler
Choosing a Presets option populates the rest of the fields.
To customize a schedule, enter crontab values for the Minutes/Hours/Days.
These fields accept standard cron values.
The simplest option is to enter a single number in the field.
The task runs when the time value matches that number.
For example, entering 10 means that the job runs when the time is ten minutes past the hour.
An asterisk (*) means match all values.
You can set specific time ranges by entering hyphenated number values.
For example, entering 30-35 in the Minutes field sets the task to run at minutes 30, 31, 32, 33, 34, and 35.
You can also enter lists of values.
Enter individual values separated by a comma (,).
For example, entering 1,14 in the Hours field means the task runs at 1:00 AM (0100) and 2:00 PM (1400).
A slash (/) designates a step value.
For example, entering * in Days runs the task every day of the month. Entering */2 runs it every other day.
Combining the above examples creates a schedule running a task each minute from 1:30-1:35 AM and 2:30-2:35 PM every other day.
TrueNAS has an option to select which Months the task runs.
Leaving each month unset is the same as selecting every month.
The Days of Week schedules the task to run on specific days in addition to any listed days.
For example, entering 1 in Days and setting Wed for Days of Week creates a schedule that starts a task on the first day of the month and every Wednesday of the month.
The Schedule Preview displays when the task runs based on the current settings.
Examples of CRON syntax
TrueNAS lets users create flexible schedules using the advanced cron syntax.
The tables below have some examples:
Syntax: *
Meaning: every item.
Examples: * (minutes) = every minute of the hour; * (days) = every day.
Syntax: */N
Meaning: every Nth item.
Examples: */15 (minutes) = every 15th minute of the hour; */3 (days) = every 3rd day; */3 (months) = every 3rd month.
Syntax: comma and hyphen/dash
Meaning: each stated item (comma); each item in a range (hyphen/dash).
Examples: 1,31 (minutes) = on the 1st and 31st minute of the hour; 1-3,31 (minutes) = on the 1st to 3rd minutes inclusive, and the 31st minute, of the hour; mon-fri (days) = every Monday to Friday inclusive (every weekday); mar,jun,sep,dec (months) = every March, June, September, and December.
You can specify days of the month or days of the week.
Desired schedule
Values to enter
3 times a day (at midnight, 08:00 and 16:00)
months=*; days=*; hours=0/8 or 0,8,16; minutes=0 (Meaning: every day of every month, when hours=0/8/16 and minutes=0)
Every Monday/Wednesday/Friday, at 8.30 pm
months=*; days=mon,wed,fri; hours=20; minutes=30
1st and 15th day of the month, during October to June, at 00:01 am
months=jan-jun,oct-dec; days=1,15; hours=0; minutes=1 (cron month ranges do not wrap, so October to June is written as two ranges)
Every 15 minutes during the working week, which is 8am - 7pm (08:00 - 19:00) Monday to Friday
Note that this requires two tasks to achieve: (1) months=*; days=mon-fri; hours=8-18; minutes=*/15 (2) months=*; days=mon-fri; hours=19; minutes=0. The second scheduled item is needed to execute at 19:00; otherwise the runs would stop at 18:45. Another workaround is to accept stopping at 18:45 or 19:45 rather than 19:00, using a single task.
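The crontab field rules described above (single values, * wildcards, hyphen ranges, comma lists, and /N steps) can be sketched with a small Python helper. This is an illustration of the syntax only, not TrueNAS code:

```python
def expand_cron_field(field, lo, hi):
    """Expand one crontab field (e.g. "30-35", "*/2", "1,14", "0/8") into a
    sorted list of matching integer values between lo and hi inclusive."""
    values = set()
    for part in field.split(","):
        if "/" in part:
            rng, step = part.split("/")
            step = int(step)
        else:
            rng, step = part, 1
        if rng == "*":
            start, end = lo, hi
        elif "-" in rng:
            start, end = (int(x) for x in rng.split("-"))
        elif step != 1:
            # "0/8" style: start at the value, repeat every step up to hi
            start, end = int(rng), hi
        else:
            start = end = int(rng)
        values.update(range(start, end + 1, step))
    return sorted(values)

print(expand_cron_field("30-35", 0, 59))  # minutes 30 through 35
print(expand_cron_field("1,14", 0, 23))   # 1:00 AM and 2:00 PM
print(expand_cron_field("*/2", 1, 31))    # every other day of the month
print(expand_cron_field("0/8", 0, 23))    # hours 0, 8, and 16
```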
To view the progress of a scrub task, check the status under the Next Run column.
Editing Scrub Tasks
To edit a scrub, go to Data Protection and click the scrub task you want to edit.
6.2 - Cloud Sync Tasks
Tutorials for configuring and managing data backups between TrueNAS and various third-party cloud service providers.
This section has tutorials for configuring and managing data backups between TrueNAS and various third-party cloud service providers.
This article provides instructions on adding a cloud sync task, configuring environment variables, running an unscheduled sync task, creating a copy of a task with a reversed transfer mode, and troubleshooting common issues with some cloud storage providers.
TrueNAS can send, receive, or synchronize data with a cloud storage provider.
Cloud sync tasks allow for single-time transfers or recurring transfers on a schedule.
They are an effective method to back up data to a remote location.
Various cloud storage providers are supported for cloud sync tasks in TrueNAS SCALE.
Using the cloud means data can go to a third-party commercial vendor not directly affiliated with iXsystems.
You should fully understand vendor pricing policies and services before using them for cloud sync tasks.
iXsystems is not responsible for any charges incurred from using third-party vendors with the cloud sync feature.
Cloud Sync Task Requirements
You must have:
All system storage configured and ready to receive or send data.
A cloud storage provider account and location (like an Amazon S3 bucket).
You can create cloud storage account credentials using Credentials > Backup Credentials > Cloud Credentials before adding the sync task, or add them while configuring the cloud sync task by clicking Add on the Data Protection > Cloud Sync Tasks widget to open the Cloudsync Task Wizard.
See the Cloud Credentials article for instructions on adding a backup cloud credential.
Creating a Cloud Sync Task
To add a cloud sync task, go to Data Protection > Cloud Sync Tasks and click Add. The Cloudsync Task Wizard opens.
Select an existing backup credential from the Credential dropdown list.
If not already added as a cloud credential, click Add New to open the Cloud Credentials screen to add the credential.
Click Save to close the screen and return to the wizard.
Click Next to open the Where and When wizard screen.
Select options for the Direction and Transfer Mode fields.
Select the location where to pull from or push data to in the Folder field.
Select the dataset location in Directory/Files. Browse to the dataset to use on SCALE for data storage.
Click the arrow to the left of the name to expand it, then click on the name to select it.
If Direction is set to PUSH, click on the folder icon to add / to the Folder field.
Browsing to select a path
Click the arrow to the left of the folder icon to expand that folder and show any child datasets and directories.
A solid folder icon shows for datasets and an outlined folder for directories.
A selected dataset or directory folder and name shows in blue.
Cloud provider settings change based on the credential you select. Select or enter the required settings that include where files are stored.
If shown, select the bucket on the Bucket dropdown list.
Select the time to run the task from the Schedule options.
Click Save to add the task.
Use Dry Run to test the configuration before clicking Save, or select the option on the Cloud Sync Task widget after you click Save.
TrueNAS adds the task to the Cloud Sync Task widget with the Pending status until the task runs on schedule.
Encrypting Cloud Sync Tasks
The option to encrypt data transferred to or from a cloud storage provider is available in the Advanced Options settings.
Select Remote Encryption to use rclone crypt encryption during pull and push transfers.
With Pull selected as the Transfer Direction, the Remote Encryption decrypts files stored on the remote system before the transfer.
This requires entering the same password used to encrypt data in both Encryption Password and Encryption Salt.
With Push selected as the Transfer Direction, data is encrypted before it is transferred and stored on the remote system.
This also requires entering the same password used to encrypt data in both Encryption Password and Encryption Salt.
We do not recommend enabling Filename Encryption for any cloud sync tasks that did not previously have it enabled.
Users with existing cloud sync tasks that have this setting enabled must leave it enabled on those tasks to be able to restore those existing backups.
Do not enable file name encryption on new cloud sync tasks!
When Filename Encryption is selected, transfers encrypt and decrypt file names with the rclone Standard file name encryption mode.
The original directory structure of the files is preserved.
When disabled, encryption does not hide file names or directory structure; file names can be 246 characters long, can use sub-paths, and single files can be copied.
When enabled, file names are encrypted and limited to 143 characters, the directory structure is visible, and files with identical names have identical uploaded names.
Encrypted file names can still use sub-paths, copy single files, and use shortcuts to shorten the directory recursion.
Troubleshooting Transfer Mode Problems
Sync keeps all the files identical between the two storage locations.
If the sync encounters an error, it does not delete files in the destination.
Syncing to a Backblaze B2 bucket does not delete files from the bucket, even after deleting those files locally.
Instead, files are tagged with a version number or moved to a hidden state.
To automatically delete old or unwanted files from the bucket, adjust the Backblaze B2 Lifecycle Rules.
Directories deleted in Backblaze B2 and notated with an asterisk do not display in the SCALE UI.
These are essentially empty directories, and the Backblaze API restricts them so they do not display.
Amazon S3 Issues
Sync cannot delete files stored in Amazon S3 Glacier or S3 Glacier Deep Archive.
Restore these files by another means, like the Amazon S3 console.
Using Scripting and Environment Variables
Advanced users can write scripts that run immediately before or after the cloud sync task.
From either the Advanced Options screen (accessed from the Cloudsync Task Wizard) or the Edit Cloud Sync Task screen, scroll down to Advanced Options and enter environment variables in either the Pre-script or Post-script field.
The Post-script field only runs when the cloud sync task succeeds.
General Environment Variables
CLOUD_SYNC_ID
CLOUD_SYNC_DESCRIPTION
CLOUD_SYNC_DIRECTION
CLOUD_SYNC_TRANSFER_MODE
CLOUD_SYNC_ENCRYPTION
CLOUD_SYNC_FILENAME_ENCRYPTION
CLOUD_SYNC_ENCRYPTION_PASSWORD
CLOUD_SYNC_ENCRYPTION_SALT
CLOUD_SYNC_SNAPSHOT
Provider-Specific Variables
There also are provider-specific variables like CLOUD_SYNC_CLIENT_ID or CLOUD_SYNC_TOKEN or CLOUD_SYNC_CHUNK_SIZE.
Remote storage settings:
CLOUD_SYNC_BUCKET
CLOUD_SYNC_FOLDER
Local storage settings:
CLOUD_SYNC_PATH
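As an illustration of how a post-script might consume these variables, here is a hypothetical Python sketch; the variable names come from the list above, but the script and its log format are invented for this example:

```python
#!/usr/bin/env python3
# Hypothetical Post-script sketch: build a one-line log entry from the
# environment variables TrueNAS exports to cloud sync pre/post-scripts.
import os

def summarize_cloud_sync(env=None):
    """Return a short summary string using the general cloud sync variables."""
    env = os.environ if env is None else env
    return "cloud sync {i} ({d}): {direction}/{mode}".format(
        i=env.get("CLOUD_SYNC_ID", "?"),
        d=env.get("CLOUD_SYNC_DESCRIPTION", ""),
        direction=env.get("CLOUD_SYNC_DIRECTION", ""),
        mode=env.get("CLOUD_SYNC_TRANSFER_MODE", ""),
    )

if __name__ == "__main__":
    print(summarize_cloud_sync())
```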
Running an Unscheduled Cloud Sync Task
Saved tasks activate based on the schedule set for the task.
Click Run Now on the Cloud Sync Task widget to run the sync task before the saved scheduled time.
You can also expand the task on the Cloud Sync Tasks screen and click Run Now on the task details screen.
An in-progress cloud sync must finish before another can begin.
Stopping an in-progress task cancels the file transfer and requires starting the file transfer over.
To view logs about a running task, or its most recent run, click on the State oval.
Using Cloud Sync Task Restore
To create a new cloud sync task that uses the same options but reverses the data transfer, select history for an existing cloud sync on the Data Protection page.
The Restore Cloud Sync Task window opens.
Enter a name in Description for this reversed task.
Select the Transfer Mode and then define the path for a storage location on TrueNAS SCALE for the transferred data.
Click Restore.
TrueNAS saves the restored cloud sync as another entry in Data Protection > Cloud Sync Tasks.
If you set the restore destination to the source dataset, TrueNAS may alter ownership of the restored files to root.
If root did not create the original files and you need them to have a different owner, you can recursively reset their ACL permissions through the GUI.
Cloud Sync Tasks Contents
Backing Up Google Drive to TrueNAS SCALE: Provides instructions on adding Google Drive cloud credentials using the Add Cloud Credentials and Add Cloud Sync Task screens. It also provides information on working with Google-created content.
Adding a Storj Cloud Sync Task: Provides instructions on how to set up a Storj cloud sync task and how to configure a Storj-TrueNAS account to work with SCALE cloud credentials and cloud sync tasks.
6.2.1 - Backing Up Google Drive to TrueNAS SCALE
Provides instructions on adding Google Drive cloud credentials using the Add Cloud Credentials and Add Cloud Sync Task screens. It also provides information on working with Google-created content.
Google Drive and G Suite are widely used tools for creating and sharing documents, spreadsheets, and presentations with team members.
While cloud-based tools have inherent backups and replications included by the cloud provider, certain users might require additional backup or archive capabilities.
For example, companies using G Suite for important work might be required to keep records for years, potentially beyond the scope of the G Suite subscription.
TrueNAS offers the ability to easily back up Google Drive by using the built-in cloud sync.
Setting up Google Drive Credentials
You can add Google Drive credentials using the Add Cloud Credentials screen accessed from the Credentials > Backup Credentials > Cloud Credentials screen, or you can add them when you create a cloud sync task using the Add Cloud Sync Task screen accessed from the Data Protection > Cloud Sync Tasks screen.
Adding Google Drive Credentials Using Cloud Credentials
To set up a cloud credential, go to Credentials > Backup Credentials and click Add in the Cloud Credentials widget.
Enter a credential name.
Select Google Drive on the Provider dropdown list. The Google Drive authentication settings display on the screen.
Enter the Google Drive authentication settings.
a. Click Log In To Provider. The Google Authentication window opens.
b. Click Proceed to open the Choose an Account window.
c. Select the email account to use. Google displays the Sign In window. Enter the password and click Next.
Google might display a Verify it's you window. Enter a phone number where Google can text a verification code, or click Try another way.
d. Click Allow on the TrueNAS wants to access your Google Account window. TrueNAS populates Access Token with the token Google provides.
Click Verify Credentials and wait for TrueNAS to display the verification dialog with verified status. Close the dialog.
Click Save.
The Cloud Credentials widget displays the new credentials. These are also available for cloud sync tasks to use.
Adding A Google Drive Cloud Sync Task
You must add the cloud credential on the Backup Credentials screen before you create the cloud sync task.
To add a cloud sync task, go to Data Protection > Cloud Sync Tasks and click Add. The Cloudsync Task Wizard opens.
Select the Google Drive credential from the Credential dropdown list.
Click Next.
Select the direction for the sync task.
PULL brings files from the cloud storage provider to the location specified in Directory/Files (this is the location on TrueNAS SCALE).
PUSH sends files from the location in Directory/Files to the cloud storage provider location you specify in Folder.
Select the transfer method from the Transfer Mode dropdown list.
Sync keeps files identical on both TrueNAS SCALE and the remote cloud provider server. If the sync encounters an error, destination server files are not deleted.
Copy duplicates files on both the TrueNAS SCALE and remote cloud provider server.
Move transfers files to the destination server and then deletes the copies on the server that transferred the files. It also overwrites files with the same names on the destination.
Enter or browse to the dataset or folder directory.
Click the arrow_right arrow to the left of folder/ under the Directory/Files and Folder fields.
Select the TrueNAS SCALE dataset path in Directory/Files and the Google Drive path in Folder.
If Direction is set to PUSH, Directory/Files is the location on TrueNAS SCALE of the files you want to copy, sync, or move to the provider.
If Direction is set to PULL, it is the location on TrueNAS SCALE where you want to copy, sync, or move files.
Click the arrow_right to the left of folder/ to collapse the folder tree.
Select the preset from the Schedule dropdown that defines when the task runs.
For a specific schedule, select Custom and use the Advanced Scheduler.
Clearing the Enable checkbox makes the configuration available without allowing the specified schedule to run the task.
To manually activate a saved task, go to Data Protection > Cloud Sync Tasks and click Run Now for the cloud sync task you want to run. Click CONTINUE to start or CANCEL to abort the Run Now operation.
(Optional) Click Advanced Options to set any advanced option you want or need for your use case or to define environment variables.
Scroll down and enter the variables or scripts in either the Pre-script or Post-script field.
These fields are for advanced users.
Click Dry Run to test your settings before you click Save.
TrueNAS connects to the cloud storage provider and simulates a file transfer but does not send or receive data.
The new task displays on the Cloud Sync Tasks widget with the status of PENDING until it runs.
If the task completes without issue the status becomes SUCCESS.
See Using Scripting and Environment Variables for more information on environment variables.
Working with Google Created Content
One caveat is that Google Docs and other files created with Google tools have their own proprietary set of permissions, and their read/write characteristics are unknown to the system over a standard file share. As a result, these files are unreadable.
To allow Google-created files to become readable, allow link sharing to access the files before the backup. Doing so ensures that other users can open the files with read access, make changes, and then save them as another file if further edits are needed. Note that this is only necessary if the file was created using Google Docs, Google Sheets, or Google Slides; other files should not require modification of their share settings.
TrueNAS is well suited for long-term storage of content, including cloud-based content. Not only is it simple to sync and back up from the cloud, but users can rest assured that their data is safe, with snapshots, copy-on-write, and built-in replication functionality.
6.2.2 - Adding a Storj Cloud Sync Task
Provides instructions on how to set up a Storj cloud sync task and how to configure a Storj-TrueNAS account to work with SCALE cloud credentials and cloud sync tasks.
TrueNAS can send, receive, or synchronize data with the cloud storage provider Storj.
Cloud sync tasks allow for single-time transfers or recurring transfers on a schedule.
They are an effective method to back up data to a remote location.
To take advantage of the lower-cost benefits of the Storj-TrueNAS cloud service, you must create your Storj account using the link provided on the Add Cloud Credentials screen.
You must also create and authorize the storage buckets on Storj for use by SCALE.
iXsystems is not responsible for any charges you incur using a third-party vendor with the cloud sync feature.
This procedure provides instructions to set up both Storj and SCALE.
TrueNAS supports major providers like Amazon S3, Google Cloud, and Microsoft Azure.
It also supports many other vendors. To see the full list of supported vendors, go to Credentials > Backup Credentials > Cloud Credentials, click Add, and open the Provider dropdown list.
Cloud Sync Task Requirements
You must have all system storage (pool and datasets or zvols) configured and ready to receive or send data.
Creating a Storj Cloud Sync Task
To create a cloud sync task for a Storj-TrueNAS transfer, first add the cloud credential in SCALE, then configure the sync task.
Adding the cloud credential in SCALE includes using the link to create the Storj-TrueNAS account, creating a new bucket, and obtaining the S3 authentication credentials you need to complete the process in SCALE.
The instructions in this section cover adding the Storj-iX account and configuring the cloud service credentials in SCALE and Storj.
The process includes going to Storj to create a new Storj-iX account and returning to SCALE to enter the S3 credentials provided by Storj.
Go to Credentials > Backup Credentials and click Add on the Cloud Credentials widget.
The Add Cloud Credential screen opens with Storj displayed as the default provider in the Provider field.
Enter a descriptive name to identify the credential in the Name field.
Click Signup for account to create your Storj-TrueNAS account. This opens the Storj new account screen for TrueNAS.
You must use this link to create your Storj account to take advantage of the benefits of the Storj-TrueNAS pricing!
Go to Credentials > Backup Credentials, and click Add.
Select StorjiX as the Provider on the Cloud Credentials screen, then click Signup for account.
The Storj Create your Storj account web page opens.
Enter your information in the fields, select the I agree to the Terms of Service and Privacy Policy checkbox, then click the button at the bottom of the screen.
The Storj main dashboard opens.
Adding the Storj-TrueNAS Bucket
Now you can add the storage bucket you want to use in your Storj-TrueNAS account and SCALE cloud sync task.
From the Storj main dashboard:
Click Buckets on the navigation panel on the left side of the screen to open the Buckets screen.
Click New Bucket to open the Create a bucket window.
Enter a name in Bucket Name using lowercase alphanumeric characters, with no spaces between characters, then click Continue to open the Encrypt your bucket window.
Select the encryption option you want to use. Select Generate passphrase to let Storj provide the encryption or select Enter Passphrase to enter your own.
If you already have a Storj account and want to use the same passphrase for your new bucket, select Enter Passphrase.
If you select Generate a passphrase, Storj allows you to download the encryption keys.
You must keep encryption keys stored in a safe place where you can back up the file.
Select I understand, and I have saved the passphrase then click Download.
Click Continue to complete the process and open the Buckets screen with your new bucket.
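Naming rules like these are easy to check before you submit the form. Below is a minimal sketch that assumes lowercase letters, digits, and interior hyphens are valid (as with S3-style names); confirm Storj's current rules before relying on it:

```python
import re

def valid_bucket_name(name: str) -> bool:
    # Assumed rules: lowercase letters and digits, optional interior
    # hyphens, no spaces; first and last characters alphanumeric.
    return re.fullmatch(r"[a-z0-9]([a-z0-9-]*[a-z0-9])?", name) is not None

print(valid_bucket_name("ixstorj1"))   # True
print(valid_bucket_name("My Bucket"))  # False
```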
Setting up S3 Access to the Bucket
After creating your bucket, add S3 access for the new bucket(s) you want to use in your Storj-TrueNAS account and the SCALE cloud sync task.
Click Access to open the Access Management dashboard, then click Create S3 Credentials on the middle S3 credentials widget.
The Create Access window opens with Type set to S3 Credentials.
Enter the name you want to use for this credential. Our example uses the name of the bucket we created.
Select the permissions you want to allow this access from the Permissions dropdown, and select the bucket you want this credential to access from the Buckets dropdown list.
For example, select All for Permissions, then select the one bucket we created, ixstorj1.
If you want to use the SCALE option to add new buckets in SCALE, set Storj Permissions and Buckets to All.
Select Add Date (optional) to set the length of time you want this credential to exist.
This example sets it to Forever. You can select a preset period or use the calendar to set the duration.
Click Encrypt My Access to open the Encryption Information dialog, then click Continue to open the Select Encryption options window.
Select the encryption option you want to use.
Select Generate Passphrase to allow Storj to provide the encryption passphrase, or select Create My Own Passphrase to enter a passphrase of your choice.
Use Copy to Clipboard or Download.txt to obtain the Storj-generated passphrase. Keep this passphrase along with the access keys in a safe place where you can back up the file.
If you lose your passphrase, neither Storj nor iXsystems can help you recover your stored data!
Click Create my Access to obtain the access and secret keys. Use Download.txt to save these keys to a text file.
This completes the process of setting up your Storj buckets and S3 access.
Enter these keys in the Authentication fields in TrueNAS SCALE on the Add Cloud Credential screen to complete setting up the SCALE cloud credential.
Setting Up the Storj Cloud Sync Task
To add the Storj cloud sync task, go to Data Protection > Cloud Sync Tasks:
Click Add to open the Cloudsync Task Wizard.
Select the Storj credential on the Credential dropdown list, then click Next to show the What and When wizard screen.
Set the Direction and Transfer Mode you want to use.
Browse to the dataset or zvol you want to use on SCALE for data storage.
Click the arrow to the left of the name to expand it, then click on the name to select it.
If Direction is set to PUSH, click on the folder icon to add / to the Folder field.
If you set the Storj S3 access to only apply to the new bucket created in Storj, you can only use that bucket; selecting Add New results in an error.
If you set the Storj S3 Bucket access to All, you can either select the new bucket you created in Storj or Add New to create a new Storj bucket here in SCALE!
If Direction is set to PUSH and you are not copying, moving, or syncing the entire contents of the bucket with the dataset selected in Directory/Files, click the folder icon for the Folder field and select the desired folder in the Storj bucket from the dropdown list.
Set the task schedule for when to run this task.
Click Save.
TrueNAS adds the task to the Cloud Sync Task widget with the Pending status until the task runs on schedule.
To test the task, click Dry Run or Run Now to start the task apart from the scheduled time.
6.2.3 - Adding a Google Photos Cloud Sync Task
Provides instructions on how to set up Google Photos API credentials and use them to create a cloud sync task.
Google Photos works best in TrueNAS using a Google Photos API key and rclone token.
Creating the API Credentials
On the Google API dashboard, click the dropdown menu next to the Google Cloud logo and select your project.
If you do not have a project, click NEW PROJECT and enter a value in Project name, Organization, and Location.
Enable API
After you select your project, click Enabled APIs & Services on the left menu, then click + ENABLE APIS AND SERVICES.
Enter photos library api in the search bar, then select Photos Library API and click ENABLE.
Configure Authentication
Next, click OAuth consent screen on the left menu, select EXTERNAL, then click CREATE.
Enter a value in App name and User support email.
Enter an email address in the Developer contact information section, then click SAVE AND CONTINUE.
Continue to the Add users section, enter your email address, then click ADD.
On the OAuth consent screen, click PUBLISH APP under Testing and push the app to production.
Can I leave the app in testing mode?
You can leave the app in testing mode, but your cloud sync task fails when your testing app credentials expire after seven days.
Create Credentials
Click Credentials on the left menu, then click + CREATE CREDENTIALS and select OAuth client ID.
Select Desktop app in the Application type dropdown, then enter a name for the client ID and click CREATE.
Copy and save your client ID and secret, or download the JSON file.
Configuring Rclone
Download rclone for your OS and open it in a command line utility.
The example images in this article use PowerShell in Windows.
Enter rclone config, then enter n to create a new remote.
Enter a name for the new remote, then enter the number from the list corresponding to Google Photos.
Enter the client ID and client secret you saved when you created the Google Photos API credentials, then enter false to keep the Google Photos backend read-only.
Do not edit the advanced config. When prompted about automatically authenticating rclone with the remote, enter y.
A browser window opens to authorize rclone access. Click Allow.
In the command line, enter y when prompted about media item resolution to complete the configuration.
Copy and save the type, client_id, client_secret, and token, then enter y to keep the new remote.
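For reference, the saved remote lives in the rclone config file (its path is shown by rclone config file). The following is a hedged sketch of what the entry looks like; the remote name and all values here are placeholders, not real credentials:

```ini
[gphotos]
type = google photos
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
read_only = true
token = {"access_token":"ya29...","token_type":"Bearer","refresh_token":"1//...","expiry":"2024-01-01T00:00:00Z"}
```

The type, client_id, client_secret, and token values in this entry are the ones you copy into TrueNAS SCALE in the next section.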
Creating Google Photos Cloud Credentials
Open your TrueNAS Web UI. Go to Credentials > Backup Credentials and click Add in the Cloud Credentials widget.
Select Google Photos as the Provider and enter a name.
Do not click Log In To Provider.
Paste the Google Photos API client ID and client secret in the OAuth Client ID and OAuth Client Secret fields.
Paste your rclone token into the Token field.
Click Verify Credential to ensure you filled out the fields correctly, then click Save.
Creating the Cloud Sync Task
To add a cloud sync task, go to Data Protection > Cloud Sync Tasks and click Add. The Cloudsync Task Wizard opens.
Select an existing backup credential from the Credential dropdown list.
If not already added as a cloud credential, click Add New to open the Cloud Credentials screen to add the credential.
Click Save to close the screen and return to the wizard.
Click Next to open the Where and When wizard screen.
Select the options you want to use from the Direction and Transfer Mode dropdown lists.
Select the location where to pull from or push data to in the Folder field.
Select the dataset location in Directory/Files. Browse to the dataset to use on SCALE for data storage.
Click the arrow to the left of the name to expand it, then click on the name to select it.
If Direction is set to PUSH, click on the folder icon to add / to the Folder field.
Browsing to select a path
Click the arrow to the left of the folder icon to expand that folder and show any child datasets and directories.
A solid folder icon shows for datasets and an outlined folder for directories.
A selected dataset or directory folder and name shows in blue.
Cloud provider settings change based on the credential you select. Select or enter the required settings that include where files are stored.
If shown, select the bucket on the Bucket dropdown list.
Select the time to run the task from the Schedule options.
Click Save to add the task.
Use Dry Run to test the configuration before clicking Save, or select the option on the Cloud Sync Tasks widget after you click Save.
TrueNAS adds the task to the Cloud Sync Task widget with the Pending status until the task runs on schedule.
6.3 - Configuring Rsync Tasks
Provides instructions on adding rsync tasks with two methods, SSH connection and module.
You often need to copy data to another system for backup or when migrating to a new system.
A fast and secure way of doing this is by using rsync with SSH.
Rsync provides the ability to either push or pull data.
The Push function copies data from TrueNAS to a remote system.
The Pull function moves or copies data from a remote system and stores it in the defined Path on the TrueNAS host system.
Before You Begin
There are two ways to connect to a remote system and run an rsync task: setting up an SSH connection or an rsync module.
You need to have either an SSH connection for the remote server already configured or an rsync module configured in a remote rsync server.
Each has different preparation requirements.
Preparing SSH Mode Remote Sync
When the remote system is another TrueNAS, set the Rsync Mode to SSH, verify the SSH service is active on both systems, and ensure SSH keys are exchanged between systems.
When the remote system is not TrueNAS, make sure that system has the rsync service activated and permissions configured for the user account name that TrueNAS uses to run the rsync task.
Create an SSH connection and keypair.
Go to Credentials > Backup Credentials to add an SSH connection and keypair. Download the keys.
Enter the admin user that sets up and has permission to perform the remote sync operation with the remote system.
If using two TrueNAS systems with the admin user, enter admin. If one system only uses the root user, enter root.
Update the admin user by adding the private key to the user in the UI, then add the private key to the home directory for the admin user.
When the Rsync Mode is SSH, start the SSH service on both systems. Go to System Settings > Services and enable SSH.
Preparing Module Mode Remote Sync
Create a dataset on the remote TrueNAS (or other system).
Write down the host and path to the data on the remote system you plan to sync with.
First, enable SSH and establish a connection to the remote server.
Establishing an SSH Connection
Enable SSH on the remote system.
Enable SSH in TrueNAS.
Go to System Settings > Services and toggle SSH on.
Set up an SSH connection to the remote server.
To do this go to Credentials > Backup Credentials and use SSH Connections and SSH Keypairs.
See Adding SSH connections for more information.
Populate the SSH Connections configuration fields as follows:
Semi-Automatic (TrueNAS to TrueNAS)
Select Semi-automatic in Setup Method.
Enter the remote TrueNAS URL.
Fill in the remaining credentials for this TrueNAS to authenticate to the remote TrueNAS and exchange SSH keys.
Select Generate New in Private Key.
Enter a number of seconds for TrueNAS to attempt the connection before timing out and closing the connection.
Manual (TrueNAS to Non-TrueNAS)
Enter the remote host name, port number, and user in the appropriate fields.
Click Discover Remote Host Key to connect to the remote system and automatically populate the Remote Host Key field.
Enter a number of seconds for TrueNAS to attempt the connection before timing out and closing the connection.
After establishing the SSH connection, add the rsync task.
Go to Data Protection and click Add on the Rsync Tasks widget to open the Add Rsync Task screen.
Choose a Direction for the rsync task as either Push or Pull and then define the task Schedule.
Select a User account that matches the SSH connection Username entry in the SSH Connections set up for this remote sync.
Provide a Description for the rsync task.
Select SSH in Rsync Mode.
The SSH settings fields show.
Choose a connection method from the Connect using dropdown list.
If selecting SSH private key stored in user’s home directory, enter the IP address or hostname of the remote system in Remote Host.
Use the format username@remote_host if the username differs on the remote host.
Enter the SSH port number for the remote system in Remote SSH Port. By default, 22 is reserved in TrueNAS.
If selecting SSH connection from the keychain, select an existing SSH connection to a remote system or choose Create New to add a new SSH connection.
Enter a full path to a location on the remote server where you either copy information from or to in Remote Path.
Maximum path length is 255 characters.
If the remote path location does not exist, select Validate Remote Path to create and define it in Remote Path.
Select the schedule to use and configure the remaining options according to your specific needs.
Click Save.
Creating a Module Mode Rsync Task
Before you create an rsync task on the host system, you must create a module on the remote system.
You must define at least one module per rsyncd.conf(5) on the rsync server.
The Rsync Daemon application is available in situations where configuring TrueNAS as an rsync server with an rsync module is necessary.
If the non-TrueNAS remote server includes an rsync service, make sure it is turned on.
After configuring the rsync server, go to Data Protection and click Add on the Rsync Tasks widget to open the Add Rsync Task screen.
Enter or browse to the dataset or folder to sync with the remote server.
Use the arrow_right to the left of the /mnt folder and each folder listed on the tree to expand and browse through, then click on the name to populate the path field.
Browsing to select a path
Click the arrow to the left of the folder icon to expand that folder and show any child datasets and directories.
A solid folder icon shows for datasets and an outlined folder for directories.
A selected dataset or directory folder and name shows in blue.
Click in the User field then select the user from the dropdown list.
The user must have permissions to run an rsync on the remote server.
Set the Direction for the rsync task.
Select Pull to copy from the remote server to TrueNAS or Push to copy from TrueNAS to the remote server.
Select Module as the connection mode from the Rsync Mode dropdown.
Enter the remote host name or IP in Remote Host.
Use the format username@remote_host when the username differs from the host entered into the Remote Host field.
Set the schedule for when to run this task, and any other options you want to use.
If you need a custom schedule, select Custom to open the advanced scheduler window.
Advanced Scheduler
Choosing a Presets option populates the rest of the fields.
To customize a schedule, enter crontab values for the Minutes/Hours/Days.
These fields accept standard cron values.
The simplest option is to enter a single number in the field.
The task runs when the time value matches that number.
For example, entering 10 means that the job runs when the time is ten minutes past the hour.
An asterisk (*) means match all values.
You can set specific time ranges by entering hyphenated number values.
For example, entering 30-35 in the Minutes field sets the task to run at minutes 30, 31, 32, 33, 34, and 35.
You can also enter lists of values.
Enter individual values separated by a comma (,).
For example, entering 1,14 in the Hours field means the task runs at 1:00 AM (0100) and 2:00 PM (1400).
A slash (/) designates a step value.
For example, entering * in Days runs the task every day of the month. Entering */2 runs it every other day.
Combining the above examples creates a schedule running a task each minute from 1:30-1:35 AM and 2:30-2:35 PM every other day.
TrueNAS has an option to select which Months the task runs.
Leaving each month unset is the same as selecting every month.
The Days of Week schedules the task to run on specific days in addition to any listed days.
For example, entering 1 in Days and setting Wed for Days of Week creates a schedule that starts a task on the first day of the month and every Wednesday of the month.
The Schedule Preview displays the times when the task runs with the current settings.
Examples of CRON syntax
TrueNAS lets users create flexible schedules using the advanced cron syntax.
The tables below have some examples:
* : Every item. * (minutes) = every minute of the hour; * (days) = every day.
*/N : Every Nth item. */15 (minutes) = every 15th minute of the hour; */3 (days) = every 3rd day; */3 (months) = every 3rd month.
Comma and hyphen/dash : Each stated item (comma); each item in a range (hyphen/dash). 1,31 (minutes) = on the 1st and 31st minute of the hour; 1-3,31 (minutes) = on the 1st to 3rd minutes inclusive, and the 31st minute, of the hour; mon-fri (days) = every Monday to Friday inclusive (every weekday); mar,jun,sep,dec (months) = every March, June, September, and December.
You can specify days of the month or days of the week.
3 times a day (at midnight, 08:00, and 16:00): months=*; days=*; hours=0/8 or 0,8,16; minutes=0 (meaning every day of every month, when the hour matches 0, 8, or 16 and the minute is 0).
Every Monday/Wednesday/Friday at 8:30 PM: months=*; days=mon,wed,fri; hours=20; minutes=30.
1st and 15th day of the month, October through June, at 00:01 AM: months=jan-jun,oct-dec; days=1,15; hours=0; minutes=1.
Every 15 minutes during the working week, 8 AM - 7 PM (08:00 - 19:00) Monday to Friday: this requires two tasks: (1) months=*; days=mon-fri; hours=8-18; minutes=*/15 and (2) months=*; days=mon-fri; hours=19; minutes=0. The second task executes at 19:00; without it the schedule stops at 18:45. Another workaround is to stop at 18:45 or 19:45 rather than 19:00.
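The field syntax above can be sketched in a few lines. This is only an illustration of how such expressions are interpreted, not the actual parser TrueNAS uses:

```python
def cron_field_matches(expr, value, lo=0, hi=59):
    """Return True if a numeric cron field expression matches value.

    lo/hi give the valid range for the field: 0-59 for minutes,
    0-23 for hours, 1-31 for days. Sketch only: numeric parts,
    no month or weekday names like mon or mar.
    """
    for part in expr.split(","):
        step = 1
        if "/" in part:
            part, step_text = part.split("/")
            step = int(step_text)
        if part == "*":
            start, end = lo, hi
        elif "-" in part:
            first, last = part.split("-")
            start, end = int(first), int(last)
        else:
            start = int(part)
            # 'N/step' (e.g. 0/8) means start at N and repeat every step
            end = start if step == 1 else hi
        if start <= value <= end and (value - start) % step == 0:
            return True
    return False

# '30-35' matches minutes 30 through 35; '0/8' matches hours 0, 8, 16
print(cron_field_matches("30-35", 33))          # True
print(cron_field_matches("0/8", 16, 0, 23))     # True
print(cron_field_matches("1,14", 9, 0, 23))     # False
```

A full schedule fires only when every field (minutes, hours, days, months, days of week) matches the current time.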
Select Enabled to enable the task.
Leave cleared to disable the task but not delete the configuration.
You can still run the rsync task by going to Data Protection and clicking the Run Now play_arrow icon for the rsync task.
Click Save.
6.4 - Adding Periodic Snapshot Tasks
Provides instructions on creating periodic snapshot tasks in TrueNAS SCALE.
Periodic snapshot tasks allow you to schedule creating read-only versions of pools and datasets at a given point in time. You can also access VMWare snapshot integration and TrueNAS SCALE storage snapshots from the Periodic Snapshot Tasks widget.
How should I use snapshots?
Snapshots do not make copies of the data, so creating one is quick, and if little data has changed, they take very little space.
It is common to take frequent snapshots, as often as every 15 minutes, even for large and active pools.
A snapshot where no files changed takes no storage space, but as file changes happen, the snapshot grows to reflect the size of the changes.
As with all pool data, you recover the space after you delete the last reference to the data.
Snapshots keep a history of files, providing a way to recover an older copy or even a deleted file.
For this reason, many administrators take snapshots often, store them for a period of time, and store them on another system, typically using the Replication Tasks function.
Such a strategy allows the administrator to roll the system back to a specific point in time.
If there is a catastrophic loss, an off-site snapshot can restore data up to the time of the last snapshot.
Creating a Periodic Snapshot Task
Create the required datasets or zvols before creating a snapshot task.
Go to Data Protection > Periodic Snapshot Tasks and click Add.
First, choose the dataset (or zvol) to schedule as a regular backup with snapshots, and how long to store the snapshots.
Next, define the task Schedule.
If you need a specific schedule, choose Custom and use the Advanced Scheduler section below.
Configure the remaining options for your use case.
For help with naming schema and lifetime settings refer to the sections below.
Click Save to save this task and add it to the list in Data Protection > Periodic Snapshot Tasks.
You can find any snapshots taken using this task in Storage > Snapshots.
To check the log for a saved snapshot schedule, go to Data Protection > Periodic Snapshot Tasks and click on the task. The Edit Periodic Snapshot Tasks screen displays where you can modify any settings for the task.
Using the Advanced Scheduler
Advanced Scheduler
Choosing a Presets option populates the rest of the fields.
To customize a schedule, enter crontab values for the Minutes/Hours/Days.
These fields accept standard cron values.
The simplest option is to enter a single number in the field.
The task runs when the time value matches that number.
For example, entering 10 means that the job runs when the time is ten minutes past the hour.
An asterisk (*) means match all values.
You can set specific time ranges by entering hyphenated number values.
For example, entering 30-35 in the Minutes field sets the task to run at minutes 30, 31, 32, 33, 34, and 35.
You can also enter lists of values.
Enter individual values separated by a comma (,).
For example, entering 1,14 in the Hours field means the task runs at 1:00 AM (0100) and 2:00 PM (1400).
A slash (/) designates a step value.
For example, entering * in Days runs the task every day of the month. Entering */2 runs it every other day.
Combining the above examples creates a schedule running a task each minute from 1:30-1:35 AM and 2:30-2:35 PM every other day.
TrueNAS has an option to select which Months the task runs.
Leaving each month unset is the same as selecting every month.
The Days of Week schedules the task to run on specific days in addition to any listed days.
For example, entering 1 in Days and setting Wed for Days of Week creates a schedule that starts a task on the first day of the month and every Wednesday of the month.
The Schedule Preview displays the times when the task runs with the current settings.
Examples of CRON syntax
TrueNAS lets users create flexible schedules using the advanced cron syntax.
The tables below have some examples:
* : Every item. * (minutes) = every minute of the hour; * (days) = every day.
*/N : Every Nth item. */15 (minutes) = every 15th minute of the hour; */3 (days) = every 3rd day; */3 (months) = every 3rd month.
Comma and hyphen/dash : Each stated item (comma); each item in a range (hyphen/dash). 1,31 (minutes) = on the 1st and 31st minute of the hour; 1-3,31 (minutes) = on the 1st to 3rd minutes inclusive, and the 31st minute, of the hour; mon-fri (days) = every Monday to Friday inclusive (every weekday); mar,jun,sep,dec (months) = every March, June, September, and December.
You can specify days of the month or days of the week.
3 times a day (at midnight, 08:00, and 16:00): months=*; days=*; hours=0/8 or 0,8,16; minutes=0 (meaning every day of every month, when the hour matches 0, 8, or 16 and the minute is 0).
Every Monday/Wednesday/Friday at 8:30 PM: months=*; days=mon,wed,fri; hours=20; minutes=30.
1st and 15th day of the month, October through June, at 00:01 AM: months=jan-jun,oct-dec; days=1,15; hours=0; minutes=1.
Every 15 minutes during the working week, 8 AM - 7 PM (08:00 - 19:00) Monday to Friday: this requires two tasks: (1) months=*; days=mon-fri; hours=8-18; minutes=*/15 and (2) months=*; days=mon-fri; hours=19; minutes=0. The second task executes at 19:00; without it the schedule stops at 18:45. Another workaround is to stop at 18:45 or 19:45 rather than 19:00.
Using Naming Schemas
The Naming Schema determines how automated snapshot names are generated.
A valid schema requires the %Y (year), %m (month), %d (day), %H (hour), and %M (minute) time strings, but you can add more identifiers to the schema too, using any identifiers from the Python strptime function.
For Periodic Snapshot Tasks used to set up a replication task with the Replication Task function:
You can use custom naming schema for full backup replication tasks. If you are going to use the snapshot for an incremental replication task, use the default naming schema.
This uses some letters differently from POSIX (Unix) time functions.
For example, including %z (time zone) ensures that snapshots do not have naming conflicts when daylight time starts and ends, and %S (second) adds finer time granularity.
When referencing snapshots from a Windows computer, avoid using characters like colon (:) that are invalid in a Windows file path.
Some applications limit filename or path length, and there might be limitations related to spaces and other characters.
Always consider future uses and ensure the name given to a periodic snapshot is acceptable.
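As a sketch of how a schema maps to names (the auto-%Y-%m-%d_%H-%M schema used here is an assumed example, not necessarily your task's setting), Python's own strftime/strptime pair shows the round trip:

```python
from datetime import datetime

schema = "auto-%Y-%m-%d_%H-%M"          # assumed example schema
taken = datetime(2024, 5, 1, 13, 30)

# Generating a snapshot name from the schema and a timestamp:
name = taken.strftime(schema)
print(name)                             # auto-2024-05-01_13-30

# The timestamp embedded in the name can be recovered, which is how a
# snapshot's age can be read back from its name:
parsed = datetime.strptime(name, schema)
print(parsed == taken)                  # True
```

Note that a schema using %H:%M instead of %H-%M would embed a colon, producing names that are invalid in Windows file paths.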
Setting Snapshot Lifetimes
A snapshot lifetime value defines how long the snapshot schedule ignores that snapshot when it looks for obsolete snapshots to remove.
For example, defining a lifetime of two weeks on a snapshot created after a weekly snapshot schedule runs can result in that snapshot actually being deleted three weeks later.
This is because the snapshot has a timestamp and defined lifetime that preserves the snapshot until the next time the scheduled snapshot task runs.
TrueNAS also preserves snapshots when at least one periodic task requires it.
For example, you have two schedules: one takes a snapshot every hour and keeps them for a week, and the other takes a snapshot every day at midnight and keeps them for three years.
After a week, the hourly snapshots taken at 01:00 through 23:00 are deleted, but the snapshots taken at 00:00 are kept because the second periodic task still requires them.
Those snapshots are destroyed at the end of the three years.
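The preservation rule in this example can be modeled with a small Python sketch (a hypothetical simplification; TrueNAS performs the equivalent check internally when pruning):

```python
from datetime import datetime, timedelta

# Model of the example above: hourly snapshots kept one week, daily
# midnight snapshots kept three years. A snapshot survives pruning
# while at least one task's retention still applies to it.
def is_preserved(snapshot_time, now):
    hourly_keep = now - snapshot_time <= timedelta(weeks=1)
    daily_keep = (snapshot_time.hour == 0 and snapshot_time.minute == 0
                  and now - snapshot_time <= timedelta(days=3 * 365))
    return hourly_keep or daily_keep

now = datetime(2024, 6, 1)
assert not is_preserved(datetime(2024, 5, 1, 13, 0), now)  # 13:00, over a week old: pruned
assert is_preserved(datetime(2024, 5, 1, 0, 0), now)       # 00:00: kept for the 3-year task
```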
6.5 - Creating VMWare Snapshots
Provides instructions for creating ZFS snapshots when using TrueNAS as a VMWare datastore.
Use this procedure to create ZFS snapshots when using TrueNAS SCALE as a VMWare datastore.
You must have a paid edition of VMWare ESXi to use the TrueNAS SCALE VMWare Snapshots feature.
ESXi free has a locked (read-only) API that prevents using TrueNAS SCALE VMWare Snapshots.
This tutorial uses ESXi version 8.
When creating a ZFS snapshot of the connected dataset, VMWare automatically takes a snapshot of any running virtual machines on the associated datastore.
VMware snapshots can integrate VMware Tools, making it possible to quiesce VM snapshots, sync filesystems, take shadow copy snapshots, and more.
Quiescing snapshots is the process of bringing VM data into a consistent state, suitable for creating automatic backups.
Quiesced snapshots can be file-system consistent, where all pending data or file-system changes complete, or application consistent, where applications complete all tasks and flush buffers prior to creating the snapshot. See Manage Snapshots from VMware for more information.
VM snapshots are included as part of the connected Virtual Machine File System (VMFS) datastore and stored as a point-in-time within the scheduled or manual TrueNAS ZFS snapshot of the data or zvol backing that VMWare datastore.
The temporary VMware snapshots are automatically deleted on the VMWare side, but still exist in the ZFS snapshot and are available as stable restore points.
TrueNAS Enterprise
TrueNAS Enterprise customers with TrueNAS CORE 12.0 and newer and TrueNAS SCALE 22.12.4 (Bluefin) and newer deployed can access the iXsystems TrueNAS vCenter plugin.
This activates management options for TrueNAS hardware attached to vCenter Server and enables limited management of TrueNAS systems from a single interface.
Please contact iXsystems Support to learn more and schedule a time to deploy or upgrade the plugin.
Contacting iXsystems Support
Customers who purchase iXsystems hardware or want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
Before using TrueNAS SCALE to create VMWare snapshots, configure TrueNAS to present a VMFS datastore or NFS export to your ESXi host(s) (this tutorial uses iSCSI) and then create and start your VM(s) in ESXi.
Virtual machines must be running for TrueNAS to include them in VMware snapshots, because powered-off virtual machines are already in a consistent state.
Go to Datasets and click Add Zvol to create a dedicated zvol for VMWare. This tutorial uses virtual/vmware/zvol-01.
Create an iSCSI share.
Go to Shares and click Wizard on the Block (iSCSI) Shares Targets widget.
a. Enter a name for the share. For example, vmware. Select Device for Extent Type and select the zvol from the Device dropdown.
Leave Sharing Platform set to VMware and Target set to Create New, then click Next.
b. Set Portal to Create New.
You can leave Discovery Authentication Method set to NONE, or select CHAP or Mutual CHAP and enter a Discovery Authentication Group ID.
Click Add next to IP Address and select either 0.0.0.0 for IPv4 or :: for IPv6 to listen on all addresses.
c. Leave Initiators blank and click Save.
In the VMWare ESXi Host Client, go to Storage, select Adapters, and then click Software iSCSI to configure the iSCSI connection.
c. Click Rescan to discover the iSCSI initiator.
ESXi automatically adds static targets for discovered initiators.
Click Software iSCSI again to confirm.
d. Go to Devices and click Rescan to discover the shared storage. ESXi adds the TrueNAS iSCSI disk to the list of devices.
Go to Datastores and click New Datastore to create a new VMFS datastore using the TrueNAS device.
Then go to Virtual Machines and create your new virtual machine(s), using the new datastore for storage.
Creating a VMWare Snapshot
To configure TrueNAS SCALE to create VMWare snapshots, go to Data Protection and click the VMware Snapshot Integration button in the Periodic Snapshot Tasks widget to open the VMWare Snapshots screen.
You must follow the exact sequence to add the VMware snapshot or the ZFS Filesystem and Datastore fields do not populate with options available on your system.
If you click in ZFS Filesystem or Datastores before you click Fetch Datastores, the creation process fails, the two fields do not populate with information from the VMware host, and you must exit the add form or click Cancel and start again.
Enter the IP address or host name for your VMWare system in Hostname.
Enter credentials for a user on the VMware host that has Create Snapshot and Remove Snapshot permissions in VMware.
See Virtual Machine Snapshot Management Privileges from VMware for more information.
Click Fetch Datastores. This connects TrueNAS SCALE to the VMWare host and populates the ZFS Filesystem and Datastore dropdown fields.
Make sure the correct TrueNAS ZFS dataset or zvol matching the VMware datastore is populated.
Select the TrueNAS SCALE dataset from the ZFS Filesystem dropdown list of options.
Select the VMFS datastore from the Datastore dropdown list of options.
Click Save.
The saved snapshot configuration appears on the VMware Snapshots screen.
State indicates the current status of the VMware connection as PENDING, SUCCESS, or ERROR.
Create a new periodic snapshot task for the zvol or a parent dataset.
If there is an existing snapshot task for the zvol or a parent dataset, VMWare snapshots are automatically integrated with any snapshots created after the VMWare snapshot is configured.
Expand the configured task on the Periodic Snapshot Tasks screen and ensure that VMware Sync is true.
Reverting to a ZFS Snapshot in VMWare ESXi
To revert a VM using a ZFS snapshot, first clone the snapshot as a new dataset in TrueNAS SCALE, present the cloned dataset to ESXi as a new LUN, resignature the snapshot to create a new datastore, then stop the old VM and re-register the existing machine from the new datastore.
Clone the snapshot to a new dataset.
a. Go to Data Protection and click Snapshots on the Periodic Snapshot Tasks widget and locate the snapshot you want to recover and click on that row to expand details.
b. Click Clone to New Dataset.
Enter a name for the new dataset or accept the one provided then click Clone.
Share the cloned zvol to VMWare using NFS or iSCSI (this tutorial uses iSCSI).
a. Go to Shares and click Block (iSCSI) Shares Targets to access the iSCSI screen.
b. Click Extents and then click Add to open the Add Extent screen.
c. Enter a name for the new extent, select Device from the Extent Type dropdown, and select the cloned zvol from the Device dropdown.
Edit other settings according to your use case and then click Save.
d. Click Associated Targets and then click Add to open the Add Associated Target screen.
e. Select the existing VMWare target from the Target dropdown.
Enter a new LUN ID number or leave it blank to automatically assign the next available number.
Select the new extent from the Extent dropdown and then click Save.
Go to Storage > Adapters and click Rescan to discover the new LUN.
Then go to the Devices tab and click Rescan again to discover VMFS filesystems on the LUN.
At this point, ESXi discovers the cloned device snapshot, but is unable to mount it because the original device is still online.
Resignature the snapshot so that it can be mounted.
a. Access the ESXi host shell using SSH or a local console connection to resignature the snapshot.
b. Enter the command
esxcli storage vmfs snapshot list
to view the unmounted snapshot.
Note the VMFS UUID value.
c. Enter the command
esxcli storage vmfs snapshot resignature -u VMFS-UUID, where VMFS-UUID is the ID of the snapshot according to the previous command output.
ESXi resignatures the snapshot and automatically mounts the device.
Output Example
[root@localhost:~] esxcli storage vmfs snapshot list
65a58a71-c5ac3323-6306-d4ae52c1e78d
Volume Name: LUN1
VMFS UUID: 65a58a71-c5ac3323-6306-d4ae52c1e78d
Can mount: false
Reason for un-mountability: the original volume is still online
Can resignature: true
Reason for non-resignaturability:
Unresolved Extent Count: 1
[root@localhost:~] esxcli storage vmfs snapshot resignature -u 65a58a71-c5ac3323-6306-d4ae52c1e78d
d. Go back to Storage > Devices in the ESXi Host Client UI and click Refresh.
The mounted snapshot appears in the list of devices.
Select the VM(s) you want to revert and click Next.
e. Review selections on the Ready to complete screen. If correct, click Finish.
Start the new VM(s) and verify functionality, then delete or archive the previous VM(s).
Copy or migrate the VMware virtual machine to the original, non-snapshot datastore.
6.6 - Managing S.M.A.R.T. Tests
Provides instructions on running S.M.A.R.T. tests manually or automatically, using Shell to view the list of tests, and configuring the S.M.A.R.T. test service.
S.M.A.R.T. or Self-Monitoring, Analysis and Reporting Technology is a standard for disk monitoring and testing.
You can monitor disks for problems using different kinds of self-tests.
TrueNAS can adjust when it issues S.M.A.R.T. alerts.
When S.M.A.R.T. monitoring reports a disk issue, we recommend you replace that disk.
Most modern ATA, IDE, and SCSI-3 hard drives support S.M.A.R.T.
Refer to your respective drive documentation for confirmation.
TrueNAS runs S.M.A.R.T. tests on disks.
Running tests can reduce drive performance, so we recommend scheduling tests when the system is in a low-usage state.
Avoid scheduling disk-intensive tests at the same time!
For example, do not schedule S.M.A.R.T. tests on the same day as a disk scrub or other data protection task.
How do I check or change S.M.A.R.T. testing for a disk?
Go to Storage, then click the Disks button. Select the disks to examine using the checkboxes on the left. Click expand_more to the right of the disk row to expand it.
Enable S.M.A.R.T. shows as true or false.
To enable or disable testing, click EDIT and find the Enable S.M.A.R.T. option.
Running a Manual S.M.A.R.T. Test
To test one or more disks for errors, go to Storage and click the Disks button.
Select the disks you want to test using the checkboxes to the left of the disk names. Selecting multiple disks displays the Batch Operations options.
Click Manual Test. The Manual S.M.A.R.T. Test dialog displays.
Manual S.M.A.R.T. tests on NVMe devices are currently not supported.
Next, select the test type from the Type dropdown and then click Start.
Test types differ based on the drive connection, ATA or SCSI.
Test duration varies based on the test type you chose.
TrueNAS generates alerts when tests discover issues.
ATA Drive Connection Test Types
The ATA drive connection test type options are:
Long runs a S.M.A.R.T. Extended Self Test that scans the entire disk surface, which may take hours on large-volume disks.
Short runs a basic S.M.A.R.T. Short Self Test (usually under ten minutes) that varies by manufacturer.
Conveyance runs a S.M.A.R.T. Conveyance Self Test (usually only minutes) that identifies damage incurred while transporting the device.
Offline runs a S.M.A.R.T. Immediate Offline Test that updates the S.M.A.R.T. Attribute values. Errors will appear in the S.M.A.R.T. error log.
SCSI Drive Connection Test Type
Long runs the “Background long” self-test.
Short runs the “Background short” self-test.
Offline runs the default self-test in the foreground, but doesn’t place an entry in the self-test log.
Where can I view the test results?
Click the expand_more in a disk’s row to expand it, then click S.M.A.R.T. TEST RESULTS.
You can also see results in the Shell using smartctl and the name of the drive: smartctl -l selftest /dev/ada0.
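If you want to scan that log output programmatically, a short Python sketch shows one way to flag problem entries (the sample text below is illustrative, formatted like smartctl output for an ATA drive, and not taken from a real disk):

```python
# Sample log text in the shape smartctl -l selftest prints for ATA drives
# (illustrative data, not from a real disk):
sample = """\
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%      9261         -
# 2  Extended offline    Completed: read failure       90%      9158         12345678
"""

def failed_tests(log_text):
    """Return log entries that did not complete cleanly.
    Entries start with '#'; tests still in progress are also flagged."""
    return [line for line in log_text.splitlines()
            if line.startswith("#") and "Completed without error" not in line]

for entry in failed_tests(sample):
    print(entry)  # flags the read failure on entry # 2
```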
Running Automatic S.M.A.R.T. Tests
To schedule recurring S.M.A.R.T. tests, go to Data Protection and click ADD in the S.M.A.R.T. Tests widget.
Select the disks to test from the Disks dropdown list, and then select the test type to run from the Type dropdown list.
Next select a preset from the Schedule dropdown. To create a custom schedule select Custom to open the advanced scheduler window where you can define the schedule parameters you want to use.
Advanced Scheduler
Choosing a Presets option populates the rest of the fields.
To customize a schedule, enter crontab values for the Minutes/Hours/Days.
These fields accept standard cron values.
The simplest option is to enter a single number in the field.
The task runs when the time value matches that number.
For example, entering 10 means that the job runs when the time is ten minutes past the hour.
An asterisk (*) means match all values.
You can set specific time ranges by entering hyphenated number values.
For example, entering 30-35 in the Minutes field sets the task to run at minutes 30, 31, 32, 33, 34, and 35.
You can also enter lists of values.
Enter individual values separated by a comma (,).
For example, entering 1,14 in the Hours field means the task runs at 1:00 AM (0100) and 2:00 PM (1400).
A slash (/) designates a step value.
For example, entering * in Days runs the task every day of the month. Entering */2 runs it every other day.
Combining the above examples creates a schedule running a task each minute from 1:30-1:35 AM and 2:30-2:35 PM every other day.
TrueNAS has an option to select which Months the task runs.
Leaving each month unset is the same as selecting every month.
The Days of Week field schedules the task to run on specific days of the week in addition to any listed days of the month.
For example, entering 1 in Days and setting Wed for Days of Week creates a schedule that starts a task on the first day of the month and every Wednesday of the month.
The Schedule Preview displays when the task will run based on the current settings.
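The field rules above can be sketched with a minimal matcher in Python (the field_matches helper is hypothetical, written only to demonstrate the matching rules; real cron also accepts names such as mon-fri):

```python
def field_matches(field, value, lo=0):
    """Return True if a cron field expression ('*', '*/2', '30-35', '1,14')
    matches a numeric value. lo is the lowest legal value for the field
    (0 for minutes and hours, 1 for days). Minimal sketch only."""
    for part in field.split(","):
        rng, _, step = part.partition("/")
        step = int(step) if step else 1
        if rng == "*":
            start, end = lo, 9999
        elif "-" in rng:
            a, b = rng.split("-")
            start, end = int(a), int(b)
        else:
            start = end = int(rng)
        if start <= value <= end and (value - start) % step == 0:
            return True
    return False

# The combined example above: minutes 30-35, hours 1 and 14, every other day.
fires = [(day, h, m)
         for day in range(1, 8) for h in range(24) for m in range(60)
         if field_matches("*/2", day, lo=1)
         and field_matches("1,14", h)
         and field_matches("30-35", m)]

assert (1, 1, 30) in fires      # 1:30 AM on day 1
assert (1, 14, 35) in fires     # 2:35 PM on day 1
assert (2, 1, 30) not in fires  # even days are skipped
assert len(fires) == 48         # 4 odd days x 2 hours x 6 minutes
```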
Examples of CRON syntax
TrueNAS lets users create flexible schedules using the advanced cron syntax.
The tables below have some examples:
Syntax: *
Meaning: Every item.
Examples: * (minutes) = every minute of the hour. * (days) = every day.

Syntax: */N
Meaning: Every Nth item.
Examples: */15 (minutes) = every 15th minute of the hour. */3 (days) = every 3rd day. */3 (months) = every 3rd month.

Syntax: comma (,) and hyphen/dash (-)
Meaning: Each stated item (comma); each item in a range (hyphen/dash).
Examples: 1,31 (minutes) = on the 1st and 31st minute of the hour. 1-3,31 (minutes) = on the 1st to 3rd minutes inclusive, and the 31st minute, of the hour. mon-fri (days) = every Monday to Friday inclusive (every weekday). mar,jun,sep,dec (months) = every March, June, September, and December.
You can specify days of the month or days of the week.
Desired schedule: 3 times a day (at midnight, 08:00, and 16:00)
Values to enter: months=*; days=*; hours=0/8 or 0,8,16; minutes=0 (meaning: every day of every month, when the hour is 0, 8, or 16 and the minute is 0)

Desired schedule: Every Monday, Wednesday, and Friday at 8:30 PM
Values to enter: months=*; days=mon,wed,fri; hours=20; minutes=30

Desired schedule: The 1st and 15th day of the month, October through June, at 00:01 AM
Values to enter: months=oct-dec,jan-jun; days=1,15; hours=0; minutes=1

Desired schedule: Every 15 minutes during the working week, 8 AM to 7 PM (08:00 - 19:00), Monday to Friday
Values to enter: This requires two tasks: (1) months=*; days=mon-fri; hours=8-18; minutes=*/15 and (2) months=*; days=mon-fri; hours=19; minutes=0. The second task is needed to execute at 19:00; otherwise the schedule stops at 18:45. Another workaround is to end at 18:45 or 19:45 rather than 19:00.
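A quick Python check confirms why the second task is needed (a sketch; the tuples model (hour, minute) fire times on one weekday):

```python
# Fire times (hour, minute) each weekday for the two tasks above:
task1 = {(h, m) for h in range(8, 19) for m in range(0, 60, 15)}  # hours=8-18; minutes=*/15
task2 = {(19, 0)}                                                 # hours=19; minutes=0

fires = sorted(task1 | task2)
assert fires[0] == (8, 0) and fires[-1] == (19, 0)
assert len(fires) == 45  # 44 quarter-hour runs plus the final 19:00 run

# A single task with hours=8-19 would overshoot, also firing at
# 19:15, 19:30, and 19:45:
single = {(h, m) for h in range(8, 20) for m in range(0, 60, 15)}
assert (19, 15) in single and (19, 15) not in fires
```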
Saved schedules appear in the S.M.A.R.T. Tests window.
S.M.A.R.T. tests can offline disks! Avoid scheduling S.M.A.R.T. tests simultaneously with scrub or other data protection tasks.
Start the S.M.A.R.T. service. Go to System Settings > Services and scroll down to the S.M.A.R.T. service. If it is not running, click the toggle to turn the service on. Select Start Automatically to have the service start after the system reboots.
If you have not configured the S.M.A.R.T. service yet, while the service is stopped, click edit to open the service configuration form. See Services S.M.A.R.T. Screen for more information on service settings.
Click Save to save settings and return to the Services screen.
RAID controllers?
Disable the S.M.A.R.T. service when a RAID controller controls the disks.
The controller monitors S.M.A.R.T. separately and marks disks as a Predictive Failure on a test failure.
Using Shell to View Scheduled Tests
CLI
To verify the schedule is saved, you can open the shell and enter smartd -q showtests.
6.7 - Replication Tasks
Tutorials for configuring ZFS snapshot replication tasks in TrueNAS SCALE.
TrueNAS SCALE replication allows users to create one-time or regularly scheduled snapshots of data stored in pools, datasets or zvols on their SCALE system as a way to back up stored data.
When properly configured and scheduled, replication takes regular snapshots of storage pools or datasets and saves them in the destination location either on the same system or a different system.
Local replication occurs on the same TrueNAS SCALE system using different pools or datasets.
Remote replication can occur between your TrueNAS SCALE system and another TrueNAS system, or with some other remote server you want to use to store your replicated data.
Local and remote replication can involve encrypted pools or datasets.
Setting Up a Simple Replication Task Overview
This section provides a simple overview of setting up a replication task regardless of the type of replication, local or remote.
It also covers the related steps to take prior to configuring a replication task.
Prerequisites
Before setting up a replication task, you must configure the admin user with the Home Directory set to something other than /var/empty and Auxiliary Groups set to include the builtin_administrators group.
Allow all sudo commands with no password must be selected to enable SSH+NETCAT remote replication.
Remote replication requires setting up an SSH connection in TrueNAS before creating a remote replication task.
Verify the SSH service settings to ensure you have Root with Password, Log in as Admin with Password, and Allow Password Authentication selected to enable these capabilities.
Incorrect SSH service settings can impact the admin user's ability to establish an SSH session during replication and require you to obtain and paste a public SSH key into the admin user settings.
Set up the data storage for where you want to save replicated snapshots.
Make sure the admin user is correctly configured.
Create a Periodic Snapshot task of the storage locations to be backed up.
Create an SSH connection between the local SCALE system and the remote system for remote replication tasks. Local replication does not require an SSH connection.
You can do this from either Credentials > Backup Credentials > SSH Connection and clicking Add or from the Replication Task Wizard using the Generate New option in the settings for the remote system.
Go to Data Protection > Replication Tasks and click Add to open the Replication Task Wizard where you specify the settings for the replication task.
Setting options change based on the source selections. Replicating to or from a local source does not require an SSH connection.
Contents
Setting Up a Local Replication Task: Provides instructions on adding a replication task using different pools or datasets on the same TrueNAS system.
Advanced Replication Tasks: Provides instructions on using Advanced Replication and lists other tutorials for configuring advanced ZFS snapshot replication tasks in TrueNAS SCALE.
Provides instructions on adding a replication task using different pools or datasets on the same TrueNAS system.
Using Local Replication
A local replication creates a ZFS snapshot and saves it to another location on the same TrueNAS SCALE system, using a different pool, dataset, or zvol.
This allows users with only one system to take quick backups or snapshots of their data.
In this scenario, create a dataset on the same pool to store the replication snapshots. You can create and use a zvol for this purpose.
If configuring local replication on a system with more than one pool, create a dataset to use for the replicated snapshots on one of those pools.
While we recommend regularly scheduled replications to a remote location as the optimal backup scenario, this is useful when no remote backup locations are available, or when a disk is in immediate danger of failure.
Storage space you allocate to a zvol is used only by that volume; it is not reallocated back to the total storage capacity of the pool or dataset where you created the zvol if it goes unused.
Plan your anticipated storage needs before you create the zvol to avoid creating one that exceeds your requirements.
Do not assign capacity that exceeds what is required for SCALE to operate properly. For more information, see the SCALE Hardware Guide for CPU, memory, and storage capacity information.
With the implementation of the Local Administrator user and role-based permissions, setting up replication tasks as an admin user has a few differences from setting up replication tasks when logged in as root.
The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental to capture differences occurring between the full and subsequent incremental snapshots.
Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule.
Users also have the option to run a scheduled job on demand.
The prerequisites and workflow are the same as described in Setting Up a Simple Replication Task Overview above.
Configuring a Local Replication Task
The replication wizard allows users to create and copy ZFS snapshots to another location on the same system.
If you have an existing replication task, you can select it on the Load Previous Replication Task dropdown list to load the configuration settings for that task into the wizard, and then make changes such as assigning it a different destination, schedule, or retention lifetime.
Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard.
Before you begin configuring the replication task, first verify the destination dataset you want to use to store the replication snapshots is free of existing snapshots, or that snapshots with critical data are backed up before you create the task.
To create a replication task:
Create the destination dataset or storage location you want to use to store the replication snapshots.
If using another TrueNAS SCALE system, create a dataset in one of your pools.
Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems.
Local replication does not require an SSH connection, so this only applies to replication to another system.
If using a TrueNAS CORE system as the remote server, the remote user is always root.
If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
If using an earlier TrueNAS SCALE Bluefin system (22.12.1) or you installed SCALE as the root user then created the admin user after initial installation, you must verify the admin user is correctly configured.
Verify Admin User Settings
a. Go to Credentials > Local Users and click anywhere on the admin user row to expand it, then click Edit.
Scroll down to the Home Directory setting.
If set to /home/admin, select Create Home Directory, then click Save.
If set to /var/empty, first create a dataset to use for home directories, such as /tank/homedirs, enter that path in the Home Directory field, and make sure it is not read-only.
b. Select the sudo permission level you want the admin user to have.
Allow all sudo commands with no password must be selected to enable SSH+NETCAT remote replication.
c. Click Save.
Go to Data Protection and click Add on the Replication Tasks widget to open the Replication Task Wizard. Configure the following settings:
Click the arrow to the left of the folder icon to expand that folder and show any child datasets and directories.
A solid folder icon shows for datasets and an outlined folder for directories.
A selected dataset or directory folder and name shows in blue.
a. Select On this System on the Source Location dropdown list.
Browse to the location of the pool or dataset you want to replicate and select it so it populates Source with the path.
Selecting Recursive also replicates snapshots of all child datasets contained within the selected source dataset.
b. Select On this System on the Destination Location dropdown list.
Browse to the location of the pool or dataset you want to use to store replicated snapshots and select to populate Destination with the path.
c. (Optional) Enter a name for the snapshot in Task Name.
SCALE populates this field with a default name using the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge.
To make the snapshot easier to find, give it a name that is easy to identify. For example, a replication task named dailyfull for a full file system snapshot taken daily.
Click Next to display the scheduling options.
Select the schedule and snapshot retention life time.
a. Select the Replication Schedule radio button you want to use. Select Run Once to set up a replication task you run one time.
Select Run On a Schedule then select when from the Schedule dropdown list.
b. Select the Destination Snapshot Lifetime radio button option you want to use.
This specifies how long SCALE stores copied snapshots in the destination dataset before deleting them.
Same as Source is selected by default. Select Never Delete to keep all snapshots until you delete them manually.
Select Custom to show two additional settings, then enter a number and select a duration unit from the dropdown list. For example, 2 Weeks.
Click START REPLICATION.
A dialog displays if this is the first snapshot taken using the destination dataset.
If SCALE does not find a replicated snapshot in the destination dataset to use as the basis for an incremental snapshot, it deletes any existing snapshots found there and creates a full copy of the current snapshot to use as the basis for the future scheduled incremental snapshots for this task.
This operation can delete important data, so ensure you can delete any existing snapshots or back them up in another location.
Click Confirm, then Continue to add the task to the Replication Task widget.
The newly added task shows the status as PENDING until it runs on the schedule you set.
Select Run Now if you want to run the task immediately.
To see a log for a task, click the task State to open a dialog with the log for that replication task.
To see the replication snapshots, go to Datasets, select the destination dataset on the tree table, then select Manage Snapshots on the Data Protection widget to see the list of snapshots in that dataset. Click Show extra columns to add more information columns to the table, such as the date created, which can help you locate a specific snapshot, or enter part of or the full name in the search field to narrow the list of snapshots.
Provides instructions on adding a replication task with a remote system.
Using Remote Replication
TrueNAS SCALE replication allows users to create one-time or regularly scheduled snapshots of data stored in pools, datasets or zvols on their SCALE system as a way to back up stored data.
When properly configured and scheduled, remote replication takes regular snapshots of storage pools or datasets and saves them in the destination location on another system.
Remote replication can occur between your TrueNAS SCALE system and another TrueNAS system (SCALE or CORE) that you want to use to store your replicated snapshots.
With the implementation of the Local Administrator user and role-based permissions, setting up replication tasks as an admin user has a few differences from setting up replication tasks when logged in as root.
Setting up remote replication while logged in as the admin user requires selecting Use Sudo For ZFS Commands.
The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental to capture differences occurring between the full and subsequent incremental snapshots.
Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule.
Users also have the option to run a scheduled job on demand.
Remote replication requires setting up an SSH connection in TrueNAS before creating a remote replication task.
The prerequisites and workflow are the same as described in Setting Up a Simple Replication Task Overview above.
Creating a Remote Replication Task
To streamline creating simple replication tasks use the Replication Task Wizard to create and copy ZFS snapshots to another system.
The wizard assists with creating a new SSH connection and automatically creates a periodic snapshot task for sources that have no existing snapshots.
If you have an existing replication task, you can select it on the Load Previous Replication Task dropdown list to load the configuration settings for that task into the wizard, and then make changes, such as assigning it a different destination, schedule, or retention lifetime.
Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard.
This saves some time when creating multiple replication tasks between the same two systems.
Before you begin configuring the replication task, first verify the destination dataset you want to use to store the replication snapshots is free of existing snapshots, or that snapshots with critical data are backed up before you create the task.
To create a replication task:
Create the destination dataset or storage location you want to use to store the replication snapshots.
If using another TrueNAS SCALE system, create a dataset in one of your pools.
Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems.
Local replication does not require an SSH connection, so this only applies to replication to another system.
If using a TrueNAS CORE system as the remote server, the remote user is always root.
If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
If using an earlier TrueNAS SCALE Bluefin system (22.12.1) or you installed SCALE as the root user then created the admin user after initial installation, you must verify the admin user is correctly configured.
Verify Admin User Settings
a. Go to Credentials > Local User, click anywhere on the admin user row to expand it.
Click Edit.
Scroll down to the Home Directory setting.
If set to /home/admin, select Create Home Directory, then click Save.
If set to /var/empty, first create a dataset to use for home directories, such as /tank/homedirs. Enter this dataset in the Home Directory field, and make sure it is not read-only.
b. Select the sudo permission level you want the admin user to have.
Allow all sudo commands with no password must be selected to enable SSH+NETCAT remote replication.
c. Click Save.
Go to Data Protection and click Add on the Replication Tasks widget to open the Replication Task Wizard. Configure the following settings:
Click the arrow to the left of the folder icon to expand that folder and show any child datasets and directories.
A solid folder icon shows for datasets and an outlined folder for directories.
A selected dataset or directory folder and name shows in blue.
a. Select either On this System or On a Different System on the Source Location dropdown list.
If your source is a remote system, select On a Different System. The Destination Location automatically changes to On this System.
If your source is the local TrueNAS SCALE system, you must select On a Different System from the Destination Location dropdown list to do remote replication.
TrueNAS shows the number of snapshots available for replication.
b. Select an existing SSH connection to the remote system, or select Create New to open the New SSH Connection configuration screen.
c. Browse to the source pool/dataset(s), then click on the dataset(s) to populate the Source with the path.
You can select multiple sources or manually type the names into the Source field.
Selecting Recursive replicates all snapshots contained within the selected source dataset snapshots.
d. Repeat to populate the Destination field.
You cannot use zvols as a remote replication destination. Add a name to the end of the path to create a new dataset in that location.
e. Select Use Sudo for ZFS Commands. This option only displays when logged in as the admin user (or the name of your administrative user).
This removes the need to issue the zfs allow command in Shell on the remote system.
When the dialog displays, click Use Sudo for ZFS Commands. If you close this dialog, select the option on the Add Replication Task wizard screen.
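For reference, the manual alternative this option replaces is a zfs allow command run in the remote system Shell; a sketch, where admin and tank/replica are placeholder names:

```shell
# Sketch of the manual alternative to passwordless sudo: delegate the
# ZFS permissions replication needs to the receiving user on the remote
# destination dataset. "admin" and "tank/replica" are example names.
zfs allow admin create,destroy,hold,mount,receive,release,rollback,snapshot tank/replica

# Review the delegated permissions afterward:
zfs allow tank/replica
```

Selecting Use Sudo for ZFS Commands in the wizard makes this step unnecessary.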
f. Select Replicate Custom Snapshots, then leave the default value in Naming Schema.
If you know how to enter the schema you want, enter it in Naming Schema.
Remote sources require entering a snapshot naming schema to identify the snapshots to replicate.
A naming schema is a pattern of naming custom snapshots you want to replicate.
Enter the name and strftime(3) %Y, %m, %d, %H, and %M strings that match the snapshots to include in the replication. Press Enter to separate entries. The number of snapshots matching the patterns displays.
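To illustrate how a schema expands, the snapshot name is the schema with the strftime strings replaced by the snapshot creation time. A sketch using GNU date (as on SCALE); the schema auto-%Y-%m-%d_%H-%M is an example, not a required value:

```shell
# A snapshot taken at 2024-03-25 09:15 UTC under the example schema
# "auto-%Y-%m-%d_%H-%M" gets this name:
date -u -d '2024-03-25 09:15' +'auto-%Y-%m-%d_%H-%M'
# → auto-2024-03-25_09-15
```

A schema entered in the wizard matches any snapshot whose name fits this pattern.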
g. (Optional) Enter a name for the snapshot in Task Name.
SCALE populates this field with a default name using the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge.
To make it easier to find the snapshot, give it a name that is easy for you to identify. For example, name a replication task that takes a daily full file system snapshot dailyfull.
Click Next to display the scheduling options.
Select the schedule and snapshot retention life time.
a. Select the Replication Schedule radio button you want to use. Select Run Once to set up a replication task you run one time.
Select Run On a Schedule then select when from the Schedule dropdown list.
b. Select the Destination Snapshot Lifetime radio button option you want to use.
This specifies how long SCALE stores copied snapshots in the destination dataset before deleting them.
Same as Source is selected by default. Select Never Delete to keep all snapshots until you delete them manually.
Select Custom to show two additional settings, then enter the number of the duration you select from the dropdown list. For example, 2 Weeks.
Click START REPLICATION.
A dialog displays if this is the first snapshot taken using the destination dataset.
If SCALE does not find a replicated snapshot in the destination dataset to use to create an incremental snapshot, it deletes any existing snapshots found and creates a full copy of the current snapshot to use as the basis for future scheduled incremental snapshots for this task.
This operation can delete important data, so ensure you can delete any existing snapshots or back them up in another location.
Click Confirm, then Continue to add the task to the Replication Task widget.
The newly added task shows the status as PENDING until it runs on the schedule you set.
Select Run Now if you want to run the task immediately.
To see a log for a task, click the task State to open a dialog with the log for that replication task.
To see the replication snapshots, go to Datasets, select the destination dataset on the tree table, then select Manage Snapshots on the Data Protection widget to see the list of snapshots in that dataset. Click Show extra columns to add more information columns to the table, such as the date created, which can help you locate a specific snapshot, or enter part of or the full name in the search field to narrow the list of snapshots.
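The same list is also available from the Shell. A sketch, where tank/replica is a placeholder destination dataset and the sample snapshot names are illustrative only:

```shell
# Listing replicated snapshots from the Shell ("tank/replica" is an
# example destination dataset):
#   zfs list -t snapshot -r -o name,creation tank/replica
# Filtering by partial name works like the UI search field:
printf '%s\n' \
  'tank/replica@dailyfull-2024-03-24_00-00' \
  'tank/replica@dailyfull-2024-03-25_00-00' \
  'tank/replica@hourly-2024-03-25_12-00' |
  grep 'dailyfull'
```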
Enter the administration user (i.e., root or admin) that logs into the remote system with the web UI in Admin Username.
Enter the password in Admin Password.
Enter the administration user (i.e., root or admin) for the remote system SSH session.
If you clear root as the user and enter any other name, the Enable passwordless sudo for ZFS commands option displays.
This option does nothing, so leave it cleared.
Select Generate New from the Private Key dropdown list.
(Optional) Select a cipher from the dropdown list, or enter a new value in seconds for the Connection Timeout if you want to change the defaults.
Click Save to create a new SSH connection and populate the SSH Connection field in the Replication Task Wizard.
Using SSH Transfer Security
Using encryption for SSH transfer security is always recommended.
In situations where you use two systems within an absolutely secure network for replication, disabling encryption speeds up the transfer.
However, the data is completely unprotected from eavesdropping.
Choosing No Encryption for the task is less secure but faster. This method uses common port settings but you can override these by switching to the Advanced Replication Creation options or by editing the task after creation.
Provides instructions on using Advanced Replication and lists other tutorials for configuring advanced ZFS snapshot replication tasks in TrueNAS SCALE.
TrueNAS SCALE advanced replication allows users to create one-time or regularly scheduled snapshots of data stored in pools, datasets or zvols on their SCALE system as a way to back up stored data.
When properly configured and scheduled, local or remote replication using the Advanced Replication Creation option takes regular snapshots of storage pools or datasets and saves them in the destination location on the same or another system.
Local replication occurs on the same TrueNAS SCALE system using different pools or datasets.
Remote replication can occur between your TrueNAS SCALE system and another TrueNAS system, or with some other remote server you want to use to store your replicated data.
Local and remote replication can involve encrypted pools or datasets.
The Advanced Replication Creation option opens the Add Replication Task screen.
This screen provides access to the same settings found in the replication wizard but has more options to specify:
Full file system replication
Stream compression
Replication speed
Attempts to replicate data before the task fails
Block size for data sent
Log level verbosity
You can also:
Change encrypted replication to allow an unencrypted dataset as the destination
Create replication from scratch
Include or exclude replication properties
Replicate specific snapshots that match a defined creation time.
Prevent the snapshot retention policy from removing source system snapshots that failed
With the implementation of the local administrator user to replace the root login, there are a few differences between setting up replication tasks as an admin user and setting them up when logged in as root.
Setting up remote replication while logged in as the admin user requires selecting Use Sudo For ZFS Commands.
The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental to capture differences occurring between the full and subsequent incremental snapshots.
Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule.
Users also have the option to run a scheduled job on demand.
Setting Up a Replication Task Overview
This section provides a simple overview of setting up a replication task regardless of the type of replication, local or remote.
It also covers the related steps you should take prior to configuring a replication task.
Prerequisites
Before setting up a replication task, you must configure the admin user with the Home Directory set to something other than /var/empty and Auxiliary Groups set to include the builtin_administrators group.
Allow all sudo commands with no password must be selected to enable SSH+NETCAT remote replication.
Remote replication requires setting up an SSH connection in TrueNAS before creating a remote replication task.
Verify the SSH service settings to ensure you have Root with Password, Log in as Admin with Password, and Allow Password Authentication selected to enable these capabilities.
Incorrect SSH service settings can impact the admin user's ability to establish an SSH session during replication and require you to obtain and paste a public SSH key into the admin user settings.
Set up the data storage for where you want to save replicated snapshots.
Make sure the admin user is correctly configured.
Create a Periodic Snapshot task of the storage locations to be backed up.
Create an SSH connection between the local SCALE system and the remote system for remote replication tasks. Local replication does not require an SSH connection.
You can do this from either Credentials > Backup Credentials > SSH Connection and clicking Add or from the Replication Task Wizard using the Generate New option in the settings for the remote system.
Go to Data Protection > Replication Tasks and click Add to open the Replication Task Wizard where you specify the settings for the replication task.
Setting options change based on the source selections. Replicating to or from a local source does not require an SSH connection.
Configure your SSH connection before you begin configuring the replication task through the Add Replication Task screen.
If you have an existing SSH connection with the remote system the option displays on the SSH Connection dropdown list.
Turn on the SSH service. Go to the System Settings > Services screen, verify the SSH service configuration, then enable it.
Creating a Simplified Advanced Replication Task
To access advanced replication settings, click Advanced Replication Creation at the bottom of the first screen of the Replication Task Wizard.
The Add Replication Task configuration screen opens.
Before you begin configuring the replication task, first verify the destination dataset you want to use to store the replication snapshots is free of existing snapshots, or that snapshots with critical data are backed up before you create the task.
To create a replication task:
Create the destination dataset or storage location you want to use to store the replication snapshots.
If using another TrueNAS SCALE system, create a dataset in one of your pools.
Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems.
Local replication does not require an SSH connection, so this only applies to replication to another system.
If using a TrueNAS CORE system as the remote server, the remote user is always root.
If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
If using an earlier TrueNAS SCALE Bluefin system (22.12.1) or you installed SCALE as the root user then created the admin user after initial installation, you must verify the admin user is correctly configured.
Verify Admin User Settings
a. Go to Credentials > Local User, click anywhere on the admin user row to expand it.
Click Edit.
Scroll down to the Home Directory setting.
If set to /home/admin, select Create Home Directory, then click Save.
If set to /var/empty, first create a dataset to use for home directories, such as /tank/homedirs. Enter this dataset in the Home Directory field, and make sure it is not read-only.
b. Select the sudo permission level you want the admin user to have.
Allow all sudo commands with no password must be selected to enable SSH+NETCAT remote replication.
c. Click Save.
Give the task a name and set the direction of the task.
Unlike the wizard, the Name does not automatically populate with the source/destination task name after you set the source and destination for the task.
Each task name must be unique, and we recommend you name it in a way that makes it easy to remember what the task is doing.
Select the direction of the task. Pull replicates data from a remote system to the local system. Push sends data from the local system to the remote.
Select the method of transfer for this replication from the Transport dropdown list.
Select LOCAL to replicate data to another location on the same system.
SSH is the standard option for sending or receiving data from a remote system. Select the existing SSH Connection from the dropdown list.
SSH+Netcat is available as a faster option for replications that take place within completely secure networks.
SSH+Netcat requires defining netcat ports and addresses to use for the Netcat connection.
With SSH-based replications, select the SSH Connection to the remote system that sends or receives snapshots.
To create a new connection to use for replication from a destination to this local system, select newpullssh.
Select Use Sudo for ZFS Commands to control whether the user for SSH/SSH+NETCAT replication has passwordless sudo enabled to execute zfs commands on the remote host.
If not selected, you must enter zfs allow on the remote system to grant the non-root user permissions to perform ZFS tasks.
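The zfs allow step mentioned above can be sketched as follows; the user and dataset names are placeholder examples:

```shell
# Run in the remote system Shell to let a non-root user receive
# replicated snapshots. "admin" and "tank/replica" are example names.
zfs allow admin create,mount,receive tank/replica
```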
Specify the source and destination paths. Adding /name to the end of the path creates a new dataset in that location.
Click the arrow to the left of each folder or dataset name to expand the options and browse to the dataset, then click on the dataset to populate the Source.
Choose a preconfigured periodic snapshot task as the source of snapshots to replicate.
Pulling snapshots from a remote source requires a valid SSH Connection before the file browser can show any directories.
A remote destination requires you to specify an SSH connection before you can enter or select the path.
If the file browser shows a connection error after selecting the correct SSH Connection, you might need to log in to the remote system and configure it to allow SSH connections.
Define how long to keep snapshots in the destination.
Remote sources require defining a snapshot naming schema to identify the snapshots to replicate.
Local sources replicate using snapshots generated by a periodic snapshot task and/or a defined naming schema that matches manually created snapshots.
DO NOT use zvols as remote destinations.
Select a previously configured periodic snapshot task for this replication task in Periodic Snapshot Tasks.
The replication task selected must have the same values in Recursive and Exclude Child Datasets as the chosen periodic snapshot task.
Selecting a periodic snapshot schedule removes the Schedule field.
If a periodic snapshot task does not exist, exit the advanced replication task configuration, configure a periodic snapshot task, then return to the Advanced Replication screen to configure the replication task.
Select Replicate Specific Snapshots to define specific snapshots from the periodic task to use for the replication.
This displays the schedule options for the snapshot task. Enter the schedule.
The only periodically generated snapshots included in the replication task are those that match your defined schedule.
Select the naming schema or regular expression option to use for this snapshot.
A naming schema is a collection of strftime time and date strings and any identifiers that a user might have added to the snapshot name.
For example, entering the naming schema custom-%Y-%m-%d_%H-%M finds and replicates snapshots like custom-2020-03-25_09-15.
Enter multiple schemas by pressing Enter to separate each schema.
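As an illustration of how the regular expression option selects snapshots, the pattern below matches names produced by the example schema custom-%Y-%m-%d_%H-%M; the pattern and snapshot names are illustrative only:

```shell
# Only the names matching the date-stamped pattern are selected:
printf '%s\n' \
  'custom-2020-03-25_09-15' \
  'custom-2020-03-26_09-15' \
  'manual-2020-03-25_10-00' |
  grep -E '^custom-[0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{2}-[0-9]{2}$'
```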
Set the replication schedule to use and define when the replication task runs.
Leave Run Automatically selected to use the snapshot task specified and start the replication immediately after the related periodic snapshot task completes.
Select Schedule to display scheduling options for this replication task and to automate the task according to its own schedule, allowing the replication to run at a separate time.
Choose a time frame that gives the replication task enough time to finish and is during a time of day when network traffic for both source and destination systems is minimal.
Use the custom scheduler (recommended) when you need to fine-tune an exact time or day for the replication.
Advanced Scheduler
Choosing a Presets option populates the rest of the fields.
To customize a schedule, enter crontab values for the Minutes/Hours/Days.
These fields accept standard cron values.
The simplest option is to enter a single number in the field.
The task runs when the time value matches that number.
For example, entering 10 means that the job runs when the time is ten minutes past the hour.
An asterisk (*) means match all values.
You can set specific time ranges by entering hyphenated number values.
For example, entering 30-35 in the Minutes field sets the task to run at minutes 30, 31, 32, 33, 34, and 35.
You can also enter lists of values.
Enter individual values separated by a comma (,).
For example, entering 1,14 in the Hours field means the task runs at 1:00 AM (0100) and 2:00 PM (1400).
A slash (/) designates a step value.
For example, entering * in Days runs the task every day of the month. Entering */2 runs it every other day.
Combining the above examples creates a schedule running a task each minute from 1:30-1:35 AM and 2:30-2:35 PM every other day.
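Written as a single standard five-field cron row, that combined schedule looks like:

```shell
# minute hour day-of-month month day-of-week
# 30-35  1,14 */2          *     *
```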
TrueNAS has an option to select which Months the task runs.
Leaving each month unset is the same as selecting every month.
The Days of Week schedules the task to run on specific days in addition to any listed days.
For example, entering 1 in Days and setting Wed for Days of Week creates a schedule that starts a task on the first day of the month and every Wednesday of the month.
The Schedule Preview displays when the task runs based on the current settings.
Examples of CRON syntax
TrueNAS lets users create flexible schedules using the advanced cron syntax.
The tables below have some examples:
Syntax
Meaning
Examples
*
Every item.
* (minutes) = every minute of the hour. * (days) = every day.
*/N
Every Nth item.
*/15 (minutes) = every 15th minute of the hour. */3 (days) = every 3rd day. */3 (months) = every 3rd month.
Comma and hyphen/dash
Each stated item (comma) Each item in a range (hyphen/dash).
1,31 (minutes) = on the 1st and 31st minute of the hour. 1-3,31 (minutes) = on the 1st to 3rd minutes inclusive, and the 31st minute, of the hour. mon-fri (days) = every Monday to Friday inclusive (every weekday). mar,jun,sep,dec (months) = every March, June, September, December.
You can specify days of the month or days of the week.
Desired schedule
Values to enter
3 times a day (at midnight, 08:00 and 16:00)
months=*; days=*; hours=0/8 or 0,8,16; minutes=0 (Meaning: every day of every month, when hours=0/8/16 and minutes=0)
Every Monday/Wednesday/Friday, at 8.30 pm
months=*; days=mon,wed,fri; hours=20; minutes=30
1st and 15th day of the month, during October to June, at 00:01 am
months=jan-jun,oct-dec; days=1,15; hours=0; minutes=1
Every 15 minutes during the working week, which is 8am - 7pm (08:00 - 19:00) Monday to Friday
Note that this requires two tasks to achieve: (1) months=*; days=mon-fri; hours=8-18; minutes=*/15 and (2) months=*; days=mon-fri; hours=19; minutes=0. The second task is needed to execute at 19:00; otherwise, the schedule stops at 18:45. Another workaround is to stop at 18:45 or 19:45 rather than 19:00.
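The example schedules above, consolidated as standard five-field cron rows:

```shell
# minute hour    day-of-month month day-of-week
# 0      0,8,16  *            *     *            (3 times a day)
# 30     20      *            *     mon,wed,fri  (Mon/Wed/Fri at 8:30 pm)
# */15   8-18    *            *     mon-fri      (every 15 min, 8:00-18:45)
# 0      19      *            *     mon-fri      (plus the 19:00 run)
```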
Click Save.
Setting a Replication Compression Level
Options for compressing data, adding a bandwidth limit, or other data stream customizations are available.
Stream Compression options are only available when using SSH.
Before enabling Compressed WRITE Records, verify that the destination system also supports compressed write records.
Setting Block Size
Allow Blocks Larger than 128KB is a one-way toggle.
Replication tasks using large block replication only continue to work as long as this option remains enabled.
Setting Full File System Replication
By default, the replication task uses snapshots to quickly transfer data to the receiving system.
Selecting Full Filesystem Replication means the task completely replicates the chosen Source, including all dataset properties, snapshots, child datasets, and clones.
When using this option, we recommend allocating additional time for the replication task to run.
Replicating Dataset Properties
Leave Full Filesystem Replication unselected and select Include Dataset Properties to include just the dataset properties in the snapshots to replicate.
Leave this option unselected on an encrypted dataset to replicate the data to another unencrypted dataset.
Replicating Child Datasets
Select Recursive to recursively replicate child dataset snapshots or exclude specific child datasets or properties from the replication.
Defining Replication Properties
Enter newly defined properties in Properties Override to replace existing dataset properties with the newly defined properties in the replicated files.
List any existing dataset properties to remove from the replicated files in Properties Exclude.
Saving Pending Snapshots
When a replication task is having difficulty completing, it is a good idea to select Save Pending Snapshots.
This prevents the source TrueNAS from automatically deleting any snapshots that failed to replicate to the destination system.
Changing Destination Dataset from Read-Only
By default, the destination dataset is set to be read-only after the replication completes.
You can change the Destination Dataset Read-only Policy to only start replication when the destination is read-only (set to REQUIRE) or to disable it by setting it to IGNORE.
Adding Transfer Encryption
The Encryption option adds another layer of security to replicated data by encrypting the data before transfer and decrypting it on the destination system.
Selecting Encryption adds the additional setting options HEX key or PASSPHRASE.
You can store the encryption key either in the TrueNAS system database or in a custom-defined location.
Synchronizing Destination and Source Snapshots
Synchronizing Destination Snapshots With Source destroys any snapshots in the destination that do not match the source snapshots.
TrueNAS also does a full replication of the source snapshots as if the replication task never ran, which can lead to excessive bandwidth consumption.
This can be a very destructive option.
Make sure that any snapshots deleted from the destination are obsolete or otherwise backed up in a different location.
Defining Snapshot Retention
Defining the Snapshot Retention Policy is generally recommended to prevent cluttering the system with obsolete snapshots.
Choosing Same as Source keeps the snapshots on the destination system for the same amount of time as the defined Snapshot Lifetime from the source system periodic snapshot task.
You can use Custom to define your own lifetime for snapshots on the destination system.
Replicating Snapshots Matching a Schedule
Selecting Only Replicate Snapshots Matching Schedule restricts the replication to only those snapshots created at the same time as the replication schedule.
6.7.3.1 - Setting Up an Encrypted Replication Task
Provides instructions on adding a replication task to a remote system and using encryption.
Using Encryption in Replication Tasks
TrueNAS SCALE replication allows users to create replicated snapshots of data stored in encrypted pools, datasets, or zvols on their SCALE system as a way to back up stored data to a remote system. You can use encrypted datasets in a local replication.
You can set up a replication task for a dataset encrypted with a passphrase or a hex encryption key, but you must unlock the dataset before the task runs or the task fails.
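For a key-encrypted dataset, unlocking can also be done from the Shell before the task runs; a sketch, where the dataset name and key file location are placeholder examples:

```shell
# Load the encryption key for a locked dataset so a scheduled
# replication task can read it. "tank/secure" and the key file
# location are example values.
zfs load-key -L file:///root/tank_secure.key tank/secure

# Passphrase-encrypted datasets prompt for the passphrase instead:
#   zfs load-key tank/secure
zfs mount tank/secure
```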
With the implementation of the local administrator user and role-based permissions, setting up remote replication tasks while logged in as an admin user requires selecting Use Sudo For ZFS Commands.
The first snapshot taken for a task creates a full file system snapshot, and all subsequent snapshots taken for that task are incremental to capture differences occurring between the full and subsequent incremental snapshots.
Scheduling options allow users to run replication tasks daily, weekly, monthly, or on a custom schedule.
Users also have the option to run a scheduled job on demand.
Remote replication with datasets also requires an SSH connection in TrueNAS. You can use an existing SSH connection if it has the same user credentials you want to use for the new replication task.
Setting Up a Simple Replication Task Overview
This section provides a simple overview of setting up a remote replication task for an encrypted dataset.
It also covers the related steps you should take prior to configuring the replication task.
Replication Task General Overview
Set up the data storage for where you want to save replicated snapshots.
Make sure the admin user has a home directory assigned.
Create an SSH connection between the local SCALE system and the remote system.
You can do this by either going to Credentials > Backup Credentials > SSH Connection and clicking Add or from the Replication Task Wizard using the Generate New option for the remote system.
Unlock the encrypted dataset(s) and export the encryption key to a text editor like Notepad.
Go to Data Protection > Replication Tasks and click Add to open the Replication Task Wizard.
Specify the from and to sources, task name, and set the schedule.
Setting options change based on the source selections. Replicating to or from a local source does not require an SSH connection.
This completes the general process for all replication tasks.
Creating a Remote Replication Task for an Encrypted Dataset
To streamline creating simple replication tasks use the Replication Task Wizard to create and copy ZFS snapshots to another system.
The wizard assists with creating a new SSH connection and automatically creates a periodic snapshot task for sources that have no existing snapshots.
If you have an existing replication task, you can select it on the Load Previous Replication Task dropdown list to load the configuration settings for that task into the wizard, and then make changes, such as assigning it a different destination, selecting encryption options, or changing the schedule or retention lifetime.
Saving changes to the configuration creates a new replication task without altering the task you loaded into the wizard.
This saves some time when creating multiple replication tasks between the same two systems.
Before you begin configuring the replication task, first verify the destination dataset you want to use to store the replication snapshots is free of existing snapshots, or that snapshots with critical data are backed up before you create the task.
To create a replication task:
Create the destination dataset or storage location you want to use to store the replication snapshots.
If using another TrueNAS SCALE system, create a dataset in one of your pools.
Verify the admin user home directory, auxiliary groups, and sudo setting on both the local and remote destination systems.
Local replication does not require an SSH connection, so this only applies to replication to another system.
If using a TrueNAS CORE system as the remote server, the remote user is always root.
If using a TrueNAS SCALE system on an earlier release like Angelfish, the remote user is always root.
If using an earlier TrueNAS SCALE Bluefin system (22.12.1) or you installed SCALE as the root user then created the admin user after initial installation, you must verify the admin user is correctly configured.
Verify Admin User Settings
a. Go to Credentials > Local User, click anywhere on the admin user row to expand it.
Click Edit.
Scroll down to the Home Directory setting.
If set to /home/admin, select Create Home Directory, then click Save.
If set to /var/empty, first create a dataset to use for home directories, such as /tank/homedirs. Enter this dataset in the Home Directory field, and make sure it is not read-only.
b. Select the sudo permission level you want the admin user to have.
Allow all sudo commands with no password must be selected to enable SSH+NETCAT remote replication.
c. Click Save.
Unlock the source dataset and export the encryption key to a text editor such as Notepad.
Go to Datasets, select the source dataset, then locate the ZFS Encryption widget and unlock the dataset if it is locked.
Export the key and paste it in any text editor such as Notepad. If you set up encryption to use a passphrase, you do not need to export a key.
Go to Data Protection and click Add on the Replication Tasks widget to open the Replication Task Wizard. Configure the following settings:
a. Select On this System on the Source Location dropdown list.
If your source is the local TrueNAS SCALE system, you must select On a Different System from the Destination Location dropdown list to do remote replication.
If your source is a remote system, create the replication task as the root user and select On a Different System. The Destination Location automatically changes to On this System.
TrueNAS shows the number of snapshots available for replication.
b. Select an existing SSH connection to the remote system or create a new connection.
Select Create New to open the New SSH Connection configuration screen.
c. Browse to the source pool/dataset(s), then click on the dataset(s) to populate the Source with the path.
You can select multiple sources or manually type the names into the Source field. Separate multiple entries with commas.
Selecting Recursive replicates all child dataset snapshots contained within the selected source dataset.
d. Repeat to populate the Destination field.
You cannot use zvols as a remote replication destination.
Add a /datasetname to the end of the destination path to create a new dataset in that location.
f. Select Use Sudo for ZFS Commands. This option only displays when logged in as the admin user (or the name of your admin user account).
Selecting it removes the need to issue the zfs allow command in Shell on the remote system.
If a dialog displays, click Use Sudo for ZFS Commands there. If you close the dialog without selecting it, select the option on the Replication Task Wizard screen instead.
If not selected, you must issue the zfs allow command in Shell on the remote system.
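If you do choose to delegate the permissions manually instead of using the sudo option, the ZFS delegation command looks like the following sketch. The dataset name tank/backups and the admin user name are assumptions for illustration; adjust them for your system and run the commands in Shell on the remote system.

```shell
# Hypothetical user and destination dataset names; adjust for your system.
# Delegate the permissions replication needs to the admin user:
zfs allow -u admin create,mount,receive tank/backups

# Review the delegated permissions on the dataset:
zfs allow tank/backups
```

These commands require an existing ZFS pool and root or equivalent privileges, so they are shown as a sketch only.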
g. Select Replicate Custom Snapshots, then accept the default value in Naming Schema.
Remote sources require entering a snapshot naming schema to identify the snapshots to replicate.
A naming schema is a pattern of naming custom snapshots you want to replicate.
If you want to change the default schema, enter the name and strftime(3) %Y, %m, %d, %H, and %M strings that match the snapshots to include in the replication.
Separate entries by pressing Enter. The number of snapshots matching the patterns displays.
h. (Optional) Enter a name for the snapshot in Task Name.
SCALE populates this field with a default name using the source and destination paths separated by a hyphen, but this default can make locating the snapshot in the destination dataset a challenge.
To make the snapshot easier to find, give it a name that is easy for you to identify. For example, a replication task named dailyfull for a full file system snapshot taken daily.
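You can preview what snapshot names a naming schema matches from any Linux shell before entering it in the wizard. A minimal sketch, assuming a hypothetical schema of auto-%Y-%m-%d_%H-%M:

```shell
# Render the hypothetical schema auto-%Y-%m-%d_%H-%M for a sample time
# to see the snapshot name it would match:
date -d "2024-03-15 08:30" +auto-%Y-%m-%d_%H-%M
# → auto-2024-03-15_08-30
```

Snapshots whose names follow this pattern are the ones the wizard counts and replicates when you enter the schema in Naming Schema.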
Click Next to display the scheduling options.
Select the schedule and snapshot retention lifetime.
a. Select the Replication Schedule radio button you want to use. Select Run Once to set up a replication task you run one time.
Select Run On a Schedule then select when from the Schedule dropdown list.
b. Select the Destination Snapshot Lifetime radio button option you want to use.
This specifies how long SCALE stores copied snapshots in the destination dataset before deleting them.
Same as Source is selected by default. Select Never Delete to keep all snapshots until you delete them manually.
Select Custom to show two additional settings, then enter a number and select a duration unit from the dropdown list. For example, 2 Weeks.
Click START REPLICATION.
A dialog displays if this is the first snapshot taken using the destination dataset.
If SCALE does not find a replicated snapshot in the destination dataset to use to create an incremental snapshot, it deletes any existing snapshots found and creates a full copy of the current snapshot to use as the basis for future scheduled incremental snapshots for this task.
This operation can delete important data, so ensure you can delete any existing snapshots or back them up in another location.
Click Confirm, then Continue to add the task to the Replication Task widget.
The newly added task shows the status as PENDING until it runs on the schedule you set.
Select Run Now if you want to run the task immediately.
To see a log for a task, click the task State to open a dialog with the log for that replication task.
To see the replication snapshots, go to Datasets, select the destination dataset on the tree table, then select Manage Snapshots on the Data Protection widget to see the list of snapshots in that dataset. Click Show extra columns to add more information columns to the table, such as the date created, which can help you locate a specific snapshot, or enter part of or all of the name in the search field to narrow the list of snapshots.
Enter the administration user (i.e., root or admin) that logs into the remote system with the web UI in Admin Username.
Enter the password in Admin Password.
Enter the administration user (i.e., root or admin) for the remote system SSH session.
If you clear root as the user and enter any other name, the Enable passwordless sudo for ZFS commands option displays.
This option does nothing, so leave it cleared.
Select Generate New from the Private Key dropdown list.
(Optional) Select a cipher from the dropdown list, or enter a new value in seconds for the Connection Timeout if you want to change the defaults.
Click Save to create a new SSH connection and populate the SSH Connection field in the Replication Task Wizard.
Using SSH Transfer Security
Using encryption for SSH transfer security is always recommended.
In situations where you use two systems within an absolutely secure network for replication, disabling encryption speeds up the transfer.
However, the data is completely unprotected from eavesdropping.
Choosing No Encryption for the task is less secure but faster. This method uses common port settings but you can override these by switching to the Advanced Replication Creation options or by editing the task after creation.
After the replication task runs and creates the snapshot on the destination, you must unlock it to access the data.
Click the download option from the replication task options to download a key file that unlocks the destination dataset.
Replicating to an Unencrypted Destination Dataset
TrueNAS does not support preserving encrypted dataset properties when trying to re-encrypt an already encrypted source dataset.
To replicate an encrypted dataset to an unencrypted dataset on the remote destination system, follow the instructions above to configure the task, then to clear the dataset properties for the replication task:
Select the task on the Replication Task widget. The Edit Replication Task screen opens.
Scroll down to Include Dataset Properties and select it to clear the checkbox.
This replicates the unlocked encrypted source dataset to an unencrypted destination dataset.
Using Additional Encryption Options
When you replicate an encrypted pool or dataset you have one level of encryption applied at the data storage level.
Use the passphrase or key created or exported from the dataset or pool to unlock the dataset on the destination server.
To add a second layer of encryption at the replication task level, select Encryption on the Replication Task Wizard, then select the type of encryption you want to apply.
Select either Hex (base-16 numeral format) or Passphrase (alphanumeric format) from the Encryption Key Format dropdown list to open settings for that type of encryption.
Selecting Hex displays the Generate Encryption Key checkbox, preselected. Clear the checkbox to display the Encryption Key field, where you can import a custom hex key.
Selecting Passphrase displays the Passphrase field where you enter your alphanumeric passphrase.
Select Store Encryption Key in Sending TrueNAS Database to store the encryption key in the sending TrueNAS database, or leave it unselected to choose a temporary location for the encryption key that decrypts replicated data.
6.7.3.2 - Unlocking a Replicated Encrypted Dataset or Zvol
Provides information on three methods of unlocking replicated encrypted datasets or zvols without a passphrase.
Unlocking a Replicated Encrypted Dataset or Zvol Without a Passphrase
TrueNAS SCALE users should either replicate the dataset/zvol without properties to disable encryption at the remote end or construct a special JSON manifest to unlock each child dataset/zvol with a unique key.
Method 1: Construct JSON Manifest.
Replicate every encrypted dataset you want to replicate with properties.
Export key for every child dataset that has a unique key.
For each child dataset, construct a proper JSON file with the poolname/datasetname of the destination system and the key from the source system, like this:
{"tank/share01": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b"}
Save this file with the extension .json.
On the remote system, unlock the dataset(s) using the properly constructed JSON files.
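When several child datasets have unique keys, writing the manifest file from a shell can be quicker than hand-editing. A sketch using the example dataset name and key shown above; the file name unlock_keys.json is an assumption:

```shell
# Write a JSON manifest mapping the destination poolname/datasetname
# to the key exported from the matching source dataset.
# Dataset name and key are the example values from this tutorial;
# the file name unlock_keys.json is hypothetical.
cat > unlock_keys.json <<'EOF'
{"tank/share01": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b"}
EOF

# Confirm the manifest contents:
cat unlock_keys.json
```

Add one "poolname/datasetname": "key" pair per child dataset that has a unique key, then use the file to unlock the datasets on the remote system.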
Method 2: Replicate Encrypted Dataset/zvol Without Properties.
Clear Include Dataset Properties when replicating so that the destination dataset is not encrypted on the remote side and does not require a key to unlock.
Go to Data Protection and click ADD in the Replication Tasks window.
Click Advanced Replication Creation.
Fill out the form as needed and make sure Include Dataset Properties is NOT checked.
Click Save.
Method 3: Replicate Key Encrypted Dataset/zvol.
Go to Datasets on the system you are replicating from.
Select the dataset encrypted with a key, then click Export Key on the ZFS Encryption widget to export the key for the dataset.
Apply the JSON key file or key code to the dataset on the system you replicated the dataset to.
Option 1: Download the key file and open it in a text editor. Change the pool name/dataset part of the string to the pool name/dataset for the receiving system. For example, replicating from tank1/dataset1 on the replicate-from system to tank2/dataset2 on the replicate-to system.
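Following the example names above (tank1/dataset1 on the sending system, tank2/dataset2 on the receiving system), the path substitution in the downloaded key file can be made with a text editor or a one-line sed. The file names below are assumptions for illustration:

```shell
# Recreate a sample exported key file (hypothetical file name and
# example key value) as it would come from the sending system:
cat > dataset1_key.json <<'EOF'
{"tank1/dataset1": "57112db4be777d93fa7b76138a68b790d46d6858569bf9d13e32eb9fda72146b"}
EOF

# Rewrite the pool/dataset path to match the receiving system:
sed 's|tank1/dataset1|tank2/dataset2|' dataset1_key.json > dataset2_key.json

# Confirm the edited key file now references the destination path:
cat dataset2_key.json
```

Upload the edited file when unlocking the replicated dataset on the receiving system.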
Option 2: Copy the key code provided in the Key for dataset window.
On the system receiving the replicated pool/dataset, select the receiving dataset and click Unlock.
Unlock the dataset.
Either clear the Unlock with Key file checkbox and paste the key code into the Dataset Key field (if there is a space character at the end of the key, delete it), or leave the checkbox selected and upload the downloaded key file that you edited.
Click Save.
Click Continue.
7 - Network
Tutorials for configuring network interfaces and connections in TrueNAS SCALE.
The Network menu option has several screens that enable configuring network interfaces and general system-level network settings.
The tutorials in this section guide you through the various screens and configuration forms contained within this menu item.
Contents
Interface Configurations: Tutorials about configuring the various types of network interfaces available in TrueNAS SCALE.
Managing Interfaces: Provides instructions on how to add, edit, and delete a network interface and how to add an alias to an interface.
Configuring Static Routes: Provides instructions on configuring a static route using the SCALE web UI.
Setting Up IPMI: Guides you through setting up Intelligent Platform Management Interface (IPMI) on TrueNAS SCALE.
7.1 - Interface Configurations
Tutorials about configuring the various types of network interfaces available in TrueNAS SCALE.
TrueNAS SCALE supports configuring different types of network interfaces as part of the various backup, sharing, and virtualization features in the software.
The tutorials in this section guide you through each of these types of configurations.
Contents
Managing Interfaces: Provides instructions on how to add, edit, and delete a network interface and how to add an alias to an interface.
Setting Up Static IPs: Provides instructions on setting up and testing a network interface static IP address.
7.1.1 - Managing Interfaces
Provides instructions on how to add, edit, and delete a network interface and how to add an alias to an interface.
The Network screen allows you to add new or edit existing network interfaces, and configure static and alias IP addresses.
Why should I use different interface types?
Use LAGG (Link Aggregation) to optimize multi-user performance, balance network traffic, or have network failover protection.
For example, failover LAGG prevents a network outage by dynamically reassigning traffic to another interface when one physical link (a cable or NIC) fails.
Use a network bridge to enable communication between two networks and provide a way for them to work as a single network.
For example, bridges can serve IPs to multiple VMs on one interface, which allows your VMs to be on the same network as the host.
Prepare your system for interface changes by stopping and/or removing apps, VM NIC devices, and services that can cause conflicts:
Stop running apps before proceeding with network interface changes.
Power off any running virtual machines (VMs) before making interface IP changes. Remove active NIC devices.
If you encounter issues with testing network changes, you might need to stop any services, including Kubernetes and sharing services such as SMB, using the current IP address.
One Static IP Address or Multiple Aliases?
Static IP addresses set a fixed address for an interface that external devices or websites need to access or remember, such as for VPN access.
Use aliases to add multiple internal IP addresses, representing containers or applications hosted in a VM, to an existing network interface without having to define a separate network interface.
In the UI, you can add aliases when adding a new or editing an existing interface, using the Add button to the right of Aliases.
Click Add to add a static IP address, and click Add again to add an additional alias.
From the Console Setup menu, select option 1 to configure network settings and add alias IP addresses.
Adding an Interface
You can use DHCP to provide the IP address for only one network interface, and this is most likely the primary network interface configured during the installation process.
To add another network interface, click Add on the Interfaces widget to display the Add Interface panel.
Leave the DHCP checkbox clear.
Click Add to the right of Aliases, near the bottom of the Add Interface screen and enter a static IP address for the interface.
You must specify the type of interface you want to create.
Select the type of interface from the Type dropdown options: Bridge, Link Aggregation (LAGG), or VLAN (virtual LAN).
You cannot edit the interface type after you click Save.
Each interface type displays new fields on the Add Interface panel.
Links with more information on adding these specific types of interfaces are at the bottom of this article.
Editing an Interface
Click on an existing interface in the Interfaces widget, then click on the Edit icon to open the Edit Interface screen.
The Edit Interface and Add Interface settings are identical except for Type and Name.
You cannot edit these settings after you click Save.
Name shows on the Edit Interface screen, but you cannot change the name.
Type only shows on the Add Interface screen.
If you make a mistake with either field you can only delete the interface and create a new one with the desired type.
If you want to change from DHCP to a static IP, you must also add the new default gateway and DNS nameservers that work with the new IP address.
See Setting Up a Static IP for more information.
If you delete the primary network interface you can lose your TrueNAS connection and the ability to communicate with the TrueNAS through the web interface!
You might need command line knowledge or physical access to the TrueNAS system to fix misconfigured network settings.
Deleting an Interface
Click the delete icon for the interface.
A delete interface confirmation dialog opens.
Do not delete the primary network interface!
If you delete the primary network interface you lose your TrueNAS connection and the ability to communicate with the TrueNAS through the web interface!
You might need command line knowledge or physical access to the TrueNAS system to fix misconfigured network settings.
Adding Alias IP Addresses
Multiple interfaces connected to a single TrueNAS system cannot be members of the same subnet.
You can combine multiple interfaces with link aggregation (LAGG) or a network bridge.
Alternatively, you can assign multiple static IP addresses to a single interface by configuring aliases.
When multiple network interface cards (NICs) connect to the same subnet, users might incorrectly assume that the interfaces automatically load balance.
However, Ethernet network topology allows only one interface to communicate at a time.
Additionally, both interfaces must handle broadcast messages since they are listening on the same network.
This configuration adds complexity and significantly reduces network throughput.
If you require multiple NICs on a single network for performance optimization, you can use a link aggregation (LAGG) configured with Link Aggregation Control Protocol (LACP).
A single LAGG interface with multiple NICs appears as a single connection to the network.
LACP provides additional bandwidth or redundancy for critical networking situations.
While it is beneficial for larger deployments with many active clients, it might not be practical for smaller setups.
However, LACP has limitations, as it does not load balance packets.
On the other hand, if you need multiple IP addresses on a single subnet, you can configure one or more static IP aliases for a single NIC.
In summary, we recommend using LACP if you need multiple interfaces on a network.
If you need multiple IP addresses, define aliases. Deviation from these practices might result in unexpected behavior.
For a detailed explanation of ethernet networking concepts and best practices for networking multiple NICs, refer to this discussion from National Instruments.
To configure alias IPs to provide access to internal portions of the network, go to the Network screen:
Click on the Edit icon for the interface to open the Edit Interface screen for the selected interface.
Clear the DHCP checkbox to show Aliases. Click Add for each alias you want to add to this interface.
Enter the IP address and CIDR values for the alias(es).
(Optional) Select DHCP if you want DHCP to control the primary IP address for the interface.
Click Save.
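Because multiple interfaces or aliases cannot share a subnet, it can help to check candidate addresses before entering them. A quick check from any shell with Python 3 available; the addresses below are hypothetical:

```shell
# Check whether two interface addresses fall in the same subnet.
# Addresses are hypothetical; TrueNAS rejects configurations where
# two interfaces share a subnet.
python3 -c "import ipaddress
a = ipaddress.ip_interface('192.168.1.10/24')
b = ipaddress.ip_interface('192.168.1.20/24')
print(a.network == b.network)"
# → True (same subnet: not allowed on two interfaces)
```

Changing the second address to 192.168.2.20/24 prints False, meaning the addresses are in different subnets and can be assigned to separate interfaces.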
7.1.2 - Setting Up a Network Bridge
Provides instructions on setting up a network bridge interface.
In general, a bridge refers to various methods of combining (aggregating) multiple network connections into a single aggregate network.
TrueNAS uses bridge(4), the kernel bridge driver.
bridge(8) is the Linux command for configuring bridges.
The older brctl(8) command from the bridge-utilities package is deprecated; TrueNAS SCALE relies on ip(8) and bridge(8) from the iproute2 package instead. Refer to the FAQ section that covers bridging topics more generally.
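For reference, a bridge created in the UI can be inspected from a shell with the iproute2 tools mentioned above. A read-only sketch; the commands list whatever bridges exist and change nothing:

```shell
# List bridge interfaces present on the system:
ip link show type bridge

# Show which ports (member interfaces) are attached to each bridge:
bridge link show
```

Use these only for inspection; configure bridges through the TrueNAS web interface so the middleware stays aware of the configuration.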
Network bridging does not inherently aggregate bandwidth like link aggregation (LAGG).
Bridging is often used for scenarios where you need to extend a network segment or combine different types of network traffic.
Bridging can be used to integrate different types of networks (e.g., wireless and wired networks) or to segment traffic within the same network.
A bridge can also be used to allow a VM configured on TrueNAS to communicate with the host system. See Accessing NAS From a VM for more information.
Prepare your system for interface changes by stopping and/or removing apps, VM NIC devices, and services that can cause conflicts:
Stop running apps before proceeding with network interface changes.
Power off any running virtual machines (VMs) before making interface IP changes. Remove active NIC devices.
If you encounter issues with testing network changes, you might need to stop any services, including Kubernetes and sharing services such as SMB, using the current IP address.
To set up a bridge interface, go to Network, click Add on the Interfaces widget to open the Add Interface screen, then:
Select Bridge from the Type dropdown list.
You cannot change the Type field value after you click Save.
Enter a name for the interface.
Use the format brX, where X is a number representing a non-parent interface.
You cannot change the Name of the interface after you click Save.
(Optional but recommended) Enter any notes or reminders about this particular bridge in Description.
Select the interfaces on the Bridge Members dropdown list.
Click Add to the right of Aliases to show the IP address fields, and enter the IP address for this bridge interface.
Click Add again to show an additional set of IP address fields for each additional IP address you want to add.
After TrueNAS finishes testing the interface, click Save Changes to keep the changes.
Click Revert Changes to discard the changes and return to the previous configuration.
Occasionally, a misconfigured bridge or a conflict with a running application, VM, or service can cause the network changes test to fail.
Typically, this is because the bridge is configured using an IP address that is already in use.
If the system does not receive a Save Changes check-in before the test times out (default 60 seconds), TrueNAS automatically reverts all unsaved changes.
The following troubleshooting options are available if you are unable to save the new bridge and network changes.
Options are ordered from the least to the most disruptive.
Try options one and two before proceeding with option three and then four.
Ensure that there are no currently running applications.
Stop any running VMs.
(Optional) Go to Services.
Click the edit icon to view the current configuration of sharing services, including SMB and NFS.
Stop any services that have a bind IP address matching the bridge IP address.
Restart the service(s) after network changes are tested and saved.
(Optional) Stop the Kubernetes service.
Connect to a shell session, enter systemctl stop k3s, and press Enter.
After network changes are tested and saved, restart Kubernetes with systemctl start k3s.
7.1.3 - Setting Up a Link Aggregation
Provides instructions on setting up a network link aggregation (LAGG) interface.
In general, a link aggregation (LAGG) is a method of combining (aggregating) multiple network connections in parallel to provide additional bandwidth or redundancy for critical networking situations.
TrueNAS uses lagg(4) to manage LAGGs.
Prepare your system for interface changes by stopping and/or removing apps, VM NIC devices, and services that can cause conflicts:
Stop running apps before proceeding with network interface changes.
Power off any running virtual machines (VMs) before making interface IP changes. Remove active NIC devices.
If you encounter issues with testing network changes, you might need to stop any services, including Kubernetes and sharing services such as SMB, using the current IP address.
To set up a LAGG, go to Network, click Add on the Interfaces widget to open the Add Interface screen, then:
Select Link Aggregation from the Type dropdown list. You cannot change the Type field value after you click Save.
Enter a name for the interface using the format bondX, where X is a number representing a non-parent interface.
You cannot change the Name of the interface after clicking Apply.
(Optional, but recommended) Enter any notes or reminders about this particular LAGG interface in Description.
Select the protocol from the Link Aggregation Protocol dropdown. Options are LACP, FAILOVER, or LOADBALANCE. Each option displays additional settings.
In LACP mode, the interfaces negotiate with the network switch to form a group of ports that are all active at once.
The network switch must support LACP for this option to function.
a. Select the hash policy from the Transmit Hash Policy dropdown list. LAYER2+3 is the default selection.
b. Select the LACPDU Rate option:
SLOW (default) sets the heartbeat request to transmit every 30 seconds and the timeout to three consecutive missed heartbeats, that is, 90 seconds.
FAST sets the transmit rate at one per second even after synchronization, with a matching three-second timeout. FAST allows for rapid detection of faults.
FAILOVER
Select FAILOVER to send traffic through the primary interface of the group. If the primary interface fails, traffic diverts to the next available interface in the LAGG.
LOADBALANCE
Select LOADBALANCE to accept traffic on any port of the LAGG group and balance the outgoing traffic on the active ports in the LAGG group.
LOADBALANCE is a static setup that does not monitor the link state or negotiate with the switch.
Select the Transmit Hash Policy option from the dropdown list. LAYER2+3 is the default selection.
Select the interfaces to use in the aggregation from the Link Aggregation Interface dropdown list.
(Optional) Click Add to the right of Aliases to show additional IP address fields for each additional IP address to add to this LAGG interface.
Click Save when finished.
7.1.4 - Setting Up a Network VLAN
Provides instructions on setting up a network VLAN interface.
A virtual LAN (VLAN) is a partitioned and isolated domain in a computer network at the data link layer (OSI layer 2).
TrueNAS uses vlan(4) to manage VLANs.
Before you begin, make sure you have an Ethernet card connected to a switch port and already configured for your VLAN, and that you have preconfigured the VLAN tag in the switched network.
To set up a VLAN interface, go to Network, click Add on the Interfaces widget to open the Add Interface screen, then:
Select VLAN from the Type dropdown list. You cannot change the Type field value after you click Apply.
Enter a name for the interface using the format vlanX where X is a number representing a non-parent interface.
You cannot change the Name of the interface after clicking Save.
(Optional, but recommended) Enter any notes or reminders about this particular VLAN in Description.
Select the interface in the Parent Interface dropdown list. This is typically an Ethernet card connected to a switch port already configured for the VLAN.
Enter the numeric tag for the interface in the VLAN Tag field. This is typically preconfigured in the switched network.
Select the VLAN Class of Service from the Priority Code Point dropdown list.
(Optional) Click Add to the right of Aliases to show additional IP address fields for each additional IP address to add to this VLAN interface.
Click Save.
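For illustration only, the UI configuration above corresponds roughly to the following iproute2 commands. The parent interface name enp1s0 and VLAN tag 10 are assumptions; the web interface remains the supported way to configure interfaces on TrueNAS SCALE:

```shell
# Hypothetical parent interface (enp1s0) and VLAN tag (10).
# Create a VLAN interface on the parent:
ip link add link enp1s0 name vlan10 type vlan id 10

# Bring the new VLAN interface up:
ip link set vlan10 up
```

These commands require root privileges and a matching physical interface, so they are shown as a sketch rather than a procedure.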
7.1.5 - Setting Up Static IPs
Provides instructions on setting up and testing a network interface static IP address.
This article describes setting up a network interface with a static IP address or changing the main interface from a DHCP-assigned to a manually-entered static IP address.
You must know the DNS name server and default gateway addresses for your IP address.
Disruptive Change!
You can lose your TrueNAS connection if you change the network interface that the web interface uses!
Command line knowledge and physical access to the TrueNAS system are often required to fix misconfigured network settings.
Multiple interfaces connected to a single TrueNAS system cannot be members of the same subnet.
You can combine multiple interfaces with link aggregation (LAGG) or a network bridge.
Alternatively, you can assign multiple static IP addresses to a single interface by configuring aliases.
When multiple network interface cards (NICs) connect to the same subnet, users might incorrectly assume that the interfaces automatically load balance.
However, Ethernet network topology allows only one interface to communicate at a time.
Additionally, both interfaces must handle broadcast messages since they are listening on the same network.
This configuration adds complexity and significantly reduces network throughput.
If you require multiple NICs on a single network for performance optimization, you can use a link aggregation (LAGG) configured with Link Aggregation Control Protocol (LACP).
A single LAGG interface with multiple NICs appears as a single connection to the network.
LACP provides additional bandwidth or redundancy for critical networking situations.
While it is beneficial for larger deployments with many active clients, it might not be practical for smaller setups.
However, LACP has limitations, as it does not load balance packets.
On the other hand, if you need multiple IP addresses on a single subnet, you can configure one or more static IP aliases for a single NIC.
In summary, we recommend using LACP if you need multiple interfaces on a network.
If you need multiple IP addresses, define aliases. Deviation from these practices might result in unexpected behavior.
For a detailed explanation of ethernet networking concepts and best practices for networking multiple NICs, refer to this discussion from National Instruments.
DHCP or Static IP?
By default, during installation, TrueNAS SCALE configures the primary network interface for Dynamic Host Configuration Protocol (DHCP) IP address management.
However, some administrators might choose to assign a static IP address to the primary network interface.
Administrators might make this choice when TrueNAS is deployed on a system that does not allow DHCP for security, stability, or other reasons.
In all deployments, only one interface can be set up for DHCP, which is typically the primary network interface configured during the installation process.
Any additional interfaces must be manually configured with one or more static IP addresses.
One Static IP Address or Multiple Aliases?
Static IP addresses set a fixed address for an interface that external devices or websites need to access or remember, such as for VPN access.
Use aliases to add multiple internal IP addresses, representing containers or applications hosted in a VM, to an existing network interface without having to define a separate network interface.
In the UI, you can add aliases when adding a new or editing an existing interface, using the Add button to the right of Aliases.
Click Add to add a static IP address, and click Add again to add an additional alias.
From the Console Setup menu, select option 1 to configure network settings and add alias IP addresses.
Before You Begin
Have the DNS name server addresses, the default gateway for the new IP address, and any static IP addresses on hand to prevent lost communication with the server while making and testing network changes.
You have only 60 seconds to change and test these network settings before they revert to the current settings, for example, back to the DHCP-assigned address if moving from DHCP to a static IP.
Back up your system to preserve your data and system settings. Save the system configuration file and a system debug.
As a precaution, take a screenshot of your current settings in the Global Configuration widget.
If your network changes result in lost communication with the network and you need to return to the DHCP configuration, you can refer to this information to restore communication with your server.
Lost communication might require reconfiguring your network settings using the Console Setup menu.
Multiple interfaces cannot be members of the same subnet.
If an error displays or the Save button is inactive when setting the IP addresses on multiple interfaces, check the subnet values and ensure the addresses are not in the same subnet.
Click Save.
A dialog opens where you can select to either Test Changes or Revert Changes.
If you have only one active network interface the system protects your connection to the interface by displaying the Test Changes dialog.
You have 60 seconds to test and save the change before the system discards the change and reverts to the DHCP-configured IP address.
Check the name servers and default router information in the Global Configuration widget.
If the current settings are not on the same network, click Settings and modify each setting as needed to allow the static IP to communicate over the network.
Add the IP addresses for the DNS name servers in the Nameserver 1, Nameserver 2, and Nameserver 3 fields.
For home users, use 8.8.8.8 for a DNS name server address so you can communicate with external networks.
Add the IP address for the default gateway in the appropriate field.
If the static network is IPv4, enter the gateway in IPv4 Default Gateway; if the static network is IPv6, use IPv6 Default Gateway.
Click Save.
Test the network changes. Select Confirm to activate the Test Changes button, then click Test Changes.
Click Save Changes to make the change to the static IP address permanent or click Revert Changes to discard changes and return to previous settings.
The Save Changes confirmation dialog displays. Click SAVE. The system displays a final confirmation that the change is in effect.
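The gateway field in the steps above follows directly from the address family of the static IP. A short Python sketch using the standard ipaddress module illustrates the decision; gateway_field is an illustrative helper, not part of TrueNAS:

```python
import ipaddress

def gateway_field(static_ip: str) -> str:
    """Return the Global Configuration field matching the address family
    of a static IP entered in CIDR notation."""
    version = ipaddress.ip_interface(static_ip).version
    return "IPv4 Default Gateway" if version == 4 else "IPv6 Default Gateway"

print(gateway_field("10.0.0.5/24"))     # IPv4 Default Gateway
print(gateway_field("2001:db8::5/64"))  # IPv6 Default Gateway
```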
Only one interface can use DHCP to assign the IP address and that is likely the primary network interface.
If you do not have an existing network interface set to use DHCP you can convert an interface from static IP to DHCP.
To switch/return to using DHCP:
Click Settings on the Global Configuration widget.
Clear the name server fields and the default gateway, and then click Save.
Click on the Edit icon for the interface to display the Edit Interface screen.
Select DHCP.
Remove the static IP address from the IP Address field.
Click Apply.
Click Settings to display the Global Configuration screen, then enter the name server and default gateway addresses for the new DHCP-provided IP address.
Home users can enter 8.8.8.8 in the Nameserver 1 field.
Click Test Changes. If the network settings are correct, the screen displays the Save Changes widget. Click Save Changes.
If the test network operation fails or the system times out, your system returns to the network settings before you attempted the change.
Verify the name server and default gateway information to try again.
7.2 - Adding Network Settings
Provides instructions on adding network settings during initial SCALE installation or after a clean install of SCALE.
Use the Global Configuration Settings screen to add general network settings like the default gateway and DNS name servers to allow external communication.
You can lose your TrueNAS connection if you change the network interface that the web interface uses!
You might need command line knowledge or physical access to the TrueNAS system to fix misconfigured network settings.
Adding Network Settings
Go to Network and click Settings on the Global Configuration widget to open the Edit Global Configuration screen, then:
Enter the host name for your TrueNAS in Hostname. For example, host.
Enter the system domain name in Domain. For example, example.com.
Enter the IP addresses for your DNS name servers in the Nameserver 1, Nameserver 2, and/or Nameserver 3 fields.
For home users, enter 8.8.8.8 in the Nameserver 1 field so your TrueNAS SCALE can communicate externally with the Internet.
Enter the IP address for your default gateway into the IPv4 Default Gateway if you are using IPv4 IP addresses.
Enter the IP address in the IPv6 Default Gateway if you are using IPv6 addresses.
Select the Outbound Network radio button for outbound service capability.
Select Allow All to permit external communication for all TrueNAS SCALE services that require it, or select Deny All to prevent all external communication. Select Allow Specific and then use the dropdown list to pick the services you want to allow to communicate externally.
Select as many services as you want to permit. Unchecked services cannot communicate externally.
Click Save. The Global Configuration widget on the Network screen updates to show the new settings.
7.3 - Managing Network Global Configurations
Provides instructions on configuring or managing global configuration settings.
Use the Global Configuration Settings screen to manage existing general network settings like the default gateway, DNS servers, set DHCP to assign the IP address or to set a static IP address, add IP address aliases, and set up services to allow external communication.
Disruptive Change
You can lose your TrueNAS connection if you change the network interface that the web interface uses! You might need command line knowledge or physical access to the TrueNAS system to fix misconfigured network settings.
Can I configure these options elsewhere?
Users can configure many of these interface, DNS, and gateway options in the Console Setup menu.
Be sure to check both locations when troubleshooting network connectivity issues.
Setting Up External Communication for Services
Use the Global Configuration Outbound Network radio buttons to set up services to have external communication capability.
These services use external communication:
ACME DNS-Authenticators
Anonymous usage statistics
Catalog(s) information exchanges
Cloud sync
KMIP
Mail (email service)
Replication
Rsync
Support
TrueCommand iX portal
Updates
VMWare snapshots
Select Allow All to permit all the above services to communicate externally. This is the default setting.
Select Deny All to prevent all the above services from communicating externally.
Select Allow Specific to permit external communication for the services you select.
Allow Specific displays a dropdown list of the services you can select.
Click on all that apply. A checkmark displays next to a selected service, and these services display in the field separated by a comma (,).
Click Save when finished.
Setting Up Netwait
Use Netwait to prevent starting all network services until the network is ready.
Netwait sends a ping to each of the IP addresses you specify until one responds; after receiving a response, services can start.
To set up Netwait, from the Network screen:
Click on Settings in the Global Configuration widget to open the Global Configuration screen.
Select Enable Netwait Feature. The Netwait IP List field displays.
Enter your list of IP addresses to ping. Press Enter after entering each IP address.
Click Save when finished.
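The Netwait behavior described above — ping each listed address until one answers, then let services start — can be sketched in Python. The ping_once helper and its flags are illustrative assumptions modeled on the Linux ping utility, not TrueNAS code:

```python
import subprocess
import time

def ping_once(addr: str) -> bool:
    """Send a single ICMP echo request with a 1-second wait (Linux ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", addr],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def netwait(addresses, probe=ping_once, timeout=60.0, interval=2.0):
    """Block until any listed address answers the probe.

    Returns the first responding address, or None if the network
    never becomes ready before the timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for addr in addresses:
            if probe(addr):
                return addr
        time.sleep(interval)
    return None
```

The probe is injectable so the loop can be exercised without network access; the real feature uses ICMP pings from the TrueNAS host.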
7.4 - Managing Network Settings (Enterprise HA)
Provides instructions on how to make changes to network settings on SCALE Enterprise (HA) systems.
TrueNAS Enterprise
The instructions in the article only apply to SCALE Enterprise (HA) systems.
SCALE Enterprise (HA) systems use three static IP addresses for access to the UI:
VIP to provide UI access regardless of which controller is active.
If your system fails over from controller 1 to controller 2, then fails back to controller 1 later, you might not know which controller is active.
IP for controller 1. If enabled, DHCP assigns an IP to the primary network interface on non-HA systems.
Disable DHCP, and then manually enter the Controller 1 static IP address your network administrator assigned for this controller.
IP for Controller 2. Manually enter the second IP address assigned for this controller.
Have the list of network addresses, name server and default gateway IP addresses, and host and domain names ready so you can complete the network configuration without disruption or system timeouts.
SCALE safeguards allow a default of 60 seconds to test and save changes to a network interface before reverting changes.
This is to prevent users from breaking their network connection in SCALE.
Configuring Enterprise (HA) Network Settings
Both controllers must be powered on and ready before you configure network settings.
You must disable the failover service before you can configure network settings!
Only configure network settings on controller 1! When ready to sync to peer, SCALE applies settings to controller 2 at that time.
To configure network settings on controller 1:
Disable the failover service.
Go to System Settings > Services, locate the Failover service, and click edit.
Select Disable Failover and click Save.
Edit the primary network interface to add failover settings.
Go to Network and click on the primary interface eno1 to open the Edit Interface screen for this interface.
First, enter the IP address for controller 1 into IP Address (This Controller) and select the netmask (CIDR) number from the dropdown list.
Next, enter the controller 2 IP address into IP Address (TrueNAS Controller 2).
Finally, enter the VIP address into Virtual IP Address (Failover Address).
Click Save.
Click Test Changes after editing the interface settings.
You have 60 seconds to test and then save changes before they revert. If this occurs, edit the interface again.
Turn failover back on.
Go to System Settings > Failover and select Disable Failover to clear the checkmark and turn failover back on, then click Save.
The system might reboot.
Monitor the status of controller 2 and wait until the controller is back up and running, then click Sync To Peer.
Select Reboot standby TrueNAS controller and Confirm, then click Proceed to start the sync operation.
The controller reboots, and SCALE syncs controller 2 with controller 1, which adds the network settings and pool to controller 2.
7.5 - Managing Static Routes
Provides instructions on configuring a static route using the SCALE web UI.
TrueNAS does not have defined static routes by default but TrueNAS administrators can use the Static Routes widget on the Network screen to manually enter routes so a router can send packets to a destination network.
If you have a monitor and keyboard connected to the system, you can use the Console Setup menu to configure static routes during the installation process, but we recommend using the web UI for all configuration tasks.
If you need a static route to reach portions of the network, from the Network screen:
Click Add in the Static Routes widget to open the Add Static Route screen.
Enter the destination IP address and CIDR mask in Destination, using the format A.B.C.D/E where E is the CIDR mask.
Enter the gateway IP address for the destination address in Gateway.
(Optional) Enter a brief description for this static route, such as the part of the network it reaches.
Click Save.
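The A.B.C.D/E destination format can be validated before entering it in the UI. Below is a short Python sketch using the standard ipaddress module; valid_destination is an illustrative helper, not part of TrueNAS:

```python
import ipaddress

def valid_destination(dest: str) -> bool:
    """Check a Destination value in the A.B.C.D/E format described above.

    The value must include a CIDR mask and must be a network address
    (no host bits set after the mask is applied).
    """
    try:
        ipaddress.ip_network(dest, strict=True)
    except ValueError:
        return False
    return "/" in dest

print(valid_destination("192.168.50.0/24"))  # True
print(valid_destination("192.168.50.7/24"))  # False: host bits set
```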
7.6 - Setting Up IPMI
Guides you through setting up Intelligent Platform Management Interface (IPMI) on TrueNAS SCALE.
IPMI requires compatible hardware! Refer to your hardware documentation to determine if the TrueNAS web interface has IPMI options.
Many TrueNAS Storage Arrays have a built-in out-of-band management port that provides side-band management should the system become unavailable through the web interface.
Intelligent Platform Management Interface (IPMI) allows users to check the log, access the BIOS setup, and boot the system without physical access. IPMI also enables users to remotely access the system to assist with configuration or troubleshooting issues.
Some IPMI implementations require updates to work with newer versions of Java. See here for more information.
IPMI is configured in Network > IPMI. The IPMI configuration screen provides a shortcut to the most basic IPMI configuration.
IPMI Options
We recommend setting a strong IPMI password. IPMI passwords must include at least one upper case letter, one lower case letter, one digit, and one special character (punctuation, e.g. ! # $ %, etc.). It must also be 8-16 characters long. Document your password in a secure way!
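The password policy above is mechanical enough to check with a few lines of Python; valid_ipmi_password is an illustrative helper, not part of TrueNAS, and your IPMI firmware may enforce additional rules:

```python
import string

def valid_ipmi_password(pw: str) -> bool:
    """Check the policy described above: 8-16 characters with at least one
    upper case letter, one lower case letter, one digit, and one special
    character (punctuation)."""
    return (
        8 <= len(pw) <= 16
        and any(c.isupper() for c in pw)
        and any(c.islower() for c in pw)
        and any(c.isdigit() for c in pw)
        and any(c in string.punctuation for c in pw)
    )

print(valid_ipmi_password("Str0ng!Pass"))  # True
print(valid_ipmi_password("weakpass"))     # False
```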
After saving the configuration, users can access the IPMI interface using a web browser and the IP address specified in Network > IPMI. The management interface prompts for login credentials. Refer to your IPMI device documentation to learn the default administrator account credentials.
After logging in to the management interface, users can change the default administrative user name and create additional IPMI users. IPMI utility appearance and available functions vary by hardware.
8 - Credentials
Tutorials for configuring the different credentials needed for TrueNAS SCALE features.
SCALE Credential options are collected in this section of the UI and organized into a few different screens:
Local Users allows those with permissions to add, configure, and delete users on the system.
There are options to search for keywords in usernames, display or hide user characteristics, and toggle whether the system shows built-in users.
Local Groups allows those with permissions to add, configure, and delete user groups on the system.
There are options to search for keywords in group names, display or hide group characteristics, and toggle whether the system shows built-in groups.
Directory Services contains options to edit directory domain and account settings, set up Idmapping, and configure access and authentication protocols.
Specific options include configuring Kerberos realms and key tables (keytab), as well as setting up LDAP validation.
Backup Credentials stores credentials for cloud backup services, SSH Connections, and SSH Keypairs.
Users can set up backup credentials with cloud and SSH clients to back up data in case of drive failure.
Certificates contains all the information for certificates, certificate signing requests, certificate authorities, and DNS-authenticators.
TrueNAS comes equipped with an internal, self-signed certificate that enables encrypted access to the web interface, but users can make custom certificates for authentication and validation while sharing data.
2FA allows users to set up Two-Factor Authentication for their system.
Users can set up 2FA, then link the system to an authenticator app (such as Google Authenticator, LastPass Authenticator, etc.) on a mobile device.
Contents
Using Administrator Logins: Explains role-based administrator logins and related functions. Provides instructions on properly configuring SSH and working with the admin and root user passwords.
Managing Users: Provides instructions on adding and managing administrator and local user accounts.
Configuring LDAP: Provides instructions on configuring and managing LDAP in TrueNAS SCALE.
Configuring Kerberos: Provides instructions on configuring and managing Kerberos realms and keytabs in TrueNAS SCALE.
Configuring IDMap: Provides instructions on configuring and managing ID mapping in TrueNAS SCALE.
Backup Credentials: Backup credential tutorials for integrating TrueNAS SCALE with cloud storage providers by setting up SSH connections and keypairs.
Adding Cloud Credentials: Provides basic instructions on how to add backup cloud credentials and more detailed instructions for some cloud storage providers.
Adding SSH Credentials: Provides information on adding SSH connections, generating SSH keypairs, and adding the SSH public key to the root user.
Certificates: Information about adding and managing certificates, CSRs, CAs and ACME DNS-Authenticators in TrueNAS SCALE.
Managing Certificates: Provides information on adding or managing SCALE certificates.
Creating ACME Certificates: Provides information on generating ACME certificates in TrueNAS SCALE using Let's Encrypt.
Configuring KMIP: Describes how to configure KMIP on TrueNAS SCALE Enterprise.
8.1 - Using Administrator Logins
Explains role-based administrator logins and related functions. Provides instructions on properly configuring SSH and working with the admin and root user passwords.
The initial implementation of the TrueNAS SCALE administrator login permitted users to continue using the root user but encouraged users to create a local administrator account when first installing SCALE.
Starting with SCALE Bluefin 22.12.0, root account logins are deprecated for security hardening and to comply with Federal Information Processing Standards (FIPS).
All TrueNAS users should create a local administrator account with all required permissions and begin using it to access TrueNAS.
When the root user password is disabled, only an administrative user account can log in to the TrueNAS web interface.
TrueNAS SCALE plans to permanently disable root account access in a future release.
SCALE has implemented administrator roles and privileges that allow greater control over access to functions in SCALE and to further comply with FIPS security hardening standards.
SCALE includes three predefined admin user account levels:
Full Admin - The local administrator account created by the system during a clean install using an iso file, or created manually while logged in as the root user after upgrading or migrating from CORE or a pre-22.12.3 release of SCALE.
Sharing Admin - This is assigned to users responsible for only managing shares (SMB, NFS, iSCSI).
This user can create shares and the datasets for shares, start/restart the share service, and modify the ACL for the share dataset.
Read-only Admin - This is assigned to users that can monitor the system but not make changes to settings.
About Admin and Root Logins and Passwords
At present, SCALE has both the root and local administrator user logins and passwords.
Root is the default system administration account for CORE, SCALE Angelfish, and early Bluefin releases.
Users migrating from CORE to SCALE or from pre-22.12.3 releases must manually create an admin user account.
Only fresh installations using an iso file provide the option to create the admin user during the installation process.
SCALE systems with only the root user account can log in to the TrueNAS web interface as the root user.
System administrators should thereafter create and begin using the admin login, and then disable the root user password.
SCALE 24.04 (Dragonfish) introduces administrator privileges and role-based administrator accounts.
The root or local administrator user can create new administrators with limited privileges based on their needs.
Predefined administrator roles are read only, share admin, and the default full access local administrator account.
As part of security hardening and to comply with Federal Information Processing standards (FIPS), iXsystems plans to completely disable root login in a future release.
All systems should create the local administrator account and use this account for web interface access.
When properly set up, the local administrator (full admin) account performs the same functions and has the same access as the root user.
Some UI screens and settings still refer to the root account, but these references will update to the administrator account in future releases of SCALE.
To improve system security after the local administrator account is created, disable the root account password to restrict root access to the system.
As a security measure, the root user is no longer the default account and the password is disabled when you create the admin user during installation.
Do not disable the admin account and root passwords at the same time.
If both root and admin account passwords become disabled at the same time and the web interface session times out, a one-time sign-in screen allows access to the system.
Enter and confirm a password to gain access to the UI.
After logging in, immediately go to Credentials > Local Users to enable the root or admin password before the session times out again.
This temporary password is not saved as a new password and it does not enable the admin or root passwords, it only provides one-time access to the UI.
When disabling a password for UI login, it is also disabled for SSH access.
Accessing the System Through an SSH Session
To enable SSH to access the system as the admin user (or for root):
Configure the SSH service.
a. Go to System Settings > Services, then select Configure for the SSH service.
b. Select Log in as Root with Password to enable the root user to sign in as root.
Select both Log in as Admin with Password and Allow Password Authentication to enable the admin user to sign in as admin.
c. Click Save and restart the SSH service.
Configure or verify the user configuration options to allow SSH access.
If you want to SSH into the system as the root, you must enable a password for the root user.
If the root password is disabled in the UI, you cannot use it to gain SSH access to the system.
To allow the admin user to issue commands in an ssh session, edit the admin user and select which sudo options are allowed.
Select SSH password login enabled to allow authenticating and logging into an SSH session.
Disable this after completing the SSH session to return to a security hardened system.
Select Allow all sudo commands with no password.
You see a prompt in the SSH session to enter a password the first time you enter a sudo command, but you do not see this password prompt again in the same session.
Two-Factor Authentication (2FA) and Administrator Account Log In
To use two-factor authentication with the administrator account (root or admin user), first configure and enable SSH service to allow SSH access, then configure two-factor authentication.
If you have the root user configured with a password and enable it, you can SSH into the system with the root user.
Security best practice is to disable the root user password and only use the local administrator account.
Administrator Logins and TrueCommand
At present, administrator logins work with TrueCommand but you need to set up the TrueNAS connection using an API key.
8.2 - Managing Users
Provides instructions on adding and managing administrator and local user accounts.
In TrueNAS, user accounts allow flexibility for accessing shared data.
Typically, administrators create users and assign them to groups.
Doing so makes tuning permissions for large numbers of users more efficient.
When the network uses a directory service, import the existing account information using the instructions in Directory Services.
Using Active Directory requires setting Windows user passwords in Windows.
To see user accounts, go to Credentials > Local Users.
TrueNAS hides all built-in users (except root) by default. Click the Show Built-In Users toggle to see all built-in users.
Creating an Admin User Account
All CORE systems migrating to SCALE, and all Angelfish and early Bluefin releases of SCALE upgrading to 22.12.3+ or to later SCALE major versions should create and begin using an admin user instead of the root user.
After migrating or upgrading from CORE or a pre-SCALE 22.12.3 release to a later SCALE release, use this procedure to create the Local Administrator user.
Go to Credentials > Local Users and click Add.
Enter the name to use for the administrator account. For example, admin.
You can create multiple admin users with any name and assign each different administration privileges.
Enter and confirm the admin user password.
Select builtin_administrators on the Auxiliary Group dropdown list.
Add the home directory for the new admin user.
Enter or browse to select the location where SCALE creates the home directory. For example, /mnt/tank. If you created a dataset to use for home directories, select that dataset.
Select the Read, Write, and Execute permissions for User, Group, and Other this user should have, then select Create Home Directory.
Select the shell for this admin user from the Shell dropdown list.
Options are nologin, TrueNAS CLI, TrueNAS Console, sh, bash, rbash, dash, tmux, and zsh.
Select the sudo authorization permissions for this admin user.
Some applications, such as Nextcloud, require sudo permissions for the administrator account.
For administrator accounts generated during the initial installation process, TrueNAS SCALE sets authorization to Allow all sudo commands.
Click Save.
The system adds the user to the builtin_users group after clicking Save.
Log out of the TrueNAS system and then log back in using the admin user credentials to verify that the admin user credentials work properly with your network configuration.
After adding the admin user account, disable the root user password:
Go to Credentials > Local Users, click on the root user, and select Edit.
Click the Disable Password toggle to disable the password, then click Save.
Creating User Accounts
When creating a user, you must:
Enter a Full Name or description for the user, such as a first and last name.
Enter a Username or accept the generated user name.
Enter and enable a Password.
Specify or accept the default user ID (UID).
(Optional) Select the Shell the user has access to when they go to System Settings > Shell.
Not all users can select a shell.
All other settings are optional.
Click Save after configuring the user settings to add the user.
Enter a personal name or description in Full Name, for example, John Doe or Share Anonymous User, then either allow TrueNAS to suggest a simplified name derived from the Full Name or enter a name in Username.
Enter and confirm a password for the user.
Make sure the login password is enabled. Click the Disable Password toggle to enable/disable the login password. Setting the Disable Password toggle to active (blue toggle) disables these functions:
The Password field becomes unavailable and TrueNAS removes any existing password from the account.
The Lock User option disappears.
The account is restricted from password-based logins for services like SMB shares and SSH sessions.
Enter a user account email address in the Email field if you want this user to receive notifications.
Accept the default user ID or enter a new UID.
TrueNAS suggests a user ID starting at 3000, but you can change it if you wish.
We recommend using an ID of 3000 or greater for non-built-in users.
Leave the Create New Primary Group toggle enabled to allow TrueNAS to create a new primary group with the same name as the user.
To add the user to a different existing primary group, disable the Create New Primary Group toggle and search for a group in the Primary Group field.
To add the user to more groups use the Auxiliary Groups dropdown list.
Configure a home directory and permissions for the user. Some functions, such as replication tasks, require setting a home directory for the user configuring the task.
When creating a user, the home directory path is set to /var/empty, which does not create a home directory for the user.
To add a home directory, enter or browse to a path in Home Directory, then select Create Home Directory.
SCALE 24.04 changes the default user home directory location from /nonexistent to /var/empty.
This new directory is an immutable directory shared by service accounts and accounts that should not have a full home directory.
The 24.04.01 maintenance release introduces automated migration to force home directories of existing SMB users from /nonexistent to /var/empty.
Why the change?
TrueNAS uses the pam_mkhomedir PAM module in the pam_open_session configuration file to automatically create user home directories if they do not exist.
pam_mkhomedir returns PAM_PERM_DENIED if it fails to create a home directory for a user, which eventually turns into a pam_open_session() failure.
This does not impact other PAM API calls, for example, pam_authenticate().
TrueNAS SCALE does not include the customized version of pam_mkhomedir used in TrueNAS CORE that specifically avoided trying to create the /nonexistent directory. This led to some circumstances where users could create the /nonexistent directory on SCALE versions before 24.04.
Starting in SCALE 24.04 (Dragonfish), the root filesystem of TrueNAS is read-only, which prevents pam_mkhomedir from creating the /nonexistent directory in cases where it previously did.
This results in a permissions error if pam_open_session() is called by an application for a user account that has Home Directory set to /nonexistent.
Select Read, Write, and Execute for each role (User, Group, and Other) to set access control for the user home directory.
Built-in users are read-only and cannot modify these settings.
Assign a public SSH key to a user for key-based authentication by entering or pasting the public key into the Authorized Keys field.
You can click Choose File under Upload SSH Key and browse to the location of an SSH key file.
Do not paste the private key.
Always keep a backup of an SSH public key if you are using one.
As of SCALE 24.04, users assigned to the truenas_readonly_administrators group cannot access the Shell screen.
Select the shell option for the user from the Shell dropdown list.
Options are nologin, TrueNAS CLI, TrueNAS Console, sh, bash, rbash, dash, tmux, and zsh.
To disable all password-based functionality for the account, select Lock User. Clear to unlock the user.
Set the sudo permissions you want to assign this user.
Exercise caution when allowing sudo commands, especially without password prompts.
We recommend limiting this privilege to trusted users and specific commands to minimize security risks.
Allowed sudo commands, Allow all sudo commands, Allowed sudo commands with no password, and Allow all sudo commands with no password grant the account limited root-like permissions using the sudo command.
If selecting Allowed sudo commands or Allowed sudo commands with no password, enter the specific sudo commands allowed for this user.
Enter each command as an absolute path to the ELF (Executable and Linkable Format) executable file, for example, /usr/bin/nano.
/usr/bin/ is the default location for commands.
Select Allow all sudo commands or Allow all sudo commands with no password.
Leave Samba Authentication selected to allow using the account credentials to access data shared with SMB.
Click Save.
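The Allowed sudo commands fields in the procedure above expect absolute paths to executables rather than bare command names. The helper below is an illustrative Python sketch (not part of TrueNAS) showing how a command name resolves to the path you would enter:

```python
import os
import shutil

def normalize_sudo_command(command: str) -> str:
    """Resolve a command name to the absolute executable path that the
    Allowed sudo commands fields expect, e.g. "nano" -> "/usr/bin/nano"
    on most systems. Raises ValueError if the command is not found."""
    if os.path.isabs(command):
        return command
    path = shutil.which(command)
    if path is None:
        raise ValueError(f"{command!r} not found in PATH")
    return path
```

On the TrueNAS host itself, `which nano` in a shell session gives the same answer.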
Editing User Accounts
To edit an existing user account, go to Credentials > Local Users.
Click anywhere on the user row to expand the user entry, then click Edit to open the Edit User configuration screen.
See Local User Screens for details on all settings.
8.3 - Managing Local Groups
Provides instructions to manage local groups.
TrueNAS offers groups as an efficient way to manage permissions for many similar user accounts.
See Users for managing users.
The interface lets you manage UNIX-style groups.
If the network uses a directory service, import the existing account information using the instructions in Active Directory.
View Existing Groups
To see saved groups, go to Credentials > Local Groups.
By default, TrueNAS hides the system built-in groups.
To see built-in groups, click the Show Built-In Groups toggle. The toggle turns blue and all built-in groups display. Click the Show Built-In Groups toggle again to show only non-built-in groups on the system.
Adding a New Group
To create a group, go to Credentials > Local Groups and click Add.
In GID, enter a unique number that TrueNAS uses to identify a Unix group.
Enter a number above 3000 for a group with user accounts or enter the default port number as the GID for a system service.
Enter a name for the group.
The group name cannot begin with a hyphen (-) or contain a space, tab, or any of these characters: colon (:), plus (+), ampersand (&), hash (#), percent (%), caret (^), open or close parenthesis ( ), exclamation mark (!), at symbol (@), tilde (~), asterisk (*), question mark (?), greater than (>), less than (<), or equal sign (=). The dollar sign ($) can be the last character in a group name.
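These naming rules can be expressed as a pattern. The following is a hypothetical shell check, not the validation TrueNAS itself performs; it allows letters, digits, periods, underscores, and non-leading hyphens, with an optional trailing dollar sign:

```shell
# Illustrative check of the group-name rules above: first character
# must not be a hyphen, forbidden punctuation is excluded, and '$'
# is permitted only as the final character.
valid_group_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_.][A-Za-z0-9_.-]*\$?$'
}

valid_group_name "media-group" && echo valid
valid_group_name "-bad"        || echo invalid
```
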
Allowed sudo commands, Allow all sudo commands, Allowed sudo commands with no password, and Allow all sudo commands with no password grant members of the group limited root-like permissions using the sudo command.
Use Allowed sudo commands or Allowed sudo commands with no password to list specific sudo commands allowed for group members.
Enter each command as an absolute path to the ELF (Executable and Linkable Format) executable file, for example /usr/bin/nano.
/usr/bin/ is the default location for commands.
Or click Allow all sudo commands or Allow all sudo commands with no password.
Exercise caution when allowing sudo commands, especially without password prompts.
We recommend limiting this privilege to trusted users and specific commands to minimize security risks.
To allow Samba permissions and authentication to use this group, select Samba Authentication.
To allow more than one group to have the same group ID (not recommended), select Allow Duplicate GIDs.
Use only if absolutely necessary, as duplicate GIDs can lead to unexpected behavior.
Managing Groups
Click anywhere on a row to expand that group and show the group management buttons.
To add a user account to the group, select the user and then click the right arrow.
To remove a user account from the group, select the user and then click the left arrow.
To select multiple users, press Ctrl and click on each entry.
Click Save.
Edit Group
To edit an existing group, go to Credentials > Local Groups, expand the group entry, and click Edit to open the Edit Group configuration screen. See Local Group Screens for details on all settings.
8.4 - Setting Up Directory Services
Tutorials for configuring the various directory service credentials.
The SCALE Directory Services tutorials contain options to edit directory domain and account settings, set up ID mapping, and configure authentication and authorization services in TrueNAS SCALE.
Choosing Active Directory or LDAP
When setting up directory services in TrueNAS, you can connect TrueNAS to either an Active Directory or an LDAP server but not both.
To view Idmap and Kerberos Services, click Show next to Advanced Settings.
Configuring LDAP: Provides instructions on configuring and managing LDAP in TrueNAS SCALE.
Configuring Kerberos: Provides instructions on configuring and managing Kerberos realms and keytabs in TrueNAS SCALE.
Configuring IDMap: Provides instructions on configuring and managing ID mapping in TrueNAS SCALE.
8.4.1 - Configuring Active Directory
Provides instructions on configuring Active Directory in TrueNAS SCALE.
Configuring Active Directory In TrueNAS
The Active Directory (AD) service shares resources in a Windows network.
AD provides authentication and authorization services for the users in a network, eliminating the need to recreate the user accounts on TrueNAS.
When joined to an AD domain, you can use domain users and groups in local ACLs on files and directories.
You can also set up shares to act as a file server.
Joining an AD domain also configures the Pluggable Authentication Modules (PAM) to let domain users log on via SSH or authenticate to local services.
Users can configure AD services on Windows or Unix-like operating systems using Samba version 4.
To configure an AD connection, you must know the AD controller domain and the AD system account credentials.
Preparing to Configure AD in SCALE
Users can take a few steps before configuring Active Directory (AD) to ensure the connection process goes smoothly.
To confirm that name resolution is functioning, open Shell, ping the AD domain controller, and check the network SRV records to verify DNS resolution.
To use dig to verify name resolution and return DNS information:
Go to System Settings > Shell and use dig to query the AD domain controller.
The domain controller manages or restricts access to domain resources by authenticating user identities through login credentials, and it prevents unauthorized access to those resources. The domain controller applies security policies to requests for access to domain resources.
When TrueNAS sends and receives packets without loss, the connection is verified.
Press Ctrl + C to cancel the ping.
The ping failed!
If the ping fails:
Go to Network and click Settings in the Global Configuration window.
Update the DNS Servers and Default Gateway settings so TrueNAS can reach your Active Directory domain controller.
Use more than one Nameserver for the AD domain controllers so DNS queries for requisite SRV records can succeed.
Using more than one name server helps maintain the AD connection whenever a domain controller becomes unavailable.
Checking Network SRV Records
Also in Shell, check the network SRV records and verify DNS resolution by entering the command host -t srv <_ldap._tcp.domainname.com>, where domainname.com is the domain name of the AD domain controller.
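The SRV record name is built by prefixing _ldap._tcp. to the domain name. A small sketch that assembles the record name and the lookup command (example.com is a placeholder; the actual query requires network access to your DNS servers, so only the command string is printed here):

```shell
# Build the LDAP SRV record name for an AD domain and show the
# lookup command. Run the printed command in Shell to verify that
# DNS returns SRV records for the domain controllers.
domain="example.com"                 # placeholder AD domain
srv_record="_ldap._tcp.${domain}"

echo "host -t srv ${srv_record}"
# → host -t srv _ldap._tcp.example.com
```
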
Setting Time Synchronization
Active Directory relies on the time-sensitive Kerberos protocol.
TrueNAS adds the AD domain controller with the PDC Emulator FSMO Role as the preferred NTP server during the domain join process.
If your environment requires something different, go to System Settings > General to add or edit a server in the NTP Servers window.
Keep the local system time synchronized to within five (5) minutes of the AD domain controller time in a default AD environment.
Use an external time source when configuring a virtualized domain controller.
TrueNAS generates alerts if the system time gets out of sync with the AD domain controller time.
TrueNAS has a few options to ensure both systems are synchronized:
Go to System Settings > General and click Settings in the Localization window to select the Timezone that matches the location of the AD domain controller.
Set either local time or universal time in the system BIOS.
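The five-minute tolerance can be checked manually by comparing epoch timestamps. A sketch, assuming you can obtain the domain controller time as an epoch value (the dc_epoch value here is hard-coded for illustration):

```shell
# Compare two epoch timestamps and report whether the skew is within
# the 300-second (5-minute) tolerance that Kerberos expects by default.
within_tolerance() {
  local_t=$1 remote_t=$2
  skew=$(( local_t - remote_t ))
  [ "$skew" -lt 0 ] && skew=$(( -skew ))
  [ "$skew" -le 300 ]
}

local_epoch=$(date +%s)
dc_epoch=$(( local_epoch + 120 ))   # illustrative: DC 2 minutes ahead
if within_tolerance "$local_epoch" "$dc_epoch"; then
  echo "clock skew within tolerance"
else
  echo "clock skew too large - fix NTP settings"
fi
```
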
Connecting to the Active Directory Domain
To connect to Active Directory in SCALE:
Go to Credentials > Directory Services and click Configure Active Directory to open the Active Directory configuration screen.
Enter the domain name for the AD in Domain Name and the account credentials in Domain Account Name and Domain Account Password.
Select Enable to attempt to join the AD domain immediately after saving the configuration.
SCALE populates the Kerberos Realm and Kerberos Principal fields on the Advanced Options settings screen.
Click Save.
TrueNAS offers advanced options for fine-tuning the AD configuration, but the preconfigured defaults are generally suitable.
I don't see any AD information!
TrueNAS can take a few minutes to populate the Active Directory information after configuration.
To check the AD join progress, open the Task Manager in the upper-right corner.
TrueNAS displays any errors during the join process in the Task Manager.
When the import completes, AD users and groups become available while configuring basic dataset permissions or an ACL with TrueNAS cache enabled (enabled by default).
Joining AD also adds default Kerberos realms and generates a default AD_MACHINE_ACCOUNT keytab.
TrueNAS automatically begins using this default keytab and removes any administrator credentials stored in the TrueNAS configuration file.
Troubleshooting - Resyncing the Cache
If the cache becomes out of sync or fewer users than expected are available in the permissions editors, resync it by clicking Settings in the Active Directory window and selecting Rebuild Directory Service Cache.
When creating the entry, enter the TrueNAS hostname in the name field and make sure it matches the information on the Network > Global Configuration screen in the Hostname and NetBIOS fields.
Disabling Active Directory
You can disable your AD server connection without deleting your configuration or leaving the AD domain.
Click Settings to open the Active Directory settings screen, clear the Enable checkbox, and click Save to disable the SCALE AD service.
This returns you to the main Directory Services screen where you see the two main directory services configuration options.
Click Configure Active Directory to open the Active Directory screen with your existing configuration settings.
Select Enable again, then click Save to reactivate the connection to your AD server.
Leaving Active Directory
TrueNAS SCALE requires you to cleanly leave an Active Directory domain if you want to delete the configuration. To cleanly leave AD, click Leave Domain on the Active Directory Advanced Settings screen. This removes the AD object, the computer account, and the associated DNS records from the Active Directory.
If the AD server moves or shuts down without you using Leave Domain, TrueNAS does not remove the AD object, and you have to clean up the Active Directory.
8.4.2 - Configuring LDAP
Provides instructions on configuring and managing LDAP in TrueNAS SCALE.
TrueNAS has an Open LDAP client for accessing the information on an LDAP server.
An LDAP server provides directory services for finding network resources like users and their associated permissions.
You can have either Active Directory or LDAP configured on SCALE but not both.
Does LDAP work with SMB?
LDAP authentication for SMB shares is disabled unless you configured and populated the LDAP directory with Samba attributes.
The most popular script for performing this task is smbldap-tools.
TrueNAS needs to be able to validate the full certificate chain (no self-signed certificates).
TrueNAS does not support non-CA certificates.
Configuring LDAP
To configure SCALE to use an LDAP directory server:
Go to Credentials > Directory Services and click Configure LDAP.
Enter your LDAP server host name. If using a cloud service LDAP server, do not include the full URL.
Enter your LDAP server base DN. This is the top level of the LDAP directory tree to use when searching for resources.
Enter the bind DN (administrative account name for the LDAP server) and the bind password.
Select Enable to activate the configuration.
Click Save.
If you want to further modify the LDAP configuration, click Advanced Options. See the LDAP UI Reference article for details about advanced settings.
Disabling LDAP
To disable LDAP but not remove the configuration, clear the Enable checkbox. The main Directory Services screen returns to the default view showing the options to configure Active Directory or LDAP.
To enable LDAP again, click Configure LDAP to open the LDAP screen with your saved configuration. Select Enable again to reactivate your LDAP directory server configuration.
Removing LDAP from SCALE
To remove the LDAP configuration, click Settings to open the LDAP screen.
Clear all settings and click Save.
8.4.3 - Configuring Kerberos
Provides instructions on configuring and managing Kerberos realms and keytabs in TrueNAS SCALE.
Kerberos is extremely complex. Only system administrators experienced with configuring Kerberos should attempt it.
Misconfiguring Kerberos settings, realms, and keytabs can have a system-wide impact beyond Active Directory or LDAP, and can result in system outages.
Do not attempt to configure or make changes if you do not know what you are doing!
Kerberos is a computer network security protocol. It authenticates service requests between trusted hosts across an untrusted network (i.e., the Internet).
If you configure Active Directory in SCALE, SCALE populates the realm fields and the keytab with what it discovers in AD.
You can configure LDAP to communicate with other LDAP servers using Kerberos, or NFS if it is properly configured, but SCALE does not automatically add the realm or keytab for these services.
After AD populates the Kerberos realm and keytabs, do not make changes. Consult with your IT or network services department, or those responsible for the Kerberos deployment in your network environment for help.
For more information on Kerberos settings refer to the MIT Kerberos Documentation.
Kerberos uses realms and keytabs to authenticate clients and servers.
A Kerberos realm is an authorized domain that a Kerberos server can use to authenticate a client.
By default, TrueNAS creates a Kerberos realm for the local system.
A keytab (“key table”) is a file that stores encryption keys for authentication.
TrueNAS SCALE allows users to configure general Kerberos settings, as well as realms and keytabs.
Kerberos Realms
TrueNAS automatically generates a realm after you configure AD.
Users can configure Kerberos realms by navigating to Directory Services and clicking Add in the Kerberos Realms window.
Enter the realm and key distribution center (KDC) names, then define the admin and password servers for the realm.
Click Save.
Kerberos Keytabs
TrueNAS automatically generates a keytab after you configure AD.
A Kerberos keytab replaces the administration credentials for Active Directory after initial configuration.
Since TrueNAS does not save the Active Directory or LDAP administrator account password in the system database, keytabs can be a security risk in some environments.
When using a keytab, create and use a less-privileged account to perform queries.
TrueNAS stores that account password in the system database.
Adding the Windows Keytab to TrueNAS
After generating the keytab, go back to Directory Services in TrueNAS and click Add in the Kerberos Keytab window to add it to TrueNAS.
To make AD use the keytab, click Settings in the Active Directory window and select it using the Kerberos Principal dropdown list.
When using a keytab with AD, ensure the keytab user name and password match the Domain Account Name and Domain Account Password.
To make LDAP use a keytab principal, click Settings in the LDAP window and select the keytab using the Kerberos Principal dropdown list.
Kerberos Settings
If you do not understand Kerberos auxiliary parameters, do not attempt to configure new settings!
The Kerberos Settings screen includes two fields used to configure auxiliary parameters.
Kerberos is extremely complex. Only system administrators experienced with configuring Kerberos should attempt it.
Misconfiguring Kerberos settings, realms, and keytabs can have a system-wide impact beyond Active Directory or LDAP, and can result in system outages.
Do not attempt to configure or make changes if you do not know what you are doing!
8.4.4 - Configuring IDMap
Provides instructions on configuring and managing ID mapping in TrueNAS SCALE.
Idmap settings integrate TrueNAS with an existing directory domain to ensure that the UIDs and GIDs assigned to Active Directory users and groups have consistent values domain-wide.
The correct configuration therefore relies on details that are entirely external to the TrueNAS server, e.g., how the AD administrator has configured other Unix-like computers in the environment.
The default is to use an algorithmic method of generating IDs based on the RID component of the user or group SID in Active Directory.
Only administrators experienced with configuring ID mapping should attempt to add new or edit existing idmaps.
Misconfiguration can lead to permissions incorrectly assigned to users or groups in the case where data is transferred to/from external servers via ZFS replication or rsync (or when access is performed via NFS or other protocols that directly access the UIDs/GIDs on files).
The Idmap directory service lets users configure and select a backend to map Windows security identifiers (SIDs) to UNIX UIDs and GIDs. Users must enable the Active Directory service to configure and use identity mapping (Idmap).
Users can click Add in the Idmap widget to configure backends or click on an already existing Idmap to edit it.
TrueNAS automatically generates an Idmap after you configure AD or LDAP.
Adding an ID Map
From the Directory Services screen, click Show to the right of Advanced Settings and then click Confirm to close the warning dialog.
Click Add on the Idmap widget to open the Idmap Settings screen.
Select the type from the Name field dropdown. Screen settings change based on the selection.
Select the Idmap Backend type from the dropdown list. Screen settings change based on the backend selected.
Enter the required field values.
Click Save.
8.5 - Backup Credentials
Backup credential tutorials for integrating TrueNAS SCALE with cloud storage providers by setting up SSH connections and keypairs.
TrueNAS backup credentials store cloud backup services credentials, SSH connections, and SSH keypairs.
Users can set up backup credentials with cloud and SSH clients to back up data in case of drive failure.
Contents
Adding Cloud Credentials: Provides basic instructions on how to add backup cloud credentials and more detailed instructions for some cloud storage providers.
Adding SSH Credentials: Provides information on adding SSH connections, generating SSH keypairs, and adding the SSH public key to the root user.
8.5.1 - Adding Cloud Credentials
Provides basic instructions on how to add backup cloud credentials and more detailed instructions for some cloud storage providers.
The Cloud Credentials widget on the Backup Credentials screen allows users to integrate TrueNAS with cloud storage providers.
These providers are supported for Cloud Sync tasks in TrueNAS SCALE:
To maximize security, TrueNAS encrypts cloud credentials when saving them.
However, this means that to restore any cloud credentials from a TrueNAS configuration file, you must enable Export Password Secret Seed when generating that configuration backup.
Remember to protect any downloaded TrueNAS configuration files.
Authentication methods for each provider could differ based on the provider security requirements.
You can add credentials for many of the supported cloud storage providers from the information on the Cloud Credentials Screens.
This article provides instructions for the more involved providers.
Before You Begin
We recommend opening another browser tab and logging in to the cloud storage provider account you intend to link with TrueNAS.
Some providers require additional information that they generate on the storage provider account page.
For example, saving an Amazon S3 credential on TrueNAS could require logging in to the S3 account and generating an access key pair found on the Security Credentials > Access Keys page.
Have any authentication information your cloud storage provider requires on hand to make the process easier. Authentication information can include, but is not limited to, user credentials, access tokens, and access and security keys.
Adding Cloud Credentials
To set up a cloud credential, go to Credentials > Backup Credentials and click Add in the Cloud Credentials widget.
Enter a credential name.
Select the cloud service from the Provider dropdown list. The authentication settings required by that provider display.
Click Verify Credentials to test the entered credentials and verify they work.
Click Save.
Adding Storj Cloud Credentials
The process to set up the Storj account, create buckets, create the S3 access, and download the credentials is fully documented in the Adding Storj Cloud Credentials section of Adding a Storj Cloud Sync Task.
Adding Amazon S3 Cloud Credentials
If adding an Amazon S3 cloud credential, you can use the default authentication settings or use advanced settings if you want to include endpoint settings.
Click here for more information
After entering a name and leaving Amazon S3 as the Provider setting:
Navigate to My account > Security Credentials > Access Keys to obtain the Amazon S3 secret access key ID.
Access keys are alphanumeric and between 5 and 20 characters.
If you cannot find or remember the secret access key, go to My Account > Security Credentials > Access Keys and create a new key pair.
Enter or copy/paste the access key into Access Key ID.
Enter or copy/paste the Amazon Web Services alphanumeric password that is between 8 and 40 characters into Secret Access Key.
(Optional) Enter a value to define the maximum number of chunks for a multipart upload in Maximum Upload Parts.
Setting a maximum is necessary if a service does not support the 10,000 chunk AWS S3 specification.
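As a worked example of the 10,000-chunk limit: the minimum chunk size for a given file is the file size divided by the maximum part count, rounded up. A sketch (the 48 GiB file size is an arbitrary illustration):

```shell
# Compute the minimum multipart chunk size (in MiB) needed to fit a
# file into at most a given number of upload parts, rounding up with
# integer arithmetic: ceil(a/b) = (a + b - 1) / b.
min_chunk_mib() {
  file_mib=$1 max_parts=$2
  echo $(( (file_mib + max_parts - 1) / max_parts ))
}

# A 48 GiB file (49152 MiB) split across 10,000 parts needs
# chunks of at least 5 MiB.
min_chunk_mib 49152 10000   # → 5
```
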
(Optional) Select Advanced Settings to display the endpoint settings.
To use the default endpoint for the region and automatically fetch available buckets, leave this field blank.
For more information, refer to the AWS documentation for a list of Simple Storage Service website endpoints.
To detect the correct public region for the selected bucket, leave the field blank.
Entering a private region name allows interacting with Amazon buckets created in that region.
c. (Optional) Configure a custom endpoint URL.
d. (Optional) Select Disable Endpoint Region to prevent automatic detection of the bucket region.
Enable only if your AWS provider does not support regions.
e. (Optional) Select Use Signature Version 2 to force using signature version 2 with the custom endpoint URL.
Select only if your AWS provider does not support default version 4 signatures.
For more information on using this to sign API requests see Signature Version 2.
Click Verify Credentials to check your credentials for any issues.
Click Save.
Adding Cloud Credentials that Authenticate with OAuth
Cloud storage providers using OAuth as an authentication method are Box, Dropbox, Google Drive, Google Photos, pCloud, and Yandex.
Click here for more information
After logging into the provider with the OAuth credentials, the provider provides the access token.
Google Drive and pCloud use one more setting to authenticate credentials.
Enter the name and select the cloud storage provider from the Provider dropdown list.
Enter the provider account email in OAuth Client ID and the password for that user account in OAuth Client Secret.
Click Log In To Provider. The Authentication window opens. Click Proceed to open the OAuth credential account sign in window.
Yandex displays a cookies message you must accept before you can enter credentials.
Enter the provider account user name and password to verify the credentials.
(Optional) Enter the value for any additional authentication method.
For pCloud, enter the pCloud host name for the host you connect to in Hostname.
For Google Drive when connecting to Team Drive, enter the Google Drive top-level folder ID.
Enter the access token from the provider if not populated by the provider after OAuth authentication. Obtaining the access token varies by provider.
Provider
Access Token
Box
For more information on the user access token for Box, click here. An access token enables Box to verify a request belongs to an authorized session. Example token: T9cE5asGnuyYCCqIZFoWjFHvNbvVqHjl.
Google Drive
The authentication process creates the token for Google Drive and populates the Access Token field automatically. Access tokens expire periodically, so you must refresh them.
Google Photos
Does not use an access token.
pCloud
Create the pCloud access token here. These tokens can expire and require an extension.
Click Verify Credentials to make sure you can connect with the entered credentials.
Click Save.
Adding BackBlaze B2 Cloud Credentials
BackBlaze B2 uses an application key and key ID to authenticate credentials.
Click here for more information
From the Cloud Credentials widget, click Add and then:
Enter the name and select BackBlaze B2 from the Provider dropdown list.
Log in to the BackBlaze account and go to the App Keys page. Add a new application key, then copy the key ID string and paste it into Key ID.
From the same App Keys page, copy the application key string and paste it into Application Key.
Click Verify Credentials.
Click Save.
Adding Google Cloud Storage Credentials
Google Cloud Storage uses a service account json file to authenticate credentials.
Click here for more information
From the Cloud Credentials widget, click Add and then:
Enter the name and select Google Cloud Storage from the Provider dropdown list.
Go to your Google Cloud Storage website and download the service account JSON file to the TrueNAS SCALE server.
The Google Cloud Platform Console creates the file.
Upload the JSON file to Preview JSON Service Account Key by clicking Choose File and browsing to the downloaded file on the server. For help uploading a Google Service Account credential file, click here.
Click Verify Credentials.
Click Save.
Adding OpenStack Swift Cloud Credentials
OpenStack Swift authentication credentials change based on selections made in AuthVersion. All options use the user name, API key or password and authentication URL, and can use the optional endpoint settings.
Click here for more information
d. Enter the ID in Tenant ID. Required for v2 and v3 and (optional) enter a Tenant Domain.
e. (Optional) Enter the alternative authentication token in Auth Token.
f. Enter a region name in Region Name.
g. (Optional) Enter the URL in Storage URL.
h. (Required) Select a service catalog option from the Endpoint Type dropdown. Options are Public, Internal, and Admin. Public is recommended.
Click Verify Credentials.
Click Save.
Using Automatic Authentication
Some providers can automatically populate the required authentication strings by logging in to the account.
Click here for more information
To automatically configure the credential, click Login to Provider and enter your account user name and password.
We recommend verifying the credential before saving it.
8.5.2 - Adding SSH Credentials
Provides information on adding SSH connections, generating SSH keypairs, and adding the SSH public key to the root user.
The SSH Connections and SSH Keypairs widgets on the Backup Credentials screen display a list of SSH connections and keypairs configured on the system.
Using these widgets, users can establish Secure Shell (SSH) connections.
You must also configure and activate the SSH Service to allow SSH access.
Creating an SSH Connection
To begin setting up an SSH connection, go to Credentials > Backup Credentials.
This procedure uses the semi-automatic setup method for creating an SSH connection with other TrueNAS or FreeNAS systems.
Click here for more information
Semi-automatic simplifies setting up an SSH connection with another FreeNAS or TrueNAS system without logging in to that system to transfer SSH keys.
This requires an SSH keypair on the local system and administrator account credentials for the remote TrueNAS.
You must configure the remote system to allow root access with SSH.
You can generate the keypair as part of the semi-automatic configuration or use a keypair manually created in SSH Keypairs.
Using the SSH Connections configuration screen:
Enter a name and select the Setup Method. If establishing an SSH connection to another TrueNAS server use the default Semi-automatic (TrueNAS only) option.
If connecting to a non-TrueNAS server select Manual from the dropdown list.
a. Enter a valid URL scheme for the remote TrueNAS URL in TrueNAS URL.
This is a required field.
b. Enter an admin user name, which is the username on the remote system entered to log in via the web UI to set up the connection.
Or, leave Admin Username set to the default root user and enter the user password in Admin Password.
c. If two-factor authentication is enabled, enter the one-time password in One-Time Password (if necessary).
d. Enter a Username, which is the user name on the remote system to log in via SSH.
e. Enter or import the private key from a previously created SSH keypair, or create a new one using the SSH Keypair widget.
(Optional) Enter the number of seconds you want to have SCALE wait for the remote TrueNAS/FreeNAS system to connect in Connect Timeout.
Click Save. Saving a new connection automatically opens a connection to the remote TrueNAS and exchanges SSH keys.
The new SSH connection displays on the SSH Connection widget.
To edit it, click on the name to open the SSH Connections configuration screen populated with the saved settings.
Configuring a Manual SSH Connection
Follow these instructions to set up an SSH connection to a non-TrueNAS or non-FreeNAS system.
To manually set up an SSH connection, you must copy a public encryption key from the local system to the remote system.
A manual setup allows a secure connection without a password prompt.
Click here for more information
Using the SSH Connections configuration screen:
Enter a name and select Manual from the Setup Method dropdown list.
a. Enter a host name or host IP address for the remote non-TrueNAS/FreeNAS system as a valid URL.
An IP address example is https://10.231.3.76.
This is a required field.
b. Enter the port number of the remote system to use for the SSH connection.
c. Enter a user name for logging into the remote system in Username.
d. From the Private Key dropdown, select the private key of the SSH keypair used to transfer the public key to the remote NAS.
e. Click Discover Remote Host Key after properly configuring all other fields to query the remote system and automatically populate the Remote Host Key field.
(Optional) Enter the number of seconds you want SCALE to wait for the remote system to connect in Connect Timeout.
Click Save. Saving a new connection automatically opens a connection to the remote system and exchanges SSH keys.
The new SSH connection displays on the SSH Connection widget.
To edit it, click on the name to open the SSH Connections configuration screen populated with the saved settings.
Adding a Public SSH Key to an Admin User Account
This procedure covers adding a public SSH key to the admin account on the TrueNAS SCALE system and generating a new SSH Keypair to add to the remote system (TrueNAS or other).
Click here for more information
Copy the SSH public key text or download it to a text file:
Log into the TrueNAS system that generated the SSH keypair and go to Credentials > Backup Credentials.
Click on the name of the keypair on the SSH Keypairs widget to open the keypair for the SSH connection.
Copy the text of the public SSH key or download the public key as a text file.
Add the public key to the admin account on the system where you want to register the public key.
Log into the TrueNAS system where you want to register the public key and go to Credentials > Local Users.
Edit the admin account.
Click anywhere on the admin user row to expand it, then click Edit to open the Edit User screen.
Paste the SSH public key text into the Authorized Keys field on the Edit User configuration screen in the Authentication settings.
Alternately, click Choose File to select and upload the SSH key.
Do not paste the SSH private key.
Click Save.
If you need to generate a new SSH keypair:
Go to Credentials > Backup Credentials.
Click Add on the SSH Keypairs widget and select Generate New.
Copy or download the value for the new public key.
Add the new public key to the remote NAS.
If the remote NAS is not a TrueNAS system, refer to the documentation for that system, and find their instructions on adding a public SSH key.
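On many non-TrueNAS systems, registering a public key means appending it to the target user's ~/.ssh/authorized_keys file with restrictive permissions. A generic sketch, using a scratch directory and a demo key string in place of the real ones (consult the remote system's documentation for its actual requirements):

```shell
# Append a public key to an authorized_keys file and lock down
# permissions. A temporary directory stands in for ~/.ssh so the
# sketch does not touch real account configuration.
ssh_dir=$(mktemp -d)                      # stands in for ~/.ssh
pubkey="ssh-rsa AAAAB3...demo user@nas"   # illustrative key text

printf '%s\n' "$pubkey" >> "$ssh_dir/authorized_keys"
chmod 700 "$ssh_dir"                      # sshd rejects lax permissions
chmod 600 "$ssh_dir/authorized_keys"

grep -c 'ssh-rsa' "$ssh_dir/authorized_keys"   # → 1
```
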
Generating SSH Keypairs
TrueNAS generates and stores RSA-encrypted SSH public and private keypairs on the SSH Keypairs widget found on the Credentials > Backup Credentials screen.
Keypairs are generally used when configuring SSH Connections or SFTP Cloud Credentials.
TrueNAS does not support encrypted keypairs or keypairs with passphrases.
TrueNAS automatically generates keypairs as needed when creating new SSH Connections or Replication tasks.
To manually create a new keypair:
Click Add on the SSH Keypairs widget.
Click Generate New on the SSH Keypairs screen.
Give the new keypair a unique name and click Save.
The keypair displays on the SSH Keypairs widget.
Click the vertical ellipsis more_vert at the bottom of the SSH Keypairs configuration screen to download these strings as text files for later use.
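For reference, the equivalent keypair generation from a command shell (for example, to produce a key for a non-TrueNAS remote system) looks like the following sketch; the file name is illustrative:

```shell
# Work in a scratch directory; paths are illustrative.
cd "$(mktemp -d)"

# Generate a 4096-bit RSA keypair with an empty passphrase;
# TrueNAS does not support passphrase-protected keys.
ssh-keygen -t rsa -b 4096 -N "" -f replication_key -q

# Show the public key text to paste into the remote system.
cat replication_key.pub
```

The private key (replication_key) stays on the generating system; only the .pub text is registered on the remote side.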
8.6 - Certificates
Information about adding and managing certificates, CSRs, CAs and ACME DNS-Authenticators in TrueNAS SCALE.
Use the widgets on the Credentials > Certificates screen to manage certificates, certificate signing requests (CSRs), certificate authorities (CAs), and ACME DNS-authenticators.
Each TrueNAS system comes equipped with an internal, self-signed certificate that enables encrypted access to the web interface, but users can create custom certificates for authentication and validation while sharing data.
Contents
Managing Certificates: Provides information on adding or managing SCALE certificates.
Creating ACME Certificates: Provides information on generating ACME certificates in TrueNAS SCALE using Let's Encrypt.
8.6.1 - Managing Certificates
Provides information on adding or managing SCALE certificates.
The Certificates screen widgets display information for certificates, certificate signing requests (CSRs), certificate authorities (CAs), and ACME DNS-authenticators configured on the system, and provide the ability to add new ones.
TrueNAS comes equipped with an internal, self-signed certificate that enables encrypted access to the web interface, but users can make custom certificates for authentication and validation while sharing data.
Adding Certificates
Users can import existing certificates or create additional ones by clicking Add on the Certificates widget.
To add a new certificate:
Click Add on the Certificates widget to open the Add Certificate wizard.
First, enter a name to identify the certificate and select the type.
The Identifier and Type step lets users name the certificate and choose whether to use it for internal or local systems, or import an existing certificate.
Users can also select a predefined certificate extension from the Profiles dropdown list.
Next, specify the certificate options. Select the Key Type first, as this selection changes the settings displayed.
The Certificate Options step provides options for choosing the signing certificate authority (CA), the type of private key to use (as well as the number of bits in the key used by the cryptographic algorithm), the cryptographic algorithm the certificate uses, and how many days the certificate remains valid.
Now enter the certificate location and basic information.
The Certificate Subject step lets users define the location, name, and email for the organization using the certificate.
Users can also enter the system fully-qualified hostname (FQDN) and any additional domains for multi-domain support.
Lastly, select any extension types you want to apply. Selecting Extended Key Usage also displays Key Usage settings. Select any extra constraints you need for your scenario.
The Extra Constraints step contains certificate extension options.
Basic Constraints, when enabled, limits the path length for a certificate chain.
Authority Key Identifier, when enabled, provides a means of identifying the public key corresponding to the private key used to sign a certificate.
Key Usage, when enabled, defines the purpose of the public key contained in the certificate.
Extended Key Usage, when enabled, further refines key usage extensions.
Review the certificate options. If you want to change a setting, click Back until you reach the screen with that setting, make the change, then click Next to advance to the Confirm Options step.
Click Save to add the certificate.
Importing a Certificate
To import a certificate, first select Import Certificate as the Type and name the certificate.
Next, if the CSR exists on your SCALE system, select CSR exists on this system and then select the CSR.
Copy and paste the certificate and private key into their fields, and enter and confirm the passphrase for the certificate if one exists.
Review the options, and then click Save.
8.6.2 - Managing Certificate Authorities
Provides basic instructions on adding and managing SCALE certificate authorities (CAs).
The Certificate Authorities widget lets users set up a certificate authority (CA) that certifies the ownership of a public key by the named subject of the certificate.
To add a new CA:
First, add the name and select the type of CA.
The Identifier and Type step lets users name the CA and choose whether to create a new CA or import an existing CA. Users can also select a predefined certificate extension from the Profiles drop-down list.
Next, enter the certificate options. Select the key type. The Key Type selection changes the settings displayed.
The Certificate Options step provides options for choosing what type of private key to use (as well as the number of bits in the key used by the cryptographic algorithm), the cryptographic algorithm the CA uses, and how many days the CA lasts.
Now enter the certificate subject information.
The Certificate Subject step lets users define the location, name, and email for the organization using the certificate. Users can also enter the system fully-qualified hostname (FQDN) and any additional domains for multi-domain support.
Lastly, enter any extra constraints you need for your scenario.
The Extra Constraints step contains certificate extension options.
Basic Constraints, when enabled, limits the path length for a certificate chain.
Authority Key Identifier, when enabled, provides a means of identifying the public key corresponding to the private key used to sign a certificate.
Key Usage, when enabled, defines the purpose of the public key contained in the certificate.
Extended Key Usage, when enabled, further refines key usage extensions.
Review the CA options. If you want to change a setting, click Back until you reach the screen with that setting, make the change, then click Next to advance to the Confirm Options step.
Click Save to add the CA.
8.6.3 - Managing Certificate Signing Requests
Provides basic instructions on adding and managing SCALE certificate signing requests (CSRs).
The Certificate Signing Requests widget allows users to configure the messages the system sends to a registration authority of the public key infrastructure to apply for a digital identity certificate.
To add a new CSR:
First enter the name and select the CSR type.
The Identifier and Type step lets users name the certificate signing request (CSR) and choose whether to create a new CSR or import an existing CSR. Users can also select a predefined certificate extension from the Profiles drop-down list.
Next, select the certificate options for the CSR type you selected.
The Certificate Options step provides options for choosing what type of private key to use, the number of bits in the key used by the cryptographic algorithm, and the cryptographic algorithm the CSR uses.
Now enter the information about the certificate.
The Certificate Subject step lets users define the location, name, and email for the organization using the certificate. Users can also enter the system fully-qualified hostname (FQDN) and any additional domains for multi-domain support.
Lastly, enter any extra constraints you need for your scenario.
The Extra Constraints step contains certificate extension options.
Basic Constraints, when enabled, limits the path length for a certificate chain.
Authority Key Identifier, when enabled, provides a means of identifying the public key corresponding to the private key used to sign a certificate.
Key Usage, when enabled, defines the purpose of the public key contained in the certificate.
Extended Key Usage, when enabled, further refines key usage extensions.
Review the CSR options. If you want to change a setting, click Back until you reach the screen with that setting, make the change, then click Next to advance to the Confirm Options step.
Click Save to add the CSR.
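After generating a CSR, you can review its subject and requested extensions from a shell before sending it to a signing authority. This sketch creates an example CSR to inspect; the request.csr file name and the common name are illustrative:

```shell
# Create an example CSR to inspect (replace request.csr with a CSR
# exported from TrueNAS when checking a real request).
cd "$(mktemp -d)"
openssl req -new -newkey rsa:2048 -nodes -subj "/CN=nas.example.com" \
    -keyout request.key -out request.csr 2>/dev/null

# Print the subject line, then the full details including any
# requested extensions such as subject alternative names.
openssl req -in request.csr -noout -subject
openssl req -in request.csr -noout -text
```

Confirming the Common Name and any Subject Alternate Names here avoids a round trip to the certificate authority with a misnamed request.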
8.6.4 - Adding ACME DNS-Authenticators
Provides basic instructions on adding and managing SCALE ACME DNS-authenticators.
Automatic Certificate Management Environment (ACME) DNS authenticators allow users to automate certificate issuing and renewal. The user must verify ownership of the domain before TrueNAS allows certificate automation.
ACME DNS is an advanced feature intended for network administrators or AWS professionals. Misconfiguring ACME DNS can prevent you from accessing TrueNAS.
The system requires an ACME DNS Authenticator and CSR to configure ACME certificate automation.
Adding a DNS Authenticator
To add an authenticator:
Click Add on the ACME DNS-Authenticator widget to open the Add DNS Authenticator screen.
Enter a name, and select the authenticator you want to configure.
Options are cloudflare, Amazon route53, OVH, and shell.
Authenticator selection changes the configuration fields.
If you select cloudflare as the authenticator, enter either your Cloudflare account email address and API key, or an API token.
If you select route53 as the authenticator, you must enter your Route53 Access key ID and secret access key.
See AWS documentation for information on creating a long-term access key with these credentials.
If you select OVH as the authenticator, you must enter your OVH application key, application secret, consumer key, and endpoint.
See OVHcloud and certbot-dns-ovh for information on retrieving these credentials and configuring access.
Click Save to add the authenticator.
Adding an Authenticator with a Shell Script
The shell authenticator option is meant for advanced users. Improperly configured scripts can result in system instability or unexpected behavior.
If you select shell as the authenticator, you must enter the path to an authenticator script, the running user, a certificate timeout, and a domain propagation delay.
Advanced users can select this option to pass an authenticator script, such as acme.sh, to shell and add an external DNS authenticator.
This option requires an ACME authenticator script saved to the system.
8.6.5 - Creating ACME Certificates
Provides information on generating ACME certificates in TrueNAS SCALE using Let’s Encrypt.
TrueNAS SCALE allows users to automatically generate custom domain certificates using Let’s Encrypt.
Requirements
An email address for your TrueNAS SCALE Admin user.
A custom domain that uses Cloudflare, AWS Route 53, or OVH.
A DNS server that does not cache for your TrueNAS SCALE system.
Create an ACME DNS-Authenticator
Go to Credentials > Certificates and click ADD in the ACME DNS-Authenticators widget.
Enter the required fields depending on your provider, then click Save.
For Cloudflare, enter either your Cloudflare Email and API Key, or enter an API Token.
If you create an API Token, make sure to give the token the permission Zone.DNS:Edit, as it’s required by certbot.
For Route53, enter your Access Key ID and Secret Access Key. The associated IAM user must have permission to perform the Route53 actions ListHostedZones, ChangeResourceRecordSets, and GetChange.
For OVH, enter your OVH Application Key, OVH Application Secret, OVH Consumer Key, and OVH Endpoint.
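For Route53, the permissions listed above can be expressed as a minimal IAM policy attached to the access key's user. This is a sketch; the hosted zone ID in the resource ARN is a placeholder you replace with your own, and you may scope the statements more tightly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:GetChange"],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/YOUR_ZONE_ID"
    }
  ]
}
```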
Create a Certificate Signing Request (CSR)
Next, click ADD in the Certificate Signing Requests widget.
You can use default settings except for the Common Name and Subject Alternate Names fields.
Enter your primary domain name in the Common Name field, then enter additional domains you wish to secure in the Subject Alternate Names field.
For example, if your primary domain is domain1.com, entering www.domain1.com secures both addresses.
Create ACME Certificate
Click the icon next to the new CSR.
Fill out the ACME Certificate form. Under Domains, select the ACME DNS Authenticator you created for both domains, then click Save.
You can create testing and staging certificates for your domain.
8.7 - Configuring KMIP
Describes how to configure KMIP on TrueNAS SCALE Enterprise.
TrueNAS Enterprise
KMIP is only available for TrueNAS SCALE Enterprise licensed systems.
Contact the iXsystems Sales Team to inquire about purchasing TrueNAS Enterprise licenses.
The Key Management Interoperability Protocol (KMIP) is an extensible client/server communication protocol for storing and maintaining keys, certificates, and secret objects.
KMIP on TrueNAS SCALE Enterprise integrates the system within an existing centralized key management infrastructure and uses a single trusted source for creating, using, and destroying SED passwords and ZFS encryption keys.
With KMIP, keys created on a single server are then retrieved by TrueNAS.
KMIP supports keys wrapped within keys, symmetric, and asymmetric keys.
KMIP enables clients to ask a server to encrypt or decrypt data without the client ever having direct access to a key.
You can also use KMIP to sign certificates.
Requirements
To simplify the TrueNAS connection process:
Have a KMIP server available with certificate authorities and certificates you can import into TrueNAS.
Have the KMIP server configuration open in a separate browser tab or copy the KMIP server certificate string and private key string to later paste into the TrueNAS web interface.
Log into the TrueNAS web interface and go to Credentials > Certificates.
Click Add on the Certificate Authorities widget.
Select Import CA from the Type dropdown list.
Enter a memorable name for the CA, then paste the KMIP server certificate in Certificate and the private key in Private Key.
Leave Passphrase empty.
Click Save.
Next, click Add on the Certificates widget.
Select Import Certificate from the Type dropdown list.
Enter a memorable name for the certificate, then paste the KMIP server certificate and private key strings into the related TrueNAS fields.
Leave Passphrase empty.
Click Save.
For security reasons, we strongly recommend protecting the CA and certificate values.
Enter the central key server host name or IP address in Server and the number of an open connection on the key server in Port.
Select the certificate and certificate authority that you imported from the central key server.
To ensure the certificate and CA chain is correct, click on Validate Connection. Click Save.
When the certificate chain verifies, choose the encryption values, SED passwords, or ZFS data pool encryption keys to move to the central key server.
Select Enabled to begin moving the passwords and keys immediately after clicking Save.
Refresh the KMIP screen to show the current KMIP Key Status.
If you want to cancel a pending key synchronization, select Force Clear and click Save.
9 - Virtualization
Tutorials for configuring TrueNAS SCALE virtualization features.
The Virtualization section allows users to set up Virtual Machines (VMs) to run alongside TrueNAS.
Delegating processes to VMs reduces the load on the physical system, which means users can utilize additional hardware resources.
Users can customize six different segments of a VM when creating one in TrueNAS SCALE.
What system resources do VMs require?
TrueNAS assigns a portion of system RAM and a new zvol to each VM.
While a VM is running, these resources are not available to the host computer or other VMs.
TrueNAS VMs use the KVM virtual machine software.
This type of virtualization requires an x86 machine running a recent Linux kernel on an Intel processor with VT (virtualization technology) extensions or an AMD processor with SVM extensions (also called AMD-V).
Users cannot create VMs unless the host system supports these features.
To verify that you have Intel VT or AMD-V, check your processor model name on the vendor’s website.
If needed, enable virtualization in the BIOS Advanced > CPU Configuration settings.
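From a Linux shell, a quick way to confirm the processor advertises these extensions is to search the CPU flags; a nonzero count means VT-x (vmx) or AMD-V (svm) is present:

```shell
# Count logical CPUs reporting the vmx (Intel VT) or svm (AMD-V)
# flag; 0 means hardware virtualization is unavailable or is
# disabled in the BIOS.
grep -Ec '(vmx|svm)' /proc/cpuinfo || true
```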
A virtual machine (VM) is an environment on a host computer that you can use as if it is a separate, physical computer.
Users can use VMs to run multiple operating systems simultaneously on a single computer.
Operating systems running inside a VM see emulated virtual hardware rather than the host computer physical hardware.
VMs provide more isolation than Jails but also consume more system resources.
Creating a Virtual Machine
Before creating a VM, obtain an installer .iso or image file for the OS you intend to install, and create a zvol on a storage pool that is available for both the virtual disk and the OS install file.
To create a new VM, go to Virtualization and click Add to open the Create Virtual Machine configuration screen.
If you have not yet added a virtual machine to your system, click Add Virtual Machines to open the same screen.
Select the operating system you want to use from the Guest Operating System dropdown list.
Compare the recommended specifications for the guest operating system with your available host system resources when allocating virtual CPUs, cores, threads, and memory size.
Change other Operating System settings per your use case.
Select UTC as the VM system time from the System Clock dropdown if you do not want to use the default Local setting.
Select Enable Display to enable a SPICE remote connection for the VM.
If Enable Display is selected, the Bind and Password fields display:
Enter a display Password.
Use the dropdown menu to change the default IP address in Bind if you want to use a specific address as the display network interface; otherwise, leave it set to 0.0.0.0.
The Bind menu populates with any existing logical interfaces, such as static routes, configured on the system.
Bind cannot be edited after VM creation.
If you selected Windows as the Guest Operating System, the Virtual CPUs field displays a default value of 2.
The VM operating system might have operational or licensing restrictions on the number of CPUs.
Do not allocate too much memory to a VM. Activating a VM with all available memory allocated to it can slow the host system or prevent other VMs from starting.
Leave CPU Mode set to Custom if you want to select a CPU model.
Use Memory Size and Minimum Memory Size to specify how much RAM to dedicate to this VM.
To dedicate a fixed amount of RAM, enter a value (minimum 256 MiB) in the Memory Size field and leave Minimum Memory Size empty.
To allow for memory usage flexibility (sometimes called ballooning), define a specific value in the Minimum Memory Size field and a larger value in Memory Size.
The VM uses the Minimum Memory Size for normal operations but can dynamically allocate up to the defined Memory Size value in situations where the VM requires additional memory.
Reviewing available memory from within the VM typically shows the Minimum Memory Size.
Select the network interface type from the Adapter Type dropdown list. Intel e82585 (e1000) offers a higher level of compatibility with most operating systems; select VirtIO if the guest operating system supports para-virtualized network drivers.
Select the network interface card to use from the Attach NIC dropdown list.
Click Next.
Upload installation media for the operating system you selected.
An active VM displays options for settings_ethernetDisplay and keyboard_arrow_rightSerial Shell connections.
When a Display device is configured, remote clients can connect to VM display sessions using a SPICE client, or by installing a 3rd party remote desktop server inside your VM.
SPICE clients are available from the SPICE Protocol site.
If the display connection screen appears distorted, try adjusting the display device resolution.
Use the State toggle or click stopStop to perform a standard clean shutdown of the running VM.
Click power_settings_newPower Off to halt and deactivate the VM, which is similar to unplugging a computer.
If the VM does not have a guest OS installed, the VM State toggle and stopStop button might not function as expected.
The State toggle and stopStop buttons send an ACPI power down command to the VM operating system, but since an OS is not installed, these commands time out.
Use the Power Off button instead.
Installing an OS
After configuring the VM in TrueNAS and attaching the OS .iso file, start the VM and begin installing the operating system.
Some operating systems can require specific settings to function properly in a virtual machine.
For example, vanilla Debian can require advanced partitioning when installing the OS.
Refer to the documentation for your chosen operating system for tips and configuration instructions.
Installing Debian OS Example
Upload the Debian .iso to the TrueNAS system and attach it to the VM as a CD-ROM device.
This example uses Debian 12 and basic configuration recommendations.
Modify settings as needed to suit your use case.
Click Virtualization, then ADD to use the VM wizard.
The table below lists the settings used in this example.
Select the physical interface to associate with the VM.
Installation Media:
Installation ISO is uploaded to local storage. If the ISO is not uploaded, select Upload an installer image file. Select a dataset to store the ISO, click Choose file, then click Upload. Wait for the upload to complete.
GPU:
Leave the default values.
Confirm Options
Verify the information is correct and then click Save.
After creating the VM, start it. Expand the VM entry and click Start.
Click Display to open a SPICE interface and see the Debian Graphical Installation screens.
Press Enter to start the Debian Graphical Install.
a. Enter your localization settings for Language, Location, and Keymap.
b. Debian automatically configures networking and assigns an IP address with DHCP.
If the network configuration fails, click Continue and do not configure the network yet.
c. Enter a name in Hostname.
d. Enter a Domain name.
e. Enter the root password and re-enter it to confirm.
f. Enter a name in New User.
g. Select the username for your account or accept the generated name.
h. Enter and re-enter the password for the user account.
i. Choose the time zone, Eastern in this case.
Detect and partition disks.
a. Select Guided - use entire disk to partition.
b. Select the available disk.
c. Select All files in one partition (recommended for new users).
d. Select Finish partitioning and write changes to disk.
e. Select Yes to Write the changes to disks?.
Install the base system
a. Select No to the question Scan extra installation media.
b. Select Yes when asked Continue without a network mirror.
Install software packages
a. Select No when asked Participate in the package usage survey.
b. Select Standard system utilities.
c. Click Continue when the installation finishes.
After the Debian installation finishes, close the display window.
Remove the device or edit the device order.
In the expanded section for the VM, click Power Off to stop the new VM.
a. Click Devices.
b. Remove the CD-ROM device containing the install media or edit the device order to boot from the Disk device.
To remove the CD-ROM from the devices, click the more_vert icon and select Delete.
Click Delete Device.
To edit the device boot order, click the more_vert icon and select Edit.
Change the CD-ROM Device Order to a value greater than that of the existing Disk device, such as 1005.
Click Save.
Return to the Virtual Machines screen and expand the new VM again.
Click Start, then click Display.
What if the grub file does not run after starting the VM?
If the grub file does not run when you start the VM, enter the following commands after each start.
At the shell prompt:
Enter FS0: and press Enter.
Enter cd EFI and press Enter.
Enter cd Debian and press Enter.
Enter grubx64.efi and press Enter.
To ensure it starts automatically, create the startup.nsh file at the root directory on the VM. To create the file:
Go to the Shell.
At the shell prompt enter edit startup.nsh.
In the editor enter:
a. Enter FS0: and press Enter.
b. Enter cd EFI and press Enter.
c. Enter cd Debian and press Enter.
d. Enter grubx64.efi and press Enter.
Press Control+S (Command+S on macOS), then press Enter to save.
Use the Control+q keys to quit.
Close the display window.
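The finished startup.nsh file contains the same four commands entered above:

```text
FS0:
cd EFI
cd Debian
grubx64.efi
```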
To test if it boots up on startup:
a. Power off the VM.
b. Click Start.
c. Click Display.
d. Log into your Debian VM.
Configuring Virtual Machine Network Access
Configure VM network settings during or after installation of the guest OS.
To communicate with a VM from other parts of your local network, use the IP address configured or assigned by DHCP within the VM.
To confirm network connectivity, send a ping to and from the VM and other nodes on your local network.
Debian OS Example
Open a terminal in the Debian VM.
Enter ip addr and record the address.
Enter ping followed by the known IP address or hostname of another client on the network that is not your TrueNAS host.
Confirm the ping is successful.
To confirm internet access, you can also ping a known web server, such as ping google.com.
Log in to another client on the network and ping the IP address of your new VM.
Confirm the ping is successful.
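The checks above can be run as a short sequence inside the VM; the peer address in the comments is illustrative:

```shell
# Show the VM's IPv4 addresses (record the inet value).
ip -4 addr show

# Confirm reachability step by step; the VM's own loopback
# always answers, the rest depend on your network.
ping -c 1 127.0.0.1
# ping -c 3 192.168.1.50    # another client on the LAN (illustrative)
# ping -c 3 google.com      # confirms internet access and DNS
```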
Accessing TrueNAS Storage From a VM
By default, VMs are unable to communicate directly with the host NAS.
If you want to access your TrueNAS SCALE directories from a VM (for example, to connect to a TrueNAS data share), you have multiple options.
If your system has more than one physical interface, you can assign your VMs to a NIC other than the primary one your TrueNAS server uses. This method makes communication more flexible but does not offer the potential speed of a bridge.
If you have only one physical interface, create a bridge interface for the VM to use: stop all existing apps, VMs, and services using the current interface, edit the interface and VM settings, create the bridge, and add the bridge to the VM NIC device.
See Accessing NAS from VM for more information.
Accessing NAS From a VM: Provides instructions on how to create a bridge interface for the VM and provides Linux and Windows examples.
9.1 - Adding and Managing VM Devices
Provides instructions on adding or managing devices used by VMs.
Managing Devices
After creating a VM, the next step is to add virtual devices for that VM.
The Create Virtual Machine wizard configures at least one disk, a NIC, and a display device as part of the process.
To add devices, from the Virtual Machines screen, click anywhere on a VM entry to expand it and show the options for the VM.
Click device_hubDevices to open the Devices screen for the VM.
From this screen, you can edit, add, or delete devices.
Click the more_vert icon at the right of each listed device to see device options.
A virtual machine attempts to boot from devices according to the Device Order, starting with 1000, then ascending.
A CD-ROM device allows booting a VM from a CD-ROM image like an installation CD.
The CD image must be available in the system storage.
With a Display device, remote clients can connect to VM display sessions using a SPICE client, or by installing a 3rd party remote desktop server inside your VM.
SPICE clients are available from the SPICE Protocol site.
Before adding, editing, or deleting a VM device, stop the VM if it is running.
Click the State toggle to stop or restart a VM, or use the Stop and Restart buttons.
Editing a Device
Select Edit to open the Edit Device screen.
You can change the type of virtual hard disk, the storage volume to use, or change the device boot order.
To edit a VM device:
Stop the VM if it is running, then click Devices to open the list of devices for the selected VM.
Click on the more_vert icon at the right of the listed device you want to edit, then select Edit to open the Edit Device screen.
Select the path to the zvol created when setting up the VM from the Zvol dropdown list.
Select the type of hard disk emulation from the Mode dropdown list.
Select AHCI for better software compatibility, or select VirtIO for better performance if the guest OS installed in the VM has support for VirtIO disk devices.
(Optional) Specify the disk sector size in bytes in Disk Sector Size.
Leave set to Default or select either 512 or 4096 byte values from the dropdown list.
If not set, the sector size uses the ZFS volume values.
Specify the boot order or priority level in Device Order to move this device up or down in the sequence.
The lower the number the higher the priority in the boot sequence.
Click Save.
Restart the VM.
Deleting a Disk Device
Deleting a device removes it from the list of available devices for the selected VM.
To delete a VM device:
Stop the VM if it is running, then click Devices to open the list of devices for the selected VM.
Click on the more_vert icon at the right of the listed device you want to delete, then select Delete.
The Delete Virtual Machine dialog opens.
Select Delete zvol device to confirm you want to delete the zvol device.
Select Force Delete if you want the system to force the deletion of the zvol device, even if other devices or services are using or affiliated with the zvol device.
Click Delete Device.
Changing the Device Order
Stop the VM if it is running, then click Devices to open the list of devices for the selected VM.
Click Edit.
Enter the number that represents where in the boot sequence you want this device to boot in the Device Order field.
The lower the number, the higher the device is in the boot sequence.
Click Save.
Restart the VM.
Adding a CD-ROM Device
Select CD-ROM as the Device Type on the Add Device screen and set a boot order.
Stop the VM if it is running, then click Devices.
Click Add and select CD-ROM from the Device Type dropdown list.
Specify the mount path.
Click the arrow to the left of /mnt and at the pool and dataset levels to expand the directory tree. Select the path to the CD-ROM image.
Specify the boot sequence in Device Order.
Click Save.
Restart the VM.
Adding a NIC Device Type
Select NIC in the Device Type on the Add Device screen to add a network interface card for the VM to use.
Stop the VM if it is running, then click Devices.
Click Add and select NIC from the Device Type dropdown list.
Select the adapter type. Choose Intel e82585(e1000) for maximum compatibility with most operating systems.
If the guest OS supports VirtIO paravirtualized network drivers, choose VirtIO for better performance.
Click Generate to assign a new random MAC address to replace the random default address, or enter your own custom address.
Select the physical interface you want to use from the NIC To Attach dropdown list.
(Optional) Select Trust Guest Filters to allow the virtual server to change its MAC address and join multicast groups.
This is required for the IPv6 Neighbor Discovery Protocol (NDP).
Setting this attribute has security risks.
It allows the virtual server to change its MAC address and receive all frames delivered to this address.
Determine your network setup needs before setting this attribute.
Click Save.
Restart the VM.
Add a Disk Device Type
Select Disk in Device Type on the Add Device screen to configure a new disk location, drive type and disk sector size, and boot order.
Stop the VM if it is running, then click Devices.
Click Add and select Disk from the Device Type dropdown list.
Select the path to the zvol you created when setting up the VM using the Zvol dropdown list.
Select the hard disk emulation type from the Mode dropdown list.
Select AHCI for better software compatibility, or VirtIO for better performance if the guest OS installed in the VM supports VirtIO disk devices.
Specify the sector size in bytes in Disk Sector Size.
Leave set to Default or select either 512 or 4096 from the dropdown list to change it.
If the sector size remains unset, it uses the ZFS volume values.
Specify the boot sequence order for the disk device.
Click Save.
Restart the VM.
Adding a PCI Passthrough Device
Select PCI Passthrough Device in the Device Type on the Add Device screen to configure the PCI passthrough device and boot order.
Depending upon the type of device installed in your system, you might see a warning: PCI device does not have a reset mechanism defined.
You may experience inconsistent or degraded behavior when starting or stopping the VM.
Decide whether you want to proceed if this warning appears.
Stop the VM if it is running, then click Devices.
Click Add and select PCI Passthrough Device from the Device Type dropdown list.
Enter a value in PCI Passthrough Device using the format of bus#/slot#/fcn#.
Select the Controller Type from the dropdown list.
Select the hub controller type from the Device dropdown list.
If the type is not listed, select Specify custom, then enter the Vendor ID and Product ID.
Specify the boot sequence order.
Click Save.
Restart the VM.
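The bus, slot, and function numbers come from the host PCI address of the device. As a hedged illustration (the address shown is a hypothetical example), this bash sketch converts an lspci-style address such as 03:00.0 into the bus#/slot#/fcn# form the field expects:

```shell
#!/usr/bin/env bash
# Convert an lspci-style PCI address (bus:slot.function, e.g. "03:00.0")
# into the bus#/slot#/fcn# form described above.
# On a live system, list candidate devices first with: lspci
pci_to_field() {
  local addr="$1"                 # e.g. "03:00.0"
  local bus="${addr%%:*}"         # "03"
  local rest="${addr#*:}"         # "00.0"
  local slot="${rest%%.*}"        # "00"
  local fcn="${rest##*.}"         # "0"
  printf '%s/%s/%s\n' "$bus" "$slot" "$fcn"
}

pci_to_field "03:00.0"            # prints 03/00/0
```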
Adding a Display Device
Select Display as Device Type on the Add Device screen to configure a new display device.
Stop the VM if it is running, then click Devices.
Click Add and select Display from the Device Type dropdown list.
Enter a fixed port number in Port, or enter zero (or leave the field empty) to allow TrueNAS to assign the port after restarting the VM.
Specify the display session settings:
a. Select the screen resolution to use for the display from the Resolution dropdown.
b. Select an IP address for the display device to use in Bind. The default is 0.0.0.0.
c. Enter a unique password for the display device to securely access the VM.
Select Web Interface to allow access to the VNC web interface.
Click Save.
Restart the VM.
Display devices have a 60-second inactivity timeout.
If the VM display session appears unresponsive, try refreshing the browser tab.
9.2 - Accessing NAS From a VM
Provides instructions on how to create a bridge interface for the VM and provides Linux and Windows examples.
If you want to access your TrueNAS SCALE directories from a VM, you have multiple options:
If you have only one physical interface, you must create a bridge interface for the VM.
If your system has more than one physical interface you can assign your VMs to a NIC other than the primary one your TrueNAS server uses.
This method makes communication more flexible but does not offer the potential speed of a bridge.
Prepare your system for interface changes by stopping and/or removing apps, VM NIC devices, and services that can cause conflicts:
Stop running apps before proceeding with network interface changes.
Power off any running virtual machines (VMs) before making interface IP changes. Remove active NIC devices.
If you encounter issues with testing network changes, you might need to stop any services, including Kubernetes and sharing services such as SMB, using the current IP address.
Creating a Bridge: Single Physical Interface
If your system only has a single physical interface, complete these steps to create a network bridge.
Go to Virtualization, find the VM you want to use to access TrueNAS storage and toggle it off.
Go to Network > Interfaces and find the active interface you used as the VM parent interface.
Note the interface IP Address and subnet mask.
Click the interface to open the Edit Interface screen.
If enabled, clear the DHCP checkbox.
Note the IP address and mask under Aliases.
Click the X next to the listed alias to remove the IP address and mask.
The Aliases field now reads No items have been added yet.
Click Save.
Click Add to open the Add Interface screen. Select Bridge from the Type dropdown list, enter a name for the bridge (for example, br0), and select the physical interface in Bridge Members. Click Add next to Aliases and enter the IP address and mask you noted earlier, then click Save. Click Test Changes and confirm the new network configuration.
Go to Virtualization, expand the VM you want to use to access TrueNAS storage and click Devices.
Click more_vert in the NIC row and select Edit.
Select the new bridge interface from the NIC to Attach dropdown list, then click Save.
You can now access your TrueNAS storage from the VM.
You might have to set up shares or users with home directories to access certain files.
Assigning a Secondary NIC: Multiple Physical Interfaces
If you have more than one NIC on your system, you can assign VM traffic to a secondary NIC.
Configure the secondary interface as described in Managing Interfaces before attaching it to a VM.
If you are creating a new VM, use the Attach NIC dropdown menu under Network Interface to select the secondary NIC.
To edit the NIC attached to an existing VM:
Go to Virtualization, expand the VM you want to use to access TrueNAS storage and click Devices.
Click more_vert in the NIC row and select Edit.
Select the secondary interface from the NIC to Attach dropdown list, then click Save.
10 - Apps
Expanding TrueNAS SCALE functionality with additional applications.
TrueNAS applications allow for quick and easy integration of third-party software with TrueNAS SCALE.
Applications are available from official, Enterprise, and community-maintained trains.
TrueNAS Apps Support Timeline for 24.04 and 24.10
Summary:
Applications added to the TrueNAS Apps catalog before December 24, 2024, require updates to enable host IP port binding.
These updates roll out on June 1, 2025, and require TrueNAS 25.04 (or later).
Due to breaking changes involved in enabling host IP port binding, June 1, 2025 is the deadline for automatic apps migration on upgrade.
Migrate from 24.04 to 24.10 before June 1, 2025, to ensure automatic app migration.
Upgrade to 24.10.2.2 or 25.04 before June 1 to continue receiving regular app updates.
Previously installed apps on TrueNAS 24.10.2.1 (or earlier) do not receive updates after this point.
Normal application update functionality resumes after TrueNAS updates to 24.10.2.2 or 25.04.
Timeframe | App Migration 24.04 → 24.10 | App Migration 24.10 → 25.04
Before June 1, 2025 | ✅ Supported | ✅ Supported
After June 1, 2025 | ❌ Not Supported | ✅ Supported (24.10.2.2 or later)
Read More
Application host IP port binding is being developed for all applications in the TrueNAS Apps catalog, starting with TrueNAS 25.04.
This feature lets each app bind its WebUI port to any IP address selected from the available aliases assigned to an interface.
It includes port bind mode options to publish the port for external access or expose it for inter-container communication.
A small but growing list of applications currently support this functionality in TrueNAS 24.10 or later.
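Conceptually, publishing a port for external access versus exposing it for inter-container communication maps onto standard Docker Compose fields. This is a generic, hypothetical sketch (the image name, IP address, and port are examples, not a TrueNAS app definition):

```yaml
services:
  webapp:
    image: example/webapp:latest     # hypothetical image
    ports:
      - "192.168.1.50:8080:8080"     # publish: bind the WebUI port to one host IP alias
    # expose makes the port reachable only by other containers on the same network:
    # expose:
    #   - "8080"
```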
Applications that currently support host IP port binding
The following applications currently support host IP port binding.
calibre-web
esphome
handbrake-web
homarr
invoiceninja
it-tools
jelu
lyrion-music-server
minecraft-bedrock
romm
satisfactory-server
steam-headless
terraria
tianji
umami
urbackup
zigbee2mqtt
emby
All future applications, as well as those added to the TrueNAS Apps catalog after December 24, 2024, support this feature.
However, applications that were in the TrueNAS Apps catalog before implementation of this feature require OS-level changes to enable support.
Catalog updates to provide host IP port functionality to these applications are scheduled for June 1, 2025.
The updated versions of these applications do not function on TrueNAS versions earlier than 25.04.
Applications that do not currently support host IP port binding
actual-budget
adguard-home
audiobookshelf
autobrr
bazarr
briefkasten
calibre
castopod
chia
clamav
dashy
ddns-updater
deluge
distribution
dockge
drawio
eclipse-mosquitto
filebrowser
filestash
firefly-iii
flame
flaresolverr
freshrss
frigate
fscrawler
gaseous-server
gitea
grafana
handbrake
homepage
homer
immich
invidious
ipfs
jellyfin
jellyseerr
jenkins
joplin
kapowarr
kavita
komga
lidarr
linkding
listmonk
logseq
mealie
metube
minecraft
mineos
mumble
n8n
navidrome
netbootxyz
nginx-proxy-manager
node-red
odoo
ollama
omada-controller
open-webui
organizr
overseerr
palworld
paperless-ngx
passbolt
penpot
pgadmin
pigallery2
piwigo
planka
portainer
postgres
prowlarr
qbittorrent
radarr
readarr
redis
roundcube
rsyncd
rust-desk
sabnzbd
scrutiny
searxng
sftpgo
sonarr
tautulli
tdarr
terraria
tftpd-hpa
tiny-media-manager
transmission
twofactor-auth
unifi-controller
uptime-kuma
vaultwarden
vikunja
webdav
whoogle
wordpress
asigra-ds-system
syncthing
collabora
diskoverdata
elastic-search
emby
home-assistant
ix-app (Custom App)
minio
netdata
nextcloud
photoprism
pihole
plex
prometheus
storj
wg-easy
As a result, June 1 is also the cutoff date for two related app behaviors:
24.04 to 24.10 App Migrations:
TrueNAS 24.10 introduced a new Docker-based TrueNAS Apps backend and automated migration for Kubernetes-based apps on upgrade.
Due to breaking changes involved in enabling host IP port binding, June 1, 2025 is the deadline for automatic apps migration on upgrade.
Any users still running TrueNAS Apps on 24.04 after June 1 must re-deploy those apps after upgrading to 24.10 or later.
App Updates in 24.10
Update to TrueNAS 24.10.2.2 before June 1, 2025 to continue receiving app updates without interruption, including the new IP port binding functionality.
Previously installed apps on TrueNAS 24.10.2.1 (or earlier) do not receive updates after this point.
Normal application update functionality resumes after TrueNAS updates to 24.10.2.2 or 25.04.
App Updates in 25.04
Users of TrueNAS 25.04 continue receiving app updates without interruption, including the new IP port binding functionality.
Applications installed on TrueNAS 25.04 before June 1, 2025 automatically update to enable the new functionality.
No manual management is required.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
Installed Applications Screen
The first time you go to Apps, the Installed applications screen displays an Apps Service Not Configured status on the screen header.
Click Check Available Apps or Discover Apps to open the Discover screen to see application widgets available in the TRUENAS catalog.
After installing an application, the Installed screen populates the Applications area with a table listing installed applications.
Select an application to view its information widgets, with options to edit the application settings, open the container pod shell or logs, and access the Web Portal for the application, if applicable.
Application widgets vary by app, but all include the Application Info and Workloads widgets. Some include the History and Notes widgets.
Choosing the Apps Pool
You must choose the pool apps use before you can add applications.
The first time you go to the Applications screen, click Settings > Choose Pool to choose a storage pool for Apps.
We recommend keeping the application use case in mind when choosing a pool.
Select a pool with enough space for all the applications you intend to use.
For stability, we also recommend using SSD storage for the applications pool.
TrueNAS creates an ix-applications dataset on the chosen pool and uses it to store all container-related data.
The dataset is for internal use only.
Set up a new dataset before installing your applications if you want to store your application data in a location separate from other storage on your system.
For example, create the datasets for the Nextcloud application, and, if installing Plex, create the dataset(s) for Plex data storage needs.
Special consideration should be given when TrueNAS is installed in a VM, as VMs are not configured to use HTTPS. Enabling HTTPS redirect can interfere with the accessibility of some apps. To determine if HTTPS redirect is active, go to System Settings > General > GUI > Settings and locate the Web Interface HTTP -> HTTPS Redirect checkbox. To disable HTTPS redirects, clear this option and click Save, then clear the browser cache before attempting to connect to the app again.
After an apps storage pool is configured, the status changes to Apps Service Running.
Unsetting the Apps Pool
To select a different pool for apps to use, click Settings > Unset Pool. This turns off the Apps service until you choose another pool for apps to use.
Changing Official Application Networking
Official applications use the default system-level Kubernetes node IP settings.
You can change the Kubernetes node IP to assign an external interface to your apps, separate from the web UI interface, in Apps > Settings > Advanced Settings.
To download a specific image, click the button and enter a valid path and tag to the image.
Enter the path using the format registry/repository/image to identify the specific image.
The default latest tag downloads the most recent image version.
When downloading a private image, enter user account credentials that allow access to the private registry.
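The pieces of an image reference can be seen in this small bash sketch; it splits a registry/repository/image:tag string and applies the default latest tag when none is given (the reference strings are hypothetical examples):

```shell
#!/usr/bin/env bash
# Split an image reference into its path and tag, defaulting to "latest",
# mirroring the registry/repository/image format described above.
# Note: this naive split does not handle registry hosts with ports
# (e.g. "registry:5000/image").
parse_image_ref() {
  local ref="$1" path tag
  case "$ref" in
    *:*) path="${ref%:*}"; tag="${ref##*:}" ;;  # explicit tag given
    *)   path="$ref";      tag="latest"     ;;  # no tag: default to latest
  esac
  printf '%s %s\n' "$path" "$tag"
}

parse_image_ref "quay.io/myorg/myimage:1.2"   # prints: quay.io/myorg/myimage 1.2
parse_image_ref "storjlabs/storagenode"       # prints: storjlabs/storagenode latest
```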
Upgrading Apps
Apps display a yellow circle with an exclamation point that indicates an upgrade is available, and the Installed application screen banner displays an Update or Update All button when upgrades are available.
To upgrade an app to the latest version, click Update on the Application Info widget. To upgrade multiple apps, click Update All on the Installed applications banner.
Both buttons only display if TrueNAS SCALE detects an available update for installed applications.
Update opens an upgrade window that includes two selectable options, Images (to be updated) and Changelog.
Click on the down arrow to see the options available for each.
Click Upgrade to begin the process. A counter dialog opens showing the upgrade progress.
When complete, the update badge and buttons disappear and the application Update state on the Installed screen changes from Update Available to Up to date.
Deleting Apps
To delete an application, click Stop on the application row.
After the app status changes to stopped, click Delete on the Application Info widget for the selected application to open the Delete dialog.
The Discover screen displays New & Updated Apps application widgets for the official TRUENAS catalog Chart, Community, and Enterprise trains.
Non-Enterprise systems show the Chart train apps by default.
The Chart catalog train has official applications that are pre-configured and only require a name during deployment.
Enterprise applications display automatically for Enterprise-licensed systems, but community users can add these apps using the Manage Catalogs screen.
App trains display based on the Trains settings on the Edit Catalog screen.
The Custom App button opens a wizard where you can install unofficial apps or an app not included in a catalog.
Browse the widgets or use the search field to find an available application.
Click on an application widget to go to the application information screen.
Refreshing Charts
You can refresh the charts catalog by clicking Refresh Charts on the Discover screen.
You can also refresh all catalogs from the Catalogs screen. Click Manage Catalogs, then click Refresh All.
Refresh the catalog after adding or editing the catalogs on your system.
Using the Discover Screen Filters
To filter the app widgets shown, click the down arrow to the right of Filters.
You can filter by catalog, app category, name, catalog name, and date last updated.
Click on the option then begin typing the name of the app into the search field to narrow the widgets to fit the filter criteria.
Click in Categories and select a category to filter the app widgets.
Click in the field again to add another category from the dropdown list and filter by multiple categories.
Installing Official Applications
From the application information screen, click Install to open the installation wizard for the application.
After installing an application, the Installed applications screen shows the application in the Deploying state.
It changes to Running when the application is ready to use.
The installation wizard configuration sections vary by application, with some including more configuration areas than others.
Click Install to open the wizard and review the settings ahead of time to check for required values.
To exit the screen without saving, click Discover in the breadcrumbs at the top of the installation wizard, then return when you are ready to configure the app settings.
All applications include these basic setting sections:
Application Name Settings
Application Name shows the default name for the application.
If deploying more than one instance of the application, you must change the default name. This section also includes the version number for the application.
Do not change the version number for official apps or those included in a SCALE catalog.
When a new version becomes available, the Installed application screen banner and application row display an update alert, and the Application Info widget displays an update button. Updating the app changes the version shown on the edit wizard for the application.
Application Configuration Settings
Application Configuration shows settings the app requires to deploy.
This section can be named anything. For example, the MinIO app uses MinIO Configuration.
Typical settings include user credentials, environment variables, additional argument settings, name of the node, or even sizing parameters.
If not using the default user and group provided, add the new user (and group) to manage the application before using the installation wizard.
Network Configuration Settings
Network Configuration shows network settings the app needs to communicate with SCALE and the Internet.
Settings include the default port assignment, host name, IP addresses, and other network settings.
If changing the port number to something other than the default setting, refer to Default Ports for a list of used and available port numbers.
Some network configuration settings include the option to add a certificate. Create the certificate authority and certificate before using the installation wizard if using a certificate is required for the application.
Storage Configuration Settings
Storage Configuration shows options to configure storage for the application.
Storage options include using the default ixVolume setting that adds a storage volume under the ix-applications dataset, host path where you select existing dataset(s) to use, or in some cases the SMB share option where you configure a share for the application to use.
The Add button allows you to configure additional storage volumes for the application to use in addition to the main storage volume (dataset).
If the application requires specific datasets, configure these before using the installation wizard.
If setting host path storage, select Enable ACL to configure ACL entries for the selected dataset.
Click the arrow to the left of the folder icon to expand that folder and show any child datasets and directories.
A solid folder icon shows for datasets and an outlined folder for directories.
A selected dataset or directory folder and name shows in blue.
Select Add next to ACL Entries to add a set of ID Type, ID, and Access fields to configure an entry.
Click Add again for each additional ACL entry.
Select Force Flag under ACL Options to apply the ACL even if the path has existing data.
Resources Configuration Settings
Resources Configuration shows CPU and memory settings for the container pod.
This section can also be named Resource Limits. In most cases you can accept the default settings, or you can change these settings to limit the system resources available to the application.
After installing an app, you can modify most settings by selecting the app on the Installed applications screen and then clicking the Edit button on the Application Info widget for that app.
Refer to individual tutorials in the Community or Enterprise sections of the Documentation Hub for more details on application settings.
Installation and editing wizards include tooltips to help users configure application settings.
Allocating GPU
Users with compatible hardware can allocate one or more GPU devices to an application for use in hardware acceleration.
This is an advanced process that could require significant troubleshooting depending on installed GPU device(s) and application-specific criteria.
GPU devices can be available for the host operating system (OS) and applications or can be isolated for use in a Virtual Machine (VM).
A single GPU cannot be shared between the OS/applications and a VM.
Allocate GPU from the Resources Configuration section of the application installation wizard screen or the Edit screen for a deployed application.
Click the GPU Resource allocation row for the type of GPU (AMD, Intel, or Nvidia) and select the number of GPU devices the application is allowed access to.
It is not possible at this time to specify which available GPU device is allocated to the application and assigned devices can change on reboot.
If installed GPU devices do not populate as available for allocation in GPU Configuration:
Ensure the GPU devices you want to allocate are not configured as isolated.
Go to System Settings > Advanced and locate the Isolated GPU Device(s) widget.
If necessary, click Configure, deselect the device(s) you want to allocate, and click Save.
Ensure the GPU devices you want to allocate are not assigned to any existing VMs.
Go to Virtualization.
Select an existing VM and click on that row to expand it, then click Edit.
Scroll down to GPU and review configured devices.
If necessary, deselect the device you want to allocate to applications.
Repeat for any additional VMs on the system.
If the GPU was previously isolated and/or assigned to a VM, a reboot could be required to free it up for app allocation.
Restart the system then return to the Resources Configuration section of the application to see if expected devices are available.
Installing Custom Applications
To deploy a custom application, go to Discover and click Custom App to open the Install Custom App screen.
See Using Install Custom App for more information.
Changing Custom Application Networking
Custom applications use the system-level Kubernetes Node IP settings by default.
You can assign an external interface to custom apps using one of the Networking section settings found on the Install Custom App screen.
Unless you need to run an application separately from the Web UI, we recommend using the default Kubernetes Node IP (0.0.0.0) to ensure apps function correctly.
Section Contents
Using SCALE Catalogs: Provides basic information on adding or managing application catalogs in TrueNAS SCALE.
Using Install Custom App: Provides information on using Install Custom App to configure custom or third-party applications in TrueNAS SCALE.
Securing Apps: Securing TrueNAS applications with VPNs and Zero Trust.
Community Apps: Notes about community applications and individual tutorials for applications.
Enterprise Applications: Tutorials for using TrueNAS SCALE applications in an Enterprise-licensed deployment.
Sandboxes (Jail-like Containers): Provides advanced users information on deploying custom FreeBSD jail-like containers in SCALE.
10.1 - Using SCALE Catalogs
Provides basic information on adding or managing application catalogs in TrueNAS SCALE.
TrueNAS SCALE has a pre-built official catalog of over 50 available iXsystems-approved applications.
Users can configure custom apps catalogs if they choose, but iXsystems does not directly support non-official apps in a custom catalog.
TrueNAS uses outbound ports 80/443 to retrieve the TRUENAS catalog.
Managing Catalogs
Users can manage the catalog from the Catalogs screen.
Click Manage Catalogs at the top right side of the Discover screen to open the Catalogs screen.
Users can edit, refresh, delete, and view the catalog summary by clicking on a catalog to expand and show the options.
Edit opens the Edit Catalog screen, where users can change the name SCALE uses to look up a catalog or change the trains from which the UI retrieves available applications for the catalog.
Refresh pulls the catalog from its repository and refreshes it by applying any updates.
Delete allows users to remove a catalog from the system. Users cannot delete the default TRUENAS catalog.
Summary lists all apps in the catalog and sorts them by train, app, and version.
Users can filter the list by Train and Status (All, Healthy, or Unhealthy).
Adding Catalogs
For best stability during upgrades to future major versions of TrueNAS SCALE, use applications provided by the default TRUENAS catalog.
Third-party app catalogs available for TrueNAS are provided and maintained by individuals or organizations outside of iXsystems.
iXsystems does not provide support for third-party applications, nor can we guarantee app updates and consistent functionality over time.
Users who wish to deploy third-party catalogs should be prepared to self-support installed applications or rely on support services from the catalog provider.
To add a catalog, click Add Catalog at the top right of the Catalogs screen.
Enter a name in Catalog Name, for example, type mycatalog.
We do not recommend enabling Force Create, since it overrides the safety mechanism that prevents adding a catalog to the system when some trains are unhealthy.
10.2 - Using Install Custom App
Provides information on using Install Custom App to configure custom or third-party applications in TrueNAS SCALE.
SCALE includes the ability to run third-party apps in containers (pods) using Kubernetes settings.
What is Kubernetes?
Kubernetes (K8s) is an open-source system for automating deployment, scaling, and managing containerized applications.
Always read through the documentation page for the application container you are considering installing so that you know all of the settings that you need to configure.
To set up a new container image, first, determine if you want the container to use additional TrueNAS datasets.
If yes, create a dataset for host volume paths before you click Custom App on the Discover application screen.
Custom Docker Applications
Custom Docker applications typically follow open container specifications and deploy in TrueNAS following the custom application deployment process described below.
Carefully review documentation for the app you plan to install before attempting to install a custom app.
Take note of any required environment variables, optional variables you want to define, start-up commands or arguments, networking requirements, such as port numbers, and required storage configuration.
If your application requires specific directory paths, datasets, or other storage arrangements, configure these before you start the Install Custom App wizard.
You cannot save settings and exit the configuration wizard to create data storage or directories in the middle of the process.
If you are unsure about any configuration settings, review the Install Custom App Screen UI reference article before creating a new container image.
To create directories in a dataset on SCALE, before you begin installing the container, open the TrueNAS SCALE CLI and enter storage filesystem mkdir path="/PATH/TO/DIRECTORY".
Adding Custom Applications
When you are ready to create a container, go to Apps, click Discover Apps, then click Custom App.
Enter the Docker Hub repository for the application you want to install in Image Repository using the format maintainer/image, for example storjlabs/storagenode, or image, such as debian, for Docker Official Images.
If the application requires it, enter the correct value in Image Tag and select the Image Pull Policy to use.
If the application requires it, enter the executables you want or need to run after starting the container in Container Entrypoint.
Define any commands and arguments to use for the image.
These can override any existing commands stored in the image.
Click Add for Container CMD to add a command.
Click Add for Container Args to add a container argument.
Enter the Container Environment Variables to define additional environment variables for the container.
Not all applications use environment variables.
Check the application documentation to verify the variables that particular application requires.
Enter the networking settings.
To use a unique IP address for the container, set up an external interface.
Users can create additional network interfaces for the container if needed.
Users can also assign static IP addresses and routes to a new interface.
a. Click Add to display the Host Interface and IPAM Type fields required when configuring network settings.
Select the interface to use.
Select Use static IP to display the Static IP Addresses and Static Routes fields, or select Use DHCP.
b. Scroll down to select the DNS Policy and enter any DNS configuration settings required for your application.
By default, containers use the DNS settings from the host system.
You can change the DNS policy and define separate nameservers and search domains.
See the Kubernetes DNS services documentation for more details.
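The DNS policy settings in this step map onto standard Kubernetes pod spec fields. A minimal, hypothetical fragment (the nameserver and search domain are examples only):

```yaml
# Hypothetical pod-spec fragment showing the kind of DNS settings
# the wizard's DNS Policy section corresponds to:
spec:
  dnsPolicy: "None"          # ignore the host DNS settings entirely
  dnsConfig:
    nameservers:
      - 192.168.1.1          # custom nameserver (example)
    searches:
      - example.internal     # custom search domain (example)
```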
Click Add for each port you need to enter.
Enter the required Container Port and Node Port settings, and select the Protocol for these ports.
The node port number must be over 9000.
Ensure no other containers or system services are using the same port number.
Repeat for all ports.
Add the Storage settings.
Review the image documentation for required storage volumes.
See Setting up App Storage below for more information.
Click Add for each host path volume.
Enter or browse to select the Host Path for the dataset on TrueNAS.
Enter the Mount Path to mount the host path inside the container.
Add any memory-backed or other volumes you need or want to use.
You can add more volumes to the container later, if needed.
Enter any additional settings required for your application, such as workload details or adding container settings for your application.
Select the Scaling/Upgrade Policy to use.
The default is Kill existing pods before creating new ones.
Use Resource Reservation to allocate GPU resources if available and required for the application.
Set any Resource Limits you want to impose on this application.
Select Enable Pod resource limits to display the CPU Limit and Memory Limit fields.
Enter or select any Portal Configuration settings to use.
Select Enable WebUI Portal to display UI portal settings.
Click Install to deploy the container.
If you correctly configured the app, the widget displays on the Installed Applications screen.
When complete, the container becomes active. If the container does not automatically start, click Start on the widget.
Click the app card to reveal details.
Setting up App Storage
Defining Host Path Volumes
You can mount SCALE storage locations inside the container.
To mount SCALE storage, define the path to the system storage and the container internal path for the system storage location to appear.
You can also mount the storage as read-only to prevent using the container to change any stored data.
For more details, see the Kubernetes hostPath documentation.
Defining Other Volumes
Users can create additional Persistent Volumes (PVs) for storage within the container.
PVs consume space from the pool chosen for application management.
You need to name each new dataset and define a path where that dataset appears inside the container.
To view created container datasets, go to Datasets and expand the dataset tree for the pool you use for applications.
Setting Up Persistent Volume Access
Users developing applications should be mindful that if an application uses Persistent Volume Claims (PVC), those datasets are not mounted on the host and therefore are not accessible within a file browser. Upstream zfs-localpv uses this behavior to manage PVC(s).
To consume or have file browser access to data that is present on the host, set up your custom application to use host path volumes.
Alternatively, you can use the network to copy directories and files to and from the pod using k3s kubectl commands.
To copy from a pod in a specific container:
k3s kubectl cp <file-spec-src> <file-spec-dest> -c <specific-container>
To copy a local file to the remote pod:
k3s kubectl cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar
To copy a remote pod file locally:
k3s kubectl cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar
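As a sketch of the pattern above, the placeholders can be filled with concrete values. The namespace and pod name here are hypothetical examples, not real values:

```shell
# Compose the copy command with example (hypothetical) names, then
# review it before running it on the TrueNAS host.
NAMESPACE="ix-myapp"    # example namespace
POD="myapp-0"           # example pod name
CMD="k3s kubectl cp /tmp/foo $NAMESPACE/$POD:/tmp/bar"
echo "$CMD"
# On the TrueNAS host, run the composed command shown by echo.
```

Use `k3s kubectl get pods -A` on the host to list the actual namespace and pod names for your deployment.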
10.3 - Securing Apps
Securing TrueNAS applications with VPNs and Zero Trust.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
Enhancing app security is a multifaceted challenge and there are various effective approaches. We invite community members to share insights on their methods by contributing to the documentation.
Securing Apps with VPNs and Zero Trust
TrueNAS SCALE offers various applications, either directly provided or via the community.
While applications can greatly expand TrueNAS functionality, making them accessible from outside the local network can create security risks that must be mitigated.
Regardless of the VPN or reverse proxy you use, follow best practices to secure your applications.
Update the applications regularly to fix security issues.
Use strong passwords and 2FA, preferably TOTP, or passkeys for your accounts.
Don’t reuse passwords, especially not for admin accounts.
Don’t use your admin account for daily tasks.
Create a separate admin account and password for every application you install.
The tutorials in this section aim to provide a general overview of different options to secure apps by installing an additional application client like Cloudflared or WireGuard to proxy traffic between the user and the application.
See the available guides below.
Section Contents
Cloudflare Tunnel: Securing the Nextcloud application using a Cloudflare Tunnel.
10.3.1 - Cloudflare Tunnel
Securing the Nextcloud application using a Cloudflare Tunnel.
This guide shows how to create a Cloudflare Tunnel and configure the Nextcloud and Cloudflared applications in TrueNAS SCALE.
The goal is to allow secure access from anywhere.
Exposing applications to the internet can create security risks.
Always follow best practices to secure your applications.
Review the Nextcloud documentation to get a better understanding of the security implications before proceeding.
Setting Up Cloudflare
Cloudflare Tunnel is a system that proxies traffic between the user and the application over the Cloudflare network.
It uses a Cloudflared client that is installed on the TrueNAS SCALE system.
This allows a secure and encrypted connection without the need of exposing ports or the private IP of the TrueNAS system to the internet.
This video from Lawrence Systems provides a detailed overview of setting up Cloudflare tunnels for applications.
It assumes the applications run as Docker containers, but the same approach can secure apps running on TrueNAS SCALE in Kubernetes.
In the Cloudflare One dashboard:
Go to Networks and select Tunnels.
Click Create Tunnel, choose type Cloudflared and click Next.
Choose a Tunnel Name and click Save tunnel.
Copy the tunnel token from the Install and run a connector screen.
This is needed to configure the Cloudflared app in TrueNAS SCALE.
The operating system selection does not matter as the same token is used for all options.
For example, the command for a docker container is:
docker run cloudflare/cloudflared:latest tunnel --no-autoupdate run --token eyJhIjoiNjVlZGZlM2IxMmY0ZjEwNjYzMDg4ZTVmNjc4ZDk2ZTAiLCJ0IjoiNWYxMjMyMWEtZjE2YS00MWQwLWFhN2ItNjJiZmYxNmI4OGIwIiwicyI6IlpqQmpaRE13WXpBdFkyRmpPUzAwWVRCbUxUZ3hZVGd0TlRWbE9UQmpaakEyTlRFMCJ9
Copy the string after --token, then click Next.
Add a public hostname for accessing Nextcloud, for example: nextcloud.example.com.
Set service Type to HTTPS.
Enter the local TrueNAS IP with the Nextcloud container port, for example 192.168.1.1:9001.
After creating the Cloudflare tunnel, go to Apps in the TrueNAS SCALE UI and click Discover Apps.
Search or browse to select the Cloudflared app from the community train and click Install.
Accept the default Application Name and Version.
Paste the tunnel token you copied from the Cloudflare dashboard into the Tunnel Token field.
The first application deployment may take a while and starts and stops multiple times.
This is normal behavior.
The Nextcloud documentation provides information on running Nextcloud behind a reverse proxy.
Depending on the reverse proxy and its configuration, these settings can vary, for example, if you use a path like example.com/nextcloud instead of a subdomain.
To access your application via a subdomain (as shown in this guide), set two environment variables in the Nextcloud application: overwrite.cli.url and overwritehost.
Enter the two environment variables in the Name fields as OVERWRITECLIURL and OVERWRITEHOST.
In each Value field, enter the address for the Cloudflare Tunnel configured above, for example nextcloud.example.com.
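As a sketch, the two name and value pairs entered in the fields would look like this. The hostname is the example used throughout this guide, not a real value:

```shell
# Name / Value pairs as entered in the Nextcloud app environment
# variable fields; nextcloud.example.com is a placeholder hostname.
OVERWRITECLIURL="nextcloud.example.com"
OVERWRITEHOST="nextcloud.example.com"
```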
Testing the Cloudflare Tunnel
With the Cloudflare connector and Nextcloud installed and configured, go to Networks and select Tunnels in your Cloudflare dashboard to verify the tunnel status.
Note: there are additional options for policy configuration, but these are beyond the scope of this tutorial.
10.4 - Community Apps
Notes about community applications and individual tutorials for applications.
The TrueNAS community creates and maintains numerous applications intended to expand system functionality far beyond what is typically expected from a NAS.
The TrueNAS catalog is loaded by default and is used to populate the Discover apps screen.
To view the catalog settings, select Manage Catalogs at the top of the Discover apps screen.
Applications are provided “as-is” and can introduce system stability or security issues when installed.
Some applications deploy as the root user for initial configuration before operating as a non-root user.
Make sure the application is required for your specific use requirements and does not violate your security policies before installation.
The remaining tutorials in this section are for specific applications that are commonly used or replace some functionality that was previously built-in with TrueNAS.
Section Contents
Syncthing Charts App: Provides general information, guidelines, installation instructions, and use scenarios for the community version of the Syncthing app.
Chia: Provides basic installation instructions for the Chia application using both the TrueNAS web UI and CLI commands.
Collabora: Provides basic configuration instructions for adding the Collabora app using the TrueNAS webUI.
DDNS-Updater: Provides basic configuration instructions for the DDNS-Updater application.
Immich: Provides installation instructions for the Immich application.
Jellyfin: Provides installation instructions for the Jellyfin application.
MinIO: Tutorials for using the MinIO community and Enterprise applications available for TrueNAS SCALE.
Netdata: Provides information on installing and configuring the Netdata app on TrueNAS SCALE.
Nextcloud: Provides instructions to configure TrueNAS SCALE and install Nextcloud to support hosting a wider variety of media file previews, such as HEIC, MP4, and MOV files.
Pi-Hole: Provides information on installing Pi-hole to support network-level advertisement and internet tracker blocking.
Prometheus: Provides installation instructions for the Prometheus application.
Rsync Daemon: Installation and basic usage instructions for the Rsync Daemon application.
Storj: Provides information on the steps to set up a Storj node on your TrueNAS SCALE system.
TFTP Server: Provides instructions for installing the TFTP Server application.
WebDAV: Instructions for installing and configuring the WebDAV app and sharing feature.
WG Easy: Provides installation instructions for the WG Easy application.
10.4.1 - Syncthing Charts App
Provides general information, guidelines, installation instructions, and use scenarios for the community version of the Syncthing app.
Adding the Enterprise App
To add the enterprise train Syncthing application to the list of available applications:
Go to Apps and click on Discover Apps.
Click on Manage Catalogs at the top of the Discover screen to open the Catalog screen.
Click on the TRUENAS catalog to expand it, then click Edit to open the Edit Catalog screen.
Click in the Preferred Trains field, then select enterprise to add it to the list of trains.
Syncthing is a file synchronization application that provides a simple and secure environment for file sharing between different devices and locations.
Use it to synchronize files between different departments, teams, or remote workers.
Syncthing is tested and validated to work in harmony with TrueNAS platforms and underlying technologies, such as ZFS, to offer a turnkey means of keeping data synchronized across many systems.
It can seamlessly integrate with TrueNAS.
Syncthing does not use or need a central server or cloud storage.
All data is encrypted and synchronized directly between devices to ensure files are protected from unauthorized access.
Syncthing is easy to use and configure.
You can install it on a wide range of operating systems, including Windows, macOS, Linux, FreeBSD, iOS, and Android.
The Syncthing web UI provides users with easy management and configuration of the application software.
How does Syncthing work?
Syncthing does not have a central directory or cache to manage.
It segments files into pieces called blocks.
These blocks transfer data from one device to another.
Multiple devices can share the synchronization load in a similar way to the torrent protocol.
With more devices and smaller blocks, devices receive data faster because all devices fetch blocks in parallel.
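The effect of block segmentation can be illustrated with simple arithmetic. The 128 KiB block size below is an assumption for illustration only, since Syncthing varies block size with file size:

```shell
# Number of blocks for a 1 MiB file at an assumed 128 KiB block size;
# each block can be fetched from a different device in parallel.
BLOCK=131072                # 128 KiB in bytes
FILE=1048576                # 1 MiB in bytes
echo $(( (FILE + BLOCK - 1) / BLOCK ))   # prints 8
```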
Syncthing renames files and updates metadata more efficiently because renaming does not cause a re-transmission of that file.
Temporary files store partial data downloaded from devices.
Temporary files are removed when a file transfer completes or after the configured amount of time elapses.
This article provides information on installing and using the TrueNAS Syncthing app.
SCALE has two versions of the Syncthing application, the community version in the charts train and a smaller version tested and polished for a safe and supportable experience for enterprise customers in the enterprise train.
Community members can install either the enterprise or community version.
Before Installing Syncthing
You can allow the app to create a storage volume(s) or use existing datasets created in SCALE.
The TrueNAS Syncthing app requires a main configuration storage volume for application information.
You can also mount existing datasets for storage volume inside the container pod.
If you want to use existing datasets for the main storage volume, create any datasets before beginning the app installation process (for example, syncthing for the configuration storage volume).
If also mounting a storage volume inside the container, create a second dataset named data1. If mounting multiple storage volumes, create a dataset for each volume (for example, data2, data3, etc.).
You can have multiple Syncthing app deployments (two or more Charts, two or more Enterprise, or Charts and Enterprise trains, etc.).
Each Syncthing app deployment requires a unique name that can include numbers and dashes or underscores (for example, syncthing2, syncthing-test, syncthing_1, etc.).
Use a consistent file-naming convention to avoid conflict situations where data does not or cannot synchronize because of file name conflicts.
Path and file names in the Syncthing app are case sensitive.
For example, a file named MyData.txt is not the same as mydata.txt file in Syncthing.
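A quick shell comparison illustrates the case-sensitive matching described above:

```shell
# Case-sensitive comparison: Syncthing treats these as two distinct files.
a="MyData.txt"
b="mydata.txt"
if [ "$a" = "$b" ]; then echo "same file"; else echo "different files"; fi
# prints "different files"
```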
If not already assigned, set a pool for applications to use.
Either use the default user and group IDs or create the new user with Create New Primary Group selected.
Make note of the UID/GID for the new user.
Installing the Syncthing Application
Go to Apps > Discover Apps and locate the Syncthing charts app widget.
Click Install to open the Install Syncthing screen.
Application configuration settings are presented in several sections, each explained below.
To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Accept the default owner user and group ID settings. You can customize your Syncthing charts deployment by adding environment variables but these are not required.
Add the storage volume(s).
Either allow the Syncthing app to create the configuration storage volume or use an existing dataset created for this app.
To use an existing dataset, select Enable Custom Host Path for Syncthing Configuration Volume, then browse to and select the dataset to populate the field.
See Storage Settings for more details on adding existing datasets.
Accept the default port numbers in Networking.
See Network Settings below for more information on network settings.
Before changing the default port number, see Default Ports for a list of assigned port numbers.
When selected, Host Network binds to the default host settings programmed for Syncthing. We recommend leaving this disabled.
Syncthing does not require advanced DNS options. If you want to add DNS options, see Advanced DNS Settings below.
Accept the default resource limit values for CPU and memory or select Enable Pod resource limits to show the CPU and memory limit fields, then enter the new values you want to use for Syncthing. See Resource Configuration Settings below for more information.
Click Install.
The system opens the Installed Applications screen with the Syncthing app in the Deploying state.
After installation completes the status changes to Running.
Secure Syncthing by setting up a username and password.
Understanding Syncthing Settings
The following sections provide more detailed explanations of the settings found in each section of the Install Syncthing screen.
Application Name Settings
Accept the default value or enter a name in the Application Name field.
In most cases, use the default name, but adding a second deployment of the application requires a different name.
Accept the default version number in Version.
When a new version becomes available, the application has an update badge.
The Installed Applications screen shows the option to update applications.
Configuration Settings
Accept the defaults in the Configuration settings or enter new user and group IDs. The default value for Owner User ID and Owner Group ID is 568.
For a list of Syncthing environment variables, go to the Syncthing documentation website and search for environment variables.
You can add environment variables to the Syncthing app configuration after deploying the app. Click Edit on the Syncthing Application Info widget found on the Installed Application screen to open the Edit Syncthing screen.
Storage Settings
You can allow the Syncthing app to create the configuration storage volume or you can create datasets to use for the configuration storage volume and to use for storage within the container pod.
To use existing datasets, select Enable Custom Host Path for Syncthing Configuration Volume to show the Host Path for Syncthing Configuration Volume and Extra Host Path Volumes fields.
Enter the host path in Host Path for Syncthing Configuration Volume, or browse to and select an existing dataset created for the configuration storage volume.
Enter the data1 dataset in Mount Path in Pod, then enter or browse to the dataset location in Host Path.
If you added extra datasets to mount inside the container pod, click Add for each extra host path you want to mount inside the container pod.
Enter or browse to the dataset created for the extra storage volumes to add inside the pod.
Networking Settings
Accept the default port numbers in Web Port for Syncthing, TCP Port for Syncthing and UDP Port for Syncthing.
The SCALE Syncthing chart app listens on port 20910.
The default TCP port is 20978 and the default UDP port is 20979.
Before changing default ports and assigning new port numbers, refer to the TrueNAS default port list for a list of assigned port numbers.
To change the port numbers, enter a number within the range 9000-65535.
We recommend leaving Host Network cleared, as selecting it binds the app to the host network.
Advanced DNS Settings
Syncthing does not require configuring advanced DNS options.
Accept the default settings or click Add to the right of DNS Options to enter the option name and value.
Resource Configuration Settings
Accept the default values in Resources Configuration or select Enable Pod resource limits to enter new CPU and memory values.
By default, this application is limited to no more than 4 CPU cores and 8 GiB of memory. The application might use considerably less system resources.
To customize the CPU and memory allocated to the container (pod) Syncthing uses, enter new CPU values as a plain integer value followed by the suffix m (milli). Default is 4000m.
Accept the default value 8Gi allocated memory or enter a new limit in bytes.
Enter a plain integer followed by the measurement suffix, for example, 129M or 123Mi.
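The difference between the two suffixes can be checked with shell arithmetic: in Kubernetes-style quantities, M is decimal (10^6 bytes) while Mi is binary (2^20 bytes), so 129M and 123Mi are nearly the same amount of memory.

```shell
# Decimal vs. binary memory suffixes in Kubernetes-style quantities.
echo $(( 129 * 1000000 ))   # 129M  -> 129000000 bytes
echo $(( 123 * 1048576 ))   # 123Mi -> 128974848 bytes
```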
Securing the Syncthing Web UI
After installing and starting the Syncthing application, launch the Syncthing web portal.
Go to Actions > Settings and set a user password for the web UI.
The Syncthing web portal allows administrators to monitor and manage the synchronization process, view logs, and adjust settings.
Folders list configured sync folders, details on sync status and file count, capacity, etc.
To change folder configuration settings, click on the folder.
This Device displays the current system IO status including transfer/receive rate, number of listeners, total uptime, sync state, and the device ID and version.
Actions displays a dropdown list of options.
Click Advanced to access GUI, LDAP, folder, device, and other settings.
You can manage directional settings for sync configurations, security, encryption, and UI server settings through the Actions options.
Managing Syncthing Folder
To change or enter a directory path to share a folder, click on the folder, then select Edit.
We recommend each shared folder have a sync folder to allow for more granular traffic and data flow.
Syncthing creates a default sync folder in the main user or HOME directory during installation of the application.
Untrusted Device Password is a beta feature and not recommended for production environments.
This feature is for edge cases where two users want to share data on a given device but cannot risk interception of data.
Only trusted users with the code can open the file(s) with shared data.
Using Syncthing File Versioning
File Versioning applies to changes received from other devices.
For example, Bill turns on versioning and Alice changes a file.
Syncthing archives the old version on Bill’s computer when it syncs the change from Alice.
But if Bill changes a file locally on his computer, Syncthing does not and cannot archive the old version.
For more information on file versioning, see Versioning.
Using Syncthing Advanced Administration
Go to Actions > Advanced to access advanced settings.
These settings allow you to set up network isolation, directory services, database, and bandwidth throttling, and to change device-specific and global default settings.
Incorrect configuration can damage folder contents and render Syncthing inoperable!
Viewing Syncthing Logs and Debugs
Go to Logs to access current logs and debug files.
The Log tab displays current logs, and the Debugging Facilities tab provides access to debug logging facilities.
Select different parameters to add to the logs and assist with debugging specific functionalities.
You can forward logs to a specific folder or remote device.
Maintaining File Ownership (ACL Preservation)
Syncthing includes the ability to maintain ownership and extended attributes during transfers between nodes (systems).
This ensures ACLs and permissions remain consistent across TrueNAS SCALE systems during one-way and bi-directional Syncthing transfers.
You can configure this setting on a per folder basis.
10.4.2 - Chia
Provides basic installation instructions for the Chia application using both the TrueNAS web UI and CLI commands.
The SCALE Chia app installs the Chia Blockchain architecture in a Kubernetes container.
Chia Blockchain is a cryptocurrency ecosystem that uses Proof of Space and Time, and allows users to work with digital money and interact with their assets and resources.
Instead of using expensive hardware that consumes exorbitant amounts of electricity to mine crypto, it leverages existing empty hard disk space on your computer(s) to farm crypto with minimal resources.
First Steps
Before you install the application, you can create the config and plots datasets for the Chia app storage volumes, or you can allow SCALE to create these storage volumes automatically.
You also have the option to mount datasets inside the container for other Chia storage needs.
You can allow SCALE to create these storage volumes, or you can create and name datasets according to your intended use or as sequentially named datasets (for example, volume1, volume2, etc.) for each extra volume to mount inside the container.
Create all datasets before you begin the app installation process if using existing datasets and the host path option.
See Creating a Dataset for information on correctly configuring the datasets.
Set up the Chia GUI or the Chia command line (CLI) to configure Chia and start farming.
Deploying the SCALE Chia App
Log into SCALE, go to Apps, click Discover Apps, then either begin typing Chia into the search field or scroll down to locate the Chia application widget.
Application configuration settings are presented in several sections, each explained in Understanding SCALE Chia App Settings below.
To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click the section heading in the navigation area in the upper-right corner.
Accept the default value or enter a name in Application Name.
Accept the default value in Version.
Select the timezone for your TrueNAS system location from the Timezone dropdown list of options.
Select the service from the Chia Service Mode dropdown list.
The default option is Full Node, but you can select Farmer or Harvester.
Harvester displays additional settings, each described in Chia Configuration below. Refer to Chia-provided documentation for more information on these services.
You can enter the network address (IP address or host name) for a trusted peer in Full Node Peer now or after completing the app installation.
This is the trusted/known or untrusted/unknown server address you want to use in sync operations to speed up the sync time of your Chia light wallet.
If not already configured in Chia, you can add this address as a trusted peer in Chia after completing the app installation.
Accept the default values in Chia Port and Farmer Port.
You can enter port numbers below 9000, but check the Default Ports list to verify the ports are available. Setting ports below 9000 automatically enables host networking.
By default, SCALE can create the storage volumes (datasets) for the app.
If you created datasets to use, select Host Path (Path that already exists on the system).
Enter or browse to select the mount path for the config and plot datasets created in First Steps and populate the Host Path field for both Data and Plots storage volumes.
Accept the defaults in Resource Configuration or change the CPU and memory limits to suit your use case.
Click Install.
The system opens the Installed Applications screen with the SCALE Chia app in the Deploying state.
When the installation completes, it changes to Running.
The first time the SCALE Chia app launches, it automatically creates and sets a new private key for your Chia plotting and wallet, but you must preserve this key across container restarts.
Obtaining and Preserving Keys
To make sure your plots and wallet private key persists (is not lost) across container restarts, save the mnemonic seed created during the app installation and deployment.
On the Installed apps screen, click on the Chia app row, then scroll down to the Workloads widget and the Shell and Logs icons.
To show the Chia key file details and the 24-word recovery seed, enter /chia-blockchain/venv/bin/chia keys show --show-mnemonic-seed.
The command returns the key file details and the mnemonic seed.
If you lose the key file at any time, use this information to recover your account.
To copy from the SCALE Pod Shell, highlight the part of the screen you want to copy, then press Ctrl+Insert.
Open a text editor like Notepad, paste the information into the file and save it. Back up this file and keep it secured where only authorized people can access it.
Now save this mnemonic-seed phrase to one of the host volumes on TrueNAS. Enter this command at the prompt:
echo type all 24 unique secret words in this command string > /plots/keyfile
Where type all 24 unique secret words in this command string is all 24 words in the mnemonic-seed.
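After writing the file, it is worth confirming it holds exactly 24 words before relying on it. This sketch uses placeholder words and a local path; on TrueNAS the file is /plots/keyfile as created above:

```shell
# Write 24 placeholder words (NOT a real seed) and verify the word count.
KEYFILE="./keyfile"          # on TrueNAS: /plots/keyfile
seq -f "word%02g" 1 24 | tr '\n' ' ' > "$KEYFILE"
wc -w < "$KEYFILE"           # prints 24
```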
Next, edit the SCALE Chia app to add the key file.
Adding Keys to the SCALE Chia App
Click Installed on the breadcrumb at the top of the Pod Shell screen to return to the Apps > Installed screen.
Click on the Chia app row, then click Edit in the Application Info widget to open the Edit Chia screen.
Click on Chia Configuration on the menu on the right of the screen or scroll down to this section.
Click Add to the right of Additional Environments to add the Name and Value fields.
Scroll down to the bottom of the screen and click Save.
The container should restart automatically.
After the app returns to the Running state, you can confirm the keys by returning to the Pod shell screen and entering the /chia-blockchain/venv/bin/chia keys show --show-mnemonic-seed command again.
If the keys are not identical, edit the Chia app again and check for errors in the name or values entered.
If identical, the key file persists for this container.
You can now complete your Chia configuration using either the Chia command line (CLI) or web interface (GUI).
Setting Up the Chia GUI
To complete the Chia software and client setup, go to the Chia Crash Course and Chia Getting Started guides and follow the instructions provided.
The following shows the simplest option to install the Chia GUI.
Click on the link to the Chia downloads and select the option that fits your operating system environment. This example shows the Windows setup option.
After downloading the setup file and opening the Chia Setup wizard, agree to the license to show the Chia setup options.
Click Install to begin the installation. When complete, click Next to show the Chia Setup Installation Complete wizard window.
Launch Chia is selected by default. Select the Add the Chia Command Line executable to PATH advanced option if you want to include this. Click Finish.
After the setup completes, the Chia web portal opens in a new window where you configure your Chia wallet, farming modes, and other settings to customize Chia for your use case.
The following sections provide more details on the settings found in each section of the SCALE Install Chia and Edit Chia screens.
Application Name Settings
Accept the default value or enter a name in Application Name field.
In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version.
When a new version becomes available, the application has an update badge.
The Installed Applications screen shows the option to update applications.
Chia Configuration
The Chia Configuration section includes four settings: Timezone, Chia Service Node, Full Node Peer, and Additional Environments.
Select the time zone for the location of the TrueNAS server from the dropdown list.
The Chia Service Node has three options: Full Node, Farmer, and Harvester. The default Full Node and the Farmer option do not have extra settings.
Selecting Harvester shows the required Farmer Address and Farmer Port settings, and the CA field for the farmer system certificate authority.
Refer to Chia documentation on each of these services and what to enter as the farmer address and CA.
After configuring Chia in the Chia GUI or CLI, you can edit these configuration settings. You can also repeat the instructions above to create a second app deployment as a Harvester service node.
You can enter the network address (IP address or host name) for a trusted peer in Full Node Peer now or after completing the app installation and setting up the Chia GUI or CLI and configuring the Chia blockchain.
Enter the trusted/known or untrusted/unknown server address you want to use in sync operations to speed up the sync time of your Chia light wallet.
You can also edit this after the initial app installation in SCALE.
Click Add to the right of Additional Environments to add a Name and Value field.
You can add environment variables here to customize Chia and to make the initial key file persist after a container restart.
Click Add for each environment variable you want to add. Refer to Chia documentation for information on environment variables you might want to implement.
Network Configuration
Accept the default port numbers in Chia Port and Farmer Port.
The SCALE Chia app listens on ports 38444 and 38447.
Refer to the TrueNAS default port list for a list of assigned port numbers.
To change the port numbers, enter an available number within the range 9000-65535.
Storage Configuration
You can allow SCALE to create the datasets for Chia plots and configuration storage, or you can create the datasets you want to use as storage volumes for the app or to mount in the container.
If manually creating and using datasets, follow the instructions in Creating a Dataset to correctly configure the datasets.
Add one dataset named config and another named plots.
If also mounting datasets in the container, add and name these additional storage volumes according to your intended use or use volume1, volume2, etc. for each additional volume.
In the SCALE Chia app Storage Configuration section, select Host Path (Path that already exists on the system) as the Type for the Data storage volume.
Enter or browse to and select the location of the existing dataset to populate the Host Path field. Repeat this for the Plots storage volume.
If adding storage volumes inside the container pod, click Add to the right of Additional Volumes for each dataset or ixVolume you want to mount inside the pod.
You can edit the SCALE Chia app after installing it to add additional storage volumes.
Resource Configuration
The Resources Configuration section allows you to limit the amount of CPU and memory the application can use.
By default, this application is limited to use no more than 4 CPU cores and 8 Gibibytes available memory.
The application might use considerably less system resources.
Tune these limits as needed to prevent the application from over-consuming system resources and introducing performance issues.
10.4.3 - Collabora
Provides basic configuration instructions for adding the Collabora app using the TrueNAS webUI.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
The SCALE Apps catalog now includes Collabora from the developers of Nextcloud.
With Collabora, you can host your online office suite at home.
Click on the Collabora app Install button in the Available Applications list.
Name your app and click Next. In this example, the name is collabora1.
Select a Timezone and, if you wish, enter a custom Username and Password.
You can also add extra parameters to your container as you see fit. See the LibreOffice GitHub Parameters page for more information.
After you select your container settings, choose a Certificate and click Next.
Enter Environment Variables as needed, then click Next.
Choose a node port to use for Collabora (we recommend the default), then click Next.
Configure extra host path volumes for Collabora as you see fit, then click Next.
Confirm your Collabora container options and click Save to complete setup.
After a few minutes, the Collabora container displays as ACTIVE.
After it does, you can click Web Portal to access the admin console.
10.4.4 - DDNS-Updater
Provides basic configuration instructions for the DDNS-Updater application.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
The DDNS-Updater application is a lightweight universal dynamic DNS (DDNS) updater with web UI.
When installed, a container launches with root privileges in order to apply the correct permissions to the DDNS-Updater directories.
Afterwards, the container runs as a non-root user.
First Steps
Make sure to have account credentials ready with the chosen DNS provider before installing the application in TrueNAS.
To grant access to specific user (and group) accounts other than the default apps user (UID 568), add a non-root TrueNAS administrative user from Credentials > Local Users and record the UID and GID for this user.
Using a non-default user/group account forces permissions changes on any defined data storage for this application.
Have the TRUENAS catalog loaded and community train enabled.
To view and adjust the current application catalogs, go to Apps and click Discover Apps > Manage Catalogs.
Configuring the Dynamic DNS Service Application
Go to Apps, click Discover Apps, and locate the DDNS-Updater application widget by typing the first few characters of the application name in the search bar.
Click Install to open the DDNS-Updater configuration screen.
Application configuration options are presented in several sections.
Find specific fields or skip to a particular section with the navigation box in the upper-right corner.
Leave the Application Name and Version fields at their default settings.
Changing the application version is only recommended when a specific version is required.
DDNS Updater Configuration
Select the timezone that applies to the TrueNAS location from the Timezone dropdown list.
Click Add to the right of DNS Provider Configuration to display provider setting options.
Select the DDNS provider from the Provider dropdown list.
Each provider displays the settings required to establish a connection with and authenticate to that specific provider.
Enter the domain and host name split between the Domain and Host fields.
For example, for the domain myhostname.ddns.net, enter ddns.net in Domain and myhostname in Host (or @myhostname for providers that use the @ format).
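The Domain/Host split described above can be illustrated with shell parameter expansion (myhostname.ddns.net is the same example name used in this tutorial; the field names are the app's Domain and Host settings):

```shell
# Split a dynamic DNS name into the values for the Host and Domain fields.
fqdn="myhostname.ddns.net"
host="${fqdn%%.*}"     # first label, goes in Host   (myhostname)
domain="${fqdn#*.}"    # remainder, goes in Domain   (ddns.net)
echo "Host=$host Domain=$domain"
```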
Define how often to check IP addresses with Update Period and Update Cooldown Period.
The application also creates .zip backups of the data/config.json and data/updates.json files according to the schedule defined in Backup Period.
Define the HTTP request and DNS query timeout values with HTTP Timeout and Public IP DNS Timeout.
To configure notifications with the Shoutrrr service, click Add and enter the service Address under Shoutrrr Addresses.
Use the Public IP options to define which providers to use for the various DNS, IPv4, and IPv6 public addresses.
The default, All providers, allows for quick app usability, but these options can be tuned as needed.
User and Group Configuration
By default, the TrueNAS apps (UID/GID 568) user and group account manages this application.
Entering an alternate UID or GID reconfigures the application to run as that account.
When using a custom account for this application, make sure the account is a member of the Builtin_administrators group and that the storage location defined in Storage Configuration has permissions tuned for this account after the application is installed.
Network Configuration
By default, this application uses TrueNAS port 30007 to access the application web interface.
Adjust the Web Port integer when a different network port is required.
Select Host Network to bind to the host network, but we recommend leaving this disabled.
Storage Configuration
Select the DDNS Updater Data Storage option from the Type dropdown list.
Options are ixVolume or a predefined host path.
Resources Configuration
By default, this application is limited to use no more than 4 CPU cores and 8 Gibibytes available memory.
The application might use considerably less system resources.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
Review the configuration settings then click Install for TrueNAS to download and initialize the application.
10.4.5 - Immich
Provides installation instructions for the Immich application.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
Immich is a self-hosted photo and video backup tool.
Immich integrates photo and video storage with a web portal and mobile app.
It includes features such as libraries, automatic backup, bulk upload, partner sharing, Typesense search, facial recognition, and reverse geocoding.
TrueNAS SCALE makes installing Immich easy, but you must use the Immich web portal and mobile app to configure accounts and access libraries.
First Steps
The Immich app in TrueNAS SCALE installs, completes the initial configuration, then starts the Immich web portal.
When updates become available, SCALE alerts and provides easy updates.
Before installing the Immich app in SCALE, review the Immich Environment Variables documentation to see if you want to configure any during installation.
You can configure environment variables at any time after deploying the application.
SCALE does not need advance preparation.
You can allow SCALE to create the datasets Immich requires automatically during app installation.
Or before beginning app installation, create the datasets to use in the Storage Configuration section during installation.
Immich requires seven datasets: library, pgBackup, pgData, profile, thumbs, uploads, and video.
You can organize these as one parent with seven child datasets, for example /mnt/tank/immich/library, /mnt/tank/immich/pgBackup, and so on.
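As a hedged sketch of the manual layout above (assuming a hypothetical pool named tank with an immich parent dataset; the web UI is the supported way to create datasets), this loop prints the zfs create commands you would review and run as root on the SCALE host:

```shell
# Print the dataset-creation commands for the seven Immich datasets.
# "tank/immich" is a placeholder; substitute your pool and parent dataset.
parent="tank/immich"
echo "zfs create $parent"
for ds in library pgBackup pgData profile thumbs uploads video; do
  echo "zfs create $parent/$ds"
done
```

Printing the commands first lets you confirm the paths before touching the pool.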
Installing the Immich Application
To install the Immich application, go to Apps, click Discover Apps, then either begin typing Immich into the search field or scroll down to locate the Immich application widget.
Click Install to open the Immich application configuration screen.
Application configuration settings are presented in several sections, each explained below.
To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click on the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Accept the default value in Timezone or change to match your local timezone.
Timezone is only used by the Immich exiftool microservice if it cannot be determined from the image metadata.
Accept the default port in Web Port.
Immich requires seven storage datasets.
You can allow SCALE to create them for you, or use the dataset(s) created in First Steps.
Select the storage options you want to use for Immich Uploads Storage, Immich Library Storage, Immich Thumbs Storage, Immich Profile Storage, Immich Video Storage, Immich Postgres Data Storage, and Immich Postgres Backup Storage.
Select ixVolume (dataset created automatically by the system) in Type to let SCALE create the dataset or select Host Path to use the existing datasets created on the system.
Accept the defaults in Resources or change the CPU and memory limits to suit your use case.
Click Install.
The system opens the Installed Applications screen with the Immich app in the Deploying state.
When the installation completes it changes to Running.
Click Web Portal on the Application Info widget to open the Immich web interface to set up your account and begin uploading photos.
See Immich Post Install Steps for more information.
Go to the Installed Applications screen and select Immich from the list of installed applications.
Click Edit on the Application Info widget to open the Edit Immich screen.
The settings on the edit screen are the same as on the install screen.
You cannot edit Storage Configuration paths after the initial app install.
Click Update to save changes.
TrueNAS automatically updates, recreates, and redeploys the Immich container with the updated environment variables.
Understanding Immich Settings
The following sections provide more detailed explanations of the settings found in each section of the Install Immich screen.
Application Name Settings
Accept the default value or enter a name in the Application Name field.
In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version.
When a new version becomes available, the application has an update badge.
The Installed Applications screen shows the option to update applications.
Immich Configuration Settings
You can accept the defaults in the Immich Configuration settings, or enter the settings you want to use.
Accept the default setting in Timezone or change to match your local timezone.
Timezone is only used by the Immich exiftool microservice if it cannot be determined from the image metadata.
You can enter a Public Login Message to display on the login page, or leave it blank.
Networking Settings
Accept the default port numbers in Web Port.
The SCALE Immich app listens on port 30041.
Refer to the TrueNAS default port list for a list of assigned port numbers.
To change the port numbers, enter a number within the range 9000-65535.
Storage Settings
You can install Immich using the default setting ixVolume (dataset created automatically by the system) or use the host path option with datasets created before installing the app.
Resource Configuration Settings
Accept the default values in Resources Configuration or enter new CPU and memory values.
By default, this application is limited to use no more than 4 CPU cores and 8 gibibytes available memory. The application might use considerably less system resources.
To customize the CPU and memory allocated to the container Immich uses, enter new CPU values as a plain integer value followed by the suffix m (milli).
Default is 4000m, which means Immich is able to use 4 cores.
Accept the default value 8Gi allocated memory or enter a new limit in bytes.
Enter a plain integer followed by the measurement suffix, for example 4G or 123Mi.
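The unit conventions above can be checked with quick shell arithmetic (a minimal sketch; the m, Gi, and G suffixes follow the Kubernetes resource-unit convention, where m is millicores, Gi is binary gibibytes, and G is decimal gigabytes):

```shell
# Millicores: 1000m equals one CPU core, so the 4000m default is 4 cores.
millicores=4000
echo "$(( millicores / 1000 )) CPU cores"

# Memory suffixes: Gi uses powers of 1024, G uses powers of 1000.
echo "8Gi = $(( 8 * 1024 * 1024 * 1024 )) bytes"
echo "8G  = $(( 8 * 1000 * 1000 * 1000 )) bytes"
```

Note that 8Gi is roughly seven percent more memory than 8G, so the suffix matters when setting limits.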
Systems with compatible GPU(s) display devices in GPU Configuration.
Use the GPU Resource dropdown menu(s) to configure device allocation.
See Allocating GPU for more information about allocating GPU devices in TrueNAS SCALE.
10.4.6 - Jellyfin
Provides installation instructions for the Jellyfin application.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
Jellyfin is a volunteer-built media solution that puts you in control of managing and streaming your media.
Jellyfin enables you to collect, manage, and stream media files. Official and third-party Jellyfin streaming clients are available on most popular platforms.
TrueNAS SCALE makes installing Jellyfin easy, but you must use the Jellyfin web portal to configure accounts and manage libraries.
First Steps
The Jellyfin app in TrueNAS SCALE installs, completes the initial configuration, then starts the Jellyfin web portal.
When updates become available, SCALE alerts and provides easy updates.
You can configure environment variables at any time after deploying the application.
SCALE does not need advance preparation.
You can allow SCALE to create the datasets Jellyfin requires automatically during app installation.
Or before beginning app installation, create the datasets to use in the Storage Configuration section during installation.
Jellyfin requires two datasets: config and cache.
You can organize these as one parent with two child datasets, for example /mnt/tank/jellyfin/config and /mnt/tank/jellyfin/cache.
You can choose to create a static transcodes dataset or use temporary storage in the disk or memory for transcoding.
If you want to run the application with a user or group other than the default apps (568) user and group, create them now.
Installing the Jellyfin Application
To install the Jellyfin application, go to Apps, click Discover Apps, then either begin typing Jellyfin into the search field or scroll down to locate the Jellyfin application widget.
Click Install to open the Jellyfin application configuration screen.
Application configuration settings are presented in several sections, each explained below.
To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click on the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Accept the defaults in Jellyfin Configuration, User and Group Configuration, and Network Configuration or change to suit your use case.
You must select Host Network under Network Configuration if using DLNA.
Jellyfin requires three app storage datasets.
You can allow SCALE to create them for you, or use the dataset(s) created in First Steps.
Select the storage options you want to use for Jellyfin Config Storage and Jellyfin Cache Storage.
Select ixVolume (dataset created automatically by the system) in Type to let SCALE create the dataset or select Host Path to use the existing datasets created on the system.
Jellyfin also requires a dataset or emptyDir for Jellyfin Transcodes Storage.
Select ixVolume (dataset created automatically by the system) in Type to let SCALE create the dataset, select Host Path to use an existing dataset created on the system, or select emptyDir to use a temporary storage volume on the disk or in memory.
Solid state storage is recommended for the config and cache storage.
Do not place the cache and config storage on the same spinning disk as the media storage libraries.
Mount one or more media libraries using Additional Storage.
Click Add to enter the path(s) on your system.
Select Host Path (Path that already exists on the system) or SMB Share (Mounts a persistent volume claim to a SMB share) in Type.
Enter a Mount Path to be used within the Jellyfin container. For example, the local Host Path /mnt/tank/video/movies could be assigned the Mount Path /media/movies.
Define the Host Path or complete the SMB Share Configuration fields.
See Mounting Additional Storage below for more information.
Accept the defaults in Resource Configuration or change the CPU and memory limits to suit your use case.
Click Install.
A container launches with root privileges to apply the correct permissions to the Jellyfin directories.
Afterward, the Jellyfin container runs as a non-root user (default: 568).
Configured storage directory ownership is changed if the parent directory does not match the configured user.
The system opens the Installed Applications screen with the Jellyfin app in the Deploying state.
When the installation completes it changes to Running.
Click Web Portal on the Application Info widget to open the Jellyfin web interface initial setup wizard to set up your admin account and begin administering libraries.
Go to the Installed Applications screen and select Jellyfin from the list of installed applications.
Click Edit on the Application Info widget to open the Edit Jellyfin screen.
The settings on the edit screen are the same as on the install screen.
You cannot edit Storage Configuration paths after the initial app install.
Click Update to save changes.
TrueNAS automatically updates, recreates, and redeploys the Jellyfin container with the updated environment variables.
Understanding Jellyfin Settings
The following sections provide more detailed explanations of the settings found in each section of the Install Jellyfin screen.
Application Name Settings
Accept the default value or enter a name in the Application Name field.
In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version.
When a new version becomes available, the application has an update badge.
The Installed Applications screen shows the option to update applications.
Jellyfin Configuration Settings
You can accept the defaults in the Jellyfin Configuration settings, or enter the settings you want to use.
User and Group Configuration
This user and group are used for running the Jellyfin container only and cannot be used to log in to the Jellyfin web interface.
Create an admin user in the Jellyfin initial setup wizard to access the UI.
Networking Settings
Select Host Network under Network Configuration if using DLNA, to bind network configuration to the host network settings.
Otherwise, leave Host Network unselected.
Accept the default port numbers in Web Port.
The SCALE Jellyfin app listens on port 30013.
Refer to the TrueNAS default port list for a list of assigned port numbers.
To change the port numbers, enter a number within the range 9000-65535.
Storage Settings
You can install Jellyfin using the default setting ixVolume (dataset created automatically by the system) or use the host path option with datasets created before installing the app.
For Jellyfin Transcodes Storage, choose ixVolume, Host Path, or emptyDir (Temporary directory created on the disk or in memory). An emptyDir uses ephemeral storage either on the disk or by mounting a tmpfs (RAM-backed filesystem) directory for storing transcode files.
Mounting Additional Storage
Click Add next to Additional Storage to add the media storage path(s) on your system.
Select Host Path (Path that already exists on the system) or SMB Share (Mounts a persistent volume claim to a SMB share) in Type.
You can select ixVolume (dataset created automatically by the system) to create a new library dataset, but this is not recommended.
Mounting an SMB share allows data synchronization between the share and the app.
The SMB share mount does not include ACL protections at this time. Permissions are currently limited to those of the user that mounted the share. Alternate data streams (metadata), Finder color tags, previews, resource forks, and macOS metadata are stripped from the share along with filesystem permissions, but this functionality is under active development, with implementation planned for a future TrueNAS SCALE release.
For all types, enter a Mount Path to be used within the Jellyfin container.
For example, the local Host Path /mnt/tank/video/movies could be assigned the Mount Path /media/movies.
Additional Storage Fields
All types:
Mount Path: The virtual path to mount the storage within the container.
Host Path:
Host Path: The local path to an existing dataset on the system.
ixVolume:
Dataset Name: The name for the dataset the system creates.
SMB Share:
Server: The server for the SMB share.
Share: The name of the share.
Domain (Optional): The domain for the SMB share.
Username: The user name used to access the SMB share.
Password: The password for the SMB share user.
Size (in Gi): The quota size for the share volume. You can edit the size after deploying the application if you need to increase the storage volume capacity for the share.
Resource Configuration Settings
Accept the default values in Resources Configuration or enter new CPU and memory values.
By default, this application is limited to use no more than 4 CPU cores and 8 gibibytes available memory.
To customize the CPU and memory allocated to the container Jellyfin uses, enter new CPU values as a plain integer value followed by the suffix m (milli).
Default is 4000m, which means Jellyfin is allowed to use 4 CPU cores.
Accept the default value 8Gi allocated memory or enter a new limit in bytes.
Enter a plain integer followed by the measurement suffix, for example 4G.
Systems with compatible GPU(s) display devices in GPU Configuration.
Use the GPU Resource dropdown menu(s) to configure device allocation.
See Allocating GPU for more information about allocating GPU devices in TrueNAS SCALE.
10.4.7 - MinIO
Tutorials for using the MinIO community and Enterprise applications available for TrueNAS SCALE.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
This section has tutorials for using the MinIO apps available for TrueNAS SCALE.
SCALE has two versions of the MinIO application.
The community version is the S3 application available in the charts train of the TRUENAS catalog.
The MinIO Enterprise version is a smaller version of MinIO that is tested and polished for a safe and supportable experience for TrueNAS Enterprise customers.
Community members can install either the Enterprise or community version.
Adding the MinIO (Enterprise) App
To add the Enterprise MinIO application to the list of available applications:
Go to Apps and click on Discover Apps.
Click on Manage Catalogs at the top of the Discover screen to open the Catalog screen.
Click on the TRUENAS catalog to expand it, then click Edit to open the Edit Catalog screen.
Click in the Preferred Trains field, then select enterprise to add it to the list of trains.
Both the charts and enterprise train versions of the MinIO app widget display on the Discover application screen.
MinIO High Performance Object Storage, released under the Apache License v2.0, is an open source, Kubernetes-native, Amazon S3-compatible object storage solution. For more on MinIO, see MinIO Object Storage for Kubernetes.
The MinIO applications, both charts and enterprise train versions, allow users to build high-performance infrastructure for machine learning, analytics, and application data workloads.
MinIO supports distributed mode, which allows pooling multiple drives, even on different systems, into a single object storage server.
For information on configuring a distributed mode cluster in SCALE using MinIO, see Setting Up MinIO Clustering.
The instructions in this section cover the basic requirements and instructions for installing and configuring the community MinIO application (charts train version).
For instructions on installing the Enterprise version of the MinIO application, see Configuring Enterprise MinIO.
First Steps
Before configuring MinIO, create a dataset and shared directory for the persistent MinIO data.
Go to Datasets and select the pool or dataset where you want to place the MinIO dataset. For example, /tank/apps/minio or /tank/minio.
You can use either an existing pool or create a new one.
After creating the dataset, create the directory where MinIO stores information the application uses.
There are two ways to do this:
In the TrueNAS SCALE CLI, use storage filesystem mkdir path="/PATH/TO/minio/data" to create the /data directory in the MinIO dataset.
In the web UI, create a share (for example, an SMB share), then log into that share and create the directory.
MinIO uses /data but allows users to replace this with the directory of their choice.
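Before installing the app, you can confirm the directory you created is usable. The sketch below is illustrative, not part of the TrueNAS UI or CLI; the example path is hypothetical, so substitute the mount path of the dataset you created.

```python
# Illustrative pre-flight check: confirm the MinIO data directory
# exists and is writable before installing the app.
import os

def data_dir_ready(path: str) -> bool:
    """Return True if the directory exists and is writable."""
    return os.path.isdir(path) and os.access(path, os.W_OK)

# Example (hypothetical path): data_dir_ready("/mnt/tank/minio/data")
```

Run this from a shell on the SCALE system with your real mount path before clicking Install.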
Configuring MinIO (S3) Community App
To install the S3 MinIO (community app), go to Apps, click on Discover Apps, then either begin typing MinIO into the search field or scroll down to locate the charts version of the MinIO widget.
Accept the default values for Application Name and Version.
The best practice is to keep the default Create new pods and then kill old ones in the MinIO update strategy. This implements a rolling upgrade strategy.
Next, enter the MinIO Configuration settings.
The MinIO application defaults include all the arguments you need to deploy a container for the application.
Enter a name in Root User to use as the MinIO access key. Enter a name of five to 20 characters, for example, admin or admin1.
Next, enter the Root Password to use as the MinIO secret key. Enter eight to 40 random characters, for example, MySecr3tPa$$w0d4Min10.
Keep all passwords and credentials secured and backed up.
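As a sketch of the length rules above (access key five to 20 characters, secret key eight to 40 characters), a simple validator might look like the following; the function names are illustrative, not part of any MinIO API.

```python
# Illustrative validators for the credential length rules stated above.
def valid_root_user(name: str) -> bool:
    """MinIO access key: 5 to 20 characters."""
    return 5 <= len(name) <= 20

def valid_root_password(secret: str) -> bool:
    """MinIO secret key: 8 to 40 characters."""
    return 8 <= len(secret) <= 40
```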
MinIO containers use server port 9000. The MinIO Console communicates using port 9001.
You can configure the API and UI access node ports and the MinIO domain name if you have TLS configured for MinIO.
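Put together, the defaults above imply two endpoints per deployment: the S3 API on port 9000 and the MinIO Console on port 9001. A small sketch (the host name is a placeholder):

```python
# Illustrative endpoint URLs built from the default MinIO ports above.
def minio_endpoints(host: str, api_port: int = 9000,
                    console_port: int = 9001) -> dict:
    return {
        "api": f"http://{host}:{api_port}",          # S3 API
        "console": f"http://{host}:{console_port}",  # MinIO Console UI
    }
```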
To store your MinIO container audit logs, select Enable Log Search API and enter the amount of storage you want to allocate to logging.
The default is 5 GB.
Configure the storage volumes.
Accept the default /export value in Mount Path.
Click Add to the right of Extra Host Path Volumes to add a data volume for the dataset and directory you created above.
Enter the /data directory in Mount Path in Pod and the dataset you created in the First Steps section in Host Path.
If you want to create volumes for postgres data and postgres backup, select Postgres Data Volume and/or Postgres Backup Volume to add the mount and host path fields for each.
If not set, TrueNAS uses the defaults for each: postgres-data and postgres-backup.
Accept the defaults in Advanced DNS Settings.
If you want to limit the CPU and memory resources available to the container, select Enable Pod resource limits then enter the new values for CPU and/or memory.
Click Install when finished entering the configuration settings.
The Installed applications screen displays showing the MinIO application in the Deploying state.
It changes to Running when the application is ready to use.
Select Create new pods then kill old ones to implement a rolling update strategy where the existing container (pod) remains until the update completes, then it is removed.
Select Kill existing pods before creating new ones to implement a recreate update strategy where you remove the existing container (pod) and then create a new one.
The recommended option is to keep the default and use the rolling update strategy.
MinIO Configuration
The MinIO Configuration section provides options to set up a cluster, add arguments, credentials, and environment variables to the deployment.
Select Enable Distributed Mode when setting up a cluster of SCALE systems in a distributed cluster.
MinIO in distributed mode allows you to pool multiple drives or TrueNAS SCALE systems (even if they are different machines) into a single object storage server for better data protection in the event of single or multiple node failures because MinIO distributes the drives across several nodes.
For more information, see the Distributed MinIO Quickstart Guide.
To create a distributed cluster, click Add to show a Distributed MinIO Instance URI(s) field for the IP address/host name of each TrueNAS system (node) to include in the cluster. Use the same order across all the nodes.
Enter the name for the root user (MinIO access key) in Root User. Enter a name of five to 20 characters, for example, admin or admin1.
Next, enter the root user password (MinIO secret key) in Root Password. Enter eight to 40 random characters, for example, MySecr3tPa$$w0d4Min10.
You do not need to enter extra arguments or environment variables to configure the MinIO app.
Accept the default port settings in MinIO Service Configuration. Before changing ports, refer to Default Ports.
Select the optional Enable Log Search API to enable LogSearch API and configure MinIO to use this function. This deploys a postgres database to store the logs.
Enabling this option displays the Disk Capacity in GB field. Use this to specify the storage in gigabytes the logs are allowed to occupy.
Storage
MinIO storage settings include the option to add mount paths and storage volumes to use inside the container (pod).
There are three storage volumes, data, postgres data, and postgres backup. The data volume is the only required storage volume.
Accept the default /export value in Mount Path.
Click Add to the right of Extra Host Path Volumes to add a data volume for the dataset and directory you created above.
Enter the /data directory in Mount Path in Pod and the dataset you created in the First Steps section above in Host Path.
To add host paths for postgres storage volumes, select Enable Host Path for Postgres Data Volume and/or Enable Host Path for Postgres Backup Volume.
SCALE default values for each of these postgres volumes are postgres-data and postgres-backup.
Advanced DNS
MinIO does not require configuring advanced DNS options.
Accept the default settings or click Add to the right of DNS Options to show the Name and Value fields for a DNS option.
By default, this application is limited to no more than 4 CPU cores and 8 gigabytes of available memory.
The application might use considerably less system resources.
To customize the CPU and memory allocated to the container (pod) the MinIO app uses, select Enable Pod resource limits.
This adds the CPU Resource Limit and Memory Limit fields.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
Setting Up MinIO Clustering: Provides configuration instructions using the MinIO Official Charts application widget. It includes instructions on setting up a distributed cluster configuration.
10.4.7.1 - Updating MinIO from 1.6.58
Provides information on updating MinIO from 1.6.58 to newer versions.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
This article applies to the public release of the S3 MinIO community application in the charts train of the TRUENAS catalog.
Manual Update Overview
MinIO fails to deploy if you update your version 2022-10-24_1.6.58 MinIO app to 2022-10-29_1.6.59 or later using the TrueNAS web UI.
Your app logs display an error similar to the following:
ERROR Unable to use the drive /export: Drive /export: found backend type fs, expected xl or xl-single: Invalid arguments specified.
If you get this error after upgrading your MinIO app, use the app Roll Back function, found on the Application Info widget on the Installed applications screen, and return to 2022-10-24_1.6.58 to make your MinIO app functional again.
If your system has sharing (SMB, NFS, iSCSI) configured, disable the share service before adding and configuring a new MinIO deployment.
After completing the installation and starting MinIO, enable the share service.
When adding a new MinIO deployment, verify your storage settings are correct in the MinIO application configuration. If not set, enter the required information before clicking Install.
Follow the instructions here to make a new, up-to-date MinIO deployment in TrueNAS.
Make sure it is version 2022-10-29_1.6.59 or later.
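If you script any part of this migration, a small helper can decide whether a chart version label is at or past 1.6.59. This is only a sketch and assumes the YYYY-MM-DD_x.y.z label format shown in this article.

```python
# Illustrative parser for chart version labels like "2022-10-29_1.6.59".
def chart_version(tag: str) -> tuple:
    """Return the numeric app version, e.g. (1, 6, 59)."""
    return tuple(int(part) for part in tag.split("_")[1].split("."))

def at_or_past_1_6_59(tag: str) -> bool:
    """True for releases affected by the backend format change."""
    return chart_version(tag) >= (1, 6, 59)
```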
Downloading the MinIO Client
Download the MinIO Client here for your OS and follow the installation instructions.
The MinIO Client (mc) lets you create and manage MinIO deployments via your system command prompt.
Adding both TrueNAS MinIO Deployments to mc.exe
Open a terminal or CLI.
If you are on a Windows computer, open PowerShell and enter wsl to switch to the Linux subsystem.
Change directories to the folder that contains mc.exe.
Add your old deployment to mc by entering: ./mc alias set old-deployment-name http://IPaddress:port/ rootuser rootpassword.
Where:
old-deployment-name is your old MinIO app name in TrueNAS.
http://IPaddress:port/ is the IP address and port number the app uses.
Add your new deployment to mc using the same command with the new alias: ./mc alias set new-deployment-name http://IPaddress:port/ rootuser rootpassword.
Where:
new-deployment-name is your new MinIO app name in TrueNAS.
http://IPaddress:port/ is the IP address and port number the app uses.
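If you drive mc from a script instead of typing the commands, building the argument list directly avoids shell-quoting problems with passwords. The helper below is a sketch; the deployment name, URL, and credentials in the example are placeholders.

```python
# Illustrative builder for the `mc alias set` command shown above.
# Pass the list to subprocess.run() so special characters in the
# password are not interpreted by a shell.
import subprocess  # used in the commented example below

def mc_alias_set(alias: str, url: str, user: str, password: str) -> list:
    return ["./mc", "alias", "set", alias, url, user, password]

# Example (placeholders, not executed here):
# subprocess.run(mc_alias_set("new-deployment-name",
#                             "http://192.0.2.10:9000/",
#                             "admin", "MySecr3tPa$$w0d4Min10"),
#                check=True)
```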
Porting Configurations from Old to New MinIO Deployment
To port your configuration from your old MinIO deployment to your new one, export your old MinIO app configurations by entering ./mc.exe admin config export old-deployment-name > config.txt.
MinIO Client exports the config file to the current directory path.
Where:
old-deployment-name is your old MinIO app name in TrueNAS.
After moving all data from the old app to the new one, return to the TrueNAS UI Apps screen and stop both MinIO apps.
Delete the old MinIO app. Edit the new one and change the API and UI Access Node Ports to match the old MinIO app.
Restart the new app to finish migrating.
When complete and the app is running, restart any share services.
10.4.7.2 - Setting Up MinIO Clustering
Provides configuration instructions using the MinIO Official Charts application widget. It includes instructions on setting up a distributed cluster configuration.
This article applies to the public release of the S3 MinIO charts application in the TRUENAS catalog.
On TrueNAS SCALE 23.10 and later, users can create a MinIO S3 distributed instance to scale out and handle individual node failures.
A node is a single TrueNAS storage system in a cluster.
The examples below use four TrueNAS systems to create a distributed cluster.
For more information on MinIO distributed setups, refer to the MinIO documentation.
First Steps
Before configuring MinIO, create a dataset and shared directory for the persistent MinIO data.
Go to Datasets and select the pool or dataset where you want to place the MinIO dataset. For example, /tank/apps/minio or /tank/minio.
You can use either an existing pool or create a new one.
After creating the dataset, create the directory where MinIO stores information the application uses.
There are two ways to do this:
In the TrueNAS SCALE CLI, use storage filesystem mkdir path="/PATH/TO/minio/data" to create the /data directory in the MinIO dataset.
In the web UI, create a share (for example, an SMB share), then log into that share and create the directory.
MinIO uses /data but allows users to replace this with the directory of their choice.
For a distributed configuration, repeat this on all system nodes in advance.
Take note of the system (node) IP addresses or host names and have them ready for configuration. Also, have your S3 user name and password ready for later.
Configuring MinIO
Configure the MinIO application using the full version of the MinIO charts widget.
Go to Apps, click Discover Apps, then locate the charts version of the MinIO widget.
We recommend using the Install option on the MinIO application widget.
If your system has sharing (SMB, NFS, iSCSI) configured, disable the share service before adding and configuring a new MinIO deployment.
After completing the installation and starting MinIO, enable the share service.
If the dataset for the MinIO share has the same path as the MinIO application, disable host path validation before starting MinIO.
To use host path validation, set up a new dataset for the application with a completely different path. For example, for the share /pool/shares/minio and for the application /pool/apps/minio.
Configuring MinIO Using Install
Begin on the first node (system) in your cluster.
To install the S3 MinIO (community app), go to Apps, click on Discover Apps, then either begin typing MinIO into the search field or scroll down to locate the charts version of the MinIO widget.
Accept the default values for Application Name and Version.
The best practice is to keep the default Create new pods and then kill old ones in the MinIO update strategy. This implements a rolling upgrade strategy.
Next, enter the MinIO Configuration settings.
Select Enable Distributed Mode when setting up a cluster of SCALE systems in a distributed cluster.
MinIO in distributed mode allows you to pool multiple drives or TrueNAS SCALE systems (even if they are different machines) into a single object storage server for better data protection in the event of single or multiple node failures because MinIO distributes the drives across several nodes.
For more information, see the Distributed MinIO Quickstart Guide.
To create a distributed cluster, click Add to show a Distributed MinIO Instance URI(s) field for the IP address/host name of each TrueNAS system (node) to include in the cluster. Use the same order across all the nodes.
The MinIO application defaults include all the arguments you need to deploy a container for the application.
Enter a name in Root User to use as the MinIO access key. Enter a name of five to 20 characters, for example, admin or admin1.
Next, enter the Root Password to use as the MinIO secret key. Enter eight to 40 random characters, for example, MySecr3tPa$$w0d4Min10.
Keep all passwords and credentials secured and backed up.
For a distributed cluster, ensure the values are identical between server nodes and have the same credentials.
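Because every node must receive the same URI list in the same order, it can help to generate the list once and enter it identically on each node. The sketch below assumes hypothetical host names and the default API port; the exact URI shape your chart version expects may differ.

```python
# Illustrative generator for the Distributed MinIO Instance URI(s) list.
# Host names and port are examples only.
def instance_uris(hosts, port=9000):
    return [f"http://{host}:{port}" for host in hosts]

nodes = ["truenas1.local", "truenas2.local",
         "truenas3.local", "truenas4.local"]
uris = instance_uris(nodes)  # use this same ordered list on every node
```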
MinIO containers use server port 9000. The MinIO Console communicates using port 9001.
You can configure the API and UI access node ports and the MinIO domain name if you have TLS configured for MinIO.
To store your MinIO container audit logs, select Enable Log Search API and enter the amount of storage you want to allocate to logging.
The default is 5 GB.
Configure the storage volumes.
Accept the default /export value in Mount Path.
Click Add to the right of Extra Host Path Volumes to add a data volume for the dataset and directory you created above.
Enter the /data directory in Mount Path in Pod and the dataset you created in the First Steps section in Host Path.
If you want to limit the CPU and memory resources available to the container, select Enable Pod resource limits then enter the new values for CPU and/or memory.
Click Install when finished entering the configuration settings.
Now that the first node is complete, configure any remaining nodes (including datasets and directories).
After installing MinIO on all systems (nodes) in the cluster, start the MinIO applications.
Accessing the MinIO Setup
After you create datasets, you can navigate to the TrueNAS address at port :9000 to see the MinIO UI. After creating a distributed setup, you can see all your TrueNAS addresses.
Log in with the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD keys you created as environment variables.
Click Web Portal to open the MinIO sign-in screen.
Provides information on installing and configuring the Netdata app on TrueNAS SCALE.
The TrueNAS SCALE Netdata app provides an easy way to install and access the Netdata infrastructure monitoring solution.
SCALE deploys the Netdata app in a Kubernetes container using the Helm package manager.
After successfully deploying the app, you can access the Netdata web portal from SCALE.
The Netdata web portal opens on the local dashboard, where you can create new dashboards and add plugins, metric databases, physical and virtual systems, containers, and other cloud deployments you want to monitor.
The portal also provides access to the Netdata Cloud sign-in screen.
Before You Begin
The SCALE Netdata app does not require advance preparation.
You can allow SCALE to automatically create storage volumes for the Netdata app or you can create specific datasets to use for configuration, cache, and library storage and extra storage volumes in the container pod.
If using specific datasets, create these before beginning the app installation.
The administrator account must have sudo permissions enabled.
To verify, go to Credentials > Local Users.
Click on the administrator user (e.g., admin), then click Edit. Scroll down to the sudo permissions.
Select either Allow all sudo commands to permit changes after entering a password (not recommended in this instance) or Allow all sudo commands with no password to permit changes without requiring a password.
If you upgraded from Angelfish or early releases of Bluefin that do not have an admin user account, see Creating an Admin User Account for instructions on correctly creating an administrator account with the required permissions.
You can create a Netdata account before or after installing and deploying the Netdata app.
Installing Netdata on SCALE
To install the Netdata application, go to Apps, click on Discover Apps, then either scroll down to the Netdata app widget or begin typing Netdata in the search field to filter the list to find the Netdata app widget.
Application configuration settings, presented in several sections, are explained in Understanding Netdata Settings below.
To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click on the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Accept the default settings in Netdata Configuration and the default port in Node Port to use for Netdata UI.
The SCALE Netdata app uses the default port 20489 to communicate with Netdata and show the Netdata local dashboard.
Make no changes in the Storage section to allow SCALE to create the storage volumes for the app. To use datasets created for Netdata configuration storage, select Enable Host Path for Netdata to show the Host Path for Netdata Configuration settings.
Enter or browse to select the dataset created for Netdata configuration storage to populate the mount path.
If using datasets created for cache and library storage, enable these options, then enter or browse to the datasets for each.
Accept the default settings in Advanced DNS Settings.
Accept the default values in Resources Limits or select Enable Pod Resource limits to show resource configuration options for CPU and memory and enter new values to suit your use case.
Click Install.
The system opens the Installed Applications screen with the Netdata app in the Deploying state.
When the installation completes it changes to Running.
The following sections provide more detailed explanations of the settings found in each section of the Install Netdata screen.
Application Name Settings
Accept the default value or enter a name in Application Name.
In most cases use the default name, but if adding a second deployment of the application you must change the name.
Accept the default version number in Version.
When a new version becomes available, the application shows an update badge on the Installed Applications screen and adds Update buttons to the Application Info widget and the Installed applications screen.
Netdata Configuration Settings
You can accept the defaults in the Netdata Configuration settings or enter the settings you want to use.
Click Add to the right of Netdata image environment to display the environment variable Name and Value fields.
Netdata does not require using environment variables to deploy the application but you can enter any you want to use to customize your container.
The SCALE Netdata app uses port 20489 to communicate with Netdata and open the web portal.
Netdata documentation states it uses 19999 as the default port, but recommends restricting access to this port for security reasons.
Refer to the TrueNAS default port list for a list of assigned port numbers.
To change the port numbers, enter a number within the range 9000-65535.
Netdata Storage Settings
SCALE defaults to automatically creating storage volumes for Netdata without enabling the host path options.
To create and use datasets for the Netdata configuration, cache, and library storage or extra storage volumes inside the container pod, first create these datasets.
Go to Datasets and create the datasets before you begin the app installation process.
See Add Datasets for more information.
Select Enable Host Path for Netdata to show the volume mount path field to add the configuration storage dataset.
Enter or browse to select the dataset and populate the mount path field.
To use datasets created for cache and library storage volumes, first enable each option and then enter or browse to select the datasets to populate the mount path fields for each.
If you want to add storage volumes inside the container pod for other storage, click Add to the right of Extra Host Path Volumes for each storage volume (dataset) you want to add.
You can add extra storage volumes at the time of installation or edit the application after it deploys. Stop the app before editing settings.
Advanced DNS Settings
The default DNS Configuration is sufficient for a basic installation.
To specify additional DNS options, click Add to the right of DNS Options to add the DNS Option Name and Option Value fields.
Accept the default values in Resources Limits or select Enable Pod Resource limits to show CPU and memory resource configuration options.
By default, the application is limited to no more than four CPU cores and eight gigabytes of available memory.
The application might use considerably less system resources.
To customize the CPU allocated to the container (pod) Netdata uses, enter a new CPU value as a plain integer followed by the suffix m (milli).
The default is 4000m.
Accept the default value 8Gi allocated memory or enter a new limit in bytes.
Enter a plain integer followed by the measurement suffix, for example 129M or 123Mi.
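These values use Kubernetes resource-quantity notation. As a sketch of how the suffixes above are interpreted (handling only the suffixes mentioned here):

```python
# Illustrative interpretation of the quantity strings used above.
def cpu_cores(quantity: str) -> float:
    """'4000m' -> 4.0 cores (m = milli); a bare number is whole cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def memory_bytes(quantity: str) -> int:
    """'8Gi' -> 8 * 1024**3 bytes; '129M' -> 129 * 1000**2 bytes."""
    units = {"Mi": 1024**2, "Gi": 1024**3, "M": 1000**2, "G": 1000**3}
    # Check two-character suffixes (Mi, Gi) before one-character ones.
    for suffix, factor in sorted(units.items(), key=lambda kv: -len(kv[0])):
        if quantity.endswith(suffix):
            return int(quantity[:-len(suffix)]) * factor
    return int(quantity)
```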
Using the Netdata Web Portal
After deploying the SCALE Netdata app, click Web Portal to open the Netdata agent local dashboard.
This Netdata dashboard provides a system overview of CPU usage and other vital statistics for the TrueNAS server connecting to Netdata.
The Netdata System Overview dashboard displays a limited portion of the reporting capabilities.
Scroll down to see more information or click on a listed metric on the right side of the screen to show the graph and reporting on that metric.
Click the other tabs at the top left of the dashboard to view other dashboards for nodes, alerts, anomalies, functions, and events.
You can add your own Netdata dashboards using Netdata configuration documentation to guide you.
Click on the Nodes tab to better understand the differences between the Netdata agent and Netdata Cloud service reporting.
The Netdata Cloud monitors your cloud storage providers added to Netdata.
Use the Netdata-provided documentation to customize Netdata dashboards to suit your use case and monitoring needs.
10.4.9 - Nextcloud
Provides instructions to configure TrueNAS SCALE and install Nextcloud to support hosting a wider variety of media file previews, such as HEIC, MP4, and MOV files.
Nextcloud is a drop-in replacement for many popular cloud services, including file sharing, calendar, groupware, and more.
One of its more common uses in the home environment is as a media backup, organization, and sharing service.
This procedure demonstrates how to set up Nextcloud on TrueNAS SCALE and configure it to support hosting a wider variety of media file previews, including High Efficiency Image Container (HEIC), MP4, and MOV files.
Before You Begin
Before using SCALE to install the Nextcloud application you need to create four datasets to use as storage for the Nextcloud application.
If you are creating a new user account to manage this application or using the local administrator account, enable sudo permissions for that account.
If creating a new user for Nextcloud, add the user to the dataset ACL permissions.
If you want to use a certificate for this application, create a new self-signed CA and certificate, or import the CA and create the certificate if using one already configured for Nextcloud. A certificate is not required to deploy the application.
Set up an account with Nextcloud if you don’t already have one. Enter this user account in the application configuration.
Installing Nextcloud on SCALE
In this procedure you:
Add the storage for Nextcloud to use.
Install the Nextcloud app in SCALE.
Adding Nextcloud Storage
Nextcloud needs five datasets: a primary dataset for the application (nextcloud) with four child datasets.
The four child datasets are named and used as follows:
appdata contains HTML, apps, custom_themes, config, etc.
userdata contains the actual files uploaded by the user.
pgdata contains the database files.
pgbackup contains the database backups.
SCALE creates the ix-applications dataset in the pool you set as the application pool when you first go to the Apps screen.
This dataset is internally managed, so you cannot use this as the parent when you create the required Nextcloud datasets.
To create the Nextcloud app datasets, go to Datasets, select the dataset you want to use as the parent dataset, then click Add Dataset to add a dataset.
In this example, we create the Nextcloud datasets under the root parent dataset tank.
Enter nextcloud in Name, select Apps as the Dataset Preset.
Click Advanced Options to make any other setting changes you want to make, and click Save.
When prompted, select Return to Pool List.
Next, select the nextcloud dataset, click Add Dataset to add the first child dataset.
Enter appdata in Name and select Apps as the Dataset Preset.
Click Advanced Options to make any other setting changes you want to make for the dataset, and click Save.
Repeat this three more times to add the other three child datasets to the nextcloud parent dataset.
When finished you should have the nextcloud parent dataset with four child datasets under it. Our example paths are /tank/nextcloud/appdata, /tank/nextcloud/userdata, /tank/nextcloud/pgdata, and /tank/nextcloud/pgbackup.
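Before installing, you can confirm the dataset layout from a shell on the SCALE system. The sketch below is illustrative; the parent path in the example is hypothetical, and on SCALE the datasets mount under /mnt.

```python
# Illustrative check that the four Nextcloud child dataset directories
# exist under the parent dataset mount point.
import os

CHILDREN = ("appdata", "userdata", "pgdata", "pgbackup")

def missing_children(parent: str):
    """Return the expected child directories not present under parent."""
    return [c for c in CHILDREN
            if not os.path.isdir(os.path.join(parent, c))]

# Example (hypothetical path): missing_children("/mnt/tank/nextcloud")
# returns [] when all four child datasets exist.
```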
Go to Apps, click Discover Apps, then locate the Nextcloud widget. Click on the widget to open the Nextcloud details screen, then click Install.
If this is the first application installed, SCALE displays a dialog about configuring apps.
Accept the default name for the app in Application Name or enter a new name if you want to change what displays or have multiple Nextcloud app deployments on your system.
This example uses the default nextcloud.
Scroll down to or click on Nextcloud Configuration to show the app configuration settings.
For a basic installation you can leave the default values in all settings except Username and Password.
a. Enter the username and password created in the Before You Begin section, or the credentials for an existing Nextcloud administrator user account.
This example uses admin as the user.
TrueNAS populates Host with the IP address for your TrueNAS server and Nextcloud data directory populates with the correct path.
b. Click Add to the right of Command to show the Command field, then click in that field and select Install ffmpeg to automatically install the FFmpeg utility when the container starts.
c. (Optional) Click in the Certificate Configuration field and select the certificate for Nextcloud if already created and using a certificate.
d. Leave Cronjobs selected (enabled by default), then select the schedule you want to use for the cron job.
Nextcloud Cron Jobs
NextCloud cron jobs only run while the app is running. If you stop the app, the cron job(s) do not run until you start the app again.
For more information on formatting and using cron jobs, see Managing Cron Jobs.
e. To specify an optional Environment Variable name and value, click the Add button.
Accept the port number TrueNAS populates in the Web Port field in Network Configuration.
Enter the storage settings for each of the four datasets created for the Nextcloud app.
Do not select Pre v2 Storage Structure if you are deploying Nextcloud for the first time as this slows down the installation and is not necessary.
If you are upgrading where your Nextcloud deployment in SCALE was a 1.x.x release, select this option.
a. Select Host Path (Path that already exists on the system) in Type, then browse to and select the appdata dataset to populate the Host Path for the Nextcloud AppData Storage fields.
You can set the ACL permissions here by selecting Enable ACL, but it is not necessary. You can also change dataset permissions from the Datasets screen using the Edit button on the Permissions widget for the Nextcloud Data dataset.
b. Select Host Path (Path that already exists on the system) in Type, then browse to and select the userdata dataset to populate the Host Path for the Nextcloud User Data Storage fields.
c. Scroll down to the Nextcloud Postgres Data Storage option.
Select Host Path (Path that already exists on the system) in Type, then browse to and select the pgdata dataset to populate the Host Path.
d. Scroll down to Nextcloud Postgres Backup Storage, select Host Path, and then enter or browse to the path for the pgbackup dataset.
When complete, the four datasets for Nextcloud are configured.
Scroll up to review the configuration settings and fix any errors or click Install to begin the installation.
The Installed screen displays with the nextcloud app in the Deploying state.
It changes to Running when ready to use.
Click Web Portal on the Application Info widget to open the Nextcloud web portal sign-in screen.
There are known issues with Nextcloud app releases earlier than 2.0.4. Use the Upgrade option in the SCALE UI to update your Nextcloud release to 2.0.4.
If the app does not deploy, add the www-data user and group to the /nextcloud dataset but do not set recursive.
Stop the app before editing the ACL permissions for the datasets.
Next, try adding the www-data user and group to the /nextcloud/data dataset. You can set this to recursive, but it is not necessary.
To do this:
Select the dataset, scroll down to the Permissions widget, and click Edit to open the ACL Editor screen.
Click Add Item, select User in Who and www-data in the User field, and select Full Control in Permissions.
Add an entry for the group by repeating the above steps but select Group.
Click Save Access Control List.
10.4.10 - Pi-Hole
Provides information on installing Pi-hole to support network-level advertisement and internet tracker blocking.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
SCALE includes the ability to run Docker containers using Kubernetes.
What is Docker?
Docker is an open platform for developing, shipping, and running applications. Docker enables the separation of applications from infrastructure through OS-level virtualization to deliver software in containers.
What is Kubernetes?
Kubernetes is a portable, extensible, open-source container-orchestration system for automating computer application deployment, scaling, and management with declarative configuration and automation.
Always read through the Docker Hub page for the container you are considering installing so that you know all of the settings that you need to configure.
To set up a Docker image, first determine if you want the container to use its own dataset. If yes, create a dataset for host volume paths before you click Launch Docker Image.
Installing Pi-hole Application
If you want to create a dataset for Pi-hole data storage, you must do this before beginning the Pi-hole application install.
When you are ready to create a container, click Apps to open the Applications screen, then click on Available Applications.
Locate the pihole widget and click Install on the widget.
Fill in the Application Name and click Version to verify that the default version is the most current version available.
In the Container Environment Variables section, enter the password to use for the administrative user in Admin password. The password cannot be edited after you click Save.
Adjust the Configure timezone setting if it does not match where your TrueNAS is located.
To add the WEBPASSWORD environment variable, click Add for Pihole Environment to add a block of environment variable settings.
Enter WEBPASSWORD in Name, then enter a secure password in Value (for example, s3curep4$$word).
Scroll down to the Storage settings.
Select Enable Custom Host Path for Pihole Configuration Volume to add the Host Path for Pihole Configuration Volume field and dataset browse option.
Click the arrow to the left of the /mnt folder and each dataset to expand the tree, then browse to the dataset and directory paths you created before beginning the container deployment.
Pi-hole uses volumes to store your data between container upgrades.
You need to create these directories in a dataset on SCALE before you begin installing this container.
To create a directory, open the TrueNAS SCALE CLI and enter storage filesystem mkdir path="/PATH/TO/DIRECTORY".
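As a runnable sketch, the same directories can also be created from an SSH shell. The base path here uses mktemp only so the example runs anywhere; in practice you substitute your dataset mount (for example, /mnt/<pool>/<dataset>). The pihole and dnsmasq directory names are an assumption modeled on common Pi-hole container volumes, not values mandated by the app.

```shell
# Create a base directory and two volume directories for the container.
# BASE is a stand-in; on TrueNAS substitute your dataset mount point.
BASE="$(mktemp -d)"
mkdir -p "$BASE/pihole" "$BASE/dnsmasq"

# Confirm both directories exist before starting the container deployment.
ls "$BASE"
```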
Click Add to display setting options to add extra host path volumes to the container if you need them.
Enter any Networking settings you want to use or customize.
TrueNAS adds the port assignments Pi-hole requires in the Web Port for pihole, DNS TCP Port for pihole, and DNS UDP Port for pihole fields. TrueNAS SCALE requires setting all Node Ports above 9000.
Select Enable Host Network to add host network settings.
If you want to configure DNS options for your pod, click Add for DNS Options to add a block of DNS settings.
Select Enable Pod resource limits if you want to limit the CPU and memory for your Pi-hole application.
Click Save.
TrueNAS SCALE deploys the container.
If correctly configured, the Pi-Hole widget displays on the Installed Applications screen.
When the deployment completes, the container becomes active. If the container does not automatically start, click Start on the widget.
Click the app card to show details for the app.
With Pi-hole as our example, navigate to the IP address of your TrueNAS system with the port and directory appended, :9080/admin/.
10.4.11 - Prometheus
Provides installation instructions for the Prometheus application.
Prometheus is a monitoring platform that collects metrics from targets it monitors. Targets are system HTTP endpoints configured in the Prometheus web UI. Prometheus is itself an HTTP endpoint so it can monitor itself.
Prometheus collects and stores metrics as time series data. Stored information is time-stamped at the point when it is recorded.
Prometheus uses key-value pairs called labels to differentiate characteristics of what is measured.
Use the Prometheus application to record numeric time series.
Also use it to diagnose problems with the monitored endpoints when there is a system outage.
TrueNAS SCALE makes installing Prometheus easy, but you must use the Prometheus web portal to configure targets, labels, alerts, and queries.
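For orientation, targets are defined as scrape jobs in Prometheus's own configuration. Below is a minimal sketch of a prometheus.yml that scrapes the server's own metrics endpoint; the job name and the localhost:9090 target follow the upstream getting-started example and are not SCALE-specific values.

```yaml
global:
  scrape_interval: 15s          # how often Prometheus scrapes its targets

scrape_configs:
  - job_name: prometheus        # label attached to the scraped series
    static_configs:
      - targets: ['localhost:9090']   # Prometheus monitoring itself
```

Because Prometheus is itself an HTTP endpoint, self-scraping like this is the conventional first target to verify the server works.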
First Steps
The Prometheus app in SCALE installs, completes the initial configuration, then starts the Prometheus Rule Manager.
When updates become available, SCALE alerts and provides easy updates.
Before installing the Prometheus app in SCALE, review their Configuration documentation and list of feature flags and environment variables to see if you want to include any during installation.
You can configure environment variables at any time after deploying the application.
SCALE does not need advance preparation.
If not using the default user and group to manage the application, create a new user (and group) and take note of the IDs.
You can allow SCALE to create the two datasets Prometheus requires automatically during app installation.
Or before beginning app installation, create the datasets named data and config to use in the Storage Configuration section during installation.
Installing the Prometheus Application
To install the Prometheus application, go to Apps, click Discover Apps, either begin typing Prometheus into the search field or scroll down to locate the Prometheus application widget.
Click Install to open the Prometheus application configuration screen.
Application configuration settings are presented in several sections, each explained below.
To find specific fields, begin typing in the Search Input Fields field, scroll down to a particular section, or click the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Accept the default value in Retention Time or change to suit your needs.
Enter values in days (d), weeks (w), months (m), or years (y). For example, 15d, 2w, 3m, 1y.
Enter the amount of storage space to allocate for the application in Retention Size.
Valid entries include integer and suffix, for example: 100MB, 10GB, etc.
You can add arguments or environment variables to customize your installation but these are not required.
To show the Argument entry field or the environment variable Name and Value fields, click Add for whichever type you want to add.
Click again to add another argument or environment variable.
Accept the default port in API Port.
Select Host Network to bind to the host network, but we recommend leaving this disabled.
Prometheus requires two storage datasets.
You can allow SCALE to create these for you, or use the datasets named data and config created earlier in First Steps.
Select the storage option you want to use for both Prometheus Data Storage and Prometheus Config Storage.
Select ixVolume in Type to let SCALE create the dataset or select Host Path to use the existing datasets created on the system.
Accept the defaults in Resources or change the CPU and memory limits to suit your use case.
Click Install.
The system opens the Installed Applications screen with the Prometheus app in the Deploying state.
When the installation completes it changes to Running.
The following sections provide more detailed explanations of the settings found in each section of the Install Prometheus screen.
Application Name Settings
Accept the default value or enter a name in Application Name field.
In most cases use the default name, but if adding a second deployment of the application you must change this name.
Accept the default version number in Version.
When a new version becomes available, the application has an update badge.
The Installed Applications screen shows the option to update applications.
Prometheus Configuration Settings
You can accept the defaults in the Prometheus Configuration settings, or enter the settings you want to use.
Accept the default in Retention Time or change to any value that suits your needs.
Enter values in days (d), weeks (w), months (m), or years (y). For example, 15d, 2w, 3m, 1y.
Retention Size is not required to install the application. To limit the space allocated to retain data, add a value such as 100MB, 10GB, etc.
Select WAL Compression to enable compressing the write-ahead log.
Add Prometheus environment variables in SCALE using the Additional Environment Variables option.
Click Add for each variable you want to add.
Enter the Prometheus flag in Name and desired value in Value. For a complete list see Prometheus documentation on Feature Flags.
Networking Settings
Accept the default port numbers in API Port.
The SCALE Prometheus app listens on port 30002.
Refer to the TrueNAS default port list for a list of assigned port numbers.
To change the port numbers, enter a number within the range 9000-65535.
We recommend not selecting Host Network, which binds the app to the host network.
Storage Settings
You can install Prometheus using the default setting ixVolume (dataset created automatically by the system) or use the host path option with the two datasets created before installing the app.
Select Host Path (Path that already exists on the system) to browse to and select the data and config datasets.
Set Prometheus Data Storage to the data dataset path, and Prometheus Config Storage to the config dataset path.
Accept the default values in Resources Configuration or enter new CPU and memory values.
By default, this application is limited to use no more than 4 CPU cores and 8 GiB of available memory. The application might use considerably less system resources.
To customize the CPU and memory allocated to the container (pod) Prometheus uses, enter new CPU values as a plain integer value followed by the suffix m (milli).
Default is 4000m.
Accept the default value 8Gi allocated memory or enter a new limit in bytes.
Enter a plain integer followed by the measurement suffix, for example 129M or 123Mi.
10.4.12 - Rsync Daemon
Installation and basic usage instructions for the Rsync Daemon application.
This application is not needed when rsync is configured externally with SSH or with the TrueNAS built-in rsync task in SSH mode.
It is always recommended to use rsync with SSH as a security best practice.
You do not need this application to schedule or run rsync tasks from the Data Protection screen using the Rsync Task widget.
This application is an open source server that provides fast incremental file transfers.
When installed, the Rsync Daemon application provides the server function to rsync clients given the server information and ability to connect.
Installing the Rsync Daemon Application
Before installing the Rsync Daemon application (rsyncd), add a dataset the application can use for storage.
To install this application, go to Apps, click on Discover Apps, then either begin typing rsync into the search field or scroll down to locate the Rsync Daemon application widget.
Add and configure at least one module.
A module creates an alias for a connection (path) to use rsync with.
Click Add to display the Module Configuration fields.
Enter a name and specify the path to the dataset this module uses for the rsync server storage.
Leave Enable Module selected.
Select the type of access from the Access Mode dropdown list.
Accept the rest of the module setting defaults.
To limit clients that connect, enter IP addresses in Hosts Allow and Hosts Deny.
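Conceptually, each module you add maps to a stanza in the rsyncd.conf file the server uses. The sketch below shows what a hypothetical read-only module named backups might correspond to; the path and subnet are examples, and the exact file the app generates may differ.

```
[backups]
# Host Path: the dataset this module exposes (example mount point)
path = /mnt/tank/backups
# Access Mode: Read Only
read only = true
# Max Connections: 0 means unlimited
max connections = 0
# Hosts Allow: example subnet; omit to allow all hosts
hosts allow = 192.168.1.0/24
```

A client could then pull from the module with, for example, rsync -av rsync://<truenas-host>:30026/backups/ ./backups/.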
The following sections provide more detailed explanations of the settings found in each section of the Install Rsync Daemon configuration screen.
Application Name
The Application Name section includes only the Application Name setting. Accept the default rsyncd or enter a new name to show on the Installed applications screen in the list and on the Application Info widget.
Rsync Configuration
The Rsync Configuration section Auxiliary Parameters fields allow you to customize the rsync server deployment.
Enter rsync global or module parameters using the Auxiliary Parameters fields.
Click Add to the right of Auxiliary Parameters for each parameter you want to add.
Enter the name of the parameter in Parameter and the value for that parameter in Value.
Network Configuration
The Network Configuration section includes the Host Network and Rsync Port settings.
Accept the default port number 30026, which is the port the Rsync app listens on.
Before changing the port number, refer to Default Ports to verify the port is not already assigned. Enter a new port number in Rsync Port.
We recommend that you leave Host Network unselected.
Module Configuration
The Module Configuration section includes settings to add and customize a module for the rsync server and to configure the clients allowed or denied access to it.
Click Add for each module to add.
There are seven required settings to add a module and four optional settings.
Module Name is whatever name you want to give the module and is an alias for access to the server storage path.
A name can include upper and lowercase letters, numbers, and the special characters underscore (_), hyphen (-) and dot (.).
Do not begin or end the name with a special character.
Enable Module, selected by default, allows the listed client IP addresses to connect to the server after the app is installed and started.
Use optional Comment to enter a description that displays next to the module name when clients obtain a list of available modules.
Default is to leave this field blank.
Enter or browse to the location of the dataset to use for storage for this module on the rsync server in Host Path.
Select the access permission for this storage path from the Access Mode dropdown list. Options are Read Only, Read Write, and Write Only.
Enter a number in Max Connections for the number of client connections to allow. The default, 0, allows unlimited connections to the rsync server.
Accept the UID (user ID) and GID (group ID) default 568. If you create an administration user and group to use for this module in this application, enter that UID/GID number in these fields.
Use Hosts Allow and Hosts Deny to specify IP addresses for client systems that can connect to the rsync server through this module.
Enter multiple IP addresses separated by a comma and a space.
Leave Hosts Allow blank to allow all hosts; leave Hosts Deny blank to deny none.
Use the Auxiliary Parameters to enter parameters and their values to further customize the module.
Do not enter parameters already available as the settings included in this section.
You can specify rsync global or module parameters using the module Auxiliary Parameters fields.
Authentication
By default, the rsync daemon allows access to everything within the dataset specified for each module, without authentication.
To set up password authentication, add two auxiliary parameters for the module:
Parameter: auth users
Value: a comma-separated list of usernames
Parameter: secrets file
Value: the path to the rsyncd.secrets file
Place the file inside your module dataset and use the value:
"/data//rsyncd.secrets"
The file must have 600 permissions and be owned by root:root for the rsync daemon to accept it for authentication.
The file should contain a list of username:password pairs in plaintext, one user per line:
admin:password1234
user:password5678
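The steps above can be sketched from a shell. The file path is a stand-in (the real file belongs inside your module dataset), and the chown line is commented out because it must run as root on the NAS:

```shell
# Create the secrets file with one username:password pair per line.
SECRETS=/tmp/rsyncd.secrets      # stand-in; use a path inside the module dataset
printf '%s\n' 'admin:password1234' 'user:password5678' > "$SECRETS"

# The rsync daemon rejects a secrets file that others can read.
chmod 600 "$SECRETS"
# chown root:root "$SECRETS"     # also required; run as root

stat -c '%a' "$SECRETS"          # prints 600
```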
Resource Configuration
The Resources Configuration section allows you to limit the amount of CPU and memory the application can use.
By default, this application is limited to use no more than 4 CPU cores and 8 Gibibytes available memory.
The application might use considerably less system resources.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
10.4.13 - Storj
Provides information on the steps to set up a Storj node on your TrueNAS SCALE system.
Storj is an open-source decentralized cloud storage (DCS) platform.
Storj permits a computer running this software to act as a node, renting unused storage capacity and bandwidth on your system to other users.
Before You Begin
Before you can configure your system to act as a Storj node:
Review the Storj node hardware and bandwidth considerations at Storj Node.
Configure your router and firewall.
Open ports on your router and configure port forwarding. Configure firewall rules to allow access for these ports.
The default node port is 20988.
Open 28967 for both TCP and UDP.
Open ports 7777 and 8888 for outbound communication.
Alternatively, use a dynamic DNS (DDNS) service such as NoIP to create a host name if you do not have a static IP address for the system nodes.
Create a publicly-available domain name to access the Storj application. Point this to your router public IP address.
Create a Storj identity and authorize it for every node.
Every node must have a unique identifier on the network. Use NFS/SMB shares or a file transfer service such as FTP to upload the generated credentials.
If an identity is not present in the storage directory, the application generates and authorizes one automatically.
This can take a long time and consumes system resources while it runs.
Storj provides a Quickstart Node Setup Guide with step-by-step instructions to help users create a Storj node.
Getting a Wallet Address
Use the Google Chrome MetaMask extension to create a wallet address or, if you already have one, use the existing wallet.
See Storj Wallet Configuration.
Special considerations regarding how to protect and manage a wallet are outside the scope of this article.
Generating an Authentication Token for Storj
Open a browser window and go to Storj Host a Node.
Enter an email address to associate with the account, select the I’m not a robot checkbox, then click Continue.
Copy the auth token to use later in this procedure. Keep this token in a secure location.
Configuring the Router and Firewall
To allow the Storj application to communicate with Storj and the nodes, configure your router with port forwarding and the firewall to allow these ports to communicate externally:
Add a new port forwarding rule to your router for:
28967 for both TCP and UDP protocols
7777 for outgoing communication with the satellites
8888 for outgoing communication while creating and signing the identities.
Enter the internal IP address of your TrueNAS system in Destination Device.
Enter 20988 in Public and Private ports for both TCP and UDP for the Protocol.
With the TrueNAS system up and running, check your open port using a tool such as https://www.yougetsignal.com/tools/open-ports/. If your port forwarding is working, port 20988 is open.
This enables QUIC, which is a protocol based on UDP that provides more efficient usage of the Internet connection with both parallel uploads and downloads.
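As a TCP-only supplement to the web-based checker (it cannot verify UDP, so it does not test QUIC), here is a small shell helper; the function name is ours and the host is a placeholder for your public DDNS name:

```shell
# Report whether a TCP port on a host accepts connections.
check_port() {
  if timeout 5 bash -c ">/dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed/filtered"
  fi
}

# Example: test your public DDNS name and the Storj node port.
check_port name.ddns.net 20988
```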
Creating a DDNS Host Name
Create a DDNS host name that points to your router WAN IP address, and provide a domain name to use for accessing the Storj application.
You can use a dynamic DNS service that allows you to set up a DDNS host name. You can use a service such as NoIP to create a domain name (i.e., name.ddns.net) and then point it at the WAN IP address of your router.
Use nslookup name.ddns.net to verify it works.
Creating the Storj Datasets on TrueNAS SCALE
Create three new datasets, one a parent to two child datasets nested under it.
Log into TrueNAS SCALE, then go to Datasets and click Add Dataset to open the Add Dataset screen.
Accept the default name or enter a new name for your Storj application.
You can enter a name for the Storj app using lowercase alphanumeric characters; the name must begin and end with an alphanumeric character.
Do not use a hyphen as the first or last character. For example, storjnode or storj-node, but not -storjnode or storjnode-.
Enter the authentication token copied from Storj in Configure Auth token for Storj Node.
Enter the email address associated with the token in Configure Email for Storj.
Enter the storage domain (i.e., the public network DNS name) added for Storj in Add Your Storage Domain for Storj.
If using Dynamic DNS (DDNS), enter that name here as well. For example, name.ddns.net.
Accept the default values in Owner User ID and Owner Group ID.
Configure the storage size (in GB) you want to share. Enter the value in Configure Storage Size You Want to Share in GB’s.
Enter the host paths for the new datasets created for the Storj application.
Select Enable Custom Host Path for Storj Configuration Volume and browse to the newly created dataset (config).
Next, select Configure Identity Volume for Storage Node and browse to the second newly created dataset (identity).
Enter the web port 28967 in Web Port for Storj, and 20988 in Node Port for Storj.
The time required to install the Storj App varies depending on your hardware and network configuration.
When complete, the Installed Applications screen displays the Storj app with the status of active.
Environment variables are optional.
If you want to include additional variables, see Storj Environment Variables for a list.
Click Add for each variable you want to add.
Using the Web Portal
Click the Web Portal button to view additional details about the application.
The Storj Node dashboard displays stats for the storage node. These could include bandwidth utilization, total disk space, and disk space used for the month.
Payout information is also provided.
10.4.14 - TFTP Server
Provides instructions for installing the TFTP Server application.
The new TFTP Server application provides Trivial File Transfer Protocol (TFTP) server functions.
The TFTP Server application is a lightweight TFTP-server container in TrueNAS SCALE. It is not intended for use as a standalone container.
The app runs as root and drops privileges to the tftp (9069) user for the TFTP service.
Every application start launches a container with root privileges that checks the parent directory permissions and ownership.
If it finds a mismatch, the container applies the correct permissions to the TFTP directories.
If Allow Create is selected, the container also chmods the TFTP directories to 757; if not selected, to 555.
Afterwards, the container drops privileges to the tftp (9069) user for the TFTP service.
First Steps
Configure your DHCP server for network boot to work.
To grant access to a specific user (and group) different from defaults, add the new non-root administrative user and note the UID and GID for this user.
To use a specific dataset or volume for files, create this in the Storage screen first.
Installing the TFTP Service App
You can install the application using all default settings, or you can customize settings to suit your use case.
To install the TFTP Server app, go to Apps, click Discover Apps. Either begin typing TFTP into the search field or scroll down to locate the TFTP Server application widget.
Application configuration settings are presented in several sections.
To find specific fields, begin typing in the Search Input Fields field, scroll down to a particular section, or click the section heading in the navigation area in the upper-right corner.
After accepting or changing the default settings explained in the sections below, click Install to start the installation process.
The TFTP Server application displays on the Installed applications screen when the installation completes.
Select Allow Create to allow creating new files. This sets CREATE to 1 and MAPFILE to "", and changes the permissions of the tftpboot directory to 757. Otherwise, the tftpboot directory permission is 555.
Click Add to the right of Additional Environmental Variables to display the Name and Value fields.
Enter the name as shown in the environment variables table below. Do not enter variables that have setting fields or the system displays an error.
TFTP Server Environment Variables
This table lists Docker environment variables for the TFTP Server (tftpd-hpa) application.
BLOCKSIZE (default: n/a): Specifies the maximum permitted blocksize.
CREATE (default: 0): Use Allow Create to set to 1. 0 means files upload only if they already exist; 1 allows creating new files.
MAPFILE (default: ""): Specifies whether to use filename remapping. Enter as /mapfile or leave empty ("") if not using a mapfile. If entering a mapfile, set ownership to uid/gid 9096.
PERMISSIVE (default: 0): Performs no additional permission checks.
PORTRANGE (default: 4096:32736): Forces the server port number (transaction ID) to be in the specified range of port numbers. The Docker container default range is 4096:32760.
REFUSE (default: ""): Indicates that a specific RFC 2347 option should never be accepted.
RETRANSMIT (default: 1 second): Determines the default timeout in microseconds before the first packet retransmits.
SECURE (default: "1"): Changes the root directory on startup.
TIMEOUT (default: 900 seconds): Specifies the number of seconds to wait for a second connection before terminating the server.
UMASK (default: "020"): Sets the umask for newly created files.
VERBOSE (default: "1"): Increases the logging verbosity of tftpd.
VERBOSITY (default: 3): Sets the verbosity value from 0 to 4.
Network Configuration Settings
When selected, Host Network sets the app to use the default port 69; otherwise, the default port is 30031.
To change the default port number, clear the Host Network checkmark to display the TFTP Port field.
Enter a new port number in TFTP Port within the range 9000-65535.
Refer to the TrueNAS default port list for a list of assigned port numbers.
Storage Configuration Settings
Storage sets the path to store TFTP boot files.
The default storage type is ixVolume (Dataset created automatically by the system) where the system automatically creates a dataset named tftpboot.
Select Host Path (Path that already exists on the system) to show the Host Path field.
Enter or browse to select a dataset you created on the system for the application to use.
10.4.15 - WebDAV
Instructions for installing and configuring the WebDAV app and sharing feature.
The WebDAV application is a set of extensions to the HTTP protocol that allows users to collaboratively edit and manage files on remote web servers. It serves as the replacement for the built-in TrueNAS SCALE WebDAV feature.
When installed and configured with at least one share, a container launches with temporary root privileges to configure the shares and activate the service.
First Steps
To grant access to a specific user (and group) other than the default for the webdav user and group (666), add a new non-root administrative user and take note of the UID and GID for this user.
If you want to create a dataset to use for the WebDAV application share(s), create it before you install the application.
Installing the WebDAV Application
To install the application, you can accept the default values or customize the deployment to suit your use case.
You create the WebDAV share as part of the application installation.
To install the WebDAV application, go to Apps, click Discover Apps, then either begin typing WebDAV into the search field or scroll down to locate the WebDAV application widget.
Application configuration settings are presented in several sections.
To find specific fields, begin typing in the Search Input Fields search field, scroll down to a particular section, or click a section heading in the navigation area in the upper-right corner.
Accept the defaults in Application Name and Version.
Accept the defaults or customize the settings in WebDAV Configuration. Accept the default authentication setting, No Authentication, or select Basic Authentication to require entering authentication credentials.
The application includes all the setting fields you need to install and deploy the container, but you can add additional environment variables to further configure the container.
The default network protocol is HTTP, which uses port 30034. To use HTTPS and add encryption to the web traffic, clear the checkmark in Enable HTTP and select Enable HTTPS.
HTTPS uses port 30035 and adds the Certificate field. The default certificate is 0.
We recommend leaving Host Network unselected, as enabling it binds the app directly to the host network.
Create at least one share in Storage Configuration.
Click Add to display the share settings.
Enable the share is selected by default, which enables the share when the app starts.
Enter a name using lowercase or uppercase letters and/or numbers. Names can include the underscore (_) or dash (-).
Accept the default Resource Configuration, or enter the CPU and memory settings you want to apply to the WebDAV application container.
After configuring the container settings, click Install to save the application configuration, deploy the app, and make the share(s) accessible.
After the installation completes, the application displays on the Installed applications screen.
The WebDAV widget on the Discover and WebDAV information screens includes the Installed badge.
Application Name Settings
Accept the default values in Application Name and Version.
If you want to change the application name, enter a new name.
WebDAV Configuration Settings
WebDAV configuration settings include the type of share authentication to use, none or basic.
No Authentication means any system can discover TrueNAS and access the data shared by the WebDAV application share, so this is not recommended.
Basic Authentication adds the Username and Password fields and provides some basic security.
The WebDAV application configuration includes all the settings you need to install the Docker container for the app.
You can use the Docker container environment variables listed in the table below to further customize the WebDAV Docker container.
Docker Container Environment Variables
Variable
Description
WEBDRIVE_URL
Use to specify a URL for the WebDAV resource other than the default. The default URL is http://webdav-ip:webdav-port/share1, where webdav-ip is the IP address of the TrueNAS system and webdav-port is 30034. If HTTPS is enabled, the URL is https://webdav-ip:webdav-port/share1, where webdav-port is 30035.
WEBDRIVE_PASSWORD_FILE
Use to specify a file that contains the password instead of using the Password field. Use when Authentication Type is set to Basic Authentication.
WEBDRIVE_MOUNT
Use to specify the location inside the container where the WebDAV resource (drive) mounts. This defaults to /mnt/webdrive and is not meant to be changed.
User and Group Configuration Settings
The default user and group for WebDAV is 666. To specify a different user, create the user and group before installing the application, then enter the user ID (UID) and group ID (GID) in the fields for these settings.
To add encryption to the web traffic between clients and the server, clear the checkmark in Enable HTTP and select Enable HTTPS.
This changes the default port in HTTPS Port to 30035 and adds the Certificate field.
The default certificate is 0. You can use the default as the Certificate if no other specific certificate is available.
Storage Configuration Settings
Create one or more shares in the Storage Configuration section. For the application to work, create at least one share.
Click Add for each share you want to create.
Each share must have a unique name.
To add a WebDAV share to the application, click Add to the right of Shares in the Storage Configuration section.
Enter a name in Share Name.
The name can have upper and lowercase letters and numbers. It can include an underscore (_) and/or a dash (-).
Enter share purpose or other descriptive information about the share in Description. This is not required.
Enter or browse to the Host Path location where the app adds the WebDAV share.
If you created a dataset before installing the app, you can browse to it here.
Select Read Only to disable write access to the share.
When selected, data accessed by clients cannot be modified.
Select Fix Permissions to change the Host Path file system permissions.
When enabled, the dataset owner becomes the User ID and Group ID set in the User and Group Configuration section.
By default, this is the webdav user with UID and GID 666.
Fix Permissions allows TrueNAS to apply the correct permissions to the WebDAV shares and directories and simplify app deployment.
After first configuration, the WebDAV container runs as the dedicated webdav user (UID: 666).
WebDAV only supports Unix-style permissions.
When deployed with Fix Permissions enabled, it destroys any existing permissions scheme on the shared dataset.
It is recommended to only share newly created datasets that have the Share Type set to Generic.
Resources Configuration Settings
By default, this application is limited to using no more than 4 CPU cores and 8 GiB of available memory.
The application might use considerably less system resources.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
Testing the Share
At the end of the installation process, test access to your WebDAV share.
In a browser, open a new tab and enter the configured protocol, system host name or IP address, WebDAV port number, and Share Name.
Example: https://my-truenas-system.com:30001/mywebdavshare
When authentication is set to something other than No Authentication, a prompt requests a user name and password.
Enter the Username and Password saved in the WebDAV application configuration to access the shared data.
To change files shared with the WebDAV protocol, use client software such as WinSCP to connect to the share and make changes.
The WebDAV share and dataset permissions must be configured so that the User ID and Group ID can modify shared files.
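The browser test above can also be scripted. The following minimal Python sketch (standard library only; the host name, share name, port, and credentials are placeholder assumptions, not values from your system) builds the share URL and issues a WebDAV PROPFIND request. An HTTP 207 Multi-Status response indicates the share is reachable and listable:

```python
import base64
import urllib.request


def webdav_url(host, share, port=30035, scheme="https"):
    """Build the share URL from the protocol, host, port, and Share Name."""
    return f"{scheme}://{host}:{port}/{share}"


def check_share(host, share, username, password, **kw):
    """PROPFIND the share; HTTP 207 (Multi-Status) means it is reachable."""
    req = urllib.request.Request(webdav_url(host, share, **kw), method="PROPFIND")
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")  # Basic Authentication
    req.add_header("Depth", "0")  # ask about the collection itself, not its contents
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status
```

For an HTTP share, pass port=30034 and scheme="http". Note that urllib rejects certificates it cannot verify, so a self-signed HTTPS certificate must be trusted on the client machine first.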
10.4.16 - WG Easy
Provides installation instructions for the WG Easy application.
WG Easy is the easiest way to install and manage WireGuard on any Linux host.
The application is included in the Community catalog of applications.
WG Easy is a Docker image designed to simplify setting up and managing WireGuard connections. The app provides a pre-configured environment with all the necessary components and a web-based user interface to manage VPN connections.
Installing the WG Easy Application
WG Easy does not require advanced preparation before installing the application.
To install the WG Easy application, go to Apps, click Discover Apps, then either begin typing WG Easy into the search field or scroll down to locate the WG Easy application widget.
Click Install to open the WG Easy application configuration screen.
Application configuration settings are presented in several sections.
To find specific fields, begin typing in the Search Input Fields search field, scroll down to a particular section, or click a section heading in the navigation area in the upper-right corner.
Enter the public host name or IP of your VPN server in Hostname or IP.
To protect access to the WG Easy web UI, enter a password in Password for WebUI.
Accept the default value in Persistent Keep Alive or change it to the number of seconds you want to keep the session alive.
When set to zero, connections are not kept alive. A commonly used alternate value is 25.
Accept the default setting for WireGuard (1420) in Clients MTU or enter a new value.
Accept the default IPs in Clients IP Address Range and Clients DNS Server or enter the IP addresses the client uses. If not provided, the default value 1.1.1.1 is used.
To specify allowed IP addresses, click Add to the right of Allowed IPs for each IP address you want to enter.
If you do not specify allowed IPs, the application uses 0.0.0.0/0.
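Entries in Allowed IPs are CIDR networks. As a quick sanity check before saving the form, a short Python sketch using the standard library ipaddress module (the helper name is illustrative, not part of WG Easy or TrueNAS) can validate the entries and mirror the 0.0.0.0/0 fallback described above:

```python
import ipaddress


def parse_allowed_ips(entries):
    """Validate WG Easy Allowed IPs entries as CIDR networks.

    An empty list falls back to 0.0.0.0/0, matching the app default
    (route all IPv4 traffic through the tunnel).
    """
    if not entries:
        entries = ["0.0.0.0/0"]
    # strict=False accepts host addresses such as 10.0.0.5/24 and
    # normalizes them to their containing network.
    return [ipaddress.ip_network(e, strict=False) for e in entries]
```

An invalid entry raises ValueError, which is easier to catch here than to debug later inside a deployed tunnel configuration.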
To specify environment variables, click Add to the right of WG Easy Environment for each environment variable you want to add.
Environment Variables
Variable
Description
WG_DEVICE
Enter the interface name or ID for the Ethernet device WireGuard traffic should forward through.
You can install WG Easy using the default settings or enter your own values to customize the storage settings.
Select Enable Custom Host Path for WG-Easy Configuration Volume to add the Host Path for WG-Easy Configuration Volume field.
Enter or browse to select the host path for the WG Easy application dataset.
Enter the path in Mount Path in Pod where you want to mount the volume inside the pod.
Networking Settings
Accept the default port numbers in WireGuard UDP Node Port for WG-Easy and WebUI Node Port for WG-Easy.
WireGuard always listens on 51820 inside the Docker container.
Refer to the TrueNAS default port list for a list of assigned port numbers.
To change the port numbers, enter a number within the range 9000-65535.
WG Easy does not require configuring advanced DNS options.
Accept the default settings or click Add to the right of DNS Options to show the fields for option name and value.
Accept the default values in Resources Configuration or select Enable Pod resource limits to show the fields to enter new CPU and memory values for the destination system.
Enter CPU values as a plain integer value followed by the suffix m (milli). Default is 4000m.
Accept the default value 8Gi, or enter the memory limit in bytes. Enter a plain integer followed by the measurement suffix, for example, 129M or 123Mi.
Click Save.
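The CPU and memory limits above use the Kubernetes resource-quantity format. The following Python sketch (a simplified parser for illustration only; it does not cover every suffix Kubernetes accepts) shows how the example values translate into plain numbers:

```python
def parse_quantity(q):
    """Convert a Kubernetes-style resource quantity to a plain number.

    CPU: "4000m" -> 4.0 cores.  Memory: "8Gi" -> 8589934592 bytes.
    """
    suffixes = {
        "m": 1e-3,                              # milli-cores
        "K": 1e3, "M": 1e6, "G": 1e9,           # decimal byte suffixes
        "Ki": 2**10, "Mi": 2**20, "Gi": 2**30,  # binary byte suffixes
    }
    # Try two-character suffixes (Ki/Mi/Gi) before one-character ones.
    for s in sorted(suffixes, key=len, reverse=True):
        if q.endswith(s):
            return float(q[: -len(s)]) * suffixes[s]
    return float(q)
```

So the defaults of 4000m and 8Gi correspond to 4 CPU cores and 8 GiB of memory, and 129M is a decimal quantity (129,000,000 bytes) while 123Mi is binary (123 × 2^20 bytes).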
10.5 - Enterprise Applications
Tutorials for using TrueNAS SCALE applications in an Enterprise-licensed deployment.
TrueNAS Enterprise
SCALE Enterprise licensed systems do not have applications available by default.
This feature can be enabled as part of the Enterprise license after consulting with iXsystems.
Only install qualified applications from the Enterprise applications train with the assistance of iXsystems Support.
Contacting Support
Customers who purchase iXsystems hardware or who want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
TrueNAS is certified with leading hypervisors and backup solutions to streamline storage operations and ensure compatibility with your existing IT infrastructure.
TrueNAS Enterprise storage appliances deliver a wide range of features and scalability for virtualization and private cloud environments, with the ability to create off-site backups with scheduled sync and replication features.
TrueNAS applications expand the capabilities of your system by adding third-party software but can add significant risk to system stability and security.
There are general best practices to keep in mind when using applications with TrueNAS SCALE:
Select a Pool and Create a Dataset
We recommend users keep the container use case in mind when choosing a pool. Select a pool that has enough space for all the application containers you intend to use.
TrueNAS creates an ix-applications dataset on the chosen pool and uses it to store all container-related data. This is for internal use only.
If you intend to store your application data in a location that is separate from other storage on your system, create a new dataset.
File Sharing
Since TrueNAS considers shared host paths non-secure, apps that use shared host paths (such as paths also in use by SMB services) might fail to deploy.
The best practice is to create datasets for applications that do not share the same host path as an SMB or NFS share.
Kubernetes Settings
Kubernetes is an open-source container orchestration system that manages container scheduling and deployment, load balancing, auto-scaling, and storage.
The default system-level Kubernetes Node IP settings are found in Apps > Settings > Advanced Settings.
Using Custom App
The Custom App button starts the configuration wizard where users can install applications not included in the approved application catalog.
You cannot interrupt the configuration wizard to save settings, leave, and create data storage or directories in the middle of the process.
We recommend having your storage, user, or other configuration requirements ready before starting the wizard. You should have access to information such as:
The path to the image repository
Any container entrypoint commands or arguments
Container environment variables
Network settings
DNS nameservers
Container and node port settings
Storage volume locations
Directory Services
TrueNAS SCALE allows you to configure an Active Directory or LDAP server to handle authentication and authorization services, domain, and other account settings.
You should know your Kerberos realm and keytab information.
You might need to supply your LDAP server host name, LDAP server base and bind distinguished names (DN), and the bind password.
Determine the container and node port numbers. TrueNAS SCALE requires a node port to be greater than 9000.
Refer to the Default Ports for a list of used and available ports before changing default port assignments.
iXsystems Support can assist Enterprise customers with configuring directory service settings in SCALE with the information customers provide, but they do not configure customer Active Directory system settings.
Section Contents
MinIO Enterprise: Tutorials for installing and configuring the MinIO Enterprise application in an Enterprise-licensed deployment.
Syncthing Enterprise App: Provides general information, guidelines, installation instructions, and use scenarios for the Enterprise version of the Syncthing app.
10.5.1 - MinIO Enterprise
Tutorials for installing and configuring the MinIO Enterprise application in an Enterprise-licensed deployment.
The instructions in this article apply to the Official TrueNAS Enterprise MinIO application.
This smaller version of MinIO is tested and polished for a safe and supportable experience for TrueNAS Enterprise customers.
The Enterprise MinIO application is tested and verified as an immutable target for Veeam Backup and Replication.
Adding MinIO Enterprise App
Community members can add and use the MinIO Enterprise app or the default community version.
Adding Enterprise Train Apps
To add the Enterprise MinIO application to the list of available applications, go to Apps and click on Discover Apps.
Click on Manage Catalogs at the top of the Discover screen to open the Catalog screen.
Click on the TRUENAS catalog to expand it, then click Edit to open the Edit Catalog screen.
Click in the Preferred Trains field, click on enterprise to add it to the list of trains, and then click Save.
Both the charts and enterprise train versions of the MinIO app widget display on the Discover application screen.
First Steps
If your system has active sharing configurations (SMB, NFS, iSCSI), disable them in System Settings > Services before adding and configuring the MinIO application.
Start any sharing services after MinIO completes the installation and starts.
Installing MinIO Enterprise
This basic procedure covers the required MinIO Enterprise app settings.
It does not provide instructions for optional settings.
To install the MinIO Enterprise app, go to Apps, click on Discover Apps, then scroll down to locate the enterprise version of the MinIO widget.
Accept the defaults in Application Name or enter a name for your MinIO application deployment.
Accept the default in Version, which populates with the current MinIO version.
SCALE displays update information on the Installed application screen when an update becomes available.
Enter credentials to use as the MinIO administration user.
If you have existing MinIO credentials, enter these or create new login credentials for the first time you log into MinIO.
The Root User is the equivalent of the MinIO access key. The Root Password is the login password for that user or the MinIO secret key.
Accept the User and Group Configuration settings default values for MinIO Enterprise.
If you configured SCALE with a new administration user for MinIO, enter the UID and GID.
Scroll down to or click Network Configuration on the list of sections at the right of the screen.
Select the certificate you created for MinIO from the Certificates dropdown list.
Enter the TrueNAS server IP address and the API port number 30000 as a URL in MinIO Server URL (API). For example, https://ipaddress:30000.
Enter the TrueNAS server IP address and the web UI browser redirect port number 30001 as a URL in MinIO Browser Redirect URL. For example, https://ipaddress:30001.
MNMD MinIO installations require HTTPS for both MinIO Server URL and MinIO Browser Redirect URL to verify the integrity of each node. Standard or SNMD MinIO installations do not require HTTPS.
The Certificates setting is not required for a basic configuration but is required when setting up multi-mode configurations and when using MinIO as an immutable target for Veeam Backup and Replication.
The Certificates dropdown list includes valid unrevoked certificates, added using Credentials > Certificates.
Enter the TrueNAS server IP address and the API port number 30000 as a URL in MinIO Server URL (API). For example, http://ipaddress:30000.
Enter the TrueNAS server IP address and the web UI browser redirect port number 30001 as a URL in MinIO Browser Redirect URL. For example, http://ipaddress:30001.
Select the storage type you want to use.
ixVolume (Dataset created automatically by the system) is the default storage type.
This creates a dataset for your deployment and populates the rest of the storage fields.
To use an existing dataset, select Host Path (Path that already exists on the system).
Mount Path populates with the default /data1.
Browse to the dataset location and click on it to populate the Host Path.
If you are setting up a cluster configuration, select Enable Multi Mode (SNMD or MNMD), then click Add in MultiMode Configuration.
MinIO recommends using MNMD for enterprise-grade performance and scalability. See the related MinIO articles listed below for SNMD and MNMD configuration tutorials.
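After clicking Install, you can confirm that the deployment answers API requests by polling MinIO's documented liveness health endpoint. This Python sketch uses only the standard library; the base URL is the value entered in MinIO Server URL (API). Note that an untrusted self-signed certificate also returns False here, because urllib refuses connections it cannot verify:

```python
import urllib.request


def minio_alive(base_url, timeout=5):
    """Return True if the MinIO API answers its liveness health check.

    base_url is the MinIO Server URL (API) value, e.g. "https://ipaddress:30000".
    """
    try:
        with urllib.request.urlopen(
            f"{base_url}/minio/health/live", timeout=timeout
        ) as resp:
            # MinIO answers 200 OK with an empty body when the node is live.
            return resp.status == 200
    except OSError:  # connection refused, timeout, TLS verification failure
        return False
```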
The following section provides more detailed explanations of the settings in each section of the Install MinIO configuration screen.
Application Name Settings
Accept the default value or enter a name in Application Name field.
Accept the default version number in Version.
MinIO Credentials
MinIO credentials establish the login credentials for the MinIO web portal and as the MinIO administration user.
If you have existing MinIO credentials, enter them or create new login credentials for the first time you log into MinIO.
The Root User is the equivalent of the MinIO access key. The Root Password is the login password for that user or the MinIO secret key.
Enter a name of five to 20 characters in length for the root user (MinIO access key) in Root User. For example, admin or admin1.
Enter eight to 40 random characters for the root user password (MinIO secret key) in Root Password. For example, MySecr3tPa$$w0d4Min10.
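These length rules are easy to get wrong when generating credentials. The helper functions below are an illustrative Python sketch (the function names are not part of MinIO or TrueNAS) that checks the limits stated above and generates a random secret key with the standard library secrets module:

```python
import secrets
import string


def valid_minio_credentials(root_user, root_password):
    """Check the Root User (access key) and Root Password (secret key) lengths.

    Per the field rules above: 5-20 characters for the user,
    8-40 characters for the password.
    """
    return 5 <= len(root_user) <= 20 and 8 <= len(root_password) <= 40


def random_root_password(length=40):
    """Generate a random Root Password at the maximum allowed length."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```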
User and Group Configuration
Accept the default values in User and Group Configuration.
If you configured SCALE with a new administration user for MinIO, enter the UID and GID in these fields.
Network Configuration
Accept the default port numbers in API Port and Web Port, which are the port numbers MinIO uses to communicate with the app and web portal.
MinIO does not require a certificate for a basic configuration and installation of MinIO Enterprise, but if installing and configuring multi-mode SNMD or MNMD, you must use a certificate.
An SNMD configuration can use the same self-signed certificate created for MNMD, but an MNMD configuration cannot use the certificate created for an SNMD configuration because that certificate only includes the IP address for one system.
Enter the system IP address in URL format followed by the port number for the API separated by a colon in MinIO Server URL (API). For example, https://10.123.12.123:30000.
Enter the system IP address in URL format followed by the port number for the web portal separated by a colon in MinIO Browser Redirect URL. For example, https://10.123.12.123:30001.
MNMD MinIO installations require HTTPS for both MinIO Server URL and MinIO Browser Redirect URL to verify the integrity of each node. Standard or SNMD MinIO installations do not require HTTPS.
Storage Configuration
MinIO storage settings include the option to add storage volumes to use inside the container (pod).
The default storage Type is ixVolume (Dataset created automatically by the system), which adds a storage volume for the application.
To select an existing dataset, select Host Path (Path that already exists on the system) from the Type dropdown list.
The Host Path and Mount Path fields display.
Enter or browse to select and populate the Host Path field.
Accept the default Mount Path /data1 for the first storage volume for a basic installation.
Click Add to add a block of storage volume settings.
When configuring multi-mode, click Add three times to add settings for the three additional datasets that serve as the drives in these configurations.
Multi mode uses four datasets named data1, data2, data3, and data4.
Change the Mount Path for each added volume to /data2, /data3, or /data4, then either enter or browse to select the dataset of the same name to populate the Host Path.
When configuring MNMD, repeat the storage settings on each system in the node.
Click Enable Multi Mode (SNMD or MNMD) to enable multi-mode and display the Multi Mode (SNMD or MNMD) and Add options.
Click Add to display the field where you enter the storage or system port and storage URL string.
Enter /data{1…4} in the field if configuring SNMD.
Here, /data represents the dataset name, and the curly brackets enclosing 1 and 4 separated by three dots represent the numeric range of the dataset names.
Enter https://10.123.123.10{0…3}:30000/data{1…4} in the field if configuring MNMD.
The last digit of the final octet of the first IP address is the first number in the {0…3} string.
Separate the numbers in the curly brackets with three dots.
If your sequential IP addresses are not using 100 - 103, for example 10.123.123.125 through 128, then enter them as https://10.123.123.12{5…8}:30000/data{1…4}.
If you do not have sequentially numbered IP addresses assigned to the four systems, assign sequentially numbered host names.
For example, minio1.mycompany.com through minio4.mycompany.com.
Enter https://minio{1…4}.mycompany.com:30000/data{1…4} in the Multi Mode (SNMD or MNMD) field.
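The {a…b} strings are MinIO's range-expansion notation: each range multiplies out into a sequence of endpoints. The following Python sketch is illustrative only (MinIO performs this expansion itself, so you never run this); it shows the full set of node and drive endpoints an MNMD string like the example above denotes:

```python
import itertools
import re

# Matches {a...b} with either three ASCII dots or a single ellipsis character.
RANGE = re.compile(r"\{(\d+)(?:\.\.\.|\u2026)(\d+)\}")


def expand(template):
    """Expand MinIO-style numeric ranges in a server/drive URL template."""
    parts = RANGE.split(template)
    # re.split with two capture groups yields: text, lo, hi, text, lo, hi, ..., text
    literals = parts[0::3]
    ranges = [range(int(lo), int(hi) + 1)
              for lo, hi in zip(parts[1::3], parts[2::3])]
    results = []
    for combo in itertools.product(*ranges):
        pieces = [literals[0]]
        for value, literal in zip(combo, literals[1:]):
            pieces.append(f"{value}{literal}")
        results.append("".join(pieces))
    return results
```

For example, expand("https://10.123.123.10{0...3}:30000/data{1...4}") yields 16 endpoints, from https://10.123.123.100:30000/data1 through https://10.123.123.103:30000/data4, which is why the IP addresses or host names must be sequential.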
Logging is an optional setting.
If setting up logging, select Anonymous to hide sensitive information from logging or Quiet to omit (disable) startup information.
Select Enable Log Search API to enable the LogSearch API, configure MinIO to use this function, and add the LogSearch configuration settings. This deploys a Postgres database to store the logs.
Enter the disk capacity LogSearch can use in Disk Capacity (GB).
Accept the default ixVolume in Postgres Data Storage to allow the app to create a storage volume.
To select an existing dataset instead of the default, select Host Path from the dropdown list.
Enter or browse to the dataset to populate the Host Path field.
Accept the default ixVolume in Postgres Backup Storage to allow the app to create the storage volume.
To select an existing dataset instead of the default, select Host Path from the dropdown list.
Enter or browse to the dataset to populate the Host Path field.
Resource Configuration
By default, TrueNAS limits this application to using no more than 4 CPU cores and 8 GiB of available memory.
The application might use considerably less system resources.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
Contents
Installing MinIO Enterprise MNMD: Provides instructions on installing and configuring MinIO Enterprise in a Multi-Node Multi-Disk (MNMD) configuration.
Installing MinIO Enterprise SNMD: Provides instructions on installing and configuring MinIO Enterprise in a Single-Node Multi-Disk (SNMD) configuration.
10.5.1.1 - Installing MinIO Enterprise MNMD
Provides instructions on installing and configuring MinIO Enterprise in a Multi-Node Multi-Disk (MNMD) configuration.
The instructions in this article apply to the TrueNAS MinIO Enterprise application installed in a Multi-Node Multi-Disk (MNMD) multi-mode configuration.
First Steps
Complete these steps for every system (node) in the cluster.
Assign four sequential IP addresses or host names such as minio1.mycompany.com through minio4.mycompany.com to the TrueNAS SCALE system.
If you assign IP address numbers such as #.#.#.100 - 103 or #.#.#.134 - 137, you can use these in the command string in the Multi Mode field.
If not using sequential IP addresses, use sequentially numbered host names.
Add network settings using the Network screen. Enter host names on the Global Configuration screen.
When creating the certificate, enter the system IP addresses for each system in Subject Alternate Names.
If configuring MinIO in an MNMD cluster, enter the system IP addresses for each system in the cluster.
If the system has active sharing configurations (SMB, NFS, iSCSI), disable these sharing services in System Settings > Services before adding and configuring the MinIO application.
Start any sharing services after MinIO completes the install and starts.
Multi-mode configurations require a self-signed certificate. If creating a cluster, each system requires a self-signed certificate.
Adding an App Certificate
Go to Credentials > Certificates to add a self-signed certificate authority (CA) and certificate for the application to use.
Click Add on the Certificate Authorities (CA) widget to open the Add Certificate Authority screen.
a. Enter a name for the CA. For example, minio, syncthing, etc.
Accept the defaults for Type and Profile, then click Next.
b. Accept the defaults on Certificate Options unless you want to set an expiration on the certificate.
Enter a new value in Lifetime to impose an expiration time period, then click Next.
c. Enter location and organization values for your installation in the Certificate Subject fields.
Enter the email address you want to receive system notifications.
d. Enter your system IP address in Subject Alternate Names, then click Next.
When configuring a cluster, enter the system IP addresses for each system in the cluster.
e. Accept the default values on Extra Constraints, then click Next.
f. Review the CA configuration then click Save.
Click Add on the Certificates widget to open the Add Certificate screen.
a. Enter a name for the certificate. For example, minio, syncthing, etc.
Select Internal Certificate as Type and HTTPS RSA Certificate in Profiles, then click Next.
b. Select the newly-created CA in Signing Certificate Authority.
Accept the rest of the defaults unless you want to set an expiration on the certificate.
Enter a new value in Lifetime to impose an expiration time period, then click Next.
c. Enter location and organization values for your installation in the Certificate Subject fields.
Enter the email address where you want to receive system notifications.
d. Enter your system IP address in Subject Alternate Names, then click Next.
When configuring a cluster, enter the system IP addresses for each system in the cluster.
e. Accept the default values on Extra Constraints, then click Next.
f. Review the certificate configuration, then click Save.
Download the certificate and install it.
a. Click the download icon on the Certificates widget to start the download.
When complete, click the browser download icon to open in a File Explorer window.
b. Right click on the certificate.crt file, then click Install Certificate. Click Open on the Open File window.
c. Click Install Certificate, then select Local Machine on the Welcome to the Certificate Import Wizard window. Click Next.
d. Select Place all certificates in the following store, then select Trusted Root Certificate Authorities and click OK.
e. Click Next, then click Finish.
Add a self-signed certificate for the MinIO application to use.
Create four datasets named data1, data2, data3, and data4.
Do not nest these datasets under each other. Select the parent dataset, for example apps, before you click Create Dataset.
Set the Share Type to apps for each dataset.
MinIO assigns the correct properties during the installation process so you do not need to configure the ACL or permissions.
Installing MinIO Enterprise
This procedure covers the required Enterprise MinIO App settings.
Repeat this procedure for every system (node) in the MNMD cluster.
To install the MinIO Enterprise app, go to Apps, click Discover Apps, then scroll down to locate the enterprise version of the MinIO widget.
Accept the defaults in Application Name or enter a name for your MinIO application deployment.
Accept the default in Version, which populates with the current MinIO version.
SCALE displays update information on the Installed application screen when an update becomes available.
Enter credentials to use as the MinIO administration user.
If you have existing MinIO credentials, enter these or create new login credentials for the first time you log into MinIO.
The Root User is the equivalent of the MinIO access key. The Root Password is the login password for that user or the MinIO secret key.
Accept the default values in the User and Group Configuration settings for MinIO Enterprise.
If you configured SCALE with a new administration user for MinIO, enter the UID and GID.
Scroll down to or click Network Configuration on the list of sections at the right of the screen.
Select the certificate you created for MinIO from the Certificates dropdown list.
Enter the TrueNAS server IP address and the API port number 30000 as a URL in MinIO Server URL (API). For example, https://ipaddress:30000.
Enter the TrueNAS server IP address and the web UI browser redirect port number 30001 as a URL in MinIO Browser Redirect URL. For example, https://ipaddress:30001.
MNMD MinIO installations require HTTPS for both MinIO Server URL and MinIO Browser Redirect URL to verify the integrity of each node. Standard or SNMD MinIO installations do not require HTTPS.
Scroll down to or click on Storage Configuration on the list of sections at the right of the screen.
Click Add three times in the Storage Configuration section to add three more sets of storage volume settings.
In the first set of storage volume settings, select Host Path (Path that already exists on the system) and accept the default /data1 in Mount Path.
Enter or browse to the data1 dataset to populate Host Path with the mount path. For example, /mnt/tank/apps/data1.
Scroll down to the next set of storage volume settings and select Host Path (Path that already exists on the system).
Change the Mount Path to /data2, and enter or browse to the location of the data2 dataset to populate the Host Path.
Scroll down to the next set of storage volume settings and select Host Path (Path that already exists on the system).
Change the Mount Path to /data3, and enter or browse to the location of the data3 dataset to populate the Host Path.
Scroll down to the last set of storage volume settings and select Host Path (Path that already exists on the system).
Change the Mount Path to /data4, and enter or browse to the location of the data4 dataset to populate the Host Path.
Select Enable Multi Mode (SNMD or MNMD), then click Add.
If the systems in the cluster have sequentially assigned IP addresses, use the IP addresses in the command string you enter in the Multi Mode (SNMD or MNMD) field.
For example, https://10.123.12.10{0…3}:30000/data{1…4}, where the first number in the {0…3} range matches the final digit of the last octet of the first IP address in the sequence.
Separate the numbers in the curly brackets with three dots.
If your sequential IP addresses do not use 100 - 103, for example 10.123.12.125 through .128, enter them as https://10.123.12.12{5…8}:30000/data{1…4}.
Enter the same string in the Multi Mode (SNMD or MNMD) field in all four systems in the cluster.
If you do not have sequentially numbered IP addresses assigned to the four systems, assign sequentially numbered host names.
For example, minio1.mycompany.com through minio4.mycompany.com.
Enter https://minio{1…4}.mycompany.com:30000/data{1…4} in the Multi Mode (SNMD or MNMD) field.
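As a sanity check on the string you enter, this Python sketch expands the three-dot range syntax the same way MinIO expands it for an MNMD pool. The expand helper is illustrative only (not part of MinIO), and the address range is the example from above:

```python
import itertools
import re

def expand(template):
    """Expand MinIO-style {a...b} ranges into the full list of endpoints."""
    # Split keeps the captured low/high bounds interleaved with literal text.
    parts = re.split(r"\{(\d+)\.\.\.(\d+)\}", template)
    literals = parts[0::3]
    ranges = [range(int(lo), int(hi) + 1)
              for lo, hi in zip(parts[1::3], parts[2::3])]
    results = []
    for combo in itertools.product(*ranges):
        out = literals[0]
        for n, lit in zip(combo, literals[1:]):
            out += str(n) + lit
        results.append(out)
    return results

endpoints = expand("https://10.123.12.10{0...3}:30000/data{1...4}")
print(len(endpoints))   # 4 nodes x 4 drives = 16 endpoints
print(endpoints[0])     # https://10.123.12.100:30000/data1
```

Each of the four nodes contributes four drive endpoints, which is why the cluster verification step later expects 16 drives in total.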
Select the optional Enable Log Search API to enable the LogSearch API, which configures MinIO to use this function and deploys a PostgreSQL database to store the logs.
Specify the storage in gigabytes that the logs are allowed to occupy in Disk Capacity in GB.
Accept the default ixVolume in Postgres Data Storage and Postgres Backup Storage to let the system create the datasets, or select Host Path to select an existing dataset on the system to use for these storage volumes.
Accept the default values in Resources Configuration, or enter new values in the CPU Resource Limit and Memory Limit fields to customize the CPU and memory allocated to the container (pod) the MinIO app uses.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
By default, this application is limited to no more than 4 CPU cores and 8 Gigabytes of available memory.
The application might use considerably fewer system resources.
Click Install to complete the installation.
The Installed applications screen opens showing the MinIO application in the Deploying state.
It changes to Running when the application is ready to use.
After installing and getting the MinIO Enterprise application running in SCALE, log into the MinIO web portal and complete the MinIO setup.
Go to Monitoring > Metrics to verify the servers match the total number of systems (nodes) you configured.
Verify the number of drives matches the number you configured on each system: four systems, each with four drives (4 systems x 4 drives = 16 drives).
Refer to MinIO documentation for more information.
10.5.1.2 - Installing MinIO Enterprise SNMD
Provides instructions on installing and configuring MinIO Enterprise in a Single-Node Multi-Disk (SNMD) configuration.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
TrueNAS Enterprise
SCALE Enterprise licensed systems do not have applications available by default.
This feature can be enabled as part of the Enterprise license after consulting with iXsystems.
Only install qualified applications from the Enterprise applications train with the assistance of iXsystems Support.
Contacting Support
Customers who purchase iXsystems hardware or who want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
The instructions in this article apply to the TrueNAS MinIO Enterprise application installed in a Single-Node Multi-Disk (SNMD) multi-mode configuration.
Accept the defaults in Application Name or enter a name for your MinIO application deployment.
Accept the default in Version, which populates with the current MinIO version.
SCALE displays update information on the Installed application screen when an update becomes available.
Enter credentials to use as the MinIO administration user.
If you have existing MinIO credentials, enter these or create new login credentials for the first time you log into MinIO.
The Root User is the equivalent of the MinIO access key. The Root Password is the login password for that user or the MinIO secret key.
Accept the default values in the User and Group Configuration settings for MinIO Enterprise.
If you configured SCALE with a new administration user for MinIO, enter the UID and GID.
Scroll down to or click Network Configuration on the list of sections at the right of the screen.
Select the certificate you created for MinIO from the Certificates dropdown list.
Enter the TrueNAS server IP address and the API port number 30000 as a URL in MinIO Server URL (API). For example, https://ipaddress:30000.
Enter the TrueNAS server IP address and the web UI browser redirect port number 30001 as a URL in MinIO Browser Redirect URL. For example, https://ipaddress:30001.
MNMD MinIO installations require HTTPS for both MinIO Server URL and MinIO Browser Redirect URL to verify the integrity of each node. Standard or SNMD MinIO installations do not require HTTPS.
Scroll down to or click on Storage Configuration on the list of sections at the right of the screen.
Click Add three times in the Storage Configuration section to add three more sets of storage volume settings.
In the first set of storage volume settings, select Host Path (Path that already exists on the system) and accept the default /data1 in Mount Path.
Enter or browse to the data1 dataset to populate Host Path with the mount path. For example, /mnt/tank/apps/data1.
Scroll down to the next set of storage volume settings and select Host Path (Path that already exists on the system).
Change the Mount Path to /data2, and enter or browse to the location of the data2 dataset to populate the Host Path.
Scroll down to the next set of storage volume settings and select Host Path (Path that already exists on the system).
Change the Mount Path to /data3, and enter or browse to the location of the data3 dataset to populate the Host Path.
Scroll down to the last set of storage volume settings and select Host Path (Path that already exists on the system).
Change the Mount Path to /data4, and enter or browse to the location of the data4 dataset to populate the Host Path.
Select Enable Multi Mode (SNMD or MNMD), then click Add.
Enter /data{1…4} in the Multi Mode (SNMD or MNMD) field.
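For reference, the SNMD multi-mode string expands to four local drive paths on the single node. A trivial Python sketch of the expansion (the /data prefix matches the mount paths configured above):

```python
# Expand the SNMD multi-mode string /data{1...4} into the four local
# mount paths MinIO treats as separate drives on this single node.
paths = [f"/data{n}" for n in range(1, 5)]
print(paths)  # ['/data1', '/data2', '/data3', '/data4']
```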
Select the optional Enable Log Search API to enable the LogSearch API, which configures MinIO to use this function and deploys a PostgreSQL database to store the logs.
Specify the storage in gigabytes that the logs are allowed to occupy in Disk Capacity in GB.
Accept the default ixVolume in Postgres Data Storage and Postgres Backup Storage to let the system create the datasets, or select Host Path to select an existing dataset on the system to use for these storage volumes.
Accept the default values in Resources Configuration, or enter new values in the CPU Resource Limit and Memory Limit fields to customize the CPU and memory allocated to the container (pod) the MinIO app uses.
Tune these limits as needed to prevent the application from overconsuming system resources and introducing performance issues.
By default, this application is limited to no more than 4 CPU cores and 8 Gigabytes of available memory.
The application might use considerably fewer system resources.
Click Install to complete the installation.
The Installed applications screen opens showing the MinIO application in the Deploying state.
It changes to Running when the application is ready to use.
Provides general information, guidelines, installation instructions, and use scenarios for the Enterprise version of the Syncthing app.
Application maintenance is independent from TrueNAS SCALE version release cycles.
This means app version information, features, configuration options, and installation behavior at the time of access might vary from those in documented tutorials.
In TrueNAS 24.04 (Dragonfish), the Apps feature is provided using Kubernetes.
To propose documentation changes to a Kubernetes-based app available in TrueNAS 24.04 (Dragonfish), click Edit Page in the top right corner.
Future versions of TrueNAS, starting with 24.10 (Electric Eel), provide the Apps feature using Docker.
See the TrueNAS Apps Marketplace for more information.
See Updating Content for more guidance on proposing documentation changes.
TrueNAS Enterprise
SCALE Enterprise licensed systems do not have applications available by default.
This feature can be enabled as part of the Enterprise license after consulting with iXsystems.
Only install qualified applications from the Enterprise applications train with the assistance of iXsystems Support.
Contacting Support
Customers who purchase iXsystems hardware or who want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
This article provides information on installing and using the TrueNAS Syncthing app.
SCALE has two versions of the Syncthing application: the community version in the charts train, and a smaller version in the enterprise train that is tested and polished for a safe and supportable experience for enterprise customers.
Community members can install either the enterprise or community version.
Syncthing Overview
Syncthing is a file synchronization application that provides a simple and secure environment for file sharing between different devices and locations.
Use it to synchronize files between different departments, teams, or remote workers.
Syncthing is tested and validated to work in harmony with TrueNAS platforms and underlying technologies such as ZFS, offering a turnkey means of keeping data synchronized across many systems.
It integrates seamlessly with TrueNAS.
Syncthing does not use or need a central server or cloud storage.
All data is encrypted and synchronized directly between devices to ensure files are protected from unauthorized access.
Syncthing is easy to use and configure.
You can install it on a wide range of operating systems, including Windows, macOS, Linux, FreeBSD, iOS, or Android.
The Syncthing web UI provides users with easy management and configuration of the application software.
How does Syncthing work?
Syncthing does not have a central directory or cache to manage.
It segments files into pieces called blocks.
These blocks transfer data from one device to another.
Multiple devices can share the synchronization load in a similar way to the torrent protocol.
With more devices and smaller blocks, devices receive data faster because all devices fetch blocks in parallel.
Syncthing renames files and updates metadata more efficiently because renaming does not cause a re-transmission of that file.
Temporary files store partial data downloaded from devices.
Temporary files are removed when a file transfer completes or after the configured amount of time elapses.
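The block mechanism described above can be sketched in a few lines of Python. This is an illustration only: Syncthing's real implementation uses its own block sizes, hashing, and metadata handling, and the 128 KiB block size here is an assumption for the sketch:

```python
import hashlib

BLOCK_SIZE = 128 * 1024  # assumed block size for this illustration

def block_hashes(data: bytes, block_size: int = BLOCK_SIZE):
    """Split file contents into fixed-size blocks and hash each one."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

# Two versions of the same three-block file; only the middle block changed.
old = b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"C" * BLOCK_SIZE
new = b"A" * BLOCK_SIZE + b"X" * BLOCK_SIZE + b"C" * BLOCK_SIZE

# Comparing block hashes shows which blocks need to transfer.
changed = [i for i, (a, b) in enumerate(zip(block_hashes(old), block_hashes(new)))
           if a != b]
print(changed)  # [1] -- only the changed block is re-sent, not the whole file
```

Because only differing blocks transfer, and multiple devices can serve blocks in parallel, synchronization scales in the torrent-like way the text describes.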
Users migrating data from an existing third-party NAS solution to TrueNAS SCALE 24.04 (Dragonfish) or newer can use the Syncthing Enterprise application to mount the source with a remote SMB share that preserves metadata.
Create a self-signed certificate for the Syncthing enterprise app.
Adding an App Certificate
Go to Credentials > Certificates to add a self-signed certificate authority (CA) and certificate for the application to use.
Click Add on the Certificate Authorities (CA) widget to open the Add Certificate Authority screen.
a. Enter a name for the CA. For example, minio, syncthing, etc.
Accept the defaults for Type and Profile, then click Next.
b. Accept the defaults on Certificate Options unless you want to set an expiration on the certificate.
Enter a new value in Lifetime to impose an expiration time period, then click Next.
c. Enter location and organization values for your installation in the Certificate Subject fields.
Enter the email address where you want to receive system notifications.
d. Enter your system IP address in Subject Alternate Names, then click Next.
When configuring a cluster, enter the system IP addresses for each system in the cluster.
e. Accept the default values on Extra Constraints, then click Next.
f. Review the CA configuration then click Save.
Click Add on the Certificates widget to open the Add Certificate screen.
a. Enter a name for the certificate. For example, minio, syncthing, etc.
Select Internal Certificate as Type and HTTPS RSA Certificate in Profiles, then click Next.
b. Select the newly-created CA in Signing Certificate Authority.
Accept the rest of the defaults unless you want to set an expiration on the certificate.
Enter a new value in Lifetime to impose an expiration time period, then click Next.
c. Enter location and organization values for your installation in the Certificate Subject fields.
Enter the email address where you want to receive system notifications.
d. Enter your system IP address in Subject Alternate Names, then click Next.
When configuring a cluster, enter the system IP addresses for each system in the cluster.
e. Accept the default values on Extra Constraints, then click Next.
f. Review the certificate configuration, then click Save.
Download the certificate and install it.
a. Click the download icon on the Certificates widget to start the download.
When complete, click the browser download icon to open in a File Explorer window.
b. Right click on the certificate.crt file, then click Install Certificate. Click Open on the Open File window.
c. Click Install Certificate, then select Local Machine on the Welcome to the Certificate Import Wizard window. Click Next.
d. Select Place all certificates in the following store, then select Trusted Root Certificate Authorities and click OK.
e. Click Next, then click Finish.
You can allow the app to create storage volumes or use existing datasets created in SCALE.
The TrueNAS Syncthing app requires a main configuration storage volume for application information.
You can also mount existing datasets for storage volume inside the container pod.
If you want to use existing datasets for the main storage volume, create any datasets before beginning the app installation process (for example, syncthing for the configuration storage volume).
If also mounting storage volume inside the container, create a second dataset named data1. If mounting multiple storage volumes, create a dataset for each volume (for example, data2, data3, etc.).
You can have multiple Syncthing app deployments (two or more Charts, two or more Enterprise, or Charts and Enterprise trains, etc.).
Each Syncthing app deployment requires a unique name that can include numbers and dashes or underscores (for example, syncthing2, syncthing-test, syncthing_1, etc.).
Use a consistent file-naming convention to avoid conflict situations where data does not or cannot synchronize because of file name conflicts.
Path and file names in the Syncthing app are case sensitive.
For example, a file named MyData.txt is not the same as mydata.txt file in Syncthing.
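Names that differ only by case are distinct files to Syncthing but collide on case-insensitive filesystems (for example, default Windows volumes). This hypothetical Python check illustrates how such conflicts can be flagged before syncing:

```python
from collections import defaultdict

# Example file names; MyData.txt and mydata.txt are distinct to Syncthing
# but collide on a case-insensitive filesystem.
names = ["MyData.txt", "mydata.txt", "report.pdf"]

by_folded = defaultdict(list)
for name in names:
    by_folded[name.casefold()].append(name)

# Any case-folded name with more than one original spelling is a conflict.
conflicts = {k: v for k, v in by_folded.items() if len(v) > 1}
print(conflicts)  # {'mydata.txt': ['MyData.txt', 'mydata.txt']}
```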
If not already assigned, set a pool for applications to use.
Either use the default user and group IDs or create a new user with Create New Primary Group selected.
Make note of the UID/GID for the new user.
Installing the Syncthing Application
Go to Apps > Discover Apps, and locate the Syncthing enterprise app widget.
Click Install to open the Install Syncthing screen.
Application configuration settings are presented in several sections, each explained below.
To find specific fields, click in the Search Input Fields search field, scroll down to a particular section, or click on the section heading in the navigation area in the upper-right corner.
Accept the default values in Application Name and Version.
Select the timezone where the TrueNAS server is located from the Timezone dropdown list.
Accept the default user and group ID settings.
If selected, Host Network binds to the default host settings programmed for Syncthing.
Accept the default web port 31000.
If changing ports, see Default Ports for a list of assigned port numbers.
Select the certificate created for Syncthing from the Certificates dropdown list.
Configure the storage settings.
To allow Syncthing to create the configuration storage volume, leave Type set to ixVolume (Dataset created automatically by the system), then enter or browse to the location of the data1 dataset to populate the Host Path field under the Mount Path field.
To use an existing dataset created for Syncthing, select Host Path (Path that already exists on the system).
Enter or browse to the dataset created to populate the Host Path field (for example, /mnt/tank/syncthing/config), then enter or browse to the location of the data1 dataset to populate the Host Path field under the Mount Path field.
To add another dataset path inside the container, see Storage Settings below for more information.
Click Install.
The system opens the Installed Applications screen with the Syncthing app in the Deploying state.
After installation completes the status changes to Running.
Secure Syncthing by setting up a username and password.
Understanding Syncthing Settings
The following sections provide detailed explanations of the settings found in each section of the Install Syncthing screen for the Enterprise train app.
Application Name Settings
Accept the default value or enter a name in Application Name field.
In most cases, use the default name, but if adding a second deployment of the application, you must change this name.
Accept the default version number in Version.
When a new version becomes available, the application has an update badge.
The Installed Applications screen shows the option to update applications.
Configuration Setting
Select the timezone where your TrueNAS SCALE system is located.
User and Group Settings
You can accept the default settings in User and Group Configuration, or enter new user and group IDs.
The default value for User ID and Group ID is 568.
Accept the default port numbers in Web Port for Syncthing.
The SCALE Syncthing chart app listens on port 31000.
Before changing the default port and assigning a new port number, refer to the TrueNAS default port list for a list of assigned port numbers.
To change the port numbers, enter a number within the range 9000-65535.
We recommend not selecting Host Network, as this binds the app to the host network.
Select the self-signed certificate created in SCALE for Syncthing from the Certificate dropdown list.
You can edit the certificate after deploying the application.
Storage Settings
You can allow the Syncthing app to create the configuration storage volume or you can create datasets to use for the configuration storage volume and to use for storage within the container pod.
To allow the Syncthing app to create the configuration storage volume, leave Type set to ixVolume (Dataset created automatically…).
To use existing datasets, select Host Path (Path that already exists on the system) in Type to show the Host Path field, then enter or browse to and select an existing dataset created for the configuration storage volume.
If mounting a storage volume inside the container pod, enter or browse to the location of the data1 dataset to populate the Host Path field below the Mount Path populated with data1.
In addition to the data1 dataset, you can mount additional datasets to use as other storage volumes within the pod.
Click Add to the right of Additional Storage to show another set of Mount Path and Host Path fields for each dataset to mount.
Enter or browse to the dataset to populate the Host Path and Mount Path fields.
Mounting an SMB Share
The TrueNAS SCALE Syncthing Enterprise app includes the option to mount an SMB share inside the container pod.
This allows data synchronization between the share and the app.
The SMB share mount does not include ACL protections at this time.
Permissions are currently limited to the permissions of the user that mounted the share.
Alternate data streams (metadata), Finder color tags, previews, resource forks, and macOS metadata are stripped from the share along with filesystem permissions, but this functionality is under active development, with implementation planned for a future TrueNAS SCALE release.
To mount an SMB share inside the Syncthing application when not mounting a dataset in the container pod, select SMB Share (Mounts a persistent volume claim to a system) in Type. If mounting a dataset inside the pod and you also want to mount an SMB share, click Add to the right of Additional Storage to add another set of settings, then select the SMB share option.
Enter the server for the SMB share in Server and the name of the share in Share, then enter the username and password credentials for the SMB share.
Determine the total size of the SMB share to mount and access via TrueNAS SCALE and Syncthing, and enter this value in Size.
You can edit the size after deploying the application if you need to increase the storage volume capacity for the share.
Resource Configuration Settings
Accept the default values in Resources Configuration or enter new CPU and memory values.
By default, this application is limited to no more than 4 CPU cores and 8 Gigabytes of available memory.
The application might use considerably fewer system resources.
To customize the CPU and memory allocated to the container (pod) Syncthing uses, enter new CPU values as a plain integer followed by the suffix m (milli).
The default is 4000m.
Accept the default value of 8Gb allocated memory or enter a new limit in bytes.
Enter a plain integer followed by a measurement suffix, for example, 129M or 123Mi.
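These values follow Kubernetes-style resource quantity suffixes. This illustrative Python helper (parse_quantity is not part of TrueNAS or Kubernetes, just a sketch of the suffix semantics) shows what each value means:

```python
def parse_quantity(q: str) -> float:
    """Convert a Kubernetes-style quantity string to a base unit:
    CPU cores for the 'm' (milli) suffix, bytes otherwise."""
    suffixes = {
        "Mi": 2**20, "Gi": 2**30,  # binary byte suffixes
        "m": 1e-3,                 # milli-cores: 4000m = 4 CPU cores
        "M": 10**6, "G": 10**9,    # decimal byte suffixes
    }
    # Check longer suffixes first so "Mi" is not mistaken for "M".
    for suf in sorted(suffixes, key=len, reverse=True):
        if q.endswith(suf):
            return float(q[:-len(suf)]) * suffixes[suf]
    return float(q)

print(parse_quantity("4000m"))  # 4.0 CPU cores (the default CPU limit)
print(parse_quantity("129M"))   # 129000000.0 bytes
print(parse_quantity("8Gi"))    # 8589934592.0 bytes
```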
Increasing inotify Watchers
Syncthing uses inotify to monitor filesystem events, with one inotify watcher per monitored directory.
Linux defaults to a maximum of 8192 inotify watchers.
Using the Syncthing Enterprise app to sync directories with greater than 8191 subdirectories (possibly lower if other services are also utilizing inotify) produces errors that prevent automatic monitoring of filesystem changes.
Increase inotify values to allow Syncthing to monitor all sync directories.
Add a sysctl variable to ensure changes persist through reboot.
Go to System Settings > Advanced and locate the Sysctl widget.
Enter fs.inotify.max_user_watches in Variable, and enter a Value larger than the number of directories monitored by Syncthing.
Each inotify watcher has a small memory impact of 1080 bytes, so it is best to start with a lower number, such as 204800, and increase it if needed.
Enter a Description for the variable, such as Increase inotify limit.
Select Enabled and click Save.
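The memory trade-off can be estimated from the per-watcher overhead cited above. A quick Python calculation (the watcher counts are example values):

```python
BYTES_PER_WATCHER = 1080  # per-watcher kernel overhead cited above

def watch_memory_mib(watchers: int) -> float:
    """Worst-case memory used if every allowed watcher is in use."""
    return watchers * BYTES_PER_WATCHER / 2**20

for n in (8192, 204800):
    print(f"{n} watchers is at most {watch_memory_mib(n):.1f} MiB")
# 8192 watchers (the Linux default) is at most 8.4 MiB
# 204800 watchers (the suggested start) is at most 210.9 MiB
```

This is why the text suggests starting low and raising the limit only as needed rather than setting an arbitrarily large value.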
Securing the Syncthing Web UI
After installing and starting the Syncthing application, launch the Syncthing web UI.
Go to Actions > Settings and set a user password for the web UI.
The Syncthing web portal allows administrators to monitor and manage the synchronization process, view logs, and adjust settings.
Folders lists configured sync folders with details on sync status, file count, capacity, and more.
To change folder configuration settings, click on the folder.
This Device displays the current system I/O status, including transfer/receive rate, number of listeners, total uptime, sync state, and the device ID and version.
Actions displays a dropdown list of options.
Click Advanced to access GUI, LDAP, folder, device, and other settings.
You can manage directional settings for sync configurations, security, encryption, and UI server settings through the Actions options.
Managing Syncthing Folder
To change or enter a directory path to share a folder, click on the folder, then select Edit.
We recommend each shared folder have a sync folder to allow for more granular traffic and data flow.
Syncthing creates a default sync folder in the main user or HOME directory during installation of the application.
Untrusted Device Password is a beta feature and not recommended for production environments.
This feature is for edge cases where two users want to share data on a given device but cannot risk interception of data.
Only trusted users with the code can open the file(s) with shared data.
Using Syncthing File Versioning
File Versioning applies to changes received from other devices.
For example, Bill turns on versioning and Alice changes a file.
Syncthing archives the old version on Bill’s computer when it syncs the change from Alice.
But if Bill changes a file locally on his computer, Syncthing does not and cannot archive the old version.
For more information on specific file versioning, see Versioning.
Using Syncthing Advanced Administration
Go to Actions > Advanced to access advanced settings.
These settings allow you to set up network isolation, directory services, database, and bandwidth throttling, and to change device-specific settings and global default settings.
Incorrect configuration can damage folder contents and render Syncthing inoperable!
Viewing Syncthing Logs and Debugs
Go to Logs to access current logs and debug files.
The Log tab displays current logs, and the Debugging Facilities tab provides access to debug logging facilities.
Select different parameters to add to the logs and assist with debugging specific functionalities.
You can forward logs to a specific folder or remote device.
Maintaining File Ownership (ACL Preservation)
Syncthing includes the ability to maintain ownership and extended attributes during transfers between nodes (systems).
This ensures ACLs and permissions remain consistent across TrueNAS SCALE systems during one-way and bi-directional Syncthing moves.
You can configure this setting on a per folder basis.
10.6 - Sandboxes (Jail-like Containers)
Provides advanced users information on deploying custom FreeBSD jail-like containers in SCALE.
TrueNAS Sandboxes and Jailmaker are not supported by iXsystems.
This is provided solely for users with advanced command-line, containerization, and networking experience.
There is significant risk that using Jailmaker causes conflicts with the built-in Apps framework within SCALE.
Do not mix the two features unless you are capable of self-supporting and resolving any issues caused by using this solution.
Beginning with 24.04 (Dragonfish), TrueNAS SCALE includes the systemd-nspawn containerization program in the base system.
This allows using tools like the open-source Jailmaker to build and run containers that are very similar to Jails from TrueNAS CORE or LXC containers on Linux.
Using the Jailmaker tool allows deploying these containers without modifying the base TrueNAS system.
These containers persist across upgrades in 24.04 (Dragonfish) and later SCALE major versions.
With a TrueNAS dataset configured for sandboxes and the Jailmaker script set to run at system startup, sandboxes can now be created.
Creating and managing sandboxes is done only in TrueNAS Shell sessions using the jlmkr command.
For full usage documentation, refer to the open-source Jailmaker project.
From a TrueNAS Shell session, go to your sandboxes dataset and enter ./jlmkr.py -h for embedded usage notes.
Report any issues encountered when using Jailmaker to the project Issues Tracker.
11 - Reporting
Provides information on changing settings that control how SCALE displays report graphs, how to interact with graphs, and configuring reporting exporters.
TrueNAS has a built-in reporting engine that provides helpful graphs and information about the system.
TrueNAS SCALE uses Netdata to gather metrics, create visualizations, and provide reporting statistics.
Click Netdata from the Reporting screen to see the built-in Netdata UI.
This UI bases metrics on your local system and browser time, which might differ from the default TrueNAS system time.
Reporting data is saved to permit viewing and monitoring usage trends over time.
This data is preserved across system upgrades and restarts.
TrueCommand offers enhanced features for reporting like creating custom graphs and comparing utilization across multiple systems.
Interacting with Graphs
Click and drag across a range of the graph to expand the information displayed in that selected area.
Click on the icon to zoom in on the graph.
Click on the icon to zoom out on the graph.
Click the to move the graph forward.
Click the to move the graph backward.
Using the Netdata UI
A new password is generated each time you click the Netdata button on the Reporting screen.
Click Generate New Password in the dialog to force regeneration.
The Netdata UI opens with a login prompt.
Enter the newly generated password to regain access.
You can configure TrueNAS to export Netdata information to any time-series database or reporting service, such as Graphite or Grafana, installed on a server or offered as a cloud service.
Creating reporting exporters enables SCALE to send Netdata reporting metrics, formatted as a JSON object, to another reporting entity.
For more information on exporting Netdata records to other servers or services, refer to the Netdata exporting reference guide.
Graphite is a monitoring tool available as an application you can deploy on a server or use their cloud service.
It stores and renders time-series data based on a plaintext database.
Netdata exports reporting metrics to Graphite in the format prefix.hostname.chart.dimension.
For additional information, see the Netdata Graphite exporting guide.
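As an illustrative sketch of what that naming scheme produces (not TrueNAS internals), the following Python builds a prefix.hostname.chart.dimension metric name and one line of Graphite's standard plaintext protocol. The prefix, hostname, and chart names are made-up examples, and the dot-escaping of the hostname is shown for illustration only:

```python
import time

def graphite_metric_name(prefix, hostname, chart, dimension):
    """Build a metric name in the prefix.hostname.chart.dimension form Netdata uses."""
    # Graphite treats "." as a path separator, so dots in the hostname are
    # replaced here; whether the exporter escapes them is a configuration detail.
    return f"{prefix}.{hostname.replace('.', '_')}.{chart}.{dimension}"

def graphite_plaintext_line(name, value, timestamp=None):
    """Format one line of Graphite's plaintext protocol: 'name value timestamp'."""
    ts = int(time.time()) if timestamp is None else timestamp
    return f"{name} {value} {ts}\n"

name = graphite_metric_name("netdata", "truenas.local", "system.cpu", "user")
print(name)  # netdata.truenas_local.system.cpu.user
# A collector would send lines like this to Graphite's plaintext port (2003 by default):
print(graphite_plaintext_line(name, 12.5, 1700000000), end="")
```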
Adding a Reporting Exporter
To configure a reporting exporter in SCALE, you need the:
IP address of the reporting service or server.
If using another TrueNAS system with a reporting application, this is the IP address of the TrueNAS running the application.
Port number the reporting service listens on.
If using another TrueNAS system with a reporting application, this is the port number that TrueNAS system listens on (for example, port 80).
Go to Reporting and click on Exporters to open the Reporting Exporters screen.
Any reporting exporters configured on the system display on the Reporting Exporters screen.
Select Enable to send reporting metrics to the configured exporter instance.
Clearing the checkmark disables the exporter without removing configuration.
Enter the IP address for the data collection server or cloud service.
Enter the port number the collecting server listens on.
Enter the file hierarchy structure, that is, where on the collecting server to send the data.
First enter the top level in Prefix and then the data collection folder in the Namespace field.
For example, entering DF in Prefix and test in Namespace creates two folders in Graphite, with DF as the parent of test.
You can accept the defaults for all other settings, or enter configuration settings to match your use case.
Click Save.
To view the Graphite web UI, enter the IP address and port number of the system hosting the application in a browser.
SCALE can now export the data records as Graphite-formatted JSON objects to the other report collection and processing application, service, or servers.
SCALE also populates the exporter screen with default settings.
To view these settings, click Edit on the row for the exporter.
12 - System Settings
Tutorials for configuring the system management options in the System Settings area of the TrueNAS SCALE web interface.
SCALE system management options are collected in this section of the UI and organized into a few different screens:
Update controls when the system applies a new version.
There are options to download and install an update, have the system check daily and stage updates, or apply a manual update file to the system.
General shows system details and has basic, less intrusive management options, including web interface access, localization, and NTP server connections.
This is also where users can input an Enterprise license or create a software bug ticket.
Advanced contains options that are more central to the system configuration or meant for advanced users.
Specific options include configuring the system console, log, and dataset pool, managing sessions, adding custom system controls, kernel-level settings, scheduled scripting or commands, global two-factor authentication, and determining any isolated GPU devices.
Warning: Advanced settings can be disruptive to system function if misconfigured.
Boot lists each ZFS boot environment stored on the system.
These restore the system to a previous version or specific point in time.
Services displays each system component that runs continuously in the background.
These typically control data sharing or other external access to the system.
Individual services have their own configuration screens and activation toggles, and can be set to run automatically.
Shell allows users to use the TrueNAS Operating System command-line interface (CLI) directly in the web UI.
Includes an experimental TrueNAS SCALE-specific CLI for configuring the system separately from the web interface.
See the CLI Reference Guide for more information.
Alert Settings allows users to configure Alert Services and to adjust the threshold and frequency of various alert types. See Alerts Settings Screens for more information.
Enclosure appears when the system is attached to compatible SCALE hardware.
This is a visual representation of the system with additional details about disks and other physical hardware components.
Contents
Updating SCALE: Provides instructions on updating SCALE releases in the UI.
General Settings: Tutorials for configuring many general TrueNAS SCALE settings.
Managing the System Configuration: Provides information on downloading your TrueNAS SCALE configuration to back up system settings, uploading a new configuration file, and resetting back to default settings.
Managing General Settings: Provides information on configuring GUI options, localizing TrueNAS SCALE to your region and language, and adding NTP servers.
Setting Up System Email: Provides instructions on configuring email using SMTP or GMail OAuth and setting up the email alert service in SCALE.
Advanced Settings: Tutorials for configuring advanced system settings in TrueNAS SCALE.
Managing Cron Jobs: Provides information on adding or modifying cron jobs in TrueNAS SCALE.
Managing the Console Setup Menu: Provides information on the Console setup menu configuration settings including the serial port, port speed, password protection, and the banner users see.
Managing System Logging: Provides information on setting up or changing the syslog server, the level of logging and the information included in the logs, and using TLS as the transport protocol.
FTP: Provides instructions on configuring the FTP service including storage, user, and access permissions.
NFS: Provides information on configuring NFS service in TrueNAS SCALE.
S.M.A.R.T.: Provides information on S.M.A.R.T. service screen settings.
SMB: Provides instructions on configuring the SMB service in TrueNAS SCALE.
SNMP: Provides information on configuring SNMP service in TrueNAS SCALE.
SSH: Provides information on configuring the SSH service in TrueNAS SCALE and using an SFTP connection.
UPS: Provides information on configuring UPS service in TrueNAS SCALE.
Using Shell: Provides information on using the TrueNAS SCALE Shell.
Audit Logs: Provides information on using the System and SMB Share auditing screens and function in TrueNAS.
12.1 - Updating SCALE
Provides instructions on updating SCALE releases in the UI.
TrueNAS has several software branches (linear update paths) known as trains. If SCALE is in a prerelease train, it can have various preview or early-build releases of the software.
The Update Screen only displays the current train.
When looking to upgrade SCALE to a new major version, make sure to upgrade SCALE along the path of major versions until the system is on the desired major version release.
For more information on other available trains and the upgrade path from one version to the next, see Release Schedules.
See the Software Status page for the latest recommendations for software usage.
Do not change to a prerelease or nightly release unless the system is intended to permanently remain on early versions and is not storing any critical data.
If you are using a non-production train, be prepared to experience bugs or other problems.
Testers are encouraged to submit bug reports and debug files.
For information on how to file an issue ticket see Filing an Issue Ticket in SCALE.
The TrueNAS SCALE Update screen provides users with two different methods to update the system: automatic or manual.
We recommend updating SCALE when the system is idle (no clients connected, no disk activity, etc.).
The system restarts after an upgrade.
Update during scheduled maintenance times to avoid disrupting user activities.
All auxiliary parameters are subject to change between major versions of TrueNAS due to security and development issues.
We recommend removing all auxiliary parameters from TrueNAS configurations before upgrading.
Click Confirm, then Continue to start the automatic installation process.
TrueNAS SCALE downloads the configuration file and the update file, then starts the install.
After updating, clear the browser cache (CTRL+F5) before logging in to SCALE. This ensures stale data doesn’t interfere with loading the SCALE UI.
Performing a Manual Update
If the system detects an available update, to do a manual update click Download Updates and wait for the file to download to your system.
Click Install Manual Update File.
The Save configuration settings from this machine before updating? window opens.
Click Export Password Secret Seed then click Save Configuration.
The Manual Update screen opens.
Click Choose File to locate the update file on the system.
Select a temporary location to store the update file. Select Memory Device or select one of the mount locations on the dropdown list to keep a copy in the server.
Click Apply Update to start the update process. A status window opens and displays the installation progress. When complete, a Restart window opens.
Click Confirm, then Continue to restart the system.
Update Progress
When a system update starts, an icon appears in the toolbar at the top of the UI.
Click the icon to see the current status of the update and which TrueNAS administrative account initiated the update.
Provides instructions on how to update SCALE releases on Enterprise (HA) systems.
TrueNAS Enterprise
This procedure only applies to SCALE Enterprise (HA) systems.
If attempting to migrate from CORE to SCALE, see Migrating from TrueNAS CORE.
Updating Enterprise (HA) Systems
If the system does not have an administrative user account, create the admin user as part of this procedure.
Take a screenshot of the license information found on the Support widget on the System Settings > General screen. You use this to verify the license after the update.
To update your Enterprise (HA) system to the latest SCALE release, log into the SCALE UI using the virtual IP (VIP) address and then:
Check for updates. Go to the main Dashboard and click Check for Updates on the System Information widget for the active controller.
This opens the System Settings > Update screen. If an update is available it displays on this screen.
Save the password secret seed and configuration settings to a secure location. Click Install Manual Updates. The Save configuration settings window opens.
Select Export Password Secret Seed then click Save Configuration. The system downloads the file. The file contains sensitive system data and should be maintained in a secure location.
Select the update file and start the process.
Click Choose File and select the update file downloaded to your system, then click Apply Update to start the update process.
After the system finishes updating, it reboots.
Sign into the SCALE UI. If using root to sign in, create the admin account now.
If using admin, continue to the next step.
Verify the system license after the update. Go to System Settings > General.
Verify the license information in the screenshot of the Support widget you took before the update matches the information on the Support widget after updating the system.
Verify the admin user settings, or if not created, create the admin user account now.
If you want the admin account to have the ability to execute sudo commands in an SSH session, select the option for the sudo access you want to allow.
Also, verify Shell is set to bash if you want the admin user to have the ability to execute commands in Shell.
To set a location where the admin user can save files, browse to and select the dataset in Home Directory. If set to the default /nonexistent, files are not saved for this user.
Test the admin user access to the UI.
a. Log out of the UI.
b. Enter the admin user credentials in the sign-in splash screen.
After validating access to the SCALE UI using the admin credentials, disable the root user password.
Go to Credentials > Local User and edit the root user. Select Disable Password and click Save.
Tutorials for configuring many general TrueNAS SCALE settings.
The TrueNAS SCALE General Settings section provides settings options for support, graphic user interface, localization, NTP servers, and system configuration.
Contents
Managing the System Configuration: Provides information on downloading your TrueNAS SCALE configuration to back up system settings, uploading a new configuration file, and resetting back to default settings.
Managing General Settings: Provides information on configuring GUI options, localizing TrueNAS SCALE to your region and language, and adding NTP servers.
Setting Up System Email: Provides instructions on configuring email using SMTP or GMail OAuth and setting up the email alert service in SCALE.
12.3.1 - Managing the System Configuration
Provides information on downloading your TrueNAS SCALE configuration to back up system settings, uploading a new configuration file, and resetting back to default settings.
TrueNAS SCALE allows users to manage the system configuration by uploading or downloading configurations, or by resetting the system to the default configuration.
System Configuration Options
The Manage Configuration option on the System Settings > General screen provides three options:
Download File that downloads your system configuration settings to a file on your system.
Upload File that allows you to upload a replacement configuration file.
Reset to Defaults that resets system configuration settings back to factory settings.
Downloading the File
The Download File option downloads your TrueNAS SCALE current configuration to the local machine.
When you download the configuration file, you have the option to Export Password Secret Seed, which includes encrypted passwords in the configuration file.
This allows you to restore the configuration file to a different operating system device where the decryption seed is not already present.
Users must physically secure configuration backups containing the seed to prevent unauthorized access or password decryption.
We recommend backing up the system configuration regularly.
Doing so preserves settings when migrating, restoring, or fixing the system if it runs into any issues.
Save the configuration file each time the system configuration changes.
Go to System Settings > General and click on Manage Configuration.
Select Download File.
The Save Configuration dialog displays.
Click Export Password Secret Seed and then click Save. The system downloads the system configuration. Save this file in a safe location on your network where files are regularly backed up.
Anytime you change your system configuration, download the system configuration file again and keep it safe.
Uploading the File
The Upload File option gives users the ability to replace the current system configuration with any previously saved TrueNAS SCALE configuration file.
All passwords are reset if the uploaded configuration file was saved without selecting Save Password Secret Seed.
Resetting to Defaults
TrueNAS Enterprise
Enterprise High Availability (HA) systems should never reset their system configuration to defaults.
Contact iXsystems Support if a system configuration reset is required.
iXsystems Support
Customers who purchase iXsystems hardware or who want additional support must have a support contract to use iXsystems Support Services. The TrueNAS Community forums provide free support for users without an iXsystems Support contract.
Save the current system configuration with the Download File option before resetting the configuration to default settings!
If you do not save the system configuration before resetting it, you could lose data that was not backed up, and you cannot revert to the previous configuration.
The Reset to Defaults option resets the system configuration to factory settings.
After the configuration resets, the system restarts and users must set a new login password.
Backing Up the Config File
SCALE does not automatically back up the system configuration file to the system dataset.
Users who want to keep the configuration file backed up can download it manually after each configuration change and save it to a location that is automatically backed up, or script the download so it runs on a schedule.
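One way to script a configuration download is through the SCALE REST API. This is a hedged sketch: the /api/v2.0/config/save endpoint path, the secretseed payload key, and bearer-token authentication are assumptions to verify against your system's API documentation (https://&lt;host&gt;/api/docs), and the hostname and API key are placeholders. The example only builds the request so its shape is clear; actually sending it is left commented out:

```python
import json
import urllib.request

def build_config_save_request(host, api_key, secretseed=True):
    """Build (but do not send) a request for the assumed config/save endpoint.

    Endpoint path and payload keys are assumptions based on the v2.0 REST API;
    confirm them in your SCALE version's API docs before relying on this.
    """
    url = f"https://{host}/api/v2.0/config/save"
    payload = json.dumps({"secretseed": secretseed}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_config_save_request("truenas.local", "EXAMPLE-API-KEY")
# urllib.request.urlopen(req) would return the configuration archive; write the
# response body to a location that a regular backup job already covers.
```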
12.3.2 - Managing General Settings
Provides information on configuring GUI options, localizing TrueNAS SCALE to your region and language, and adding NTP servers.
The TrueNAS SCALE General Settings section provides settings options for support, graphic user interface, localization, NTP servers, and system configuration.
The Support widget shows information about the TrueNAS version and system hardware.
Links to the open source documentation, community forums, and official Enterprise licensing from iXsystems are also provided.
Add License opens the sidebar with a field to paste a TrueNAS Enterprise license (details).
File Ticket opens a window to provide feedback directly to the development team.
Feedback window
The Send Feedback icon opens a feedback window.
Alternately, go to System Settings > General, find the Support widget, and click File Ticket to see the feedback window.
The feedback window allows users to send page ratings, comments, and report issues or suggest improvements directly to the TrueNAS development team.
Submitting a bug report requires a free Atlassian account.
Click between the tabs at the top of the window to see options for your specific feedback.
Rate this page
Use the Rate this page tab to quickly review and provide comments on the currently active TrueNAS user interface screen.
You can include a screenshot of the current page and/or upload additional images with your comments.
Report a bug
Use the Report a bug tab to notify the development team when a TrueNAS screen or feature is not working as intended.
For example, report a bug when a middleware error and traceback appears while saving a configuration change.
Enter a descriptive summary in the Subject.
TrueNAS can show a list of existing Jira tickets with similar summaries.
When there is an existing ticket about the issue, consider clicking on that ticket and leaving a comment instead of creating a new one.
Duplicate tickets are closed in favor of consolidating feedback into one report.
Enter details about the issue in the Message.
Keep the details concise and focused on how to reproduce the issue, what the expected result of the action is, and what the actual result of the action was.
This helps ensure a speedy ticket resolution.
Include system debug and screenshot files to also speed up the issue resolution.
Bug Reports from Enterprise Licensed Systems
TrueNAS Enterprise
When an Enterprise license is applied to the system, the Report a bug tab has additional environment and contact information fields for sending bug reports directly to iXsystems.
Filling out the entire form with precise details and accurate contact information ensures a prompt response from the iXsystems Customer Support team.
Configuring GUI Options
The GUI widget allows users to configure the TrueNAS SCALE web interface address. Click Settings to open the GUI Settings configuration screen.
Changing the GUI SSL Certificate
The system uses a self-signed certificate to enable encrypted web interface connections. To change the default certificate, select a different certificate that was created or imported in the Certificates section from the GUI SSL Certificate dropdown list.
Setting the Web Interface IP Address
To set the web UI IP address, select a recent IP address from the Web Interface IPv4 Address dropdown list. This limits which addresses can be used to access the administrative GUI. The built-in HTTP server binds to the wildcard address 0.0.0.0 (any address) and issues an alert if the specified address becomes unavailable. If using IPv6, select a recent IP address from the Web Interface IPv6 Address dropdown list.
Configuring HTTPS Options
To allow configuring a non-standard port to access the GUI over HTTPS, enter a port number in the Web Interface HTTPS Port field.
Select the cryptographic protocols for securing client/server connections from the HTTPS Protocols dropdown list. Select the Transport Layer Security (TLS) versions TrueNAS SCALE can use for connection security.
To redirect HTTP connections to HTTPS, select Web Interface HTTP -> HTTPS Redirect. A GUI SSL Certificate is required for HTTPS.
Activating this also sets the HTTP Strict Transport Security (HSTS) maximum age to 31536000 seconds (one year).
This means that after a browser connects to the web interface for the first time, the browser continues to use HTTPS and renews this setting every year.
A warning displays when setting this function.
Special consideration should be given when TrueNAS is installed in a VM, as VMs are not configured to use HTTPS. Enabling HTTPS redirect can interfere with the accessibility of some apps. To determine if HTTPS redirect is active, go to System Settings > General > GUI > Settings and locate the Web Interface HTTP -> HTTPS Redirect checkbox. To disable HTTPS redirects, clear this option and click Save, then clear the browser cache before attempting to connect to the app again.
Select Crash Reporting to send failed HTTP request data to iXsystems. This data can include client and server IP addresses, failed method call tracebacks, and middleware log file contents.
Sending Usage Statistics to iXsystems
To send anonymous usage statistics to iXsystems, select the Usage Collection option.
Showing Console Messages
To display console messages in real time at the bottom of the browser, select the Show Console Messages option.
Localizing TrueNAS SCALE
To change the WebUI on-screen language and set the keyboard to work with the selected language, click Settings on the System Settings > General > Localization widget. The Localization Settings configuration screen opens.
Select the language from the Language dropdown list, and then the keyboard layout in Console Keyboard Map.
Enter the time zone in Timezone and then select the local date and time formats to use.
Click Save.
Adding NTP Servers
The NTP Servers widget allows users to configure Network Time Protocol (NTP) servers.
These sync the local system time with an accurate external reference.
By default, new installations use several existing NTP servers. TrueNAS SCALE supports adding custom NTP servers.
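Before adding a custom NTP server, you can verify it responds by querying it by hand. This Python sketch speaks minimal SNTP; the packet layout comes from the NTP specification, not from anything SCALE-specific, and the pool hostname in the comment is a placeholder:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def build_sntp_request():
    """48-byte SNTP request: LI=0, version=3, mode=3 (client) packed into the first byte."""
    return b"\x1b" + 47 * b"\0"

def query_ntp(server, timeout=5.0):
    """Ask an NTP server for the current time; returns a Unix timestamp."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(build_sntp_request(), (server, 123))
        data, _ = sock.recvfrom(48)
    # The transmit timestamp's 32-bit seconds field sits at byte offset 40.
    seconds = struct.unpack("!I", data[40:44])[0]
    return seconds - NTP_EPOCH_OFFSET

# Example (requires network access to an NTP server):
# remote = query_ntp("0.pool.ntp.org")
# print(f"offset from local clock: {remote - time.time():+.1f}s")
```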
Setting Up System Email
The Email widget displays information about current system mail settings.
When configured, an automatic script sends a nightly email to the administrator account containing important information such as the health of the disks.
To configure the system email send method, click Settings to open the Email Options screen.
Select either SMTP or GMail OAuth to display the relevant configuration settings.
12.3.3 - Adding a License and Proactive Support
Provides instructions for SCALE Enterprise users to add their system license and set up proactive support.
Adding a TrueNAS Enterprise License
For users with a valid TrueNAS license, click Add License.
Copy your license into the box and click Save.
You are prompted to reload the page for the license to take effect; click RELOAD NOW.
Log back into the WebUI where the End User License Agreement (EULA) displays.
Read it thoroughly and completely.
After you finish, click I AGREE.
The system information updates to reflect the licensing specifics for the system.
Silver and Gold level Support customers can also enable Proactive Support on their hardware to automatically notify iXsystems if an issue occurs.
To find more details about the different Warranty and Service Level Agreement (SLA) options available, see iXsystems Support.
When the system is ready to be in production, update the status by selecting This is a production system and then click Proceed.
This sends an email to iXsystems declaring that the system is in production.
While not required, you can include an initial debug file with the email to assist support in the future.
Setting Up Proactive Support
Silver/Gold Coverage Customers can enable iXsystems Proactive Support.
This feature automatically emails iXsystems when certain conditions occur in a TrueNAS system.
To configure proactive support, click Get Support on the Support widget located on the System Settings > General screen.
Select Proactive Support from the dropdown list.
Complete all available fields and select Enable iXsystems Proactive Support, then click Save.
12.3.4 - Setting Up System Email
Provides instructions on configuring email using SMTP or GMail OAuth and setting up the email alert service in SCALE.
An automatic script sends a nightly email to the administrator account containing important information such as the health of the disks.
Configure the system to send these emails to the administrator remote email account for fast awareness and resolution of any critical issues.
Configure the email address for the admin user as part of your initial system setup or using the procedure below.
You can also configure email addresses for additional user accounts as needed.
Configuring the Admin User Email Address
Before configuring anything else, set the local administrator email address.
Click here for instructions
Go to Credentials > Local Users, click on the admin user row to expand it. Select Edit to display the Edit User configuration screen.
In the Email field, enter a remote email address that the system administrator regularly monitors (like admin@example.com) and click Save.
Configuring User Emails
Add a new user as an administrative or non-administrative account and set up email for that user.
Follow the directions in Configuring the Admin User Email Address above for an existing user or see Managing Users for a new user.
Setting Up System Email
After setting up the admin email address, you need to set up the send method for email service.
There are two ways to access email configuration options.
Go to the System Settings > General screen and locate the Email widget to view the current configuration, or click the Alerts icon in the top right of the UI, then click the gear icon and select Email to open the General settings screen.
Click Settings on the Email Widget to open the Email Options configuration screen.
The configuration options change based on the selected method.
After configuring the send method, click Send Test Mail to verify the configured email settings are working.
If the test email fails, verify that the Email field is correctly configured for the admin user.
Return to Credentials > Users to edit the admin user.
Save stores the email configuration and closes the Email Options screen.
Configuring Email Using SMTP
To set up SMTP service for the system email send method, you need the outgoing mail server and port number for the email address.
Enter the email address you want to use in From Email and the name in From Name.
This is the email that sends the alerts and the name that appears before the address.
Enter the host name or IP address of the SMTP server to use in Outgoing Mail Server.
Enter the SMTP port number in Mail Server Port.
Typically, this is 25 (plain SMTP), 465 (secure SMTP), or 587 (submission).
Select the level of security from the Security dropdown list.
Options are Plain (No Encryption), SSL (Implicit TLS), or TLS (STARTTLS).
Select SMTP Authentication to have TrueNAS authenticate with the SMTP server using credentials.
Enter the SMTP credentials in the new fields that appear.
Typically, Username is the full email address and Password is the password for that account.
Click Send Test Email to verify you receive an email.
Click Save.
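To sanity-check the same server, port, and security values outside the UI, a short standard-library sketch can mirror the Email Options choices. The hostnames, addresses, and password are placeholders, and the mapping of ports to implicit TLS versus STARTTLS follows common conventions rather than SCALE's internal implementation:

```python
import smtplib
from email.message import EmailMessage

def build_alert_email(from_name, from_addr, to_addr, body):
    """Compose a plain-text message like the nightly system email."""
    msg = EmailMessage()
    msg["From"] = f"{from_name} <{from_addr}>"
    msg["To"] = to_addr
    msg["Subject"] = "TrueNAS system alert (test)"
    msg.set_content(body)
    return msg

def send_via_smtp(msg, host, port, user, password):
    """Deliver using the security modes the Email Options screen offers."""
    if port == 465:                       # SSL (Implicit TLS)
        with smtplib.SMTP_SSL(host, port) as s:
            s.login(user, password)
            s.send_message(msg)
    else:                                 # Plain on 25, or TLS (STARTTLS) on 587
        with smtplib.SMTP(host, port) as s:
            if port == 587:
                s.starttls()
            s.login(user, password)
            s.send_message(msg)

msg = build_alert_email("TrueNAS", "nas@example.com",
                        "admin@example.com", "Test of SMTP settings.")
# send_via_smtp(msg, "smtp.example.com", 587, "nas@example.com", "app-password")
```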
Configuring Email Using GMail OAuth
To set up the system email using Gmail OAuth, you need to log in to your Gmail account through the TrueNAS SCALE web UI.
Select the account to use for authentication or select Use another account.
If prompted, enter the Gmail address and click Next, then enter the password for that account.
When the TrueNAS wants to access your Google Account window displays, scroll down and click Allow to complete the setup, or click Cancel to exit and close the window.
After setting up Gmail OAuth authentication, the Email Options screen displays Gmail credentials have been applied and the button changes to Log In To Gmail Again.
Click Send Test Email to verify you receive an email.
Click Save.
Setting Up the Email Alert Service
If the system email send method is configured, the admin email receives a system health email each night.
You can also add and configure the Email Alert Service to send timely warnings when a system alert hits the warning level specified in Alert Settings.
From the Alerts notifications panel, select the settings icon and then Alert Settings, or go to System Settings > Alert Settings.
Locate Email under Alert Services, click the options icon, and then click Edit to open the Edit Alert Service screen.
Add the system email address in the Email Address field.
Use the Level dropdown to adjust the email warning threshold or accept the default Warning.
Use Send Test Alert to generate a test alert and confirm the email address and alert service works.
12.4 - Advanced Settings
Tutorials for configuring advanced system settings in TrueNAS SCALE.
Advanced Settings provides configuration options for the console, syslog, kernel, sysctl, replication, cron jobs, init/shutdown scripts, system dataset pool, isolated GPU device(s), self-encrypting drives, system access sessions, allowed IP addresses, audit logging, and global two-factor authentication.
Advanced settings have reasonable defaults in place. A warning message displays for some settings advising of the dangers of making changes.
Changing advanced settings can be dangerous when done incorrectly. Use caution before saving changes.
This article provides information on sysctl, system dataset pool, setting the maximum number of simultaneous replication tasks the system can perform, and managing sessions.
Managing Allowed IP Addresses
Use the Allowed IP Addresses configuration screen on the System Settings > Advanced screen to restrict access to the TrueNAS SCALE web UI and API.
Entering an IP address limits access to the system to only the address(es) entered here. To allow unrestricted access to all IP addresses, leave this list empty.
Managing Sysctl Variables
Use Add on the Sysctl widget to add a tunable that configures a kernel module parameter at runtime.
The Add Sysctl or Edit Sysctl configuration screens display the settings.
Enter the sysctl variable name in Variable. Sysctl tunables configure kernel module parameters while the system runs and generally take effect immediately.
Enter a description and then select Enabled. To disable but not delete the variable, clear the Enabled checkbox.
Click Save.
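For context, the Sysctl widget is the persistent equivalent of changing a kernel parameter from a shell. The sketch below is illustrative only and the variable name is just an example; the widget, unlike an ad hoc shell change, survives reboots:

```shell
# Read the current value of a kernel parameter (example variable only):
cat /proc/sys/vm/swappiness

# Setting a value at runtime requires root. The Sysctl widget applies the
# equivalent change persistently; this command alone does not persist:
# sysctl vm.swappiness=10
```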
Managing the System Dataset Pool
The Storage widget displays the pool configured as the system dataset pool and allows users to select the storage pool they want to hold the system dataset.
The system dataset stores core files for debugging and keys for encrypted pools. It also stores Samba4 metadata, such as the user and group cache and share-level permissions.
Configure opens the Storage Settings configuration screen.
Storage Settings Configuration Screen
If the system has one pool, TrueNAS configures that pool as the system dataset pool. If your system has more than one pool, you can set the system dataset pool using the Select Pool dropdown. Users can move the system dataset to an unencrypted pool, or an encrypted pool without passphrases.
Users can move the system dataset to a key-encrypted pool, but cannot change the pool encryption type afterward. If the encrypted pool already has a passphrase set, you cannot move the system dataset to that pool.
Swap Size lets users enter an amount (in GiB) of hard disk space to use as a substitute for RAM when the system fully utilizes the actual RAM.
By default, the system creates all data disks with the specified swap amount. Changing the value does not affect the amount of swap on existing disks, only disks added after the change. Swap size does not affect log or cache devices.
Setting the Number of Replication Tasks
The Replication widget displays the maximum number of replication tasks the system can execute simultaneously and allows users to adjust that limit.
Click Configure to open the Replication configuration screen.
Enter a number for the maximum number of simultaneous replication tasks you want to allow the system to process and click Save.
Managing Access (Websocket Sessions)
The Access widget displays a list of all active sessions, including the user who initiated the session and what time it started.
It also displays the Token Lifetime setting for your current session.
It allows administrators to manage other active sessions and to configure the token lifetime for their account.
The Terminate Other Sessions button ends all sessions except for the one you are currently using.
You can also end individual sessions by clicking the logout button next to that session.
You must check a confirmation box before the system allows you to end sessions.
The logout icon is inactive for the currently logged-in administrator session, so it cannot be used to terminate that session, but it is active for all other current sessions.
Token Lifetime displays the configured token duration for the current session (default five minutes).
TrueNAS SCALE logs out user sessions that are inactive for longer than the configured token lifetime.
New activity resets the token counter.
If the configured token lifetime is exceeded, TrueNAS SCALE displays a Logout dialog showing the exceeded token lifetime value and the time the session is scheduled to terminate.
Click Extend Session to reset the token counter.
If the button is not clicked, TrueNAS SCALE terminates the session automatically and returns to the login screen.
Click Configure to open the Token Settings screen and configure Token Lifetime for the current account.
Select a value that fits user needs and security requirements.
Enter the value in seconds.
The default lifetime setting is 300 seconds, or five minutes.
The minimum value allowed is 30 seconds.
The maximum is 2147482 seconds, or 24 days, 20 hours, 31 minutes, and 22 seconds.
Click Save.
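The day/hour/minute breakdown quoted for the maximum value can be verified with a quick arithmetic check (nothing here is TrueNAS-specific):

```shell
# Convert the maximum token lifetime (2147482 seconds) into
# days, hours, minutes, and seconds:
secs=2147482
days=$((secs / 86400)); rem=$((secs % 86400))
hours=$((rem / 3600)); rem=$((rem % 3600))
mins=$((rem / 60))
echo "${days}d ${hours}h ${mins}m $((rem % 60))s"   # 24d 20h 31m 22s
```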
Contents
Managing Cron Jobs: Provides information on adding or modifying cron jobs in TrueNAS SCALE.
Managing the Console Setup Menu: Provides information on the Console setup menu configuration settings including the serial port, port speed, password protection, and the banner users see.
Managing System Logging: Provides information on setting up or changing the syslog server, the level of logging and the information included in the logs, and using TLS as the transport protocol.
Developer Mode (Unsupported): Provides information on the unsupported SCALE developer mode and how to enable it.
12.4.1 - Managing Cron Jobs
Provides information on adding or modifying cron jobs in TrueNAS SCALE.
Cron jobs allow users to configure jobs that run specific commands or scripts on a regular schedule using cron(8). Cron jobs help users run repetitive tasks.
Advanced settings have reasonable defaults in place. A warning message displays for some settings advising of the dangers of making changes.
Changing advanced settings can be dangerous when done incorrectly. Use caution before saving changes.
The Cron Jobs widget on the System > Advanced screen displays No Cron Jobs configured until you add a cron job, and then it displays information on cron job(s) configured on the system.
Click Add to open the Add Cron Job configuration screen and create a new cron job. If you want to modify an existing cron job, click anywhere on the item to open the Edit Cron Jobs configuration screen populated with the settings for that cron job.
The Add Cron Job and Edit Cron Job configuration screens display the same settings.
Enter a description for the cron job.
Next, enter the full path to the command or script to run in Command. For example, for a command string to create a list of users on the system and write that list to a file, enter cat /etc/passwd > users_$(date +%F).txt.
Select a user account to run the command from the Run As User dropdown list. The user must have permissions allowing them to run the command or script.
Select a schedule preset or choose Custom to open the advanced scheduler.
An in-progress cron task postpones any later scheduled instances of the task until the one already running completes.
Cron Job Schedule Format
Cron job schedules use five fields that represent minutes, hours, days of the month, months, and days of the week, in that order.
For example, a schedule of 1 1 1 * sat,sun would run at 01:01 AM, on day 1 of the month, and only on Saturday and Sunday.
Separate multiple values for a segment with commas, not spaces.
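A schedule entry always contains exactly five whitespace-separated fields. This small shell sketch splits the example schedule from above to show the field positions:

```shell
# The example schedule: minute, hour, day of month, month, day of week.
schedule="1 1 1 * sat,sun"

set -f            # disable globbing so the "*" field stays literal
set -- $schedule  # split the schedule on whitespace into $1..$5
echo "fields: $#"                                         # fields: 5
echo "minute=$1 hour=$2 day-of-month=$3 month=$4 day-of-week=$5"
set +f
```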
If you want to hide standard output (stdout) from the command, select Hide Standard Output. If left cleared, TrueNAS emails any standard output to the user account that ran the command.
To hide error output (stderr) from the command, select Hide Standard Error. If left cleared, TrueNAS emails any error output to the user account that ran the command.
Select Enabled to enable this cron job. Leave this checkbox cleared to disable the cron job without deleting it.
Click Save.
12.4.2 - Managing the Console Setup Menu
Provides information on the Console setup menu configuration settings including the serial port, port speed, password protection, and the banner users see.
Advanced settings have reasonable defaults in place. A warning message displays for some settings advising of the dangers of making changes.
Changing advanced settings can be dangerous when done incorrectly. Use caution before saving changes.
The Console widget on the System Setting > Advanced screen displays current console settings for TrueNAS.
Click Configure to open the Console configuration screen. The Console configuration settings determine how the Console setup menu displays, the serial port it uses and the speed of the port, and the banner users see when it is accessed.
To display the console without being prompted to enter a password, select Show Text Console without Password Prompt. Leave it cleared to require a login prompt before the system shows the console menu.
Select Enable Serial Console to enable the serial console but do not select this if the serial port is disabled.
Enter the serial console port address in Serial Port and set the speed (in bits per second) from the Serial Speed dropdown list. Options are 9600, 19200, 38400, 57600 or 115200.
Finally, enter the message you want to display when a user logs in with SSH in MOTD Banner.
Click Save.
12.4.3 - Managing System Logging
Provides information on setting up or changing the syslog server, the level of logging and the information included in the logs, and using TLS as the transport protocol.
Advanced settings have reasonable defaults in place. A warning message displays for some settings advising of the dangers of making changes.
Changing advanced settings can be dangerous when done incorrectly. Use caution before saving changes.
By default, TrueNAS writes system logs to the system boot device.
The Syslog widget on the System > Advanced screen allows users to determine how and when the system sends log messages to a connected syslog server.
The Syslog widget displays the existing system logging settings.
Before configuring your syslog server to use TLS as the Syslog Transport method, first make sure you add a certificate and certificate authority (CA) to the TrueNAS system. Go to Credentials > Certificates and use the Certificate Authority (CA) and Certificates widgets to verify you have the required certificates or to add them.
Click Configure to open the Syslog configuration screen.
The Syslog configuration screen settings specify the logging level the system uses to record system events, the syslog server DNS host name or IP, the transport protocol it uses, and if using TLS, the certificate and certificate authority (CA) for that server, and finally if it uses the system dataset to store the logs.
Enter the remote syslog server DNS host name or IP address in Syslog Server. To use non-standard port numbers like mysyslogserver:1928, add a colon and the port number to the host name. Log entries are written to local logs and sent to the remote syslog server.
Enter the transport protocol for the remote system log server connection in Syslog Transport. Selecting Transport Layer Security (TLS) displays the Syslog TLS Certificate and Syslog TLS Certificate Authority fields.
Next, select the TLS certificate for the remote syslog server from the Syslog TLS Certificate dropdown list, and select the certificate authority (CA) for that certificate from the Syslog TLS Certificate Authority dropdown list.
Select Use FQDN for Logging to include the fully-qualified domain name (FQDN) in logs to precisely identify systems with similar host names.
Select the minimum log priority level to send to the remote syslog server from the Syslog Level dropdown list.
The system only sends logs at or above this level.
Click Save.
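The host-and-port form mentioned for Syslog Server splits on the colon. A sketch of that split, using the illustrative host name from above (not a real server):

```shell
# Split a "host:port" Syslog Server value into its parts:
server="mysyslogserver:1928"
host="${server%%:*}"    # everything before the first colon
port="${server##*:}"    # everything after the last colon
echo "host=$host port=$port"    # host=mysyslogserver port=1928
```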
12.4.4 - Managing Init/Shutdown Scripts
Provides information on adding or modifying init/shutdown scripts in TrueNAS SCALE.
The Init/Shutdown Scripts widget on the System > Advanced screen allows you to add scripts to run before or after initialization (start-up), or at shutdown. For example, you can create a script to back up your system or run a systemd command before the system shuts down.
Init/shutdown scripts are capable of making OS-level changes and can be dangerous when done incorrectly. Use caution before creating script or command tasks.
The Init/Shutdown Scripts widget displays No Init/Shutdown Scripts configured until you add either a command or script, and then the widget lists the scripts configured on the system.
Enter a description and then select Command or Script from the Type dropdown list. Selecting Script displays additional options.
Enter the command string in Command, or if using a script, enter or browse to the script path in Script. The script runs using dash(1).
Select the option from the When dropdown list for the time this command or script runs.
Enter the number of seconds after which the running command or script automatically stops in Timeout.
Select Enable to enable the script. Leave it cleared to disable the script without deleting it.
Click Save.
Editing an Init/Shutdown Script
Click a script listed on the Init/Shutdown Scripts widget to open the Edit Init/Shutdown Script configuration screen populated with the settings for that script.
You can change from a command to a script, and modify the script or command as needed.
To disable but not delete the command or script, clear the Enabled checkbox.
Click Save.
12.4.5 - Managing SEDs
Provides information on adding or modifying self-encrypting drive (SED) user and global passwords in TrueNAS SCALE.
Advanced settings have reasonable defaults in place. A warning message displays for some settings advising of the dangers of making changes.
Changing advanced settings can be dangerous when done incorrectly. Use caution before saving changes.
The Self-Encrypting Drive(s) widget on the System > Advanced screen allows you to set the user and global SED passwords in SCALE.
Managing Self-Encrypting Drives
The Self-Encrypting Drive (SED) widget displays the ATA security user and password configured on the system.
Click Configure to open the Self-Encrypting Drive configuration screen.
The Self-Encrypting Drive configuration screen allows users to set the ATA security user and create a SED global password.
Select the user passed to camcontrol security -u to unlock SEDs from the ATA Security User dropdown list. Options are USER or MASTER.
Enter the global password to unlock SEDs in SED Password and in Confirm SED Password.
Click Save.
12.4.6 - Isolating GPU for VMs
Provides information on isolating Graphics Processing Units (GPU) installed in your system.
Systems with more than one graphics processing unit (GPU) installed can isolate additional GPU device(s) from the host operating system (OS) and allocate them for use by a virtual machine (VM).
Isolated GPU devices are unavailable to the OS and for allocation to applications.
Advanced settings have reasonable defaults in place. A warning message displays for some settings advising of the dangers of making changes.
Changing advanced settings can be dangerous when done incorrectly. Use caution before saving changes.
To isolate a GPU, you must have at least two in your system; one available to the host system for system functions and the other available to isolate for use by a VM.
One isolated GPU device can be used by a single VM.
An isolated GPU cannot be allocated to applications.
To allocate an isolated GPU device, select it while creating or editing VM configuration.
When allocated to a VM, the isolated GPU connects to the VM as if it were physically installed in that VM and becomes unavailable for any other allocations.
Click Configure on the Isolated GPU Device(s) widget to open the Isolate GPU PCI Ids screen, where you can select a GPU device to isolate.
12.4.7 - Managing Global 2FA (Two-Factor Authentication)
Provides information on SCALE two-factor authentication, setting it up, and logging in with it enabled.
Global Two-factor authentication (2FA) is great for increasing security.
TrueNAS offers global 2FA to ensure that entities cannot use a compromised administrator or root password to access the administrator interface.
Advanced settings have reasonable defaults in place. A warning message displays for some settings advising of the dangers of making changes.
Changing advanced settings can be dangerous when done incorrectly. Use caution before saving changes.
To use 2FA, you need a mobile device with the current time and date, and an authenticator app installed.
We recommend Google Authenticator.
You can use other authenticator applications, but you must confirm the settings and QR codes generated in TrueNAS are compatible with your particular app before permanently activating 2FA.
Two-factor authentication is time-based and requires a correct system time setting.
Ensuring Network Time Protocol (NTP) is functional before enabling two-factor authentication is strongly recommended!
What is 2FA and why should I enable it?
2FA adds an extra layer of security to your system to prevent someone from logging in, even if they have your password.
2FA requires you to verify your identity using a randomized six-digit code that regenerates every 30 seconds (unless modified) to use when you log in.
Benefits of 2FA
Unauthorized users cannot log in since they do not have the randomized six-digit code.
Authorized employees can securely access systems from any device or location without jeopardizing sensitive information.
Internet access on the TrueNAS system is not required to use 2FA.
Drawbacks of 2FA
2FA requires an app to generate the 2FA code.
If the 2FA code is not working or users cannot get it, the system is inaccessible through the UI and SSH (if enabled).
You can bypass or unlock 2FA using the CLI.
Enabling 2FA
Set up a second 2FA device as a backup before proceeding.
Before you begin, download Google Authenticator to your mobile device.
Go to System Settings > Advanced, scroll down to the Global Two Factor Authentication widget, and click Config.
When using Google Authenticator, set Interval to 30 or the authenticator code might not function when logging in.
Click Show QR and scan the QR code using Google Authenticator.
After scanning the code click CLOSE to close the dialog on the Two-Factor Authentication screen.
Accounts that are already configured with individual 2FA are not prompted for 2FA login codes until Global 2FA is enabled.
When Global 2FA is enabled, user accounts that have not configured 2FA settings yet are shown the Two-Factor Authentication screen on their next login to configure and enable 2FA authentication for that account.
Disabling or Bypassing 2FA
Go to System Settings > Advanced, scroll down to the Global Two Factor Authentication widget, and click Config. Clear the Enable Two-Factor Authentication Globally checkbox and click Save.
If the device with the 2FA app is not available, you can use the system CLI to bypass 2FA with administrative IPMI or by physically accessing the system.
To unlock 2FA in the SCALE CLI, enter: auth two_factor update enabled=false
Reactivating 2FA
If you want to enable 2FA again, go to System Settings > Advanced, scroll down to the Global Two Factor Authentication widget, and click Config.
Check Enable Two-Factor Authentication Globally, then click Save.
To change the system-generated Secret, go to Credentials > 2FA and click Renew 2FA Secret.
Using 2FA to Log in to TrueNAS
Enabling 2FA changes the login process for both the TrueNAS web interface and SSH logins.
Logging In Using the Web Interface
The login screen adds another field for the randomized authenticator code. If this field is not immediately visible, try refreshing the browser.
Enter the code from the mobile device (without the space) in the login window and use the root username and password.
If you wait too long, a new number code displays in Google Authenticator, so you can retry.
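Authenticator apps typically display the code with a space in the middle. Stripping that space before entering it is a trivial transformation; a small illustration (the code shown is made up):

```shell
# An authenticator displays "123 456"; TrueNAS expects the six
# digits with no space:
code="123 456"
echo "$code" | tr -d ' '    # 123456
```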
Logging In Using SSH
Confirm that you set Enable Two-Factor Auth for SSH in System Settings > Advanced > Global Two Factor Authentication.
Go to System Settings > Services and edit the SSH service.
a. Set Log in as Admin with Password, then click Save.
b. Click the SSH toggle and wait for the service status to show that it is running.
Open the Google Authentication app on your mobile device.
Open a terminal (such as Windows Shell) and SSH into the system using either the host name or IP address, the administrator account user name and password, and the 2FA code.
12.4.8 - Developer Mode (Unsupported)
Provides information on the unsupported SCALE developer mode and how to enable it.
Developer mode is for developers only.
Users that enable this functionality will not receive support on any issues submitted to iXsystems.
Only enable when you are comfortable with debugging and resolving all issues encountered on the system.
Never enable on a system that has production storage and workloads.
TrueNAS is an Open Source Storage appliance, not a standard Linux operating system (OS) that allows customization of the OS environment.
By default, the root/boot filesystem and tools such as apt are disabled to prevent accidental misconfiguration that renders the system inoperable or puts stored data at risk.
However, as an open-source appliance, there are circumstances in which software developers want to create a development environment to install new packages and do engineering or test work before creating patches to the TrueNAS project.
Do not make system changes using the TrueNAS UI web shell.
Using package management tools in the web shell can result in middleware changes that render the system inaccessible.
Connect to the system using SSH or a physically connected monitor and keyboard before enabling or using developer mode.
To enable developer mode, log into the system as the root account and access the Linux shell.
Run the install-dev-tools command.
Running install-dev-tools removes the default TrueNAS read-only protections and installs a variety of tools needed for development environments on TrueNAS.
These changes do not persist across updates and install-dev-tools must be re-run after every system update.
12.5 - Boot Pool Management
Provides instructions on managing the TrueNAS SCALE boot pool and boot environments.
System Settings > Boot contains options for monitoring and managing the ZFS pool and devices that store the TrueNAS operating system.
Changing the Scrub Interval
The Stats/Settings option displays current system statistics and provides the option to change the scrub interval, or how often the system runs a data integrity check on the operating system device.
Go to System Settings > Boot screen and click Stats/Settings.
The Stats/Settings window displays statistics for the operating system device: Boot pool Condition as ONLINE or OFFLINE, Size in GiB and the space in use in Used, and Last Scrub Run with the date and time of the scrub.
By default, the operating system device is scrubbed every 7 days.
To change the default scrub interval, input a different number in Scrub interval (in days) and click Update Interval.
Boot Pool Device Management
From the System Settings > Boot screen, click the Boot Pool Status button to open the Boot Pool Status screen.
This screen shows the boot-pool and expands to show the devices that are allocated to that pool.
Read, write, or checksum errors are also shown for the pool.
TrueNAS supports a ZFS feature known as boot environments.
These are snapshot clones of the TrueNAS boot-pool install location that TrueNAS boots into.
Only one boot environment is used for booting at a time.
A boot environment allows rebooting into a specific point in time and greatly simplifies recovering from system misconfigurations or other potential system failures.
With multiple boot environments, the process of updating the operating system becomes a low-risk operation.
For example, the TrueNAS update process automatically creates a snapshot of the current boot environment and adds it to the boot menu before applying the update.
If anything goes wrong during the update, the system administrator can activate the snapshot of the pre-update environment and reboot TrueNAS to restore system functionality.
Boot environments do not preserve or restore the state of any attached storage pools or apps, only the system boot-pool.
Storage backups must be handled through the ZFS snapshot feature or other backup options.
TrueNAS applications also use separate upgrade and container image management methods to provide app update and rollback features.
To view the list of boot environments on the system, go to System Settings > Boot.
Each boot environment entry contains this information:
Name: the name of the boot entry as it appears in the boot menu.
Active: indicates which environment is in use now (Now) and which the system boots into at the next system start (Reboot).
Date Created: indicates the boot environment creation date and time.
Space: shows boot environment size.
Keep: indicates whether or not TrueNAS deletes this boot environment when a system update does not have enough space to proceed.
To access more options for a boot environment, click the more_vert icon to display the list of options:
Activate (Click to expand)
The option to activate a boot environment only displays for boot entries not set to Active.
Activating an environment means the system boots into the point of time saved in that environment the next time it is started.
Click the more_vert for an inactive boot environment, and then select Activate to open the Activate dialog.
The System Boot screen status changes to Reboot and the current Active entry changes from Now/Reboot to Now, indicating that it is the current boot environment but is not used on next boot.
Clone (Click to expand)
Cloning copies the selected boot environment into a new inactive boot environment that preserves the boot-pool state at the clone creation time.
Click the more_vert for a boot environment, and then select Clone to open the Clone Boot Environment window.
Enter a new name using only alphanumeric characters, and/or the allowed dashes (-), underscores (_), and periods (.) characters.
The Source field displays the boot environment you are cloning. If the displayed name is incorrect, close the window and select the correct boot environment to clone.
Click Save.
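The allowed-character rule for clone names can be expressed as a simple pattern. A sketch of such a check (the environment name is made up):

```shell
# Validate a boot environment name: alphanumerics, dashes,
# underscores, and periods only.
name="pre-update_22.12.3"
if printf '%s\n' "$name" | grep -Eq '^[A-Za-z0-9._-]+$'; then
  echo "valid"      # prints: valid
else
  echo "invalid"
fi
```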
Rename (Click to expand)
You can change the name of any boot environment on the System Settings > Boot screen.
Click the more_vert for a boot environment, and then select Rename to open the Rename Boot Environment window.
Delete (Click to expand)
You cannot delete the default or any active entries.
Because you cannot delete an activated boot entry, this option does not display for activated boot environments.
To delete the active boot environment, first activate another entry and then delete the environment you want to remove.
Keep/Unkeep (Click to expand)
By default, TrueNAS prunes boot environments when the boot-pool has no remaining storage space.
Keep toggles with the Unkeep option, and they determine whether the TrueNAS updater can automatically delete this boot environment if there is not enough space to proceed with an update.
Click the more_vert for a boot environment, and then select Keep to open the Keep dialog.
Select Confirm and then click Keep Flag.
Setting Keep protects the boot environment from automatic deletion if the TrueNAS updater needs space for an update; Unkeep removes that protection and makes the environment subject to automatic deletion.
12.6 - Services
Tutorials for TrueNAS SCALE services.
System Settings > Services displays each system component that runs continuously in the background. These typically control data-sharing or other external access to the system. Individual services have configuration screens and activation toggles, and you can set them to run automatically.
Documented services related to data sharing or automated tasks are in their respective Shares and Tasks articles.
Contents
FTP: Provides instructions on configuring the FTP service including storage, user, and access permissions.
NFS: Provides information on configuring NFS service in TrueNAS SCALE.
S.M.A.R.T.: Provides information on S.M.A.R.T. service screen settings.
SMB: Provides instructions on configuring the SMB service in TrueNAS SCALE.
SNMP: Provides information on configuring SNMP service in TrueNAS SCALE.
SSH: Provides information on configuring the SSH service in TrueNAS SCALE and using an SFTP connection.
UPS: Provides information on configuring UPS service in TrueNAS SCALE.
12.6.1 - FTP
Provides instructions on configuring the FTP service including storage, user, and access permissions.
The File Transfer Protocol (FTP) is a simple option for data transfers.
The SSH options provide secure transfer methods for critical objects like configuration files, while the Trivial FTP options provide simple file transfer methods for non-critical files.
Options for configuring FTP, SSH, and TFTP are in System Settings > Services.
Click edit to configure the related service.
Configuring FTP For Any Local User
FTP requires a new dataset and a local user account.
Go to Storage to add a new dataset to use as storage for files.
Next, add a new user. Go to Credentials > Local Users and click Add to create a local user on the TrueNAS.
Assign a user name and password, and link the newly created FTP dataset as the user home directory.
You can do this for every user or create a global account for FTP (for example, OurOrgFTPaccnt).
Edit the file permissions for the new dataset. Go to Datasets, then click on the name of the new dataset. Scroll down to Permissions and click Edit.
Enter or select the new user account in the User and Group fields.
Select Apply User and Apply Group.
Select the Read, Write, and Execute for User, Group, and Other you want to apply.
Click Save.
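The same ownership and mode change can be made from a shell. The directory below is a temporary stand-in for a dataset mount point such as /mnt/tank/ftp, and the user name is hypothetical:

```shell
# Create a stand-in directory for the dataset mount point:
dir=$(mktemp -d)

# chown would set the User and Group fields (requires root and an
# existing "ftpuser" account, so it is shown commented out):
# chown ftpuser:ftpuser "$dir"

# Mode 770 = Read, Write, Execute for User and Group, nothing for Other:
chmod 770 "$dir"
stat -c '%a' "$dir"    # 770
```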
Configuring FTP Service
To configure FTP, go to System Settings > Services and find FTP, then click edit to open the Services > FTP screen.
Configure the options according to your environment and security considerations. Click Advanced Settings to display more options.
To confine FTP sessions to the home directory of a local user, select both chroot and Allow Local User Login.
Do not allow anonymous or root access unless it is necessary.
Enable TLS when possible (especially when exposing FTP to a WAN). TLS effectively makes this FTPS for better security.
Click Save and then start the FTP service.
Configuring FTP Services For FTP Group
FTP requires a new dataset and a local user account.
Go to Storage and add a new dataset to use as storage for files.
Next, add a new user. Go to Credentials > Local Users and click Add to create a local user on the TrueNAS.
Assign a user name and password, and link the newly created FTP dataset as the user home directory. Then, add ftp to the Auxiliary Groups field and click Save.
Edit the file permissions for the new dataset. Go to Datasets, then click on the name of the new dataset. Scroll down to Permissions and click Edit.
Enter or select the new user account in the User and Group fields.
Enable Apply User and Apply Group.
Select the Read, Write, and Execute for User, Group, and Other you want to apply, then click Save.
Configuring FTP Service
Go to System Settings > Services and find FTP, then click edit to open the Services > FTP screen.
Configure the options according to your environment and security considerations. Click Advanced Settings to display more options.
When configuring FTP bandwidth settings, we recommend manually entering the units you want to use, e.g. KiB, MiB, GiB.
To confine FTP sessions to the home directory of a local user, select chroot.
Do not allow anonymous or root access unless it is necessary.
Enable TLS when possible (especially when exposing FTP to a WAN). TLS effectively makes this FTPS for better security.
Click Save, then start the FTP service.
Connecting with FTP
Use a browser or FTP client to connect to the TrueNAS FTP share.
The images below use FileZilla, which is free.
The user name and password are those of the local user account on the TrueNAS system.
The default directory is the same as the user home directory.
After connecting, you can create directories and upload or download files.
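As an alternative to a graphical client, a command line client can make the same connection. This is a sketch, assuming a local user named ftpuser and the placeholder hostname mytruenas.example.com; substitute your own account and address.

```shell
# List the user home directory over explicit TLS (FTPS).
# --ssl-reqd aborts the transfer if the server does not support TLS.
# curl prompts for the password when --user has no password portion.
curl --ssl-reqd --user ftpuser ftp://mytruenas.example.com/

# Upload a local file, then fetch it back.
curl --ssl-reqd --user ftpuser -T notes.txt ftp://mytruenas.example.com/
curl --ssl-reqd --user ftpuser -O ftp://mytruenas.example.com/notes.txt
```

Drop the --ssl-reqd flag only if the FTP service is running without TLS on a trusted network.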
Provides information on configuring NFS service in TrueNAS SCALE.
The Services > NFS configuration screen displays settings to customize the TrueNAS NFS service.
You can access it from System Settings > Services screen. Locate NFS and click edit to open the screen, or use the Config Service option on the Unix (NFS) Share widget options menu found on the main Sharing screen.
Select Start Automatically to activate the NFS service when TrueNAS boots.
We recommend using the default NFS settings unless you require specific settings.
Select the IP address from the Bind IP Addresses dropdown list if you want to use a specific static IP address, or leave this field blank for NFS to listen to all available addresses.
By default, TrueNAS dynamically calculates the number of threads the kernel NFS server uses. However, if you want to manually enter an optimal number of threads the kernel NFS server uses, clear Calculate number of threads dynamically and enter the number of threads you want in the Specify number of threads manually field.
If using NFSv4, select NFSv4 from Enabled Protocols. This clears NFSv3 ownership model for NFSv4, which you can then enable or leave cleared.
If you want to force NFS shares to fail if the Kerberos ticket is unavailable, select Require Kerberos for NFSv4.
Next, enter a port to bind to in the field that applies:
Enter a port to bind mountd(8) in mountd(8) bind port.
Enter a port to bind rpc.statd(8) in rpc.statd(8) bind port.
Enter a port to bind rpc.lockd(8) in rpc.lockd(8) bind port.
The UDP protocol is deprecated and not supported with NFS. It is disabled by default in the Linux kernel.
Using UDP over NFS on modern networks (1Gb+) can lead to data corruption caused by fragmentation during high loads.
Only select Allow non-root mount if the NFS client requires it to allow serving non-root mount requests.
Select Manage Groups Server-side to allow the server to determine group IDs based on server-side lookups rather than relying solely on the information provided by the NFS client.
This can support more than 16 groups and provide more accurate group memberships.
It is equivalent to setting the --manage-gids flag for rpc.mountd.
This setting assumes group membership is configured correctly on the NFS server.
Click Save.
Start the NFS service.
When TrueNAS is already connected to Active Directory, setting NFSv4 and Require Kerberos for NFSv4 also requires a Kerberos Keytab.
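From a Linux client, a completed configuration can be verified with a manual mount. This is a sketch, assuming an exported dataset at /mnt/tank/share and the placeholder hostname mytruenas.example.com:

```shell
# Create a mount point and mount the export over NFSv4.
sudo mkdir -p /mnt/share
sudo mount -t nfs4 mytruenas.example.com:/mnt/tank/share /mnt/share

# Confirm the mount and the negotiated protocol version.
mount | grep /mnt/share

# List the exports the server advertises.
showmount -e mytruenas.example.com
```

If the mount fails, confirm the NFS service is running and that the share's allowed networks and hosts include the client.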
12.6.3 - S.M.A.R.T.
Provides information on S.M.A.R.T. service screen settings.
There is a special consideration when installing TrueNAS in a Virtual Machine (VM), as S.M.A.R.T. services monitor actual physical devices, which are abstracted in a VM. After the installation of TrueNAS completes on the VM, go to System Settings > Services and click the blue toggle button on the S.M.A.R.T. service to stop the service from running. Clear the Start Automatically checkbox so the service does not automatically start when the system reboots.
Use the Services > S.M.A.R.T. screen to configure when S.M.A.R.T. tests run and when to trigger alert warnings and send emails.
Click the edit Configure icon to open the screen.
In Check Interval, enter the time in minutes smartd waits before waking up to check if any tests are configured to run.
Select the Power Mode from the dropdown list. Choices include Never, Sleep, Standby, and Idle. TrueNAS only performs tests when you select Never.
Set the temperatures that trigger alerts in Difference, Informational and Critical.
Click Save after changing any settings.
Start the service.
12.6.4 - SMB
Provides instructions on configuring the SMB service in TrueNAS SCALE.
The Services > SMB screen displays after going to the Shares screen, finding the Windows (SMB) Shares section, and clicking more_vert then Config Service.
Alternatively, you can go to System Settings > Services and click the edit icon for the SMB service.
Configuring SMB Service
The SMB Services screen displays setting options to configure TrueNAS SMB settings to fit your use case.
In most cases, you can set the required fields and accept the rest of the setting defaults. If you have specific needs for your use case, click Advanced Options to display more settings.
In NetBIOS Name, enter the name of the TrueNAS host system if different from the displayed default. This name is limited to 15 characters and cannot be the Workgroup name.
Enter any alias name or names that do not exceed 15 characters in the NetBIOS Alias field. Separate each alias name with a space between them.
Enter a name that matches the Windows workgroup name in Workgroup. When Active Directory or LDAP is active and this field is unconfigured, TrueNAS detects and sets the correct workgroup from those services.
If using SMB1 clients, select Enable SMB1 support to allow legacy SMB1 clients to connect to the server. Note: SMB1 is deprecated. We advise you to upgrade clients to operating system versions that support modern SMB protocol versions.
Select NTLMv1 Auth only if you must allow smbd to attempt NTLMv1 authentication. NTLMv1 encryption is insecure and vulnerable; this setting enables backward compatibility with older versions of Windows, but we do not recommend it. Do not use it on untrusted networks.
Enter any notes about the service configuration in Description.
Use Auxiliary Parameters to enter additional smb.conf options. For example, to log more details when a client attempts to authenticate to the share, add log level = 1, auth_audit:5. Refer to the Samba Guide for more information on these settings.
Click Save.
Start the SMB service.
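To verify the running service from a Linux client, smbclient can enumerate and open shares. This is a sketch with placeholder names; smbuser is assumed to be a local TrueNAS account and "backups" an existing share:

```shell
# List the shares the SMB service exposes (prompts for the password).
smbclient -L //mytruenas.example.com -U smbuser

# Connect to a share and start an interactive session.
smbclient //mytruenas.example.com/backups -U smbuser
```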
12.6.5 - SNMP
Provides information on configuring SNMP service in TrueNAS SCALE.
SNMP (Simple Network Management Protocol) monitors network-attached devices for conditions that warrant administrative attention.
TrueNAS uses Net-SNMP to provide SNMP.
To configure SNMP, go to the System Settings > Services page, find SNMP, and click edit.
To download an MIB from your TrueNAS system, you can enable SSH and use a file transfer command like scp.
When using SSH, make sure to validate the user logging in has SSH login permissions enabled and the SSH service is active and using a known port (22 is default).
Management Information Base (MIB) files are located in /usr/local/share/snmp/mibs/.
Example (replace mytruenas.example.com with your system IP address or hostname):
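A minimal scp sketch follows; the TRUENAS-MIB.txt filename is the MIB shipped in recent SCALE releases, so list the directory first if your version differs.

```shell
# Copy the TrueNAS MIB to the current directory over SSH.
scp admin@mytruenas.example.com:/usr/local/share/snmp/mibs/TRUENAS-MIB.txt .

# Or list the directory to see all available MIB files.
ssh admin@mytruenas.example.com ls /usr/local/share/snmp/mibs/
```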
Allowing external connections to TrueNAS is a security vulnerability!
Do not enable SSH unless you require external connections.
See Security Recommendations for more security considerations when using SSH.
12.6.6 - SSH
Provides information on configuring SSH service in TrueNAS SCALE.
Configuring SSH Service
To configure SSH go to System Settings > Services, find SSH, and click edit to open the basic settings General Options configuration screen.
Use the Password Login Groups and Allow Password Authentication settings to allow specific TrueNAS account groups the ability to use password authentication for SSH logins.
Click Save. Select Start Automatically and enable the SSH service.
Configuring Advanced SSH Settings
If your configuration requires more advanced settings, click Advanced Settings.
The basic options continue to display above the Advanced Settings screen.
Configure the options as needed to match your network environment.
These Auxiliary Parameters can be useful when troubleshooting SSH connectivity issues:
Increase the ClientAliveInterval if SSH connections tend to drop.
Increase the MaxStartups value (10 is default) when you need more concurrent SSH connections.
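The two suggestions above translate into Auxiliary Parameters entries like the following sketch. The values shown are examples, not recommendations for every environment, and ClientAliveCountMax is included only for completeness (it is not required):

```
# Probe idle clients every 30 seconds; disconnect after 3 missed replies.
ClientAliveInterval 30
ClientAliveCountMax 3

# Allow up to 30 concurrent unauthenticated connections (default is 10).
MaxStartups 30
```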
Remember to enable the SSH service in System Settings > Services after making changes.
Create and store SSH connections and keypairs to allow SSH access in Credentials > Backup Credentials or by editing an administrative user account. See Adding SSH Credentials for more information.
Using SSH File Transfer Protocol (SFTP)
SFTP (SSH File Transfer Protocol) is available by enabling SSH remote access to the TrueNAS system.
SFTP is more secure than standard FTP because it encrypts all transfers over the SSH connection.
Go to System Settings > Services, find the SSH entry, and click edit to open the Services > SSH basic settings configuration screen.
Go to Credentials > Local Users. Click anywhere on the row of the user you want to access SSH to expand the user entry, then click Edit to open the Edit User configuration screen. Make sure that SSH password login enabled is selected. See Managing Users for more information.
SSH with root is a security vulnerability. It allows users to fully control the NAS remotely with a terminal instead of providing SFTP transfer access.
Choose a non-root administrative user to allow SSH access.
Review the remaining options and configure them according to your environment or security needs.
Remember to enable the SSH service in System Settings > Services after making changes.
Create and store SSH connections and keypairs to allow SSH access in Credentials > Backup Credentials or by editing an administrative user account. See Adding SSH Credentials for more information.
Using SFTP Connections
Open an FTP client (like FileZilla) or command line.
This article shows using FileZilla as an example.
Using FileZilla, enter SFTP://{TrueNAS IP}, {username}, {password}, and port 22 to connect, where {TrueNAS IP} is the IP address of your TrueNAS system, {username} is the administrator login user name, and {password} is the administrator password.
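The same connection from a command line uses the OpenSSH sftp client. A sketch with placeholder names:

```shell
# Connect on the default SSH port (22); -P is only needed for custom ports.
sftp -P 22 admin@mytruenas.example.com

# Common commands inside the sftp session:
#   put local.txt      upload a file
#   get remote.txt     download a file
#   ls                 list the remote directory
#   bye                disconnect
```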
SFTP does not offer chroot locking.
While chroot is not 100% secure, lacking chroot lets users move up to the root directory and view internal system information.
If this level of access is a concern, FTP with TLS might be the more secure choice.
12.6.7 - UPS
Provides information on configuring UPS service in TrueNAS SCALE.
An Uninterruptible Power Supply (UPS) is a power backup system that ensures continuous electricity during outages, preventing downtime and damage.
TrueNAS uses NUT (Network UPS Tools) to provide UPS support.
For supported device and driver information, see their hardware compatibility list.
TrueNAS High Availability (HA) systems are not compatible with uninterruptible power supplies (UPS).
Some UPS models are unresponsive at the default polling frequency of two seconds.
TrueNAS displays the issue in logs as a recurring error like libusb_get_interrupt: Unknown error.
If you get an error, decrease the polling frequency by adding an entry to Auxiliary Parameters (ups.conf): pollinterval = 10.
How do I find a device name?
For USB devices, the easiest way to determine the correct device name is to set Show console messages in System Settings > Advanced.
Plug in the USB device and look for a /dev/ugen or /dev/uhid device name in the console messages.
Can I attach Multiple Computers to One UPS?
A UPS with adequate capacity can power multiple computers.
One computer connects to the UPS data port with a serial or USB cable.
This primary system makes UPS status available on the network for other computers.
The UPS powers the secondary computers, and they receive UPS status data from the primary system.
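With NUT, secondary computers query the primary over the network using upsc. This is a sketch, assuming the UPS is configured on the primary under the name ups and the placeholder hostname primary.example.com:

```shell
# Dump every status variable the primary publishes for the UPS.
upsc ups@primary.example.com

# Query a single variable, such as the battery charge percentage.
upsc ups@primary.example.com battery.charge
```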
See the NUT User Manual and NUT User Manual Pages.
12.7 - Using Shell
Provides information on using the TrueNAS SCALE Shell.
The SCALE Shell is convenient for running command line tools, configuring different system settings, or finding log files and debug information.
Warning! The supported mechanisms for making configuration changes are the TrueNAS WebUI, CLI, and API exclusively.
All others are not supported and result in undefined behavior that can result in system failure!
The Set font size slider adjusts the Shell displayed text size.
Restore Default resets the font size to default.
The Shell stores the command history for the current session.
Leaving the Shell screen clears the command history.
Click Reconnect to start a new session.
Navigating In Shell
This section provides keyboard navigation shortcuts you can use in Shell.
Click Here for More Information
Action
Keyboard/Command
Description
Scroll up
Up arrow expand_less
Scroll up through previous commands.
Scroll down
Down arrow expand_more
Scroll down through following commands.
Re-enter command
Enter
After entering a command, press Enter to re-enter the command.
Top of screen
Home
Moves the cursor to the top of the screen entries and results.
Bottom of screen
End
Moves the cursor to the bottom of the screen command entries and results.
Delete
Delete
Deletes what you highlight.
Auto-fill text
Tab
Type a few letters and press Tab to complete a command name or filename in the current directory.
right-click
Right-clicking in the terminal window displays a reminder about using Command+c and Command+v or Ctrl+Insert and Shift+Insert for copy and paste operations.
Exit to root prompt
exit
Entering exit leaves the session.
Copy text
Ctrl+Insert
Enter Ctrl+Insert to copy highlighted text in Shell.
Paste text
Shift+Insert
Enter Shift+Insert to paste copied text in Shell.
Kill running process
Ctrl+c
Enter Ctrl+c to kill a process running in Shell. For example, the ping command.
Changing the Default Shell
zsh is the default shell, but you can change this by going to Credentials > Local Users.
Select the admin or other user to expand it.
Click Edit to open the Edit User screen.
Scroll down to Shell and select a different option from the dropdown list.
Options are nologin, TrueNAS CLI, TrueNAS Console, sh, bash, rbash, dash, tmux, and zsh.
Click Save.
Most Linux command-line utilities are available in the Shell.
Clicking other SCALE UI menu options closes the shell session and stops commands running in the Shell screen.
Tmux allows you to detach sessions in Shell and then reattach them later.
Commands continue to run in a detached session.
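A typical detach and reattach cycle looks like this sketch (the session name backup is arbitrary):

```shell
# Start a named session and run a long task inside it.
tmux new-session -s backup

# Detach with Ctrl+b then d; the task keeps running in the background.

# List active sessions, then reattach later.
tmux ls
tmux attach-session -t backup
```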
TrueNAS CLI
The new SCALE command-line interface (CLI) lets you directly configure SCALE features using namespaces and commands based on the SCALE API.
TrueNAS CLI is still in active development.
We are not accepting bug reports or feature requests at this time.
We intend the CLI to be an alternative method for configuring TrueNAS features.
Because of the variety of available features and configurations, we include CLI-specific instructions in their respective UI documentation sections.
12.8 - Audit Logs
Provides information on using the System and SMB Share auditing screens and function in TrueNAS.
Auditing Overview
TrueNAS SCALE auditing and logs provide a trail of all actions performed by a session, user, or service (SMB, middleware).
The audit function has two backends: syslog and the Samba debug library.
Syslog sends audit messages via an explicit syslog call with configurable priority (WARNING is the default) and facility (for example, USER). Syslog is the default backend.
Debug sends audit messages through the Samba debug library. These messages have a configurable severity (WARNING, NOTICE, or INFO).
The System Settings > Audit screen lists all session, user, or SMB events.
Logs include who performed the action, timestamp, event type, and a short string of the action performed (event data).
SCALE includes a manual page with more information on the VFS auditing functions.
Administrative users can enter
man vfs_truenas_audit
in a SCALE command prompt to view the embedded manual page.
Auditing Event Types
Events are organized by session and user, and SMB auditing.
Session and user auditing events
Authentication Events
Audit message generated every time a client logs into the SCALE UI or an SSH session, or makes changes to user credentials.
Method Call Events
Audit message generated every time the currently logged in user creates a new user account or changes user credentials.
SMB auditing events
Connect Events
Generated every time an SMB client performs an SMB tree connection (TCON) to a given share.
Each session can have zero or more TCONs.
Disconnect Events
Generated every time an SMB client performs an SMB tree disconnect to a given share.
Create Events
Generated every time an SMB client performs an SMB create operation on a given tree connection (TCON).
Does not log internally-initiated create operations.
Each SMB tree connection can have multiple open files.
Read or Write Events
Generated at configurable intervals as an SMB client reads from or writes to a file.
Specifies the minimum amount of time to wait before generating another read or write event for a given file type.
For example, when set to 5 and an SMB client does constant writes to a file, only 12 events are generated per minute.
The default value is 60, or one event per type per minute.
File-based counters are printed within close messages, and connection-based counters are included in disconnect messages.
Read or Write Offload Events
Generated at configurable intervals as an SMB client performs offloads of reads from or writes to a file.
Specifies the minimum amount of time to wait before generating another offload read or write event for a given file type.
For example, when set to 5 and an SMB client does constant writes to a file, only 12 events are generated per minute.
The default value is 60, or one event per type per minute.
File-based counters are printed within close messages, and connection-based counters are included in disconnect messages.
Open or Close Events
Generated every time an SMB client opens or closes a file.
When a file is opened or closed, a summary of file system operations performed on the file is included in the audit message.
Rename Events
Generated when a client attempts to rename a file.
Set_Attr Events
Generated when a client attempts to set basic file attributes (for example, DOS mode or file timestamps).
The key attr_type indicates the precise type of attributes changed in the event this message records.
Set_Quota Events
Generated when a client attempts to set a quota.
Unlink Events
Generated when a client attempts to delete a file or directory from a share.
Set_ACL Events
Generated when a client attempts to set an NFSv4 ACL on a file system or to grant a user (OWNER) read and write permissions to the file system.
Audit Message Records
Audit records contain information that establishes:
Type of event
When the event occurred (timestamp)
Where the event occurred (source and destination addresses)
Source of the event (user or process)
Outcome of the event (success or failure)
Identity of any individual or file names associated with the event
Each audit message is a single JSON file containing mandatory fields.
It can also include additional optional records.
Message size is limited to 1024 bytes for maximum portability across different syslog implementations.
Use the Export to CSV button on an audit screen to download audit logs in a format readable in a spreadsheet program.
Use the Copy to Clipboard option on the Event Data widget to copy the selected audit message event record to a text or JSON object file.
The JSON object for an audit message contains the version information, the service (the name of the SMB share), a session ID, and the tree connection (tcon_id).
Message Fields
Each audit message JSON object includes:
Field
Description
aid
GUID uniquely identifying the audit event.
vers
JSON object containing version information of the audit event. Audit version identifiers represent the major and minor versions of the internal TrueNAS audit message. Major version changes are not made outside a major SCALE release and can be renaming or removing an existing mandatory field. Minor version changes indicate non-breaking changes to the format, such as adding a new optional field.
time
UTC timestamp indicating when the event occurs.
addr
IPv4 or IPv6 address for the client generating the audit message.
user
Username of either the user or client generating the audit message. If no username is available, this can be the user ID prefixed with UID.
svc
Unique human-readable service identifier (all uppercase alpha characters) for the TrueNAS service generating the audit message (always SMB).
event
Human-readable name for the event type for the audit message. The name is in all uppercase alpha characters and can include underscore (_) or dot (.) special characters. See Audit Event Types above for more information.
svc_data
A JSON object containing tree connection (TCON) specific data. This is standardized for all events.
event_data
A JSON object containing event-specific data. This varies based on the event type.
sess
GUID unique identifier for the session.
success
Shows true if the operation succeeded or false if it failed.
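Putting the fields together, a single audit message resembles the following sketch. All values are illustrative placeholders, the svc_data fields reflect the tree connection data described above, and event_data is left empty for brevity because its contents vary by event type:

```json
{
  "aid": "00000000-0000-0000-0000-000000000001",
  "vers": {"major": 0, "minor": 1},
  "time": "2024-01-15 14:02:11.123456",
  "addr": "192.168.1.50",
  "user": "smbuser",
  "sess": "00000000-0000-0000-0000-000000000002",
  "svc": "SMB",
  "event": "CONNECT",
  "svc_data": {"service": "backups", "session_id": "123", "tcon_id": "456"},
  "event_data": {},
  "success": true
}
```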
System and User Auditing
Authentication and other events are captured by the TrueNAS audit logging functions.
The TrueNAS SCALE auditing logs event data varies based on the type of event tracked.
Accessing Auditing Screens
Users have access to audit information from three locations in the SCALE UI:
Credentials > Local Users details screen through the Audit Logs option
On the Local Users page, click Audit Logs on the Users details screen to open the Audit log screen with the Search field filtered to show events (authentication, changes to existing users, creating new users, etc.) specific to that user. For more details see Audit Screen.
Shares > Windows (SMB) Shares details screen through the share Audit Logging option
On the Sharing page, click the edit Edit icon on the desired SMB share row, where the Enable, Watch List, and Ignore List settings are available. For details see Configuring SMB Auditing.
System > Services > SMB to view SMB audit logs
On the Services page, click the receipt_long Audit Logs icon on the SMB row. This opens the main Audit log page with the Search field filter configured to show only SMB events. For details see Audit Screen.
System Settings > Audit option on the main navigation panel
The default Audit log page is unfiltered and displays all system events such as authentication and SMB events.
The audit screen includes basic and advanced search options.
Click Switch to Basic to change to the basic search function or click Switch to Advanced to show the advanced search operators.
You can enter any filters in the basic Search field to show events matching the entry.
To enter advanced search parameters, use the format displayed in the field, for example, Service = “SMB” AND Event = “CLOSE” to show closed SMB events.
Event types are listed in Auditing Event Types.
Advanced search uses a syntax similar to SQL/JQL and allows several custom variables for filtering.
Parentheses define query priority.
Clicking the advanced Search field prompts you with a dropdown of available event types, options, and operators to help you complete the search string.
For example, to search for any SMB connect or close event from the user smbuser or any non-authentication SMB events, enter (Service = "SMB" AND Event in ("Connect", "Close") AND User in ("smbuser")) OR (Event != "Authentication" AND Service = "SMB").
The advanced search automatically checks syntax and shows done when the syntax is valid and warning for invalid syntax.
Click on a row to show details of that event in the Metadata and Event Data widgets.
Export as CSV sends the event log data to a CSV file you can open in a spreadsheet program (for example, MS Excel or Google Sheets) or another data management app that accepts CSV files.
The assignment Copy to Clipboard icon shows two options: Copy Text and Copy Json.
Copy Text copies the event to a text file.
Copy Json copies the event to a JSON object.
Configuring SMB Auditing
Configure and enable SMB auditing for an SMB share at creation or when modifying an existing share.
SMB auditing is only supported for SMB2 (or newer) protocol-negotiated SMB sessions.
SMB1 connections to shares with auditing enabled are rejected.
From the Add SMB Share or Edit SMB Share screen, click Advanced Options and scroll down to Audit Logging.
Selecting Enable turns auditing on for the share you are creating or editing.
Use the Watch List and Ignore List functions to add audit logging groups to include or exclude.
Click in Watch List to see a list of user groups on the system.
Click on a group to add it to the list and record events generated by user accounts that are members of the group.
Leave Watch List blank to include all groups, otherwise auditing is restricted to only the groups added.
Click in Ignore List to see a list of user groups on the system.
Click on a group to add it to the list and explicitly avoid recording any events generated by user accounts that are members of this group.
The Watch List takes precedence over the Ignore List when using both lists.
Click Save.
Configuring Audit Storage and Retention Policies
To configure Audit storage and retention settings, go to System Settings > Advanced, then click Configure on the Audit widget.
The Audit configuration screen sets the retention period, reservation size, quota size and percentage of used space in the audit dataset that triggers warning and critical alerts.
Retention (in days)
Enter the number of days to retain local audit messages.
Reservation (in GiB)
Enter the size (in GiB) of reserved space to allocate on the ZFS dataset where the audit databases are stored. The reservation specifies the minimum amount of space guaranteed to the dataset, and counts against the space available for other datasets in the zpool where the audit dataset is located. To disable, enter zero (0).
Quota (in GiB)
Enter the size (in GiB) of the maximum amount of space that can be consumed by the dataset where the audit databases are stored. To disable, enter zero (0).
Quota Fill Warning (in %)
Enter a percentage threshold. TrueNAS generates a warning level alert when the dataset quota reaches that capacity used. Allowed range: 5-80.
Quota Fill Critical (in %)
Enter a percentage threshold. TrueNAS generates a critical level alert when the dataset quota reaches that capacity used. Allowed range: 50-95.
For example, to change the percent usage warning threshold for the storage allocated to the Audit database:
Navigate to the System Settings > Advanced page.
Select the Configure button on the Audit widget.
In the Audit configuration popup, change the value in the Quota Fill Warning field to the desired percentage.
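The same change can be sketched from the TrueNAS shell with the middleware client. This assumes the audit.config and audit.update API methods and the quota_fill_warning field name; verify both against the API documentation for your SCALE version:

```shell
# Show the current audit service configuration.
midclt call audit.config

# Raise the warning threshold to 80 percent of the audit quota.
midclt call audit.update '{"quota_fill_warning": 80}'
```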