TrueNAS Apps: Tutorials
Application maintenance is independent of TrueNAS version release cycles.
App versions, features, options, and installation behavior at time of access might vary from documented tutorials and UI reference.
Setting Up MinIO Clustering
This article applies to the public release of the S3 MinIO charts application in the TrueNAS catalog.
On TrueNAS 23.10 and later, users can create a distributed MinIO S3 instance to scale TrueNAS out and handle individual node failures. A node is a single TrueNAS storage system in a cluster.
The examples below use four TrueNAS systems to create a distributed cluster. For more information on MinIO distributed setups, refer to the MinIO documentation.
Before configuring MinIO, create a dataset and shared directory for the persistent MinIO data.
Go to Datasets and select the pool or dataset where you want to place the MinIO dataset. For example, /tank/apps/minio or /tank/minio. You can either use an existing pool or create a new one.
After creating the dataset, create the directory where MinIO stores information the application uses.
To create a directory, open the Linux CLI and enter mkdir /path/to/directory, or if you have a share on your system with access to the dataset, use it to create the directory.
MinIO uses both the default /export and the /data mount points during storage configuration.
For a distributed configuration, repeat this on all system nodes in advance.
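The directory step above can be sketched from the CLI. The path below is a placeholder for illustration; substitute the mount point of the dataset you created (for example, /mnt/tank/apps/minio):

```shell
# Placeholder path for illustration; on TrueNAS, use the mount point of your
# MinIO dataset, e.g. /mnt/tank/apps/minio/data.
MINIO_DATA="/tmp/minio-demo/data"

# Create the directory MinIO stores its data in; -p also creates missing parents.
mkdir -p "$MINIO_DATA"

# Confirm the directory exists. Repeat these steps on every node in the cluster.
ls -ld "$MINIO_DATA"
```

Run the same commands on each node so every system presents an identical path before you configure the application.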
Take note of the system (node) IP addresses or host names and have them ready for configuration. Also, have your S3 user name and password ready for later.
Configure the MinIO application using the full version MinIO charts widget. Go to Apps, click Discover Apps, then locate the MinIO widget. We recommend using the Install option on the MinIO application widget.
If your system has sharing (SMB, NFS, iSCSI) configured, disable the share service before adding and configuring a new MinIO deployment. After completing the installation and starting MinIO, enable the share service.
If the dataset for the MinIO share has the same path as the MinIO application, disable host path validation before starting MinIO. To use host path validation, set up a new dataset for the application with a completely different path. For example, for the share /pool/shares/minio and for the application /pool/apps/minio.
Begin on the first node (system) in your cluster.
To install the S3 MinIO (community app), go to Apps, click on Discover Apps, then either begin typing MinIO into the search field or scroll down to locate the charts version of the MinIO widget.
Click on the widget to open the MinIO application information screen.
Click Install to open the Install MinIO screen.
Accept the default values for Application Name and Version. The best practice is to keep the default Create new pods and then kill old ones in the MinIO update strategy. This implements a rolling upgrade strategy.
Next, enter the MinIO Configuration settings.
Select Enable Distributed Mode when setting up a cluster of SCALE systems in a distributed cluster.
MinIO in distributed mode pools multiple drives or TrueNAS SCALE systems (even on different machines) into a single object storage server. Because MinIO distributes the drives across several nodes, this provides better data protection in the event of single or multiple node failures. For more information, see the Distributed MinIO Quickstart Guide.
To create a distributed cluster, click Add to show the Distributed MinIO Instance URI(s) fields, then enter the IP address or host name of each TrueNAS system (node) to include in the cluster. Use the same order across all nodes.
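As a sketch, the URI list for a hypothetical four-node cluster might look like the following (the addresses are examples; every node must list them in the same fixed order):

```shell
# Example node addresses; replace with your systems' IPs or host names.
NODES="10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14"

# Print one Distributed MinIO Instance URI per node, in the fixed order
# that must match on every system in the cluster.
for n in $NODES; do
  echo "http://${n}:9000"
done
```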
The MinIO wizard defaults include all the arguments you need to deploy a container for the application.
Enter a name in Root User to use as the MinIO access key. Enter a name of five to 20 characters in length, for example admin or admin1. Next enter the Root Password to use as the MinIO secret key. Enter eight to 40 random characters, for example MySecr3tPa$$w0d4Min10.
Refer to MinIO User Management for more information.
Keep all passwords and credentials secured and backed up.
For a distributed cluster, ensure the credentials and configuration values are identical on every server node.
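One way to generate a random secret key at the maximum 40-character length is sketched below using standard Unix tools; any strong password generator works equally well:

```shell
# Draw random alphanumeric characters from /dev/urandom. LC_ALL=C keeps tr
# byte-oriented so it does not reject invalid multibyte sequences.
SECRET="$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 40)"
echo "$SECRET"
```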
MinIO containers use server port 9000. The MinIO Console communicates using port 9001.
You can configure the API and UI access node ports and the MinIO domain name if you have TLS configured for MinIO.
You can also configure a MinIO certificate.
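For reference, with a hypothetical node address, the default endpoints look like this (the commented curl line shows MinIO's unauthenticated liveness endpoint, useful for checking a node once the app is running):

```shell
# Example address; substitute your TrueNAS system's IP or host name.
NODE="192.168.1.10"

# MinIO serves the S3 API on port 9000 and the web console on port 9001.
echo "S3 API:      http://${NODE}:9000"
echo "Web console: http://${NODE}:9001"

# Once the app is running, you can probe a node's liveness endpoint:
#   curl -fsS "http://${NODE}:9000/minio/health/live"
```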
Configure the storage volumes. Accept the default value in Mount Path under MinIO Export Storage (Data), and leave Type set to ixVolume (Dataset created automatically by the system).
Click Add to the right of Additional Storage.
Select Host Path (Path that already exists on the system) in Type, then enter /data in Mount Path to add a data volume for the dataset and directory created above. Enter or browse to the path of the data dataset created in the First Steps to set the Host Path value.
You can select Enable ACL to modify dataset permissions here, or go to Datasets, select the row for the MinIO dataset, then click Edit on the Permissions widget to open the ACL Editor screen, where you can customize dataset permissions and add ACE entries.
Accept the defaults in Advanced DNS Settings.
If you want to limit the CPU and memory resources available to the container, select Enable Pod resource limits then enter the new values for CPU and/or memory.
Click Install when finished entering the configuration settings.
Now that the first node is complete, configure any remaining nodes (including datasets and directories).
After installing MinIO on all systems (nodes) in the cluster, start the MinIO applications.
After you create datasets, you can navigate to the TrueNAS IP address at port 9000 to see the MinIO UI. After creating a distributed setup, you can see all your TrueNAS addresses.
Log in with the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD keys you created as environment variables.
Click Web Portal to open the MinIO sign-in screen.