TrueNAS Scale S3 Service to Minio Application Migration

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
The TrueNAS S3 service is deprecated. The 22.12.3.1 release notes made it sound pretty easy:

S3: Deploy the minio application from the TrueNAS Enterprise catalog and port any existing configuration from System Settings > Services > S3 to the deployed application.
  • If you try to deploy the Official MinIO application, it will reject re-using your existing MinIO dataset because it is in use by the S3 service.
  • If you stop the S3 service, remove auto-start, and try to deploy the Official MinIO application, you get the same message.
If you go to Applications > Settings > Advanced Settings and disable Host Path Safety Checks, then try to install MinIO again, it will allow the install and use of your dataset. However, it fails to start with:
Code:
2023-06-23 19:44:12.731996+00:00ERROR Unable to use the drive /export: Drive /export: found backend type fs, expected xl or xl-single - to migrate to a supported backend visit https://min.io/docs/minio/linux/operations/install-deploy-manage/migrate-fs-gateway.html: Invalid arguments specified


Reviewing the URL in the error message, it seems that whatever backend format the TrueNAS S3 service used has since been deprecated and removed from MinIO, leaving the filesystem on the S3 service dataset unreadable to current MinIO versions. It appears you cannot use the TrueNAS S3 service dataset "as-is".

It looks like you need to set up the MinIO application with a new dataset, have both instances running, and use the "mc" client to migrate the config, metadata, buckets, etc. That is far more complicated than the TrueNAS release notes hinted at.

Looking for feedback on what others have done for the migration.
 

boostedn

Dabbler
Joined
Mar 9, 2023
Messages
14
I have a fix for that portion: you need to configure the built-in S3 service to point to another dataset/folder. (That step is missing from the instructions too.)



I have a post on this above, but the rest still doesn't work. If you get further, let me know. I'm still using the built-in S3. I think this migration guide was written some time ago; I don't think it's up to date, and I don't think anyone actually tested it. It's so annoying. It's not the first time I've run into a situation like this with TN; it's just par for the course.


I can't even get the container to deploy.
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
Thanks for the comment. As best I can tell, the format / filesystem used by the older S3 service is simply no longer supported by MinIO. There is no reusing your existing dataset. It appears you need to leave it assigned to the S3 service until the migration is completed.

Create a new clean dataset for the MinIO application. Get that up and running on alternate ports.

With both of them up and running, use an "mc" client to connect to both, export from the S3 service, and import into the MinIO application.

Once the migration is completed, stop the S3 service. Edit the MinIO application to assign it the old S3 ports and let your clients connect to the new install.

I haven't worked out the steps exactly on how to accomplish this.
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
I was able to complete a migration from the S3 Service to the MinIO Application under TrueNAS Scale. It is not straightforward, and not a 100% migration. My S3 Service usage is basic enough that I could do some manual work. I'll document what I did in a few posts.
  • The S3 Service appears to be very old; it seems to be release 2021-11-24T23:19:33Z.
  • The filesystem format used by the S3 Service was dropped from MinIO back in release 2022-10-24T18-35-07Z. In theory that is the last version you could possibly upgrade to. You would still need to perform a data migration to modern versions, but I suspect it would have gone more cleanly.
  • The MinIO application does not let you select older versions to install (it is not installing a current version either). You could probably install a Docker image of an older MinIO version, use a version-appropriate "mc" client, try to import into that, and then test whether that allows a cleaner data migration. I did not go down that path.
  • It would have been nice if iX could have upgraded the S3 Service version a bit before deprecation. The JSON model used by the S3 Service is so old that the current export commands used for a data migration just don't work.
  • I tried using older "mc" clients as well; they either lacked the export commands or still could not understand the JSON returned by the S3 Service.
---
High Level Outline of My Migration Process:
  1. Created a new dataset for the new MinIO application instance.
  2. Installed the MinIO application without SSL/TLS on non-standard ports so both the S3 Service and MinIO could run concurrently.
  3. Changed the S3 Service to non-standard ports to prevent clients from updating data being migrated. The "mc" client does have the ability to issue a "watch" command to look for data updates during migration and sync them, but I decided that for my install a few hours of S3 Service outage was safer and acceptable.
  4. Configured the "mc" client with aliases to connect to both instances at the same time.
  5. Migrated buckets, users, and policies as best I could. This is where the migration falls short, as the S3 Service lacked support for the commands that export this metadata to ZIP files that could be imported into the MinIO application.
  6. Performed the data migration. This took a few hours to complete for my ~350 GiB S3 Service repository.
  7. When it was done, I shut down the S3 Service.
  8. Enabled TLS support on the MinIO application (ugh, that was not straightforward either).
  9. Changed the MinIO application to use the old S3 Service ports, and each of my applications (3 of them) resumed activity.
  10. My existing Prometheus scrape jobs resumed scraping against the replacement with no changes needed.
  11. I was also able to connect MinIO to Prometheus to get better dashboards within MinIO (something I could not get working with the S3 Service).
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
MinIO Application Installation Notes

  • Used non-standard ports that did not conflict with my existing S3 Service install (still running)
  • I reused the same "Root User" and "Root Password" between both instances. Using different ones would probably work, since you have to provide both of these to the "mc" client anyway, but I did not test that.
  • Created a new ZFS dataset for MinIO mounted as /mnt/main/apps/minio
  • Then from a shell, created 2 directories within that dataset:
Code:
cd /mnt/main/apps/minio

mkdir certs data


  • certs - will hold the necessary trickery to map to the LetsEncrypt certificates. As best I could tell, MinIO is hard-coded to look for specific certificate names in a specific location. I didn't see a way to alter that with environment variables (but one might exist).
  • data - this will be the root of the MinIO data repository.

Apps > Search “Minio” > Click [Install].
  • Application Name: Minio
    • Version (whatever is latest) [1.7.15]
  • Workload Configuration:
    • Update Strategy: Create new pods and then kill old ones
  • Minio Configuration
    • Enable Distributed Mode:
      • Disable [needs 4 instances]
    • Minio Extra Arguments
      • No items have been added
    • Root User:
      • Access Key: <can be any English / verbal name> [lowercase only]
    • Root Password:
      • Security Key: <Highly complex password> [alpha-numeric only]
      • Confirm Security Key:
  • Minio Service Configuration
    • Port: 10000
    • Console Port: 10002
  • Log Search API Configuration
    • Enable Log Search API: Disabled [Requires PostGres Database]
  • Storage
    • Minio Data Mount Point Path: /export
    • Host Path for Minio Data Volume: Enable
      • Host Path Data Volume: /mnt/main/apps/minio/data
  • Postgres Storage
    • Disabled
  • Postgres Backup Volume
    • Disabled
  • Advanced DNS Settings
    • No items have been added
  • Resource Limits: Disabled
Click [Save]

Within a few seconds, the logs showed it was up and running:
Code:
2023-06-26 14:57:30.201182+00:00Formatting 1st pool, 1 set(s), 1 drives per set.
2023-06-26 14:57:30.201282+00:00WARNING: Host local has more than 0 drives of set. A host failure will result in data becoming unavailable.
2023-06-26 14:57:30.478034+00:00MinIO Object Storage Server
2023-06-26 14:57:30.478150+00:00Copyright: 2015-2023 MinIO, Inc.
2023-06-26 14:57:30.478177+00:00License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
2023-06-26 14:57:30.478223+00:00Version: RELEASE.2023-03-13T19-46-17Z (go1.19.7 linux/amd64)
2023-06-26 14:57:30.478246+00:002023-06-26T14:57:30.478246275Z
2023-06-26 14:57:30.478310+00:00Status:         1 Online, 0 Offline.
2023-06-26 14:57:30.478401+00:00API: http://172.16.0.17:10000  http://127.0.0.1:10000
2023-06-26 14:57:30.478637+00:00Console: http://172.16.0.17:10002 http://127.0.0.1:10002
2023-06-26 14:57:30.478681+00:002023-06-26T14:57:30.478681961Z
2023-06-26 14:57:30.478701+00:00Documentation: https://min.io/docs/minio/linux/index.html
2023-06-26 14:57:30.478720+00:00Warning: The standard parity is set to 0. This can lead to data loss.
2023-06-26 14:57:30.521654+00:002023-06-26T14:57:30.521654033Z
2023-06-26 14:57:30.521667+00:00You are running an older version of MinIO released 3 months ago
2023-06-26 14:57:30.521670+00:00Update: Run `mc admin update`
2023-06-26 14:57:30.521672+00:002023-06-26T14:57:30.521672769Z
2023-06-26 14:57:30.521675+00:002023-06-26T14:57:30.521675006Z


I was able to access the MinIO GUI via TrueNAS host IP address on the port I specified (http://192.168.10.102:10002/login)
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
After the Minio Application is installed…


Change S3 Service Port Number​

We will change the port number to prevent existing clients from connecting and making updates during the migration.
  1. System Settings > Services
  2. Stop S3 Service.
  3. Click Pencil Icon to Edit Service
    • Edit Port number to something non-standard and not used (and make note of it)
      • Original Port: 9000
      • New Port: 9005
    • Console Port can remain the same
    • Save
  4. Start S3 Service


Configure the “mc” client for migration.​

Create aliases to migrate data from the old s3_service to the new minio_app. This configuration file references the temporary port numbers both instances are using during the data migration.

The default path is in your home directory (~/.mc/config.json). The file should look like this (with your actual secret values in place):
Code:
{
    "version": "10",
    "aliases": {
        "s3_service": {
            "url": "https://truenas.example.com:9005",
            "accessKey": "my_access_key",
            "secretKey": "my_secret",
            "api": "S3v4",
            "path": "auto"
        },
        "minio_app": {
            "url": "http://192.168.10.102:10000",
            "accessKey": "my_access_key",
            "secretKey": "my_secret",
            "api": "S3v4",
            "path": "auto"
        }
    }
}
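
Rather than hand-editing config.json, the same aliases can also be created with the "mc alias set" command. A quick sketch using the same example hostnames and placeholder credentials as above:
Code:
# Old built-in S3 service on its temporary port
mc alias set s3_service https://truenas.example.com:9005 my_access_key my_secret

# New MinIO application on its temporary port
mc alias set minio_app http://192.168.10.102:10000 my_access_key my_secret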


Then tested connectivity to both instances...
Code:
$ mc admin info s3_service


●  truenas.example.com:9005
   Uptime: 11 minutes
   Version: 2021-11-24T23:19:33Z


355 GiB Used, 3 Buckets, 26,305 Objects


$ mc admin info minio_app

●  192.168.10.102:10000
   Uptime: 1 hour
   Version: 2023-03-13T19:46:17Z
   Network: 1/1 OK
   Drives: 1/1 OK
   Pool: 1

Pools:
   1st, Erasure sets: 1, Drives per erasure set: 1

1 drive online, 0 drives offline
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70

Export S3 Configuration to MinIO Application​

Review S3 Service Configuration
Code:
$ mc admin config export s3_service | grep -v "^#"

region name=
api requests_max=0 requests_deadline=10s cluster_deadline=10s cors_allow_origin=* remote_transport_deadline=2h list_quorum=strict replication_workers=250 replication_failed_workers=8 transition_workers=100 stale_uploads_cleanup_interval=6h stale_uploads_expiry=24h delete_cleanup_interval=5m
heal bitrotscan=off max_sleep=1s max_io=100
scanner delay=10 max_wait=15s cycle=1m
subnet license=

Review Minio Application Configuration
Code:
$ mc admin config export minio_app | grep -v "^#"

subnet license= api_key= proxy=
site name= region=
api requests_max=0 requests_deadline=10s cluster_deadline=10s cors_allow_origin=* remote_transport_deadline=2h list_quorum=strict replication_priority=auto transition_workers=100 stale_uploads_cleanup_interval=6h stale_uploads_expiry=24h delete_cleanup_interval=5m disable_odirect=off gzip_objects=off
scanner speed=default

  • I had no significant differences, so I decided to leave as-is
  • If there are configuration items to migrate, then you can:
Code:
$ mc admin config export s3_service > config.txt

# Edit / review the file as needed

$ mc admin config import minio_app < config.txt

Restart Service to use new configuration
Code:
$ mc admin service restart minio_app


Export S3 Service Bucket Metadata​

(This is where things didn't go as planned...)
This will export all bucket metadata to a ZIP file (metadata only, no objects) named: cluster-metadata.zip
Code:
$ mc admin cluster bucket export s3_service

  • Received error message: “mc: <ERROR> Unable to export bucket metadata. Failed to parse server response (unexpected end of JSON input):.”
    • The server version is likely just too old to work with the client.
    • GitHub issues show this working where the server was updated to the last version that supports standalone/filesystem mode.
    • Using the “--debug” switch, the output showed:
      • "XMinioAdminVersionMismatch","Message":"This 'admin' API is not supported by server in 'mode-server-fs'"
If you could generate a ZIP file, then import would be like this:
Code:
$ mc admin cluster bucket import minio_app cluster-metadata.zip


Attempting to export IAM Settings also failed...
Code:
$ mc admin cluster iam export s3_service

mc: <ERROR> Unable to export IAM info. Failed to parse server response (unexpected end of JSON input):.


At this point I just fumbled my way through, trying to think of what needed to be exported / imported. I have no expertise with S3 / MinIO, so there might be more elegant solutions for this. Your mileage may vary here...


Export S3 Service Policies​

I tried to determine a delta of the policies between the two installations to point out which ones were missing and needed to be migrated. This checks names only, not the contents of the respective policies. You might want to do something more complex; this met my needs:
Code:
$ diff <(mc admin policy list s3_service) <(mc admin policy list minio_app) | grep "<"
< readwrite
< velero_backups
< k3s
< postgresql_backups

  • Policies I have to export and import:
    • velero_backups
    • k3s
    • postgresql_backups
For each of them I did these manual steps:
Code:
$ POLICY=k3s

$ mc admin policy info s3_service $POLICY -f $POLICY.json

$ mc admin policy add minio_app $POLICY $POLICY.json

Added policy `k3s` successfully.

  • Repeat for any additional policies
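
If you have more than a handful of policies, the same two commands can be wrapped in a small loop. A sketch re-using the exact commands above (the policy names are the ones my diff found; adjust for your install):
Code:
# Export each missing policy from the old S3 service and add it to the new MinIO app
for POLICY in velero_backups k3s postgresql_backups; do
    mc admin policy info s3_service "$POLICY" -f "$POLICY.json"
    mc admin policy add minio_app "$POLICY" "$POLICY.json"
done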

Export S3 Service Users​

Code:
$ mc admin user list s3_service

enabled    k3s-prod              k3s                 
enabled    postgres              postgresql_backups 
enabled    velero                velero_backups

For each user, manually add them:
Code:
$ mc admin user add minio_app k3s-prod

Enter Secret Key:
Added user `k3s-prod` successfully.

  • Repeat for any additional users

Then, from the MinIO application GUI, set the respective groups, policies, etc. for each user. I couldn't find a way to automate this; I just did a side-by-side compare of each user to get them to match.
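
(For what it's worth, depending on the "mc" release, attaching a policy to a user may also be possible from the CLI; I did not verify this against these server versions. A sketch, assuming the older "mc admin policy set" syntax:)
Code:
# Hypothetical: attach the imported policy to the matching user on the new instance
mc admin policy set minio_app k3s user=k3s-prod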

Perform Data Migration​

Code:
$ mc mirror --preserve s3_service minio_app

  • “--preserve” - Preserve file system attributes and bucket policy rules of the SOURCE on the TARGET
  • "--watch" - could be used here to monitor changes and keep replication going until you manually kill the process

Wait for the migration to complete; I migrated at about 72 MiB/sec, so it took a while.

When migration has completed:

  1. Stop and disable S3 Service to prevent it from restarting
  2. Stop MinIO Application and adjust ports as needed to replace the S3 Service
  3. Start the MinIO Application and monitor client connections (if your clients require SSL/TLS, you have more work to do!)
At some point you can delete the old S3 Service dataset and its snapshots, once they are no longer needed.
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70

Enable TLS for Minio​

I already have TrueNAS Scale doing certificate generation for the TLS used by its GUI (https). I could not figure out a way to let deployed applications use that existing certificate (I'm not sure where the certificates are actually stored).
  • Using the TrueNAS shell, I could see that certbot is already installed as part of the base TrueNAS Scale installation, but it didn't seem to have the Cloudflare DNS plugin available, which I needed. If that had worked, I could have created a one-time certificate and then set up a monthly cronjob to renew it...
  • I looked for CertManager, Certbot, or LetsEncrypt in the official application repository. Didn't see them.
  • I rolled my own Docker install of Certbot and had it generate the certificate I needed for MinIO. I'm not going to document that in detail here, as it seemed like a hack I had to use due to a lack of time. I'm assuming there is a more elegant way to get certificates enabled for deployed applications.
In the end I got the certificate I needed, on a ZFS dataset mounted as "/mnt/main/apps/LetsEncrypt".
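
(Roughly, that kind of one-off Certbot run with the Cloudflare DNS plugin looks like the sketch below; the credentials path is a placeholder and these are not my exact commands.)
Code:
# One-shot Certbot container using the Cloudflare DNS plugin (illustrative values)
docker run --rm -it \
  -v /mnt/main/apps/LetsEncrypt:/etc/letsencrypt \
  -v /path/to/cloudflare.ini:/cloudflare.ini:ro \
  certbot/dns-cloudflare certonly \
    --dns-cloudflare --dns-cloudflare-credentials /cloudflare.ini \
    -d truenas.example.com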

Through trial and error, I figured out the following permission changes were needed after LetsEncrypt creates the directory structure. The MinIO application was not able to mount or traverse the directories / files without these changes.

I tried to make as few changes as possible. This is done via the TrueNAS shell; adjust the domain name as appropriate for your installation.
Code:
chmod 755 /mnt/main/apps/LetsEncrypt
chmod 755 /mnt/main/apps/LetsEncrypt/live
chmod 755 /mnt/main/apps/LetsEncrypt/archive
chmod 644 /mnt/main/apps/LetsEncrypt/archive/truenas.example.com/privkey1.pem


Secondly, the filenames and directory structure used by LetsEncrypt didn't work well with MinIO. There might be a way to tell MinIO about an alternate certificate location and filenames, but I couldn't find it. MinIO seems to simply require two files named public.crt and private.key located within "/etc/minio/certs"; upon restart, MinIO will then switch to SSL/TLS connections.

The "/mnt/main/apps/minio/certs" directory created before installing MinIO will hold symlinks that map between what MinIO is looking for and what LetsEncrypt creates.

It's OK that the following paths don't actually exist yet; the "ln" command allows this. Once both ZFS datasets are mounted within the MinIO application container, the links become valid. This is done via the TrueNAS shell; adjust the domain name as appropriate for your installation.
Code:
chown minio:minio /mnt/main/apps/minio/certs
cd /mnt/main/apps/minio/certs

ln -s /etc/letsencrypt/live/truenas.example.com/fullchain.pem public.crt
ln -s /etc/letsencrypt/live/truenas.example.com/privkey.pem private.key


Add extra host paths to Minio application:
  • Mount Path in Pod: /etc/letsencrypt
  • Host Path: /mnt/main/apps/LetsEncrypt/

  • Mount Path in Pod: /etc/minio/certs
  • Host path: /mnt/main/apps/minio/certs
Upon clicking "Save" the application will restart. You should be able to check the logs and see the MinIO API and Console are now using TLS by using the "https" port:
Code:
2023-06-26 23:20:35.820765+00:00API: https://172.16.0.21:9000  https://127.0.0.1:9000
2023-06-26 23:20:35.820901+00:00Console: https://172.16.0.21:9002 https://127.0.0.1:9002



Once the TLS certificate was in place, my existing S3 clients connected on port 9000 with TLS, and the external Prometheus job scrape was able to connect.

The migration was complete. I monitored for 2 days; all the clients seem to be functional.
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
For completeness, I'll document the Prometheus stuff as well...

Generate the Prometheus Scrape Job Configuration:
Code:
$ mc admin prometheus generate minio

scrape_configs:
- job_name: minio-job
  bearer_token: ey...Hq9w
  metrics_path: /minio/v2/metrics/cluster
  scheme: https
  static_configs:
  - targets: ['truenas.example.com:9000']


  • With the "mc" command, "minio" is the ALIAS in my config files pointing to the new MinIO application (like how "minio_app" was used in previous steps).
  • The "bearer_token" will be a long string of characters unique to your install. I greatly reduced it here for readability.
  • The "target" should already reflect your domain name.
Cut & paste the scrape_configs YAML from above into your Prometheus configuration YAML file (you'll have to figure that part out; I use the Prometheus Operator on an external Kubernetes cluster).
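
If you also happen to use the Prometheus Operator, one common way to feed raw scrape_configs in is an additionalScrapeConfigs secret. A rough sketch (the namespace and secret name are illustrative and must match whatever your Prometheus custom resource references):
Code:
# Save just the job list (no "scrape_configs:" key) to a file
cat > prometheus-additional.yaml <<'EOF'
- job_name: minio-job
  bearer_token: ey...Hq9w
  metrics_path: /minio/v2/metrics/cluster
  scheme: https
  static_configs:
  - targets: ['truenas.example.com:9000']
EOF

# Create/refresh the secret that the Prometheus CR points to via additionalScrapeConfigs
kubectl -n monitoring create secret generic additional-scrape-configs \
  --from-file=prometheus-additional.yaml --dry-run=client -o yaml | kubectl apply -f -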

Once added to Prometheus, you should see the scrape job listed in Prometheus Service Discovery:
[Screenshot: the minio-job scrape job listed in Prometheus Service Discovery]

At this point you can add a MinIO dashboard to Grafana and view your metrics that way. However, MinIO is also able to get its metrics from Prometheus.

You will need to set 2 environment variables on the MinIO application configuration:

TrueNAS GUI > Apps > Applications > MINIO > Edit

  • Minio Image Environment
    • Set the following two environment variables:
      • MINIO_PROMETHEUS_URL
      • MINIO_PROMETHEUS_JOB_ID
Where the MINIO_PROMETHEUS_URL value is the URL used to reach Prometheus. Since my install is on an external Kubernetes cluster, I give it the full domain name of my instance: https:// ...

Where MINIO_PROMETHEUS_JOB_ID is the name of the job defined in the scrape_configs YAML above, in this case minio-job.
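
For example, the two values might look like this (the Prometheus hostname is a placeholder for your own instance):
Code:
MINIO_PROMETHEUS_URL=https://prometheus.example.com
MINIO_PROMETHEUS_JOB_ID=minio-job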

Upon clicking "Save" the MinIO Application will be restarted.


Log into the MinIO GUI, navigate to Monitoring > Metrics, and enjoy your Eye Candy.

[Screenshot: MinIO Monitoring > Metrics dashboards]
 

namnnumbr

Dabbler
Joined
Jan 19, 2021
Messages
14
Thanks so much for this writeup @Richard Durso !

I've found that TrueNAS SCALE stores its certs in `/etc/certificates`. If you want to hack these for MinIO, you can create a cronjob to keep the files in sync:

1.
Code:
openssl x509 -inform PEM -in /etc/certificates/<public cert>.crt > </path/to/minio>/certs/public.crt

2.
Code:
openssl rsa -in /etc/certificates/<public cert>.key -text > </path/to/minio>/certs/private.key


That said, while the minio service seems to start in Truenas k8s, the healthchecks fail for me since they target an http endpoint (which is now defunct) -- am I doing something weird, or have you made another alteration for the healthcheck?
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
I'm not clear on your comments about the healthcheck. I'm not seeing a reference to the healthcheck or any errors. Where do you see this?

I tried your steps on the certs with partial success. The GUI worked, but the Prometheus job scrape failed:
Code:
failed to verify certificate: x509: certificate signed by unknown authority


Instead of using "openssl" I just did direct copy and it worked as is:

Code:
cp /etc/certificates/<public cert>.crt  </path/to/minio>/certs/public.crt
cp /etc/certificates/<public cert>.key  </path/to/minio>/certs/private.key


Prometheus job scrape resumed with that.
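
Since TrueNAS will renew that certificate over time, a cronjob (as namnnumbr suggested) could keep the copies fresh. A sketch using the same placeholder paths:
Code:
# Root crontab entry (illustrative): re-copy the TrueNAS certificate for MinIO every Sunday at 03:00
0 3 * * 0 cp /etc/certificates/<public cert>.crt </path/to/minio>/certs/public.crt && cp /etc/certificates/<public cert>.key </path/to/minio>/certs/private.key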
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
BTW - I just tried to upgrade to 2023-03-13_1.7.16 and it failed... it deployed, but never reached Started. I did a rollback to 2023-03-13_1.7.15 and it is working fine again.
 

namnnumbr

Dabbler
Joined
Jan 19, 2021
Messages
14
Deploying but failing to start is exactly the symptom I'm finding, also with 1.7.16.

If you want to dig into it, you can investigate the Kubernetes internals that power the TrueNAS app:

In TrueNAS shell, as root:

Code:
# get pods in all namespaces
k3s kubectl get pods -A 

# describe pod in `ix-minio` namespace like `minio-<random>-<rand>`
k3s kubectl describe pod minio-b98dbb8-988kr -n ix-minio


The description shows a recurring error:
Warning Unhealthy 5s (x21 over 105s) kubelet Startup probe failed: HTTP probe failed with statuscode: 400

If you scroll up some in the description, we can see why - the probe endpoints are all `http`
Liveness: http-get http://:9091/minio/health/live delay=10s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:9091/minio/health/live delay=10s timeout=5s period=10s #success=2 #failure=5
Startup: http-get http://:9091/minio/health/live delay=10s timeout=2s period=5s #success=1 #failure=60
 

Richard Durso

Explorer
Joined
Jan 30, 2014
Messages
70
That wasn't too hard to fix; this will load up the YAML manifest in a vi editor:
Code:
# k3s kubectl edit deployment.apps/minio -n ix-minio


Scroll down and you will find the probe definitions:
Code:
    livenessProbe:                
      failureThreshold: 5                  
      httpGet:                                                                              
        path: /minio/health/live          
        port: 9001              
        scheme: HTTP                              
      initialDelaySeconds: 10                  
      periodSeconds: 10          
      successThreshold: 1                  
      timeoutSeconds: 5          

    readinessProbe:          
      failureThreshold: 5                        
      httpGet:                                  
        path: /minio/health/live  
        port: 9001                          
        scheme: HTTP                                                                        
      initialDelaySeconds: 10            
      periodSeconds: 10  
      successThreshold: 2                        
      timeoutSeconds: 5
     
    startupProbe:      
      failureThreshold: 60                                                                  
      httpGet:                
        path: /minio/health/live
        port: 9001                                
        scheme: HTTP                                                                        
      initialDelaySeconds: 10    
      periodSeconds: 5  
      successThreshold: 1      
      timeoutSeconds: 2


Change each of the scheme: HTTP entries to scheme: HTTPS.

Save. Wait a few seconds:
Code:
# k3s kubectl get pods -n ix-minio
NAME                     READY   STATUS    RESTARTS   AGE
minio-757bb4cd5f-zxrsv   1/1     Running   0          76s


TrueNAS reflects it too:
[Screenshot: TrueNAS showing the minio application as Running]
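
If you prefer not to hand-edit the manifest, a JSON patch should in principle do the same thing non-interactively. A sketch, assuming the minio container is the first one in the pod spec (I only used the interactive edit above):
Code:
# Flip the three probe schemes to HTTPS in one shot (assumes the minio container is at index 0)
k3s kubectl -n ix-minio patch deployment minio --type=json -p='[
  {"op": "replace", "path": "/spec/template/spec/containers/0/livenessProbe/httpGet/scheme", "value": "HTTPS"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/readinessProbe/httpGet/scheme", "value": "HTTPS"},
  {"op": "replace", "path": "/spec/template/spec/containers/0/startupProbe/httpGet/scheme", "value": "HTTPS"}
]'

Either way, the change may get reverted the next time the application is updated or redeployed.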
 

namnnumbr

Dabbler
Joined
Jan 19, 2021
Messages
14
::facepalm:: I'm so used to looking for helm values.yaml that I completely forgot about editing with kubectl.

Thanks!
 

kavaa

Dabbler
Joined
Aug 10, 2023
Messages
14
Tried these steps with a valid certificate (not LetsEncrypt but a wildcard) and am still getting `Startup probe failed: HTTP probe failed with statuscode: 400` when starting the app.
 

kavaa

Dabbler
Joined
Aug 10, 2023
Messages
14
Okay, so I got it working somewhat... but I am getting this: ERR_SSL_PROTOCOL_ERROR
I am mounting /mnt/TANK/Config_MinIO/certs to /etc/minio/certs in the container.
The files in /mnt/TANK/Config_MinIO/certs are public.crt and private.key. What's going wrong here?
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Looks to me like you might be better off running the Enterprise train version of Minio, which allows for the certificate definition in the app config rather than a hack like you're doing.
 

kavaa

Dabbler
Joined
Aug 10, 2023
Messages
14
Then we are still getting:
Back-off restarting failed container

with a certificate selected that is valid and also used for the TrueNAS GUI.
 

kavaa

Dabbler
Joined
Aug 10, 2023
Messages
14
Never mind, I needed to specify the URL with HTTPS in the config...
The GUI is now working, but... I cannot log in. Error:
Code:
Expected element type <AssumeRoleResponse> but have <html>
 