TrueCharts Integrates Docker Compose with TrueNAS SCALE

notanumba

Dabbler
Joined
Jan 8, 2023
Messages
15
It looks a bit better after I created a user "docker", see the log below.
But it still seems to ignore the yaml file, and in the end the container is shut down.

I added a new directory via the GUI as the comment under your YouTube video mentions - that doesn't solve the problem:

Andrew Kelsey

There have been some changes to how this needs to be done. The 3rd entry under volumes needs to point to a path inside the container.

In the configuration for the app under Storage and Persistence, click Add, choose Host Path for the type, enter the dataset path to where your yaml file is, then in Mount Path enter the location inside the container (make it up). This will mount the Host Path location to the Mount Path location in the container.

That 3rd entry under volumes in the yaml will not be your Host Path.
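Here is how I read that comment - a minimal sketch only; the Mount Path /compose is made up for illustration, the Host Path is just my dataset:

Code:
# App settings (Storage and Persistence), illustration only:
#   Host Path:  /mnt/ssdapps/docker_data/compose   <- dataset on the NAS holding the yaml
#   Mount Path: /compose                           <- made-up path inside the app container
#
# The 3rd volumes entry in the yaml then uses the Mount Path, not the Host Path:
volumes:
  - /etc/localtime:/etc/localtime:ro
  - /var/run/docker.sock:/var/run/docker.sock:ro
  - /compose/data:/data:rw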


2023-01-21 23:30:29.075668+00:00docker not running yet. Waiting...
2023-01-21 23:30:30.833747+00:00Certificate request self-signature ok
2023-01-21 23:30:30.833824+00:00subject=CN = docker:dind server
2023-01-21 23:30:30.856997+00:00/certs/server/cert.pem: OK
2023-01-21 23:30:30.914429+00:00Certificate request self-signature ok
2023-01-21 23:30:30.914465+00:00subject=CN = docker:dind client
2023-01-21 23:30:30.930842+00:00/certs/client/cert.pem: OK
2023-01-21 23:30:31.067639+00:00time="2023-01-21T23:30:31.067481837Z" level=info msg="Starting up"
2023-01-21 23:30:31.069300+00:00time="2023-01-21T23:30:31.069238544Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2023-01-21 23:30:31.070945+00:00time="2023-01-21T23:30:31.070881358Z" level=info msg="libcontainerd: started new containerd process" pid=108
2023-01-21 23:30:31.071042+00:00time="2023-01-21T23:30:31.071001010Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2023-01-21 23:30:31.071057+00:00time="2023-01-21T23:30:31.071011886Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2023-01-21 23:30:31.071079+00:00time="2023-01-21T23:30:31.071052485Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
2023-01-21 23:30:31.071095+00:00time="2023-01-21T23:30:31.071075963Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2023-01-21 23:30:31.113084+00:00time="2023-01-21T23:30:31Z" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header"
2023-01-21 23:30:31.113580+00:00time="2023-01-21T23:30:31.113525384Z" level=info msg="starting containerd" revision=78f51771157abb6c9ed224c22013cdf09962315d version=v1.6.13
2023-01-21 23:30:31.124338+00:00time="2023-01-21T23:30:31.124282283Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
2023-01-21 23:30:31.124384+00:00time="2023-01-21T23:30:31.124349157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
2023-01-21 23:30:31.167133+00:00time="2023-01-21T23:30:31.167055821Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"ip: can't find device 'aufs'\\nmodprobe: can't change directory to '/lib/modules': No such file or directory\\n\"): skip plugin" type=io.containerd.snapshotter.v1
2023-01-21 23:30:31.167157+00:00time="2023-01-21T23:30:31.167085465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
2023-01-21 23:30:31.167316+00:00time="2023-01-21T23:30:31.167268406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (zfs) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
2023-01-21 23:30:31.167338+00:00time="2023-01-21T23:30:31.167282102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
2023-01-21 23:30:31.167344+00:00time="2023-01-21T23:30:31.167292458Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
2023-01-21 23:30:31.167355+00:00time="2023-01-21T23:30:31.167300126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
2023-01-21 23:30:31.167367+00:00time="2023-01-21T23:30:31.167333173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
2023-01-21 23:30:31.167770+00:00time="2023-01-21T23:30:31.167726139Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
2023-01-21 23:30:31.174993+00:00time="2023-01-21T23:30:31.174934870Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
2023-01-21 23:30:31.175022+00:00time="2023-01-21T23:30:31.174977241Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
2023-01-21 23:30:31.175036+00:00time="2023-01-21T23:30:31.174997739Z" level=info msg="metadata content store policy set" policy=shared
2023-01-21 23:30:31.175278+00:00time="2023-01-21T23:30:31.175243782Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
2023-01-21 23:30:31.175307+00:00time="2023-01-21T23:30:31.175275543Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
2023-01-21 23:30:31.175331+00:00time="2023-01-21T23:30:31.175288064Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
2023-01-21 23:30:31.175393+00:00time="2023-01-21T23:30:31.175372036Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.175418+00:00time="2023-01-21T23:30:31.175387165Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.175429+00:00time="2023-01-21T23:30:31.175400826Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.175629+00:00time="2023-01-21T23:30:31.175600279Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.175664+00:00time="2023-01-21T23:30:31.175641182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.175679+00:00time="2023-01-21T23:30:31.175653839Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.175687+00:00time="2023-01-21T23:30:31.175662699Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.175692+00:00time="2023-01-21T23:30:31.175671349Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.175704+00:00time="2023-01-21T23:30:31.175679746Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
2023-01-21 23:30:31.175881+00:00time="2023-01-21T23:30:31.175855674Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
2023-01-21 23:30:31.176037+00:00time="2023-01-21T23:30:31.175983970Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
2023-01-21 23:30:31.176341+00:00time="2023-01-21T23:30:31.176315267Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
2023-01-21 23:30:31.176382+00:00time="2023-01-21T23:30:31.176361712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176392+00:00time="2023-01-21T23:30:31.176375320Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
2023-01-21 23:30:31.176518+00:00time="2023-01-21T23:30:31.176495324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176530+00:00time="2023-01-21T23:30:31.176507954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176536+00:00time="2023-01-21T23:30:31.176519072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176545+00:00time="2023-01-21T23:30:31.176530284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176554+00:00time="2023-01-21T23:30:31.176539602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176567+00:00time="2023-01-21T23:30:31.176551467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176589+00:00time="2023-01-21T23:30:31.176561908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176596+00:00time="2023-01-21T23:30:31.176575674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176609+00:00time="2023-01-21T23:30:31.176585917Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
2023-01-21 23:30:31.176896+00:00time="2023-01-21T23:30:31.176870291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176921+00:00time="2023-01-21T23:30:31.176899180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176934+00:00time="2023-01-21T23:30:31.176912552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
2023-01-21 23:30:31.176945+00:00time="2023-01-21T23:30:31.176927967Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
2023-01-21 23:30:31.176961+00:00time="2023-01-21T23:30:31.176941332Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
2023-01-21 23:30:31.176969+00:00time="2023-01-21T23:30:31.176949796Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
2023-01-21 23:30:31.177031+00:00time="2023-01-21T23:30:31.177012155Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
2023-01-21 23:30:31.177484+00:00time="2023-01-21T23:30:31.177454738Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
2023-01-21 23:30:31.177592+00:00time="2023-01-21T23:30:31.177560939Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
2023-01-21 23:30:31.177982+00:00time="2023-01-21T23:30:31.177932515Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
2023-01-21 23:30:31.178057+00:00time="2023-01-21T23:30:31.178012255Z" level=info msg="containerd successfully booted in 0.065517s"
2023-01-21 23:30:31.188679+00:00time="2023-01-21T23:30:31.188610834Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2023-01-21 23:30:31.188704+00:00time="2023-01-21T23:30:31.188649314Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2023-01-21 23:30:31.188712+00:00time="2023-01-21T23:30:31.188669508Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
2023-01-21 23:30:31.188729+00:00time="2023-01-21T23:30:31.188682296Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2023-01-21 23:30:31.190012+00:00time="2023-01-21T23:30:31.189965369Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2023-01-21 23:30:31.190038+00:00time="2023-01-21T23:30:31.189979942Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2023-01-21 23:30:31.190045+00:00time="2023-01-21T23:30:31.189995852Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
2023-01-21 23:30:31.190055+00:00time="2023-01-21T23:30:31.190013594Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2023-01-21 23:30:31.217971+00:00time="2023-01-21T23:30:31.217883757Z" level=info msg="Loading containers: start."
2023-01-21 23:30:31.307037+00:00time="2023-01-21T23:30:31.306963132Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2023-01-21 23:30:31.326085+00:00time="2023-01-21T23:30:31.326021686Z" level=info msg="Loading containers: done."
2023-01-21 23:30:31.342486+00:00time="2023-01-21T23:30:31.342381952Z" level=info msg="Docker daemon" commit=42c8b31 graphdriver(s)=zfs version=20.10.22
2023-01-21 23:30:31.342703+00:00time="2023-01-21T23:30:31.342655613Z" level=info msg="Daemon has completed initialization"
2023-01-21 23:30:31.446336+00:00time="2023-01-21T23:30:31.446170902Z" level=info msg="API listen on /var/run/docker.sock"
2023-01-21 23:30:31.451798+00:00time="2023-01-21T23:30:31.451697067Z" level=info msg="API listen on [::]:2376"
2023-01-21 23:30:34.251400+00:00time="2023-01-21T23:30:34.251317245Z" level=info msg="Processing signal 'terminated'"
2023-01-21 23:30:34.252070+00:00time="2023-01-21T23:30:34.252022781Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
2023-01-21 23:30:34.252263+00:00time="2023-01-21T23:30:34.252222016Z" level=info msg="Daemon shutdown complete"
2023-01-21 23:30:34.252345+00:00time="2023-01-21T23:30:34.252233007Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
2023-01-21 23:30:34.252362+00:00time="2023-01-21T23:30:34.252242432Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
 

rene-sackers

Cadet
Joined
Jan 23, 2023
Messages
3
@notanumba I'm now experiencing the same issue as in your previous post.

Code:
2023-01-24 05:25:17.453296+00:00time="2023-01-24T05:25:17.453174380Z" level=warning msg="grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = \"transport: Error while dialing dial unix:///var/run/docker/containerd/containerd.sock: timeout\". Reconnecting..." module=grpc


I shut down my machine to physically move it across the room, started it back up, and suddenly got this issue. I don't understand how this happened; I didn't change any configuration. I literally just rebooted the machine.

As for that YouTube comment, I don't fully understand it. It says "The 3rd entry under volumes needs to point to a path inside the container", so that's talking about the /data mount, right? But then it states "choose Host Path for type, enter the dataset path to where your yaml file is". So what is the entry pointing to, the data directory, or the directory with the compose file?

You also said you created a user docker. Do you mean in the TrueNAS UI, under Credentials -> Local Users, you created a user & primary group Docker, and that got rid of the "containerd.sock: timeout" error message?
 

notanumba

Dabbler
Joined
Jan 8, 2023
Messages
15
You also said you created a user docker. Do you mean in the TrueNAS UI, under Credentials -> Local Users, you created a user & primary group Docker, and that got rid of the "containerd.sock: timeout" error message?
Yes. A TrueNAS user named "docker" with no password, in the group "docker" (which is created automatically).

I figured out something more:

  1. You need to set up a user named "docker" (lower case!) in the TrueNAS UI. TrueNAS automatically creates the group "docker".
  2. You need to correct the permissions for the docker user: it needs access to the files on your TrueNAS dataset (see below).
  3. Then you are able to start the container.

In detail:

My container settings for truecharts docker-compose:
  • Storage and Persistence\...\ box Host Path: /mnt/ssdapps/docker_data/compose
  • App Configuration\Image Environment\ box "Docker Compose File": /mnt/ssdapps/docker_data/compose/ex.yaml
Set permissions on the docker-compose dataset:

The dataset /mnt/ssdapps/docker_data/compose needs at least R/W/E permissions (maybe less is enough) for the user "docker": go to your docker dataset in TrueNAS, select it, click Permissions on the right side and create a new Access Control Entry as User Obj, choose "docker", and apply the ACL.
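If you prefer the shell over the GUI, something roughly equivalent would be the following - just a sketch, assuming a plain POSIX-permissions dataset (with a strict NFSv4 ACL the GUI ACL editor above is the safer route):

Code:
# give the docker user ownership of the compose dataset and read/write/traverse rights
chown -R docker:docker /mnt/ssdapps/docker_data/compose
chmod -R u+rwX /mnt/ssdapps/docker_data/compose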



The container should start then. Some network problems are still left to solve - but the container starts!
 

notanumba

Dabbler
Joined
Jan 8, 2023
Messages
15
OK, it has to be a network problem:

If I start the console in the truecharts docker-compose container, I get the following info:

The container came up with the correct port 2001 from the yaml file:
/ # netstat -nl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:2001 0.0.0.0:* LISTEN
tcp 0 0 :::2376 :::* LISTEN

The server gives a correct response revealing it has Portainer running:
/ # curl -k https://localhost:2001
<!doctype html><html lang="en" ng-app="portainer" ng-strict-di data-edition="CE"><head><meta charset="utf-8"/><title>Portainer</title><meta name="description" content=""/><meta name="author" content="Portainer.io"/><meta http-equiv="cache-control" content="no-cache"/><meta http-equiv="expires" content="0"/><meta http-equiv="pragma" content="no-cache"/><base id="base"/><script>if (window.origin == 'file://') {
// we are loading the app from a local file as in docker extension
document.getElementById('base').href = 'http://localhost:49000/';

window.ddExtension = true;
} else {
var path = window.location.pathname.replace(/^\/+|\/+$/g, '');
var basePath = path ? '/' + path + '/' : '/';
document.getElementById('base').href = basePath;
}</script><!--[if lt IE 9]>
<script src="//html5shim.googlecode.com/svn/trunk/html5.js"></script>
 

notanumba

Dabbler
Joined
Jan 8, 2023
Messages
15
Please ignore the previous post - I released it too fast... why can't we edit posts once they are released?

OK, it has to be a network problem. If I start the console in the truecharts docker-compose container, I get the following info:

The container came up with the correct port 2001 from the yaml file:
/ # netstat -nl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:2001 0.0.0.0:* LISTEN
tcp 0 0 :::2376 :::* LISTEN

The server gives a correct response, revealing it has Portainer running:
/ # curl -k https://localhost:2001
<!doctype html><html lang="en" ng-app="portainer" ng-strict-di data-edition="CE"><head><meta charset="utf-8"/><title>Portainer</title><meta name="description" content=""/><meta name="author" content="Portainer.io"/><meta http-equiv="cache-control" content="no-cache"/><meta http-equiv="expires" content="0"/><meta http-equiv="pragma" content="no-cache"/><base id="base"/><script>if (window.origin == 'file://') {
// we are loading the app from a local file as in docker extension
document.getElementById('base').href = 'http://localhost:49000/';

window.ddExtension = true;
} else {
var path = window.location.pathname.replace(/^\/+|\/+$/g, '');
var basePath = path ? '/' + path + '/' : '/';
document.getElementById('base').href = basePath;
}</script><!--[if lt IE 9]>
[...]

But I cannot reach the port from my NAS:
root@nas# curl -k https://localhost:2001
curl: (7) Failed to connect to localhost port 2001: Connection refused

Why am I not able to reach it from the NAS's shell?
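A way to see where the port actually lives, from the NAS shell (SCALE runs k3s under the hood, so these should work; the grep pattern depends on what you named the app release):

Code:
# find the pod and its cluster IP
k3s kubectl get pods -A -o wide | grep -i compose
# port 2001 is listening inside the pod (see the netstat above), so try the pod IP instead of localhost
curl -k https://<pod-ip>:2001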
 

notanumba

Dabbler
Joined
Jan 8, 2023
Messages
15
OK... solved another riddle: now I am able to connect to Portainer on the NAS IP + port 2001.

Enable host networking and fill out the boxes:

Networking and Services\
  • Show Expert Config: <YES>
  • Host-Networking (Complicated): <YES>
  • Add external Interfaces \ Host Interface: 'br0'
  • IP Address Management \ IPAM Type: DHCP
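To verify from the NAS shell (use whatever address the bridge got via DHCP):

Code:
# with host networking the port should now answer on the NAS address
curl -k https://<nas-ip>:2001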


Another problem left: the Portainer web UI cannot connect to docker.io.
The bridging and routing is a bit complex - the error is surely in there.
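For the docker.io problem, the first thing I would check from the console of the docker-compose app is basic DNS and registry reachability - standard tools only, nothing TrueCharts-specific:

Code:
# does name resolution work inside the app?
nslookup registry-1.docker.io
# can the inner docker daemon actually pull?
docker pull hello-world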
 

notanumba

Dabbler
Joined
Jan 8, 2023
Messages
15
With this modified yaml:


networks:
  privatenetwork:
    name: privatenetwork
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.10.0/24
          gateway: 192.168.10.1

services:
  portainer:
    read_only: true
    image: portainer/portainer-ce:latest
    healthcheck:
      disable: true
    container_name: portainer
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - privatenetwork
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /mnt/ssdapps/docker_data/compose/data:/data:rw
    ports:
      - "2001:9443"
 

rene-sackers

Cadet
Joined
Jan 23, 2023
Messages
3
@notanumba Thanks a lot for sharing your progress. My issue right now is that I basically rebooted my machine, and my existing, working Portainer instance using the docker-compose app kicked the bucket. When I start it, I can see the images start up and instantly die. I cannot access the PVC storage externally, and I can't shell into the app because it's constantly rebooting. At this point I am just trying to recover the data that's on the docker volumes inside the pod.

I've tried TrueTool to mount the PVC, but it just throws "name is too long". I'm hoping one of these things will just fix my existing app so it starts up and I can rescue my data. Outside of that, I'm at a loss.
 

rene-sackers

Cadet
Joined
Jan 23, 2023
Messages
3
I'm just going to leave this here in case anyone else needs it in the future. I tried to use TrueTool to mount the PVC volume of my docker-compose app running Portainer, but it failed, because under Portainer more than one volume gets created and the TrueTool mount only expects one. I'm not blaming this on TrueTool, as this is an unsupported app/configuration and it (probably) works fine for all other apps.

The solution was to manually mount the first volume using this guide: https://truecharts.org/manual/guides/pvc-access#manual-method---new-user-guide

So, in my case:
Code:
zfs list -t filesystem -r boot-data/ix-applications/releases/portainer/volumes -o name -H | grep pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc
boot-data/ix-applications/releases/portainer/volumes/pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc
boot-data/ix-applications/releases/portainer/volumes/pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc/0086ed6ee3804bb3154f746179493e5c2880c0f27580dcb199fcc57a452e2a6d
boot-data/ix-applications/releases/portainer/volumes/pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc/00b96051194c20df039f55f4dd93e1e8f7adee994d7882594319b1d01ca1976d
boot-data/ix-applications/releases/portainer/volumes/pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc/0865ff1eabb031c2170eee180b362b4b2b99bd4ff84ed983be74e99529933e06
boot-data/ix-applications/releases/portainer/volumes/pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc/0865ff1eabb031c2170eee180b362b4b2b99bd4ff84ed983be74e99529933e06-init
...
like 50 more volumes
...


Take the first volume from this, so:
Code:
boot-data/ix-applications/releases/portainer/volumes/pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc


Mount the volume:
Code:
zfs set mountpoint=/temporary/portainer-data boot-data/ix-applications/releases/portainer/volumes/pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc


It's now under /mnt/temporary/portainer-data. In there was my docker volumes dir; I managed to pull everything out and recover my data.
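Not part of the guide, but once you're done you will presumably want to hand the dataset back by reverting the mountpoint; the neighbouring PVC datasets appear to use legacy mountpoints, so something like this should do it (check first with zfs get mountpoint):

Code:
zfs set mountpoint=legacy boot-data/ix-applications/releases/portainer/volumes/pvc-f82b125e-786e-443b-adbc-f5ad9a66f7fc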
 

truecharts

Guru
Joined
Aug 19, 2021
Messages
788
For clarity:
We do not offer any assistance here as we simply don't have the staff capacity. If anyone, like @notanumba, needs support or assistance, please contact our support staff on Discord.
 