Apps Host Path access problem with SMB

peschka

Cadet
Joined
Nov 22, 2022
Messages
2
Hello guys,

I'm new to TrueNAS SCALE and have found answers to most of my questions here. Great community!

I have one small but, for me, crucial issue. I'm running TrueNAS SCALE Bluefin RC (version string: TrueNAS-SCALE-Bluefin-RC).

I'm attaching screenshots to help explain the issue I'm having.

I have an SMB share pointing to /mnt/TrueNAS/Downloads and an NFS share to the same folder.

I want to give the Plex app (for example) access to my /mnt/TrueNAS/Downloads dataset. Whether it's Plex or another app, and regardless of the catalog (TrueCharts or Official), the deployment always gets stuck. Here is the full log:
Code:
2022-11-22 23:47:41  Updated LoadBalancer with new IPs: [] -> [192.168.0.8]
2022-11-22 23:47:40  Job completed
2022-11-22 23:47:40  Ensuring load balancer
2022-11-22 23:47:40  Applied LoadBalancer DaemonSet kube-system/svclb-plex-04b638ae
2022-11-22 23:47:32  Created pod: plex-manifests-7967c
2022-11-22 23:47:32  Successfully assigned ix-plex/plex-manifests-7967c to ix-truenas
2022-11-22 23:47:32  Add eth0 [172.16.4.62/16] from ix-net
2022-11-22 23:47:32  Container image "tccr.io/truecharts/ubuntu:jammy-20221101@sha256:4b9475e08c5180d4e7417dc6a18a26dcce7691e4311e5353dbb952645c5ff43f" already present on machine
2022-11-22 23:47:32  Created container plex-manifests
2022-11-22 23:47:32  Started container plex-manifests
2022-11-22 23:47:32  Error: Error response from daemon: invalid volume specification: '/mnt/TrueNAS/Downloads:/Downloads': Invalid mount path. /mnt/TrueNAS/Downloads. Following service(s) uses this path: `NFS Share, SMB Share`.
2022-11-22 23:47:31  Add eth0 [172.16.4.61/16] from ix-net
2022-11-22 23:47:31  Container image "tccr.io/truecharts/ubuntu:jammy-20221101@sha256:4b9475e08c5180d4e7417dc6a18a26dcce7691e4311e5353dbb952645c5ff43f" already present on machine
2022-11-22 23:47:30  Scaled up replica set plex-764768fbbc to 1 from 0
2022-11-22 23:47:30  Created pod: plex-764768fbbc-l8j5k
2022-11-22 23:47:30  Successfully assigned ix-plex/plex-764768fbbc-l8j5k to ix-truenas
2022-11-22 23:47:21  Scaled down replica set plex-764768fbbc to 0 from 1
2022-11-22 23:47:21  Deleted pod: plex-764768fbbc-nxdnw
2022-11-22 23:47:21  Error: Error response from daemon: invalid volume specification: '/mnt/TrueNAS/Downloads:/Downloads': Invalid mount path. /mnt/TrueNAS/Downloads. Following service(s) uses this path: `NFS Share, SMB Share`.
2022-11-22 23:47:20  Add eth0 [172.16.4.60/16] from ix-net
2022-11-22 23:47:20  Container image "tccr.io/truecharts/ubuntu:jammy-20221101@sha256:4b9475e08c5180d4e7417dc6a18a26dcce7691e4311e5353dbb952645c5ff43f" already present on machine
2022-11-22 23:47:20  Updated LoadBalancer with new IPs: [] -> [192.168.0.8]
2022-11-22 23:47:20  Deleting load balancer
2022-11-22 23:47:20  Deleted LoadBalancer DaemonSet kube-system/svclb-plex-1a8e0b71
2022-11-22 23:47:20  Deleted load balancer
2022-11-22 23:47:19  Job completed
2022-11-22 23:47:19  Ensuring load balancer
2022-11-22 23:47:19  Applied LoadBalancer DaemonSet kube-system/svclb-plex-1a8e0b71
2022-11-22 23:47:19  Scaled up replica set plex-764768fbbc to 1
2022-11-22 23:47:19  Created pod: plex-764768fbbc-nxdnw
2022-11-22 23:47:19  Successfully assigned ix-plex/plex-764768fbbc-nxdnw to ix-truenas
2022-11-22 23:47:11  Add eth0 [172.16.4.58/16] from ix-net
2022-11-22 23:47:11  Container image "tccr.io/truecharts/ubuntu:jammy-20221101@sha256:4b9475e08c5180d4e7417dc6a18a26dcce7691e4311e5353dbb952645c5ff43f" already present on machine
2022-11-22 23:47:11  Created container plex-manifests
2022-11-22 23:47:11  Started container plex-manifests
2022-11-22 23:47:10  Created pod: plex-manifests-dvj2c
2022-11-22 23:47:10  Successfully assigned ix-plex/plex-manifests-dvj2c to ix-truenas

If I stop SMB and NFS, or remove their paths to /mnt/TrueNAS/Downloads, all apps deploy without a problem. The most interesting thing is that if I deploy the app with the services stopped and then turn them on, everything works fine. The error lines above are what I think causes the deployment to get stuck.

One more thing: if I set the apps to use the NFS share (instead of a host path), everything works, so I don't think it's permissions related, or am I wrong?
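
(For anyone curious, the failing check appears to boil down to a path-overlap test between the app's host path and the paths used by configured shares. Here is a rough illustrative sketch in Python with hypothetical data, not the actual TrueNAS middleware code:)

Code:
import os

def paths_overlap(a: str, b: str) -> bool:
    """True if one absolute path equals or contains the other."""
    a, b = os.path.abspath(a), os.path.abspath(b)
    return os.path.commonpath([a, b]) in (a, b)

# Hypothetical share list mirroring this setup (not read from TrueNAS):
shares = {"NFS Share": "/mnt/TrueNAS/Downloads", "SMB Share": "/mnt/TrueNAS/Downloads"}
host_path = "/mnt/TrueNAS/Downloads"

conflicts = [name for name, path in shares.items() if paths_overlap(host_path, path)]
if conflicts:
    print(f"Invalid mount path. {host_path}. "
          f"Following service(s) uses this path: `{', '.join(conflicts)}`.")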
 

Attachments

  • Datasets.png (147.2 KB)
  • SMB.png (65.6 KB)
  • NFS.png (92.6 KB)
  • Plex.png (21.4 KB)

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
Based on other threads I've seen here, I think this is by design. It isn't a design I agree with, but that may be because I'm not familiar with all the relevant technical issues.
 

patan32

Cadet
Joined
Oct 14, 2022
Messages
7
I have the same issue. I don't know what the right way would be to mount an SMB share, for example a Movies folder that is mapped to a Plex container. I'm sure everyone copies movies to such a folder and shares it over SMB. I don't see how this can be the intended design, and I can't find any other posts regarding this issue.

I hope they have a good fix for this.
 

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466

peschka

Cadet
Joined
Nov 22, 2022
Messages
2
I did some more research and found this thread to be closest to my issue:


But just how safe is it to turn off "Validate host path"? I don't like the warning: "Disabling validation of host path can result in a data loss."

If I do that, all apps redeploy and work perfectly with the SMB and NFS shares (which is what I wanted). But I'm still worried about that warning...
 

patan32

Cadet
Joined
Oct 14, 2022
Messages
7

I'm thinking I'd seen others, but these at least address it.
That's pretty bad: don't fix the issue, just tell the users it is not supported. Permissions can be set manually on each dataset, and that is how I have been running my containers. I don't want to use PVCs, because if you delete the container, all the data goes with it. A PVC just uses pointers, and I find mounting a PVC share stupid when you need to edit the files.
 

patan32

Cadet
Joined
Oct 14, 2022
Messages
7
I found the fix. You can disable the "Validate host path" option under "Settings" -> "Advanced Settings", or do it from the CLI with 'app kubernetes update validate_host_path=false'. This is not supported by TrueCharts, BTW.
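
(To spell out the CLI route: TrueNAS SCALE ships an interactive CLI that you launch with `cli` from a shell. The command below is exactly as quoted above; I haven't verified it beyond this thread:)

Code:
# From a TrueNAS SCALE shell, type `cli`, then at the CLI prompt run:
app kubernetes update validate_host_path=false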
 

soleous

Dabbler
Joined
Apr 14, 2021
Messages
30
Has anyone done a deep dive into this yet? I'm also getting it for replication tasks. I've currently disabled the "Validate host path" option, but I'm not sure of the purpose of this function.

Code:
Following service(s) uses this path: `Replication, NFS Share, SMB Share`.
 

mattheja

Dabbler
Joined
Nov 21, 2017
Messages
13
I was searching for more details on what the warning "Disabling validation of host path can result in a data loss." could entail, but Kubernetes documentation and internet sources were not very helpful for me.

It would be nice if the TrueNAS devs could chime in with more details here. For official apps like Plex, the Storage settings by default steer you towards data/config volumes, which are host path volumes. Then a conflicting default app/Kubernetes setting causes this not to work. That doesn't seem like good design: the defaults push you toward host path volumes, and then making them work requires changing a default setting behind a scary warning.
 

bcat

Explorer
Joined
Oct 20, 2022
Messages
84
From what I've read elsewhere, I think it's primarily that the default NFSv4 ACL config for SMB shares effectively disables chmod, whereas a number of containers expect it to work. (Personally, I'm of the opinion that containers that attempt to change permissions on mounted volumes and don't degrade gracefully if the operation fails are generally badly written containers, but that's neither here nor there.)
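
(To illustrate "degrade gracefully": a defensively written entrypoint would treat a failed chmod on a mounted volume as a warning rather than a fatal error. A minimal Python sketch, not taken from any particular container:)

Code:
import os
import sys

def best_effort_chmod(path: str, mode: int) -> None:
    # On a dataset with SMB-style NFSv4 ACLs, chmod may be disabled entirely;
    # log the failure and carry on instead of crashing the container.
    try:
        os.chmod(path, mode)
    except OSError as err:
        print(f"warning: chmod on {path!r} failed ({err}); continuing", file=sys.stderr)

best_effort_chmod("/Downloads", 0o775)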

It seems like this is primarily about avoiding unexpected behavior in a configuration that's not actively supported, which I totally understand. Likewise, I imagine iXsystems doesn't want to deal with bug reports that end up being user error due to complex ACL setups. But I do think the intent and functionality of the config option could be a little better documented if that's the case.

FWIW, I personally run Plex (from TrueCharts) under a dedicated user (which, IIRC, the TrueCharts folks don't actively support, but I'm not comfortable running all my apps as the same user, and fortunately they provide lots of config options for cases like this), and I have a mounted dataset using NFSv4 ACLs (with various SMB shares inside) for media that works just fine. (All I had to do was make sure the plex user had inherited read access in the dataset's ACL.) I'm still on Angelfish, but I expect with the Bluefin update I'll just flip the flag and never think of it again. :)
 
Last edited:

NickF

Guru
Joined
Jun 12, 2014
Messages
760
Giving containers access to files on the host system, at least in principle, breaks the whole concept of containers. While I understand the use case here, I'm not sure what they could have done differently to fix it. Documentation, beyond some other posts here and on Reddit, seems lacking, though, and I'm not sure this was called out loudly enough in the release notes for Bluefin.

In any case, this change is going to break things for a lot of folks.
 

browntiger

Explorer
Joined
Oct 18, 2022
Messages
58
Giving containers access to files on the host system, at least in principle, breaks the whole concept of containers.

I completely disagree. I view Kubernetes pods as immutable: something the k8s engine can destroy and recreate at any time, on a new IP or on another host, whenever it chooses. Having a small PVC is OK, e.g. to store runtime settings derived from the creation parameters. Having large storage inside a pod is definitely not kosher.

The designers of Kubernetes specifically defined volumes as the means to store data. If a pod fails or needs upgrading, the master nodes can schedule a new pod on a worker with a new IP. Normally we don't need to back up a pod, as it has no data of its own; we don't care whether it lives or dies, and we can set replicas=0 at any time and kill it.

So why break this with PVC storage?
 

NickF

Guru
Joined
Jun 12, 2014
Messages
760
I completely disagree. I view Kubernetes pods as immutable [...] So why break this with PVC storage?
Oh I hear you, I understand the argument. I'm not even saying your opinion doesn't have validity, as this has been a feature since the beginning. My only point was that such access to the host filesystem partially defeats the purpose of separating the application out into a container in the first place.

IMO, you should probably do this with file sharing, like NFS. PVCs are good for data that needs to be saved, but doesn't need to be shared. But when multiple apps or systems need to access the same files, it really should be done by a file sharing protocol.
 

mgoulet65

Explorer
Joined
Jun 15, 2021
Messages
94
Given this seemingly sensible restriction, wouldn't it make sense to have an option to mount an SMB share in an app (like there currently is for an NFS share)?
 

MisterE2002

Patron
Joined
Sep 5, 2015
Messages
211
I was searching for more details on what the warning "Disabling validation of host path can result in a data loss." could entail, but Kubernetes documentation and internet sources were not very helpful for me.
I am not sure, but I assume that if you disable validation and the host path does not exist, the container just spins up anyway. The container mapping is still created but points to nothing, so writes land IN the container, meaning that if you restart the container, all that data is gone. Not verified.
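
(If that speculation is right, an app could cheaply guard against the failure mode by refusing to write into an unmounted path. A hypothetical Python sketch:)

Code:
import os
import sys

def require_mountpoint(path: str) -> None:
    # If the bind mount silently failed, writes would land in the container's
    # own filesystem and vanish on restart; refuse to start in that case.
    if not os.path.ismount(path):
        sys.exit(f"{path} is not a mountpoint; refusing to start")

require_mountpoint("/Downloads")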

That said, disabling this validation just to avoid container failures is a dirty workaround, because technically the setup seems to work fine. If I first map a dataset to an app and later share it, everything works; if I reverse the order, it does not.

A couple of tickets have been filed about this issue. See: https://ixsystems.atlassian.net/browse/NAS-119335
 

Mr.Dan

Cadet
Joined
Dec 16, 2022
Messages
2
I have found a workaround: if you delete the SMB share and then start the app, you can remount the share once the app is running. This is kind of a pain to do every time you need to restart an app, but this way you won't have to uncheck the "Validate host path" setting.
 

NickF

Guru
Joined
Jun 12, 2014
Messages
760
I have found a workaround: if you delete the SMB share and then start the app, you can remount the share once the app is running. [...]
Sounds more like a bug than a feature to me :P
 

Daisuke

Contributor
Joined
Jun 23, 2011
Messages
1,038
I have found a workaround.
I don't have any SMB issues, and I don't have Validate Host Path disabled, either, since disabling it introduces system and app instability. You should definitely not disable it; I have no idea who started this fashion of disabling that option. It's all about how you set up your SMB shares and dataset permissions; please see the Bluefin Upgrade Checklist for more details. SMBv3 is more performant than NFS and fully supported on macOS, for example.
 
Last edited:

danb35

Hall of Famer
Joined
Aug 16, 2011
Messages
15,466
It's all about how you set up your SMB shares and dataset permissions; please see the Bluefin Upgrade Checklist for more details.
...and your answer is to share the whole pool. That does avoid this issue (though IMO that's also a bug), but it's hardly a best practice.
 