ewhac (Contributor; joined Aug 20, 2013; 177 messages)
I'm running TrueNAS 13.0-U2. I have an SMB share set up to share home directories of local users in a standalone workgroup (i.e. there is no AD controller). This share was originally created back in the FreeNAS 8.x days (almost ten years ago); the dataset was migrated to new hardware in 2018. The share is primarily accessed by Linux automount clients, but we do have Windows 7 and 11 in the house, and it needs to work just well enough with them to copy files in and out.
I've been struggling with permissions/access issues on this share for years (for example: creating/cloning Git repos on the share from a Linux client has never worked). The problems increased sharply after upgrading to 13.0. Although copying files in/out using `cp` and `mv` works, saving files to the share from inside applications like LibreOffice now throws various permission errors.

I've considered changing the ACL preset on the affected pool/dataset, and then using `setfacl -Rb` on all the files in the dataset to force their ACLs to a sane-ish state (remember, we don't care that much about Windows), but I'm concerned that may break things in even more subtle and unrepairable ways.

While searching for possible hints at fixing the issue, I found this thread describing a similar-ish issue. In that thread, I found this post, which read:
It goes without saying, delete the problematic share, if you have data in the share dataset move it to a new dataset, delete the old (empty now) dataset, recreate it, move data back. Follow the whole procedure, do not reuse the tinkered-with datasets and shares. (if you can, even users).
This prompts several questions:
- Why does it "go without saying" that destroying and recreating the dataset is necessary? (Put another way: Why can't the dataset/share be repaired in place?)
- How should the temporary dataset be created to minimize loss of metadata?
- Can the data be transferred via replication (`zfs send`), or should it be done file by file (e.g. `rsync` or `cp -av`)?
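For concreteness, the `setfacl -Rb` reset I'm contemplating would look roughly like the sketch below. The mountpoint and the 0755/0644 modes are assumptions for illustration; on FreeBSD-based TrueNAS, `setfacl -R` recurses and `-b` strips the extended ACL entries, leaving only the base owner/group/other entries.

```shell
#!/bin/sh
# Sketch only: strip extended ACL entries, then re-apply plain POSIX
# modes. Path and mode values are hypothetical -- adjust to taste.
reset_perms() {
    tree="$1"
    # Remove all extended ACL entries, leaving only the base
    # owner/group/other entries (-R = recurse, -b = strip entries).
    # Guarded in case setfacl is not on the PATH.
    if command -v setfacl >/dev/null 2>&1; then
        setfacl -R -b "$tree"
    fi
    # Re-apply conventional Unix modes.
    find "$tree" -type d -exec chmod 0755 {} +
    find "$tree" -type f -exec chmod 0644 {} +
}

# Example (hypothetical dataset mountpoint):
# reset_perms /mnt/tank/homes
```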
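On the last question, my understanding (hedged, and with hypothetical pool/dataset names) is that the two options behave quite differently: `zfs send` replicates the dataset bit-for-bit, including the very ACLs and xattrs a rebuild is meant to leave behind, while a file-level copy writes fresh files that inherit the destination dataset's new ACL defaults.

```shell
#!/bin/sh
# Option A: zfs replication. Preserves everything bit-for-bit --
# including the problematic ACLs/xattrs -- so it likely defeats the
# purpose of rebuilding the dataset (names are hypothetical):
#   zfs snapshot tank/homes@migrate
#   zfs send tank/homes@migrate | zfs receive tank/homes-new

# Option B: file-level copy. New files inherit the destination
# dataset's fresh ACL defaults; ownership, mode, and timestamps carry
# over. rsync -a behaves similarly (add -A/-X only if you actually
# WANT ACLs/xattrs copied across).
copy_tree() {
    src="$1" dst="$2"
    # cp -a: recursive archive copy preserving mode, ownership,
    # timestamps, and symlinks.
    cp -a "$src/." "$dst/"
}

# Example (hypothetical mountpoints):
# copy_tree /mnt/tank/homes /mnt/tank/homes-new
```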