"First of all, thanks for your reply."

You shouldn't create subdirectories directly under /mnt on TrueNAS to begin with.
It may cause issues sooner or later, it is not actively supported and, above all, it is almost never actually needed.
"Here are my steps: ..."

NFS sharing itself should work fine, but it sounds a bit like you're blaming NFS while you have added all sorts of extra layers, like self-created folders in /mnt, that could just as well be the culprit.
Please try NFS the way it is supposed to work:
Create a dataset and a share; if that doesn't work, get back to us and we'll help you out.
Because at this moment we can't be sure where the problem is: your CLI work, or the system...
(Actually, in both cases: get back to us... but you get the point.)
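For reference, a minimal sketch of that clean path (the pool name, UID/GID and server IP here are just placeholders for the example):

Code:
# on TrueNAS SCALE: create a proper dataset instead of a hand-made folder in /mnt
zfs create Spool/sdata
chown 1000:1000 /mnt/Spool/sdata
# then add an NFS share for /mnt/Spool/sdata via the web UI (Sharing > UNIX Shares (NFS))

# on the Linux client: test the plain mount
mount -t nfs 192.168.0.198:/mnt/Spool/sdata /mnt/test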
"Okay, I am not alone :)"

@Kevin.w I've just loaded TrueNAS SCALE 20.12 in a VM. It uses the nfs-ganesha server and not the standard nfs-kernel-server.
Code:
truenas# systemctl status nfs-ganesha
● nfs-ganesha.service - NFS-Ganesha file server
     Loaded: loaded (/lib/systemd/system/nfs-ganesha.service; enabled; vendor preset: disabled)
     Active: active (running) since Sat 2021-01-02 00:35:39 PST; 1min 1s ago
       Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
    Process: 14351 ExecStart=/bin/bash -c ${NUMACTL} ${NUMAOPTS} /usr/bin/ganesha.nfsd ${OPTIONS} ${EPOCH} (code=exited, status=0/SUCCESS)
   Main PID: 14352 (ganesha.nfsd)
      Tasks: 276 (limit: 9486)
     Memory: 48.6M
     CGroup: /system.slice/nfs-ganesha.service
             └─14352 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT

Jan 02 00:35:39 truenas.local systemd[1]: Starting NFS-Ganesha file server...
Jan 02 00:35:39 truenas.local systemd[1]: Started NFS-Ganesha file server.
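I've not dug into ganesha's logging, but the ExecStart line above shows it writes to /var/log/ganesha/ganesha.log, so that's worth watching while testing mounts:

Code:
# follow the ganesha log while a client mounts/accesses the share
tail -f /var/log/ganesha/ganesha.log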
I've not made use of ganesha before, but /etc/ganesha/ganesha.conf appears to hold the export definitions, e.g.:
Code:
truenas# cat /etc/ganesha/ganesha.conf

NFS_CORE_PARAM {
}

EXPORT {
    Export_Id = 1;
    Path = /mnt/Spool/sdata;
    Protocols = 3;
    Transports = TCP;
    Access_Type = None;
    CLIENT {
        Clients = 192.168.0.0/24;
        Access_Type = RW;
    }
    Squash = AllSquash;
    Anonymous_Uid = 1000;
    Anonymous_Gid = 1000;
    FSAL {
        Name = VFS;
    }
}
truenas#
The user/group of the dataset sdata is 1000/1000 in my case and no POSIX ACLs have been added. I set mapall in the NFS share config. I see the same behaviour on the client side as you do, and messages re: stale file handles after a few minutes. This needs someone familiar with ganesha to explain whether this is the expected behaviour.
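If anyone wants to reproduce the symptom, this is roughly the client-side sequence (server IP and mount point are placeholders):

Code:
# mount the v3 export and check how the mapall/AllSquash mapping shows up
mount -t nfs -o vers=3 192.168.0.198:/mnt/Spool/sdata /mnt/test
touch /mnt/test/probe
ls -ln /mnt/test   # new files should show uid/gid 1000 per Anonymous_Uid/Gid
# after a few minutes, commands on the mount start failing with "Stale file handle"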
"I'm sorry to tell you, the physical machine gives the same result."

@Kevin.w Had another quick look at this and tried a server config with NFSv4 using the NFSv3 ownership model (i.e. no ID mapping, sec = sys).
My example:
Code:
truenas# cat /etc/ganesha/ganesha.conf

NFS_CORE_PARAM {
}

NFSV4 {
    Allow_Numeric_Owners = true;
    Only_Numeric_Owners = true;
}

EXPORT_DEFAULTS {
    SecType = sys;
}

EXPORT {
    Export_Id = 1;
    Path = /mnt/Spool/sdata;
    Protocols = 3, 4;
    Pseudo = /sdata;
    Transports = TCP;
    Access_Type = None;
    CLIENT {
        Clients = 192.168.0.0/24;
        Access_Type = RW;
    }
    Squash = RootSquash;
    Anonymous_Uid = 0;
    Anonymous_Gid = 0;
    SecType = sys;
    FSAL {
        Name = VFS;
    }
}
truenas#
The client mounts the pseudo path, e.g. "mount -v -t nfs4 <IP of TN SCALE>:/sdata ...". This appears to behave more normally on the client (no additional mounts created for sub-directories), but deleting non-empty sub-dirs causes problems. I don't know if that's due to me using a VM.
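For clarity, the failing delete test looks roughly like this (IP and paths are placeholders):

Code:
# mount the NFSv4 pseudo path
mount -v -t nfs4 192.168.0.198:/sdata /mnt/sdata

# create a non-empty sub-directory, then try to remove it recursively
mkdir -p /mnt/sdata/parent/child
touch /mnt/sdata/parent/child/file
rm -r /mnt/sdata/parent   # this is the step that misbehaves here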
"No matter if I use v3 or v4, virtual machine or physical machine, I cannot delete the subdirectory."

I did more simple testing with my VM config, transferring data to a SCALE NFS share: that's a TrueNAS SCALE 20.12 guest VM to its Debian host, and between the SCALE VM and an internal Debian VM within SCALE (nested virtualisation). Just various cp commands, including recursive examples, and directory/file deletes, and it seems to be problem free. Not sure why I had the earlier glitch. I don't have any real hardware to install TrueNAS SCALE on at the moment.
So when you say "the same result", do you mean the same as with NFSv3? Or you switched to NFSv4 and had a problem deleting non-empty sub-dirs?
"It seems I need another try. But this is not the usual way, I think."

Sorry, I don't have an answer for you apart from checking your config and perhaps posting the details here. I only had success with an NFSv4 config.
Code:
NFS_CORE_PARAM {
}

NFSV4 {
    Allow_Numeric_Owners = true;
    Only_Numeric_Owners = true;
}

EXPORT_DEFAULTS {
    SecType = sys;
}

EXPORT {
    Export_Id = 1;
    Path = /mnt/POOL/nfs;
    Protocols = 3, 4;
    Pseudo = /nfs;
    Transports = TCP;
    Access_Type = None;
    CLIENT {
        Clients = 192.168.0.0/24, 192.168.0.110;
        Access_Type = RW;
    }
    Squash = AllSquash;
    Anonymous_Uid = 1000;
    Anonymous_Gid = 1000;
    SecType = sys;
    FSAL {
        Name = VFS;
    }
}
"That's great!"

Shouldn't your mount command be "mount -t nfs4 192.168.0.198:/nfs /mnt"?
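i.e. mounting the Pseudo path /nfs, not the full dataset path. A quick way to double-check what actually got mounted (the mount point is just an example):

Code:
mount -t nfs4 192.168.0.198:/nfs /mnt
nfsstat -m   # lists NFS mounts with the negotiated version and options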