NFS sharing in SCALE

Kevin.w

Dabbler
Joined
Dec 30, 2020
Messages
10
I mounted a SCALE NFS share on another machine running Ubuntu, then created a subdirectory abc in /mnt, but I can't delete it: the system reports "device busy".
That's because the subdirectory abc has been mounted at /mnt/abc automatically. The same thing works fine on TrueNAS CORE.
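Roughly what I did on the Ubuntu client (a sketch; the server IP and export path are examples matching my setup later in the thread):

Code:
# On the Ubuntu client:
sudo mount -t nfs 192.168.0.198:/mnt/POOL/nfs /mnt
sudo mkdir /mnt/abc
sudo rmdir /mnt/abc
# rmdir: failed to remove '/mnt/abc': Device or resource busy
mount | grep abc
# /mnt/abc shows up as an automatically mounted NFS filesystem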
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
You shouldn't add subdirectories to /mnt on TrueNAS to begin with.
It might cause issues sooner or later, isn't actively supported, and (above all) is almost never actually needed.
 

Kevin.w

Dabbler
Joined
Dec 30, 2020
Messages
10
You shouldn't add subdirectories to /mnt on TrueNAS to begin with.
It might cause issues sooner or later, isn't actively supported, and (above all) is almost never actually needed.
First of all, thanks for your reply.
I want to know how to use NFS sharing in SCALE.
Is it not supported yet?
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
It should support NFS sharing just fine, but it sounds a bit like you're blaming NFS while you've added all sorts of extra layers, like self-created folders in /mnt, that could just as well be the culprit.

Please try NFS the way it's meant to work:
create a dataset and a share; if that doesn't work, get back to us and we'll help you out.
At this moment we can't be sure where the problem is: your CLI work, or the system...

(Actually, in both cases: get back to us... But you get the point.)
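If you want to double-check the share outside the UI, the middleware CLI can dump what's actually configured (a sketch; midclt ships with TrueNAS):

Code:
# On the SCALE host: list configured NFS shares and the NFS service settings
midclt call sharing.nfs.query
midclt call nfs.config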
 

Kevin.w

Dabbler
Joined
Dec 30, 2020
Messages
10
It should support NFS sharing just fine, but it sounds a bit like you're blaming NFS while you've added all sorts of extra layers, like self-created folders in /mnt, that could just as well be the culprit. [...] Create a dataset and a share; if that doesn't work, get back to us and we'll help you out. [...]
Here are my steps:

1. Create a dataset named nfs in SCALE and set its permissions.
[screenshot: 1.jpg]

2. Configure the NFS share.
[screenshot: 2.jpg]

3. Configure the NFS service.
[screenshot: 3.jpg]

4. On another machine, mount the NFS share at /mnt and create a subdirectory abc. At this point I can't delete it, because the subdirectory abc has been mounted at /mnt/abc automatically. The same steps work fine on TrueNAS CORE.
[screenshot: 4.jpg]

I want to know where the problem is.
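For what it's worth, the export can also be checked from the client before mounting (a sketch; the server IP is an example, and showmount queries the NFSv3 export list):

Code:
# From the client, confirm the export is actually published:
showmount -e 192.168.0.198
# Expected: the dataset's path (e.g. /mnt/<pool>/nfs) listed for the allowed network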
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Hmm, that's interesting indeed. Did you try a manual round of chmods?
It might be some interference from ACLs being set...
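Something like this, as a sketch (run on the SCALE host; the dataset path is an example):

Code:
# Check whether any POSIX ACLs are set on the dataset:
getfacl /mnt/POOL/nfs
# As a test, fall back to plain mode bits:
chmod -R 0775 /mnt/POOL/nfs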
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@Kevin.w I've just loaded TrueNAS SCALE 20.12 in a VM. It uses the nfs-ganesha server, not the standard nfs-kernel-server.

Code:
truenas# systemctl status nfs-ganesha
● nfs-ganesha.service - NFS-Ganesha file server
     Loaded: loaded (/lib/systemd/system/nfs-ganesha.service; enabled; vendor preset: disabled)
     Active: active (running) since Sat 2021-01-02 00:35:39 PST; 1min 1s ago
       Docs: http://github.com/nfs-ganesha/nfs-ganesha/wiki
    Process: 14351 ExecStart=/bin/bash -c ${NUMACTL} ${NUMAOPTS} /usr/bin/ganesha.nfsd ${OPTIONS} ${EPOCH} (code=exited, status=0/SUCCESS)
   Main PID: 14352 (ganesha.nfsd)
      Tasks: 276 (limit: 9486)
     Memory: 48.6M
     CGroup: /system.slice/nfs-ganesha.service
             └─14352 /usr/bin/ganesha.nfsd -L /var/log/ganesha/ganesha.log -f /etc/ganesha/ganesha.conf -N NIV_EVENT

Jan 02 00:35:39 truenas.local systemd[1]: Starting NFS-Ganesha file server...
Jan 02 00:35:39 truenas.local systemd[1]: Started NFS-Ganesha file server.


I've not made use of ganesha before, but /etc/ganesha/ganesha.conf appears to hold the export definitions, e.g.:

Code:
truenas# cat  /etc/ganesha/ganesha.conf

NFS_CORE_PARAM {
}
     
EXPORT {
    Export_Id = 1;
    Path = /mnt/Spool/sdata;
    Protocols = 3;
    Transports = TCP;
    Access_Type = None;
    CLIENT {
        Clients = 192.168.0.0/24;
        Access_Type = RW;
    }
    Squash = AllSquash;
    
    Anonymous_Uid = 1000;
    Anonymous_Gid = 1000;
    FSAL {
        Name = VFS;
    }
}
truenas#


The user/group of the dataset sdata is 1000/1000 in my case and no POSIX ACLs have been added. I set mapall in the NFS share config. I see the same behaviour on the client side as you do, plus messages about stale file handles after a few minutes. This needs someone familiar with ganesha to explain whether this is the expected behaviour.
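If anyone wants to dig further, the ganesha log path is visible in the unit's command line above:

Code:
truenas# tail -f /var/log/ganesha/ganesha.log
# watch for export and stale-handle events while the client reproduces the problem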
 

Kevin.w

Dabbler
Joined
Dec 30, 2020
Messages
10
@Kevin.w I've just loaded TrueNAS SCALE 20.12 in a VM. It uses the nfs-ganesha server, not the standard nfs-kernel-server. [...] I see the same behaviour on the client side as you do, plus messages about stale file handles after a few minutes. [...]
Okay, I am not alone :)
I hope someone can figure this out.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@Kevin.w Had another quick look at this and tried a server setting with NFSv4 using the NFSv3 ownership model (i.e. no ID mapping, sec = sys).

My example:
Code:
truenas# cat /etc/ganesha/ganesha.conf


NFS_CORE_PARAM {
}
NFSV4 {
    Allow_Numeric_Owners = true;
    Only_Numeric_Owners = true;
}
EXPORT_DEFAULTS {
    SecType = sys;
}
   

   
   
EXPORT {
    Export_Id = 1;
    Path = /mnt/Spool/sdata;
    Protocols = 3, 4;
    Pseudo = /sdata;
    Transports = TCP;
    Access_Type = None;
    CLIENT {
        Clients = 192.168.0.0/24;
        Access_Type = RW;
    }
    Squash = RootSquash;
   
    Anonymous_Uid = 0;
    Anonymous_Gid = 0;
    SecType = sys;
    FSAL {
        Name = VFS;
    }
}
truenas#


The client mounts the pseudo path, e.g. "mount -v -t nfs4 <IP of TN SCALE>:/sdata ...". This behaves more normally on the client (no additional mounts created for sub-directories), but deleting non-empty sub-dirs causes problems. Don't know if that's due to me using a VM.
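Put together, the client side looks roughly like this (a sketch; substitute the real IP and mount point):

Code:
# Mount the NFSv4 pseudo path, not the filesystem path:
mount -v -t nfs4 <IP of TN SCALE>:/sdata /mnt
# Confirm a single NFS mount, with no auto-mounted subdirectories:
mount | grep nfs4
# Creating and removing an empty directory should now work:
mkdir /mnt/abc && rmdir /mnt/abc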
 

Kevin.w

Dabbler
Joined
Dec 30, 2020
Messages
10
@Kevin.w Had another quick look at this and tried a server setting with NFSv4 using the NFSv3 ownership model (i.e. no ID mapping, sec = sys). [...] This behaves more normally on the client (no additional mounts created for sub-directories), but deleting non-empty sub-dirs causes problems. Don't know if that's due to me using a VM.
I'm sorry to tell you: the physical machine gives the same result.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
I did some more simple testing with my VM config, transferring data to a SCALE NFS share: from a TrueNAS SCALE 20.12 guest VM to its Debian host, and between the SCALE VM and an internal Debian VM within SCALE (nested virtualisation). Just various cp commands, including recursive ones, and directory/file deletes, and it all seems to be problem-free. Not sure why I had the earlier glitch. I don't have any real hardware to install TrueNAS SCALE on at the moment.
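The tests were along these lines (a sketch; the source path is just an example):

Code:
# Copy a directory tree onto the share, then delete it again:
cp -rv /usr/share/doc /mnt/testdir
rm -rf /mnt/testdir
# Both completed without "device busy" or stale file handle errors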

So when you say "the same result", do you mean the same as with NFSv3? Or did you switch to NFSv4 and have a problem deleting non-empty sub-dirs?
 

Kevin.w

Dabbler
Joined
Dec 30, 2020
Messages
10
[...] So when you say "the same result", do you mean the same as with NFSv3? Or did you switch to NFSv4 and have a problem deleting non-empty sub-dirs?
No matter whether I use v3 or v4, virtual machine or physical machine, I cannot delete the subdirectory.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Sorry, I don't have an answer for you, apart from checking your config and perhaps posting the details here. I only had success with an NFSv4 config.
 

Kevin.w

Dabbler
Joined
Dec 30, 2020
Messages
10
Sorry, I don't have an answer for you, apart from checking your config and perhaps posting the details here. I only had success with an NFSv4 config.
It seems I need another try. But this is not the usual way, I think.
 

Kevin.w

Dabbler
Joined
Dec 30, 2020
Messages
10
Code:
NFS_CORE_PARAM {
}
NFSV4 {
    Allow_Numeric_Owners = true;
    Only_Numeric_Owners = true;
}
EXPORT_DEFAULTS {
    SecType = sys;
}
  

  
  
EXPORT {
    Export_Id = 1;
    Path = /mnt/POOL/nfs;
    Protocols = 3, 4;
    Pseudo = /nfs;
    Transports = TCP;
    Access_Type = None;
    CLIENT {
        Clients = 192.168.0.0/24, 192.168.0.110;
        Access_Type = RW;
    }
    Squash = AllSquash;
  
    Anonymous_Uid = 1000;
    Anonymous_Gid = 1000;
    SecType = sys;
    FSAL {
        Name = VFS;
    }
}


Failed again :(
In fact, I failed with v3 (couldn't delete the subdir), and I also failed to mount with v4.
[screenshot: 1609818710196.png]
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
Shouldn't your mount command be "mount -t nfs4 192.168.0.198:/nfs /mnt"?
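In other words (a sketch; the first command is my assumption of what the screenshot shows):

Code:
# Likely what failed: with NFSv4 the filesystem path isn't mountable,
# because v4 clients mount the server's pseudo-filesystem tree:
mount -t nfs4 192.168.0.198:/mnt/POOL/nfs /mnt
# What should work: the Pseudo path from the EXPORT block (Pseudo = /nfs):
mount -t nfs4 192.168.0.198:/nfs /mnt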
 

ivy

Dabbler
Joined
Feb 2, 2021
Messages
12
The user/group of the dataset sdata is 1000/1000 in my case and no POSIX ACLs have been added. I set mapall in the NFS share config. I see the same behaviour on the client side as you do, plus messages about stale file handles after a few minutes. This needs someone familiar with ganesha to explain whether this is the expected behaviour.

Were you ever able to get NFS working reliably without the stale file handles? If so, would you mind explaining how? I'm pretty lost.
 

KrisBee

Wizard
Joined
Mar 20, 2017
Messages
1,288
@ivy My TrueNAS SCALE 20.12 VM install was short-lived and no longer exists. Only the NFSv4 config worked for me, as per #9 above.
 

ivy

Dabbler
Joined
Feb 2, 2021
Messages
12
Ah, okay. Sadly I'm not having the same luck with NFSv4. The stale file handles persist. Thanks anyway!
 