[SOLVED] Unable to mount NFS shares anymore

Bolster3496

Cadet
Joined
Jun 13, 2022
Messages
8
Hi,

I run a TrueNAS 13 server at home, and I let its ZFS pool fill up to 98%, at which point I (obviously) ran into issues, mainly that my NFS shares could no longer be mounted remotely.

After some clean-up (my array is now 78% full, which should be OK), I tried to mount those shares again, without success, either via my regular autofs mounts (on two clients, an Ubuntu server and an Arch Linux desktop) or manually.
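For context, my autofs setup is a plain direct map. The entries below are illustrative, not my exact config (file name and local path are hypothetical):

Code:
# /etc/auto.master — declare a direct map (map file name is hypothetical)
/-    /etc/auto.nfs

# /etc/auto.nfs — mount the share on demand at a fixed local path
/home/mathieu/Vidéos    -fstype=nfs4    tnc.local.mydomain.net:/mnt/storage0/video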

When mounting manually, I don't get much of a clue from the verbose mount output:

Code:
mathieu@radium ~ took 13s
❯ sudo mount -v -t nfs tnc.local.mydomain.net:/mnt/storage0/video ~/Vidéos
[sudo] password for mathieu:
mount.nfs: timeout set for Mon Nov 21 17:16:42 2022
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.100.4,clientaddr=192.168.100.99'

and then it times out.
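For what it's worth, these are the standard client-side probes I know of to check whether the server's NFS services answer at all (a sketch only, output not shown here):

Code:
# list the exports advertised by mountd
showmount -e tnc.local.mydomain.net
# list the RPC services registered on the server
rpcinfo -p tnc.local.mydomain.net
# explicitly test the NFS program, version 4, over TCP
rpcinfo -t tnc.local.mydomain.net nfs 4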

Any idea what I'm missing, and/or how to troubleshoot this?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
Did you try to reboot the server?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
I would have thought a reboot would have brought everything back for you. Normally, once TrueNAS gets above 90% full it goes into a write-safe mode, but you got all the way up to 98%. I don't know why your NFS shares would not mount, but typically, after cleaning up the pool a bit, a reboot is needed to bring the system back.

Well, you'll need to either wait for someone with better suggestions or do some research to see if anyone else has had this problem. Sorry I couldn't help more; I wish it were an easy reboot fix.
 

Bolster3496

Cadet
Joined
Jun 13, 2022
Messages
8
Thanks anyway!
 
Joined
Oct 22, 2019
Messages
3,587
What about the logs on the TrueNAS Core server?
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
How about the output of cat /var/log/messages | grep "mount"?
Providing just today's attempts is enough; there's no need for many days' worth of data. You could also compare it against earlier entries to see if anything has changed.
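Off the top of my head, something like this would keep the paste short (untested, adjust to taste):

Code:
# NFS/mountd-related lines carrying today's syslog "Mon DD" stamp
grep -iE "nfs|mountd" /var/log/messages | grep "$(date '+%b %e')"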

Also, did you do anything to your TrueNAS server between the last boot where it worked and when the problems started showing up? I'm thinking of configuration changes. Also, please follow the forum rules and post your system specs; they may be important for your issue.
 

Bolster3496

Cadet
Joined
Jun 13, 2022
Messages
8
Here is what I get if I grep for "mount" and then for "nfs". As far as I can see, this is not related to my issue:

Code:
root@nas[~]# cat /var/log/messages | grep "mount"
Nov 23 11:21:42 nas.local.2027a.net Trying to mount root from zfs:boot-pool/ROOT/13.0-U2 []...
Nov 23 11:21:42 nas.local.2027a.net Root mount waiting for: CAM
root@nas[~]# cat /var/log/messages | grep "nfs"  
Nov 23 11:21:42 nas.local.2027a.net nfsd: can't register svc name


The whole thing is running on an Asus motherboard with an Intel i5-11400 CPU and 32 GB of DDR4 RAM. The boot disk is a Samsung 980 M.2, and the ZFS pool is twelve 3.5" 4 TB HDDs in two RAIDZ2 vdevs.

I haven't touched the configuration in a while; the issue appeared after the pool got nearly full, without any configuration change.
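For the record, this is how I check the pool usage (standard ZFS commands, pool name taken from my share path):

Code:
# overall pool capacity, including the CAP percentage
zpool list -o name,size,allocated,free,capacity storage0
# per-dataset space accounting (snapshots, children, etc.)
zfs list -o space storage0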
 

Bolster3496

Cadet
Joined
Jun 13, 2022
Messages
8
OK, sorry for the noise. I was finally able to pinpoint the issue: it was related to a change in my network configuration (the domain name went from nas.local to tnc.local).

I spotted and corrected it, and everything fell back into place. It probably kept working for a while on the previous configuration, and only failed when the share was automatically disconnected from my server because of the full pool.
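In case it helps someone else, the mismatch shows up right away if you compare what the client resolves against the name the server actually uses (basic commands, from memory):

Code:
# on the client: does the share's hostname still resolve?
getent hosts tnc.local.mydomain.net
# on the TrueNAS server: which FQDN does it identify as?
hostname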

Thanks for your help
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,970
It's the little things that can have major impacts. Glad you figured it out.
 