Too many open files

Gohzio

Cadet
Joined
Mar 30, 2022
Messages
3
Hello,

I've been using FreeNAS/TrueNAS for a while now and have been running Transmission on my NAS for the longest time. I never really had an issue until the last year or so. My problem is that, apparently at random, Transmission crashes with the error /usr/local/share/transmission/web/index.html (Too many open files).

The system:
TrueNAS
Transmission running in a jail
5.26 TiB pool, 50% used
i3-8100 CPU @ 3.60GHz
16 GB RAM

I'm a noob when it comes to FreeNAS and have been trying for about a year to fix this. Every Google search I've done has come back with really old fixes that either don't work or aren't designed for FreeNAS. I've gone through EVERY Google search result and have been unable to fix this issue.

An added weirdness is that I can HEAR my NAS when it crashes. It makes a kind of high-pitched whine. It is not the disks spinning, and the NAS otherwise works fine; I can still browse my files. It is not overheating, and I've run SMART tests with no errors. I also have Pi-hole running in a VM, and it doesn't go down when Transmission crashes. It seems it is ONLY the jail that crashes.

Currently I'm only seeding torrents from here (39), but in the past I've done many, MANY more without issues. I'll happily supply additional information if needed.

In my web searches I came across what seems to be the issue Githublink and this resolution githublink (fstat says I crash at 1024 open files). Most of it is gobbledegook to me, but I'm fairly sure I can't apply that fix from inside a jail on TrueNAS (and even if it were possible, I wouldn't know where to start). It is also really old and for a different version of Transmission (I'm currently running 3.00_4).
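
In case it helps anyone who understands this better, I believe these are the commands to check the relevant numbers from a shell inside the jail (mostly pieced together from those old posts, so the exact knobs may be wrong for TrueNAS):

Code:
# per-process open-file limit for the current shell environment
ulimit -n

# how many descriptors transmission-daemon actually has open right now
fstat -p $(pgrep transmission-daemon) | wc -l

# kernel-wide ceilings (the jail shares the host kernel, so these come from the host)
sysctl kern.maxfiles kern.maxfilesperproc

If the daemon limit really is 1024, raising it (and/or the kernel ceilings) is presumably the direction to go, but that's the part I don't know how to do cleanly inside a TrueNAS jail.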

I was set to give up and install qBittorrent, but I couldn't get that to work either, so I thought if I'm going to come here and ask for help anyway, I might as well try to fix Transmission first.

Thanks

Goh
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Memory?
It's what you might call a Wild Ass Guess.

Can't think why it would be making a noise, though.
 

Benji99

Cadet
Joined
Jan 30, 2023
Messages
9
Chiming in. I'm also hitting the same issue when trying to access/move files from a Mac over SMB on my TrueNAS SCALE box.

Seeing a ton of these in /var/log/samba4/log.smbd:

Code:
[2023/02/14 14:19:52.247077,  1] ../../source3/modules/vfs_ixnas.c:131(ixnas_get_native_dosmode)
  ixnas_get_native_dosmode: .: open() failed: Too many open files
[2023/02/14 14:19:52.315195,  1] ../../source3/modules/vfs_ixnas.c:131(ixnas_get_native_dosmode)
  ixnas_get_native_dosmode: .: open() failed: Too many open files
[... the same message repeats every few seconds ...]
[2023/02/14 14:20:10.247688,  1] ../../source3/modules/vfs_ixnas.c:131(ixnas_get_native_dosmode)
  ixnas_get_native_dosmode: .: open() failed: Too many open files
[2023/02/14 14:20:13.228348,  1] ../../source3/modules/vfs_ixnas.c:131(ixnas_get_native_dosmode)
  ixnas_get_native_dosmode: .: open() failed: Too many open files


The TrueNAS system was recently set up and isn't using any auxiliary parameters.
ACL type is set to SMB/NFS4
TrueNAS-SCALE-22.12.0
 

sammael

Explorer
Joined
May 15, 2017
Messages
76
On SCALE, at least for me, it's related to the Kubernetes bug https://github.com/kubernetes/kubernetes/issues/64315, fixed by setting the following sysctls: https://github.com/kubernetes/kubernetes/issues/64315#issuecomment-904103310 (disclaimer: decide for yourself whether you can afford such numbers; for reference my server has 64 GB RAM and half of it sits empty because SCALE is scared to use more than 50% for ARC)
Code:
sudo sysctl -w fs.inotify.max_user_watches=1048576
sudo sysctl -w fs.inotify.max_user_instances=8192
sudo sysctl -w vm.max_map_count=524288
To persist them, do the same in the UI:
[screenshot: 1684611464671.png]
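
For anyone who prefers the command line: on a plain Linux box the equivalent of that screenshot would be a sysctl.d drop-in like the sketch below. On SCALE itself the UI entry above is the supported route, and I can't promise hand-edited files survive an update, so treat this purely as an illustration (the file name and path are just examples):

Code:
# write the same values to a drop-in file (name/path chosen arbitrarily here)
cat <<'EOF' > /etc/sysctl.d/90-inotify.conf
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 8192
vm.max_map_count = 524288
EOF

# reload all sysctl configuration files
sysctl --system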
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
9,554
[2023/02/14 14:20:10.247688, 1] ../../source3/modules/vfs_ixnas.c:131(ixnas_get_native_dosmode)
ixnas_get_native_dosmode: .: open() failed: Too many open files

That's not necessarily an SMB problem (it's just reporting why it failed to read the DOS mode on a file). It might be that you have one or more apps with an fd leak or inotify handle leak.
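
If you want to see which process is actually hoarding descriptors, something along these lines will usually narrow it down (a rough sketch, run as root on SCALE; the output format is ad hoc):

Code:
# open descriptors per process, biggest consumers first
for p in /proc/[0-9]*; do
  printf '%s %s\n' "$(ls "$p/fd" 2>/dev/null | wc -l)" "$(cat "$p/comm" 2>/dev/null)"
done | sort -rn | head

# inotify instances per PID (each anon_inode:inotify descriptor is one instance)
find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null \
  | cut -d/ -f3 | sort | uniq -c | sort -rn | head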
 

SKiZZ

Dabbler
Joined
Feb 17, 2022
Messages
27
On SCALE, at least for me, it's related to the Kubernetes bug https://github.com/kubernetes/kubernetes/issues/64315, fixed by setting the following sysctls: https://github.com/kubernetes/kubernetes/issues/64315#issuecomment-904103310 (disclaimer: decide for yourself whether you can afford such numbers; for reference my server has 64 GB RAM and half of it sits empty because SCALE is scared to use more than 50% for ARC)

To persist them, do the same in the UI:
[attachment 66727]
This is still an issue. Is there a way to size these values based on your RAM and the Kubernetes apps you run?
 

sammael

Explorer
Joined
May 15, 2017
Messages
76
This is still an issue. Is there a way to size these values based on your RAM and the Kubernetes apps you run?
No idea. For what it's worth, I use the same numbers on a server with 32 GB RAM and can't see any adverse effects. YMMV.
 

SKiZZ

Dabbler
Joined
Feb 17, 2022
Messages
27
No idea. For what it's worth, I use the same numbers on a server with 32 GB RAM and can't see any adverse effects. YMMV.
Thanks for the reply. It would be nice if this were configurable in the install flow, or if it did some auto-calculation. I have one box with 128 GB.
 

sammael

Explorer
Joined
May 15, 2017
Messages
76
Thanks for the reply. It would be nice if this were configurable in the install flow, or if it did some auto-calculation. I have one box with 128 GB.
I don't think there's an easy or automatic answer to that; it seems to be just an arbitrary number set by whichever k8s implementation or app. Going by the comment I took the "fixed" values from (https://github.com/kubernetes/kubernetes/issues/64315#issuecomment-904103310), he found them in some other comment about hardening k8s, where the default was set to 8192, if I read it correctly.

So I guess you could follow the steps he outlines: query the sysctls inside the pods and the inotify watches on the node, then increase them until the error stops happening. That is in no way automatic and sounds rather time-consuming and tedious, which is why I just copied some numbers from a random post on the internet. In the end, even the comment's author admits about one of the values, and I quote, "I don't remember why I set this one - copied from some answer elsewhere".
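
Since "query the inotify watches on the node" is vague as written, this is roughly how I'd check it; again, a sketch copied together from the usual answers, not an official procedure:

Code:
# current ceilings
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances

# watches per inotify descriptor: turn each /proc/PID/fd/N symlink into the matching
# /proc/PID/fdinfo/N and count its "inotify wd:" lines
find /proc/*/fd -lname 'anon_inode:inotify' -printf '%hinfo/%f\n' 2>/dev/null \
  | xargs grep -c '^inotify' \
  | sort -t: -k2 -rn | head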
 