Timeouts on SMB Share from Ubuntu


Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I've been experiencing issues like this as well for the past day or so, after updating Ubuntu. I got alerts that my FreeNAS was running at nearly 100% CPU all the time.

[Graph: FreeNAS CPU utilization near 100%]


Looking at the processes, it was almost all smbd. When I turned the SMB service off on FreeNAS, my CPU dropped immediately to near 0 (around 21:20 in the graph below). I then shut down my two Ubuntu boxes and re-enabled the FreeNAS SMB service; CPU stayed around 0.

Finally, I turned on one of my Ubuntu VMs, and you can see the FreeNAS CPU shot back up to about 50-60%.

[Graph: CPU drops to near 0 at ~21:20 when SMB is stopped, then climbs to 50-60% after one Ubuntu VM is powered back on]


For the moment, I've killed the SMB mounts I had configured while I decide whether I want to mess around with the kernel or just move to NFS.

After unmounting the FreeNAS SMB shares on my Ubuntu boxes, my FreeNAS CPU is back where I expect it (I am still using the shares from Windows and Mac without issue):
[Graph: CPU back to expected levels after unmounting the shares on the Ubuntu boxes]




Will be watching this thread closely.
Specifying SMB version 2.1 may fix this problem. Read on...

I wasn't happy with the kernel reversion I tried earlier, so I re-installed Ubuntu 16.04.2 LTS in the VM and experimented with the CIFS-related mount options.

Here is the related /etc/fstab entry before modification. This setup eventually results in exactly the same behavior experienced by me, @BrianAz1, and others on this thread:
Code:
//bandit/media /mnt/media cifs rw,user,auto,suid,credentials=/etc/media-credentials 0 0

Pretty straightforward, huh? When I checked the man page for mount.cifs, I found out that it uses SMB version 1.0 by default, so I specified version 2.1 by adding vers=2.1 to the parameter list. Ubuntu then complained about the remote server not supporting inodes correctly and suggested adding noserverino to hush the warnings, so I added that as well. Here is the new /etc/fstab:
Code:
//bandit/media /mnt/media cifs rw,user,auto,suid,vers=2.1,noserverino,credentials=/etc/media-credentials 0 0
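
To pick up the change without rebooting, I just remounted and checked which dialect was negotiated (a quick sketch; the mount point matches the entry above):
Code:
sudo umount /mnt/media
sudo mount /mnt/media          # re-reads the fstab entry
mount | grep /mnt/media        # the options should now include vers=2.1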

I don't use the default Windows workgroup name ('WORKGROUP'), so I also specified my workgroup name in the Samba configuration file (/etc/samba/smb.conf).
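
For anyone who needs to do the same, the relevant bit of /etc/samba/smb.conf is just the workgroup line in the [global] section (the name below is a placeholder; use your own):
Code:
[global]
   workgroup = MYGROUP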

I've been running this configuration about 6 hours now and everything looks good so far. I'll report back later if the high CPU utilization rate/runaway process returns.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Specifying SMB version 2.1 may fix this problem. Read on...
...
I've been running this configuration about 6 hours now and everything looks good so far. I'll report back later if the high CPU utilization rate/runaway process returns.
It's been over 12 hours and I still haven't seen the runaway smbd process w/high CPU utilization rate. So, for now, using SMB version 2.1 seems to be a fix for this problem.
 

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73
I haven't seen anything in the samba mailing lists about this issue. I'm wondering if there's a bug here specifically related to FreeNAS + Samba and new features in the latest cifs-utils.
I haven't gotten around to setting up a new Ubuntu VM yet, but I will do that shortly and post the debug log as you requested. However, the fact that it isn't fixed in kernel 4.8 onwards tells me it's a FreeNAS + Samba issue.
 

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73
I've been experiencing issues like this as well for the past day or so, after updating Ubuntu. I got alerts that my FreeNAS was running at nearly 100% CPU all the time.
...
Will be watching this thread closely.
Are you using VMs or physical boxes?
 

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73
It's been over 12 hours and I still haven't seen the runaway smbd process w/high CPU utilization rate. So, for now, using SMB version 2.1 seems to be a fix for this problem.
Awesome! What minimum and maximum SMB server protocol have you selected in FreeNAS?
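
In case it's easier than digging through the GUI, the effective values can also be read from the running config in the FreeNAS shell (quick sketch; if nothing prints, the Samba defaults are in effect):
Code:
testparm -s 2>/dev/null | grep -i protocol    # add -v to include defaulted parameters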
 

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73
Increase logging to "debug" in SMB config, reproduce the problem, then post /var/log/samba4/log.smbd here.
Here you go.
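
For reference, after turning the SMB log level up to debug I just reproduced the timeout from the Ubuntu client and watched the log grow before zipping it up (path as given above):
Code:
tail -f /var/log/samba4/log.smbd    # watch for the repeating entries while reproducing the problem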
 

Attachments

  • log.smbd.zip
    1.6 MB

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73
Specifying SMB version 2.1 may fix this problem. Read on...
...
I've been running this configuration about 6 hours now and everything looks good so far. I'll report back later if the high CPU utilization rate/runaway process returns.

Adding vers=2.1 forces the mount to use root as the owner of the share, which creates permission issues for me.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Adding vers=2.1 forces the mount to use root as the owner of the share, which creates permission issues for me.
The man page for mount.cifs lists several options for setting the UID and GID ownership of the share; did you try experimenting with these options?
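
Something along these lines, for example (the uid/gid values below are placeholders; substitute the user and group that should own the files):
Code:
//bandit/media /mnt/media cifs rw,user,auto,suid,vers=2.1,noserverino,uid=1000,gid=1000,credentials=/etc/media-credentials 0 0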
 

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73
The man page for mount.cifs lists several options for setting the UID and GID ownership of the share; did you try experimenting with these options?
I am aware of the uid and gid options, but I am using the following to mount the share, and adding vers=2.1 ignores the user (i.e., the credentials) doing the mounting and replaces it with root:

Code:
//10.10.10.10/config  /docker/config  cifs  gid=117,credentials=/home/docker/.smbcredentials,iocharset=utf8,sec=ntlmssp  0  0
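
If I do give vers=2.1 another try, I will probably pin the ownership explicitly along these lines (untested sketch; the uid value is just a guess for my docker user):
Code:
//10.10.10.10/config  /docker/config  cifs  uid=999,gid=117,vers=2.1,credentials=/home/docker/.smbcredentials,iocharset=utf8,sec=ntlmssp  0  0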
 

jmccoy555

Cadet
Joined
Dec 24, 2013
Messages
9
vers=2.1 has done the trick for me on 11-RC2. Thanks, the fans spinning up to scream mode was driving me mad!!!

Sent from my Pixel C using Tapatalk
 

BrianAz1

Dabbler
Joined
Aug 1, 2012
Messages
12
How about increasing samba logging to "debug", reproducing the problem, then posting a debug file? ;)

Here you go.

Here's mine... Thanks for looking.

Note: I had to chop my file in half because it was so large. But the entire file only spanned a minute or so... I'm sure you'll get the gist of the issue.
 

Attachments

  • log.smbd.zip
    1,013.8 KB

BrianAz1

Dabbler
Joined
Aug 1, 2012
Messages
12
Specifying SMB version 2.1 may fix this problem. Read on...
...
I've been running this configuration about 6 hours now and everything looks good so far. I'll report back later if the high CPU utilization rate/runaway process returns.


This is what my FreeNAS SMB/CIFS mounts looked like on my Ubuntu Plex server (.101 is FreeNAS); notice they do reference version 1.0.

Code:
brian@ChandlerPlex:~$ mount | grep .101
//192.168.30.101/OptimizedTV on /mnt/FreeNAS/OptimizedTV type cifs (rw,relatime,vers=1.0,cache=strict,username=brian,domain=FREENAS,uid=107,forceuid,gid=100,forcegid,addr=192.168.30.101,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,actimeo=1)
//192.168.30.101/OptimizedMovies on /mnt/FreeNAS/OptimizedMovies type cifs (rw,relatime,vers=1.0,cache=strict,username=brian,domain=FREENAS,uid=107,forceuid,gid=100,forcegid,addr=192.168.30.101,unix,posixpaths,serverino,mapposix,acl,rsize=1048576,wsize=65536,actimeo=1)


Like you said, it defaults to 1.0 because my fstab looks like this:
Code:
brian@ChandlerPlex:~$ cat /etc/fstab | grep .101
//192.168.30.101/OptimizedTV	/mnt/FreeNAS/OptimizedTV	  cifs   rw,username=brian,password=XXX,uid=107,gid=100	 0	   0
//192.168.30.101/OptimizedMovies	/mnt/FreeNAS/OptimizedMovies	  cifs   rw,username=brian,password=XXX,uid=107,gid=100	 0	   0
#//192.168.30.101/PlexBackups	/mnt/FreeNAS/PlexBackups	cifs	rw,username=brian,password=XXX,uid=107,gid=100	0	0



However, I just modified fstab per your suggestion to the following:
Code:
brian@ChandlerPlex:~$ cat /etc/fstab | grep .101
//192.168.30.101/OptimizedTV	/mnt/FreeNAS/OptimizedTV	  cifs   rw,vers=2.1,username=brian,password=XXX,uid=107,gid=100	 0	   0
//192.168.30.101/OptimizedMovies	/mnt/FreeNAS/OptimizedMovies	  cifs   rw,vers=2.1,username=brian,password=XXX,uid=107,gid=100	 0	   0
#//192.168.30.101/PlexBackups	/mnt/FreeNAS/PlexBackups	cifs	rw,username=brian,password=XXX,uid=107,gid=100	0	0



Here is what it looks like mounted:
Code:
brian@ChandlerPlex:~$ mount | grep .101
brian@ChandlerPlex:~$ sudo mount -a
brian@ChandlerPlex:~$ mount | grep .101
//192.168.30.101/OptimizedTV on /mnt/FreeNAS/OptimizedTV type cifs (rw,relatime,vers=2.1,sec=ntlmssp,cache=strict,username=brian,domain=FREENAS,uid=107,forceuid,gid=100,forcegid,addr=192.168.30.101,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,actimeo=1)
//192.168.30.101/OptimizedMovies on /mnt/FreeNAS/OptimizedMovies type cifs (rw,relatime,vers=2.1,sec=ntlmssp,cache=strict,username=brian,domain=FREENAS,uid=107,forceuid,gid=100,forcegid,addr=192.168.30.101,file_mode=0755,dir_mode=0755,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,actimeo=1)


I will follow up after several hours to see how this works out. I've configured Plex to optimize ~300 TV shows tonight... that should give it a solid workout.

Thanks!!



Update: 2 hours later (and quite a bit of Plex optimization writing to the mounts), all is well so far:

[Graph: CPU still at normal levels 2 hours in]


~23 hrs later... lots of Plex SMB work... the fix is still holding strong for me:

[Graph: CPU still at normal levels ~23 hours in]
 

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73
VMs on ESXi host
Yes, that's what I suspected. Everyone I see having the issue is running VMs; maybe a coincidence, but most probably not.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
Yes, that's what I suspected. Everyone I see having the issue is running VMs; maybe a coincidence, but most probably not.
Agreed! Lots of moving parts here: ESXi, FreeNAS, Linux.
 

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73
I just saw that an update was released with the latest Samba. Has anyone tested it yet?

Code:
A new update is available for the FreeNAS-9.10-STABLE train.
Version: FreeNAS-9.10.2-U4

Changelog:
#24116 Bug   Expected Update Samba to 4.5.9
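
If anyone wants to confirm what they are running before and after updating, the Samba version can be checked from the FreeNAS shell (quick sketch):
Code:
smbd -V    # should report Version 4.5.9 after the update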
 

Geek Baba

Explorer
Joined
Sep 2, 2011
Messages
73