SOLVED NFS slow directory listing

Status
Not open for further replies.

brm

Cadet
Joined
Jul 6, 2013
Messages
2
I read the sticky thread about browsing directories being slow but that was mainly for CIFS and I didn't really get an answer from that thread. Maybe I missed the solution but the thread says solved?

I just set up a FreeNAS box with 4x3TB WD Red HDDs, and when using NFS, listing a directory with lots of files or sub-directories is extremely slow (30-60 seconds).

Additionally, I am trying to rsync data from the local drive to the NFS share, but there is a 3-5 second pause between files. While larger files are transferring, the speed saturates the machine's network port, so I don't think there is a throughput issue between the local drive and the NAS itself.

System: E3-1240, 16GB, 4x3TB RED HDD

Any ideas or suggestions? Is there a better way?

Thanks!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yeah, that thread discusses CIFS shares, but there really wasn't a solution that worked for everyone. More than likely, part of the problem is that different people had different problems, so one person might say that X fixed their issue while another said it didn't.

The only advice I can provide is to troubleshoot on your own until you find the answer. I will tell you that as soon as I switched to Linux for my desktops the problem went away, but that was CIFS. Maybe try a Linux live CD like Linux Mint 15 to see whether the directories are also slow on Linux. At least that will help narrow down whether the issue is with the client or with the network/server.

I found that my antivirus was the cause of my problems; as soon as I got rid of it, things were much better. Naturally, using Windows on the internet with no antivirus is just... stupid. So I switched to Linux and I'm plenty happy here.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
A subdirectory is basically a special type of file for purposes of a UNIX directory listing.

So how many files are we talking about here?
 

brm

Cadet
Joined
Jul 6, 2013
Messages
2
500 folders, with no files in any of them.

The client is CentOS.
 

MisterBennie

Cadet
Joined
Jan 2, 2014
Messages
4
Hi All,

I had the same problem and solved it in a kind of silly way.
It's not really a solution to the problem, more a reduction of the symptoms.

History
I'm on FreeNAS 9.2.0, but had the same problem on 8.x:
slow, or even very slow, directory browsing.
Directory browsing was very slow over both NFS and CIFS (Samba).
Once I had browsed to a directory, it loaded very fast the next time I visited it.
But after a few minutes, it would start all over again.
The disks were not in standby, and almost none of the 16 GB of memory was in use.
Throughput was very good (100-116 MByte/sec).

(kind of) Solution ;-)
On my VMware whitebox I have an Ubuntu Server 13.10 installation.
On this Ubuntu installation, I have a cron job which does a directory listing of the mounted Samba share every minute.
From the moment I started running this cron job, the response of the system (from Windows 7 & 8, or my OpenELEC HTPC over NFS) has been almost immediate, instead of a 10-20 second wait.
I don't know what the problem is, but I was not the only one with it.

What I notice now is that memory usage sits at 12 out of 16 GB continuously.
A directory with 8800 files loads in less than a second via Samba.

Alternative (kind of) solution
Mount the shared drives on FreeNAS itself and run the cron job from there.
I haven't tried it yet, but it should work.
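The per-minute cron job described above can be as simple as a recursive listing whose output is thrown away. A minimal sketch, assuming the share is mounted at a hypothetical /mnt/share:

```shell
#!/bin/sh
# Warm the directory cache by recursively listing a mounted share.
# The listing output is discarded; we only want the metadata to be read.
warm_cache() {
    ls -alR "$1" > /dev/null 2>&1
}

# Example call (hypothetical mount point -- adjust to your setup):
# warm_cache /mnt/share
```

Saved as, say, /root/warm_cache.sh, it could then be scheduled with a crontab line such as `* * * * * /root/warm_cache.sh` (path and schedule are examples, not taken from the thread).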
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
Hi

I tested MisterBennie's suggestion with 3 PCs connecting to a CIFS share containing a directory with more than 32,000 files.
At first I opened it in the Windows 7 Explorer on one PC and it took a very long time; the second time on the same PC it was a lot faster (and acceptable).
I then started a second PC, and the directory opened as fast as the second (fast) open on the first PC.
Both had a wireless connection.
To be sure, I then tried it on my "main" PC, which has a wired connection to the FreeNAS box, and that was faster still. Normally it's also very slow.

So the CIFS directory info is cached on the FreeNAS box after a CIFS request. A local ls -alR on the FreeNAS box doesn't seem to be enough.
 

MisterBennie

Cadet
Joined
Jan 2, 2014
Messages
4
Hi Again,

Just tested this "solution" by accessing the share from FreeNAS itself via Samba, and it works too.
I created a cron job (via the web UI) which executes the following every minute.
Directory browsing is now very fast, as it should be.

Cron:
smbclient //Servername/Sharename -N -c 'cd "directory" ; ls '

The -N makes sure that no password is requested.
"directory" is the full path to one of your directories (the double quotes allow spaces in your path, no escape characters needed).
After that, an ls is performed; I didn't test whether it is actually needed.

Example
smbclient //Freenas/MyShare -N -c 'cd "/Video/2 - Movies" ; ls '
Try the command at your command prompt to test if it works.
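For reference, the equivalent entry in a plain crontab (rather than the FreeNAS web UI) might look like the following; the server, share, and directory names are the examples from above, and the output redirection is an added assumption to keep cron from mailing the listing:

```shell
# m h dom mon dow  command -- run every minute, discard output
* * * * * smbclient //Freenas/MyShare -N -c 'cd "/Video/2 - Movies" ; ls' > /dev/null 2>&1
```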

The odd thing is, my NFS share, which was slow just like CIFS, is now responding immediately too.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
Thanks

Your smbclient command does indeed solve the slow directory problem.

A tip: smbclient's ls supports recurse, so:
smbclient //Servername/Sharename -N -c 'cd "directory" ; recurse ; ls '

This will run through all the subdirectories too.


BTW, depending on the security settings, it may be necessary to supply a username and password (if guest access is not allowed).
 

MisterBennie

Cadet
Joined
Jan 2, 2014
Messages
4
I tried the recursive version too, and the first time it took much longer than the runs after it.

Hopefully this "temporary" solution will cover the slow directory browsing for now, until we know why it is so slow.

P.S. I removed the recursive ls, as it seems it is not needed.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
...
p.s. I removed the recursive ls as it seems that it is not needed.

The recursion makes it possible to run it on the top-level directory and have it "do" all subdirectories, which is nice for keeping all of a share's directories in cache.
 

Gmdfunk

Dabbler
Joined
Jan 17, 2014
Messages
10
When I tried this fix for the slow directory listing, I got the error:

Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.13]
tree connect failed: NT_STATUS_ACCESS_DENIED

I tried this both in PuTTY from my desktop and at the terminal on my FreeNAS box.

I am logged in as root in both places, which may be causing my problem? But if it doesn't work as root from a terminal, how will it work as a cron job?

Any help with this problem would be appreciated.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
With a cron job you can schedule it under a specific user, and this user has to have sufficient "CIFS rights" to the selected directories. It's a bad idea to give root that kind of access over SMB.
Alternatively, it's possible to add a username and password to the smbclient command.
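One way to add credentials without exposing the password on the command line (where it would show up in the process list) is smbclient's -A authentication-file option. A sketch; the username, password, and file path below are hypothetical:

```shell
#!/bin/sh
# Create an smbclient authentication file so credentials stay out of
# the crontab and the process list. All names here are made up.
AUTH_FILE="$(mktemp)"
cat > "$AUTH_FILE" <<'EOF'
username = warmup
password = secret
EOF
chmod 600 "$AUTH_FILE"   # readable by the owner only

# The cron command then uses -A instead of -N, e.g.:
#   smbclient //Servername/Sharename -A "$AUTH_FILE" -c 'cd "directory" ; ls'
echo "auth file written to $AUTH_FILE"
```

In practice the file would live at a fixed path (not mktemp) so the cron entry can reference it.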
 

MisterBennie

Cadet
Joined
Jan 2, 2014
Messages
4
The recursive makes it possible to run it at the top level directory and it "does" all subdirectories. --> nice to keep all directories from a share in cache.


In my case it seemed that most of the cache was flushed because of this.

So I'll stick with my original post, where I just perform an "ls" on one (deep) directory.
Furthermore, I disabled "Enable powerd" (Power Saving Daemon). I don't know whether that was causing a delay too, but now everything feels very responsive.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
powerd is less relevant than it was six to eight years ago. For the most part, modern Intel parts seem to manage this very well on their own. I believe the AMD stuff does too. We hit the performance wall back in the mid-2000's, where more watts wasn't generating sufficiently more speed. The designs at that time were focused on clock speeds rather than energy efficiency. With the advent of multicore and virtualization, and data centers being unable to support solutions that wanted to blow off tens of kilowatts for a fully loaded rack... and the high density solutions being so expensive... basically CPU manufacturers went back to the drawing board.

These days, the best bet seems to be to get a CPU larger than what you expect to need, keeping it always in a non-peak-load scenario. This is a win all around; power utilization remains lower as compared to a more "right-sized" solution, plus it gives you legs to let the thing really run for the unexpected task you never planned for.

The ultimate test is to get a Kill A Watt and plug it in, boot FreeNAS, check the wattage at idle, then turn on powerd and check the wattage at idle again. They're probably almost the same on any modern system.
 

Neuffy

Cadet
Joined
Feb 4, 2014
Messages
1
Thank you so much everyone!
NT_STATUS_ACCESS_DENIED isn't an issue when running the command as a cron job (under a user with standard permissions), and the recursive command works well.

My only remaining question is how often the cron job should be scheduled to run. Does the cache only need updating for new files and persist over long periods, or does it need regular refreshing?
 

wilsonics

Cadet
Joined
Feb 7, 2013
Messages
6
Great thread, everyone. Detailed and very helpful.

Can anyone adapt this for an AFP share for me?

I have the same issue with slow-listing AFP shares, so running a cron ls using an AFP connection would be awesome.

I don't want to enable SMB because I've never gotten the same transfer speeds as AFP, and there are no Win machines in the house. I know, Apple wants you to go with SMB now, but I prefer AFP.

Many thanks.

Currently running 9.2.1.3-RELEASE-X64.
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
From NFS Shares Doc..

If your clients are receiving "reverse DNS" errors, add an entry for the IP address of the FreeNAS® system in the "Host name database" field of Network → Global Configuration.

If the client receives timeout errors when trying to mount the share, add the IP address and hostname of the client to the "Host name database" field of Network → Global Configuration.

I suggest checking this for NFS; it helped me out initially.
 

DCswitch

Explorer
Joined
Dec 20, 2013
Messages
58
If your command does all the sub-directories then would this work from the root?
smbclient //Freenas/MyShare -N -c 'recurse ; ls '

I'm not sure I understand the point of changing into the sub-directory.
 

AlainD

Contributor
Joined
Apr 7, 2013
Messages
145
If your command does all the sub-directories then would this work from the root?
smbclient //Freenas/MyShare -N -c 'recurse ; ls '

I'm not sure I understand the point of changing into the sub-directory.

It can be run from the root, but sometimes that's unnecessary; the recursion is only a big time saver with big directories.
 

DCswitch

Explorer
Joined
Dec 20, 2013
Messages
58
It could be run from root, but sometimes is unneeded to do from root. It's only a big time saver when running with big directory's.
Understood.
The reason I thought of this originally is that I have multiple large directories. Do you think it's better to run just one cron job from the root, or to make multiple cron jobs, one per large directory?
Also: do you think running this every minute is overkill? I also want to experiment with multiple cron jobs alternating minute by minute; for instance, with two cron jobs, running one on the odd minutes and the other on the even minutes.
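The odd/even alternation can be expressed directly with cron's step syntax in the schedule fields; a sketch with two hypothetical share names:

```shell
# Even minutes (0, 2, 4, ...): warm ShareA
0-58/2 * * * * smbclient //Freenas/ShareA -N -c 'recurse ; ls' > /dev/null 2>&1
# Odd minutes (1, 3, 5, ...): warm ShareB
1-59/2 * * * * smbclient //Freenas/ShareB -N -c 'recurse ; ls' > /dev/null 2>&1
```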

I was also experimenting with running one cron job every two or three minutes (I was thinking it could save energy), but the jury is still out. It takes time to test this properly, because every time I go in to test, I'm refreshing the directories myself.
Has anyone else already tried this?
 