
Slower and slower the more files are stored in the directory

manpro

Neophyte
Joined
Jul 9, 2019
Messages
7
Hello *,

for some reason my NAS gets slower and slower the more files are stored in the directory I want to read. With more than 2000 files in a directory, it becomes so slow that working is no longer possible. (You have time to drink a cup of coffee.)

I don't think the network is the reason: iperf3 shows practically the full possible data throughput in both directions.

"zpool scrub <pool>" ran without any errors.

Installed is 11.2-U5 with 48GB RAM, 6 x 4TB as raidz2 and no tunables are defined.

Does anyone have any idea where I can look?

Thank you very much.

Manu
 

melloa

Dedicated Sage
Joined
May 22, 2016
Messages
1,726
Did you check the read/write performance on the pool itself?
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,167
I found that adding an L2ARC with the "metadata only" setting significantly improves rsync speed (which involves a lot of directory browsing). Subjectively, browsing directories by hand (after the L2ARC cache has become "hot") also seems much faster. I currently use AFP, but posts / guidance here suggest the same benefit for SMB.

As I understand it, every reboot also flushes the L2ARC, so it has to be rebuilt each time. The settings governing how quickly it is rebuilt are part of the auto-tunables (if you have those enabled). I find it currently takes a few rsync passes before the cache is "hot". But the improvement can be dramatic: from 1.5 hrs per backup to 5 minutes (iTunes) or 3.5 hrs to 42 minutes (1.8 TB of images).
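For reference, the "metadata only" setting mentioned above is the ZFS secondarycache property. A minimal sketch, assuming a pool named dps-01 (substitute your own pool or dataset name):

```sh
# Cache only metadata (directory entries, indirect blocks) in the L2ARC
# rather than file data -- this is what speeds up directory walks:
zfs set secondarycache=metadata dps-01

# Verify the setting; child datasets inherit it by default:
zfs get -r secondarycache dps-01
```

The property can be set per dataset, so data-heavy datasets can keep secondarycache=all while directory-browse-heavy ones use metadata only.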
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,268
There are configuration changes in 11.3 with respect to SMB that significantly improve SMB directory listing performance. There's nothing to stop a person from applying the same in 11.2, but I'd rather know what the problem is before throwing them out.
 

manpro

Neophyte
Joined
Jul 9, 2019
Messages
7
Did you check the read/write performance on the pool itself?
Hm ... What would be normal?

root ~ # zpool iostat 2
                capacity     operations    bandwidth
pool          alloc   free   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  1.88K     30   222M   206K
freenas-boot  9.19G   110G      0      0  4.41K  3.09K
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  2.31K      0   295M      0
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  2.55K      0   325M      0
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  4.02K      0   506M      0
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  3.37K    265   429M  4.83M
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  3.69K      0   470M      0
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  2.45K      0   313M      0
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  3.46K    226   439M  4.04M
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  3.16K      0   398M      0
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  2.19K    110   280M   473K
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
dps-01        16.3T  5.44T  2.30K      0   294M      0
freenas-boot  9.19G   110G      0      0      0      0
------------  -----  -----  -----  -----  -----  -----
 

manpro

Neophyte
Joined
Jul 9, 2019
Messages
7
I found that adding an L2ARC with the "metadata only" setting significantly improves rsync speed (which involves a lot of directory browsing). Subjectively, browsing directories by hand (after the L2ARC cache has become "hot") also seems much faster. I currently use AFP, but posts / guidance here suggest the same benefit for SMB.

As I understand it, every reboot also flushes the L2ARC, so it has to be rebuilt each time. The settings governing how quickly it is rebuilt are part of the auto-tunables (if you have those enabled). I find it currently takes a few rsync passes before the cache is "hot". But the improvement can be dramatic: from 1.5 hrs per backup to 5 minutes (iTunes) or 3.5 hrs to 42 minutes (1.8 TB of images).
I don't have an SSD as cache and the L2ARC is empty.
Do you have an SSD as cache?
 

manpro

Neophyte
Joined
Jul 9, 2019
Messages
7
There are configuration changes in 11.3 with respect to SMB that significantly improve SMB directory listing performance. There's nothing to stop a person from applying the same in 11.2, but I'd rather know what the problem is before throwing them out.
Are you talking about directory listing performance over SMB?
Yes, I am talking about SMB. But whether I have an issue with directory listing performance ... I don't know.
A process reads three lines out of each of 1400 txt files (each file 34 KB). Locally on an NVMe drive it takes less than 30 seconds; via the NAS it takes more than 5 minutes.
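For reference, a workload like the one described (reading the first few lines of every file in a directory) can be sketched roughly like this; the function name and layout are illustrative, not the actual process. Over SMB, every open() costs at least one network round-trip, which is why per-file latency dominates with thousands of small files:

```python
import os

def read_first_lines(directory, n_lines=3, suffix=".txt"):
    """Read the first n_lines of every matching file in a directory."""
    results = {}
    for name in sorted(os.listdir(directory)):
        if not name.endswith(suffix):
            continue
        path = os.path.join(directory, name)
        # Each open() is a separate round-trip when the path is an SMB mount.
        with open(path, "r", encoding="utf-8") as f:
            results[name] = [f.readline().rstrip("\n") for _ in range(n_lines)]
    return results
```

Locally this is bound by disk and page-cache speed; over the network the same loop is bound by open/close latency per file.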
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,268
Yes, I am talking about SMB. But ... if I have an issue with the directory listing performance ... I don't know.
A process reads three lines out of 1400 txt-files (each file 34k). Local with a NVME it takes less than 30 sec., via the NAS about more than 5min.
What's the client?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,268
Windows 10 / 1803
Going over the network will always be slower. I'm not sure whether it's getting bogged down in metadata operations. For the share, try the following (assuming no Mac clients):
1) replace vfs modules with "ixnas" (remove all others)
2) set following auxiliary parameters:
mangled names = illegal
ea support = false


Then restart the SMB service and redo your test. If it is still slow, run gstat -I 1500 to check disk utilization and top to check CPU utilization. Samba is single-threaded for most operations, so you might see an smbd process at close to 100%.
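For reference, the resulting share section in smb4.conf would look roughly like this; the share name and path are placeholders, and on FreeNAS these values are normally set through the GUI's "Auxiliary parameters" field rather than by editing the file directly:

```ini
[data]
    path = /mnt/dps-01/data
    vfs objects = ixnas
    mangled names = illegal
    ea support = false
```

Disabling EA support and 8.3 name mangling removes per-file extended-attribute lookups and short-name generation from every directory listing, which is where the savings come from.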
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,167
I don't have an SSD as cache and the L2ARC is empty. Do you have an SSD as cache?
Yes, I added an SSD as an L2ARC to the pool. Without a cache device, the L2ARC is 0 by default.

I use an inexpensive EVO 860 mSATA SSD as my L2ARC. Write performance doesn't have to be great (the auto-tuned rate at which the L2ARC is filled is very slow), the drive is relatively inexpensive, and 1 TB capacity is 4x what my current L2ARC needs appear to be (based on making the cache go "hot", figuring out how big it is, and then extrapolating how big it ought to be should my pool reach capacity). With time, I may opt for a faster module / card, but I doubt it would make a tremendous difference.
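Attaching an SSD as L2ARC is a single, reversible command. A sketch, assuming the pool from this thread and a hypothetical device name da5 (check your actual device name with camcontrol devlist first):

```sh
# Add the SSD as a cache (L2ARC) device to the pool:
zpool add dps-01 cache da5

# Watch the "cache" section fill as the ARC spills into it:
zpool iostat -v dps-01

# A cache device can be removed again at any time without data loss:
#   zpool remove dps-01 da5
```

Because L2ARC holds only evicted copies of data already on the pool, losing the cache SSD never endangers the pool itself.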

@anodos, does past advice re: improving SMB performance still apply? Or have these changes already been rolled into SMB by default?
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,268
ixnas will be default in 11.3. ea support and streams_xattr will be _enabled_ in 11.3 because they are required for proper support for MacOS clients. Mangled names will be set to "illegal" by default in 11.3.

ixnas removes the need to set "store dos attributes = no".
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,167
Awesome and thanks.

I have yet to make the transition to SMB from AFP. Is Time Machine now fully supported over SMB by default, or are the changes noted here still needed? (I'm running 11.2-U5 STABLE.) That's one of the reasons I've stuck to Sierra...
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,268
Awesome and thanks.

I have yet to make the transition to SMB from AFP. Is Time Machine now fully supported over SMB by default, or are the changes noted here still needed? (I'm running 11.2-U5 STABLE.) That's one of the reasons I've stuck to Sierra...
Time Machine is fully supported over SMB. If you decide to advertise SMB and AFP Time Machine volumes from the NAS simultaneously, you may need to alter the zeroconf name for SMB by adding the auxiliary parameter "zeroconf name = freenas_smb" or something similar. This is due to the way that the txt records are registered over mDNS. In 11.3 I've moved all mDNS registration to the FreeNAS middleware, so that if the same dataset is shared with the same name via AFP and SMB simultaneously, the txt record is correctly formatted to indicate to MacOS clients that it's a mixed Time Machine share. This should allow legacy clients to connect, and allow protocol failover in case Samba craps out.
 

Constantin

Vampire Pig
Joined
May 19, 2017
Messages
1,167
Isn't presenting / sharing the same dataset via two protocols at the same time a no-no, due to the potential for data to get mangled by two clients using different protocols trying to write at the same time?

I presume it's not an issue with TM because TM only has one client writing to one TM folder at a time?

Nevertheless, I've always set up a separate TM dataset and share for each user, so I could mount them exclusively in either SMB or AFP and never have to contend with the file transfer protocols interfering. But a failover option is certainly interesting!
 

anodos

Sambassador
iXsystems
Joined
Mar 6, 2014
Messages
7,268
Isn't presenting / sharing the same dataset via two protocols at the same time a no-no, due to the potential for data to get mangled by two clients using different protocols trying to write at the same time?
You can do cross-protocol locking between AFP and SMB. This will be auto-configured in the situation I mentioned.

Nevertheless, I've always set up a separate TM dataset and share for each user, so I could mount them exclusively in either SMB or AFP and never have to contend with the file transfer protocols interfering. But a failover option is certainly interesting!
AFP and SMB both support [homes] shares. In this case, you can create a single share and single dataset, and user home directories for time machine will be automatically chrooted to the home directory (created by pam_mkhomedir) within the specified share path. With the ixnas module you can specify a base user quota (which sets a ZFS userspace quota), to limit the size of the TM share. This way you only ever need to configure a single share and single dataset.
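The base user quota described above maps onto ZFS userspace quotas. A sketch with a hypothetical user alice and a hypothetical Time Machine dataset dps-01/tm:

```sh
# Cap how much of the shared dataset one user's backups can consume:
zfs set userquota@alice=500G dps-01/tm

# Inspect per-user usage and quotas on the dataset:
zfs userspace dps-01/tm
```

Unlike a per-dataset quota, a userspace quota lets many users share one dataset while each gets an individual ceiling, which is what makes the single-share [homes] setup practical.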
 

manpro

Neophyte
Joined
Jul 9, 2019
Messages
7
anodos, thank you very much for your very fast and helpful support. I changed the settings, and working is possible again.
Yes, over a network it is slower, but the NAS has many other advantages.
 

manpro

Neophyte
Joined
Jul 9, 2019
Messages
7
Yes, the performance is much better; it feels like I've got new, powerful hardware ...
 