Do you mind sending me a current debug file? I can take a look at it and see if I can make any recommendations.
Should be in your inbox.
A few comments:
- You have "null passwords = yes" in your smb.conf. This parameter has been deprecated. It looks like your smb.conf file keeps trying to reload, which can have performance implications.
- Disable hostname lookups.
- There are entries in /var/log/messages complaining about permission issues in your login.conf. Perhaps consider testing with a fresh FreeNAS install.
- It doesn't look like you have SMART tests configured (not relevant in this case, but thought it was worth pointing out).
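For reference, the first two points might look like this in smb.conf (a sketch; where the lines live in your generated config may differ on FreeNAS):

```
[global]
    # "null passwords" is deprecated -- remove the line rather than setting it
    # null passwords = yes

    # skip reverse DNS lookups on connecting clients
    hostname lookups = no
```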
I think you're getting overly fixated on a single test that you created.

Hmm ... thought I had it all sorted out ... Will investigate ...
Thanks ..
A few quick questions:
- What CAD software are you using?
- Were the CAD files originally saved on other server that has since been taken offline?
- Does the CAD software use VSS?
- Have you tried other OSes on the hardware you have? (for instance temporarily loading a Linux Install with ZOL)?
- Have you tried killing your symlink / snapshots trickery and seeing if it improves performance?
Overly fixated on this .. I can't help it. When I enter a folder it takes almost 4 sec to populate. Your zfs+samba setup takes just 1 sec (even without the smb "go fast" trickery).
- CAD software used is Solid Edge
- 2.5 years ago they were moved from a Windows server that ran out of space. I have been re-editing the file links to point at the right server.
- No, the CAD software doesn't use VSS. Only us engineers want an easy way to revert changes.
- No, I have not tested other OSes on the ASRock C2750D4I mobo. I have done some planning for it but lack the time to do it.
- My snapshot trickery .. yes, I have removed some of it. One more observation: the more snapshots you have, the worse it gets.
As we use the file tree constantly, many times during the day, this is becoming really annoying. So where do you go from here? Just buy new hardware and hope you get lucky?
I may be wrong, but from what I understand ZFS caches a lot of the metadata, so this file list should more or less be served out of memory. So my impression was that as the file count increased it would beat NTFS, but sadly that's not the case, not today at least.
Right now I'm reverting back to 9.3. A quick test of browsing in and out of folders seems more snappy right now. It's midnight here, so tomorrow I will report how much they differ.
Take a look at arc_summary.py. If you're hitting a wall on metadata, you can try increasing the value of vfs.zfs.arc_meta_limit. It defaults to 1/4 of the ARC size. Try increasing it to 1/2. It will take a little while to warm up; see if it helps the situation. If it does, you might be able to increase it a bit more, or perhaps consider adding an SSD L2ARC as a metadata-only cache device. This is covered under #3 here: https://forums.freenas.org/index.php?threads/cifs-directory-browsing-slow-try-this.27751/

This folder response in Samba is indeed client dependent. For example, Win10 seems to be a fraction faster than Win7 (experience from one client only). Okay .. not the same hardware, and the diff was only 0.4 sec for 50k files, but my private Ubuntu MATE install with Caja as file explorer will just hang forever while trying to open it.
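A minimal sketch of that check from a FreeNAS/FreeBSD shell (sysctl names as of the 9.x/11 era; verify them on your version before relying on this):

```
# current metadata limit (bytes) and how much of it is in use
sysctl vfs.zfs.arc_meta_limit
sysctl kstat.zfs.misc.arcstats.arc_meta_used

# raise the limit for this boot only, e.g. to half of a 32 GB ARC
sysctl vfs.zfs.arc_meta_limit=17179869184
```

To make the change persistent, add it as a sysctl tunable under System -> Tunables in the FreeNAS GUI rather than editing files by hand.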
My old Acer notebook running Win7 is much slower than my VirtualBox Win7 install, which I'm going to use in this test.
Just saying: if you can't fix the server, you may be able to reduce the delay by upgrading the client.
My test server hardware:
ASRock C2750D4I (FreeNAS Mini)
32 GB mem
500 GB ZFS raid1
4 TB ZFS raid1
60 GB Intel SSD
Both ZFS raids were created by an earlier FreeNAS install.
My test client :
Win7 in Virtualbox on HP Z600
No antivirus is used.
File count is 50k.
For timing I will use my script, and also enter with the file browser and silently count until I see all the files. This will be repeated multiple times to average the result. Both results, script and manual, will be reported.
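The script itself isn't posted; a minimal sketch of that kind of directory-listing timer (hypothetical, not the author's actual script) could look like this:

```python
import os
import statistics
import tempfile
import time

def time_listing(path, repeats=5):
    """Enumerate a directory `repeats` times; return (file_count, mean_seconds)."""
    samples = []
    count = 0
    for _ in range(repeats):
        start = time.perf_counter()
        count = sum(1 for _ in os.scandir(path))  # one full directory enumeration
        samples.append(time.perf_counter() - start)
    return count, statistics.mean(samples)

if __name__ == "__main__":
    # demo against a throwaway directory holding 500 empty files
    with tempfile.TemporaryDirectory() as d:
        for i in range(500):
            open(os.path.join(d, "f%05d.txt" % i), "w").close()
        count, mean = time_listing(d)
        print("%d files, %.4f s average" % (count, mean))
```

Against the SMB share you would point it at the mapped path instead (e.g. `time_listing(r"\\server\share\bigfolder")`); the first pass runs against a cold cache, which is why averaging over repeats matters.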
First I installed Win2008R2 on the 60 Gb SSD.
No antivirus is used.
WIN2008 R2 (accessed on the server it self via GUI )
script 7.2 - 8.2 sec (the higher value was more common)
manual 1+ sec (very odd)
WIN2008 R2 (accessed from VM client)
script 1 sec
manual 1+ sec
Installed FreeNAS on a USB stick and used the SSD for a zpool instead. From here on, the VirtualBox Win7 client will be used for the measurements.
Freenas 11 (zpool on SSD, samba "go fast" parameter is used)
script 3.7 sec
manual 3.5 - 4 sec
Then I installed Proxmox 4.4 on the SSD to be able to use ZoL in a comfortable manner. Debian was installed as an LXC container.
Testing with the FreeNAS "go fast" Samba parameters didn't affect the outcome on the Debian install.
Debian 8 EXT4 SSD
script 2.3 sec
manual 2.5+ sec
Debian 8 ZoL raid1 500GB
script 2.5 sec
manual 3 sec
Then I installed FreeNAS 11 as a VM under Proxmox and tested both scenarios: Proxmox serving ZoL (raid1 500 GB) as a block device, and PCI passthrough of the raid1 4 TB zpool.
Special SMB parameter is used in both cases.
Freenas 11 in VM, block device from ZoL raid1
script 4.9 sec
manual 5 sec
Freenas 11 in VM, 4tb raid1 zpool pci passthru
script 5 sec
manual 5.5 sec
Summary:
FreeBSD + ZFS + Samba is less responsive than its counterparts, period. Even if you boost it with an SSD, you will be outrun by ZoL on regular disks. One odd thing is that QEMU passthrough of the disks is slower than having them served as a ZoL block device; this could be because they aren't exactly the same disks.
But the fact that FreeNAS ZFS + Samba on top of a ZoL block device clocks in about the same as FreeNAS ZFS + Samba with normal passthrough must indicate that there is some serious overhead in FreeBSD + Samba.
While testing on other hardware I noticed that under Windows you get more fluctuating timings. Even with 2 folders of the same file count (50k), it can differ from 1 sec to 7 sec. Disabling antivirus didn't affect that really bizarre behavior. So today I'm leaning toward recommending Linux instead of Windows, because the end user gets a more consistent experience with large folders. I have no experience with Windows servers newer than 2008, so maybe this has been addressed in the Windows world too.
My finding today is that with ZoL you cut the response time in half compared to its BSD brother. The overhead of virtualizing FreeNAS is small compared to running FreeNAS on bare metal.
For actual throughput I found FreeNAS as good as any storage.
I think you've become overly fixated on a single metric, which is all over the board across lots of OSes. I don't see a strong correlation between what's happening here and what you describe from within Solid Edge, which apparently has very long delays. Ultimately, you want to get your fileserver to perform well with Solid Edge. Since the delays seem to be much longer browsing within Solid Edge than browsing through a File Explorer session, it's fairly clear that Solid Edge is doing something (or trying to do something). You should try to figure out what that "something" is. It may be a simple configuration change inside the application. One diagnostic step would be to reproduce the "slow" behavior with Samba's logging set to debug. Post the log here. Maybe look for logs in Solid Edge as well.
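For reference, temporarily raising Samba's logging might look like this in smb.conf (a sketch; on FreeNAS these would go in the SMB auxiliary parameters, and the log path may differ on your version):

```
[global]
    # level 3 is already chatty; 10 is full debug.
    # Revert after capturing -- the log grows very fast.
    log level = 3
    log file = /var/log/samba4/log.%m
    max log size = 10000
```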
Perhaps compare Solid Edge performance between operating systems. If it performs noticeably better on one OS than another (for instance), use that OS. :)
I can just add that the d_off field of struct dirent, which as I mentioned before is required for proper directory listing by Samba, was recently added to FreeBSD 12. It seems the work is still in progress and that field is not really used yet, but it is a critical first step.