I get good file copy performance to and from the server over SMB, around 110MB/s. However, tasks that require lots of quick file system operations are painfully slow. I've seen lots of TrueNAS benchmarks, but none focusing on the performance of file system operations, so I don't know whether this is expected (hopefully not).
I observe 100% CPU usage by smbd (a single core) whenever lots of files are involved. NFS is much faster in that case, but with NFS on Windows I only get 70MB/s; that, however, is not the biggest issue. The bigger problem is that the NFS share hangs after a while, and the only way to recover is to reboot my Windows PC (yuck).
To perform my own file system benchmark, I used the Linux kernel source as a tar.gz archive, which contains about 81k files and is 1.2GB when uncompressed.
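To put the dataset in perspective: the average file size works out to roughly 15KiB, so per-file metadata overhead, not bandwidth, dominates this workload. A quick back-of-the-envelope check (using only the figures above):

```python
# Average file size in the kernel-source test set.
# Figures taken from the post: ~81k files, ~1.2GB uncompressed.
files = 81_000
total_bytes = 1.2e9

avg_bytes = total_bytes / files
print(f"average file size: {avg_bytes / 1024:.1f} KiB")  # ~14.5 KiB
```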
Client:
Windows 11 with an Intel 1Gbit NIC, Intel i7, 32GB RAM, and a Samsung 970 EVO SSD; I made sure the client CPU and SSD were not a bottleneck.
SMB:
Code:
PS > Get-SmbConnection

ServerName ShareName UserName Credential Dialect NumOpens
---------- --------- -------- ---------- ------- --------
server     share                         3.1.1   1

PS > Get-SmbClientConfiguration

CompressibilitySamplingSize           : 524288000
CompressibleThreshold                 : 104857600
ConnectionCountPerRssNetworkInterface : 4
DirectoryCacheEntriesMax              : 16
DirectoryCacheEntrySizeMax            : 65536
DirectoryCacheLifetime                : 10
DisableCompression                    : False
DormantFileLimit                      : 1023
EnableBandwidthThrottling             : True
EnableByteRangeLockingOnReadOnlyFiles : True
EnableCompressibilitySampling         : False
EnableInsecureGuestLogons             : True
EnableLargeMtu                        : True
EnableLoadBalanceScaleOut             : True
EnableMultiChannel                    : True
EnableSecuritySignature               : True
EncryptionCiphers                     : AES_128_GCM, AES_128_CCM, AES_256_GCM, AES_256_CCM
ExtendedSessionTimeout                : 1000
FileInfoCacheEntriesMax               : 64
FileInfoCacheLifetime                 : 10
FileNotFoundCacheEntriesMax           : 128
FileNotFoundCacheLifetime             : 5
ForceSMBEncryptionOverQuic            : False
KeepConn                              : 600
MaxCmds                               : 50
MaximumConnectionCountPerServer       : 32
OplocksDisabled                       : False
RequestCompression                    : False
RequireSecuritySignature              : False
SessionTimeout                        : 60
SkipCertificateCheck                  : False
UseOpportunisticLocking               : True
WindowSizeThreshold                   : 8
NFS:
Code:
mount -o rsize=128 -o wsize=128 -o anon -o nolock -o casesensitive=no \\server\mnt\nodename\share m:

Local   Remote                                 Properties
-------------------------------------------------------------------------------
m:      \\192.168.254.13\mnt\nodename\share    UID=-2, GID=-2
                                               rsize=131072, wsize=131072
                                               mount=soft, timeout=6.4
                                               retry=1, locking=no
                                               fileaccess=755, lang=ANSI
                                               casesensitive=no
                                               sec=sys
Server share:
TrueNAS 13.0-U1.1, case-insensitive RAIDZ ZFS share, with sync disabled, lz4 compression, L2ARC and ZIL cache. I ran this test with default settings on the SMB share, i.e. without any auxiliary parameters on the share. Global Samba settings are as follows:
Code:
max stat cache size = 0
security = user
username map = /usr/local/etc/smbuser
smb encrypt = disabled
Code:
smbstatus -b

Samba version 4.15.7
PID     Username     Group     Machine                                       Protocol Version  Encryption  Signing
----------------------------------------------------------------------------------------------------------------------------------------
21485   username     wheel     192.168.254.252 (ipv4:192.168.254.252:51017)  SMB3_11           -           partial(AES-128-GMAC)
General observation during the benchmark:
Whenever something was slow during the benchmark, smbd was using 100% of one CPU core. IO was pretty much idle for all operations, probably because the test dataset is relatively small and fits in cache. The only exception was the large file copy to and from the server, where the network was the bottleneck.
Results:
| Task | FreeNAS XL (local) | Units/s | SMB Elapsed Time | SMB Units/s | NFS Elapsed Time | NFS Units/s |
|---|---|---|---|---|---|---|
| 1. Uncompress Linux kernel | | | | | | |
| 2. Count files | | | | | | |
| 3. Zip Linux kernel (no compression) | | | | | | |
| 4. Large file copy Client to Server | NA | | | | | |
| 5. Large file copy Server to Client | NA | | | | | |
| 6. Delete files | | | | | | |
Notes and observations to each task:
| Task | FreeNAS notes | FreeNAS cmd | SMB notes | NFS notes | Win cmd |
|---|---|---|---|---|---|
| 1. Uncompress Linux kernel | bsdtar 100% CPU, single core | `time tar xf linux-5.19.4.tar.gz` | smbd 100% of one core, iostat 1-2% utilisation | nfsd 25% of a single core, lots of idle IO | `Measure-Command {cd M:\test; tar xf C:\linux-5.19.4.tar.gz}` |
| 2. Count files | instant | `time find . -print \| wc -l` | smbd 100% of one core, iostat 1-2% utilisation | nfsd 25% of a single core, lots of idle IO | `Measure-Command {Get-ChildItem -Recurse -Path M:\test}` |
| 3. Zip Linux kernel (no compression) | bsdtar 100% | `time tar cf /dev/null linux-5.19.4/*` | smbd 30%, iostat 1-2% utilisation | no bottleneck identified | `Measure-Command {Compress-Archive -CompressionLevel NoCompression -Path M:\test\* -DestinationPath E:\linuxkernel.zip}` |
| 4. Large file copy Client to Server | NA | NA | Works well | Only 70MB/s | `Measure-Command {Copy-Item E:\linuxkernel.zip M:\test}` |
| 5. Large file copy Server to Client | NA | NA | Works well | Only 70MB/s | `Measure-Command {Copy-Item M:\test\linuxkernel.zip E:\linuxkernel2.zip}` |
| 6. Delete files | | | smbd 100% of one core, iostat 1-2% utilisation | nfsd 25% of a single core, lots of idle IO | `Measure-Command {Remove-Item M:\test\* -Recurse -Force}` |
I know that the Avoton in the FreeNAS XL is not the most powerful CPU and that Samba is not the fastest file server around; however, I did expect better performance. One hour for 81k files and 1.2GB of data is just too slow to be usable.
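For scale, one hour for 81k files implies a per-file cost of around 44ms, orders of magnitude above what a local file system needs per file. A quick sanity check on the arithmetic (assuming the one-hour figure refers to the 81k-file task over SMB):

```python
# Per-file cost implied by the worst-case result in the post:
# ~81k files taking about an hour over SMB.
files = 81_000
seconds = 3600.0

print(f"throughput: {files / seconds:.1f} files/s")          # ~22.5 files/s
print(f"per-file latency: {seconds / files * 1000:.1f} ms")  # ~44.4 ms
```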
I used truss and dtrace to investigate smbd CPU usage, and I see lots and lots of syscalls for every file accessed. It seems that the deeper in the directory tree a file is, the more syscalls are generated.
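That pattern is consistent with path resolution that walks a path one component at a time, issuing a stat-like call per directory level. The Python sketch below simulates that behaviour; it illustrates the general mechanism, not Samba's actual code path:

```python
# Sketch: why deeper paths mean more syscalls. A server that validates
# every path component issues roughly one stat per directory level.
import os
import tempfile

def component_stats(path):
    """Resolve `path` one component at a time, one stat call per level."""
    parts = [p for p in os.path.abspath(path).split(os.sep) if p]
    current = os.sep
    calls = 0
    for part in parts:
        current = os.path.join(current, part)
        os.stat(current)  # one syscall per path component
        calls += 1
    return calls

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "a", "b", "c", "d"))
shallow = os.path.join(root, "file")
deep = os.path.join(root, "a", "b", "c", "d", "file")
open(shallow, "w").close()
open(deep, "w").close()

# The deeper file costs four extra stat calls, one per extra directory.
print(component_stats(deep) - component_stats(shallow))  # 4
```

Multiply that per-file overhead by 81k files (plus the open, read/write, and close each file needs) and the smbd CPU saturation becomes plausible.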
Is this the expected performance of a FreeNAS XL Mini and TrueNAS Core? Does TrueNAS have any unit testing for file system performance? Are file system operations this slow in your tests too? Any "go fast" settings to recommend? Is anyone else seeing such slow performance when lots of files are involved?