Understanding Samba Read Performance Characteristics on TrueNAS SCALE

alexmarkley

Dabbler
Joined
Jul 27, 2021
Messages
40
Good morning! To start with, I do not want this thread to turn into some kind of hunt for a quick fix to my performance problems. What I am looking for here is a detailed and technical conversation about the read performance characteristics and design tradeoffs of Samba shares on TrueNAS SCALE.

First, some basic information about my use case and my setup:
  • My use case is video project editing and rendering, where the primary bottlenecks are large file sequential reads and large file random reads.
  • My machine is running TrueNAS-SCALE-22.12.3.2. (I have more details on the current hardware setup in my signature.)

Here's what my zpool looks like presently:
Code:
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 08:40:37 with 0 errors on Fri Aug 25 00:46:28 2023
config:

	NAME                                      STATE     READ WRITE CKSUM
	tank                                      ONLINE       0     0     0
	  raidz1-0                                ONLINE       0     0     0
	    7e4aee97-4bfb-4a88-8cc0-60f89a88c17c  ONLINE       0     0     0
	    88da4991-a66c-4b9e-a90c-60612294dc4a  ONLINE       0     0     0
	    42eb1da6-da35-4abc-bd3d-9ab54f7d6382  ONLINE       0     0     0
	  raidz1-1                                ONLINE       0     0     0
	    95b760c4-fb07-4388-89d9-5dd04cd5f098  ONLINE       0     0     0
	    ba771674-8f61-4412-8506-03ec36817bc3  ONLINE       0     0     0
	    952e6fd9-948c-4ec0-9819-c8916e66e955  ONLINE       0     0     0
	  raidz1-2                                ONLINE       0     0     0
	    f9298206-7146-425c-b5e0-6ee9a045c23e  ONLINE       0     0     0
	    cbf4c733-c10f-4e41-9c87-063c03b98fc9  ONLINE       0     0     0
	    c62bf56c-ab78-4b3b-9b33-19e0ae67f3c6  ONLINE       0     0     0
	  raidz1-3                                ONLINE       0     0     0
	    b9140a74-2689-4a89-a558-59bc04efba8d  ONLINE       0     0     0
	    38c8e6ab-8da5-4bd2-a67a-476bb8b31a3b  ONLINE       0     0     0
	    52c840f8-46f6-41e7-995d-c2f62789cdaa  ONLINE       0     0     0
	  raidz1-4                                ONLINE       0     0     0
	    ac9511ab-efdb-47db-82de-459d9a0913c8  ONLINE       0     0     0
	    20bcd52b-8d23-4a82-828b-d3990d937426  ONLINE       0     0     0
	    41349c9a-059b-4cd3-af40-32758cd26dbd  ONLINE       0     0     0
	cache
	  1f1b7e96-394f-45b6-83f9-7b0e0846e475    ONLINE       0     0     0

errors: No known data errors


I have an NVMe L2ARC in the box, but it should not be necessary to achieve pretty decent sequential read speeds on large files.

It's a Saturday and nobody else is using this system right now, so let's do some systematic testing.

Here's a snapshot of reading a large BRAW file (directly on the TrueNAS box, not over the network) which has not been touched in months and therefore cannot be in the cache:

Code:
root@veritas2[...es/202303/sources/session_one_a/bmpcc]# cat 1232_02230614_C008.braw | pv >/dev/null
 188GiB 0:03:37 [ 887MiB/s] [ ... ]
root@veritas2[...es/202303/sources/session_one_a/bmpcc]#


That works out to a practical read throughput of 7.4 gigabits per second from the disk array, which (in my opinion) is fantastic given it's a bunch of spinning rust on the other end of a couple of SAS-3 cables.
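
For anyone double-checking my arithmetic, converting pv's binary MiB/s into decimal gigabits per second is just multiplication; a quick sanity check in the shell:

Code:
# 887 MiB/s * 1048576 bytes/MiB * 8 bits/byte / 1e9 = Gbit/s (about 7.44)
echo '887 * 1048576 * 8 / 1000000000' | bc -l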

The second read of the same file is predictably much faster. My 128GB of RAM and 1TB L2ARC are doing their job:

Code:
root@veritas2[...es/202303/sources/session_one_a/bmpcc]# cat 1232_02230614_C008.braw | pv >/dev/null
 188GiB 0:02:32 [1.23GiB/s] [ ... ]
root@veritas2[...es/202303/sources/session_one_a/bmpcc]#


So, after slightly warming the cache, the read speed for this file increases to just over 10 gigabits per second. Great.

With regards to caching, we generally only have one or two video projects being "actively" worked on at a given time, so the majority of the data should usually be hanging around in L2ARC while the project is hot.
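
(For reference, the stock OpenZFS tools on the SCALE shell can confirm whether the working set really is staying cached; this is just a sketch of the observation commands, not tuning advice.)

Code:
# overall ARC / L2ARC sizes and hit ratios since boot
arc_summary
# live view of ARC reads, hit%, and L2ARC hit%, sampled every 5 seconds
arcstat 5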

Moving to the network, I'm using 10GbE interfaces and 10GbE switches. MTUs are set to 9000 everywhere.
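
(A quick way to confirm that jumbo frames survive end-to-end, rather than silently fragmenting somewhere, is a don't-fragment ping sized for a 9000-byte MTU; the flag differs slightly between Linux and macOS.)

Code:
# Linux: 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do sets don't-fragment
ping -M do -s 8972 -c 4 veritas2
# macOS: -D sets don't-fragment
ping -D -s 8972 -c 4 veritas2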

Here's what iperf3 looks like between the TrueNAS box (veritas2) and one of my macOS clients (Lapis):

Code:
Lapis:~ alex$ iperf3 -c veritas2 -f g
Connecting to host veritas2, port 5201
[  7] local 10.77.148.76 port 61569 connected to 10.77.1.50 port 5201
[ ID] Interval           Transfer     Bitrate
[  7]   0.00-1.00   sec  1.15 GBytes  9.92 Gbits/sec
[  7]   1.00-2.00   sec  1.15 GBytes  9.88 Gbits/sec
[  7]   2.00-3.00   sec  1.15 GBytes  9.90 Gbits/sec
[  7]   3.00-4.00   sec  1.15 GBytes  9.90 Gbits/sec
[  7]   4.00-5.00   sec  1.15 GBytes  9.90 Gbits/sec
[  7]   5.00-6.00   sec  1.15 GBytes  9.90 Gbits/sec
[  7]   6.00-7.00   sec  1.15 GBytes  9.90 Gbits/sec
[  7]   7.00-8.00   sec  1.15 GBytes  9.84 Gbits/sec
[  7]   8.00-9.00   sec  1.15 GBytes  9.90 Gbits/sec
[  7]   9.00-10.00  sec  1.15 GBytes  9.90 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  7]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec                  sender
[  7]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec                  receiver

iperf Done.
Lapis:~ alex$


Here's what it looks like between veritas2 and one of my Linux clients (Opal):

Code:
[alex@Opal ~]$ iperf3 -c veritas2 -f g
Connecting to host veritas2, port 5201
[  5] local 10.77.245.62 port 52042 connected to 10.77.1.50 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1.14 GBytes  9.77 Gbits/sec   28   1.55 MBytes
[  5]   1.00-2.00   sec  1.15 GBytes  9.91 Gbits/sec    0   1.59 MBytes
[  5]   2.00-3.00   sec  1.15 GBytes  9.90 Gbits/sec    0   1.63 MBytes
[  5]   3.00-4.00   sec  1.15 GBytes  9.90 Gbits/sec    0   1.66 MBytes
[  5]   4.00-5.00   sec  1.15 GBytes  9.90 Gbits/sec    0   1.76 MBytes
[  5]   5.00-6.00   sec  1.15 GBytes  9.90 Gbits/sec    0   1.80 MBytes
[  5]   6.00-7.00   sec  1.15 GBytes  9.90 Gbits/sec   63   1.35 MBytes
[  5]   7.00-8.00   sec  1.15 GBytes  9.90 Gbits/sec    0   1.54 MBytes
[  5]   8.00-9.00   sec  1.15 GBytes  9.91 Gbits/sec    0   1.59 MBytes
[  5]   9.00-10.00  sec  1.15 GBytes  9.90 Gbits/sec    0   1.62 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec   91             sender
[  5]   0.00-10.04  sec  11.5 GBytes  9.85 Gbits/sec                  receiver

iperf Done.
[alex@Opal ~]$


On both machines, I'm only losing about 1% of line rate to TCP overhead.

So here's the real question. What does it look like when I read the same file over SMB?

From my Linux client, here's what it looks like:

Code:
[alex@Opal bmpcc]$ cat 1232_02230614_C008.braw | pv >/dev/null
 188GiB 0:07:14 [ 444MiB/s] [ ... ]
[alex@Opal bmpcc]$


This is a measured practical read throughput of 3.7 gigabits per second. That's not slow by any means, but it's puzzling from a relative performance perspective. (Again, nobody else is using this system today.)

Put another way, if I can read this file at just over 10 gigabits per second locally, and I can push bits over the network at 9.9 gigabits per second, that means Samba itself is introducing a 63% overhead for this use case.

I'm used to protocol overhead (SCP and the like) degrading performance by 10+% compared to raw network performance. But over 60% loss? When considered from that perspective, it makes me wonder if something is wrong with the client or the server.

Speaking of the client, here is the client configuration for the above test:

Code:
//veritas2/videowork on /home/videowork type cifs (rw,nosuid,nodev,relatime,vers=3.1.1,cache=strict,username=alex,uid=7000,noforceuid,gid=7000,noforcegid,addr=10.77.1.50,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1,user=alex)
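
For A/B testing on the Linux side, one variable I can isolate is the client mount options. Here's a sketch of a remount with an explicit caching policy (the share and user are from the mount above; cache=none is just an example to rule out client-side caching behaviour, not a recommendation):

Code:
sudo umount /home/videowork
sudo mount -t cifs //veritas2/videowork /home/videowork \
    -o vers=3.1.1,username=alex,uid=7000,gid=7000,cache=none,rsize=4194304,wsize=4194304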


From my macOS client (macOS Ventura 13.5.2), things get genuinely sad:

Code:
Lapis:bmpcc alex$ cat 1232_02230614_C008.braw | pv >/dev/null
 188GiB 1:02:02 [51.8MiB/s] [ ... ]
Lapis:bmpcc alex$


There is no sugar coating this. The performance here is just bad. At a measured throughput of 0.4 gigabits per second, that's an eye-popping 96% loss compared to iperf's measured TCP throughput.
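
(One variable worth eliminating is the cat | pv pipeline itself; reading with dd straight into /dev/null is an alternative way to measure the same thing. The block size and count below are arbitrary.)

Code:
# read the first ~8 GiB of the file; BSD dd prints bytes/sec when it finishes
time dd if=1232_02230614_C008.braw of=/dev/null bs=4m count=2048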

I'm pretty sure this is a performance regression, because I don't remember things ever being this bad before. But I don't know for sure when it started. Over the past couple of months, I have migrated my NAS hardware from TrueNAS Mini to a generic SuperMicro machine and migrated my NAS software from TrueNAS Core to TrueNAS SCALE. There have been so many changes in my environment it is impossible to narrow down this performance regression to a specific change.

Besides which, I realize I don't understand enough about the performance characteristics and tradeoffs for managing a Samba server. I want to be able to reason through the system instead of panicking and throwing configuration changes at the wall to see if they help.

The macOS client configuration (as reported by smbutil):

Code:
Lapis:~ alex$ smbutil statshares -a

==================================================================================================
SHARE                         ATTRIBUTE TYPE                VALUE
==================================================================================================
videowork
                              SERVER_NAME                   veritas2._smb._tcp.local
                              USER_ID                       7000
                              SMB_NEGOTIATE                 SMBV_NEG_SMB1_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB2_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB3_ENABLED
                              SMB_VERSION                   SMB_3.1.1
                              SMB_ENCRYPT_ALGORITHMS        AES_128_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_128_GCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_GCM_ENABLED
                              SMB_CURR_ENCRYPT_ALGORITHM    OFF
                              SMB_SHARE_TYPE                DISK
                              SIGNING_SUPPORTED             TRUE
                              EXTENDED_SECURITY_SUPPORTED   TRUE
                              UNIX_SUPPORT                  TRUE
                              LARGE_FILE_SUPPORTED          TRUE
                              OS_X_SERVER                   TRUE
                              FILE_IDS_SUPPORTED            TRUE
                              DFS_SUPPORTED                 TRUE
                              FILE_LEASING_SUPPORTED        TRUE
                              MULTI_CREDIT_SUPPORTED        TRUE

--------------------------------------------------------------------------------------------------
Lapis:~ alex$


So what is going on here? What design parameters are impacting these protocol overheads? What tuning parameters are going to be most relevant for my use case?

I've googled around a ton, and mostly what I'm finding is half-baked guidance for "set this parameter" without a detailed explanation of why. And a lot of the guidance out there seems either outdated, irrelevant, or outright dangerous.

My experience with this community has been fantastic so far, so I'm hoping someone here will have the expertise and the time to help me reason through this.

Thanks for reading!
 

NugentS

MVP
Joined
Apr 16, 2020
Messages
2,947
Watching this one with interest.
 

milkaudio

Cadet
Joined
Sep 18, 2023
Messages
2
I have similar performance issues with SMB on TrueNAS SCALE 22.12.3.3 to all of my Mac client machines.

The server is:
Dell r730xd
2 x Intel Xeon E5-2643V4 3.4GHz
128GB RAM
Mellanox x4 25Gb NIC

All networking from the clients is 10Gb. Everything is set to MTU 9000.

iperf3 tests all show 9.6 Gbit/s to the server.

Therefore we can rule out networking as the issue.

But SMB performance especially on small files is abysmal.

One thing I have noticed is that when I select and copy 10k small files, it takes a very long time. If I copy the folder containing those files instead, the speed is much improved.
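
For reference, the kind of test I mean can be roughly approximated from a Mac shell like this (file count, sizes, and the /Volumes/share mount path are placeholders):

Code:
# build a local folder of many tiny files
mkdir -p ~/smalltest
for i in $(seq 1 2000); do head -c 4096 /dev/urandom > ~/smalltest/f$i.dat; done
# copy file-by-file (one SMB open/write/close per file, plus one cp process each)
mkdir -p /Volumes/share/smalltest-files
time find ~/smalltest -name '*.dat' -exec cp {} /Volumes/share/smalltest-files/ \;
# copy the containing folder in one go, for comparison
time cp -R ~/smalltest /Volumes/share/smalltest-folder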


The tests are done with one active client.

If you have any suggestions on more specific tests that I can run from the Mac clients to the server, please let me know.
 

alexmarkley

Dabbler
Joined
Jul 27, 2021
Messages
40
I spent some more time experimenting with the macOS client situation today.

First, I'm observing zero difference in speed between clients running macOS 13.5.2 and clients running macOS 14.0.

Second, I've spent some time testing SMB client signing configuration in nsmb.conf.

From my googling around, I have found SO MANY ARTICLES online that recommend disabling SMB client signing in macOS to improve client performance.
Where did this advice come from? Maybe from this deprecated support article from Apple: https://support.apple.com/en-gb/HT205926

Here are some quotes:
This article has been archived and is no longer updated by Apple.

In macOS 10.13.4 and later, packet signing is off by default. Packet signing for SMB 2 or SMB 3 connections turns on automatically when needed if the server offers it. The instructions in this article apply to macOS 10.13.3 and earlier.

The article then goes on to show how to set signing_required=no in /etc/nsmb.conf.
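
For reference, everything that article has you put in the file amounts to two lines; creating it from the Terminal looks like this:

Code:
# create /etc/nsmb.conf with signing disabled in the default session settings
printf '[default]\nsigning_required=no\n' | sudo tee /etc/nsmb.conf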

So what does "packet signing is off by default" mean vs. "Packet signing for SMB 2 or SMB 3 connections turns on automatically"???

Setting signing_required=no does not appear to have any visible effect. And in fact, if I check man nsmb.conf it clearly says signing_required has a default value of no.

I've spent some time poking around with the various settings on the server and on the client, trying to figure out how to actually disable signing on the connection. So far I haven't figured it out.
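
(One server-side way to check what actually gets negotiated is smbstatus, since recent Samba lists per-session protocol, encryption, and signing. I'm assuming the SCALE shell behaves like stock Samba here.)

Code:
# the sessions table includes Protocol Version, Encryption, and Signing columns
smbstatus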

The closest I came to actually disabling signing was setting smb3 signing algorithms = HMAC-SHA256 on the server side. Since macOS does not seem to support HMAC-SHA256, this just made the connection stop completely.

I'm pretty far out of my depth here, and I am probably barking completely up the wrong tree on this signing stuff. I could definitely use some guidance from someone who knows SMB really well.
 

alexmarkley

Dabbler
Joined
Jul 27, 2021
Messages
40
I'm bumping this because I'm still looking for any help I can get on understanding this issue.
 

alexmarkley

Dabbler
Joined
Jul 27, 2021
Messages
40
I just upgraded from SCALE Bluefin 22.12.3.2 to Cobia 23.10.0 because of this headline feature:

SMB Performance and Scalability:
There have been several changes to the protocol stack to improve performance and scalability. This includes increasing the I/Os per second, the number of users, and the number of files per directory. Both OpenZFS and Samba changes were made to enable these improvements. The improvements are also aligned with work on NVMe performance (more information to follow).

I'm sad to report I have observed zero improvement in the SMB sequential read performance for my configuration when reading from a macOS client.

As a partial workaround, I've started using sshfs via macFUSE. The performance is still awful compared to cifs on Linux, but it's faster than SMB on macOS and it's almost usable.
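
For anyone curious, an sshfs mount of this kind looks roughly like the following. The server-side path is a stand-in (not my actual dataset path), and the options are the commonly suggested macFUSE ones; double-check them against the macFUSE/sshfs documentation.

Code:
# mount the dataset over SSH; reconnect and kernel caching help with long sequential reads
sshfs alex@veritas2:/mnt/tank/videowork ~/videowork \
    -o reconnect,auto_cache,defer_permissions,noappledouble,volname=videowork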
 

alexmarkley

Dabbler
Joined
Jul 27, 2021
Messages
40
I know it's been several months since I've posted any updates on this, but I've made some progress here.

I've confirmed to my satisfaction that the slowness I've documented above is actually a performance regression in SCALE. CORE does not exhibit this problem on my network.

TL;DR, I've opened a ticket: https://ixsystems.atlassian.net/browse/NAS-126796

Here are the details of my investigation (so far) and why I came to this conclusion.

But I don't know for sure when it started. Over the past couple of months, I have migrated my NAS hardware from TrueNAS Mini to a generic SuperMicro machine and migrated my NAS software from TrueNAS Core to TrueNAS SCALE. There have been so many changes in my environment it is impossible to narrow down this performance regression to a specific change.

Originally, I had two TrueNAS Mini XL+ machines, both configured to be almost identical. One of them was running in my office, and the other off-site as a warm backup / secondary. (One of my team members was doing a lot of video editing while working from home, so this arrangement made sense at the time.)

Around the middle of last year, my primary TrueNAS machine started to run low on free space. I decided to upgrade the hardware and migrate to TrueNAS SCALE at the same time.

That's when the trouble started. The new hardware was able to read data from the pool at over 1.23GiB/s, but SMB could only hit an average throughput of 51.8MiB/s. (edit: For macOS clients. Linux clients were not affected.)

Yesterday, I retrieved my off-site backup machine and reconfigured it to run on the same network as my primary machine and plugged it into the same switch. The backup is called Speculum (so named for the Latin for 'mirror', not the medical device) and it has much lower specs compared to the primary. (See my signature for the current details on both machines.)

So now I can perform A/B tests between both servers. The network is the same, the client is the same, the data is the same.

I've done hardware and software upgrades to the primary, so what does it currently look like if I transfer a large file from TrueNAS SCALE to my macOS workstation?

The client configuration:

Code:
Lapis:~ alex$ smbutil statshares -a

==================================================================================================
SHARE                         ATTRIBUTE TYPE                VALUE
==================================================================================================
videowork
                              SERVER_NAME                   veritas2._smb._tcp.local
                              USER_ID                       7000
                              SMB_NEGOTIATE                 SMBV_NEG_SMB1_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB2_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB3_ENABLED
                              SMB_VERSION                   SMB_3.1.1
                              SMB_ENCRYPT_ALGORITHMS        AES_128_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_128_GCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_GCM_ENABLED
                              SMB_CURR_ENCRYPT_ALGORITHM    OFF
                              SMB_SIGN_ALGORITHMS           AES_128_CMAC_ENABLED
                              SMB_SIGN_ALGORITHMS           AES_128_GMAC_ENABLED
                              SMB_CURR_SIGN_ALGORITHM       AES_128_GMAC
                              SMB_SHARE_TYPE                DISK
                              SIGNING_SUPPORTED             TRUE
                              EXTENDED_SECURITY_SUPPORTED   TRUE
                              UNIX_SUPPORT                  TRUE
                              LARGE_FILE_SUPPORTED          TRUE
                              OS_X_SERVER                   TRUE
                              FILE_IDS_SUPPORTED            TRUE
                              DFS_SUPPORTED                 TRUE
                              FILE_LEASING_SUPPORTED        TRUE
                              MULTI_CREDIT_SUPPORTED        TRUE
                              SESSION_RECONNECT_TIME        0:0
                              SESSION_RECONNECT_COUNT       0

--------------------------------------------------------------------------------------------------
Lapis:~ alex$


And the results if I sequentially transfer a large file:

Code:
Lapis:bmpcc alex$ cat 1239_12210059_C004.braw | pv >/dev/null
 196GiB 1:06:15 [50.7MiB/s] [ ... ]
Lapis:bmpcc alex$


That's a practical throughput of 0.43 gigabits/sec. :(

With that out of the way, what does it look like if I transfer a large file from TrueNAS CORE to my macOS workstation?

The client configuration:

Code:
Lapis:~ alex$ smbutil statshares -a

==================================================================================================
SHARE                         ATTRIBUTE TYPE                VALUE
==================================================================================================
videowork
                              SERVER_NAME                   speculum._smb._tcp.local
                              USER_ID                       7000
                              SMB_NEGOTIATE                 SMBV_NEG_SMB1_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB2_ENABLED
                              SMB_NEGOTIATE                 SMBV_NEG_SMB3_ENABLED
                              SMB_VERSION                   SMB_3.1.1
                              SMB_ENCRYPT_ALGORITHMS        AES_128_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_128_GCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_CCM_ENABLED
                              SMB_ENCRYPT_ALGORITHMS        AES_256_GCM_ENABLED
                              SMB_CURR_ENCRYPT_ALGORITHM    OFF
                              SMB_SIGN_ALGORITHMS           AES_128_CMAC_ENABLED
                              SMB_SIGN_ALGORITHMS           AES_128_GMAC_ENABLED
                              SMB_CURR_SIGN_ALGORITHM       AES_128_GMAC
                              SMB_SHARE_TYPE                DISK
                              SIGNING_SUPPORTED             TRUE
                              EXTENDED_SECURITY_SUPPORTED   TRUE
                              LARGE_FILE_SUPPORTED          TRUE
                              FILE_IDS_SUPPORTED            TRUE
                              DFS_SUPPORTED                 TRUE
                              FILE_LEASING_SUPPORTED        TRUE
                              MULTI_CREDIT_SUPPORTED        TRUE
                              SESSION_RECONNECT_TIME        0:0
                              SESSION_RECONNECT_COUNT       0

--------------------------------------------------------------------------------------------------
Lapis:~ alex$


And the results if I sequentially transfer the exact same file:

Code:
Lapis:bmpcc alex$ cat 1239_12210059_C004.braw | pv >/dev/null
 196GiB 0:07:44 [ 434MiB/s] [ ... ]
Lapis:bmpcc alex$


That's a practical throughput of 3.64 gigabits/sec. Wow! That's incredible! It's like night and day. And it makes me wonder what kind of throughput I could see on the more powerful hardware if this performance issue was resolved.

So what are my next steps?

Obviously I opened a ticket, so I'm hoping to get some insights from iXsystems sooner rather than later. I'm also going to begin digging deeper and running diffs to see if I can narrow down the differences between the two configurations. I have to assume there's something I'm just missing.
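
The first diff worth running is probably over the effective Samba configuration on each box, since testparm can dump it in a normalized form (output filenames below are just for clarity; pass the config path explicitly if testparm doesn't pick it up by default):

Code:
# on each server, dump the effective Samba configuration
testparm -s > /tmp/smb-effective-$(hostname).conf 2>/dev/null
# then collect both dumps on one machine and compare
diff /tmp/smb-effective-veritas2.conf /tmp/smb-effective-speculum.conf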
 
Last edited:

alexmarkley

Dabbler
Joined
Jul 27, 2021
Messages
40
Last note before I sign off for the evening. I noticed the UNIX_SUPPORT and OS_X_SERVER attributes in the Veritas2 share, which were not present in the Speculum share. This was caused by the "Enable Apple SMB2/3 Protocol Extensions" flag in the SMB service being set.

I just now tested disabling that flag on Veritas2, and the file transfer speed did not improve.

I also tested enabling that flag on Speculum, and the file transfer speed did not degrade.

I also checked again, and both shares appear to have equivalent configurations.

The outlier here is "Enable Alternate Data Streams", but in both cases the checkbox is disabled, so I cannot check or uncheck it. (Regardless, I don't see how Alternate Data Streams could affect the performance of a simple sequential file read via cat on the command line. But who knows. *shrug*)

Here's the share configuration in CORE:

[Attached screenshot: CORE share configuration]


Versus the configuration in SCALE:

[Attached screenshot: SCALE share configuration]
 
Last edited:

alexmarkley

Dabbler
Joined
Jul 27, 2021
Messages
40
I got some feedback from iXsystems and ran some more tests. At this point it looks like the share performance tanks when you select "Multi-protocol (NFSv4/SMB) shares".

This is surprising because the equivalent option in CORE "Multi-protocol (NFSv3/SMB) shares" does not have any visible performance impact.

Also, somewhat frustratingly, iXsystems closed my ticket as "User Configuration Error" with no explanation. I opened a new ticket: https://ixsystems.atlassian.net/browse/NAS-126798

Anybody following this issue, if you can reproduce and/or are affected by this issue, please give my ticket a thumbs up so iXsystems can pay better attention to this!
 
Last edited:

Uzmeyer

Cadet
Joined
Jan 16, 2024
Messages
1
I have a similar problem. We do audio production: generally one sequential video stream and a mix of many sequential and random audio files ranging from hundreds of kB to multiple GB in size, with 10+ Mac clients and a couple of Windows clients. We have been trying to get working-from-the-network running for YEARS now.
We have three machines with:

Xeon Silver 4210
64GB RAM
10TB all-SSD RAIDZ2 pool
Dual 10G Intel NIC
Running TrueNAS CORE 13

One is used by Windows clients only, and they are having a good time working from it with no issues. One is primarily there for the Mac clients but is barely used, since the performance on Mac is just atrocious. The third I have offline to run experiments on.

I sadly can't do tests whenever I want, since the Macs are almost always in use and I only have an old Trashcan that isn't really representative of the in-use Macs. But I ran a suite of tests late in the summer comparing Mac vs Linux vs Windows on SMB and NFS, and the gist of it was that writes from Mac over SMB were just horrible. I'll have to check the exact results when I'm back at work tomorrow, but the scale of it was about 10x worse writes on large files and up to 100x worse for small files. I don't remember exactly how it was for reads, but certainly not that dramatic.

I don't know if any of this is helpful for your cause but I could try various stuff on the offline machine, been wanting to try out Scale anyway so I could at least see if I get a significant difference from CORE.
 

Dysonco

Dabbler
Joined
Jul 4, 2023
Messages
27
I got some feedback from iXsystems and ran some more tests. At this point it looks like the share performance tanks when you select "Multi-protocol (NFSv4/SMB) shares".

This is surprising because the equivalent option in CORE "Multi-protocol (NFSv3/SMB) shares" does not have any visible performance impact.

Also, somewhat frustratingly, iXsystems closed my ticket as "User Configuration Error" with no explanation. I opened a new ticket: https://ixsystems.atlassian.net/browse/NAS-126798

Anybody following this issue, if you can reproduce and/or are affected by this issue, please give my ticket a thumbs up so iXsystems can pay better attention to this!
Hi Alex,

Very interesting thread! Thanks so much for letting me know; this is DEFINITELY related to my issues.

So, a quick explanation of my use case relative to yours. I also work with video; I'm a producer/director who also does some editing. I set my NAS up as both a straight backup location and with the idea that the performance would be ample to edit directly over the network if necessary (typically I work on NVMe storage on my workstations). Hence the 10GbE LAN and RAIDZ1 arrays of 5 x 4TB SSDs.

I was getting full 10Gb wire speed both read and write in Core, yet less than a quarter of that in Scale.
 

Dysonco

Dabbler
Joined
Jul 4, 2023
Messages
27
In my case, having 'Multi-protocol (NFSv4/SMB) shares' is unlikely to be the issue, as both my Samba shares were set to 'Default Share Parameters'. Although, as this was a port from CORE to SCALE, I guess 'Default Share Parameters' could in fact have been 'Multi-protocol (NFSv4/SMB) shares'.

I've set both shares to 'Private SMB Datasets and Shares', as that seems the next most appropriate setting. I'll do some performance tests.
 

Dysonco

Dabbler
Joined
Jul 4, 2023
Messages
27
Well, tests aren't looking good...

It's still dire!

CrystalDiskMark benchmark of the shared folder over the network. I'm barely getting the speed of a single spinning-rust disk...

[Attached screenshot: CrystalDiskMark results over the network share]
 

alexmarkley

Dabbler
Joined
Jul 27, 2021
Messages
40
So I've been able to conclusively identify "Multi-protocol (NFSv4/SMB) shares" as the culprit in my situation. If I remove that setting from my big videowork share, the performance goes up more than 10x.
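
For anyone who wants to confirm what their shares are actually set to without clicking through the UI, the middleware client can dump the share definitions as JSON; I believe the preset is recorded in the share's "purpose" field on recent SCALE builds, but treat this as a sketch rather than gospel:

Code:
# dump all SMB share definitions and pick out the name and purpose/preset fields
midclt call sharing.smb.query | python3 -m json.tool | grep -iE '"(name|purpose)"'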

This is very confusing, because I never encountered this issue with CORE.

I'm also disappointed because iXsystems closed my second ticket without much engagement. If there is a performance regression bug somewhere in the SCALE stack, you would think they would jump at the chance to track it down. I was offering to perform free labor in the form of testing and gathering data on my side!

But they were ultimately pretty dismissive of me. The specific response was "we do not troubleshoot performance problems" and "this is outside the scope of what we do for our free product."

That is an especially disappointing position for iXsystems to take, since I've purchased over $5,000 of hardware and software from them, and I literally told their sales team on the phone that I would be happy to pay for extended support. But they have never actually sent me a proposal or followed up with me.

It's almost like they're taking the position that I'm some freeloader asking them to do me a favor, which is frustrating and couldn't be farther from the truth.
 

rymandle05

Cadet
Joined
Jan 16, 2024
Messages
8
First off - Thank you @alexmarkley and everyone else for this thread and keeping at it!

I'm a long-time TrueNAS CORE user but recently switched to TrueNAS SCALE Cobia with my latest build using the new 45Drives HL15. I had originally been testing with Rocky Linux and Ubuntu and could easily saturate the 1G connection. After installing Cobia, I noticed SMB would run at about 50% of capacity, in the 60MB/s range. Thanks to the information in this thread, I was able to reproduce the problem reliably.

Just like what was indicated, the problem seems to stem from SMB datasets set up with ACL Type: SMB/NFSv4 combined with ACL Mode: Restricted. If all SMB shares are set up this way, then I see reduced SMB speeds transferring files from my M2 Mac mini. However, if I change the ACL Mode to anything else (e.g. Discard) on one SMB dataset, then all SMB shares are once again able to saturate the 1G connection at ~117MB/s. A key point is to make sure to restart smbd after doing this. The change doesn't seem to take effect until that restart.

As changing one share impacts all shares, I have to believe this is some kind of bug. My workaround right now is to keep one share with a different ACL Mode. I may even create a "dummy" dataset that's unbrowsable and lacks access. I'd be interested to hear if the ACL Mode change also works for anyone else.
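
If it helps anyone compare notes, the dataset properties behind those two UI fields can also be read from the shell. The dataset name below is just an example:

Code:
# show the ACL-related properties on the shared dataset (substitute your pool/dataset)
zfs get -o name,property,value acltype,aclmode tank/videowork
# after changing ACL Mode, restart the SMB service so smbd picks it up
# (from the web UI, or e.g.:)
systemctl restart smbd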
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
@rymandle05 Interesting that the ACLmode setting seems to be globally impactful.

Just to clarify - this only impacts a MacOS client, and even in the "slow performance" config, a Windows/Linux system can hit it at full speed?
 

rymandle05

Cadet
Joined
Jan 16, 2024
Messages
8
@rymandle05 Interesting that the ACLmode setting seems to be globally impactful.

Just to clarify - this only impacts a MacOS client, and even in the "slow performance" config, a Windows/Linux system can hit it at full speed?
I have not tried Linux or Windows. I’m mostly a Mac person but I do have a Windows gaming PC. I’ll test that out tomorrow.
 

Dysonco

Dabbler
Joined
Jul 4, 2023
Messages
27
So, I've tried recreating both my Samba shares and stripping and recreating the ACLs in as simple a form as I can. Still got the same performance issues.

I should point out that my clients are Windows 10.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
I should point out that my clients are Windows 10.
Does switching the ACL mode to Discard as suggested by @rymandle05 restore speed for you?

@alexmarkley Are you able to test with a non-MacOS client, just to see if the issue can be isolated further?
 

rymandle05

Cadet
Joined
Jan 16, 2024
Messages
8
Does switching the ACL mode to Discard as suggested by @rymandle05 restore speed for you?

@alexmarkley Are you able to test with a non-MacOS client, just to see if the issue can be isolated further?
And don’t forget to restart smb service. :smile: Changing the dataset alone wasn't enough to see a change.
 
Last edited: