I know it's been a few days, but I'm just now getting a chance to test this scenario.
At present, my top-level pool settings look like this:
From there, all of my individual datasets have ACL Type and ACL Mode set to Inherit. I'm guessing these are the defaults, because I did not adjust them manually.
This leads to a question: @rymandle05, did you ever test with ACL Mode set to Passthrough? If your throughput issues only happen when ACL Mode is set to Restricted, and throughput is normal otherwise, we might be looking at two different performance-impacting issues.
Because adjusting the ACL settings on the dataset level could be a destructive operation, I set up my testing like this:
- I removed all of my SMB shares from the TrueNAS configuration.
- I created a new dataset called `videotest`.
- I copied about 200 GB of data from the `videowork` dataset to the `videotest` dataset.
- I (locally) read the file via `cat foo | pv >/dev/null` a couple of times until I was satisfied that my L2ARC was populated. (Finished in 2m 51s, throughput 1.15GiB/s.)
- I set the ACL on `videotest` to Type: SMB/NFSv4 and Mode: Discard.
- I created the new SMB share and proceeded with testing.
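For anyone who wants to reproduce the warm-up step, a loop like the sketch below works. The file path and size here are illustrative stand-ins, not my actual 196GiB test file, and I've used `time cat` instead of `pv` so it doesn't depend on `pv` being installed. (Note that a `/dev/zero`-sourced file is highly compressible, so on a ZFS dataset with compression it's a poor benchmark target; this only demonstrates the loop shape.)

```shell
# Sketch of the cache warm-up step: read the same file repeatedly
# until the reported time stops improving (ARC/L2ARC populated).
testfile=$(mktemp)
dd if=/dev/zero of="$testfile" bs=1M count=64 status=none  # stand-in file
for pass in 1 2 3; do
  echo "pass $pass:"
  time cat "$testfile" > /dev/null
done
rm -f "$testfile"
```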
Here are the results of my tests. After each change to any server-side parameter, I restarted the SMB server via `sudo systemctl restart smbd`.
Purpose: Default share parameters; ACL Type: SMB/NFSv4; Test: Sequential read of 196GiB file
| SMB Client | ACL Mode Discard | ACL Mode Passthrough | ACL Mode Restricted |
|---|---|---|---|
| macOS 14.2.1 | Finished 4m 54s (685MiB/s) | Finished 5m 15s (638MiB/s) | Finished 4m 13s (795MiB/s) |
| Linux | Finished 3m 3s (1.07GiB/s) | Finished 3m 1s (1.08GiB/s) | Finished 3m 1s (1.08GiB/s) |
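As a sanity check on the figures above, the throughput can be recomputed from the file size and elapsed time. (The couple-MiB/s gap versus the table's 685MiB/s is just the times being rounded to whole seconds.)

```shell
# Recompute throughput from the table's size and times (196GiB file).
# MiB/s = GiB * 1024 / seconds.
awk 'BEGIN { printf "%.0f MiB/s\n", 196 * 1024 / (4*60 + 54) }'  # macOS, Discard: 683 MiB/s
awk 'BEGIN { printf "%.2f GiB/s\n", 196 / (3*60 + 3) }'          # Linux, Discard: 1.07 GiB/s
```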
Interestingly, ACL Mode Restricted actually performed a bit better on macOS. To double-check this, I went back and re-ran the performance tests on macOS for ACL Mode Discard and ACL Mode Passthrough a second time. In both cases, I put the best (fastest) of the two runs into the table, just in case cache warming was affecting the results.
For my next set of tests, I re-created the SMB share, this time with Purpose set to Multi-protocol (NFSv4/SMB) shares. (Again, restarting smbd after each configuration change.)
Purpose: Multi-protocol (NFSv4/SMB) shares; ACL Type: SMB/NFSv4; Test: Sequential read of same 196GiB file
| SMB Client | ACL Mode Discard | ACL Mode Passthrough | ACL Mode Restricted |
|---|---|---|---|
| macOS 14.2.1 | Finished 1h 2m 46s (53.6MiB/s) | Finished 1h 3m 38s (52.8MiB/s) | Finished 1h 4m 26s (52.2MiB/s) |
| Linux | Finished 8m 7s (414MiB/s) | Finished 8m 6s (414MiB/s) | Finished 7m 59s (420MiB/s) |
I'm not sure I see a strong correlation between ACL settings and the SMB share throughput on my system. I'm guessing there's some other difference in the hardware or software configuration that is affecting the issue.
I am shocked by the Linux client performance with Default share parameters. It's so fast that I think it's actually limited by the read throughput of the array. This also means Linux read performance is impacted by Multi-protocol (NFSv4/SMB) shares, albeit to a much lesser extent.
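To put "much lesser extent" in numbers, the slowdown factor from Default share parameters to Multi-protocol can be computed from the two tables (using the Discard column):

```shell
# Slowdown factor = default-purpose throughput / multi-protocol throughput.
awk 'BEGIN { printf "macOS: %.1fx slower\n", 685 / 53.6 }'         # 12.8x
awk 'BEGIN { printf "Linux: %.1fx slower\n", (1.07*1024) / 414 }'  # 2.6x
```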
I'm still not sure *why* this issue exists, but I'm satisfied that I know *how* to work around it.
(Last note: I was able to dig up an old Windows 10 machine and tried to run tests using it as a client, but I couldn't figure out how to get it connected to the SMB share. Embarrassing! So I gave up.)