TrueNAS Core vs Scale...SAMBA

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
I have a mixed 1G and 10G network, and I observed the other day, after rebuilding one of my hosts as a TrueNAS SCALE box, that whatever the difference in Samba is, SCALE sticks multichannel to my 2× 10G links rather than striping it across the 2× 10G + 1× 1G links my desktop has.

That striping is something I've really not liked in the past with CORE, going back to the FreeNAS days...

When just the 10G links are being used for multichannel, I can get 6Gb/s across each 10G NIC. With the 1G NIC in play, this drops to around 600Mb/s across all three.

SCALE, it seems, behaves much like Windows Server does: it doesn't bother using the 1G link when 10G links are available.
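For anyone wanting to verify the same thing, the interface selection is visible from the Windows client side; a quick sketch using the standard SmbShare cmdlets (the output will of course depend on your own NICs):

```
# List active SMB multichannel connections and which local/remote
# interfaces each one is actually riding on
Get-SmbMultichannelConnection

# Show the client NIC properties (link speed, RSS, RDMA) that the
# client weighs when picking interfaces for multichannel
Get-SmbClientNetworkInterface
```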

So I'm considering moving everything over to SCALE for this reason alone, but first I want to ask three questions:

1) What is the version difference between Samba on CORE vs on SCALE?
2) When will CORE get the same version?
3) Is SCALE considered a legit prime-time release, or is it still pre-release?
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
1) What is the version difference between Samba on CORE vs on SCALE?
2) When will CORE get the same version?
3) Is SCALE considered a legit prime-time release, or is it still pre-release?

SCALE is more up to date with Samba...
13.1 will move to the same Samba version in Q1 next year (multichannel is not a focus, however... we recommend SCALE)
SCALE is legit.... for SMB it is the more feature-rich implementation (e.g. Syncthing is better)

SCALE 23.10.0.1 is good... still early in its cycle. 23.10.1, due next month, will be better.
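(If you want to compare the Samba versions yourself, both platforms ship the standard Samba binaries; from a shell on each box:)

```
# Print the Samba version on either CORE or SCALE
smbd --version

# Alternatively, via Samba's config checker
testparm -V
```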
 

jenksdrummer

Patron
Joined
Jun 7, 2011
Messages
250
Well, after more testing - this time copying back from one Cobia box to another - I'm seeing SMB results that align with the behavior I saw under CORE.

Basically it opts out of using the 1G NIC, which is nice, but the speeds are about the same as if the 1G NIC were in play - i.e., I'm seeing around 600Mb/s. If I modify Samba to disable the 1G path (see the sketch below), I get 4.4-6Gb/s on both 10G NICs.
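For reference, a minimal sketch of the Samba side of that change - these are standard smb.conf parameters (on TrueNAS they'd go in the SMB service's auxiliary parameters), and the interface names are placeholders for your own NICs:

```
[global]
    # Bind and advertise only loopback plus the two 10G interfaces,
    # so multichannel never offers the 1G path to clients
    bind interfaces only = yes
    interfaces = lo ens1f0 ens1f1

    # Multichannel itself; on by default in recent Samba releases
    server multi channel support = yes
```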

My testing seems to imply the source system holding the data is the culprit. On CORE, it would use all three NICs and give an aggregate of about 1× the slowest NIC (1G) across the 2× 10G + 1× 1G set. SCALE does something similar, except it drops the slowest path: roughly (SlowestLink) − (SlowestLink)/(NumberOfPaths). I.e., I was getting a rough aggregate of around 1Gb/s under CORE, but under SCALE I get around 650Mb/s on only 2 NICs - which matches 1Gb/s − 1Gb/s ÷ 3 ≈ 0.67Gb/s. Actually... slower!

Disable the binding on the 1G NIC on the source server*, and it pops up to 4.4Gb/s per NIC (8.8Gb/s aggregate).


(*A bit of an assumption: I disabled the binding on both source and destination, and it's possible that in my earlier test I had already disabled the 1G binding on CORE - though I do recall SCALE was at defaults while I was offloading from CORE...)
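Incidentally, the same exclusion can be done from the Windows client without touching the server, by constraining multichannel to specific client NICs; a sketch with the standard SmbShare cmdlet (the server name and interface indexes are examples - get yours from Get-NetAdapter):

```
# Only use client interfaces 12 and 15 (the 10G NICs here) when
# talking to the server named "truenas"
New-SmbMultichannelConstraint -ServerName "truenas" -InterfaceIndex 12,15
```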

Anyhow, as Samba is its own boat (a separate upstream project), I figure this won't get fixed. Meanwhile, I'm going back to CORE since it seems more... mature.


I will say, though, that the live reporting is a good thing - but the sort order is out of whack. Also missing is compression reporting in the GUI under the datasets.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694

With SCALE you do have to specially configure a larger ARC for max performance... that won't be the default until Dragonfish.
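For readers hitting this now, a sketch of what that tuning looks like on SCALE - zfs_arc_max is the standard OpenZFS-on-Linux module parameter (on Linux the ARC defaults to roughly half of RAM), and the 64 GiB value is just an example:

```
# Check the current ARC cap (0 = OpenZFS default, ~50% of RAM on Linux)
cat /sys/module/zfs/parameters/zfs_arc_max

# Raise the cap to 64 GiB for this boot (value is in bytes)
echo 68719476736 > /sys/module/zfs/parameters/zfs_arc_max
```

On SCALE you'd persist that across reboots with a post-init script rather than editing system files directly.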
 