[SOLVED] SMB performance slow

Maoyi

Dabbler
Joined
Apr 9, 2021
Messages
12
I use a Dell T620; the specs are as follows:
2 * XEON E5-2667 V2
384GB of ECC REG memory
1 Dell H700 RAID-5 internal Disk (15K SAS 300GB*12) with CacheCade (SAS 100GB*4) for Proxmox VE (PVE) VM and Container
1 Dell H810 RAID-5 with Dell MD-1000 (7.2K SAS 6TB *15) for TrueNAS SCALE 21.02 (PVE pci-e passthrough)
1 Dell H700 RAID-5 internal Disk (7.2K SATA-3 1TB*4) for TrueNAS SCALE 21.02 (PVE pci-e passthrough)
1 Motherboard internal SATA Disk non-RAID (7.2K SATA-3 1TB*2) for Windows 2019 (PVE pci-e passthrough)
All the H/W firmware is up to date.
Proxmox VE has 1 TrueNAS SCALE 21.02, 1 Windows 2019, and a lot of Ubuntu VMs.

When I use Windows 2019 to transfer data to TrueNAS SCALE 21.02, single files over 30GB,
the TrueNAS disk (MD-1000) write performance is always lower than 5MB/s.

I don't know if it's a RAID card driver issue or a TrueNAS bug.
Can someone give me some suggestions?
 

ornias

Wizard
Joined
Mar 6, 2020
Messages
1,458
Well, let's evaluate that setup, shall we? :)


I use a Dell T620; the specs are as follows:
2 * XEON E5-2667 V2
384GB of ECC REG memory

That's solid for ZFS :)

1 Dell H700 RAID-5 internal Disk (15K SAS 300GB*12) with CacheCade (SAS 100GB*4) for Proxmox VE (PVE) VM and Container
Not relevant I guess, because it isn't related to TrueNAS

1 Dell H810 RAID-5 with Dell MD-1000 (7.2K SAS 6TB *15) for TrueNAS SCALE 21.02 (PVE pci-e passthrough)
1 Dell H700 RAID-5 internal Disk (7.2K SATA-3 1TB*4) for TrueNAS SCALE 21.02 (PVE pci-e passthrough)
NEVER use a RAID controller with ZFS, and DEFINITELY NEVER RAID-5 with 3TB+ disks >.<
Only use HBAs.
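If you want to see what TrueNAS is actually being handed, a quick (hedged) check from the SCALE shell is below; device names like /dev/sda are just placeholders:

    # List block devices with size/model/serial; one big virtual disk per RAID
    # volume means the controller is hiding the real drives from ZFS.
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # On a proper HBA, SMART data comes straight from each physical drive;
    # behind a RAID virtual disk this usually fails or needs -d megaraid,N.
    smartctl -i /dev/sda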

1 Motherboard internal SATA Disk non-RAID (7.2K SATA-3 1TB*2) for Windows 2019 (PVE pci-e passthrough)
Not relevant I guess, because it isn't related to TrueNAS

All the H/W firmware is up to date.
Nice to know, but you still shouldn't be using RAID controllers.

Proxmox VE
Okay, so everything is virtualised; that could be a bigger factor ;-)

has 1 TrueNAS SCALE 21.02 and
Okay, but did you install the guest additions?
Slow performance has been noted without the guest additions.
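For a Proxmox guest that means the QEMU guest agent; a rough way to check it from both sides (VMID 100 below is just a placeholder):

    # Inside the TrueNAS SCALE VM: is the agent installed and running?
    systemctl status qemu-guest-agent

    # On the Proxmox host: does the VM answer agent pings?
    # (replace 100 with the real VMID and enable the agent in the VM options)
    qm agent 100 ping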


1 Windows 2019, and a lot of Ubuntu VMs.
Not relevant I guess, because it isn't related to TrueNAS

When I use Windows 2019 to transfer data to TrueNAS SCALE 21.02, single files over 30GB,
ok.

the TrueNAS disk (MD-1000) write performance is always lower than 5MB/s.
Okay, but you shouldn't be using a RAID-5 RAID card to begin with.


I don't know if it's a RAID card driver issue or a TrueNAS bug.
Can someone give me some suggestions?
Could be drivers, as TrueNAS isn't made to be used with RAID cards.


Ohh... and did I mention my lord and savior: don't use RAID cards? :P
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
I'd suggest testing with a client doing reads or writes via SMB... get a baseline. Ensure that there is a queue depth of at least 8. What bandwidth do you see?

When you copy from one system to another, it's typically only 1 I/O at a time: read, then write, then wait for completion... then read, then write again. This copy process makes latency look like a throughput issue. The reality is that it's a dumb copy algorithm coupled with some latency.
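For example, a rough fio run against the SMB share mounted on the client would give that kind of baseline number; the path and size below are only placeholders for your setup:

    # Sequential writes at queue depth 8 against the mounted SMB share
    # (if direct I/O isn't supported on the mount, drop --direct=1 and
    #  add --end_fsync=1 instead; on Windows, use --ioengine=windowsaio)
    fio --name=smb-baseline --filename=/mnt/smbshare/fio-test.bin \
        --rw=write --bs=1M --size=4G \
        --ioengine=libaio --iodepth=8 --direct=1

    # Repeat with --rw=read for the read side of the baseline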
 

Maoyi

Dabbler
Joined
Apr 9, 2021
Messages
12
Thanks for the reply.

I used the same H/W system with TrueNAS 12 earlier, also using PVE for storage, and there was no performance issue.
TrueNAS SCALE 21.02 ships with qemu-guest-agent by default, and I upgraded it.
Another physical PC on the same network, using 10GbE and the SMB protocol, has the same issue on SCALE 21.02 but not on CORE 12.
 

Maoyi

Dabbler
Joined
Apr 9, 2021
Messages
12
In the config, I enabled tcp_bbr, set txqueuelen 10000 and mtu=900, disabled rxcsum/txcsum/tso/lro/gso, and set megaraid_sas.max_queue_depth=10000.
But disk read/write performance is still low.
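For reference, those settings roughly correspond to commands like the following on a Linux guest; enp1s0 is just a placeholder interface name, and the values are the ones above, not recommendations:

    # Congestion control
    sysctl -w net.ipv4.tcp_congestion_control=bbr

    # Queue length and MTU on the NIC (interface name is a placeholder)
    ip link set dev enp1s0 txqueuelen 10000 mtu 9000

    # Disable checksum/segmentation offloads
    ethtool -K enp1s0 rx off tx off tso off lro off gso off

    # megaraid_sas module parameter, applied at the next module load/boot
    echo "options megaraid_sas max_queue_depth=10000" > /etc/modprobe.d/megaraid_sas.conf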
 

Maoyi

Dabbler
Joined
Apr 9, 2021
Messages
12
I tried copying a single file of over 40GB from the MD-1000 to the internal RAID-5 disks, and the transfer speed was around 500MB/s.
So I don't think it's a Dell RAID card hardware or driver problem, but I still haven't found what's happening.
I use an Intel X520 NIC; the Windows 2019 and TrueNAS SCALE 21.02 VMs use VirtIO NICs, and all MTUs are 9000 (not 900).
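Since local copies are fast, the network path on its own might be worth testing too, for example with an iperf3 run between the Windows client and the SCALE VM (the IP below is a placeholder):

    # On the TrueNAS SCALE VM: start an iperf3 server
    iperf3 -s

    # On the client: single stream, then 4 parallel streams
    # (replace 192.168.1.10 with the TrueNAS IP)
    iperf3 -c 192.168.1.10
    iperf3 -c 192.168.1.10 -P 4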
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
Thanks for the reply.

I used the same H/W system with TrueNAS 12 earlier, also using PVE for storage, and there was no performance issue.
TrueNAS SCALE 21.02 ships with qemu-guest-agent by default, and I upgraded it.
Another physical PC on the same network, using 10GbE and the SMB protocol, has the same issue on SCALE 21.02 but not on CORE 12.
It sounds like you changed two things at once... the TrueNAS edition and PVE for storage?

I have also been told that NTP mis-settings can slow down SMB... can you verify those?
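From the SCALE shell, a quick check looks roughly like this (which time daemon is present can vary by build):

    # Is the system clock synchronised?
    timedatectl status

    # If ntpd is in use, show the configured peers and their offsets
    ntpq -pn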
 

Maoyi

Dabbler
Joined
Apr 9, 2021
Messages
12
I checked; the NTP server is the default (0.freebsd.pool.ntp.org).
Can you tell me what setting I might have missed?

Yes, I used CORE with PVE storage, and performance was OK.
 

Konopelski

Cadet
Joined
Apr 12, 2021
Messages
1
I do have one outstanding registry setting for unthrottling SMB traffic (HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters\DisableBandwidthThrottling), but I'm not a huge fan of it, since it'd need to be applied on each machine.
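Roughly, from an elevated command prompt it would be something like the line below (same key path as above; a reboot, or at least restarting the Workstation service, is usually needed for it to take effect):

    reg add "HKLM\System\CurrentControlSet\Services\LanmanWorkstation\Parameters" ^
        /v DisableBandwidthThrottling /t REG_DWORD /d 1 /f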

 

Maoyi

Dabbler
Joined
Apr 9, 2021
Messages
12
Thanks for the reply.
I tried setting DisableBandwidthThrottling to 1, but it didn't work for me.
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
If TrueNAS is reporting that drive latency is good and CPU/network utilization is fine... then you can report a bug or wait for 21.04 (expected next week).

Obviously, it would be preferable to have seen the issue in a bare-metal environment. Has anyone seen anything similar?
 

Maoyi

Dabbler
Joined
Apr 9, 2021
Messages
12
If TrueNAS is reporting that drive latency is good and CPU/network utilization is fine... then you can report a bug or wait for 21.04 (expected next week).

Obviously, it would be preferable to have seen the issue in a bare-metal environment. Has anyone seen anything similar?
After I installed TrueNAS SCALE 21.04, the issue was resolved.
 

bryanpedini

Cadet
Joined
Mar 15, 2022
Messages
4
Hello,
Sorry in advance for necroposting, but I have a similar issue on SCALE 22.02.0.

I configured a RAIDZ3 pool with 8 disks and two sub-datasets to separate my PVE infrastructure from my own usage (it seemed silly to allocate the entire pool to Proxmox just to have an OpenMediaVault VM to use for my documents, so I created another dataset in TrueNAS directly).
The Proxmox dataset has only an NFS share configured; my dataset has both an NFS and an SMB share configured.
The problem is that on SMB I see 4MB chunks of data transferred over a couple of fractions of a second (maybe one second) at a time, while on SMB the speed is much higher, on the same network, on the same disks, between the same two machines.

Two things to be noted:
- 1st · I still haven't upgraded to either 5GbE or even 2.5GbE; I'm still running good ol' 1GbE RJ45 copper CAT5e cables (except between TrueNAS and the switch and between Proxmox and the switch, where "just to be safe" I crimped CAT6 cables myself with CAT6-rated cable and CAT6-rated jacks, but that's beside the point and I digress)
- 2nd · I'm not a Windows user (except for gaming, and when it doesn't work I just back up the game saves and nuke the sh*t out of that crap OS, but I digress), so I'm testing all of this on an Arch Linux machine (my main d2d box)
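For an apples-to-apples comparison from the Arch box, one option is to mount both shares and write the same file to each; IPs, share names and credentials below are placeholders, not the actual config:

    # Mount the SMB and NFS shares from the same TrueNAS dataset (placeholder paths)
    sudo mount -t cifs //192.168.1.10/mydata /mnt/smb -o username=me,vers=3.1.1
    sudo mount -t nfs 192.168.1.10:/mnt/tank/mydata /mnt/nfs

    # Same sequential write to each mount; fdatasync makes dd wait until the
    # data has actually reached the server before reporting a speed
    dd if=/dev/zero of=/mnt/smb/test.bin bs=1M count=4096 conv=fdatasync
    dd if=/dev/zero of=/mnt/nfs/test.bin bs=1M count=4096 conv=fdatasync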
 

morganL

Captain Morgan
Administrator
Moderator
iXsystems
Joined
Mar 10, 2018
Messages
2,694
The Proxmox dataset has only an NFS share configured; my dataset has both an NFS and an SMB share configured.
The problem is that on SMB I see 4MB chunks of data transferred over a couple of fractions of a second (maybe one second) at a time, while on SMB the speed is much higher, on the same network, on the same disks, between the same two machines.
Did you mean NFS is slower?
NFS requires synchronous writes and a SLOG, unless you relax that with sync=disabled.
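For reference, checking and relaxing that on a dataset looks roughly like this (tank/proxmox is a placeholder name, and sync=disabled trades the safety of in-flight data on a crash for speed, so treat it as a test knob):

    # See how sync writes are currently handled on the dataset
    zfs get sync tank/proxmox

    # Relax sync writes for testing only
    zfs set sync=disabled tank/proxmox

    # Put it back afterwards
    zfs set sync=standard tank/proxmox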
 

bryanpedini

Cadet
Joined
Mar 15, 2022
Messages
4
Did you mean NFS is slower?
NFS requires synchronous writes and a SLOG, unless you relax that with sync=disabled.
No, sorry, I mistyped and wrote "SMB" twice.
The correct sentence should have been "on SMB I see 4MB chunks of data transferred over a couple of fractions of a second at a time, while on NFS the speed is much higher, on the same network, on the same disks, between the same two machines".

I can make a video PoC of the two scenarios if needed; to my eyes it seems that NFS is faster than SMB on my machine...
 

bryanpedini

Cadet
Joined
Mar 15, 2022
Messages
4
So today I needed to perform a heavy operation over SMB (namely, copying a 6GB ISO image), and noticed this:
(screenshot attached: Screenshot_20220320_173230.png)

I was careful to take the screenshot right when the transfer started: there's a spike, followed by a dip (probably speed calibration and negotiation of something, idk), then this rollercoaster of a less-than-optimally-reliable graph... at the next Windows boot I'll try from there too and see if there's any noticeable difference between the two SMB client implementations.
Or I might wait until 22.04.0 (or whatever the next release ends up being called) and see ¯\_(ツ)_/¯
 