Raid-Z2 SMB read speed ~66MB/s (slow?)

Frikkie

Dabbler
Joined
Mar 10, 2019
Messages
41
Despite being new to FreeNAS, over the past few months of use I have really come to appreciate how powerful it is as a storage operating system (in particular ZFS with adequate RAM!).

Drive Setup:
6x WD Red 3TB in Raid-Z2 (half the drives have the 80.00 firmware, the others 82.00)
4 connected via the onboard SATA2 controller, 2 via a SATA3 Marvell controller (88SE9128 chipset)

Storage Pool settings:
Sync: Standard
Compression level: lz4
Share Type: Windows
Enable Atime: off
Case Sensitivity: Sensitive (greyed out)

Windows (SMB) Share settings:
Browsable to Network Clients: ticked
VFS Objects: zfsacl, zfs_space

SMB Service settings:
Local Master: ticked
Time Server for Domain: ticked
Auxiliary parameters:
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no
Bind addresses: two IPs on separate networks


N.B. I added the auxiliary parameters this morning for some testing, but they seem only to have improved Windows directory-listing times when opening music directories with hundreds of files... not the read speeds.

At the moment I have consistent write speeds (from my Windows PC [Ryzen 1600X OC 4GHz, 16GB DDR4-3200MHz RAM, Samsung 850 Pro SSD, gigabit NIC] to the FreeNAS 11.2-U2.1 server) of a beautiful 112MB/s for a single large file, which I understand basically fully saturates a gigabit NIC once overhead (and Windows) is taken into account.
Yet my read speeds from the server are a possibly quite low 65-67MB/s. Iperf speed tests have also delivered results in the 528Mbit/s range, yet copying a large file from my laptop to my PC yields 112MB/s. This leads me to think the problem could be with my FreeNAS setup.
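For anyone wanting to reproduce the network-only test, a minimal iperf3 run between the two machines looks something like this (a sketch; the server IP is a placeholder, and iperf3 is assumed to be installed on both ends):

```shell
# On the FreeNAS box: start the iperf3 server.
iperf3 -s

# On the Windows client: run a 30-second throughput test.
# 192.168.1.10 is a placeholder for the FreeNAS server's IP.
iperf3 -c 192.168.1.10 -t 30
```

This measures raw TCP throughput with no disks or SMB involved, so it isolates the network from the storage stack.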

I hope the complete specs for my server in my signature and the additional info given in this post will be enough for someone to be able to point me in the right direction.
If any additional information is required, I would be happy to post screenshots of shell command outputs.
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379

Frikkie

Dabbler
Joined
Mar 10, 2019
Messages
41
If that does turn out to be the case, I guess I'll try having just the boot SSD connected to the Marvell controller, with the other hard drives on the onboard controller.
Any suggestions when it comes to HBA cards?
 

Frikkie

Dabbler
Joined
Mar 10, 2019
Messages
41
@Jessep Just connected all data drives from the raidz2 array to the onboard SATA controller. Same 66MB/s read speed.
Could an old SATA cable be at fault? I mean, even if by a stroke of sheer bad luck I were using a SATA 1 cable, it still wouldn't be the bottleneck, given its 150MB/s bandwidth.
 

Frikkie

Dabbler
Joined
Mar 10, 2019
Messages
41
Here is a screenshot of a dd test I just ran. Read performance seems to be over twice the write performance. :thinking:
DD test.png
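For reference, a test along these lines is typically run as a write/read pair on a dataset with compression disabled, since lz4 would otherwise compress the zeros from /dev/zero away and inflate the numbers (a sketch; the dataset path and sizes are placeholders, adjust to your pool):

```shell
# Write test: 4GiB of zeros in 1MiB blocks to a dataset with
# compression=off (path is a placeholder; adjust to your pool).
dd if=/dev/zero of=/mnt/tank/Plex/ddtest bs=1M count=4096

# Read test: read the file back, discarding the data.
dd if=/mnt/tank/Plex/ddtest of=/dev/null bs=1M

# Clean up the test file.
rm /mnt/tank/Plex/ddtest
```

Note that the read side can still be served partly from ARC if the file is smaller than RAM, which would exaggerate read throughput.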


Is it SMB related?
Outputs of smb4.conf, including the Plex dataset info. I made sure that the dataset I ran the dd tests on (in this case "Plex") has no compression. In the first of the next three screenshots I blanked out my interfaces: one is localhost, and the other two are on my local network on different subnets. The second one, which is not on my "true" network, is just there as a failsafe so that the server stays reachable if one port dies (static route set up in the main router, which is the DHCP server).
SMB4.conf 1.png
SMB4.conf 2.png

SMB4.conf Plex dataset.png
 

Jessep

Patron
Joined
Aug 19, 2018
Messages
379
Looks like that board uses Realtek NICs, also not recommended.

As for what your specific issue is I'm not sure.
 

Andrew Barnes

Dabbler
Joined
Dec 4, 2014
Messages
21
Frikkie, have you tried running your dd benchmark command locally on the zpool?
If that gives you the performance you are expecting, then you could look into the network, and also compare NFS against CIFS/SMB, SFTP, rsync, etc.

You might find that CIFS/SMB is bottlenecking on the CPU?

Another possibility, a long shot: are you using encryption on your pool without hardware support for it in your CPU? This would also show up in CPU usage; have a look at your CPU usage during your SMB/CIFS and NFS testing.
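A quick way to check the CPU theory from the FreeNAS shell during a large SMB transfer (a sketch; the grep pattern simply filters for Samba's smbd processes):

```shell
# Sample CPU usage of the Samba daemons while a large SMB read runs.
# A single smbd process pinned near 100% of one core suggests SMB
# itself (largely single-threaded per client) is the bottleneck.
ps -ax -o pid,%cpu,command | grep '[s]mbd' || echo "no smbd running"

# Interactively, `top -aSH` (FreeBSD) shows per-thread CPU usage live.
```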
 

Frikkie

Dabbler
Joined
Mar 10, 2019
Messages
41
I managed to acquire 2 Fujitsu 4-port gigabit server NICs from work. I installed one of them in my FreeNAS system, set up the interface in the GUI with an IP on a different subnet, made sure the new interface was also bound to SMB, hooked my PC up directly to the 4-port NIC et voila!... 112MB/s sustained read speeds through Windows Explorer.
I guess one just needs to stay the hell away from Realtek NICs.
Now it's time to install the second NIC in my main system, order 4 LAN cables from Amazon and see what kind of performance I can get with all 4 ports populated on both NICs. :D
 