
Is my memory slowing me down?

Status
Not open for further replies.

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
I am seeing some strange slowness with my transfer speeds from FreeNAS. My setup is pretty solid, but I am thinking that perhaps 16GB of RAM is not enough?

My current build is TWO identical boxes (one is running FreeNAS, the other is running CentOS with mirrored SSD drives):

X10SDV-6C+ motherboard
Xeon D-1528 (6c/12t) processor
16GB DDR4 ECC memory
6x 3TB WD Red (for the FreeNAS box)

Each box is plugged into my network with a 1Gb cable, and I just added a Cat6 run directly between the two servers for the 10Gb link. (This is not a crossover cable; I read that most modern NICs handle crossover switching automatically, but I could be mistaken.)

The 1Gb link appears more stable: it usually starts around 112 MB/s and has been dropping to 60-70 MB/s.
The 10Gb link is having worse performance: it starts around 300-400 MB/s and quickly drops to 40-50 MB/s. I would think that 16GB of DDR4 was enough, but maybe with the six drives I need to get another 16GB DDR4 ECC stick. I wanted to know what you guys thought.
 

m0nkey_

MVP
Joined
Oct 27, 2015
Messages
2,739
I don't think RAM is your limiting factor here. I'd be more curious about the disk I/O performance and what else you have running on the box.

Have you tested network connectivity using iperf?
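For anyone following along, a minimal iperf run between the two boxes looks roughly like this (the address 10.0.0.1 is a hypothetical stand-in for the FreeNAS box's IP on the link under test), along with the theoretical line rates the results can be compared against:

```shell
# Sketch of a basic iperf test -- commands shown as comments since they
# need both live hosts. 10.0.0.1 is a hypothetical address:
#
#   (FreeNAS box, server side)  iperf -s
#   (CentOS box, client side)   iperf -c 10.0.0.1 -t 30 -i 5
#
# For comparison, theoretical line rates (Mbit/s divided by 8 bits/byte):
echo "1GbE:  $((1000 / 8)) MB/s raw (~112-113 MB/s after TCP/IP overhead)"
echo "10GbE: $((10000 / 8)) MB/s raw"
```

If iperf hits near line rate but file transfers do not, the bottleneck is above the network layer (protocol, CPU, or disks).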
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
Thanks for the quick reply!

I have never used iperf before, so I have learned something new...

iperf is showing me numbers closer to what I expect.

Through 1Gb link:
I get a stable 112-113 MB/s throughout the entire test.

Through 10Gb Link:
It varies quite a bit more, but still 266-525 MB/s (usually around the 400-450 MB/s mark).

This makes me think that it is SMB having issues and not the FreeNAS system itself. I believe SMB is single-threaded, but surely my Xeon D-1528 would be capable of maxing out an SMB data stream? I will try to get an NFS share mounted through the 10Gb link and test.
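For reference, an NFS test along those lines might look like the following sketch (the IP and export path are hypothetical; the runnable part just demonstrates the dd read invocation against a local scratch file):

```shell
# Sketch: mount the FreeNAS export on the CentOS box and read through it.
# 10.0.0.1 and the export path are hypothetical -- substitute your own:
#
#   sudo mount -t nfs 10.0.0.1:/mnt/tank/Data /mnt/nfstest
#   dd if=/mnt/nfstest/test.dat of=/dev/null bs=2048k
#
# The dd read invocation itself, demonstrated against a local scratch file:
dd if=/dev/zero of=/tmp/nfsdemo.dat bs=1M count=8 2>/dev/null
dd if=/tmp/nfsdemo.dat of=/dev/null bs=1M 2>&1 | tail -n 1
```

Reading through the mount rather than locally isolates the protocol-plus-network path from the disks.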
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,263
Xeon D cores only run at 2ish GHz.
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
Xeon D cores only run at 2ish GHz.

I am aware. Are you suggesting a 2 GHz processor can't saturate a 1Gb SMB share? I am not sure that is accurate... Could you provide documentation or some tests relating clock speed to SMB throughput? I could be wrong, but I would not think that was the problem here (although possibly for the upper bounds of the 10Gb NIC). According to the system load, less than half of one core is currently being used.

Also, I did a test with an NFS share with similar results. The 1Gb link seemed to cap out around 60-113 MB/s.
The 10Gb link starts around ~250 MB/s, and then drops to 20-30 MB/s.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,263
I was angling more towards the 10-gig performance. The Xeon Ds are going to be slower than 4 GHz Xeons for single-threaded SMB, no matter how many cores they have.

Your issue could well be due to random read performance on your array.
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
I am still a bit confused as to the issue. Maxing out the NICs with iperf doesn't seem to make my system sweat at all, and it performs as expected.

I also tried a few dd commands to test the read/write performance (with count set to 10000, and a longer test with 100000). It doesn't seem like the system has issues with raw read/write performance.
Code:
% dd if=/dev/zero of=/mnt/tank/Data/test.dat bs=2048k count=10000
20971520000 bytes transferred in 5.994379 secs (3498530584 bytes/sec)

% dd if=/dev/zero of=/mnt/tank/Data/test.dat bs=2048k count=100000
209715200000 bytes transferred in 57.795787 secs (3628555161 bytes/sec)

% dd of=/dev/null if=/mnt/tank/Data/test.dat bs=2048k count=10000
20971520000 bytes transferred in 3.154333 secs (6648480123 bytes/sec)

% dd of=/dev/null if=/mnt/tank/Data/test.dat bs=2048k count=100000
209715200000 bytes transferred in 33.045782 secs (6346201753 bytes/sec)

NFS (which is multi-threaded) also has significant slowdowns. Server load is quite low, even during transfers. Perhaps someone knows of some better tests?
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
Made a bit more of a discovery after poking around Reddit.

My issues are ONLY affecting READS, not WRITES.

SMB and NFS over the 10Gb connection write at ~250 MB/s reliably. If I do a dd test through the NFS share as a write, it hits about 500 MB/s. The server hits about 10% CPU utilization max; with SMB, it hits about 3%.

Definitely something going on with READS only. Still perplexed.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,263
Remember, when benchmarking reads with dd, you need to ensure that the dataset has compression off... unless you're reading a pre-generated (incompressible) file... and read after a reboot to ensure that the ARC is not playing games.
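To illustrate the point: a /dev/zero-sourced file is almost perfectly compressible, so with compression on, a dd "read" mostly measures decompression and ARC speed rather than the disks. A quick local demonstration of the compressibility, with the ZFS side shown as a sketch (the dataset name is hypothetical):

```shell
# Zeros compress to almost nothing -- a compressed dataset barely touches
# disk when reading back a file that was written from /dev/zero:
dd if=/dev/zero of=/tmp/zeros.dat bs=1M count=64 2>/dev/null
gzip -kf /tmp/zeros.dat
ls -l /tmp/zeros.dat /tmp/zeros.dat.gz   # the .gz is a tiny fraction of the original
#
# On FreeNAS, a compression-free benchmark dataset would look like
# (sketch -- dataset name is hypothetical):
#   zfs create tank/bench
#   zfs set compression=off tank/bench
```

This is also why the earlier dd numbers (3.5 GB/s writes from six spinning disks) are implausible as raw disk throughput.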
 

icsy7867

Contributor
Joined
Dec 31, 2015
Messages
167
More oddness!

The machine I was testing from was an oVirt guest VM backed by some Samsung 850 Pros mirrored using LVM. That storage is actually performing worse than expected, and I believe it is the real cause of the "slowness", not my FreeNAS system.

To test, I did a PCI passthrough of an NVMe drive to the Windows guest and copied a test file over SMB, which easily hit 250-300 MB/s and sustained that speed for the entire duration of the transfer. It looks like I was troubleshooting the wrong system the entire time!
 