Slow speeds on LAN, disks are fine though.

Status
Not open for further replies.

TheShellshock67

Dabbler
Joined
Feb 22, 2013
Messages
25
Dear forum members,

I'm at a loss here.
Specs of the machine in question:

8 TB of disks
16 GB of DDR2 (PC2-5300F) ECC RAM
2 cores at 2 GHz (some kind of Xeon)

Usually my FreeNAS/ZFS NAS devices were pretty slow due to low amounts of RAM.
This time RAM issues were out of the question: we started with 4 GB and saw huge performance issues, then bought some new RAM and the speeds rose considerably.
Now we're getting slow speeds even with FTP.
Speeds like 64 MB/s read and 54 MB/s write.
Writing a 50 GB file with dd on the machine itself (via SSH) gives a far better number of about 300 MB/s.
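Roughly what I ran, from memory (the /mnt/tank path is just a stand-in for my actual pool mount point):

# Write test: push 50 GB of zeroes at the pool
# (compression is off on this dataset, otherwise the numbers mean nothing)
dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=50000
# Read test: read the file back and discard it
# (best done after a reboot, or the ARC serves most of it from RAM)
dd if=/mnt/tank/ddtest of=/dev/null bs=1M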
I tested with a couple of different NICs: the onboard Broadcom, an Intel single-port GigE card and an Intel quad-port GigE card; no considerable difference.
top shows my CPU either almost idle or at most 50% used (during CIFS transfers).
Any ideas what I could try here?

Thanks in advance

TheShellshock67
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
DDR2 implies Core 2-era stuff. That implies FSB.

Nothing kills performance quite like the FSB...

...except of course for nearly full pools.
 

TheShellshock67

Dabbler
Joined
Feb 22, 2013
Messages
25
Sorry for the long silence, I've had a very busy time... sigh.

What I meant with the 8 TB of disks was total disk space: 8x 1 TB WD RE3 disks.
The machine in question is in fact old, a Dell PowerEdge 2900... I know, but it was almost free.
@Ericloewe: Could you explain to me why the FSB kills performance?
And why do I see good performance with dd (without compression) but not with any of the protocols?

Thanks so far!
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Sorry for the long silence, I've had a very busy time... sigh.

What I meant with the 8 TB of disks was total disk space: 8x 1 TB WD RE3 disks.
The machine in question is in fact old, a Dell PowerEdge 2900... I know, but it was almost free.
@Ericloewe: Could you explain to me why the FSB kills performance?
And why do I see good performance with dd (without compression) but not with any of the protocols?

Thanks so far!
Everything in and out of the processor has to travel through the FSB, which is rather slow. This results in a memory/IO bottleneck, even though the core itself could do some more work.
 

TheShellshock67

Dabbler
Joined
Feb 22, 2013
Messages
25
Everything in and out of the processor has to travel through the FSB, which is rather slow. This results in a memory/IO bottleneck, even though the core itself could do some more work.
OK, I get that, but why is the dd command, which writes or reads a file locally, almost 10 times faster than copying the same file over iSCSI/FTP/NFS/SMB or whatever transfer protocol FreeNAS supports?
Does the FSB bottleneck only come into play when some sort of network traffic is involved?
Can I do other checks to see where the problem lies, instead of swapping out the network cards and adding even more RAM?

Thanks for the answers, maybe my brain actually stores something now :P
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I suspect your CIFS processor usage doesn't include what's behind the scenes, like interrupt, ZFS, nice and user time. So in a nutshell, 50% CPU can really mean 100% CPU. Besides, 50% based on what, a single core or more?
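To see what's really going on, FreeBSD's top can break usage out per CPU and per thread; something like this (flags from memory):

# -S shows kernel/system processes, -H lists individual threads,
# -P adds a per-CPU usage breakdown at the top of the display
top -SHP

If a single smbd thread sits near 100% while the other core idles, you've found a single-threaded bottleneck.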
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
OK, I get that, but why is the dd command, which writes or reads a file locally, almost 10 times faster than copying the same file over iSCSI/FTP/NFS/SMB or whatever transfer protocol FreeNAS supports?
Does the FSB bottleneck only come into play when some sort of network traffic is involved?
Can I do other checks to see where the problem lies, instead of swapping out the network cards and adding even more RAM?

Thanks for the answers, maybe my brain actually stores something now :p
Network involves an extra set of trips through the FSB, which can certainly contribute.
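One way to test it is to take the disks out of the equation entirely and measure the raw network path with iperf (FreeNAS ships it, if I remember right); the address below is a placeholder:

# On the NAS:
iperf -s
# On a client machine, replacing 192.168.1.100 with the NAS's address:
iperf -c 192.168.1.100 -t 30

If that alone can't get near gigabit wire speed (~940 Mbit/s), the bottleneck is in the network path before the pool even enters the picture.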
 

TheShellshock67

Dabbler
Joined
Feb 22, 2013
Messages
25
I suspect your CIFS processor usage doesn't include what's behind the scenes, like interrupt, ZFS, nice and user time. So in a nutshell, 50% CPU can really mean 100% CPU. Besides, 50% based on what, a single core or more?
50% total usage; it is a dual-core processor.
It could be that CIFS is only capable of filling one core to 100%, hence the 50% total.

Network involves an extra set of trips through the FSB, which can certainly contribute.
Mm, OK, that sounds like something that slows the server down, but could it seriously make this much difference?
Is there any way to test this?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
dd writes (and reads) are extremely simplistic. They simply read (or write) to the zpool. With real workloads you have the equivalent of the read (or write) to/from the zpool, plus the processing for Samba, plus the network output. So you just took a small workload and made it MUCH, MUCH more complicated. Comparing dd to CIFS sharing is apples to oranges when you have an FSB.
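If you want to add just the network leg while keeping dd's simple workload, you can stream the test file through nc with no sharing protocol on top; a rough sketch (the address and path are placeholders):

# On the client, listen on a port and throw the data away:
nc -l 5001 > /dev/null
# On the NAS, push the file across the wire:
dd if=/mnt/tank/ddtest bs=1M | nc 192.168.1.50 5001

If that runs near wire speed but CIFS doesn't, the extra cost is in Samba; if it's just as slow, the FSB/network path itself is the limit.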

We know FSBs suck. Dozens of users have been victims of the FSB. That's why Intel abandoned the FSB design 7 years ago (yes, it's been THAT long), and that's why my hardware recommendations don't include such hardware (not to mention those boards generally won't support 8 GB+ of RAM, and if they do, they generally consume so damn much power that you are better off buying new than trying to reuse old).
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Who would have thought 7 years ago that Nehalem would still be usable for all kinds of workloads...
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I would have. I bought my i7-920 a few minutes after midnight on Newegg on the night they came out. The difference between that and my prior setup (which was no slouch at the time) was just impossible to explain. It was (and is) a very capable processor, and I know several people still using them in their primary desktops.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I would have. I bought my i7-920 a few minutes after midnight on Newegg on the night they came out. The difference between that and my prior setup (which was no slouch at the time) was just impossible to explain. It was (and is) a very capable processor, and I know several people still using them in their primary desktops.
I only replaced my i7 930 (I was a late adopter; it must've been a year before Sandy Bridge showed up) because I wanted more RAM and felt silly buying more DDR3 that would just end up lying around doing nothing. Well, that and the system instability (which I'd narrowed down to the motherboard/CPU/RAM combination).

So I thought to myself, "Hey, how much does a Xeon E5 cost?" Turns out the Xeon E5-1650 v3 isn't that much more expensive than the equivalent Core i7. It also turns out that 4 GB ECC DDR4 DIMMs are rather rare. :p

I guess my point is that the silly old FSB, and the design choices that resulted from it, really held Merom and Penryn back.
 

Apollo

Wizard
Joined
Jun 13, 2013
Messages
1,458
I would have. I bought my i7-920 a few minutes after midnight on Newegg on the night they came out. The difference between that and my prior setup (which was no slouch at the time) was just impossible to explain. It was (and is) a very capable processor, and I know several people still using them in their primary desktops.
I am still using mine as my desktop. Bought it in February 2009. I use it for all sorts of things. It will be retired at some point, but DDR4 systems do not yet seem impressive enough to make the jump.
 

TheShellshock67

Dabbler
Joined
Feb 22, 2013
Messages
25
This has been solved by new hardware.

It was indeed the FSB that bottlenecked everything.

That, and the fact that this server drew almost 700 watts under load, meant it had to go.

Thanks
 