Performance issue

yousuf (OP) · Joined Dec 23, 2014 · Messages: 17
I asked this question recently in this same forum, and I am asking it again because this time my tests are a bit different, so please bear with me starting over.
Here are my FreeNAS system's hardware details:

HDD: 2x 1TB SATA (one black, one green), 64MB cache each
RAM: 32GB ECC
SSD: 1 drive for I/O testing
NIC: Intel quad-port 1Gb
CPU: 2x 3.2GHz processors
Boot: 16GB Kingston USB 2.0 stick for FreeNAS
System: Dell 490 Workstation


The second system is Linux-based and has the same hardware specs, except that it has 12GB of ECC RAM.

The actual problem is that I cannot work out why the network throughput is not reaching the 1Gb LAN limit.

I have seen FreeNAS videos where they come close to 10G with the help of L2ARC and cache, on almost the same hardware specs, although I accept they must have been using SAS instead of SATA.
Here you can see the results.


But given the settings above, I believe I should at least saturate the 1Gb Ethernet link, which is connected via a crossover cable. Here are some test results.


On my FreeNAS server I have two ZFS pools:

one on the mirrored SATA drives mentioned above, and

one on the SSD drive, for SATA vs. SSD Ethernet throughput testing.


Here is my SATA write test via the dd command:

[root@freenas] /mnt/sata/sata# dd if=/dev/zero of=largefile1 bs=512K count=2096
2096+0 records in
2096+0 records out
1098907648 bytes transferred in 1.002915 secs (1095713485 bytes/sec)
[root@freenas] /mnt/sata/sata# dd if=/dev/zero of=largefile2 bs=512K count=4096
4096+0 records in
4096+0 records out
2147483648 bytes transferred in 2.865263 secs (749489195 bytes/sec)
[root@freenas] /mnt/sata/sata# dd if=/dev/zero of=largefile3 bs=512K count=8096
8096+0 records in
8096+0 records out
4244635648 bytes transferred in 50.883219 secs (83419165 bytes/sec)
[root@freenas] /mnt/sata/sata# dd if=/dev/zero of=largefile3 bs=512K count=16096
16096+0 records in
16096+0 records out
8438939648 bytes transferred in 91.452580 secs (92276671 bytes/sec)
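
One caveat I am aware of: if lz4 compression is enabled on the dataset (the FreeNAS default), /dev/zero data compresses almost perfectly, so these numbers may be optimistic. A variant with compression off would look something like this (the "bench" dataset name is made up):

# Scratch dataset with compression disabled, then the same zero-fill write:
zfs create -o compression=off sata/bench
dd if=/dev/zero of=/mnt/sata/bench/largefile bs=1M count=8192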


You can see I wrote an 8GB file in 91 seconds, roughly 90MB/s, which is fine.

The read test, however, is a bit confusing for me. On Linux we use hdparm for read tests; on FreeBSD I did not know what to use, so by googling I found the dd workaround below:


dd if=largefile6 of=/dev/zero bs=512k count=16096
4096+0 records in
4096+0 records out
2147483648 bytes transferred in 0.897553 secs (2392598239 bytes/sec)

Now this is insanely fast, and I do not know why. Anyway, sticking with the write speed of almost 90MB per second: when I write anything to the drives via NFS (note, as mentioned above, both systems are strong hardware-wise), it hovers around 30 to 49MB/s.
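
One guess I have about the "insanely fast" read: the file had just been written and fits easily in 32GB of RAM, so maybe ZFS served it straight from the ARC rather than from disk. If that is right, a fair read test would need a file larger than RAM, written to /dev/null, something like this (the file name is made up):

# Read a file bigger than RAM (>32GB here) so it cannot come from the cache:
dd if=/mnt/sata/sata/largefile_40g of=/dev/null bs=1M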

My client is a Linux-based machine, and here are my hdparm stats:
root@bull:/ssd# hdparm -t /dev/sda
/dev/sda:
Timing buffered disk reads: 366 MB in 3.00 seconds = 121.84 MB/sec
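
For completeness, hdparm can also report cached reads alongside buffered reads, something like:

# -T = cached (RAM) reads, -t = buffered reads from the device:
hdparm -tT /dev/sda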

When I copy a huge amount of data, say 8GB or so, it looks like this (here is my rsync example):

root@bull:/tmp# rsync --progress largefile2 /sata/
largefile2
4244635648 100% 44.88MB/s 0:01:30 (xfer#1, to-check=0/1)
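
To rule out rsync's own checksumming and protocol overhead, I could also try a plain sequential copy over the same NFS mount, something like:

# Straight copy of the same file over NFS, no rsync machinery:
dd if=/tmp/largefile2 of=/sata/largefile2.copy bs=1M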


You can see that disk I/O on both sides is above 100MB/s, yet I get only about 45MB/s on a 4GB file transfer.
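
Could NFS mount options play a role? On the Linux client the export is mounted something like this (the option values here are examples only, not my tuned settings):

# Linux NFS client mount; larger rsize/wsize and TCP can matter for throughput:
mount -t nfs -o tcp,hard,rsize=65536,wsize=65536 10.x.x.35:/mnt/sata/sata /sata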

Now, can you please tell me what I am doing wrong, or where the problem is in my case?

Thanks,
yousuf
 

willnx · Dabbler · Joined Aug 11, 2013 · Messages: 49
Have you done any iperf testing between the two systems? Could you post what you get for both systems (i.e. when each is run in server mode)?
It will be clear very quickly if network congestion is causing your performance to suffer.
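
Something like this on each box (a minimal sketch; substitute the real IPs):

# On the box acting as server:
iperf -s
# On the other box:
iperf -c <server-ip> -t 30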
 
yousuf (OP) · Joined Dec 23, 2014 · Messages: 17
Test when FreeNAS is the server
-------------------------------------------
[root@freenas] ~# iperf -s -f MB
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.06 MByte (default)
------------------------------------------------------------
[ 4] local 10.x.x.35 port 5001 connected with 10.x.x.17 port 43294
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 984 MBytes 98.1 MBytes/sec
[ 5] local 10.x.x.35 port 5001 connected with 10.x.x.17 port 43300
[ 5] 0.0-10.0 sec 992 MBytes 98.9 MBytes/sec

root@bull:~# iperf -c 10.x.x.35 -f MB
------------------------------------------------------------
Client connecting to 10.x.x.35, TCP port 5001
TCP window size: 0.02 MByte (default)
------------------------------------------------------------
[ 3] local 10.x.x.17 port 43300 connected with 10.x.x.35 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 992 MBytes 99.2 MBytes/sec


Test when FreeNAS is the client
root@bull:~# iperf -s -f MB
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 0.08 MByte (default)
------------------------------------------------------------
[ 4] local 10.x.x.17 port 5001 connected with 10.x.x.35 port 38744
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 1123 MBytes 112 MBytes/sec



[root@freenas] ~# iperf -c 10.x.x.17 -f MB
------------------------------------------------------------
Client connecting to 10.x.x.17, TCP port 5001
TCP window size: 0.03 MByte (default)
------------------------------------------------------------
[ 3] local 10.x.x.35 port 38744 connected with 10.x.x.17 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1123 MBytes 112 MBytes/sec
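
I could also try parallel streams and a larger TCP window, to see whether a single stream is the limit, something like:

# Four parallel streams with a 256K window (standard iperf options):
iperf -c 10.x.x.17 -P 4 -w 256K -f MB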




As you can see, my disk I/O and Ethernet both seem fine, yet the transfer speed is half of that. One more thing I noticed: when I copy a huge file to or from the FreeNAS server, it sometimes jumps up to 70MB/s and then, after a few seconds, the transfer rate drops to 30MB/s.

So the question is not just the slow transfer. My other question is why the bandwidth is not constant: why the fluctuation between 70MB/s and 30MB/s, which is a very wide range?
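
If it helps, I can watch the pool while a copy runs, to see whether the dips line up with ZFS flushing transaction groups, something like:

# Per-vdev throughput on the FreeNAS box, sampled every second:
zpool iostat -v 1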

Thanks,
MYK
 

cyberjock · Inactive Account · Joined Mar 25, 2012 · Messages: 19,525
I didn't answer and I don't feel you should have created a thread (but I didn't delete it either) because the answer was given to you in the last thread...
Anyways, mirror is good, but your pool size in terms of the number of spindles is very small. Further, with a NAS, your overall performance is the result of layering a bunch of complex subsystems on top of each other. Not only is the NAS limited by the weakest of these (guessing: the actual disks, in your case), but those weaknesses also tend to get amplified through the other layers.

That's not saying that this is what your problem is, but I'm saying that the weakest bit of your NAS is probably the disk itself.

You shouldn't expect a different answer because you made a new thread. :P
 

jgreco · Resident Grinch · Joined May 29, 2011 · Messages: 18,680
Well, the fact that it seems noticeably slower in one direction (112MBytes/sec vs. 98MBytes/sec) suggests that something is up.

Quite frankly, a Dell 490 workstation is a machine that's nearly ten years old, and suffers from a whole bunch of possible issues.

You haven't told us what the actual CPU in there is; if we charitably assume it's a Xeon 5160, then it is a 3GHz part with two cores and 4MB cache. That isn't really great: a Geekbench score of about 1500 per core, or 2700 for two; even if we push that out to two sockets, the score is about 5500.

A single E3-1230 v3 core scores around 3300, more than twice as fast, and there are four of them, for a total of around 13000.

The Xeon 5100 series uses a front-side bus architecture, which we understand today to be a poor design: it creates latency, contention, and various other issues that aren't too noticeable in a workstation but probably interfere with making a zippy server.

DDR2-533 is way slow as a memory technology.

SATA 3Gbps for the disk connectivity probably doesn't hurt you for conventional hard disks, but for a SSD you are losing speed there too.

Basically you have a weak system. Remember in the previous thread how I mentioned that

with a NAS, your overall performance is the result of layering a bunch of complex subsystems on top of each other. Not only is the NAS limited by the weakest of these (guessing: the actual disks, in your case), but those weaknesses also tend to get amplified through the other layers.

Well, you have a bunch of weak subsystems. You pile them on top of each other, and the first one's weakness is amplified by the second's, which is amplified by the third's, and so on. Individually, none of the weaknesses is necessarily catastrophic, but as a whole ... it sucks.

My suggestion? Come back with hardware that isn't ten years old.
 

jgreco · Resident Grinch · Joined May 29, 2011 · Messages: 18,680
I didn't answer and I don't feel you should have created a thread (but I didn't delete it either) because the answer was given to you in the last thread...

You shouldn't expect a different answer because you made a new thread. :p

Well, I gave him a different answer anyway, mostly because I too had some 2006-era hardware that I really wanted to work well with ZFS, and it worked about as poorly as his.
 
yousuf (OP) · Joined Dec 23, 2014 · Messages: 17
Thanks for sharing, cyberjock.
Actually, I am a bit non-technical on the hardware side. By spindles I thought you meant RPMs; all my drives are 7200 RPM, and the drive tests with dd showed good results.
So maybe I understood it wrong, and that is why I created the new thread.

Can you please explain what "your pool size in terms of the number of spindles is very small" means? I cannot decipher the formula/hidden meaning inside it :)

Or in other words:

My pool is 1TB on 7200 RPM SATA (both drives). Given the answer above, what pool size should I create, keeping spindles in mind?
 
yousuf (OP) · Joined Dec 23, 2014 · Messages: 17
jgreco, thanks, I think you answered my question. :) Now I understand that the machine is the problem. You guys are great.
 

jgreco · Resident Grinch · Joined May 29, 2011 · Messages: 18,680
Well, we don't *know* that for absolute certain, but the historical experience is that old gear performs poorly. A NAS is like a relay race: if you have one poor runner you might still come out okay, because the others can make up for it, but if everyone is slow, then it's a loss for sure.

Individually, your subsystems are probably not a catastrophic failure, which is why each tests sort-of okay, but they are running flat out to deliver what they give you. As a whole, then, with multiple things competing for resources, it is a bad scene.
 

willnx · Dabbler · Joined Aug 11, 2013 · Messages: 49
Can you please explain what "your pool size in terms of the number of spindles is very small" means? I cannot decipher the formula/hidden meaning inside it :)

Overall disk count, i.e. the number of HDDs you have in a zpool.
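
For illustration only (device names hypothetical, and not a recommendation for your pool): adding a second mirror vdev is how the spindle count grows, and I/O then stripes across both mirrors:

# Add another pair of disks as a second mirror vdev, then check the layout:
zpool add sata mirror /dev/ada2 /dev/ada3
zpool status sata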
 

jgreco · Resident Grinch · Joined May 29, 2011 · Messages: 18,680
Overall disk count, i.e. the number of HDDs you have in a zpool.

Well, I was talking about these.

[image: Iron_Spindles_Satin_Black11.jpg — iron spindles, satin black]


Y'know, spindles. Haha. Ok I've been up too long. Time to head out.
 
yousuf (OP) · Joined Dec 23, 2014 · Messages: 17
Thanks guys, I changed the hardware and it worked.
Actually, I had been excited about the old hardware because it had 2x 3.2GHz Xeon processors and 32GB RAM. Now, after a lot of research and your help, I realize that new technology is much better than what we had in 2006.
Thanks again.
 