Is it possible to get Gb network transfer with my setup?


madmax

Explorer
Joined
Aug 31, 2012
Messages
64
I'm just trying to optimize my server so when I transfer big files over the network I can get the most out of it.

I ran iperf in server mode and tested from both the pfSense router and a workstation to the server, and both achieved 112 MBytes/sec in each direction. But when I transfer a file from the server to my workstation, the file transfer is in the 50 MBytes/sec to 70 MBytes/sec range.
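For reference, the iperf runs were something along these lines (server on the FreeNAS box, client on the workstation; the IP is just an example, and -f M reports MBytes/sec):

Code:
# on the FreeNAS server:
iperf -s
# from the workstation (or pfSense):
iperf -c 192.168.1.10 -f M -t 30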

I'm using a PCI-E SSD on my workstation with a 3770K and an Intel Gb NIC, so I don't think that's the problem.

That leaves the RAID on the FreeNAS server. I have 4 WD Black 2 TB 7200 RPM drives in a RAIDZ2 pool, with the system running off a SATA3 SSD. The CPU is an Intel i3-2120T at 2.6 GHz. The drives are on SATA2 ports, though I'm hoping that even in RAID they wouldn't come close to saturating the bus. Would a RAID card be better in my situation? I'm using an Intel Gb NIC here as well. WD says the drives have a maximum sustained data rate of 138 MB/s, so is this my problem? Would I get much better results in RAIDZ1, or by upgrading to ZFS v28 and using RAIDZ3? I know the parity might slow things down, but I'm hoping for evidence to the contrary. How much closer would I get to Gb if I had them all in a striped pool instead? Do I just have to look at different HDs to achieve Gb?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
How much RAM do you have?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
How fast is your workstation? It's possible that the data on your workstation can't be read or written faster than 50-70MB/sec.

When I want to test from workstation to server I typically make a ramdrive or use a fast SSD to make sure there are no speed limits from the hard disk.

If you use Windows, there's a free program called ImDisk that you can use to make a ramdrive temporarily.
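From an elevated command prompt it's something like this (the size, drive letter, and format options are just examples; check the ImDisk docs):

Code:
imdisk -a -t vm -s 4G -m R: -p "/fs:ntfs /q /y"

That attaches a 4 GB memory-backed disk as R: and quick-formats it NTFS.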
 

madmax

Explorer
Joined
Aug 31, 2012
Messages
64
Thanks for the program suggestion, that's a pretty cool program. I made a virtual disk in RAM, loaded a big file onto it, and watched the transfer; it did pretty much the same thing, 70 MB/s, which isn't bad, but sometimes I transfer 30 GB or larger files and folders at a time.

I never really thought my workstation was the problem; I'd be shocked if it were. It's running a 4.5 GHz Ivy Bridge 3770K with 16 GB of 2133 MHz RAM and an OCZ RevoDrive 3 PCI-E drive, so I don't think there's anything there that would slow it down.

Are you thinking that the RAIDZ or the drives aren't the problem?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Can you try with just a pair of the drives mirrored? I've found that RAIDZ2 is particularly piggy in a 4-drive setup, but then again I seem to have a pile of problems that others don't.
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
What protocol are you using to transfer the files?
The system drive makes no difference for FreeNAS so I guess we could focus on the RAIDZ2.

Maybe try dd'ing a large file on your FreeNAS system to test the max write/read of the array; it should be about twice the speed of a single 7200RPM drive, and I doubt you're CPU limited at all in this case.
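With a 4-drive RAIDZ2 only 2 of the 4 drives hold data, so in theory streaming speed is roughly 2 x ~130MB/s = ~260MB/s. The tests would be something like this, with your pool name in place of "tank" (the file needs to be much bigger than your RAM so caching doesn't skew the read):

Code:
# write test:
dd if=/dev/zero of=/mnt/tank/tmp.dat bs=2048k count=10k
# read the same file back:
dd if=/mnt/tank/tmp.dat of=/dev/null bs=2048k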

Then try the file transfer again, but run top during it and see how much CPU is being used. Your CPU is a quad core with 8 threads, so if you're using Samba anything over 20% or so means you may be CPU limited. If you're using any other protocol I think anything over 90% or so means you may be CPU limited.
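To watch it per-thread (Samba is basically one smbd process per client, so look for one smbd pinned near 100% of a single core), something like:

Code:
top -SH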
Not an expert at this, but I hope that helps.

If you're using Samba you may also want to try enabling NFS, mounting the NFS share on your pfSense router, and then trying a file transfer into pfSense's RAM (so you are not limited by pfSense's disk). Or you could just boot an Ubuntu live CD on your workstation (since it already has a fast SSD) and try the NFS file transfer there.
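On pfSense (it's FreeBSD underneath) the mount and a disk-free read test would look something like this; the IP, share path, and file name are just examples:

Code:
mkdir /mnt/test
mount_nfs 192.168.1.10:/mnt/Media /mnt/test
# read straight to /dev/null so pfSense's disk isn't involved:
dd if=/mnt/test/somebigfile of=/dev/null bs=2048k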

Also, did you create your RAIDZ2 with the force-4k checkbox? If the output of the below command is 9 your array isn't 4k aligned; if it is 12, it is.
Code:
zdb -C data | grep ashift
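(ashift is the base-2 log of the sector size ZFS uses, so 2^9 = 512 bytes and 2^12 = 4096 bytes, i.e. 4k.)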


Your hardware is so good I think you should be able to saturate a Gigabit connection.
 

madmax

Explorer
Joined
Aug 31, 2012
Messages
64
Can you try with just a pair of the drives mirrored? I've found that RAIDZ2 is particularly piggy in a 4-drive setup, but then again I seem to have a pile of problems that others don't.

I don't have extra hard drives, and there's data already on the pool, so I'd have to find other drives to back it up first. Going to wait on that option b/c I think I will be upgrading to the 8.3 beta, but not yet. I'm thinking it's the parity checking that puts on a heavy overhead, but if you say it's just the 4-drive setup, maybe I can go to six drives in the future; would that help?

What protocol are you using to transfer the files?

CIFS/Samba. Windows mapped network drive from the server to the workstation desktop.

Your CPU is a quad core with 8 threads, so if you're using Samba anything over 20% or so means you may be CPU limited

My FreeNAS server is running an Intel i3-2120T at 2.6 GHz, which is a dual core with hyperthreading, so it shows up as four threads; the workstation is the quad core with hyperthreading (8 threads).

Also, did you create your RAIDZ2 with the force-4k checkbox? If the output of the below command is 9 your array isn't 4k aligned; if it is 12, it is.
Code:
zdb -C data | grep ashift

I tried the command but I get

[root@freenas] ~# zdb -C data | grep ashift
zdb: can't open data: No such file or directory

I don't think I used 4k; I didn't know what it really was, but now I have a clue. It seems like it's coming to all hard drives, but I don't think my hard drive supports it... not sure.

I'm running WD2002FAEX drives. I tried researching the drive for 4k advanced format and haven't been able to find a clear answer; it seems not. Does it matter? Is it implemented at the software level, or can you initialize it with a format?

Is there another command to see if I have it set?

So my results of the dd test are as follows


[root@freenas] ~# dd of=/dev/zero if=/mnt/Media/ddtestfile bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 311.595552 secs (67303657 bytes/sec)
[root@freenas] ~# dd if=/dev/zero of=/mnt/Media/ddtestfile bs=2048k count=10000
10000+0 records in
10000+0 records out
20971520000 bytes transferred in 123.791950 secs (169409400 bytes/sec)

67 MB/sec and 170 MB/sec... does that seem right?
WD says 133 MB/sec sustained data transfer?
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
133MB/s sustained data transfer probably means read, so expect slower on write.
What's weird is you're getting slower reads than writes; you might have done the dd commands wrong or just pasted them oddly in the forum. I believe your write should be first, and your read after. And I think your read should output to /dev/null.
Let's get that straightened out before we start to figure out what the limiting factor is.

Sorry, the command should be the below for FreeNAS
Code:
zdb -U /data/zfs/zpool.cache | grep ashift

Also, you can't do anything to fix the vdev's alignment; if you want it 4k aligned you gotta rebuild it.
From looking around the net the consensus is the drives you have aren't advanced format, but I can't confirm this.
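If you do rebuild and want it 4k, the usual CLI trick (roughly what the force-4k checkbox does for you) is to create the vdev through gnop devices with a 4k sector size; the pool and disk names below are just examples:

Code:
# create 4k-sector passthrough devices:
gnop create -S 4096 ada0 ada1 ada2 ada3
# build the pool on the .nop devices:
zpool create tank raidz2 ada0.nop ada1.nop ada2.nop ada3.nop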
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
For the zdb -C command it's: zdb -C poolname | grep ashift. It's faster to check against the cache file, the 2nd command, and they have the same info.


Run the actual dd commands from the [thread=981]performance sticky[/thread]. Then you can directly compare against other people's systems.


70 MB/s, which isn't bad, but sometimes I transfer 30 GB or larger files and folders at a time.
Try some FTP transfers. My CIFS speeds are nearly the same as yours, but I get near line-speed FTP transfers.

Have you done any Samba tuning? If not, it should help some.
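No guarantees on your hardware, but the usual starting points are auxiliary parameters along these lines; the values are common suggestions, not gospel, so add them one at a time and re-benchmark:

Code:
socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536
read raw = yes
write raw = yes
aio read size = 16384
aio write size = 16384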

CIFS/Samba. Windows mapped network drive from the server to the workstation desktop.
What's the client OS?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I was able to run the command

# zdb -C zpoolnamehere | grep ashift

and I got 2 lines of "12". Since I have 2 vdevs that makes sense.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If your write rates are really only about 67MB/sec, then you definitely won't be able to write to the file share faster than that. Something is odd about your configuration. My guess is it's related to your hard disks or hard disk controller. Those speeds are quite low for your disks (IMO).

I'd definitely check to see if AHCI is enabled in the BIOS. SATA 2 speeds should be fine for your disks since the disks will be your bottleneck.

Nobody answered your question about v28 ZFS though. You "may" see a speed increase with ZFS v28; I would definitely not bank on that though. Your write speeds should be MUCH faster than what you are getting for 4 disks. What motherboard are you using? Perhaps the controller just sucks?

If none of those things I mentioned above help, I'd say try changing to a different ZPOOL type and see if that matters. Your CPU should be able to kick butt and take names like nobody's business though. Your issue doesn't appear to be at all related to the network card, network cables, etc. It's definitely bottlenecked at the server somehow. The trick will be to figure out what "it" is and cheat it, fool it, or something. I won't be able to help much else, but I'll definitely be watching the thread.
 

madmax

Explorer
Joined
Aug 31, 2012
Messages
64
For the zdb -C command it's: zdb -C poolname | grep ashift. It's faster to check against the cache file, the 2nd command, and they have the same info.

Alright, I got a 9, which I figured b/c I was pretty sure I didn't check the 4k box. But since my drive doesn't support advanced format, it doesn't matter, correct?


Run the actual dd commands from the [thread=981]performance sticky[/thread]. Then you can directly compare against other peoples systems.

Okay so I did the test according to the thread and compared.

dd if=/dev/zero of=/mnt/Media/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 638.131130 secs (168263508 bytes/sec)

So my write is 168.263508 MB/s and

dd if=/mnt/Media/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 1493.807070 secs (71879552 bytes/sec)

my read is 71.879552 MB/sec.

I also made sure the drives were not in standby (set to always on) and that advanced power management was set to maximum performance, then rebooted and retested; that was the result.

Try some FTP transfers. My CIFS speeds are nearly the same as yours, but I get near line-speed FTP transfers.

Getting a little less than over CIFS, about 55 MB/s.

Have you done any Samba tuning? If not, it should help some.

No. Any tested references, guides, or posts to look into?

What's the client OS?
Windows 8

I'd definitely check to see if AHCI is enabled in the BIOS.
Confirmed it is set

What motherboard are you using? Perhaps the controller just sucks?

Using a JetWay NF9A-Q67 with the Intel Q67 Express chipset; not sure who makes the controller b/c Jetway doesn't say much. I'm assuming it's part of the Intel chipset.

http://www.jetway.com.tw/jw/ipcboard_view.asp?proname=NF9A-Q67&productid=858

If none of those things I mentioned above help, I'd say try changing to a different ZPOOL type and see if that matters.

I tried testing the main pool, b/c all 4 drives are under one pool, /mnt/Media, and I also tried testing a dataset within the pool, /mnt/Media/Videos, and it's the same result.

Just a correction to what I said earlier: the E3-1220L V2 CPU is running in the pfSense box, not the FreeNAS. My FreeNAS is running an Intel i3-2120T at 2.6 GHz, 2 cores with hyperthreading. I don't think this changes the fact that I should still be able to reach speeds of 100 MB/sec on file transfers over CIFS.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
107374182400 bytes transferred in 638.131130 secs (168263508 bytes/sec)
Your write performance seems fine.

107374182400 bytes transferred in 1493.807070 secs (71879552 bytes/sec)
Your read performance is quite poor. You can try [thread=1226]disabling hyperthreading[/thread]. In truth I will be surprised if it helps.


Output of the following for all your drives:
Code:
smartctl -q noserial -a /dev/adaX



What tunables/sysctls do you have set?


My FreeNAS is running an Intel i3-2120T at 2.6 GHz, 2 cores with hyperthreading. I don't think this changes the fact that I should still be able to reach speeds of 100 MB/sec on file transfers over CIFS.
If anything it should be more likely to.
 

madmax

Explorer
Joined
Aug 31, 2012
Messages
64
Output of the following for all your drives:
Code:
smartctl -q noserial -a /dev/adaX
It's quite long to post for all the drives. Were you looking for errors? I checked and there are no errors. They're brand new drives, but DOA units and errors are always possible with hard drives for sure.

What tunables/sysctls do you have set?
I had autotune on. I'm not familiar with what it does, but it set up three different tunables and sysctls. I disabled them and tried the dd testing again and got the same result: poor read.

So I found some old Seagate 500 GB 7200.10 drives, and I had an open slot in my bay, so I hooked one up and got 60 MB/sec write and 50 MB/sec read. It wasn't good.

I decided it was either the controller on the mobo or some kind of software issue: either the RAIDZ2 overhead is too much to get better reads, or it's a driver issue. Another thing I was thinking was that the ZFS version is just not optimized well enough.

I put my data on the two Seagates, then detached the RAIDZ2 and did some testing on different configurations.

Single WD ZFS test:
Write:
dd if=/dev/zero of=/mnt/wdsingle/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 876.344627 secs (122525065 bytes/sec) 123 MB/sec
Read:
dd if=/mnt/wdsingle/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 757.506809 secs (141746821 bytes/sec) 142 MB/sec

Single UFS:
Write:
dd if=/dev/zero of=/mnt/ufs/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 811.904384 secs (132249788 bytes/sec) 132 MB/sec
Read:
dd if=/mnt/ufs/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 795.505474 secs (134976045 bytes/sec) 135 MB/sec

Two WD ZFS Raid 0:
Write:
dd if=/dev/zero of=/mnt/cache/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 406.474891 secs (264159447 bytes/sec) 264 MB/sec
Read:
dd if=/mnt/cache/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 395.653702 secs (271384248 bytes/sec) 271 MB/sec

Raid 1:
Write:
dd if=/dev/zero of=/mnt/mir/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 862.105900 secs (124548715 bytes/sec) 125 MB/sec
Read:
Messed this up, but it seems on par.

RaidZ:
Write:
dd if=/dev/zero of=/mnt/RaidZ/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 309.948297 secs (346426109 bytes/sec) 346 MB/sec
Read:
Ran out of time to finish.




RaidZ2 (these are new results not from initial test):
Write:
dd if=/dev/zero of=/mnt/RaidZ2/tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 642.520472 secs (167114025 bytes/sec) 167 MB/sec
Read:
dd if=/mnt/RaidZ2/tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 1626.323914 secs (66022630 bytes/sec) 66 MB/sec

So as you can see, I definitely can get over or close to what my hard drives should be performing at. I think I eliminated some things here by rebuilding the raid and testing the other raid configurations. The only other thing I could try is a fresh install of 8.2, either on the same SSD or on CF, but I don't think it will make a difference; I don't think there's anything defective or incompatible in the setup. If anything, I think drivers could be an issue, but I don't know how FreeBSD really works at that level, which is something I need to look into more.

The main point of doing this was to find out if it's a hardware problem or a controller problem, and I think I've pinned the performance down. I think it's a combination of my hard drives probably not being the best performers for RAIDZ2 (I would get better results with 15000 RPM drives, but that's not an option), the overhead you get with the protection of RAIDZ2, and the optimization of the ZFS version I'm using. My next step is to see how much of an improvement I get with RAIDZ2 on ZFS v28 in the 8.3 beta and go from there. I also need to search the site to find out if anyone else is using WD Blacks in RAIDZ2 and see what their performance is. The poor read does boggle my mind, but at the same time I don't know if it's typical in RAIDZ2 to have lower reads than writes b/c of the algorithm it uses.

If anything I'll just go to a two-mirror setup instead, or add another drive for RAIDZ.

What's your guys' opinion? Anything I said or did incorrectly? Thanks for everyone's help so far.
 

Joshua Parker Ruehlig

Hall of Famer
Joined
Dec 5, 2011
Messages
5,949
Read should be faster than write, even in RAIDZ2. It should be boosted a bit more by ARC cache as well.
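You can sanity-check the ARC from the CLI with something like the below; a tiny ARC or a pile of misses would at least tell you the cache isn't helping your reads:

Code:
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses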
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
It's quite long to post for all the drives. Were you looking for errors? I checked and there are no errors. They're brand new drives, but DOA units and errors are always possible with hard drives for sure.
Errors? What's an error? Actually run a -t long test against all the drives first, and then post the output I asked for.
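I.e., for each drive, something like:

Code:
smartctl -t long /dev/ada0
# wait for the test to finish (smartctl -a shows the progress), then:
smartctl -q noserial -a /dev/ada0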

What's your guys' opinion? Anything I said or did incorrectly? Thanks for everyone's help so far.
You don't understand how RAIDZ2 works. As Joshua said, reads are faster than writes. Your problem is quite odd. I suspect an underlying hardware issue is the cause.

Where FreeNAS is installed will not affect zpool performance. Also, I would force 4k formatting for the pool if you ever plan on upgrading the disks sometime in the future.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
This is funny and I'm not trying to steal this thread, but my writes appear faster than my reads when I use CIFS (100MB/sec write, 70MB/sec read). I chalked it up to the reads needing to calculate the checksum before putting data out on the network, and my drives are not speed demons either. But then I ran the dd test and, as you can see below, my speeds look good. I am running 8.0.4 so maybe an update is in the future, once 8.3.0-Stable hits the streets :cool:.

As for ZFS V28 possibly being faster... From what I've been reading I'm hearing it's slower than V15 at this time with respect to FreeNAS.

My Write Speed:
Note: dd was taking ~10% to 24% CPU time, average was close to 15%, took 8.39 minutes to complete.
Code:
dd if=/dev/zero of=/mnt/farm/tmp.000 bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 503.331613 secs (213326919 bytes/sec)


My Read Speed:
Note: dd was taking ~22% CPU time throughout the process, took 6.13 minutes to complete.
Code:
dd if=/mnt/farm/tmp.000 of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 368.046105 secs (291741119 bytes/sec)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
Maybe I missed it, but what are your system specs? RAM, how are you connecting the HDs, and are you running the 32-bit or 64-bit version of FreeNAS 8.2?
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
This is funny and I'm not trying to steal this thread, but my writes appear faster than my reads when I use CIFS (100MB/sec write, 70MB/sec read).
Same here, except my CIFS speeds are slower :( (77MB/sec write, 65MB/sec read).

As for ZFS V28 possibly being faster... From what I've been reading I'm hearing it's slower than V15 at this time with respect to FreeNAS.
I'm getting the same speeds using a mirror pool, V15 vs upgraded V28. I still have further testing to do, including testing a V15 pool back on 8.2. Hopefully I will get to it this weekend.
 