Horrible CIFS performance

Status
Not open for further replies.

Lukas

Dabbler
Joined
Jul 8, 2013
Messages
33
I just did some basic benchmarking on my CIFS shares and got consistent 10MB/s write and 8MB/s read speeds.
Now, I know performance is a tricky topic, but performance this bad cannot be normal. Any ideas on what I could try to do about this?
My box is a standard HP Microserver N40L with 8GB of RAM and a RAIDZ of 4 x 2TB WD Green/Red drives.
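
(For reference, this kind of CIFS benchmark can be reproduced from a Mac client by writing and then reading a large file against the mounted share with dd; the mount point below is an assumption:

dd if=/dev/zero of=/Volumes/daten/testfile bs=1m count=2048   # sequential write test; mount point is an example
dd if=/Volumes/daten/testfile of=/dev/null bs=1m              # sequential read test

Reading the file back right after writing it can be served from the client's cache, so unmounting and remounting the share between the two runs gives more honest read numbers.)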
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
Are the Greens and Reds mixed? I'm confused.. What is your link speed? Seems like you're on 10/100..
 

Tomas Liumparas

Dabbler
Joined
Jan 11, 2014
Messages
32
Are you on Windows?
I've started using Teracopy for transfers. I was usually getting 12MB/s with the Windows Explorer copy system (when transferring a huge amount of files); with Teracopy, however, I was able to get ~50MB/s (for a huge bunch of small files) and up to 90MB/s for single-file transfers. Was quite impressed :)
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
I"ve used Teracopy.. Usually slower for me depending on what kind of computer I'm on.. With that kind of hardware I'm thinking the drives are mixed.. Have you WDIdled the drives? Not doing so can crush performance..
 

Tomas Liumparas

Dabbler
Joined
Jan 11, 2014
Messages
32
BTW, I've recently installed 2 x WD Greens at work in a D-Link consumer NAS box and two more WD Greens in workstation computers. The WDIDLE timer was set to 22 minutes, so maybe WD tossed out the idea of parking the drive heads every few seconds? I've read a lot of posts about it here.
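
(For reference, on a Linux machine the current idle timer on a WD drive can be read or disabled with the open-source idle3-tools, a common alternative to WD's DOS-based WDIDLE3 utility; the device path is an example:

idle3ctl -g /dev/sda   # print the current idle3 (head-parking) timer; device path is an example
idle3ctl -d /dev/sda   # disable head parking; only takes effect after a full power cycle

The SMART attribute Load_Cycle_Count, visible in smartctl -a output, shows whether aggressive parking has already racked up cycles.)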
 

Lukas

Dabbler
Joined
Jul 8, 2013
Messages
33
Are the Greens and Reds mixed? I'm confused.. What is your link speed? Seems like you're on 10/100..

Yes, sorry for the confusion. I have two Reds and two Greens. My network is Gigabit all the way through ;)
 

Lukas

Dabbler
Joined
Jul 8, 2013
Messages
33
Are you on Windows?
I've started using Teracopy for transfers. I was usually getting 12MB/s with the Windows Explorer copy system (when transferring a huge amount of files); with Teracopy, however, I was able to get ~50MB/s (for a huge bunch of small files) and up to 90MB/s for single-file transfers. Was quite impressed :)


I did those benchmarks on my MacBook, but my main problem is streaming to my TV, where I can't watch movies without terrible stutter.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
My guess is that you either have a bad cable or something in your network is slowing it down to 100Mb/s.

Try connecting your N40L directly to your MacBook and test it again. I have a similarly equipped N40L, though I run a pair of 2TB ZFS mirrors. I normally get about 45MB/s transferring files via CIFS.
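
(A direct-connect test like this is easy to drive with iperf, which ships with FreeNAS; the server address below is an example:

iperf -s                       # on the FreeNAS box
iperf -c 192.168.1.100 -t 10   # on the MacBook, pointed at the server's IP

Anything well under ~900 Mbits/sec on a direct Gigabit link points at the NIC, cable, or driver rather than at CIFS itself.)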
 

Lukas

Dabbler
Joined
Jul 8, 2013
Messages
33
My guess is that you either have a bad cable or something in your network is slowing it down to 100Mb/s.

Try connecting your N40L directly to your MacBook and test it again. I have a similarly equipped N40L, though I run a pair of 2TB ZFS mirrors. I normally get about 45MB/s transferring files via CIFS.


I really don't think so, since my iperf testing shows perfectly fine bandwidth.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
How full is the pool?

Have you run SMART tests on your disks?

Have you run "zpool status -v" recently? Were there any problems?
 

Yatti420

Wizard
Joined
Aug 12, 2012
Messages
1,437
Did you run WDIDLE on the drives, both Reds and Greens..
 

Lukas

Dabbler
Joined
Jul 8, 2013
Messages
33
The drives and the pool appear to be fine. I did some basic testing using dd and easily got read/write speeds of around 100MB/s.
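
(A typical local dd test of that sort looks like the following; paths are examples. Note that if compression is enabled on the dataset, a stream of zeros compresses almost perfectly and inflates the result, so treat /dev/zero numbers as an upper bound:

dd if=/dev/zero of=/mnt/daten/testfile bs=1M count=10000   # sequential write, ~10GB; path is an example
dd if=/mnt/daten/testfile of=/dev/null bs=1M               # sequential read

Using a test file at least twice the size of RAM keeps ARC caching from flattering the read result.)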
 

Lukas

Dabbler
Joined
Jul 8, 2013
Messages
33
How full is the pool?

Have you run SMART tests on your disks?

Have you run "zpool status -v" recently? Were there any problems?

The pool is about 2/3 full, the SMART tests all run without errors, and zpool status -v doesn't show anything unusual either.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
First, post your iperf stats.

Second, post the output of zpool get all.
 

Lukas

Dabbler
Joined
Jul 8, 2013
Messages
33
First, post your iperf stats.

[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec  112 MBytes   93.4 Mbits/sec

Second, post the output of zpool get all.

NAME   PROPERTY                       VALUE                  SOURCE
daten  size                           7.27T                  -
daten  capacity                       65%                    -
daten  altroot                        /mnt                   local
daten  health                         ONLINE                 -
daten  guid                           5458048850334553511    default
daten  version                        -                      default
daten  bootfs                         -                      default
daten  delegation                     on                     default
daten  autoreplace                    off                    default
daten  cachefile                      /data/zfs/zpool.cache  local
daten  failmode                       wait                   default
daten  listsnapshots                  off                    default
daten  autoexpand                     on                     local
daten  dedupditto                     0                      default
daten  dedupratio                     1.00x                  -
daten  free                           2.54T                  -
daten  allocated                      4.73T                  -
daten  readonly                       off                    -
daten  comment                        -                      default
daten  expandsize                     0                      -
daten  freeing                        0                      default
daten  feature@async_destroy          enabled                local
daten  feature@empty_bpobj            active                 local
daten  feature@lz4_compress           enabled                local
daten  feature@multi_vdev_crash_dump  enabled                local
daten  feature@spacemap_histogram     active                 local
daten  feature@enabled_txg            active                 local
daten  feature@hole_birth             active                 local
daten  feature@extensible_dataset     enabled                local
daten  feature@bookmarks              enabled                local
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That iperf result IS 100Mb.. So no, you aren't using Gigabit. You have a 100Mb link somewhere. Time to go find it!
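
(On the FreeNAS side, the negotiated speed shows up on the interface's media line; the interface name here is an example and depends on the NIC driver:

ifconfig re0 | grep media
# a healthy Gigabit link reports: media: Ethernet autoselect (1000baseT <full-duplex>)

Checking every hop, including the switch port LEDs and the client end, usually flushes out the one device stuck at 100baseTX.)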
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
And I screwed up.. I meant zfs get all, not zpool get all.
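
(Since zfs get all produces pages of output, the handful of properties most relevant to CIFS throughput can also be pulled individually; the dataset name matches the pool shown above:

zfs get compression,atime,sync,recordsize daten

sync=always in particular can crush write speed on a pool without a separate log device.)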
 

Lukas

Dabbler
Joined
Jul 8, 2013
Messages
33
That iperf result IS 100Mb.. So no, you aren't using Gigabit. You have a 100Mb link somewhere. Time to go find it!

Ah, yeah. Sorry, my bad. I accidentally ran the last one over WiFi. This is what it looks like over Ethernet:
[ ID] Interval       Transfer       Bandwidth
[156]  0.0- 1.0 sec   52432 KBytes  429523 Kbits/sec
[156]  1.0- 2.0 sec   48728 KBytes  399180 Kbits/sec
[156]  2.0- 3.0 sec   44840 KBytes  367329 Kbits/sec
[156]  3.0- 4.0 sec   49088 KBytes  402129 Kbits/sec
[156]  4.0- 5.0 sec   44120 KBytes  361431 Kbits/sec
[156]  5.0- 6.0 sec   45608 KBytes  373621 Kbits/sec
[156]  6.0- 7.0 sec   48104 KBytes  394068 Kbits/sec
[156]  7.0- 8.0 sec   44152 KBytes  361693 Kbits/sec
[156]  8.0- 9.0 sec   49320 KBytes  404029 Kbits/sec
[156]  9.0-10.0 sec   43984 KBytes  360317 Kbits/sec
[156]  0.0-10.0 sec  470384 KBytes         Kbits/sec
Done.
So, as you can see, pure TCP performance is not the problem: roughly 400 Mbits/sec works out to about 50MB/s, way above the 10MB/s I'm seeing over CIFS. The disks are performing fine as well. Gotta be something else.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
That is horrible over Gigabit. Anything below about 850Mbit is "bad", and anything around 650Mbit is typically from Realtek's crappy NICs. But yours is even worse than that. I think you'd better start talking about your network hardware, because something is totally f*cked with your network topology. Either you have crappy NICs (Intel makes the best performers), or you have otherwise done something wrong.

To be brutally honest, don't necessarily expect a response from me if you don't have Intel NICs. They are proven time and time again to work the best and perform at nearly 1Gb every time, without fail.
 