I've searched. I've tried. I need help. CIFS share is SLOW and something will stop

Status
Not open for further replies.

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
Hi Everyone,

A bit more input from my side. I did try out the following (changed) settings in smb.conf, all without any major change in performance:
SO_SNDBUF=8192 SO_RCVBUF=8192
IPTOS_LOWDELAY
TCP_NODELAY
read_raw=no write_raw=no
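For anyone trying the same thing, those settings would go in the [global] section of smb.conf roughly like this (a sketch; combining all four socket options on one line is my assumption, not necessarily how Mats entered them):

```
[global]
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_SNDBUF=8192 SO_RCVBUF=8192
    read raw = no
    write raw = no
```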

I also tried varying the block size in the dd command against the SMB disk (file size about 1.3 GB), all without any major change:
bs=512, count=2560k
bs=2k, count=640k
bs=8k, count=160k
bs=32k, count=40k
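The sweep above can be scripted; here's a minimal Python sketch that times writes at each block size and reports MB/s. The mount path and the scaled-down file size are placeholders, not the values from the actual test:

```python
import os
import time

MOUNT = "/mnt/smb"             # placeholder: wherever the SMB share is mounted
FILE_SIZE = 64 * 1024 * 1024   # scaled down from the ~1.3 GB used in the tests

def write_throughput(path, block_size, total_size):
    """Write total_size bytes in block_size chunks and return MB/s."""
    block = b"\0" * block_size
    count = total_size // block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # push the data out, like dd's conv=fsync
    elapsed = time.perf_counter() - start
    os.remove(path)
    return (block_size * count) / elapsed / 1e6

if __name__ == "__main__" and os.path.isdir(MOUNT):
    for bs in (512, 2 * 1024, 8 * 1024, 32 * 1024):
        mbps = write_throughput(os.path.join(MOUNT, "ddtest.bin"), bs, FILE_SIZE)
        print(f"bs={bs}: {mbps:.0f} MB/s")
```

On a real share the file size should go back up toward the 1.3 GB used here, so the measurement isn't dominated by caching.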

Not what I expected. Measuring the block size effect on raw performance (dd on the FreeNAS server directly to/from disk, 1.3 GB file, read performance shown):
  • 8k - 64k: about 1.4 GB/s
  • 4k: 1.1 GB/s
  • 2k: 850 MB/s
  • 1k: 570 MB/s
  • 512: 349 MB/s
The RAID array has a 256 GB SSD cache! Highest write performance is about 260 MB/s.

The only clear effect I could find came from the choice of SMB client.
  • Debian client on the same ESXi server as FreeNAS (virtual network only), using dd from the mounted SMB share: 27 MB/s read performance
  • Same Debian client using smbclient get from FreeNAS: 80 MB/s
  • Win XP client on the same ESXi server as FreeNAS, using the Intel NAS Performance Toolkit: 27 MB/s
  • Debian client on another (old) ESXi server (physical 1 Gb network), using dd: 18 MB/s
  • OSX client on the physical 1 Gb network, using dd: 37 MB/s

Still trying to understand why :confused:

/Mats

Thanks for the input! I love forums where people are actually helpful to each other.
 

paleoN

Wizard
Joined
Apr 22, 2012
Messages
1,403
How'd I do?
You were going strong until the end. :p Usually 196 Reallocated_Event_Count means the number of attempted reallocation events, failed attempts & successful attempts.
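For anyone wanting to read that attribute on their own disk, `smartctl -A` prints it; below is a small Python sketch that pulls the raw value out of that output. The column layout is assumed from typical smartctl formatting, and the sample line is illustrative, not from a real drive:

```python
def smart_raw_value(smartctl_output, attr_id):
    """Return the RAW_VALUE column for a SMART attribute ID from
    `smartctl -A` output, or None if the attribute is absent."""
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields and fields[0] == str(attr_id):
            return int(fields[-1])   # RAW_VALUE is the last column
    return None

# Illustrative line in the usual `smartctl -A` column layout:
sample = """ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       3"""

print(smart_raw_value(sample, 196))  # -> 3
```

Note that some drives report composite raw values (e.g. hex or slash-separated fields), so the int() parse is a simplification.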
 

hidperf

Dabbler
Joined
Jun 12, 2012
Messages
34
You were going strong until the end. :p Usually 196 Reallocated_Event_Count means the number of attempted reallocation events, failed attempts & successful attempts.

Damn. I should have researched a few more places. I trusted the Fedora Wiki. First page I looked at.

BTW, thanks for your patience and all the help. And the lessons.
 

matram

Dabbler
Joined
Aug 22, 2012
Messages
18
More input

Hi Guys,

A bit more input from my side. To explain the speed differences between the various clients, I tried the following.

I used iperf to measure TCP bandwidth to FreeNAS:
  • over the virtual network on the same ESXi server: 138 MB/s
  • over the 1 Gb network from another server: 107 MB/s
That's close to the theoretical limit (~85%), so raw network bandwidth is not the problem.
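That 85% figure is just arithmetic against the raw link rate (1 Gbit/s carries 125 MB/s before protocol overhead); a quick sanity check:

```python
def utilization(measured_mb_s, link_gbit_s=1.0):
    """Fraction of the raw link rate achieved; 1 Gbit/s = 125 MB/s."""
    return measured_mb_s / (link_gbit_s * 1000 / 8)

print(f"{utilization(107):.0%}")  # -> 86%, i.e. the ~85% quoted above
```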

Measuring network and CPU load with esxtop on the ESXi server running FreeNAS (using dd, ~1.3 GB file, 8k blocks):
  • Write to FreeNAS: 70 - 80 MB/s, 48% dropped TCP packets, 75% CPU load
  • Read from FreeNAS: 24 - 25 MB/s, 0% dropped packets, 75% CPU load
This indicates that writes to FreeNAS are starting to hit a network limit (dropped packets), but reads are limited by some other mechanism.

Guessing the bottleneck might be CPU, I tried 1, 2 and 4 CPU cores for the FreeNAS VM.
CPU load went up from 75% to 140% to 200%, but there was still no effect on reads, which stayed pegged at around 25 MB/s to a Debian test client.

Still scratching my head; starting to think the big issue may be the network on the client side?

/Mats
 