Poor Performance on 10 GbE and 5 disk striped SSD RAID


Tino Zidore (Dabbler, joined Nov 23, 2015):
Hi

I have built this system:

2x E5-2640 v3 Intel 8 Core Xeon 2.60GHz
Supermicro X10DRi-T, dual Intel 10GbE LAN, 16 DIMM slots (up to 1TB RAM), on-board graphics, on-board SATA RAID 0/1, IPMI & remote KVM
8x 16GB 2133MHz DDR4 ECC Registered DIMM Module
2x Intel 120GB SATA SSD S3500 Enterprise Series Drive For Operating System
LSI 9300-8i Host Bus Adaptor
5 x 1TB Samsung 850 PRO

I have made a striped pool of the five Samsung SSDs, which should theoretically reach about 2,500 MB/s (roughly 5 × 500 MB/s sequential per drive) :smile:

When I run iperf I get full throughput:
[ 4] 0.0-10.0 sec 9526 MBytes 953 MBytes/sec
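(For reference, a typical iperf 2 invocation that produces output in this format; the exact command isn't shown in the post, and the server address below is a placeholder:

[root@raw ~]# iperf -s
client$ iperf -c <freenas-ip> -f M -t 10)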

When I run the internal dd test:

[root@raw ~]# dd if=/dev/zero of=/mnt/XXXX/YYYY/largefile02 bs=512K count=100000
100000+0 records in
100000+0 records out
52428800000 bytes transferred in 13.833011 secs (3790122073 bytes/sec) (3.52983 GB/s)
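(One caveat, since the pool settings aren't shown: if lz4 compression is enabled on the dataset, a stream from /dev/zero compresses almost completely, so this figure can greatly overstate real disk throughput. A quick way to check, and to rerun the test with compression disabled, with XXXX/YYYY standing in for the real dataset name:

[root@raw ~]# zfs get compression XXXX/YYYY
[root@raw ~]# zfs set compression=off XXXX/YYYY
[root@raw ~]# dd if=/dev/zero of=/mnt/XXXX/YYYY/largefile03 bs=512K count=100000
[root@raw ~]# zfs set compression=lz4 XXXX/YYYY

The last line restores compression afterwards.)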

I have made an SMB share (SMB2_10), and when I connect to it I get something completely different: about 720 MB/s write and about 160 MB/s read.

Can anyone tell me why this is? Shouldn't it be possible to get higher SMB rates?
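(A quick way to separate the pool read path from the SMB read path, sketched with the same placeholder dataset name, is to read the test file back locally:

[root@raw ~]# dd if=/mnt/XXXX/YYYY/largefile02 of=/dev/null bs=512K

Note that with 128 GB of RAM a 50 GB file may be served largely from ARC, so for a true disk read test the file should be larger than RAM.)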
 
Matt (joined Feb 2, 2016):
A few things to check:

- Is your CPU maxed out during the SMB transfer? Is one core of your CPU maxed out during the transfer (see the sketch below)?
- Does your test pool have compression turned on?
- Is the data you're writing to the SMB share the same type of data (file size, compressibility, etc.) as the local test data?
- Is there an in-line virus scanner running on the test computer?
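A quick way to watch per-core load while the transfer is running (on FreeBSD; flags may vary by version):

top -SHP

With Samba, a single client connection is handled by one smbd process, so one saturated core can cap SMB throughput even when the other cores are mostly idle.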

Cheers,
Matt
 
