10G Network TCP tuning

Status
Not open for further replies.

tbaror

Contributor
Joined
Mar 20, 2013
Messages
105
Hello All,

I have a new storage build with the spec listed below.
The storage serves as a CIFS NAS and as iSCSI storage for 3 Xen hypervisor hosts.
The pool is built from 2 vdevs of 8 disks each; each vdev is RAID-Z2.
The pool is divided into 3 zvols of 4TB each for the iSCSI Xen network, plus one 18TB dataset allocated for NAS (CIFS) usage. 4 SSDs were assigned globally for caching: 2 mirrored for the log (SLOG/ZIL) and 2 striped for reads (L2ARC).
The system is currently up and running, but I would like to tune the 10G network card. I read a few posts about 10G network tuning and picked the two suggested tunable sets below; which one do you think is most suitable for this spec?
Thanks

Code:
#option 1
kern.ipc.soacceptqueue=256            # listen(2) accept queue backlog
kern.ipc.maxsockbuf=16777216          # max socket buffer size: 16MB
net.inet.tcp.recvbuf_max=16777216     # receive buffer autotuning ceiling: 16MB
net.inet.tcp.recvspace=4194304        # initial TCP receive buffer: 4MB
net.inet.tcp.recvbuf_inc=4194304      # autotuning grow step: 4MB
net.inet.tcp.sendbuf_max=16777216     # send buffer autotuning ceiling: 16MB
net.inet.tcp.sendspace=4194304        # initial TCP send buffer: 4MB
net.inet.tcp.sendbuf_inc=4194304      # autotuning grow step: 4MB


Code:
#option 2
sysctl kern.ipc.soacceptqueue=1028          # listen(2) accept queue backlog
sysctl kern.ipc.maxsockbuf=33554432         # max socket buffer size: 32MB
sysctl net.inet.tcp.recvbuf_max=33554432    # receive buffer autotuning ceiling: 32MB
sysctl net.inet.tcp.recvspace=4194304       # initial TCP receive buffer: 4MB
sysctl net.inet.tcp.recvbuf_inc=524288      # autotuning grow step: 512KB
sysctl net.inet.tcp.recvbuf_auto=1          # enable receive buffer autotuning
sysctl net.inet.tcp.sendbuf_max=33554432    # send buffer autotuning ceiling: 32MB
sysctl net.inet.tcp.sendspace=2097152       # initial TCP send buffer: 2MB
sysctl net.inet.tcp.sendbuf_inc=262144      # autotuning grow step: 256KB
sysctl net.inet.tcp.sendbuf_auto=1          # enable send buffer autotuning
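
For reference, either set can be tried at runtime before committing to it. Below is a minimal sketch of testing one knob and making it persistent, assuming plain FreeBSD; on FreeNAS the usual route is to add the same key/value pairs as "Sysctl" tunables in the web UI rather than editing /etc/sysctl.conf by hand.

Code:
# set one value at runtime and read it back to confirm
sysctl net.inet.tcp.recvbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max

# persist across reboots on plain FreeBSD
echo 'net.inet.tcp.recvbuf_max=16777216' >> /etc/sysctl.conf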


Code:
Chassis: 4U Supermicro SC846BE1C-R1K28B
CPU & MB: Intel Xeon E5-1620v3 / Supermicro X10SRL-F
Memory: 128GB DDR4-2133 2Rx4 ECC REG DIMM
HBA: LSI LSI00344 9300-8i SGL, SAS 3, 4 port HBA, 12Gb/s, JBOD, PCI-Express 3.0 x8 LP
Cache SSD: 4x Samsung SSD 850 Pro Int. 2.5" 512GB, MZ-7KE512BW
HDD: 16x Seagate Constellation ES.3 4000GB, 7200 RPM, 3.5" SATA III, 128MB - ST4000NM0124
OS Drive: 2x SMC SATA III DOM 64GB, MLC SSD-DM064-PHI
Network: Intel Ethernet Converged Network Adapter X710-DA4
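
For context, a minimal sketch of how the pool layout described above would typically be created: two 8-disk RAID-Z2 vdevs, a mirrored SLOG, and two striped cache devices. The pool name and device names (da0-da15, ada0-ada3) are placeholders for illustration, not the actual devices on this box.

Code:
# hypothetical pool and device names, for illustration only
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
  raidz2 da8 da9 da10 da11 da12 da13 da14 da15 \
  log mirror ada0 ada1 \
  cache ada2 ada3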
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Not enough vdevs, and not enough RAM for the L2ARC, to serve 3 XenServer hosts; that workload means you will have many random reads.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
I'm curious what your performance is now. How many VMs do you have? I agree with @zambanini, and think you would have performance issues serving three hypervisors with that setup.

Usually, when I'm making recommendations for VM storage, I'll recommend striped mirrors for maximum I/O. You also need a lot of memory; 128GB seems too low for this workload, and I'd probably recommend you max out your motherboard at 256GB.

Having 1TB of L2ARC is overkill given how much memory you have, and you may even be hurting performance with that much L2ARC. I usually recommend targeting a 1:5 ARC:L2ARC ratio, and you're in the neighborhood of 1:10 right now.
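
As a rough back-of-the-envelope check (assuming the ARC ends up around 100GB of the 128GB of RAM): a 1:5 target would suggest roughly 500GB of L2ARC, while the two striped 512GB SSDs give about 1TB, i.e. closer to 1:10. The live sizes can be read from the arcstats sysctls on FreeBSD/FreeNAS:

Code:
# current ARC and L2ARC sizes, in bytes
sysctl kstat.zfs.misc.arcstats.size
sysctl kstat.zfs.misc.arcstats.l2_size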
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
Also take care to keep more free space available: that kind of setup should never fill up to 80%, and I recommend you stay far below that. Also, why do you use SATA disks? I hope you don't really use this for a production environment.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
As far as space goes, you should never exceed 80% on a ZFS array. For iSCSI usage, you should stay below 50%. This is because ZFS is copy-on-write; as far as ZFS is concerned, an iSCSI volume is a single large file, so any change to it requires that the modified data be copied rather than overwritten in place. If you fill beyond 50%, then ZFS is forced to fragment the file in order to complete the CoW activity, which causes terrible latency spikes.
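
A simple way to keep an eye on this is the standard zpool list output, where CAP shows how full the pool is and FRAG shows free-space fragmentation ("tank" here is a placeholder pool name):

Code:
zpool list tank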

I don't share your concerns about SATA drives (I see them used extremely commonly for this use in the enterprise nowadays). Is there a reason why you oppose them?
 

zambanini

Patron
Joined
Sep 11, 2013
Messages
479
SDC, error rates, dual controller use, warranty.

Nearline SATA drives for backup: why not.
 

tbaror

Contributor
Joined
Mar 20, 2013
Messages
105
Thanks all for your advice. If we find it is consistently too slow, we will upgrade to 256GB of RAM and add an additional vdev of 8 disks. As for the current load: worst case we will have about 15 guest VMs spun up simultaneously across all 3 hypervisors, but on average it will be more like 8 VMs running at the same time.
The type of load on the NAS ranges from video conversion from high-res to low-res (mp4) and transferring video material, to executing various scripts and just simple clicking around the UI in the virtual guests.
Back to my original question regarding TCP tuning: any advice on that subject?
Thanks again
 