Performance balancing, need help with fine-tuning

Status: Not open for further replies.
Joined: Sep 24, 2015 | Messages: 9
Hello all,

I have a few questions about balancing a server I built for my work. I decided to use FreeNAS for several reasons; one is the user-friendly interface (if I'm not at work and a small problem comes up, a reasonably trained colleague can solve it without losing his mind :) ).

First I will explain my configuration and my goal.
The goal is to build a server that can handle many simultaneous requests on many files (about 50 MB per file) and some access to big files (6-12 GB per file). The server should give artists fast read access to their files and decent write access. Write throughput can be about a third lower than read throughput; reads are the priority.

The server is a 36-bay Supermicro chassis; this is the exact configuration:
SuperMicro Storage Server R6048-E1CR36N
Rackmount 6U
Chassis CSE-847BE1C-R1K28LPB
1200W Redundant PSU (Platinum, 94%)
Mainboard X10DRi-T4+
2x Xeon E5-2600v3 series
64 GB DDR4-2133 Reg ECC
LSI3108 SAS3 HW RAID
36x Hot-swap SAS3 bays (with LSI expander), populated with 6TB SAS3 (12Gb/s) 7.2k 128MB-cache drives (Seagate RE, 24/7 rated)
Quad 10GBase-T LAN w/ Intel X540
Supermicro AOC-S3108L SAS3 RAID Card w/ 2GB cache
Supermicro SuperCap Module (BBU)
Supermicro 2-port Int-to-Ext SAS3 expansion (to ext. expanders)
2x Samsung SM863 DataCenter SSD 480GB SATA3
Intel E10G42BFSR Network Adapter, Dual 10GBase-SR (PCIe)
The lspci output from the server is attached for more details.

I made 3 hardware RAID 6 groups with the LSI card (all the hardware maintenance is automatic) and the resulting volumes are striped in FreeNAS.
I use one of the SSDs for the ZIL (SLOG) and the other for L2ARC.
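For reference, a quick way to see how ZFS actually views this layout is from the FreeNAS shell (the pool name "tank" below is just a placeholder for whatever the volume is called):

# Show the pool topology: with hardware RAID, ZFS only sees three big "disks" striped
# together, plus the log (ZIL/SLOG) and cache (L2ARC) devices.
zpool status tank
# Capacity and overall health per pool.
zpool list
# Live I/O statistics per vdev, refreshed every 5 seconds.
zpool iostat -v tank 5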

Here are the tuning options I read about and put in /boot/loader.conf:
# Basics
# Upper bound on kernel memory
vm.kmem_size_max="60G"
# Kernel memory size
vm.kmem_size="55G"
# Minimum ARC (ZFS read cache) size
vfs.zfs.arc_min="20G"
# Maximum ARC size
vfs.zfs.arc_max="50G"
# Disable ZFS file-level prefetch
vfs.zfs.prefetch_disable="1"
# Seconds between transaction group commits
vfs.zfs.txg.timeout="5"
# Maximum number of vnodes
kern.maxvnodes=250000
# Cap on dirty data per transaction group (1 GiB)
vfs.zfs.write_limit_override=1073741824

# L2ARC
# by default at the moment
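To check that these values actually took effect after a reboot, they can be read back at runtime with sysctl from the FreeNAS shell (just a verification sketch; these lines do not go in loader.conf):

# Configured ARC limits, in bytes
sysctl vfs.zfs.arc_min vfs.zfs.arc_max
# Current ARC size
sysctl kstat.zfs.misc.arcstats.size
# Prefetch and transaction group settings
sysctl vfs.zfs.prefetch_disable vfs.zfs.txg.timeout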

As you can see, I don't use any options for L2ARC at the moment. Right now, running rsync in both directions between a client (standard PC HDD) and the server over NFS (6 servers running FreeNAS), I get this performance:
For 2 GB iso : 1,991,245,824 100% 335.74MB/s 0:00:05 (xfr#1, to-chk=0/1)
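For context, the kind of command that produces this progress output looks roughly like the lines below; the paths and mount point here are made up, not the ones actually used:

# Copy a single ISO to the NFS-mounted share, printing per-file progress and throughput.
rsync -av --progress /data/test.iso /mnt/freenas-share/
# And back again for the bidirectional part of the test.
rsync -av --progress /mnt/freenas-share/test.iso /data/restore/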

So my questions are:
- Is my loader.conf well set up for my usage?
- What can I do to get more performance?
- Do I really need the ZIL on an SSD, or is it better to remove it (turn it off) and give that SSD to L2ARC as well? (see the sketch after this list)
- What other tips / fine-tuning do you think I can apply to get more performance?
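For the ZIL/L2ARC question, these are the underlying ZFS commands involved in turning a dedicated log device into a second cache device; the pool name "tank" and the gptid are placeholders, and on FreeNAS this would normally be done through the GUI rather than by hand:

# Remove the dedicated log (SLOG) device; the ZIL then lives on the pool disks themselves.
zpool remove tank gptid/xxxx-xxxx-xxxx
# Add the freed SSD as a second cache (L2ARC) device.
zpool add tank cache gptid/xxxx-xxxx-xxxx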

At this time, I'm just using a single standard 1 Gb/s port to connect the server to the network. I'm waiting for fiber to connect it at 10 Gb/s, two fiber cables exactly: one for the team/floor network, and one to serve the data on a different IP to the render farm.
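Once the fiber is in place, each 10 Gb/s interface would get its own address on its own subnet. As a rough FreeBSD-style sketch only (the ix0/ix1 interface names and the subnets are assumptions, and FreeNAS would normally handle this through the network GUI):

# Team/floor network on the first 10GbE port (jumbo frames optional).
ifconfig ix0 inet 10.0.10.2 netmask 255.255.255.0 mtu 9000 up
# Render farm traffic on the second port, on a separate subnet/IP.
ifconfig ix1 inet 10.0.20.2 netmask 255.255.255.0 mtu 9000 up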

Thank you in advance,
Best Regards,

Matt
 

depasseg
FreeNAS Replicant
Joined: Sep 16, 2014 | Messages: 2,874
You are setting yourself up for failure. ZFS (and therefore, FreeNAS) is designed to have direct access and control over each disk. You should not be using HW RAID. I would fix that first. And until you see what's going on, I wouldn't start by modifying the loader.conf settings.
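To make the "direct access" point concrete, a ZFS-native layout for 36 drives would be built from raidz2 vdevs over the raw disks instead of over hardware RAID volumes. A rough command-line sketch that mirrors the intended 3x RAID 6 grouping (the device names and pool name are placeholders; in FreeNAS this is done in the volume manager GUI):

# Three 12-disk raidz2 vdevs striped into one pool.
zpool create tank \
  raidz2 da0  da1  da2  da3  da4  da5  da6  da7  da8  da9  da10 da11 \
  raidz2 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 da22 da23 \
  raidz2 da24 da25 da26 da27 da28 da29 da30 da31 da32 da33 da34 da35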
 
Joined: Sep 24, 2015 | Messages: 9
Hi Depasseg,

So, is it better if I modify or flash my RAID card so it acts only as a plain connector (HBA) and not as a RAID controller?
Best Regards,

Matt
 

SweetAndLow
Sweet'NASty
Joined: Nov 6, 2013 | Messages: 6,421
Hi Depasseg,

So, is it better if I modify or flash my RAID card so it acts only as a plain connector (HBA) and not as a RAID controller?
Best Regards,

Matt
Yes, read the stickies here before you go any further. The fact that you shouldn't use a RAID card is the most talked-about thing with ZFS. I suggest you start with the noob guide by cyberjock.
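Once the controller is presenting the drives directly (however that ends up being done for this particular card), a quick way to confirm FreeNAS really sees the raw disks and their SMART data is from the shell; a small sketch, with da0 as an example device name:

# All 36 drives should show up individually here.
camcontrol devlist
# SMART data should be readable per drive.
smartctl -a /dev/da0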
 
Joined: Sep 24, 2015 | Messages: 9
Hi SweetAndLow,

OK, I found the document. I don't know ZFS very well yet; on the other servers I built I worked with XFS, where RAID cards are supported.
Thanks,

I'll check the documentation and hope to find a solution that keeps FreeNAS on this server... If I can't, I'll go back to a SUSE server... but that's less friendly than FreeNAS ;)
Best Regards,

Matt
 