SOLVED FreeNAS build - horrible performance - how to improve - VMware NFS -> solved: rebuilt with new hardware

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
Hi all,

we are running FreeNAS on an "old" HP server for our VMware cluster as "slow" storage in addition to our full-flash FC SAN. But this slow storage is currently way too slow.
The hardware:
HP DL (number I need to look up) with 14x LFF slots
OS Version: FreeNAS-11.2-U6 (Build Date: Sep 17, 2019 0:16)
Processor: Intel(R) Xeon(R) CPU E5-2430L 0 @ 2.00GHz (24 cores)
Memory: 192 GiB

We have two expansion shelves attached via SAS and multipathed.
We have a dual 10G uplink running in failover.
25x1.8TB 10k SAS 12G
22x4TB 7.2k SAS 12G
2x 0.48TB SAS 12G SSD with PLP
1x 0.96TB SAS 12G SSD with PLP
1x 0.48TB SATA 6G SSD with PLP

Currently we are running the following:
one volume consisting of 2x RAIDZ2 vdevs with 10x 4TB 7.2k each.
This pool has 2x 0.48TB SAS 12G SSDs in a mirror as SLOG
and 1x 0.96TB SAS 12G SSD as L2ARC,
plus 1x 4TB as hot spare.
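
For reference, that layout corresponds to something like the following - a sketch only; the pool name is ours, but the daX device names are placeholders:

zpool create RAIDZ2x2 \
  raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
  raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19 \
  log mirror da20 da21 \
  cache da22 \
  spare da23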

From this pool we get at most 250MB/s write speed, which is freaking slow even for long sequential writes. Usually it drops down to 120MB/s, which is more or less single-disk performance. On the read side it looks a bit brighter, with up to 300MB/s.
How do we improve this?

The second volume consists of 5x RAIDZ1 vdevs with 5x 1.8TB 10k each, plus a single 0.48TB SATA 6G SSD as SLOG.
For this pool the performance is horrible: sequential writes top out at 90MB/s, which is less than half of what a single disk would do.


I am looking for a way to get much more performance out of this machine. I can move the data temporarily to the full-flash FC SAN to reconfigure one pool after the other.

The goal would be to have a large (40TB+), highly redundant volume for backups and a smaller (20TB+) one with more speed but still good redundancy for some VMs.

I am open to any suggestions.

We recycled the 1.8TB SAS 12G drives from an HP MSA iSCSI SAN; that unit, with its old AMD Athlon 2700 and 1GB of memory, was way faster than the volume built from those disks is right now under FreeNAS.

CPU load peaks at about 50% in the worst case.
Dedup is off for both pools,
compression is on with lz4,
and sync writes are enabled; see the check below.
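
Those settings can be double-checked in one go with (using our actual pool names):

zfs get sync,compression,dedup RAIDZ2x2 Z1x5_1800_2_NFS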
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
HBA? IT mode? Read/Write Pool performance?
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
The two external shelves are attached to an LSI SAS 9207-8e (SAS2308, PCIe 3.0 x8, 2x SFF-8088, 6G SAS/SATA HBA/JBOD controller).
The 14 internal HDDs (all 4TB) are connected to an HP P420 running in IT mode.
Regarding performance of the pools, I have already mentioned it...
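
If it helps to verify: the LSI card reports its firmware image (and whether it is the IT version) via

sas2flash -listall

and camcontrol devlist shows whether all disks are presented directly to the OS.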
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
For the pool with the 4 TB 7.2k drives:
root@XXX:/mnt/RAIDZ2x2 # /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=30000
30000+0 records in
30000+0 records out
30720000 bytes transferred in 0.428236 secs (71736155 bytes/sec)
0.43s real 0.03s user 0.39s sys

for the pool with the 1.8TB 10k drives
root@XXX:/mnt/Z1x5_1800_2_NFS # /usr/bin/time -h dd if=/dev/zero of=sometestfile bs=1024 count=30000
30000+0 records in
30000+0 records out
30720000 bytes transferred in 0.437705 secs (70184262 bytes/sec)
0.44s real 0.02s user 0.41s sys
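
(A caveat on these numbers: with lz4 enabled and /dev/zero as the source, a 30MB dd at bs=1024 mostly measures syscall and compression overhead - the zeros compress to almost nothing and barely touch the disks. Something like the following, with incompressible data and larger blocks, should be more representative; /dev/random is reasonably fast on FreeBSD, or use a pre-generated random file:)

/usr/bin/time -h dd if=/dev/random of=sometestfile bs=1M count=10000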
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
for the read speed:
the 4 TB 7.2k pool:
root@XXX:/mnt/RAIDZ2x2 # /usr/bin/time -h dd if=sometestfile of=/dev/zero bs=1024 count=30000
30000+0 records in
30000+0 records out
30720000 bytes transferred in 0.663730 secs (46283894 bytes/sec)
0.66s real 0.00s user 0.65s sys

and the 1.8TB 10k pool:
root@XXX:/mnt/Z1x5_1800_2_NFS # /usr/bin/time -h dd if=sometestfile of=/dev/zero bs=1024 count=30000
30000+0 records in
30000+0 records out
30720000 bytes transferred in 0.732580 secs (41933973 bytes/sec)
0.73s real 0.02s user 0.71s sys
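
(The same caveat applies here, only more so: a 30MB file written seconds earlier is served entirely from ARC, so this measures RAM rather than the pools. Reading a file larger than ARC, conventionally into /dev/null, while watching zpool iostat -v <pool> 1 in a second session, would show what the disks actually do:)

dd if=sometestfile of=/dev/null bs=1M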
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
[Attachment: horribleslow_ZFS.jpg]
So both pools are idle. No powered-on VMs.
I moved a test server's disk onto the one pool, created another test server on the other pool, and benchmarked both one after the other.
Performance-wise it looks horrible. The write speed is way too slow. Looking at the 4TB drives, the speed is well below that of a single drive.
Looking at the 1.8TB drives, the write speed equals that of a single drive...
Those 1.8TB 10k drives come out of an "old" HP MSA, which we used to run iSCSI off before the unit had a controller failure. With that unit it was possible to achieve a constant 410MB/s, limited only by its 4x 1GbE connections... We ran those 25 disks as RAID50, consisting of 5x 5-disk RAID5. The HP MSA only had an AMD Athlon 2700 and 1GB of DRAM cache in each of its two controllers. The FreeNAS machine is way more powerful but totally fails to deliver any of that power.
How can that be?
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Since these are SAS drives behind a RAID controller (even in IT mode, the P420 is still a RAID card at heart!), check the write cache on the individual drives. Here's a quote from another post where a user was experiencing poor performance on SAS devices.

You can check the status with smartctl -g wcache /dev/sdX
You can enable it with smartctl -s wcache,on /dev/sdX
You can disable it with smartctl -s wcache,off /dev/sdX

It does not persist after a power cycle, though, so it will need to be part of the post-init scripts.
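
A minimal sketch of such a post-init script, assuming every disk in kern.disks should get its cache turned on (adjust the list if some should not):

#!/bin/sh
# re-enable the volatile write cache on all disks after boot
for d in $(sysctl -n kern.disks); do
    smartctl -s wcache,on /dev/${d}
done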

Also, do you have exact model/part numbers for the SSDs? Certain ones will be better or worse at SLOG workloads.
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
Hey, already checked that - the caches are on :)
For the pool with the 4TB drives, there are 2x Samsung SSD PM1633a 480GB as SLOG, which should be fast enough... at least I hope.
For L2ARC on that pool there is a single HGST ZeusIOPS SSD S842E800M2.

For the 1.8TB pool there is a single PM883 480GB as SLOG.
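
If it's useful, each SLOG candidate's sync-write latency can be measured directly with diskinfo's sync-write test. Note that -w writes to the device, so only run it against a disk that is empty or about to be reconfigured (da5 is a placeholder):

diskinfo -wS /dev/da5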
 
