Speed issues

Joined
Jan 27, 2021
Messages
8
Hi,
I have the following setup:

1x HP DL380p Gen8 2U
2x Xeon(R) CPU E5-2650L v2 @ 1.70GHz
192 GB RAM
RAID controller P822 in HBA mode
10x 4 TB SAS 12G disks
1x Intel Optane NVMe 16 GB card
1x 1Gb connection for management
1x 10Gb SFP+ direct link bridge to an ESXi server

I have the latest version of TrueNAS and the storage is empty.

My issue is that after I create an NFS share and use it on the connected ESXi server, the performance is total crap.
With sync enabled I get around 50 MiB/s transfer rate, and with sync disabled around 130 MiB/s.

I have tried all the RAID layouts from stripe to RAIDZ2 and RAIDZ3 with no change in performance, configured both with a SLOG on the NVMe disk and without.

I was expecting a transfer rate of at least 800 MiB/s. Is this not realistic? With the P822 in RAID mode with its 2 GB cache, this is the rate I get.
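For what it's worth, here is a repeatable way to measure sync versus async write throughput from a client. This is only a sketch: `/mnt/nfs-test` is a hypothetical mount point for the NFS share on a Linux test box, and the sizes are illustrative.

```shell
# Sequential write with a flush per write (oflag=dsync), so the server's
# sync-write path (SLOG/ZIL) is what gets measured.
dd if=/dev/zero of=/mnt/nfs-test/sync.bin bs=1M count=4096 oflag=dsync

# Same test without per-write flushes, for comparison against the
# async numbers.
dd if=/dev/zero of=/mnt/nfs-test/async.bin bs=1M count=4096
```

Comparing the two rates shows how much of the gap is the sync-write path versus raw pool throughput.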

Any suggestions?
 
Joined
Jan 27, 2021
Messages
8
I forgot to mention that during transfers the CPU is at about 1-2% and free memory is at 134 GiB.
 

Attachments

  • by default 2021-01-27 at 10.28.02.png
    107.5 KB · Views: 187
Joined
Dec 29, 2014
Messages
1,135
Assuming you have the SLOG added correctly, you may need to tune the dirty data max value. Remember that this could leave you vulnerable to power failures or anything else that crashes the system hard.
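A minimal sketch of inspecting and raising that tunable, assuming a FreeBSD-based TrueNAS Core system; the 12 GiB value is purely illustrative, and you should size it to your RAM and workload:

```shell
# Show the current dirty data cap, in bytes.
sysctl vfs.zfs.dirty_data_max

# Raise it to 12 GiB (12 * 1024^3 = 12884901888 bytes). Illustrative
# value only; on TrueNAS, set this as a tunable in the web UI so it
# persists across reboots.
sysctl vfs.zfs.dirty_data_max=12884901888
```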

Does a zpool status show the SLOG like the following?
Code:
root@freenas2:/ # zpool status RAIDZ2-I
  pool: RAIDZ2-I
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0 in 0 days 01:59:52 with 0 errors on Sun Jan 24 01:59:56 2021
config:

        NAME                                            STATE     READ WRITE CKSUM
        RAIDZ2-I                                        ONLINE       0     0     0
          raidz2-0                                      ONLINE       0     0     0
            gptid/67a9a148-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/68893123-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/696903c2-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/6a501044-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/6b4526cb-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/6c34b281-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/6d271bd9-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/6e33d52c-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
          raidz2-1                                      ONLINE       0     0     0
            gptid/a1436a28-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/a24a517e-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/a3404858-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/a43c8614-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/a53a0b93-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/a657fa7a-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/a761f10f-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
            gptid/a8b3b2da-de13-11e8-adca-e4c722848f30  ONLINE       0     0     0
        logs
          gptid/53144d07-0018-11eb-be5d-5c838f806d36    ONLINE       0     0     0
        spares
          gptid/c01c4d23-de13-11e8-adca-e4c722848f30    AVAIL

errors: No known data errors
 
Joined
Jan 27, 2021
Messages
8
Not sure what "SLOG added correctly" means (what are the alternatives?), but this is the zpool status:
Code:
pool: storage
 state: ONLINE
config:

    NAME                                            STATE     READ WRITE CKSUM
    storage                                         ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        gptid/143739ec-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
        gptid/15709eb5-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        gptid/1405df84-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
        gptid/136f0a21-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
      mirror-2                                      ONLINE       0     0     0
        gptid/152eb7d8-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
        gptid/157f7f58-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
      mirror-3                                      ONLINE       0     0     0
        gptid/12bbab37-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
        gptid/147a1c5e-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
      mirror-4                                      ONLINE       0     0     0
        gptid/15145d04-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
        gptid/155af3fd-6089-11eb-8617-000af77e615e  ONLINE       0     0     0
    logs
      gptid/fddf96a0-609e-11eb-8617-000af77e615e    ONLINE       0     0     0

errors: No known data errors
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Your problem is the controller. Even in HBA mode, it's interfering with ZFS and causing double-caching. You need a controller that can run in IT mode.
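A quick way to see what the controller is actually presenting to the OS (a sketch, assuming FreeBSD-based TrueNAS Core) is to list the attached devices; disks behind the ciss driver show up through the controller's abstraction layer rather than as direct SAS targets:

```shell
# List all devices CAM knows about, with the driver and bus each one
# sits behind. With a proper IT-mode HBA the disks appear as plain
# direct-access SAS targets.
camcontrol devlist
```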
 
Joined
Dec 29, 2014
Messages
1,135
As @Samuel Tai mentioned, that is a bad HBA choice. Even in HBA mode, the "ciss" driver that card uses is problematic; I have seen numerous people mention issues with it. In a DL360 G7 I replaced the HP RAID card with an LSI 9207-8i. The connectors to the SAS backplane are the same, so that should be an easy and relatively cheap replacement. The cards go for around $50 on eBay.
 
Joined
Jan 27, 2021
Messages
8
OK, I understand that the controller is not the right one for the job, but the question is why the SLOG speed is bad? The SLOG is on an NVMe SSD that is not connected in any way to the controller.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
SLOG is not a cache. It's a write replay buffer: sync writes are logged there so they can be replayed after a crash, but the data still has to land on your pool through your controller.
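To see how sync writes are being handled on the dataset, something like the following sketch works (the dataset name `storage` is taken from the zpool status earlier in the thread):

```shell
# "standard" honors the client's sync requests -- and ESXi over NFS
# requests sync for everything -- while "disabled" acknowledges writes
# before they reach stable storage (faster, but unsafe on a crash).
zfs get sync storage
```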
 
Joined
Dec 29, 2014
Messages
1,135
I just noticed that your drives are 12G. You might want to look at an LSI 9300 series card instead since they support 12G. I am sure there is a 12G HP equivalent card, but I don't know the model number off the top of my head. Paging @HoneyBadger :smile:

Edit: Unless I am mistaken, the P822 is a 6G RAID card. I doubt 6G versus 12G makes much difference, since the physical platters are the slowest part of the equation. I know the 9207 would be cheaper than a 9300, but getting a card that has the right type of SAS connectors is going to make your replacement much easier.
 
Last edited:
Joined
Jan 27, 2021
Messages
8
There is an HP H240 12Gb SAS/SATA HBA, but I have already overspent the initial budget on this project. I will go back to standard hardware RAID and share the storage over NFS. Now I just have a stupid NVMe SSD that is useless :(
Thanks all for the info and help.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
No, please don't use HW RAID.

 
Joined
Jan 27, 2021
Messages
8
Yes, I know that TrueNAS does not support HW RAID. I was thinking of using HW RAID and Linux to create an NFS share and dropping ZFS for the moment.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
OK, that makes sense.
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
If you want to stay with TrueNAS, you need another HBA. The HP H240 works and is fast, but I would go for a 9xxx-series HBA from LSI.
Additionally, a consumer SSD is only good for L2ARC, if that.
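If the NVMe device ends up unused as a SLOG, it could be repurposed as a read cache instead. A sketch, using the log vdev gptid from the zpool status posted earlier in the thread; `nvd0` is a hypothetical FreeBSD NVMe device name, so verify yours first:

```shell
# Detach the existing log vdev from the pool (gptid taken from the
# zpool status earlier in the thread).
zpool remove storage gptid/fddf96a0-609e-11eb-8617-000af77e615e

# Re-add the NVMe device as L2ARC. nvd0 is a hypothetical device name;
# check with `nvmecontrol devlist`.
zpool add storage cache nvd0
```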
 
Joined
Jan 27, 2021
Messages
8
Thanks for the info. The current setup with the current hardware under TrueNAS was getting this performance:

zfs-hba-mirror-nvme-log-mb.png


With the hardware raid:
raid10-hardware-mb.png


The test was done from another server connected to the storage server with an NFS share and a direct SFP+ connection.

For the moment, until I get an HBA card compatible with TrueNAS, I will use the existing setup.

Also, I have a question on what to expect with an HBA card. Will it be close to the HW RAID speed?

Thanks
 
Last edited:

mstang1988

Contributor
Joined
Aug 20, 2012
Messages
102
Thanks for the info. The current setup with the current hardware under TrueNAS was getting this performance:

View attachment 44784

With the hardware raid:
View attachment 44785

The test was done from another server connected to the storage server with an NFS share and a direct SFP+ connection.

For the moment, until I get an HBA card compatible with TrueNAS, I will use the existing setup.

Also, I have a question on what to expect with an HBA card. Will it be close to the HW RAID speed?

Thanks
I've done tuning recently myself; I'll share my findings.
Also, bump up your test file size to 16 GB once you are close to being tuned. You will likely see your numbers drop (caching).
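A sketch of a sequential write test at that size, so ARC caching doesn't inflate the numbers; the path, job name, and parameters are all illustrative, and fio must be installed on the box running the test:

```shell
# Sequential 1 MiB writes to a 16 GiB file. --direct=1 bypasses the
# client page cache so the reported rate reflects actual storage speed.
fio --name=seqwrite --filename=/mnt/storage/test.dat \
    --rw=write --bs=1M --size=16G \
    --ioengine=posixaio --direct=1
```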
 