Poor read performance on encrypted pool

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
Good morning

First of all, specs:
MB: Supermicro X9DRL-3F
CPU: 2x Intel Xeon E5-2650 v2 (2.6GHz base, 3.4GHz boost, Hyper-Threading turned off in BIOS)
RAM: 128GB ECC DDR3 1600MHz
Pool: 10x 10TB WD Red (white-label EMAZ); 8 drives connected through the on-board SAS controller and 2 drives through an LSI 9211-8i HBA in IT mode. RAIDZ2 encrypted pool with default compression enabled.
NIC: Intel X540-T2 10GbE

Running ESXi 6.7 with a FreeNAS 11.3-RC VM that has 4 physical cores and 90GB RAM dedicated to it; the storage controllers are passed through to the FreeNAS VM.

During simple file copy tests over SMB from a Windows client (physical machine), I see sustained write performance of around 600MB/s. Reads of cached files (several transfers of the same file from server to Windows client) saturate the 10GbE connection (around 1.05-1.10GB/s).

But for "cold" files not stored in ARC, speeds are "only" around 300MB/s. Not bad, but I am trying to figure out what exactly is the bottleneck.

Results are identical whether running virtualized as described or booting FreeNAS on bare metal. Again, performance for cached files is good, and a test pool running off an Intel DC P3700 accessed over SMB also gives good results, which rules out SMB tuning and networking as the limiting factor.

I believe it might have something to do with either compression or encryption/decryption performance. It doesn't seem to scale with CPU cores/threads, but it might scale with frequency. Frankly, 300+MB/s for cold files is sufficient for my use, but it would be nice to know whether I would see gains from upgrading to e.g. an E5-2667 v2 (3.3GHz base, 4.0GHz boost). Any idea if I'm on the right track here?
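(For reference, one rough way I figure I could gauge per-core AES throughput on the box itself is openssl speed -evp aes-256-cbc, which runs on a single core by default. GELI uses AES-XTS by default rather than CBC, so it is only a ballpark, but if that single-core number landed anywhere near the ~300MB/s I am seeing, per-core crypto throughput would be a plausible suspect.)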
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
Hey,

Did you pass the CPU through to the VM so it can utilize AES-NI? Can you post the output of dmidecode -t processor -t cache so we can see the details, please?
 

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
I will check that later today (I am at work right now). But as I said, performance is identical for both virtualized and bare-metal FreeNAS. I only switched to a virtualized setup about a week ago, but I had been running bare-metal FreeNAS on the current hardware for about 6 months with the same read performance. Shouldn't I have seen a performance difference if that were the case?
 

HolyK

Ninja Turtle
Moderator
Joined
May 26, 2011
Messages
654
Aha, okay, sorry, for some reason I skipped that paragraph. Anyway, I would still check whether AES-NI is reported, just to be sure. Technically it could be disabled in the BIOS (for whatever reason), in which case you would see the same speeds no matter whether bare metal or VM.
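A couple of quick ways to verify this from within FreeNAS itself (just a sketch, adjust names as needed):

# the CPU feature flags printed at boot should include AESNI
grep -i aesni /var/run/dmesg.boot

# GELI reports per provider whether it is using hardware crypto
geli list | grep -i crypto

If the first one comes back empty, the instruction set is either disabled in the BIOS or not exposed to the VM; the second should say "hardware" when AES-NI acceleration is actually in use.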

Anyway ... how exactly did you test? Only transfers via SMB (ruling out SMB/network as the source of the issue), or have you done some local tests as well?

How about something like gdd if=/dev/zero bs=64k of=/encryptedpool/blah/ddtst.out?
Let it run for a while to get past the cache; ~200GB should be sufficient. Oh, and don't use /dev/random, as that can easily become the bottleneck itself.
And keep an eye on CPU utilization (cores and %) while it is running.
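Roughly like this (paths are placeholders and this uses the base system dd; adjust to wherever your dataset is actually mounted, usually under /mnt):

# sequential write of ~200GB of zeroes onto the encrypted dataset
dd if=/dev/zero of=/mnt/encryptedpool/blah/ddtst.out bs=1m count=200000

# then read the same file back
dd if=/mnt/encryptedpool/blah/ddtst.out of=/dev/null bs=1m

One caveat: with LZ4 compression on, a stream of zeroes compresses away to almost nothing, so treat the numbers as a best case rather than real disk throughput.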

You can also check the output of zpool iostat -v poolname.
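For example, zpool iostat -v encryptedpool 5 (pool name is just a placeholder) refreshes every 5 seconds and breaks throughput down per vdev and per disk, which makes it easy to spot one drive lagging behind the others during a read test.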
 

ChrisReeve

Explorer
Joined
Feb 21, 2019
Messages
91
Again, I will check all of this after work today. And as you say, it might be disabled in the BIOS. I doubt it, but I will double-check just to be sure.

The only tests I have done are pure file copies to and from an SMB share. I have ruled out bottlenecks on the client (a high-performance ADATA SX8200 Pro 1TB), and identical file copy tests against a test ZFS pool consisting of a single DC P3700 400GB, also via SMB, show adequate performance. And since I am able to reach 1GB/s read speeds when the files are cached in ARC, both the network and SMB can clearly handle 1GB/s of throughput. I only see slowdowns when copying files that haven't been accessed in a long time.
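(One more local test I plan to run tonight to take SMB and the network out of the picture completely, with the path standing in for some large file that is not already in ARC, e.g. right after a reboot:

dd if=/mnt/encryptedpool/somedataset/largefile.bin of=/dev/null bs=1m

If that also tops out around 300MB/s, the limit is local to the pool/decryption rather than anything on the share or network side.)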

I appreciate any additional tips and tests I can run, and will go through them all later today.
 