ChrisReeve
Explorer
Joined: Feb 21, 2019
Messages: 91
Good morning
First of all, specs:
MB: Supermicro X9DRL-3F
CPU: 2x Intel Xeon E5-2650 v2 (2.6 GHz base, 3.4 GHz boost, Hyper-Threading disabled in BIOS)
RAM: 128GB ECC DDR3 1600MHz
Pool: 10x 10TB WD Red (white-label EMAZ), with 8 drives on the onboard SAS controller and 2 drives on an LSI 9211-8i HBA in IT mode. RAIDZ2, encrypted, with default compression enabled.
NIC: Intel X540-T2 10GbE
Running ESXi 6.7 with a FreeNAS 11.3-RC VM that has 4 dedicated physical cores and 90GB RAM; the storage controllers are passed through to the FreeNAS VM.
During simple file-copy tests over SMB from a Windows client (a physical machine), I see sustained write performance of around 600MB/s. Read performance for cached files (several transfers of the same file from server to the Windows client) saturates the 10GbE connection (around 1.05-1.10GB/s).
But for "cold" files not held in ARC, speeds are "only" around 300MB/s. Not bad, but I am trying to figure out exactly what the bottleneck is.
Results are identical whether running virtualized as described or booting FreeNAS bare-metal. Again, performance for cached files is good, and reads from a test pool on an Intel DC P3700, accessed over SMB, are also fast, which rules out SMB tuning and networking as the limiting factor.
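One way to narrow it down further would be to measure a purely local sequential read on the server itself, taking SMB and the network out of the picture entirely. A rough sketch (the path and size are placeholders; put the test file on the pool you want to measure, and note that FreeBSD's dd wants a lowercase suffix, i.e. bs=1m):

```shell
# Create a 512 MiB file of incompressible (random) data so LZ4
# compression cannot inflate the apparent throughput.
# /tmp is a placeholder; use a dataset on the pool under test.
dd if=/dev/urandom of=/tmp/cold_test.bin bs=1M count=512 2>/dev/null

# Make sure the file is actually cold (export/import the pool, or
# reboot, so it is evicted from ARC), then time a local read.
# dd reports the transfer rate on completion.
dd if=/tmp/cold_test.bin of=/dev/null bs=1M
```

If the local cold read also tops out around 300MB/s, the limit is in the disk/vdev or crypto path rather than anywhere in the SMB stack.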
I believe it might have something to do with either compression or encryption/decryption performance. It doesn't seem to scale with CPU cores/threads, but it might scale with frequency. Frankly, 300+MB/s for cold files is sufficient for my use, but it would be nice to know whether I would see gains from upgrading to e.g. an E5-2667 v2 (3.3 GHz base, 4.0 GHz boost). Any idea if I'm on the right track here?
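To test the decryption hypothesis specifically, one rough proxy is OpenSSL's built-in benchmark. This is an approximation on my part: pool encryption on FreeNAS 11.3 is GELI (AES-XTS by default, in-kernel), so a userspace aes-128-cbc figure tracks per-core AES-NI throughput rather than the exact GELI path.

```shell
# Rough single-core AES throughput from OpenSSL's benchmark.
# Progress chatter goes to stderr; the results table is on stdout.
# aes-128-cbc is only a proxy for GELI's AES-XTS kernel path.
openssl speed -evp aes-128-cbc 2>/dev/null | tail -n 3
```

If the large-block numbers are far above 300MB/s per core, raw AES speed is probably not the ceiling and a frequency bump alone is unlikely to help much; if they are in the same ballpark, decryption is a plausible bottleneck that would scale roughly with clock speed.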