Missing performance on sequential read operations and sync writes


Eagleman
I have recently migrated my virtual FreeNAS system from ESXi to a physical installation:


FreeNAS-11.0-U4 | Intel Xeon E5-1620 v4 (@ 3.50GHz) | Supermicro X10SRI-F | 64GB DDR4 ECC 2133 RAM | 10GbE (Intel X520-SR2)


Pool Silver (RAID 10):
4x HGST Deskstar NAS V2 6TB

Pool Easy (RAID 10):
4x Samsung 850 EVO (512GB), 3x Samsung 850 EVO (256GB), 1x Samsung 850 Pro (256GB)
SLOG: Intel Optane 900P (280GB)

I am using the pool Easy as a datastore pool for my ESXi server. FreeNAS and ESXi are directly connected through 10GbE (Intel X520-SR2), with two cables between the machines. Since I am using NFS as the datastore protocol I am not using LACP on the uplinks; they are in failover mode instead, so pulling one cable lets the other take over. I do NOT have any tunables set.
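
For reference, a failover lagg on FreeBSD looks roughly like the sketch below; the interface names (ix0/ix1 for the X520 ports) and the address are placeholders, and on FreeNAS this would normally be configured through the GUI rather than by hand.

Code:
# Hedged sketch of a failover lagg on FreeBSD; interface names and address are placeholders
ifconfig ix0 up
ifconfig ix1 up
ifconfig lagg0 create
ifconfig lagg0 laggproto failover laggport ix0 laggport ix1
ifconfig lagg0 inet 192.168.10.2/24 up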

Now I am not sure about the performance I am getting. I am using an all-flash datastore for my VMs, but when I run several benchmarks, sequential reads and sync writes fall short of what I expect (compared to local performance on bhyve).

The network link to ESXi is measured with iperf (ESXi to FreeNAS):

[ 3] 0.0-10.0 sec 11178 MBytes 9374 Mbits/sec
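
For reference, this is a plain TCP test with iperf defaults, roughly as sketched below; the address is a placeholder for the storage IP.

Code:
# On FreeNAS (server side)
iperf -s
# On the ESXi side (client), 10-second TCP test; 192.168.10.2 is a placeholder for the storage IP
iperf -c 192.168.10.2 -t 10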

This is my exact pool config:

Code:
  pool: easy
 state: ONLINE
config:

        NAME                                            STATE     READ WRITE CKSUM
        easy                                            ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/78a111b3-c7be-11e7-9dfa-001b216cc170  ONLINE       0     0     0
            gptid/78da5dac-c7be-11e7-9dfa-001b216cc170  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/79142bc6-c7be-11e7-9dfa-001b216cc170  ONLINE       0     0     0
            gptid/794cf5da-c7be-11e7-9dfa-001b216cc170  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/a589b4a5-c7be-11e7-9dfa-001b216cc170  ONLINE       0     0     0
            gptid/a5c7b115-c7be-11e7-9dfa-001b216cc170  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/b07d54cc-c7be-11e7-9dfa-001b216cc170  ONLINE       0     0     0
            gptid/b0b8af3b-c7be-11e7-9dfa-001b216cc170  ONLINE       0     0     0
        logs
          gptid/9c560dbc-d093-11e7-a282-001b216cc170    ONLINE       0     0     0
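
Since ESXi issues sync writes over NFS, the dataset properties that matter most here are sync, compression and recordsize; a quick way to confirm them is sketched below (easy/datastore is a placeholder name for the NFS export).

Code:
# Pool-wide defaults
zfs get sync,compression,recordsize,atime easy
# The dataset exported to ESXi over NFS (dataset name is a placeholder)
zfs get sync,compression,recordsize easy/datastore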

For example, when I run a VM on bhyve with the AHCI disk driver and sync set to always, I am getting the following numbers:
[Benchmark screenshot: bhyve VM with sync=always]


Now when I run the same test on a VM on ESXi I get the following numbers:
[Benchmark screenshot: ESXi VM over NFS]


I am somehow limited to about 690 MB/s on my datastores. The sync writes over NFS are also cut by more than half when looking at CrystalDiskMark. When running locally on bhyve I should also be getting more performance; here is an example of a single Samsung 850 EVO (256GB) on Windows:

[Benchmark screenshot: single Samsung 850 EVO (256GB) on Windows]



I do know the size of my benchmark is too small to blow out the ARC, but even with the ARC in play I am not getting near the performance of a single SSD on Windows 10.
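
For what it's worth, one way to see how large a benchmark would have to be to get past the ARC is to check its current size and hit counters on the FreeNAS side, roughly as below.

Code:
# Current ARC size and its maximum target, in bytes
sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
# Hit/miss counters; compare before and after a benchmark run
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses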

But basically, 8 of those SSDs in a RAID 10 config can't come close to a single SSD. Is there something I am missing, or is this performance expected from a COW system with my config?
 

Eagleman
what kind of SAS expander?

No SAS expander; I am using IBM M1015s and the SATA backplane in my case (CSE-SATA-933).

The 512GB SSDs are connected through the CSE-SATA-933 backplane, which is wired directly to my motherboard.
The 256GB SSDs are inside the case, connected to a single port on the M1015.
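
A quick way to double-check which controller each SSD is actually attached to on FreeBSD:

Code:
# Lists every disk together with the scbus/target it sits on
camcontrol devlist
# Verbose form also shows which driver (ahci, mps, ...) backs each scbus
camcontrol devlist -v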

EDIT:
It turned out all of my SSDs were attached to the CSE-SATA-933. I moved them all to the onboard SATA ports and saw slight improvements:

bhyve:
[Benchmark screenshot: bhyve VM after moving SSDs to onboard SATA]


ESXi:
[Benchmark screenshot: ESXi VM after moving SSDs to onboard SATA]


The sequential reads have increased by almost 700 MB/s. This was the first run after a reboot, and the numbers did not increase by the fifth run.
However, the performance of this pool is still not what I expected it to be.
 

Eagleman
Let's bring this performance test back as close to the pool itself as possible. I ran some dd tests with compression off and got the following results:

Code:
2x mirror: Samsung 850 EVO (256GB) x2

root@freenas:/mnt/benchmark # dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 329.917452 secs (325457722 bytes/sec)

root@freenas:/mnt/benchmark # dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 99.280997 secs (1081517972 bytes/sec)



2x2 mirror: Samsung 850 EVO (512GB) x4 

root@freenas:/mnt/easy/benchmark # dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 157.263621 secs (682765550 bytes/sec)

root@freenas:/mnt/easy/benchmark # dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 66.430672 secs (1616334423 bytes/sec)



2x2x2 mirror: Samsung 850 EVO (512GB) x4 + Samsung 850 EVO (256GB) x2

root@freenas:/mnt/easy # dd if=/dev/zero of=tmp.dat bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 157.546034 secs (681541642 bytes/sec)


root@freenas:/mnt/easy # dd if=tmp.dat of=/dev/null bs=2048k count=50k
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 68.641886 secs (1564266200 bytes/sec)




Now what is interesting is why the 2x2x2 mirror is unable to deliver higher read speeds than the 2x2 mirror: it is reading from 6 SSDs instead of 4, yet it delivers less performance. Is this because I added slower drives compared to the rest of the pool?
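
One way to see whether the slower 256GB mirror caps the striped read (and how evenly the reads spread across the vdevs) would be to watch per-vdev bandwidth while the dd read runs, for example:

Code:
# Run in a second shell during the dd read test; shows ops and bandwidth
# broken down per mirror vdev, refreshed every second
zpool iostat -v easy 1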
 