iSCSI storage performance test

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
Hi

I have tested my iSCSI storage for VMware; platform details and test results are below. Please let me know whether this is a good score or not, and what I can do to improve iSCSI performance, for example the logical block size etc.

Intel Xeon(R) CPU E5-2650 2.00GHz
H310 min flashed to 9211-8i IT
RAM 64 GB
RAIDZ1 - 8 x Intel SSD D3-S4510 1.92TB + spare
Intel 10 Gbit Ethernet for iSCSI

-----------------------------------------------------------------------
CrystalDiskMark 6.0.1 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 548.314 MB/s - why is Sequential Read worse than Sequential Write???
Sequential Write (Q= 32,T= 1) : 1233.076 MB/s
Random Read 4KiB (Q= 8,T= 8) : 274.620 MB/s [ 67045.9 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 228.392 MB/s [ 55759.8 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 181.031 MB/s [ 44197.0 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 235.885 MB/s [ 57589.1 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 10.883 MB/s [ 2657.0 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 16.403 MB/s [ 4004.6 IOPS]

Test : 100 MiB [E: 0.2% (0.1/40.0 GiB)] (x5) [Interval=5 sec]
Date : 2019/08/18 12:30:46
OS : Windows Server 2016 [10.0 Build 17763] (x64)
 

Zeronic

Cadet
Joined
Apr 21, 2015
Messages
3
I've got a similar server, and I found it was reading from RAM during the read side of the test. So I don't know if this is a hard cap or some other tunable that needs to be set.

FreeNAS-11.2-U5
CPU: Intel(R) Xeon(R) CPU E5-1620 v4 @ 3.50GHz (8 cores)
RAM: 96 GiB
NIC: Intel X520-DA2 (Teamed in Failover)
Disk: 12 Disk, 3x vdev, RaidZ2 4x HGST 600GB 15k RPM

-----------------------------------------------------------------------
CrystalDiskMark 6.0.1 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 606.396 MB/s
Sequential Write (Q= 32,T= 1) : 1150.581 MB/s
Random Read 4KiB (Q= 8,T= 8) : 287.287 MB/s [ 70138.4 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 293.891 MB/s [ 71750.7 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 103.781 MB/s [ 25337.2 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 85.339 MB/s [ 20834.7 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 12.613 MB/s [ 3079.3 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 19.584 MB/s [ 4781.3 IOPS]

Test : 1024 MiB [C: 19.4% (23.0/118.7 GiB)] (x6) [Interval=5 sec]
Date : 2019/08/18 22:16:42
OS : Windows 10 Professional [10.0 Build 17134] (x64)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Bear in mind that unless you force sync=always on the ZVOLs backing the iSCSI LUNs, your write scores will be artificially high as they are effectively "writing into RAM" on the storage side, which is unsafe.

The test size used in both cases is also likely fully included in the ARC cache on the server, so this will skew your read results as well.
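For reference, sync=always can be set from the FreeNAS shell; a minimal sketch, where `tank/iscsi-vmware` is a placeholder for the actual ZVOL path backing the iSCSI extent:

```shell
# Force synchronous semantics for all writes to the ZVOL,
# so the benchmark can no longer "write into RAM" on the server side
zfs set sync=always tank/iscsi-vmware

# Confirm the property took effect
zfs get sync tank/iscsi-vmware
```

Re-running the benchmark with a test size well above the ARC size (or at least several times RAM) gives a truer picture of the disks themselves.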
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
Ah, so the writes were boosted by RAM :) Below are scores for a 32 GiB test size and sync=always on the ZVOL. Is the score satisfactory for 8x SSD?

RAIDZ1

-----------------------------------------------------------------------
CrystalDiskMark 6.0.1 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 543.492 MB/s
Sequential Write (Q= 32,T= 1) : 575.022 MB/s
Random Read 4KiB (Q= 8,T= 8) : 224.315 MB/s [ 54764.4 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 47.835 MB/s [ 11678.5 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 184.949 MB/s [ 45153.6 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 46.496 MB/s [ 11351.6 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 9.957 MB/s [ 2430.9 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 6.111 MB/s [ 1491.9 IOPS]

Test : 32768 MiB [E: 0.2% (0.1/40.0 GiB)] (x5) [Interval=5 sec]
Date : 2019/08/19 8:15:15
OS : Windows Server 2016 [10.0 Build 17763] (x64)


Mirror 4x2

-----------------------------------------------------------------------
CrystalDiskMark 6.0.1 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 554.537 MB/s
Sequential Write (Q= 32,T= 1) : 762.736 MB/s
Random Read 4KiB (Q= 8,T= 8) : 228.103 MB/s [ 55689.2 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 79.665 MB/s [ 19449.5 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 186.038 MB/s [ 45419.4 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 74.036 MB/s [ 18075.2 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 9.984 MB/s [ 2437.5 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 6.979 MB/s [ 1703.9 IOPS]

Test : 32768 MiB [E: 0.1% (0.1/100.0 GiB)] (x5) [Interval=5 sec]
Date : 2019/08/19 10:46:18
OS : Windows Server 2016 [10.0 Build 17763] (x64)
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
That's a more accurate representation of the pool's performance. Also note how the write speeds are higher across the board for mirror vdevs versus RAIDZ1, even with SSDs. You could also try a 2x4-drive RAIDZ1 layout which would likely perform between the two.
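The three layouts being compared can be sketched as `zpool create` invocations (FreeNAS would normally build these through the GUI; pool name `tank` and device names `da0`..`da7` are placeholders):

```shell
# 8-wide RAIDZ1: one vdev, best capacity, lowest write IOPS
zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6 da7

# 2x 4-wide RAIDZ1: two vdevs, roughly double the write IOPS of one vdev
zpool create tank raidz1 da0 da1 da2 da3 raidz1 da4 da5 da6 da7

# 4x 2-way mirrors: four vdevs, best random-write performance, half the raw capacity
zpool create tank mirror da0 da1 mirror da2 da3 mirror da4 da5 mirror da6 da7
```

Random-write IOPS scale roughly with the number of vdevs, which is why the mirror layout leads here.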
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
2x4-drive RAIDZ1

-----------------------------------------------------------------------
CrystalDiskMark 6.0.1 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 548.210 MB/s
Sequential Write (Q= 32,T= 1) : 662.355 MB/s
Random Read 4KiB (Q= 8,T= 8) : 227.403 MB/s [ 55518.3 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 57.618 MB/s [ 14066.9 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 185.948 MB/s [ 45397.5 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 58.729 MB/s [ 14338.1 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 10.391 MB/s [ 2536.9 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 6.499 MB/s [ 1586.7 IOPS]

Test : 32768 MiB [E: 0.2% (0.1/40.0 GiB)] (x5) [Interval=5 sec]
Date : 2019/08/20 8:08:09
OS : Windows Server 2016 [10.0 Build 17763] (x64)
------------------------------------------------------------------------------------------

I'm not sure what the relationship between the two RAIDZ1 vdevs is, mirror or stripe; one SSD is 1.92 TB, so judging by the capacity it looks like a stripe.


9x SSD stripe (9, not 8, because I also used the spare)

-----------------------------------------------------------------------
CrystalDiskMark 6.0.1 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 549.063 MB/s
Sequential Write (Q= 32,T= 1) : 856.660 MB/s
Random Read 4KiB (Q= 8,T= 8) : 224.901 MB/s [ 54907.5 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 119.039 MB/s [ 29062.3 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 185.096 MB/s [ 45189.5 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 115.695 MB/s [ 28245.8 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 10.927 MB/s [ 2667.7 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 6.614 MB/s [ 1614.7 IOPS]

Test : 100 MiB [E: 0.2% (0.1/40.0 GiB)] (x5) [Interval=5 sec]
Date : 2019/08/20 14:19:55
OS : Windows Server 2016 [10.0 Build 17763] (x64)
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
The vdevs are striped, so I can lose one disk in each RAIDZ1 vdev, am I right???
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
The vdevs are striped, so I can lose one disk in each RAIDZ1 vdev, am I right???
Correct; for the same reason, losing two drives in a single vdev (and losing the vdev) will result in the whole pool going offline.
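The vdev layout and per-disk state can be verified at any time with `zpool status` (`tank` is a placeholder pool name):

```shell
# Shows each vdev and member disk; a pool of two RAIDZ1 vdevs
# survives one FAULTED disk per vdev, but two failures in the
# same vdev take the whole pool offline
zpool status tank
```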
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
I wonder how the scores will change when I add a SLOG on an Intel DC P3700...
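For reference, a dedicated log device can be attached to an existing pool from the shell; a sketch where `tank` and `nvd0` are placeholders for the pool and the P3700's device node:

```shell
# Attach the NVMe device as a dedicated SLOG vdev; with sync=always,
# synchronous writes land on this device instead of the pool disks
zpool add tank log nvd0
```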
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
Hi

Today I added an Intel P3700 400 GB as a SLOG for my storage (RAIDZ1 - 8 x Intel SSD D3-S4510 1.92TB); results below.

-----------------------------------------------------------------------
CrystalDiskMark 6.0.1 x64 (C) 2007-2018 hiyohiyo
Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

Sequential Read (Q= 32,T= 1) : 507.865 MB/s
Sequential Write (Q= 32,T= 1) : 875.076 MB/s
Random Read 4KiB (Q= 8,T= 8) : 222.577 MB/s [ 54340.1 IOPS]
Random Write 4KiB (Q= 8,T= 8) : 145.446 MB/s [ 35509.3 IOPS]
Random Read 4KiB (Q= 32,T= 1) : 190.420 MB/s [ 46489.3 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 108.381 MB/s [ 26460.2 IOPS]
Random Read 4KiB (Q= 1,T= 1) : 10.711 MB/s [ 2615.0 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 8.983 MB/s [ 2193.1 IOPS]

Test : 2048 MiB [E: 0.2% (0.1/40.0 GiB)] (x5) [Interval=5 sec]
Date : 2019/09/18 19:25:04
OS : Windows Server 2016 [10.0 Build 17763] (x64)
 

poldas

Contributor
Joined
Sep 18, 2012
Messages
104
The question is:

Is it enough for a small company's VMware iSCSI storage?

- 6 x Windows Server 2019 VMs (VMware)
- 10 x MS SQL databases for local apps
 