iSCSI Performance: Par for the Course?

Joined
Mar 25, 2018
Messages
9
I am wondering whether the IO performance I am experiencing in iSCSI-backed VMs is expected for my hardware.

I know the 5400 RPM drives are going to slow things down, but I still feel the results are slower than they should be.

I've just started up two test VMs to see how things work. I anticipate running a total of 10-20 VMs once fully operational. They will be hosting full cryptocurrency nodes, web servers, and database servers.

The two machines are the home lab boxes in my signature.
  • They are directly connected with a cat 6 cable (no switch - straight from one network card to the other).
  • The IPs are statically set to a separate subnet from other traffic. Only iSCSI is on these interfaces/network.
I tested three ways (a rough sketch of each setup follows the list):
  1. Sync Disabled
  2. Ram Disk SLOG (as described in Testing the benefits of SLOG using a RAM disk!)
  3. Sync Enabled, No SLOG
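
For anyone wanting to reproduce the three configurations, this is roughly what I ran on the FreeNAS side. The zvol name (iscsi-tank0/vm-test) and the RAM disk size are placeholders rather than my exact values, and the RAM disk SLOG is strictly a test rig per the linked thread, never something to leave in place.
Code:
## 1. Sync Disabled (zvol name is a placeholder)
zfs set sync=disabled iscsi-tank0/vm-test

## 2. RAM disk SLOG - testing only, a power loss means lost writes
mdconfig -a -t swap -s 8g -u 0        # creates /dev/md0
zpool add iscsi-tank0 log md0
zfs set sync=always iscsi-tank0/vm-test

## 3. Sync Enabled, no SLOG
zpool remove iscsi-tank0 md0
zfs set sync=always iscsi-tank0/vm-test

In the RAM disk case, sync=always forces every write through the (fake) SLOG; sync=standard is the normal default.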
See screenshots and output files for detailed numbers. The text files include IOPS.

The main problem is sync writes at ~400 KB/s (not MB/s). I can deal with slow, but that's almost crippling.
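To put that number in perspective: 0.426 MB/s at 4 KiB per IO works out to roughly 104 IOPS, i.e. about 9.6 ms per synchronous write, which is in the same ballpark as the seek plus rotational latency of a single 5400 RPM drive (one platter rotation alone is ~11 ms). So each QD1 sync write appears to be waiting on a physical disk commit.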
Code:
# zpool status
  pool: iscsi-tank0
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:18:03 with 0 errors on Sun Jun  9 00:18:05 2019
config:

        NAME                                            STATE     READ WRITE CKSUM
        iscsi-tank0                                     ONLINE       0     0     0
          mirror-0                                      ONLINE       0     0     0
            gptid/a5761110-6dc9-11e9-8737-90b11c250e64  ONLINE       0     0     0
            gptid/ae44d0b6-6dc9-11e9-8737-90b11c250e64  ONLINE       0     0     0
          mirror-1                                      ONLINE       0     0     0
            gptid/d9ab4007-6dc9-11e9-8737-90b11c250e64  ONLINE       0     0     0
            gptid/e3699d01-6dc9-11e9-8737-90b11c250e64  ONLINE       0     0     0
          mirror-2                                      ONLINE       0     0     0
            gptid/1daa3574-6dca-11e9-8737-90b11c250e64  ONLINE       0     0     0
            gptid/2a5d71cc-6dca-11e9-8737-90b11c250e64  ONLINE       0     0     0
          mirror-3                                      ONLINE       0     0     0
            gptid/5293781d-6dca-11e9-8737-90b11c250e64  ONLINE       0     0     0
            gptid/5fac570e-6dca-11e9-8737-90b11c250e64  ONLINE       0     0     0
        spares
          gptid/a3c3b09c-6dca-11e9-8737-90b11c250e64    AVAIL
Code:
## Sync Enabled
-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (UWP) (C) 2007-2018 hiyohiyo
                          Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :    64.591 MB/s
  Sequential Write (Q= 32,T= 1) :     9.618 MB/s
  Random Read 4KiB (Q=  8,T= 8) :     5.138 MB/s [   1254.4 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :     0.413 MB/s [    100.8 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :     5.225 MB/s [   1275.6 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :     0.425 MB/s [    103.8 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :     4.300 MB/s [   1049.8 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :     0.426 MB/s [    104.0 IOPS]

  Test : 500 MiB [C: 35.8% (22.7/63.4 GiB)] (x5)  [Interval=5 sec]
  Date : 2019/06/17 21:48:18
    OS : Windows 10 Professional [10.0 Build 18362] (x64)
 

##RAM SLOG
-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (UWP) (C) 2007-2018 hiyohiyo
                          Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :    67.187 MB/s
  Sequential Write (Q= 32,T= 1) :    54.430 MB/s
  Random Read 4KiB (Q=  8,T= 8) :     5.151 MB/s [   1257.6 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :     4.860 MB/s [   1186.5 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :     5.397 MB/s [   1317.6 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :     4.850 MB/s [   1184.1 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :     4.492 MB/s [   1096.7 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :     4.100 MB/s [   1001.0 IOPS]

  Test : 500 MiB [C: 35.8% (22.7/63.4 GiB)] (x5)  [Interval=5 sec]
  Date : 2019/06/17 22:01:14
    OS : Windows 10 Professional [10.0 Build 18362] (x64)
 
## Sync Disabled
-----------------------------------------------------------------------
CrystalDiskMark 6.0.2 x64 (UWP) (C) 2007-2018 hiyohiyo
                          Crystal Dew World : https://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes

   Sequential Read (Q= 32,T= 1) :    67.869 MB/s
  Sequential Write (Q= 32,T= 1) :    61.400 MB/s
  Random Read 4KiB (Q=  8,T= 8) :     5.337 MB/s [   1303.0 IOPS]
 Random Write 4KiB (Q=  8,T= 8) :     5.779 MB/s [   1410.9 IOPS]
  Random Read 4KiB (Q= 32,T= 1) :     5.305 MB/s [   1295.2 IOPS]
 Random Write 4KiB (Q= 32,T= 1) :     5.766 MB/s [   1407.7 IOPS]
  Random Read 4KiB (Q=  1,T= 1) :     4.491 MB/s [   1096.4 IOPS]
 Random Write 4KiB (Q=  1,T= 1) :     4.777 MB/s [   1166.3 IOPS]

  Test : 500 MiB [C: 35.8% (22.7/63.4 GiB)] (x5)  [Interval=5 sec]
  Date : 2019/06/17 22:52:21
    OS : Windows 10 Professional [10.0 Build 18362] (x64)
 

Attachments

  • CDM 01 Sync Enabled.PNG (61.4 KB)
  • CDM 02 RAM SLOG.PNG (60 KB)
  • CDM 03 Sync Disabled.PNG (61.3 KB)

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
I am wondering whether the IO performance I am experiencing in iSCSI-backed VMs is expected for my hardware.
Have you reviewed these resources:

Why iSCSI often requires more resources for the same result (block storage)
https://www.ixsystems.com/community...res-more-resources-for-the-same-result.28178/

Some differences between RAIDZ and mirrors, and why we use mirrors for block storage (iSCSI)
https://www.ixsystems.com/community...and-why-we-use-mirrors-for-block-storage.112/
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Consumer-Grade 2.5" Laptop Drives
These drives will never perform well. They are intrinsically slow due to their design. Slower than 3.5" drives.

Here are some results I got so you have something to compare against:

My iSCSI pool with 16 x drives in 8 mirror vdevs, no SLOG:
iSCSI performance.PNG


Same pool with the addition of an NVMe SSD for SLOG:
iSCSI with SLOG.PNG


Testing performance of one of the SATA drives in the pool individually:
SATA HDD performance.PNG


Test of a mSATA SSD as a comparison:
mSATA SSD performance.PNG


Another comparison I did using a SATA SSD:
SATA SSD performance.PNG


More vdevs give more performance. I have been meaning to do some testing with various numbers of drives to add more data points to this.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Well, I am not quite as pessimistic on 2.5" drives as @Chris Moore is ... and for a while there were 2.5" drives that could kick 3.5" drives around the block ... but the 2.5" HDD market has largely been given over to SSDs, with notebook- or portable-USB-derived 2.5" drives making up the majority of the stock these days. The ones greater than 2TB are basically all SMR or other crap-for-ZFS tech. The laptop drives will often be slower than the NAS drives, and WD really hasn't done anything with those since 2013 (WD10JFCX).

When I built our big iSCSI SAN in ~2015 it was on the Spinpoint M9T's and um whatever the slower 15mm 2TB 2.5" drive was. It wasn't stellar write performance, but it was pretty competitive, and it was loaded with 1TB of L2ARC, so reads were insane.

Sync writes are always going to be slow. You basically have to get the fastest SLOG technology you can afford, and then live with that.
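(Side note for anyone shopping for one: FreeBSD 12 and later ship a sync-write test in diskinfo that gives a rough idea of how a candidate SLOG device behaves under exactly this kind of load. The device name below is only an example, and the -w write test is destructive, so point it at a blank disk.)
Code:
# FreeBSD 12+; -w enables the destructive write test, -S runs the sync-write (SLOG) test
diskinfo -wS /dev/nvd0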
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080
Well I am not quite as pessimistic on 2.5" drives as @Chris Moore is ... and for awhile there were 2.5" drives that could kick 3.5" around the block...
I was really just talking about these specific drives, which appear to be laptop-style drives that are relatively slow. There was a time when the best drives you could get for performance were the 2.5" 10k or 15k fast spinners. We still have some of those in the Sun/Oracle SAN at work; they are 300GB each, but there are over 200 of them... Not all 2.5" drives are slow, even if they are slow compared to SSDs, but the laptop-style drives are not the best performers. Then there is the vdev count.
 
Joined
Mar 25, 2018
Messages
9
Firstly, thanks for the time taken to respond and point me to resources!
Some differences between RAIDZ and mirrors, and why we use mirrors for block storage (iSCSI)
https://www.ixsystems.com/community...and-why-we-use-mirrors-for-block-storage.112/
Yes, I was aware that striped mirrors are preferred for VM storage. My pool (quantity-wise) is exactly half of yours: 8 x drives in 4 mirror vdevs.
Why iSCSI often requires more resources for the same result (block storage)
https://www.ixsystems.com/community...res-more-resources-for-the-same-result.28178/
That's helpful. I may need to scale back expectations. I feel like the person @jgreco was referring to in that thread!
What people want -> old repurposed 486 with 32MB RAM and a dozen cheap SATA disks in RAIDZ2

What people need -> E5-1637v3 with 128GB RAM and a dozen decent SATA disks, mirrored
What Vince has -> E5-2620 v2 with 32GB RAM and less than a dozen cheap SATA disks, mirrored :D

When I built our big iSCSI SAN in ~2015 it was on the Spinpoint M9T's and um whatever the slower 15mm 2TB 2.5" drive was. It wasn't stellar write performance, but it was pretty competitive, and it was loaded with 1TB of L2ARC, so reads were insane.

Sync writes are always going to be slow. You basically have to get the fastest SLOG technology you can afford, and then live with that.
That's good to hear.

Once I confirm that I'm not doing something insanely wrong with the "base" pool, I plan on adding some L2ARC (and then RAM if necessary) as well as a SLOG.
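
As far as I understand, both can be added to the existing pool later without rebuilding it; from the shell it would be something along these lines (device names are placeholders, and the FreeNAS GUI's add-vdev workflow accomplishes the same thing):
Code:
zpool add iscsi-tank0 cache nvd0    # L2ARC (cache vdev), placeholder device name
zpool add iscsi-tank0 log nvd1      # SLOG (log vdev), placeholder device name
zpool status iscsi-tank0            # confirm the new cache and log vdevs show up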

More vdevs gives more performance. I have been meaning to do some testing with various numbers of drives to give more data points to this.
I will be doing some testing for my own purposes. I can certainly let you know how it goes. Is there any test methodology / OS / Test Suite that would be helpful for you and/or preferred by the wider community?
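
If it helps, one candidate for a repeatable methodology is fio inside a Linux guest, since it can force synchronous writes explicitly and reports latency as well as IOPS. A rough sketch (sizes and runtime are just guesses at sensible values, and fio would need to be installed in the VM):
Code:
fio --name=sync-randwrite --filename=/tmp/fio.test --size=1g \
    --rw=randwrite --bs=4k --ioengine=psync --sync=1 \
    --iodepth=1 --numjobs=1 --runtime=60 --time_based --group_reporting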

If you want really fast VM writes, keep your occupancy rates low. As low as 10-25% if possible. Going past 50% may eventually lead to very poor performance as fragmentation grows with age and rewrites.
Could this be the issue? On the Proxmox (client) end, there's lots of space free on the iSCSI block device. I was thinking of it as "free". However, it's definitely showing as used in the FreeNAS GUI. (screenshot attached)
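
Before blaming occupancy, I'll also double-check what ZFS itself reports as allocated versus reserved for the zvol; something like this from the FreeNAS shell (the zvol path under iscsi-tank0 is a placeholder for whatever the extent actually points at):
Code:
zfs list -r -o name,used,referenced,volsize,refreservation iscsi-tank0
zfs get -r compression,volblocksize iscsi-tank0

If the zvol was created non-sparse, its refreservation makes the full volsize count as used on the pool regardless of how much the guest has written, and as far as I know blocks the guest later frees stay allocated unless the initiator issues UNMAP/TRIM.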
 

Attachments

  • Pool and Zvol Usage.png (40 KB)