Zvol dead performance

michaeleino

Dabbler
Joined
Jan 17, 2014
Messages
24
Hello All,
I have a single RAIDZ2 vdev consisting of 8x14TB Ultrastar DC HC530 drives plus one spare (which I don't believe are SMR).

OS Version: FreeNAS-11.3-U4.1 --- I can't remember exactly, but I think this issue appeared after 11.1 or 11.2.
HW:
Supermicro Model: SSG-6049P-E1CR24L
Memory: 95 GiB
No dedicated SLOG device.
The disks are new and pass the SMART short/long tests.
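To double-check the SMR question, the drive model can be read with smartctl (the DC HC530 line uses conventional/CMR recording; /dev/da0 below is a placeholder for one of the data disks):
Code:
root@fn[~]# smartctl -i /dev/da0 | egrep 'Device Model|Rotation Rate'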

Tunables:
Code:
kern.ipc.maxsockbuf 8388608
kern.ipc.nmbclusters 6042656
vfs.zfs.l2arc_norw 0
vfs.zfs.l2arc_write_boost 40000000
vfs.zfs.l2arc_write_max 10000000
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.zfetch.max_distance 33554432
vfs.zfs.arc_max 51431000000
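
These can be verified at runtime with sysctl, e.g.:
Code:
root@fn[~]# sysctl vfs.zfs.arc_max
vfs.zfs.arc_max: 51431000000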

The issue: when I attach the zvol to a VM, the write speed looks like a turtle... or even slower :(

Here is the test on the dataset that contains the zvol (~380 MB/s):
Code:
root@fn[/mnt/superstorage/VMs]# iozone -Ra -g 1G -i 0 -i 1 -+u -+r
    Iozone: Performance Test of File I/O
            Version $Revision: 3.487 $
        Compiled for 64 bit mode.
        Build: freebsd

    Run began: Sun Sep 13 22:10:41 2020

    Command line used: iozone -Ra -g 1G -i 0 -i 1 -+u -+r
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                   
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4   377891   610287  1688531  1895721  1624659   675563  1355045    701378   1312798   701378   653961  1442420  1695195

iozone test complete.

root@fn[/mnt/superstorage/VMs]# iozone -Ra -g 2G -i 0 -i 1 -+u -+r
    Iozone: Performance Test of File I/O
            Version $Revision: 3.487 $
        Compiled for 64 bit mode.
        Build: freebsd

    Run began: Sun Sep 13 22:10:54 2020

    Command line used: iozone -Ra -g 2G -i 0 -i 1 -+u -+r
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                   
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4   367858   576535  1931527  2089386  1662389   606667  1426135    669247   1365383   673655   615359  1147892  1835760

iozone test complete.


And here is the test from inside the VM that has the zvol:
Code:
root@guest:~# iozone -Ra -g 1G -i 0 -i 1 -+u -+r
    Iozone: Performance Test of File I/O
            Version $Revision: 3.429 $
        Compiled for 64 bit mode.
        Build: linux-AMD64

    Run began: Sun Sep 13 21:41:30 2020

    CPU utilization Resolution = 0.000 seconds.
    CPU utilization Excel chart enabled
    Read & Write sync mode active.
    Command line used: iozone -+u -+r -Ra -g 1G -i 0 -i 1
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                   
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4       11        9  2668325  2474613  1778004        9  1723770        10   1760512  1319250  1252318  2317080  2486072

iozone test complete.

root@guest:~# iozone -Ra -g 2G -i 0 -i 1 -+u -+r
    Iozone: Performance Test of File I/O
            Version $Revision: 3.429 $
        Compiled for 64 bit mode.
        Build: linux-AMD64

    Run began: Sun Sep 13 20:46:05 2020

    CPU utilization Resolution = 0.000 seconds.
    CPU utilization Excel chart enabled
    Read & Write sync mode active.
    Command line used: iozone -+u -+r -Ra -g 2G -i 0 -i 1
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                   
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4       11       11  1451192  1954379  2001745        8  1459080        13   1451192  1155924   939486  2207515  1924603

iozone test complete.

zpool status:
Code:
root@fn[/mnt/superstorage/VMs]# zpool status
  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:05:05 with 0 errors on Sun Sep 13 03:50:05 2020
config:

    NAME        STATE     READ WRITE CKSUM
    freenas-boot  ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        da9p2   ONLINE       0     0     0
        da10p2  ONLINE       0     0     0

errors: No known data errors

  pool: superstorage
 state: ONLINE
  scan: scrub repaired 0 in 8 days 23:29:20 with 0 errors on Mon Sep  7 23:30:19 2020
config:

    NAME                                            STATE     READ WRITE CKSUM
    superstorage                                    ONLINE       0     0     0
      raidz2-0                                      ONLINE       0     0     0
        gptid/10b1bad4-ad3f-11e9-887d-ac1f6b751d78  ONLINE       0     0     0
        gptid/12161a32-ad3f-11e9-887d-ac1f6b751d78  ONLINE       0     0     0
        gptid/139575f0-ad3f-11e9-887d-ac1f6b751d78  ONLINE       0     0     0
        gptid/14df717c-ad3f-11e9-887d-ac1f6b751d78  ONLINE       0     0     0
        gptid/162e05ae-ad3f-11e9-887d-ac1f6b751d78  ONLINE       0     0     0
        gptid/179149fb-ad3f-11e9-887d-ac1f6b751d78  ONLINE       0     0     0
        gptid/18dc05c9-ad3f-11e9-887d-ac1f6b751d78  ONLINE       0     0     0
        gptid/1a2602eb-ad3f-11e9-887d-ac1f6b751d78  ONLINE       0     0     0
    spares
      gptid/1b7a2947-ad3f-11e9-887d-ac1f6b751d78    AVAIL   

errors: No known data errors

The guest OSes tested are: Ubuntu 20.04.1 LTS // Ubuntu 18.04.5 LTS // Ubuntu 16.04.7 LTS // CentOS 8.2
With Ubuntu 18.04 I have tried kernels v3.18-generic // v4.12-generic // v4.4-generic // v4.15-generic // v4.15-generic-hwe // v5.4-generic-hwe

I've read most of the available posts about VMs and zvols as block storage... but I believe it should not be dead like this!
Code:
https://www.ixsystems.com/community/threads/optimal-zvol-configuration-for-debian-vm-guests-performance.84556/
https://www.ixsystems.com/community/threads/the-path-to-success-for-block-storage.81165/

Even the delay/tolerance of bhyve should not kill the performance like this.

Earlier, on an old system, I was using a PC with 16 GB of non-ECC RAM and desktop HDDs running FreeNAS 9.x with a VirtualBox jail... and it was totally awesome!

What should I check?! I'm really hitting a wall o_O
Thanks in advance.
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Typically this slow write performance is because sync is enabled. What does zfs get sync superstorage/VMs/<name of your zvol> show? If this is enabled, try zfs set sync=disabled superstorage/VMs/<name of your zvol>, and restart your VM.
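For example (zvol name below is a placeholder; output illustrative):
Code:
root@fn[~]# zfs get sync superstorage/VMs/vm1
NAME                  PROPERTY  VALUE     SOURCE
superstorage/VMs/vm1  sync      standard  default
root@fn[~]# zfs set sync=disabled superstorage/VMs/vm1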
 

michaeleino

Dabbler
Joined
Jan 17, 2014
Messages
24
Typically this slow write performance is because sync is enabled. What does zfs get sync superstorage/VMs/<name of your zvol> show? If this is enabled, try zfs set sync=disabled superstorage/VMs/<name of your zvol>, and restart your VM.
Hello Samuel, I thought I had already done that and it hadn't taken effect... after doing it one more time, it really makes a difference now!!
But the degradation is still awful:
I think something is not behaving correctly with the bhyve disk driver or the zvol itself.
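For completeness, here is how the relevant zvol properties can be checked in one shot (zvol name is a placeholder):
Code:
root@fn[~]# zfs get sync,dedup,compression,volblocksize superstorage/VMs/vm1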

Here is the dataset again:
Code:
root@fn[/mnt/superstorage/VMs]# iozone -Ra -g 2G -i 0 -i 1 -+u -+r
    Iozone: Performance Test of File I/O
            Version $Revision: 3.487 $
        Compiled for 64 bit mode.
        Build: freebsd

    Run began: Wed Sep 16 22:26:11 2020

    Command line used: iozone -Ra -g 2G -i 0 -i 1 -+u -+r
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                   
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4   387926   629796  1635797  1961520  1589779   779258  1616100    760762   1365383   732478   685482  1594501  1867692

iozone test complete.


Here is the zvol after disabling sync:
Code:
root@guest:~# iozone -Ra -g 2G -i 0 -i 1 -+u -+r
    Iozone: Performance Test of File I/O
            Version $Revision: 3.429 $
        Compiled for 64 bit mode.
        Build: linux-AMD64

    Run began: Wed Sep 16 22:32:41 2020

    CPU utilization Resolution = 0.000 seconds.
    CPU utilization Excel chart enabled
    Read & Write sync mode active.
    Command line used: iozone -+u -+r -Ra -g 2G -i 0 -i 1
    Output is in kBytes/sec
    Time Resolution = 0.000001 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                   
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4      117     2550  2708713  3182373  2246778     3287  1861217      3986   1815584  1467055  1316016  2609952  2625909

iozone test complete.

And here is the zvol after disabling sync and dedup:
Code:
root@guest:~# iozone -Ra -g 2G -i 0 -i 1 -+u -+r
    Iozone: Performance Test of File I/O
            Version $Revision: 3.429 $
        Compiled for 64 bit mode.
        Build: linux-AMD64

    Run began: Wed Sep 16 22:27:47 2020

    CPU utilization Resolution = 0.000 seconds.
    CPU utilization Excel chart enabled
    Read & Write sync mode active.
    Command line used: iozone -+u -+r -Ra -g 2G -i 0 -i 1
    Output is in kBytes/sec
    Time Resolution = 0.000005 seconds.
    Processor cache size set to 1024 kBytes.
    Processor cache line size set to 32 bytes.
    File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                   
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4      443    11957  2047116  3085338  2134615     8985  1120214     12064   2039340  1524088  1145307  3176617  3303676

iozone test complete.


What do you think?
 

hescominsoon

Patron
Joined
Jul 27, 2016
Messages
456
Typically this slow write performance is because sync is enabled. What does zfs get sync superstorage/VMs/<name of your zvol> show? If this is enabled, try zfs set sync=disabled superstorage/VMs/<name of your zvol>, and restart your VM.
For VMs you WANT sync... that's where the SLOG comes in... but he had already disabled sync. :)
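For example, a fast power-loss-protected SSD could be attached as a SLOG like this (device name is a placeholder):
Code:
root@fn[~]# zpool add superstorage log nvd0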
 

Samuel Tai

Never underestimate your own stupidity
Moderator
Joined
Apr 24, 2020
Messages
5,399
Are you running your VM disk with the emulated AHCI driver or the VirtIO paravirtualized driver?
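(A quick way to tell from inside a Linux guest: a VirtIO block device shows up as /dev/vdX, while the emulated AHCI disk shows up as /dev/sdX. Output below is illustrative:)
Code:
root@guest:~# lsblk -d -o NAME,TYPE,SIZE
NAME TYPE  SIZE
vda  disk  100G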
 