Speed issues with NVMe vs spinning disks

Crashdodson

Cadet
Joined
Jun 29, 2023
Messages
8
I have a new Dell server with two 32-core AMD EPYC processors and 512 GB of RAM. For testing I have 8 spinning 20 TB drives and 2 NVMe 8 TB drives.

On the server I'm running ESXi 6.7 with TrueNAS as a VM and a Windows Server 2022 VM. For testing, the Windows server was assigned 32 cores and 256 GB of RAM, and the TrueNAS server was assigned 24 cores and 128 GB of RAM. The goal of this setup is to use the Windows box for Veeam backups stored on the TrueNAS. The VMs are installed on a datastore that is a separate 1 TB NVMe drive. This is a single host, with both VMs on the same host and the same vSwitch. I created a pool of the 8 spinning drives with a SLOG on one of the NVMe drives.
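For anyone following along, the pool described above can be built from the TrueNAS shell roughly like this. This is only a sketch: the pool name "tank" and the device names (da0-da7 for the raw-mapped spinning disks, nvd0 for the passed-through NVMe) are placeholders for whatever the VM actually sees, and the pool wizard in the web UI does the equivalent.

zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
zpool add tank log nvd0        # dedicate one NVMe device as the SLOG
zpool status tank              # should show the raidz2 vdev plus a separate "logs" vdev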

This is from the Windows host to the C: drive, which is on the 1 TB NVMe:
[attached benchmark screenshot: local 1 TB NVMe C: drive]


I created an iSCSI connection from the Windows host to a TrueNAS zvol on the 8-disk RAIDZ2 pool with the NVMe SLOG:
[attached benchmark screenshot: iSCSI to the 8-disk RAIDZ2 pool with NVMe SLOG]
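For reference, the zvol behind an extent like that can be created along these lines (again just a sketch; the name tank/veeam, the 10T size, and the 64K volblocksize are made-up example values, and the Sharing > Block (iSCSI) wizard in the web UI is the usual way to build the target and extent on top of it):

zfs create -o volblocksize=64K -V 10T tank/veeam   # zvol that gets exported to Windows as the iSCSI extent
zfs get sync,compression,volblocksize tank/veeam   # check how writes to the extent will be handled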


It's not terrible, but I saw a similar result on an 8-disk pool without the SLOG drive.

I then created a pool of the 2 NVMe drives in a mirror and was really disappointed by the results. This is with sync turned off, as the reads were about half with sync enabled. It's actually slower with the 2-NVMe mirror than with the 8-disk RAIDZ2, which I am having trouble understanding. The NVMe drives are passed through to the TrueNAS VM via PCI passthrough; the 20 TB spinning disks are handed to the TrueNAS VM as raw disks.

[attached benchmark screenshot: iSCSI to the 2x NVMe mirror, sync disabled]
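For completeness, the mirror and the sync change from this test look roughly like this at the shell (a sketch; "fast" and nvd0/nvd1 are placeholder names, and sync=disabled is a benchmarking setting only, since it acknowledges writes before they are on stable storage):

zpool create fast mirror nvd0 nvd1
zfs set sync=disabled fast     # benchmark only: trades write safety for speed
zfs get sync fast              # verify the setting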


Having trouble understanding the poor performance of the NVMe mirror compared to the performance of the spinning disks with the NVMe SLOG.
 


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680

 

Crashdodson

Cadet
Joined
Jun 29, 2023
Messages
8
The solution should start with MPIO.
Make some more vSwitches (I'd say 4 for NVMe). This should help drive performance upwards.
Thanks. After creating 4 more vSwitches and adding 4 more NICs to the TrueNAS VM, the VM is no longer on the network. It can't get DHCP and I can't set a static IP. I then disconnected all the other NICs, but there is still no network connectivity on vmx0. I did not make any changes to the network settings on the first NIC in VMware.
 

Crashdodson

Cadet
Joined
Jun 29, 2023
Messages
8
I tried just adding a second NIC and putting it on the same vSwitch as the first, which has network connectivity. TrueNAS still can't access the network with 2 NICs added. I tried both DHCP and a static address.
 

Crashdodson

Cadet
Joined
Jun 29, 2023
Messages
8
From another forum post I found, it seems that after adding NICs, TrueNAS renumbers the interfaces, so vmx0 is now one of the other virtual NICs. I put all the virtual NICs on the same vSwitch and I can access TrueNAS now. I will have to compare MACs to figure out which interface is which.
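A quick way to do that comparison from the TrueNAS console is something like this, then match each MAC against the adapters listed in the VM's settings in ESXi:

ifconfig -a | grep -E '^vmx|ether'   # prints each vmx interface header followed by its "ether" (MAC) line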
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I tried just adding a second NIC and putting it on the same vSwitch as the first, which has network connectivity. TrueNAS still can't access the network with 2 NICs added. I tried both DHCP and a static address.

You cannot have two Ethernet interfaces on the same layer 3 network. Each interface has to be on its own separate subnet.


You also cannot mix DHCP and static addressing; once you get more than one interface with an IP address on your NAS, you need to use static addressing.
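In practice, that means giving each iSCSI path its own small subnet and a static address on both ends, something like the made-up layout below (any ranges that don't overlap your management network will do), while vmx0 keeps its original management address:

vmx1  10.99.1.2/24   <->   Windows iSCSI NIC 1  10.99.1.1/24
vmx2  10.99.2.2/24   <->   Windows iSCSI NIC 2  10.99.2.1/24
vmx3  10.99.3.2/24   <->   Windows iSCSI NIC 3  10.99.3.1/24
vmx4  10.99.4.2/24   <->   Windows iSCSI NIC 4  10.99.4.1/24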

From another forum post I found, it seems that after adding NICs, TrueNAS renumbers the interfaces, so vmx0 is now one of the other virtual NICs. I put all the virtual NICs on the same vSwitch and I can access TrueNAS now. I will have to compare MACs to figure out which interface is which.

It's probably not TrueNAS renumbering it. ESXi enumeration for PCIe devices is a bit funky once you get multiple interfaces going. Merely adding a bunch of ethernet interfaces to a single vswitch is probably not going to work the way you think, and certainly is not what @NickF meant.
 

Crashdodson

Cadet
Joined
Jun 29, 2023
Messages
8
You cannot have two Ethernet interfaces on the same layer 3 network. Each interface has to be on its own separate subnet.


You also cannot mix DHCP and static addressing; once you get more than one interface with an IP address on your NAS, you need to use static addressing.



It's probably not TrueNAS renumbering it. ESXi enumeration for PCIe devices is a bit funky once you get multiple interfaces going. Merely adding a bunch of ethernet interfaces to a single vswitch is probably not going to work the way you think, and certainly is not what @NickF meant.
Thank you. I understand how the virtual networking works, and I understand the NICs will need to be on separate networks for iSCSI. After adding the NICs, where VMware's "Network adapter 1" had been vmx0, vmx0 is now assigned to "Network adapter 2" with a new MAC address. I have mapped out how TrueNAS maps the vmx interfaces to the corresponding VMware adapters and am making changes accordingly to test MPIO.
 

Crashdodson

Cadet
Joined
Jun 29, 2023
Messages
8
Setting up MPIO with 4 NICs has more than doubled the speed.

One NIC
[attached benchmark screenshot: single NIC]

4 NICs
[attached benchmark screenshot: 4 NICs with MPIO]


Are there diminishing returns for adding even more vNICs to this setup?
 

NickF

Guru
Joined
Jun 12, 2014
Messages
763
Somewhere between 2-8 NICs, depending on a whole host of factors. :P
4 is probably a sane place to land for you, considering these are all virtual switches. You can try more if you want, but I'm sure we'll hit a bottleneck somewhere else in the stack the more you add; this is a lot of CPU/memory overhead to switch this quantity of packets. There's obviously a nonzero performance impact to doing network switching in software, and most of that is probably dependent on how fast your RAM is.
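If you want to see where that overhead is going, esxtop on the ESXi host shell is the easiest check (rough sketch of the workflow; the exact counters vary by ESXi version):

esxtop        # then press 'n' for the network view and 'c' for the CPU view, to watch per-vNIC throughput and switching CPU cost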
 