iSCSI Transfer Speeds Inside the same "Box"

Status
Not open for further replies.

Snowy

Dabbler
Joined
Dec 8, 2016
Messages
13
Hey everyone, this is my first post here. I've been lurking around for a few weeks, and I'm looking to build my first home lab with ESXi, virtualizing FreeNAS to manage my datastores.

I was reading the following thread, https://forums.freenas.org/index.ph...csi-vs-nfs-performance-testing-results.46553/, and a few others like it where a 10 Gbps Ethernet card is being used. Is this simply to supply clients on your network with 10 Gbps transfer rates, or is an Ethernet card rated at that speed required, since iSCSI wraps SCSI commands in TCP/IP that must be handled by an Ethernet card? I.e., will your Ethernet card cap your transfer speeds in an entirely virtualized test environment (e.g., a FreeNAS VM)? I thought that in a virtual environment you could create a virtual switch, and your read/write speeds would be more or less determined by the CPU/RAM/SAS or SATA drives you have connected.

When using iSCSI in a virtual environment (ESXi specifically), are you constrained by your Ethernet card (1 Gbps vs. 10 Gbps) even if all of your OSes and hard drives are within the same physical "box," are all virtualized, and are connected by a virtual switch? From the later replies in that thread it doesn't seem to be the case, since @soulburn ran a bare-metal FreeNAS yet served ESXi's datastores from it.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,994
The simple answer is no: the external interface will not limit the internal speed, but you do need to set up the internal network properly. Let me preface this by saying I'm no expert; this is just what I've figured out while doing my own research.

Attached is a screen capture of my ESXi 6.5 machine's network setup. It's not terribly busy, either, so it should be easy to follow.

So you will have a vSwitch by default, and when you create your FreeNAS and other VMs you need to select the VMXNET3 Ethernet adapter instead of anything else. This is VMware's high-speed driver and will allow for maximum transfer rates internal to the machine. Of course, when you transfer data out through the physical Ethernet port, you will be limited by that connection.
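If you want to double-check from a script which adapter type a VM actually got, something like this rough pyVmomi sketch works in concept (this is just an illustration of mine, not part of any official setup; the host name, credentials, and the VM name "freenas" are placeholders for your own lab):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholders: ESXi host, credentials, and the VM name "freenas"
    ctx = ssl._create_unverified_context()      # lab host with a self-signed cert
    si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.name.lower() != "freenas":
                continue
            for dev in vm.config.hardware.device:
                if isinstance(dev, vim.vm.device.VirtualEthernetCard):
                    # VirtualVmxnet3 is the device class behind the VMXNET3 adapter
                    print(f"{vm.name}: {dev.deviceInfo.label} -> {type(dev).__name__}")
    finally:
        Disconnect(si)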

The image below is of my FreeNAS setup; note that I have the LAN driver set to VMXNET3.
Capture0.JPG


Next is a screen capture of the ESXi network configuration. Note how every interface connects to the vSwitch.
Capture.JPG


When running iSCSI for my Windows VMs, my limitation is my pool design, not the VMXNET3 driver.
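If you want to see for yourself where the ceiling is, a crude sequential-write test inside the guest tells you a lot; if the number matches what your pool can do rather than wire speed, the network isn't your problem. Here's a rough Python sketch of my own (the drive letter and sizes are placeholders for a disk backed by your iSCSI extent):

    import os
    import time

    # Placeholder: a path on a disk that is backed by the FreeNAS iSCSI extent
    path = "D:/testfile.bin"
    block = os.urandom(4 * 1024 * 1024)     # random data so ZFS compression can't cheat
    total = 2 * 1024 * 1024 * 1024          # write 2 GiB in total

    start = time.time()
    with open(path, "wb", buffering=0) as f:
        written = 0
        while written < total:
            f.write(block)
            written += len(block)
        os.fsync(f.fileno())                # push the data out of the guest's cache
    elapsed = time.time() - start
    print(f"{written / (1024 ** 2) / elapsed:.1f} MiB/s sequential write")
    os.remove(path)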

Hopefully this helps you out some.
 

Spearfoot

He of the long foot
Moderator
Joined
May 13, 2015
Messages
2,478
I agree with @joeschmuck that the answer to your last question is 'No'. You're asking about an AIO (All-in-One) system, where FreeNAS runs as a VM on ESXi and provides datastore storage back to ESXi via iSCSI, NFS, or both. Virtual machine disk transfer rates on an AIO system like this are NOT constrained by the network card. Instead, as you pointed out, transfer speeds are constrained by the speed of the system itself (CPU/RAM/HBA/disks). On my AIO systems, I configure a separate storage network on its own virtual switch to segregate datastore traffic from ordinary LAN traffic, as described in @Benjamin Bryan's excellent article.
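If you'd rather script that separate storage network than click through the UI, here's a rough pyVmomi sketch of the idea (my own illustration, assuming a single standalone host and placeholder names/credentials; it only creates an uplink-less vSwitch and a "Storage" port group, and you'd still add a VMkernel port there for the ESXi software iSCSI initiator):

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholders: host name and credentials; assumes a single standalone ESXi host
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        netsys = view.view[0].configManager.networkSystem

        # A second vSwitch with no physical uplink: traffic on it never leaves the box
        netsys.AddVirtualSwitch(
            vswitchName="vSwitch-Storage",
            spec=vim.host.VirtualSwitch.Specification(numPorts=128, mtu=9000))

        # Port group that the FreeNAS VM's storage NIC (and a VMkernel port) will join
        netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
            name="Storage", vlanId=0, vswitchName="vSwitch-Storage",
            policy=vim.host.NetworkPolicy()))
    finally:
        Disconnect(si)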

However... if you share a datastore over the LAN to an external ESXi server, then you WILL be constrained by network speed, and this is when a 10 Gb/s connection comes in handy.
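If you do go the external route, it's worth sanity-checking the raw link speed before blaming iSCSI. A bare-bones Python sender/receiver like the sketch below (just an illustration, not a substitute for iperf) shows how close the LAN actually gets to 1 or 10 Gb/s; run the server side on the storage box and the client on the other host:

    import socket
    import sys
    import time

    PORT = 5001                              # arbitrary test port
    CHUNK = b"\0" * (1024 * 1024)            # 1 MiB per send
    TOTAL = 2 * 1024 ** 3                    # push 2 GiB

    if sys.argv[1] == "server":              # run this side on the storage box
        srv = socket.socket()
        srv.bind(("", PORT))
        srv.listen(1)
        conn, addr = srv.accept()
        received, start = 0, time.time()
        while True:
            data = conn.recv(1 << 20)
            if not data:
                break
            received += len(data)
        secs = time.time() - start
        print(f"{received * 8 / secs / 1e9:.2f} Gb/s from {addr[0]}")
    else:                                    # run "client <server-ip>" on the other host
        cli = socket.create_connection((sys.argv[2], PORT))
        sent = 0
        while sent < TOTAL:
            cli.sendall(CHUNK)
            sent += len(CHUNK)
        cli.close()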

VMware network configuration for separate LAN and storage networks:
vmware-network-configuration.jpg
 

Snowy

Dabbler
Joined
Dec 8, 2016
Messages
13
Excellent replies. I'll keep on doing my build research with these in mind.


Sent from my iPhone using Tapatalk
 