SOLVED: iSCSI connection drops

Status
Not open for further replies.

Rayb

Dabbler
Joined
Jan 26, 2018
Messages
16
Hi,

I set up a FreeNAS box from various parts; configuration as follows:
Build FreeNAS-11.1-U1

Platform Intel(R) Core(TM) i5-4690K CPU @ 3.50GHz

Memory 16207MB

6 x 7200 2TB drives
1 x SSD as SLOG for the ZVOL.

CIFS, NFS, and AFP work fine.

I have 2 ESXi hosts, however, and figured I'd use iSCSI for the data stores (NFS works too, but iSCSI is better for some workloads).

I have been troubleshooting some vMotion performance issues and choppy I/O to the iSCSI volume, and checking the logs I found this:

Code:
Feb  4 19:24:18 bombadil WARNING: 192.168.0.184 (iqn.1998-01.com.vmware:esx1vmnic3): no ping reply (NOP-Out) after 5 seconds; dropping connection
Feb  4 19:24:29 bombadil WARNING: 192.168.0.184 (iqn.1998-01.com.vmware:esx1vmnic3): no ping reply (NOP-Out) after 5 seconds; dropping connection
Feb  4 19:24:37 bombadil WARNING: 192.168.0.182 (iqn.1998-01.com.vmware:esx2vmnic2): no ping reply (NOP-Out) after 5 seconds; dropping connection
Feb  4 19:24:38 bombadil WARNING: 192.168.0.184 (iqn.1998-01.com.vmware:esx1vmnic3): no ping reply (NOP-Out) after 5 seconds; dropping connection
Feb  4 19:24:46 bombadil WARNING: 192.168.0.182 (iqn.1998-01.com.vmware:esx2vmnic2): no ping reply (NOP-Out) after 5 seconds; dropping connection
Feb  4 19:24:55 bombadil WARNING: 192.168.0.182 (iqn.1998-01.com.vmware:esx2vmnic2): no ping reply (NOP-Out) after 5 seconds; dropping connection
Feb  4 19:24:55 bombadil WARNING: 192.168.0.184 (iqn.1998-01.com.vmware:esx1vmnic3): no ping reply (NOP-Out) after 5 seconds; dropping connection
Feb  4 19:25:07 bombadil WARNING: 192.168.0.184 (iqn.1998-01.com.vmware:esx1vmnic3): no ping reply (NOP-Out) after 5 seconds; dropping connection


And the same goes for the ESXi side:

Code:
Description   Type   Date Time   Task   Target   User
Lost access to volume 5a74dd31-395b13f4-0a8e-18a90566a4e2 (FreeNASiSCSI) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.   Information   2/4/2018 7:21:42 PM	   esx2.home.baastad.local  
Successfully restored access to volume 5a74dd31-395b13f4-0a8e-18a90566a4e2 (FreeNASiSCSI) following connectivity issues.   Information   2/4/2018 7:21:14 PM	   esx2.home.baastad.local  
Lost access to volume 5a74dd31-395b13f4-0a8e-18a90566a4e2 (FreeNASiSCSI) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.   Information   2/4/2018 7:21:06 PM	   esx2.home.baastad.local  
Successfully restored access to volume 5a74dd31-395b13f4-0a8e-18a90566a4e2 (FreeNASiSCSI) following connectivity issues.   Information   2/4/2018 7:20:50 PM	   esx2.home.baastad.local  
Lost access to volume 5a74dd31-395b13f4-0a8e-18a90566a4e2 (FreeNASiSCSI) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.   Information   2/4/2018 7:20:42 PM	   esx2.home.baastad.local  
Successfully restored access to volume 5a74dd31-395b13f4-0a8e-18a90566a4e2 (FreeNASiSCSI) following connectivity issues.   Information   2/4/2018 7:20:29 PM	   esx2.home.baastad.local  
Lost access to volume 5a74dd31-395b13f4-0a8e-18a90566a4e2 (FreeNASiSCSI) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.   Information   2/4/2018 7:20:21 PM	   esx2.home.baastad.local  
Successfully restored access to volume 5a74dd31-395b13f4-0a8e-18a90566a4e2 (FreeNASiSCSI) following connectivity issues.   Information   2/4/2018 7:19:40 PM	   esx2.home.baastad.local  



The connections only drop when I put some I/O on the link(s).
I have tried both single and multipath, but the result is the same for both.

If I actually get a VM onto the iSCSI LUN, it performs well, but Storage vMotioning VMs onto the LUN takes forever (as is to be expected with a huge number of connection drops).

The only thing I can think of is an MTU mismatch, but I have set it to 1500 on the VMware side, and I have not changed the MTU on the FreeNAS side.

Any suggestions? I am at my wit's end here...
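As a quick triage step, the NOP-Out warnings can be tallied per initiator to see whether one path or every path is dropping. A sketch (the sample line is copied from the log above; the live-system command assumes syslog's default /var/log/messages path):

```shell
# Extract the initiator IP and IQN from a NOP-Out warning line. The
# sample is one line from the log above; on the live box you would feed
# /var/log/messages in instead (see comment at the bottom).
LOG='Feb  4 19:24:18 bombadil WARNING: 192.168.0.184 (iqn.1998-01.com.vmware:esx1vmnic3): no ping reply (NOP-Out) after 5 seconds; dropping connection'
OUT=$(echo "$LOG" | sed -n 's/.*WARNING: \([0-9.]*\) (\(iqn[^)]*\)).*/\1 \2/p')
echo "$OUT"   # -> 192.168.0.184 iqn.1998-01.com.vmware:esx1vmnic3
# On the live system, count drops per initiator:
#   grep 'NOP-Out' /var/log/messages | awk '{print $6, $7}' | sort | uniq -c
```

If every initiator shows roughly the same drop rate, the problem is more likely shared (NIC, lagg, switch) than per-path.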
 

bigphil

Patron
Joined
Jan 30, 2014
Messages
486
What does top look like when performing an sVMotion? Curious to see memory usage, since you have very little of it. Which NICs on the FreeNAS side does the sVMotion traffic traverse? Have you tried setting sync=disabled to see if your SLOG is giving you issues? And where is the data coming from? Different pool on the same host, or completely different storage on another box?
 

Rayb

Dabbler
What does top look like when performing an sVMotion? Curious to see memory usage since you have very little of it.

Before:

Code:
last pid: 69874;  load averages:  0.07,  0.18,  0.18																		  up 2+20:48:53  07:39:28

78 processes:  2 running, 76 sleeping

CPU:  2.1% user,  0.0% nice,  1.6% system,  0.0% interrupt, 96.2% idle

Mem: 17M Active, 295M Inact, 455M Laundry, 14G Wired, 190M Free

ARC: 11G Total, 602M MFU, 10G MRU, 69K Anon, 33M Header, 24M Other

	 10G Compressed, 17G Uncompressed, 1.60:1 Ratio

Swap: 6144M Total, 298M Used, 5846M Free, 4% Inuse



  PID USERNAME	THR PRI NICE   SIZE	RES STATE   C   TIME	WCPU COMMAND

  230 root		 16  20	0   187M  126M kqread  0  14:49  0.88% python3.6

 8349 root		 12  35	0 52464K 19380K uwait   0  5:53   0.13% consul

62043 root		  1  20	0  8208K  3980K CPU3	3   0:00  0.03% top

68463 root		  4  20	0  6252K  1876K rpcsvc  2  22:04  0.02% nfsd

38836 nobody		1  20	0  7404K  2940K select  3   0:09  0.01% mdnsd

 8317 root		  1  20	0   147M 18728K kqread  3  0:13   0.00% uwsgi


During (about 2 minutes in):

Code:
last pid: 71138;  load averages:  0.06,  0.14,  0.16																		  up 2+20:50:46  07:41:21

82 processes:  1 running, 81 sleeping

CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle

Mem: 7152K Active, 291M Inact, 461M Laundry, 14G Wired, 193M Free

ARC: 11G Total, 598M MFU, 10G MRU, 6423K Anon, 34M Header, 24M Other

	 11G Compressed, 17G Uncompressed, 1.60:1 Ratio

Swap: 6144M Total, 298M Used, 5846M Free, 4% Inuse



  PID USERNAME	THR PRI NICE   SIZE	RES STATE   C   TIME	WCPU COMMAND

  230 root		 16  20	0   187M  126M kqread  1  14:50  0.19% python3.6

 8349 root		 12  35	0 52464K 19380K uwait   3  5:54   0.13% consul

68463 root		  4  20	0  6252K  1876K rpcsvc  3  22:04  0.07% nfsd

62043 root		  1  20	0  8208K  3996K CPU1	1   0:00  0.05% top

15804 root		  2  20	0 21272K  5820K kqread  3   0:06   0.03% syslog-ng

 8317 root		  1  20	0   147M 18728K kqread  0  0:13   0.00% uwsgi


Disconnect lines show up in the logs immediately.

Maybe I'm not reading top correctly, but this does not look like an overloaded system to me?

Which NICs on the FreeNAS side does the sVMotion traffic traverse?

3 NICs in a load-balancing setup: two Intel e1000s and the onboard one. I can't remember the onboard driver right now (nor do I remember how to list it), but it performed quite well before I added the e1000s.

Code:
root@bombadil:/nonexistent # ifconfig
alc0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=82098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
		ether 74:d4:35:ec:bf:3b
		hwaddr 74:d4:35:ec:bf:3b
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
em0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=2098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
		ether 74:d4:35:ec:bf:3b
		hwaddr 68:05:ca:6c:c5:6a
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=2098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
		ether 74:d4:35:ec:bf:3b
		hwaddr 68:05:ca:75:5f:06
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
		options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
		inet6 ::1 prefixlen 128
		inet6 fe80::1%lo0 prefixlen 64 scopeid 0x4
		inet 127.0.0.1 netmask 0xff000000
		nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
		groups: lo
lagg0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=2098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
		ether 74:d4:35:ec:bf:3b
		inet 192.168.0.101 netmask 0xffffff00 broadcast 192.168.0.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect
		status: active
		groups: lagg
		laggproto loadbalance lagghash l2,l3,l4
		laggport: alc0 flags=4<ACTIVE>
		laggport: em0 flags=4<ACTIVE>
		laggport: em1 flags=4<ACTIVE>



For what it is worth, I have no problem being restricted to 1 Gb wire speed.
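On the "can't remember how to list the driver" point: the interface name itself encodes the FreeBSD driver, and pciconf can give the full hardware details. A sketch:

```shell
# The driver is the interface name minus its unit number:
#   em0  -> em(4)  (Intel e1000)
#   alc0 -> alc(4) (the Atheros onboard NIC)
IF=em0
DRV=$(echo "$IF" | sed 's/[0-9].*$//')
echo "$DRV"   # -> em
# Full vendor/device strings come from the base system's pciconf:
#   pciconf -lv | grep -B4 -i network
```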

Have you tried setting sync=disabled to see if your SLOG is giving you issues? Where is the data coming from? Different pool on the same host or completely different storage on another box?

This has no effect. (The SLOG disk works just fine for NFS writes, though.)
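For anyone repeating this test, a sketch of the sync toggle. The dataset name tank/vmware-zvol is made up; substitute the zvol actually backing the iSCSI extent:

```shell
# Hypothetical dataset name; substitute the real zvol backing the extent.
# Record the current value first so it can be restored exactly:
#   zfs get -H -o value sync tank/vmware-zvol
# Take the SLOG out of the write path and re-run the sVMotion test:
#   zfs set sync=disabled tank/vmware-zvol
# Then restore the default afterwards:
#   zfs set sync=standard tank/vmware-zvol
```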

Where is the data coming from? Different pool on the same host or completely different storage on another box?

I have tried an NFS store on the same box and on a different box; the result seems to be the same, tons of disconnects and almost no throughput. There must be something I am missing here?
 

bigphil

Patron
Have you tried removing the LAGG completely and just using one standalone interface on each system?
 

Rayb

Dabbler
Solved: a single interface seems to fix the problem. sVMotion speed isn't all that, but there are no more disconnects.
Thanks for taking the time :)
 

bigphil

Patron
I didn't even think of it until just now, but LAGG isn't typically used for iSCSI traffic. MPIO is the preferred method for that protocol (and, as far as I know, the only method suggested in VMware's best practices for iSCSI). This may also explain why NFS worked for you, because LAGG is supported for that configuration. I've attached my best-practice guide for using FreeNAS, ESXi, and iSCSI.
 

Attachments

  • FreeNAS and ESXi iSCSI best practice.txt
    1.7 KB

tralalalala

Cadet
Joined
Apr 18, 2017
Messages
1
Same here: dropping iSCSI connections under load (10 GbE Myricom, MTU 9000). I lowered the autotuned vfs.zfs.arc_max, which had been set to 9 GB on a 12 GB system. Problem solved.
No VMware; vanilla standalone system.
 

schwartznet

Explorer
Joined
Jul 10, 2012
Messages
58
I am having this issue and it is driving me nuts. No matter which interface I transfer files to, the link state goes down then up, and throughput slows to 0 and then back to 112 MB/s. I have 256 GB of RAM and it's not using even a fraction of it, so I don't know what's going on here.
 

Dudleydogg

Explorer
Joined
Aug 30, 2014
Messages
50
I also have this problem, but for iSCSI to work with 10 GbE I have to create a bridge interface, which has no selection in the iSCSI portal, so I just set it to 0.0.0.0. If I run a ping from FreeNAS to the iSCSI portal IP, I start to see increased ping times before the console message appears. This FreeNAS's specs are below. I still suspect the bridge in FreeNAS is the problem and want to try MPIO with a switch, but 10 GbE switches aren't cheap.
Build FreeNAS-11.1-U5

Platform Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz

Memory 65387MB
 

James S

Explorer
Joined
Apr 14, 2014
Messages
91
I also have this problem. The setup is FreeNAS 11.1-U5 (platform below) with two direct connections to a VMware server (6.0), following John Keen's approach: https://johnkeen.tech/freenas-11-iscsi-esxi-6-5-lab-setup/#comment-41 It had been performing fine under a range of loads until the last few days, when I've had two errors (first on one NIC and now on both):
> WARNING: 10.0.1.2 (iqn.1998-01.com.vmware:5b8bfc4f-89f9-4faf-6621-38d547021ae0-19a48e7b): no ping reply (NOP-Out) after 5 seconds; dropping connection
> WARNING: 10.0.0.2 (iqn.1998-01.com.vmware:5b8bfc4f-89f9-4faf-6621-38d547021ae0-19a48e7b): no ping reply (NOP-Out) after 5 seconds; dropping connection

The FreeNAS security message was reported at 03:01 (am). Assuming this coincides with the actual event, the system reports show the system an average of 99% idle, 750 MB free RAM, and network traffic in the bits/s range.
So far, reports suggest (1) looking at hardware (but mine doesn't seem to be stressed?), (2) adjusting MTU settings to make them consistent across the VM and FreeNAS at MTU 9000 (which mine are) or reducing to MTU 1500 (but what is the rationale?), and (3) checking consistency of network settings (in this case I'm directly connected).

Any suggestions about how to diagnose this would be much appreciated.

Platform: 2x AMD Opteron 4122 32729MB
 

Dudleydogg

Explorer
I also have this problem. The setup is FreeNAS 11.1-U5 (platform below) with two direct connections to a VMware server (6.0), following John Keen's approach: https://johnkeen.tech/freenas-11-iscsi-esxi-6-5-lab-setup/#comment-41 It had been performing fine under a range of loads until the last few days, when I've had two errors (first on one NIC and now on both):
> WARNING: 10.0.1.2 (iqn.1998-01.com.vmware:5b8bfc4f-89f9-4faf-6621-38d547021ae0-19a48e7b): no ping reply (NOP-Out) after 5 seconds; dropping connection
> WARNING: 10.0.0.2 (iqn.1998-01.com.vmware:5b8bfc4f-89f9-4faf-6621-38d547021ae0-19a48e7b): no ping reply (NOP-Out) after 5 seconds; dropping connection

The FreeNAS security message was reported at 03:01 (am). Assuming this coincides with the actual event, the system reports show the system an average of 99% idle, 750 MB free RAM, and network traffic in the bits/s range.
So far, reports suggest (1) looking at hardware (but mine doesn't seem to be stressed?), (2) adjusting MTU settings to make them consistent across the VM and FreeNAS at MTU 9000 (which mine are) or reducing to MTU 1500 (but what is the rationale?), and (3) checking consistency of network settings (in this case I'm directly connected).

Any suggestions about how to diagnose this would be much appreciated.

Platform: 2x AMD Opteron 4122 32729MB
I tried 9000 across the board; my switch is a NetApp 10G. I also tried bridge mode in FreeNAS with the same error, which is what brought me to the switch option. Two weeks ago I changed to 1500 across all connections and have not seen a ping timeout since. Also, all my disconnects occurred early in the morning, so I backed off on how many smartd tests I would run on the disks to reduce server load. Not sure if there was a connection, but it was strange how all the ping timeouts came in the early AM, the same times I was running SMART tests. Many will say that with current hardware MTU 9000 is not worth the trouble; I have found this to be true, and I have been error-free since the change to 1500. Just my opinion...
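Before settling on 9000 vs 1500, it's worth verifying that jumbo frames actually pass end-to-end with don't-fragment pings. The maximum ICMP payload has to leave room for the 20-byte IP header and 8-byte ICMP header; a sketch (target addresses are placeholders):

```shell
# Max ICMP payload for a given MTU = MTU - 20 (IP header) - 8 (ICMP header).
MTU=9000
PAYLOAD=$((MTU - 28))
echo "$PAYLOAD"   # -> 8972
# Don't-fragment probes; if these fail while a plain ping works, some hop
# in the path is dropping jumbo frames:
#   ESXi:    vmkping -d -s 8972 <freenas-ip>
#   FreeBSD: ping -D -s 8972 <esxi-ip>
```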
 

James S

Explorer
I tried 9000 across the board; my switch is a NetApp 10G. I also tried bridge mode in FreeNAS with the same error, which is what brought me to the switch option. Two weeks ago I changed to 1500 across all connections and have not seen a ping timeout since.
I've just applied an update, so I'll give it another day or two to see how it behaves. If it is still throwing the error I'll change the MTU. Thanks.
 

bigphil

Patron
@James S the guide you linked to was decent but missing some info. Of note, you should have "disable physical block size reporting" set on your iSCSI extent in FreeNAS, which he doesn't mention. Also, his VMware disk claiming rule targets a specific device. It would be better to have an automatic claiming rule set. You can use this command on ESXi (reboot the ESXi host to reclaim the disks after running this): esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "FreeNAS" -M "iSCSI Disk" -P "VMW_PSP_RR" -O "iops=3" -c "tpgs_on" -e "FreeNAS arrays"
While FreeNAS doesn't explicitly have ALUA support because it's a single controller (CTL in FreeBSD does support it, so maybe it's in TrueNAS), this rule works well for it, as it will only find one TPG ID, which is the Active/Optimized path anyway. Also, be sure you're not using VMkernel port binding on your iSCSI connection in ESXi. This isn't the correct method to connect to FreeNAS with iSCSI; only dynamic/static targets should be used.

Edit: More info about FreeBSD ALUA support with CTL.
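To confirm the rule took effect after the reboot, a sketch. The grep at the top only sanity-checks the option string from the command above; the esxcli commands in the comments are what you'd run on the host:

```shell
# The -O option in the rule above caps round-robin path switching at 3
# I/Os per path; pull it back out of the rule string as a sanity check:
RULE='-s VMW_SATP_ALUA -V FreeNAS -M "iSCSI Disk" -P VMW_PSP_RR -O iops=3 -c tpgs_on'
echo "$RULE" | grep -o 'iops=[0-9]*'   # -> iops=3
# On the host, after the reboot:
#   esxcli storage nmp satp rule list | grep -i freenas
#   esxcli storage nmp device list     # PSP should show VMW_PSP_RR
```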
 

Dudleydogg

Explorer
@James S the guide you linked to was decent but missing some info. Of note, you should have "disable physical block size reporting" set on your iSCSI extent in FreeNAS, which he doesn't mention. Also, his VMware disk claiming rule targets a specific device. It would be better to have an automatic claiming rule set. You can use this command on ESXi (reboot the ESXi host to reclaim the disks after running this): esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "FreeNAS" -M "iSCSI Disk" -P "VMW_PSP_RR" -O "iops=3" -c "tpgs_on" -e "FreeNAS arrays"
While FreeNAS doesn't explicitly have ALUA support because it's a single controller (CTL in FreeBSD does support it, so maybe it's in TrueNAS), this rule works well for it, as it will only find one TPG ID, which is the Active/Optimized path anyway. Also, be sure you're not using VMkernel port binding on your iSCSI connection in ESXi. This isn't the correct method to connect to FreeNAS with iSCSI; only dynamic/static targets should be used.

Edit: More info about FreeBSD ALUA support with CTL.
This is very good information; I did not know this, but it makes perfect sense. Thank you for posting. (FreeNAS does not allow multiple NICs in the same subnet.)
 

James S

Explorer
Of note was that you should have the option set to "disable physical block size reporting" on your iSCSI extent in FreeNAS which he doesn't mention.
Thanks - this seems a straightforward fix
It would be better to have an automatic claiming rule set.
Thanks for the suggestion. Can this be run on the current configuration, or should something else be done as well? On ESXi reboot, do I need to re-run it?
Also, be sure you're not using VMkernel port binding on your iSCSI connection in ESXi. This isn't the correct method to connect to FreeNAS with iSCSI. Only dynamic/static targets should be used.
The above seems like tuning. It seems the connection configuration is at the heart of my problem. John Keen's approach (blog post) sets fixed IPs on different subnets for each direct connection to the data pool, to work around FreeNAS's restriction on having multiple NICs within the same subnet. I've configured 2 this way (he had 4). Reading your link to VMware, it does seem to violate the approved methods for connecting storage (unless I'm reading this wrong; essentially this is "bad practice"?). After the timeout I'm seeing the kinds of errors they flag (e.g., storage registered as "dead"). The odd thing is this occurs when the system isn't under load.
The setup I used also includes round robin (set on ESXi but not on the FN side). Whether this is liable to create a hiccup, I'm not clear.
Sorry, a naive question: more links should increase data transfer, and with my setup I'm seeing 200 Mb/s on both links. Does this mean a straight sum, i.e., transfers at 400 Mb/s? FreeNAS won't configure NICs within the same subnet, while VMware states "Array Target iSCSI ports are in a different broadcast domain and IP subnet" as a violation. I've also read it is best to avoid link aggregation in this kind of setup. This starts to feel like being caught between a rock and a hard place! One solution is to revert to a single link, but presumably multiple connections are what creates a workable level of performance here?
 

bigphil

Patron
Thanks for the suggestion - can this be run on the current configuration or should something else be done as well? On ESXI reboot do I need to re-run it?
No, you can just run it one time and then reboot the host; the setting is persistent. There are ways to do it without rebooting (add the new claiming rule, unclaim the disk, then re-run the claim rules; FYI, this method WILL bring the device offline for a moment), but rebooting is probably safer for you.

John Keen's approach (blog post) sets fixed IPs on different subnets for each direct connection to the data pool, to work around FreeNAS's restriction on having multiple NICs within the same subnet. I've configured 2 this way (he had 4). Reading your link to VMware, it does seem to violate the approved methods for connecting storage (unless I'm reading this wrong; essentially this is "bad practice"?).
No, the dynamic or static target setup is correct. You just want to be sure that you don't have iscsi port binding configured if using multiple interfaces because this is not the proper method for connecting to FreeNAS storage.

The setup I used also includes round robin (set on ESXi but not on the FN side). Whether this is liable to create a hiccup, I'm not clear.
Shouldn't be any issue at all.

Sorry, a naive question: more links should increase data transfer, and with my setup I'm seeing 200 Mb/s on both links. Does this mean a straight sum, i.e., transfers at 400 Mb/s?
No. With multipathing you're increasing your throughput and fault tolerance, not bandwidth. You're seeing that behavior because you've changed the PSP to round robin with iops=1, so every I/O uses alternating paths.

FN won't configure NICs within the same subnet while Vmware states, "Array Target iSCSI ports are in a different broadcast domain and IP subnet" as a violation.
It's a "violation" in VMware if your network is configured like that and you're using VMkernel port binding (vSphere 6.5 added the ability to use iSCSI port binding across different subnets). Regardless, with FreeNAS you SHOULDN'T be using iSCSI port binding. Verify that you aren't.

I've also read it is best to avoid link aggregation in this kind of setup.
Correct. With iSCSI multipathing you should not be using any form of LAG.

I'm not sure what your vSwitch and port group layout looks like, but if you configured your setup like the article you followed, you should be OK. It was a bit of an overly complex config, though. You really only need one vSwitch (multiple uplinks are OK, either active/active or active/standby; I prefer active/standby in setups such as yours), a port group for each vmkernel interface used for iSCSI, and then on each iSCSI port group you set the teaming-and-failover policy override to use a SINGLE active adapter (each iSCSI port group uses a different vmnic set to Active). You should NOT have multiple active vmnics on these iSCSI port groups; any other vmnic should be set to unused!
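A sketch of that override from the ESXi shell. The port-group names (iSCSI-PG1/iSCSI-PG2) and uplinks (vmnic1/vmnic2) are placeholders; the commands are printed as a dry run so they can be reviewed before pasting into an ESXi shell:

```shell
# One active uplink per iSCSI port group; echoed first so nothing is
# changed until the printed commands are run on the host itself.
for i in 1 2; do
  echo "esxcli network vswitch standard portgroup policy failover set -p iSCSI-PG$i -a vmnic$i"
done
```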
 

James S

Explorer
Thanks the detailed reply. :)
As you guessed I've got one physical NIC per VMKernel port.
(attached screenshot: VMware iSCSI vSwitch layout)

What is less clear is the use of iSCSI port binding. The summary window says it is "enabled", but the check mark option is greyed out. From the reading/searching I've done, it is only active if another interface is present to bind to?
(attached screenshot: vmkernel NIC settings)

I want to get this set before running the automatic claiming rule.
I've also corrected the MTU settings: they were set at 9000 on the port group but not on the VMkernel port :oops:
 

bigphil

Patron
What is less clear is use of iscsi port binding.
You'll set the iSCSI port binding on the iSCSI software adapter settings.

I've also corrected the MTU settings: set at 9000 in the port group but not on the VMkernel port.
Not on the port group, but on the vSwitch and the vmk interface.
 

James S

Explorer
I'm still working away on this problem.
(1) As far as I can tell, port binding is not configured. However, I don't understand why ESXi shows the box ticked but greyed out. I've posted on the VMware forums and not gotten a clear answer (or one I can follow). I have a dedicated NIC port for each connection, on a separate vmkernel NIC. As far as I can see, I don't have port binding by default?
(2) The connection drops increase dramatically with a FreeNAS update pending but not yet activated.
(3) There is a bug (https://redmine.ixsystems.com/issues/26695), and I guess this is the root of the dropped-connection problem? Given this bug, I am assuming I should reset the MTU to 1500?
 