ESXi VMs using NFS from FreeNAS hosted by ESXi


TheEmpty

Cadet
Joined
Jul 25, 2018
Messages
6
Alright, so I was following this thread's network setup, https://forums.freenas.org/index.ph...04-x10sdv-tln4f-esxi-freenas-aio.57116/page-3 but haven't been able to make the final jump of adding the datastore. I did have it working previously, but that was using the (physical) router and was painfully slow.

[Attached screenshots of the ESXi network configuration and vSwitch topology]


Over SSH on the ESXi host I can ping 10.55.0.1 (kernel) and 10.55.1.2 (FreeNAS). I've also tried removing the authorized hosts and have the same issue. On the vSwitch topology, should I see a black line connecting the Storage Network and Storage Kernel?
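
For reference, connectivity from a specific vmkernel interface can be tested from the ESXi shell; vmk1 below is an assumed interface name, so adjust to match whatever the first command shows for the Storage Kernel:

Code:
# List vmkernel interfaces and their IPs to find the one on the storage network
esxcli network ip interface ipv4 get

# Ping FreeNAS from that specific vmkernel interface (vmk1 is assumed)
vmkping -I vmk1 10.55.1.2

# If the storage network uses jumbo frames, test a full-size frame with
# don't-fragment set; failing here while a plain vmkping works usually
# means an MTU mismatch somewhere in the path
vmkping -I vmk1 -s 8972 -d 10.55.1.2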
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Any reason you're using NFS? It's slow and crappy.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
What's a storage kernel? That's just a port group, and the object under it is a virtual network interface used by the hypervisor to connect to a particular network. I have never configured something like that, but each port group may be in its own L2 broadcast domain even in the same VLAN. If that's the case, you would still need a link between those port groups.
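
If it helps, the vSwitch and port group layout can be dumped from the ESXi shell and compared against the topology diagram (these are standard esxcli commands, nothing setup-specific assumed):

Code:
# Show each standard vSwitch with its uplinks, MTU, and attached port groups
esxcli network vswitch standard list

# Show every port group and which vSwitch/VLAN it belongs to
esxcli network vswitch standard portgroup list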
 

TheEmpty

Cadet
Joined
Jul 25, 2018
Messages
6
What would you recommend instead of NFS? Just trying to "share" the HDD while having FreeNAS be responsible for managing it.

What's a storage kernel? That's just a port group, and the object under it is a virtual network interface used by the hypervisor to connect to a particular network. I have never configured something like that, but each port group may be in its own L2 broadcast domain even in the same VLAN. If that's the case, you would still need a link between those port groups.
Storage Kernel is a VMkernel NIC.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Set up a zvol and share it via iSCSI.
What's the output of ifconfig on FreeNAS?
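
The zvol part is a single command from the FreeNAS shell; the iSCSI sharing side is then configured under Sharing → Block (iSCSI) in the GUI. A rough sketch, where the pool name tank, the size, and the 16K volblocksize are all assumptions to adjust:

Code:
# Create a sparse 200G zvol for VM storage (-s avoids reserving the
# full size up front; 16K volblocksize is a common choice for ESXi)
zfs create -s -V 200G -o volblocksize=16K tank/esxi-iscsi

# Verify it exists
zfs list -t volume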
 

TheEmpty

Cadet
Joined
Jul 25, 2018
Messages
6
I'll check out iSCSI in parallel here. I have never worked with it, and it doesn't seem "straightforward" (e.g. not as plug-and-play as NFS or Samba).

Code:
freenas# ifconfig
em0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=98<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
		ether 00:0c:29:13:34:84
		hwaddr 00:0c:29:13:34:84
		inet 192.168.86.5 netmask 0xffffff00 broadcast 192.168.86.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
em1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 9000
		options=9b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM>
		ether 00:0c:29:13:34:8e
		hwaddr 00:0c:29:13:34:8e
		inet6 fe80::20c:29ff:fe13:348e%em1 prefixlen 64 scopeid 0x2
		inet 10.55.1.2 netmask 0xffff0000 broadcast 10.55.255.255
		nd6 options=23<PERFORMNUD,ACCEPT_RTADV,AUTO_LINKLOCAL>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
		options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
		inet6 ::1 prefixlen 128
		inet6 fe80::1%lo0 prefixlen 64 scopeid 0x3
		inet 127.0.0.1 netmask 0xff000000
		nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
		groups: lo
vlan0: flags=8842<BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		ether 00:0c:29:13:34:84
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
		vlan: 1 vlanpcp: 0 parent interface: em0
		groups: vlan
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		ether 02:f3:f9:05:8d:00
		nd6 options=1<PERFORMNUD>
		groups: bridge
		id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
		maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
		root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
		member: vnet0:1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 5 priority 128 path cost 2000
		member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 1 priority 128 path cost 20000
vnet0:1: flags=8942<BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		description: associated with jail: plex
		options=8<VLAN_MTU>
		ether 02:ff:60:14:fa:09
		hwaddr 02:59:d0:00:05:0a
		nd6 options=1<PERFORMNUD>
		media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
		status: active
		groups: epair
 
Joined
Dec 29, 2014
Messages
1,135
I'll check out iSCSI in parallel here. I have never worked with it, and it doesn't seem "straightforward" (e.g. not as plug-and-play as NFS or Samba).

Speaking as a grumpy old Unix guy, I find NFS much easier to understand as well. Depending on a number of variables, you MIGHT be able to get some utilization out of multiple NICs using a layer 2 LAGG and NFS. I have seen a number of threads about multi-pathing not working correctly with iSCSI, so that is a consideration. The other thing is that ESXi uses synchronous writes on NFS shares, so you would need some kind of fast SSD/NVMe as an SLOG (dedicated ZFS intent log) to get maximum write performance out of that. See the following thread for details:
https://forums.freenas.org/index.php?threads/slog-benchmarking-and-finding-the-best-slog.63521/
I don't think ESXi forces synchronous writes over iSCSI, but don't hold me to that. It would also be helpful to know exactly what hardware you have, what version of FreeNAS you are running, and what kind of environment you have/the kind of services you are trying to provide.
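
One way to gauge how much the sync write penalty is costing you (and therefore whether an SLOG would help) is to flip the sync property on the dataset backing the NFS share. The dataset name below is an example, and sync=disabled is strictly for testing, since a power loss can eat the last few seconds of writes:

Code:
# Check the current sync setting (ESXi's NFS writes honor this)
zfs get sync tank/vmstore

# Benchmark, then disable sync purely as a test (UNSAFE for real VM data)
zfs set sync=disabled tank/vmstore

# Re-run the benchmark, then restore the default behavior
zfs set sync=standard tank/vmstore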
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
I have seen a number of threads about multi-pathing not working correctly with iSCSI
Works fine for me. ESXi defaults to either Most Recently Used (i.e. failover) or Fixed, I don't remember which. Either way, I always set Round Robin, but even that won't gain much, if anything, without tuning and some black magic.
I don't think ESXi forces synchronous writes over iSCSI
In my testing, it does.
I have never configured something like that, but each port group may be in its own L2 broadcast domain even in the same VLAN.
Tried it between VMs and it works fine. Now to test with a vmkernel port...
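
For reference, the path selection policy can be checked and switched to Round Robin from the ESXi shell; the naa identifier below is a placeholder for the actual device ID:

Code:
# List devices with their current path selection policy
esxcli storage nmp device list

# Switch a device to Round Robin (naa.xxxxxxxx is a placeholder)
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR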
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Alright, so I was following this thread's network setup, https://forums.freenas.org/index.ph...04-x10sdv-tln4f-esxi-freenas-aio.57116/page-3 but haven't been able to make the final jump of adding the datastore. I did have it working previously, but that was using the (physical) router and was painfully slow.

[Attached screenshots of the ESXi network configuration and vSwitch topology]

Over SSH on the ESXi host I can ping 10.55.0.1 (kernel) and 10.55.1.2 (FreeNAS). I've also tried removing the authorized hosts and have the same issue. On the vSwitch topology, should I see a black line connecting the Storage Network and Storage Kernel?

This is the post where I go through the Storage Network/Storage Kernel setup

https://forums.freenas.org/index.ph...n4f-esxi-freenas-aio.57116/page-2#post-401885

I do note that in my post there is a cross-bar joining the two networks, and yours does not have that... not sure if that's an issue or not. I'm still running ESXi 6.5.

I'd double-check that your NFS store is being shared out on the NIC in FreeNAS that matches the MAC shown in ESXi. Also, double-check the IP/network settings etc. on both sides.
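
A few commands that cover that verification; the share path and datastore name below are examples, not your actual values:

Code:
# On FreeNAS: list the active NFS exports and confirm the share path
showmount -e localhost

# On FreeNAS: confirm the MAC of the NIC carrying the share matches
# the vNIC MAC shown for the FreeNAS VM in ESXi
ifconfig em1 | grep ether

# On ESXi: list currently mounted NFS datastores
esxcli storage nfs list

# On ESXi: try mounting the share by hand to get a concrete error
esxcli storage nfs add --host 10.55.1.2 --share /mnt/tank/vmstore --volume-name freenas-nfs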
 