bhyve: Using internal storage for VMs (network)

Status
Not open for further replies.

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
Hi guys,

I am looking to get a new (used) system, and want to use a bhyve VM running ZoneMinder or Shinobi for home security. I will buy WD Purple or Gold drives for that purpose and put them in a RAID1/mirror. As the VM is on the same host as the storage, how can I use that storage inside the VM? I can of course mount a filesystem with NFS or SMB in that VM, but is the traffic then sent all over my network, or will it stay within bridge0? Or does this use an internal 10GBit bridge interface, so data never leaves the host?

I am not at home to check my system right now, so maybe someone can help me out :)

Cheers,
IceBoosteR
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
Share the pool (or dataset) from your WD Purples over NFS.

Connect from the VM to the NFS share (mount it to something like /mnt/shinobi), then tell shinobi to do its work there in that path.
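As a rough sketch of what that looks like (the dataset name, subnet, and IP addresses here are made-up examples - on FreeNAS you would normally configure the share through the GUI):

```shell
# On the FreeNAS host: export the recordings dataset over NFS
# (pool/dataset "tank/shinobi" and the subnet are assumptions)
zfs set sharenfs="-network 192.168.178.0/24" tank/shinobi

# Inside the VM: mount the share and point shinobi at that path
mkdir -p /mnt/shinobi
mount -t nfs 192.168.178.100:/mnt/tank/shinobi /mnt/shinobi
```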
 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
Share the pool (or dataset) from your WD Purples over NFS.

Connect from the VM to the NFS share (mount it to something like /mnt/shinobi), then tell shinobi to do its work there in that path.
Yes, that's the only way to mount the storage, but does it use an internal bridge, or is the physical network interface used for the data exchange? If it's the latter, then all other users will be impacted due to the limited bandwidth.
 

sretalla

Powered by Neutrality
Moderator
Joined
Jan 1, 2016
Messages
9,703
If you run ifconfig you will probably see that you are on a VNET adapter bridging the host and guest at 10 Gbit/s.
The data shouldn't leave the server, but I'll leave it to your own testing to confirm (it's my understanding that it won't).
 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
If you run ifconfig you will probably see that you are on a VNET adapter bridging the host and guest at 10 Gbit/s.
The data shouldn't leave the server, but I'll leave it to your own testing to confirm (it's my understanding that it won't).
Ok well, then I will have to wait until I am home. I will share my findings of course.
 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
Hi,

back at home I have taken a look at my devices. The bridge acts as the communication layer between the VMs and the jails - so it is nothing more than a piece of software running on top of the hardware layer (the network interface, em0).
FreeBSD doc

For each running VM, a tap interface is created and added to the bridge. In my opinion, this is nothing more than the communication link between the bridge and the VM itself, not providing any built-in bridging features:
Documentation

And last but not least, the epair0a interface, which belongs to the jail. This is a 10GBit internal network that does not saturate the em0 interface:
Link to docs

Maybe I'm wrong here because I misunderstand the docs, but I am now fairly sure that any traffic from the VM is directed to the bridge, which uses em0 to handle the traffic. em0 then sends to the default gateway. I am not sure if this is Layer 2 or Layer 3. If it's Layer 2, the next switch will know which MAC should receive the traffic and send it back to em0, but this time not to a VM - this time to FreeNAS itself.

Code:
root@freenas:~ # ifconfig
em0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=2098<VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC>
		ether 18:66:da:xx:xx:xx
		hwaddr 18:66:da:xx:xx:xx
		inet 192.168.178.100 netmask 0xffffff00 broadcast 192.168.178.255
		nd6 options=9<PERFORMNUD,IFDISABLED>
		media: Ethernet autoselect (1000baseT <full-duplex>)
		status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
		options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
		inet6 ::1 prefixlen 128
		inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
		inet 127.0.0.1 netmask 0xff000000
		nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
		groups: lo
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
		ether 02:55:07:f0:dc:00
		nd6 options=9<PERFORMNUD,IFDISABLED>
		groups: bridge
		id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
		maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
		root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
		member: tap1 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 6 priority 128 path cost 2000000
		member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 5 priority 128 path cost 2000000
		member: epair0a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 4 priority 128 path cost 2000
		member: em0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
				ifmaxaddr 0 port 1 priority 128 path cost 20000
epair0a: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=8<VLAN_MTU>
		ether 02:c5:50:00:04:0a
		hwaddr 02:c5:50:00:04:0a
		nd6 options=1<PERFORMNUD>
		media: Ethernet 10Gbase-T (10Gbase-T <full-duplex>)
		status: active
		groups: epair
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=80000<LINKSTATE>
		ether 00:bd:78:9a:fd:00
		hwaddr 00:bd:78:9a:fd:00
		nd6 options=1<PERFORMNUD>
		media: Ethernet autoselect
		status: active
		groups: tap
		Opened by PID 13594
tap1: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
		options=80000<LINKSTATE>
		ether 00:bd:c7:a7:fd:01
		hwaddr 00:bd:c7:a7:fd:01
		nd6 options=1<PERFORMNUD>
		media: Ethernet autoselect
		status: active
		groups: tap
		Opened by PID 13849



Did you find the answer to your question?
I guess now I did find an answer.

-IceBoosteR
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Do we need this storage to be directly available to the VM AND FreeNAS? If not, why not just set up a zvol and add that as a disk to the VM? If you do need file-level access from FreeNAS, use NFS with virtio. The NFS will add some CPU overhead, but I can't say how much as I don't know how the driver works. If it simulates the full NIC down to the PHY, it will be slower than, say, VMXNET3, which maps directly to memory and uses its own network drivers to put the frames on the wire only if needed.
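A minimal sketch of the zvol route (the pool name and size are assumptions; FreeNAS normally does this through the GUI when you add a disk device to a VM):

```shell
# Create a 250G zvol on the mirror pool for the VM's disk
zfs create -V 250G -o volmode=dev tank/zoneminder-disk

# bhyve attaches it to the guest as a virtio block device, e.g.:
# -s 4,virtio-blk,/dev/zvol/tank/zoneminder-disk
```

The guest then sees a plain disk and puts its own filesystem on it, which is exactly why FreeNAS loses file-level visibility into the data.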
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Yeah, SDN can be a mind funk unless you know exactly how the software emulates/simulates/maps the "packets". It's good that you're thinking about the layers and how/where these are forwarded. In your case, if the VM and FreeNAS are on the same subnet, the traffic should stay internal and not touch your NIC. The VM is connected to the bridge interface (Layer 2 forwarding) and so is the host. Once the bridge learns the MACs of FreeNAS and the VM, it will forward the frames from one driver to the next and not pass through the card at all.
 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
Do we need this storage to be directly available to the VM AND FreeNAS? If not, why not just set up a zvol and add that as a disk to the VM? If you do need file-level access from FreeNAS, use NFS with virtio. The NFS will add some CPU overhead, but I can't say how much as I don't know how the driver works. If it simulates the full NIC down to the PHY, it will be slower than, say, VMXNET3, which maps directly to memory and uses its own network drivers to put the frames on the wire only if needed.
Basically, yes. FreeNAS (and the root user especially) should always have full control over the files. I don't want to spread files with different permissions and access levels across the systems. In general, if there were a direct way of mounting the storage (as with jails) that would be great, but the previous question was whether NFS/SMB shares could also do this with internal traffic routing - which is not the case here.
I do appreciate your suggestion (really, it is not just a phrase), but I am very sure that this implementation will not find its way into my config in the next weeks or months ://
 

IceBoosteR

Guru
Joined
Sep 27, 2016
Messages
503
Yeah, SDN can be a mind funk unless you know exactly how the software emulates/simulates/maps the "packets". It's good that you're thinking about the layers and how/where these are forwarded. In your case, if the VM and FreeNAS are on the same subnet, the traffic should stay internal and not touch your NIC. The VM is connected to the bridge interface (Layer 2 forwarding) and so is the host. Once the bridge learns the MACs of FreeNAS and the VM, it will forward the frames from one driver to the next and not pass through the card at all.
That sounds interesting. But I guess even then, the em0/bridge capacity is used for the traffic, since it is not a purely virtual interface handling it :(
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
the em0/bridge capacity
This may only be limited by CPU and memory speed. Again, I don't know enough about the drivers to say. You can always play around with iperf and find out for sure. Set up a junk VM using the same distro as you will use for ZoneMinder and test: from VM to FreeNAS, from PC (wired) to FreeNAS, and from PC to FreeNAS AND FreeNAS to VM at the same time.
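A minimal iperf3 run for that test could look like this (the host IP is taken from the ifconfig output earlier in the thread; iperf3 may need to be installed in the guest first):

```shell
# On FreeNAS (server side):
iperf3 -s

# Inside the test VM (client side), 30-second run:
iperf3 -c 192.168.178.100 -t 30
```

If the VM-to-host result lands well above the 1 Gbit/s line rate of em0, the traffic clearly never left the box.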
 
Status
Not open for further replies.
Top