I see your point, but I still don't see a reason why VMs are more secure. Let me elaborate. An attacker who has access to the guest OS will immediately be able to tell what hardware or virtualization method the guest is running on.
So... what? Guess what: I can guess that you're running on some sort of i386 or amd64 platform, that you have some MB's or GB's of RAM, that you seem to have some HDD, and that you booted from BIOS or EFI, etc., and I've got a virtually 100% chance of being correct. Yet this information does not allow me to jump out of your screen and sit in your physical chair. Likewise, for VM's, knowing these things is not helpful to escaping the environment.
Dmidecode, virt-what, even the Windows Device Manager will tell you all about the virtualized devices, and thus reveal the nature and even the version of the hypervisor.
This is generally useless information for an attacker, and actually useful information for the user of a VM to be aware of, at least sometimes. If it's really a problem, you can actually mask things like the CPU type, but I rarely see this done.
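For the curious, a minimal sketch of both the detection and the masking, assuming a Linux guest under QEMU/KVM (the guest image and the rest of the flags are made up):

    # Inside the guest, two common ways to identify the hypervisor:
    dmidecode -s system-product-name    # e.g. "KVM" or "VMware Virtual Platform"
    virt-what                           # e.g. "kvm"

    # On the host, QEMU can mask some of this, e.g. by clearing the CPUID
    # "hypervisor" bit that tools like virt-what look for:
    qemu-system-x86_64 -enable-kvm -cpu host,-hypervisor -m 2048 guest.img

Either way, hiding this only inconveniences fingerprinting; it doesn't change what an attacker can actually do from inside the guest.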
Then we speak of kernels. Yes, they are not shared in a VM as they are with OS-based virtualization, but they are pretty much based on the same code: RHEV (KVM-based), Xen, and even VMware hypervisors are based on the same kernel code, and I am pretty sure they are not going to rewrite OpenSSH completely to make sure the hypervisor is accessible. AWS was running on Xen, now on KVM; again, I'm pretty sure they are running a heavily modified Linux kernel, but based on the same source.
What does this have to do with anything? Lots of things are based on FreeBSD or Linux. This is a matter of degree of compartmentalization.
To make a long story short, I still don't see a valid security reason to run VMs instead of jails (or vice versa), especially now that a single CPU bug can impact all sorts of virtualization technology at once.
There are lots of reasons. VM's offer significantly better compartmentalization. Now, bear in mind that I was one of the sysadmins who worked with phk early on with jails, and I still use the technology; I just don't use it to build highly complex single-platform jail systems anymore, even though I like the technology. I am keenly familiar with both the upsides and the pitfalls.
When you create a FreeBSD jail, you are changing the configuration of an existing FreeBSD system, and a problematic configuration change can have significant impacts on other jailed applications, such as making the whole jail subsystem inoperable or causing problems with networking.
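To make that concrete: every jail on the host is typically declared in the host's single /etc/jail.conf, so one bad global setting touches them all. A minimal sketch, with hypothetical names, paths, and addresses:

    # Declare a jail in the host's one shared /etc/jail.conf:
    cat >> /etc/jail.conf <<'EOF'
    www {
        path = "/usr/local/jails/www";
        host.hostname = "www.example.net";
        ip4.addr = "em0|192.0.2.10/24";
        exec.start = "/bin/sh /etc/rc";
        exec.stop  = "/bin/sh /etc/rc.shutdown";
    }
    EOF
    jail -c www    # start it; every jail here rides the host's one kernel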
Part of the original goal of jails was to allow sharing of cached libraries and code, an admirable goal back in ~2000, but it leads to certain problems, such as making it difficult to upgrade or update the base system without also requiring all the jails to be updated or upgraded. The general solution of using independent filesystem trees eliminates the sharing benefits and moves you closer to a VM-style resource consumption model.
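A hedged sketch of the resulting chore, using freebsd-update's -b flag to operate on an alternate root (the jail tree location is hypothetical):

    # After updating the host, every jail's userland generally has to
    # follow, since all jails run on the host's (now newer) kernel:
    for j in /usr/local/jails/*; do
        freebsd-update -b "$j" fetch install
    done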
It is difficult to scope network access for jails. I routinely build complicated virtual machines and can define the network scope by creating interfaces on various networks to control the access a VM has. I can have some VM's with one leg "live" on the Internet and another interface on a DMZ, without any significant concern that an intruder could gain access to more of the network than the DMZ; explicitly assigning the ethernet interfaces like this is preferable. It is *possible* to do this with jails, but it really requires a bunch of implicit firewall hacking to protect the base system from the Internet and to enforce other policy. Because the system firewall affects all jails, it is pretty easy to mess this up, and so there is a significant security risk there.
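To illustrate why that's fragile: with jails, policy for every jail lives in the host's single ruleset. A minimal sketch, assuming pf on the host (the interface, address, and macro are hypothetical):

    # Host-level pf.conf fragment policing a jail at 192.0.2.10.
    # There is exactly one ruleset for the host and all of its jails:
    cat >> /etc/pf.conf <<'EOF'
    jail_www = "192.0.2.10"
    block in on em0 all
    pass in on em0 proto tcp to $jail_www port { 80 443 }
    EOF
    pfctl -f /etc/pf.conf    # reload; a mistake here hits every jail at once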
Further, with true VM's, each VM can have its own firewall and is not limited to whatever firewall technology the administrator of the host system chose, so VM's running ipfw, pf, and even ipf (for the oldsters) are all viable choices. Each VM gets its own actual IP stack, you can arrange for things like PCIe passthru so a VM can use the host's interfaces directly, and the networking resources of the host system, especially including the routing table and administrative networks, are in no way involved in the VM.
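A quick sketch of that independence, assuming two stock FreeBSD guests (the rc.conf knobs shown are the standard ones):

    # VM #1 chooses pf, entirely inside its own IP stack:
    sysrc pf_enable=YES
    service pf start

    # VM #2 on the same hypervisor chooses ipfw instead; neither choice
    # touches the host or any other guest:
    sysrc firewall_enable=YES firewall_type=workstation
    service ipfw start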
With true VM's, there is no sharing of the host system's filesystem, and therefore no realistic chance of inadvertent alteration of files between guests; jails, by contrast, often use a symlink or hardlink design to cause different jails to share content. This isolation is a security blessing but a resource consumption nightmare.
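For contrast, here's a hedged sketch of the jail-side sharing design just mentioned, where content is coupled across jails through the host filesystem (paths are hypothetical):

    # Host-side nullfs mount sharing one tree into a jail read-only; make
    # this read-write by accident and jails start altering shared content:
    mount -t nullfs -o ro /usr/local/shared /usr/local/jails/www/shared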
With true VM's, you can run multiple versions of FreeBSD. The cluster here has FreeBSD 6, 7, 8, 9, 10, and 11 VM's running in various roles. I am not obligated to update everything just because I take a jail host from FreeBSD 10 to FreeBSD 11. This allows me to gain the security provided by newer patched versions of the OS, while not forcing me to update *everything*, including stuff where there's no realistic risk of an exploit and/or where newer versions of the software running on the VM aren't even available.
With true VM's, you can run multiple operating systems, so there are Ubuntu, Debian, Solaris, SUSE Enterprise, Windows XP/7/10, and a smattering of other operating system types all running here. Not every task runs (or runs well) on every platform, so a good security policy is to pick a platform a package was designed to run on, rather than trying to hammer on it until you finally make FreeBSD support it.
With VM's, I can vMotion a running VM from one hypervisor to another without shutting the VM down, which is great for host maintenance and patching. Many jail hosts run the risk that patching will impact the workload by requiring a reboot, and for significant updates there's also the risk that the update has broken the jail. With a VM, you just move the workload off the host.
With VM's, I can Storage vMotion my running VM's off a backend NAS so that it can be shut down for a firmware upgrade, which is also a security consideration.
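For flavor, here is roughly what both moves look like from a shell, assuming VMware's open-source govc CLI is available (the VM, host, and datastore names are made up):

    # Live-migrate a running VM to another hypervisor (vMotion):
    govc vm.migrate -host cluster1/esx02 web01

    # Relocate its disks to other storage while it keeps running
    # (Storage vMotion):
    govc vm.migrate -ds fastpool web01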
Try creating an NFS server in a jail. You cannot do it. It's trivial to make an NFS server in a VM environment: just install any BSD or Linux. Creating separate fileservers allows for strong compartmentalization and resiliency, so that you can have one VM NFS server on SSD for your working documents while storing larger files on a different one based on HDD, both of which replicate to your FreeNAS for long-term storage and redundancy, and each of which is only visible on the appropriate networks. This is more secure.
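To underline how trivial the VM case is, a minimal sketch for a stock FreeBSD guest (the export path and client network are hypothetical):

    # Turn a plain FreeBSD VM into an NFS server:
    sysrc nfs_server_enable=YES rpcbind_enable=YES mountd_enable=YES
    echo '/data -network 192.0.2.0/24' >> /etc/exports
    service rpcbind start && service mountd start && service nfsd start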
With a VM, it is possible to arrange private access to backend SAN or NAS storage that is not available to other guests. The ability to arrange private access to nonlocal storage is tenuous at best on a jail platform, and very likely requires the intervention of the jail host's administrator.
VM's allow you to create specialized systems that are hyper-focused on doing a single task, and doing that one task well. This is easier to build, easier to understand, easier to audit, easier to reproduce, all of which are favorable security qualities.
VM's are easier to monitor, and it is easier to catch things like runaway workloads or a VM being hit by some sort of DoS attack, because comprehensive monitoring tools come by default with vCenter.
When your jail host finally runs out of oomph and you need to split the workload onto a second one, this can be a major adventure even for a senior sysadmin. Because VM's are inherently compartmentalized, all you need to do in a VM environment is add a new hypervisor and rebalance the workload with DRS, or even just move some VM's manually. This is probably outside the scope of your question, which was about security benefits, so while I have a bunch of other upsides to VM's, I may have run out of the ones that have a clear security advantage.