That falls squarely into the "just because you can, doesn't mean you should" category.
Depends on the purpose.
ARM has been trying to beat its way into the data center for compute for some years now. Years ago, Intel was on top because of Microsoft Windows workloads, so virtualization was focused in part on trying to create a Windows-friendly environment. Windows licensing, however, is rather onerous and unfriendly in the data center, and there's been a trend towards building on Linux or FreeBSD. In many cases, a software stack that would compile on i386 or amd64 will compile and run just fine on ARM as well. In fact, if you don't have Windows workloads or things that actually require Intel architecture, there is the potential to avoid the Intel CPU tax and the need to support all sorts of legacy architectural baggage, a burden that AMD and the other x86 implementers still carry.
ARM has been propelled by the mobile device market to produce high-performance, low-power designs that are rather different from Intel's. One trend in server design over the last 25 years is that it has become easier to build distributed systems, so you no longer need a monster single-point-of-failure server to handle all your processing. Some of us have been doing this longer than others; one of my reasons for running 386BSD back in the day was that PC hardware was cheap, even if sometimes unreliable, and running two PCs was still cheaper than running a single Sun server. A redundant array of inexpensive servers ("RAIS"), if you will.
One of the problems in the data center is that it is generally inefficient to size bare-metal hosts to individual workloads, and many workloads sit relatively idle a large part of the time. Virtualization has been very successful at creating isolated spaces for workloads on shared compute.
Most of the infrastructure VMs I build run on FreeBSD, and the vast majority of them get by with 256MB of RAM and one or two cores. You don't need more than that for many tasks. Basic OSPF routers, NAT gateways, DHCP servers, NTP servers, basic Web servers, MTA/MX hosts, SSH gateways, DNS servers, NFS servers, server load balancers, and VPN servers all generally work fine as small-footprint virtual machines.
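As a rough sketch of what such a small-footprint guest looks like, here is a minimal vm-bhyve template for a FreeBSD host (assuming you manage bhyve with the vm-bhyve frontend; the switch name "public" and the 8G disk size are placeholder assumptions, not anything from the post above):

```
# freebsd-small.conf -- hypothetical vm-bhyve template for a tiny service VM
loader="bhyveload"
cpu=1
memory=256M
network0_type="virtio-net"
network0_switch="public"    # assumes a vm-bhyve virtual switch named "public" exists
disk0_type="virtio-blk"
disk0_name="disk0.img"
disk0_size="8G"             # plenty for a DNS/NTP/DHCP-class guest
```

With a template like this in place, something along the lines of `vm create -t freebsd-small dns0` stamps out another 256MB guest, which is how you end up with a fleet of cheap, disposable service VMs rather than a few oversized ones.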
So an 8GB Raspberry Pi is probably not good for a heavy environment that needs lots of big VMs running, but it is fine as a demo environment, and it could probably do very well at hosting general network services for a small home network. What will be more exciting is if some larger hypervisor-optimized ARM platforms become available at a reasonable cost.