I'm just going to dive on in with my $.02 here:
virtio-fs (or virtfs, as @ptyork refers to it) might not have been around for very long (since 2018, according to this presentation from Stefan Hajnoczi, Senior Principal Software Engineer at Red Hat: slides (pdf)), but it looks like it was built on top of virtio-9p, and according to that same presentation, active development of 9p ceased in 2012.
Thus, to the question of "is it production ready?" - I don't know which companies are actively using this, but given that it is developed by the folks at Red Hat, I would guess that there aren't many reasons why it can't be used in production.
Again, referencing the same presentation, plus a presentation that Stefan gave at FOSDEM a year later (video), he also talks about use cases beyond a home lab/NAS environment, which might suggest that either they are using it themselves or that they have clients who are.
I read through the thread here.
And whilst I understand why some people would most definitely WANT to keep the VMs completely separated and isolated from each other, all the way down to the CPU cache level, there are other use cases (both inside and outside of the homelab environment) where companies and/or people might want to use this.
I can't speak for the companies, so I'll narrow down the scope by speaking for myself:
I'm currently in the middle of a massive migration project where I am consolidating 5 different and separate NAS servers down to one.
Right now, I am using Oracle VirtualBox as my "hypervisor" (running on top of a Windows 11 host on a Beelink mini PC). The VirtualBox VM disks are hosted on one of my QNAP NAS units.
Media is stored on another (which also runs Plex) and applications run on another, etc. etc. etc. You get the idea.
So, right now, this massive migration project (it's massive to me) is to pull all of that up into a single 36-bay 4U server, where all of those things that I used to do with my QNAP NAS systems will now run as VMs.
And then on top of that, I'm also virtualising my gaming tower systems, with GPU passthrough.
In my homelab environment, all of the systems talk to the various NAS systems fairly regularly, so they all talk to each other.
When I migrate the QNAP systems over, I COULD set up an NFS and/or SMB/CIFS share and have it route through the virtio NIC (which is a 10 Gbps NIC), but if I can SKIP the network stack entirely, things should run faster thanks to more direct access to all of the data that the hypervisor host -- i.e. TrueNAS SCALE -- is now hosting.
It's a NAS first and the hypervisor is built on top of it so that I can create one ZFS pool (which also leads to better overall utilisation of my available hard drive space vs. it being dispersed into 5 separate NAS servers), and then have the VMs interact with said ZFS pool directly.
This is where virtio-fs comes into play.
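(For anyone unfamiliar with it: once the host exports a directory over virtio-fs, the guest sees it as a mountable filesystem identified by a tag, with no IP networking in the path at all. A minimal sketch of the guest side, assuming a share that was exported under the placeholder tag `media`:)

```
# Inside a Linux guest (kernel 5.4 or newer): mount the share by its tag.
# "media" and /mnt/media are placeholder names for this example.
mount -t virtiofs media /mnt/media

# Or make it permanent via /etc/fstab:
# media  /mnt/media  virtiofs  defaults  0  0
```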
As a part of this massive server migration project, I've been testing out xcp-ng, Proxmox, and TrueNAS SCALE.
xcp-ng can't do virtio-fs at all (despite its CentOS origins), so that got eliminated.
Proxmox can do both virtio-fs and GPU passthrough. Neither is exactly straightforward, but there are enough forum posts that you can eventually cobble together the "How To" deployment instructions.
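For anyone attempting the same thing, the recipe that those forum posts converge on looks roughly like this (the directory, socket path, tag, VMID, and memory size below are placeholders rather than my exact values, and newer Proxmox releases may expose this more cleanly):

```
# On the Proxmox host: run a virtiofsd instance for the directory to share
# (one daemon per VM/share; paths and the "media" tag are placeholders)
/usr/libexec/virtiofsd --socket-path=/run/virtiofsd-101.sock \
    --shared-dir=/tank/media --cache=auto &

# Then in /etc/pve/qemu-server/101.conf (i.e. the <<VMID>>.conf file),
# hand that socket to QEMU. vhost-user devices also need the guest RAM
# backed by shared memory, and the size= here must match the VM's
# "memory:" setting. All of this goes on a single "args:" line:
args: -chardev socket,id=char0,path=/run/virtiofsd-101.sock -device vhost-user-fs-pci,chardev=char0,tag=media -object memory-backend-memfd,id=mem,size=8G,share=on -numa node,memdev=mem
```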
TrueNAS SCALE made GPU passthrough practically trivial, but because it gives me no direct way of interacting with the ZFS storage pool that it set up, the lack of virtio-fs support meant that, unfortunately, TrueNAS SCALE eliminated itself from this competition between the different solutions for my server consolidation/migration project.
I was hoping that, since I was already learning how to enable virtio-fs in Proxmox, I would be able to take the <<VMID>>.conf files and reuse some of what's in there to enable virtio-fs in TrueNAS SCALE. And that's when I smacked into the wall of the websocket API protocol.
Whilst I understand that there are use cases where you wouldn't WANT to allow this to happen, the way the current hypervisor is set up in TrueNAS SCALE makes it very difficult to enable virtio-fs if you aren't a programmer/developer who understands how the websocket API works (which I, unfortunately, don't).
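(Just to illustrate what "the websocket API" means in practice: on SCALE you can poke the middleware from a shell with `midclt`, e.g. to list the VMs it manages, but stitching virtio-fs devices into a VM definition through that interface is beyond me:)

```
# TrueNAS SCALE middleware (websocket) API, called from the shell;
# this just lists the VMs that the middleware knows about:
midclt call vm.query
```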
So, for those of us who want to consolidate multiple servers down to a single server like this, enabling virtio-fs (or at least giving us the option to enable it, if we want to) would make TrueNAS SCALE an even easier platform to use than Proxmox.
In my testing of Proxmox, I had to create the ZFS pool via the CLI myself (because if you do it through their GUI, you can only put certain types of content in it, for some inexplicable reason). And then I also had to set up the iSCSI target via the CLI as well, along with an NFS export and an SMB/CIFS share, roughly as sketched below.
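For context, that CLI legwork looked something like this (the pool name, disks, network range, and paths here are placeholders from memory, not a copy-paste of my exact setup):

```
# Create the ZFS pool by hand (placeholder pool/disk names):
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# NFS export, via /etc/exports:
echo '/tank/share 10.0.0.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# SMB/CIFS: add a [share] stanza to /etc/samba/smb.conf, then:
systemctl restart smbd

# iSCSI: carve out a zvol and export it as a block backstore
# (plus the usual target/LUN/ACL steps, which I'll spare you here):
zfs create -V 500G tank/vm-disk
targetcli /backstores/block create name=vm-disk dev=/dev/zvol/tank/vm-disk
```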
These are all of the things that TrueNAS is already GREAT at doing.
Now, if I could just access a ZFS pool made of NVMe SSDs at speeds faster than 10 Gbps (because that's the fastest speed that the virtio NIC allows), that would make TrueNAS SCALE amazing.
And from reading the forum posts, it would appear that I am not the only one who wishes that TrueNAS (SCALE) enabled virtio-fs.
(And from my testing with it in Proxmox, it seems quite stable; I haven't noticed any major issues with it so far. I still need to test out the Windows regkey edit that Xiaoling Gao (virtio-fs team, Red Hat) sent to me so that I can get my Windows 10 VM to "auto-mount" the virtio-fs share on something OTHER than the Z: drive.)
But beyond that, for the systems where it works, it's been working great! I was getting, I think, somewhere around 236 MB/s writes in an Ubuntu VM running on Proxmox, interacting with a virtio-fs share on the Proxmox host backed by a single HGST 3 TB SATA 3 Gbps HDD (Core i7-6700K, Asus Z170-E motherboard, 64 GB of DDR4-2400 RAM).