Proxmox w/ TrueNAS vs TrueNAS SCALE?

oguruma

Patron
Joined
Jan 2, 2016
Messages
226
Looking to build a single do-it-all box for a small business. The box would serve as both a file-storage NAS and a platform for a handful of VMs.

I like the idea of being able to do it all from TrueNAS SCALE, having a single UI to handle everything.

That said, Proxmox has a better UI for handling VMs (the ability to snapshot a VM and its metadata at the same time is particularly nice).

So, I'm trying to decide between doing everything with SCALE versus using Proxmox and putting TrueNAS in a VM for managing the NAS side of things.

Anybody have any input on this?
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Proxmox is an immature hypervisor and its PCIe passthru is still considered experimental. Running TrueNAS under Proxmox is not recommended and has been problematic for some users. This may not be a thing to do if you value your data.

Do consider following the guidelines posted at


The age of the article does not change the wisdom of the advice.
 

rungekutta

Contributor
Joined
May 11, 2016
Messages
146
Beg to differ. Proxmox is quite mature and has been around since 2008. You get HA and clustering built on battle-tested open-source technologies, and you can get professional support with a 2-hour response time. Passthrough has officially been experimental for many years, but for what it's worth it has worked as well for me as, or better than, it did in ESXi, which I used previously. One of Proxmox's problems is that it tries to support a very wide range of hardware, in contrast to ESXi which, as we know, is extremely picky.

Also note that OP is asking about Proxmox vs TrueNAS Scale. The latter isn’t even out of beta yet.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Beg to differ. Proxmox is quite mature and has been around since 2008.

Beg all you want. Doesn't change facts. Proxmox's own support pages tell the truth.


Introduction

[...]
Note:

PCI passthrough is an experimental feature in Proxmox VE!

This is a boring conversation and I'm not interested in finding the exact date that this was added, but it was added sometime around 2018 IIRC, and it wasn't really considered stable enough to be potentially usable until 2019-2020.

By way of comparison, ESXi has been doing this for much longer, and it is not considered particularly experimental. It's stable on most Supermicro systems Sandy Bridge or newer, and on most servers generally Haswell or newer, and has been since ESXi 4. That puts it at a minimum of a decade of maturity. I may not be the right person to be arguing with, as I'm one of the people who've specialized in virtualization support on these forums during that time, so I've actually watched the evolution (or lack thereof), and have talked with a LOT of people who have done a lot of things. My strong impression is that Proxmox isn't quite ready for prime time when it comes to handling TrueNAS.
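For reference, the wiki page quoted above boils down to a few host-side steps. A rough sketch follows; the PCI address 01:00.0 and VM ID 100 are placeholders, and exact flags vary by Proxmox version:

```shell
# 1. Enable the IOMMU in /etc/default/grub (Intel example; recent AMD kernels
#    enable amd_iommu by default):
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
update-grub && reboot

# 2. Load the VFIO modules (add them to /etc/modules to persist across reboots)
modprobe -a vfio vfio_iommu_type1 vfio_pci

# 3. Confirm the IOMMU groups are populated; an empty listing means
#    passthrough is not going to work on this hardware
find /sys/kernel/iommu_groups/ -type l

# 4. Pass the HBA at 01:00.0 through to VM 100 (placeholder IDs)
qm set 100 -hostpci0 01:00.0
```

Whether the device then behaves correctly inside the guest is exactly the part the wiki flags as experimental.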

Also note that OP is asking about Proxmox vs TrueNAS Scale. The latter isn’t even out of beta yet.

Correct, but I think you missed the context of the question. The point is that it'd be better to do your VMs directly on Scale, because doing the NAS stuff with Scale as a VM on top of Proxmox would be dodgy.
 

rungekutta

Contributor
Joined
May 11, 2016
Messages
146
You recommend running VMs on Scale, which has not even reached release status yet, rather than on Proxmox, which was released in 2008, on the basis of Proxmox being immature?

And yes, I’m aware of the pitfalls of running TrueNAS virtualized. And yes, I realize that if you run Scale you avoid that part altogether, but other than that I really struggle to see how Scale could be recommended as a hypervisor, *particularly* in the context of platform maturity.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Scale could be recommended as a hypervisor *particularly* in the context of platform maturity

Ah, I see your confusion.

Scale and Proxmox both use KVM as the actual hypervisor, and ZFS for storage.

It helps to understand the history of hypervisors. I don't have anything particularly thorough written up, but one of my clients is an OnApp customer and I remember this article pretty well:


It describes the evolution of both Xen and KVM in some detail, and provides some comparison and contrast. Unfortunately it does not include ESXi.

KVM is a type 2 hypervisor that has, over perhaps the last five years, made significant headway and pulled ahead of Xen, which many previously considered the "second place" winner in the hypervisor wars behind ESXi. Both Xen and ESXi, however, are "type 1", for arguable values of what that means.

Now, your confusion probably stems from the assumption that because Scale hasn't even seen a release, its hypervisor capabilities must be equally untested and questionable. However, since KVM is mature, just as ZFS is mature, these underlying technologies can be acknowledged to be relatively stable, known quantities. I wouldn't be afraid of Scale or Proxmox losing a ZFS pool, for example. They're all using the same underlying code.

I am fine with conceding that Proxmox has more polished VM handling, as it was designed to be a hypervisor platform.

However, Proxmox lacks filesharing capabilities. Let's give Proxmox a 10/10 for virtualization and a 0/10 for filesharing. Total score, 10/20.

So, Scale has impressively powerful and well-polished multiprotocol filesharing capabilities, but kinda weedy support on the VM front. There's nothing particularly dangerous about the VM support, since it is based on KVM, but it is more clumsy and less complete than Proxmox. Let's give Scale a 10/10 for filesharing and a 4/10 for virtualization. Total score, 14/20.

Scale wins.

Oh, I didn't give Proxmox any credit because you can do filesharing in a VM on top? Well, fine. However, we've had enough people come through here with Proxmox issues that I can no longer give it a 10/10; charitably I'll give it an 8/10 for virtualization, and I'll give Scale-as-a-VM only a 4/10 because of all the wasted space and poor performance. Layering ZFS on top of ZFS is going to be a bad thing, and yet it's what a lot of users will want to do. That's still a total score of only 12/20.
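Tallying those back-of-the-envelope numbers (which are, to be clear, the subjective ratings from this post, not measurements of anything):

```python
# Subjective 0-10 ratings from the post: (virtualization, filesharing)
scenarios = {
    "Scale on bare metal":       (4, 10),
    "Proxmox alone":             (10, 0),
    "Proxmox + TrueNAS as a VM": (8, 4),
}

# Print each scenario's combined score out of 20
for name, (virt, files) in scenarios.items():
    print(f"{name}: {virt + files}/20")
```

Scale on bare metal comes out ahead at 14/20, with Proxmox-plus-virtualized-TrueNAS at 12/20 and Proxmox alone at 10/20.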

So, in the context of the OP's question, I feel it's a reasonable answer.
 

rungekutta

Contributor
Joined
May 11, 2016
Messages
146
Thanks. No confusion. I’m very much aware that Proxmox and now also TrueNAS (Scale) are based on Debian (11) and KVM. But there’s more to a hypervisor than that, just as there’s more to TrueNAS Core than FreeBSD and ZFS. To start, Proxmox pulls in its own kernel (in turn based on Ubuntu’s) and various other packages too, and builds a whole system around them, just as Scale does, with its own features and potential bugs. But I think all has been said, and there's plenty of info in this thread for OP to hopefully make an informed decision.
 

rungekutta

Contributor
Joined
May 11, 2016
Messages
146
(I could add as well that the Type 1 vs Type 2 discussion in the link you provided is largely moot. KVM runs inside the kernel, and there is no OS between KVM and the hardware emulating hardware services, so to all intents and purposes it is Type 1, even though the VMs run as processes in the OS (Type 2?). I.e., that terminology is obsolete.)
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
(I could add as well that the Type 1 vs Type 2 discussion in the link you provided is largely moot. KVM runs inside the kernel, and there is no OS between KVM and the hardware emulating hardware services, so to all intents and purposes it is Type 1, even though the VMs run as processes in the OS (Type 2?). I.e., that terminology is obsolete.)

We had that discussion just a few weeks ago.


KVM most definitely does have an emulation layer which sits between the VM and the hardware; true type 1 hypervisors (by the classic definition, not the "modern" contortionist one) would not be using virtual disks or virtual ethernets, which are abstractions offered by the hypervisor stack. The classic argument comes down to "what is sitting directly on the bare metal", and depending on who is twisting words to mean what, you could argue that KVM, as a module of the Linux kernel with direct access to the bare metal, qualifies as type 1, yet by providing userland-grade services such as access to filesystems, it presents itself more as a type 2. We started muddying the waters about 1 vs 2 the moment it became convenient to oversubscribe and share resources like disk.

You could still get to something that looks like a true type 1 by using SR-IOV VFs for network access and for storage controllers, and running on a minimalist Linux busybox with KVM. ESXi would very much like to think of itself as such, but it still has all the other crap in there. Essentially, the hypervisor and kernel modules on these things run alongside each other with significant interdependencies, and "which sits on the hardware, the hypervisor or the kernel?" is like the age-old question, "which came first, the chicken or the egg?"...
 