TrueNAS in VM or not?

Finallf

Dabbler
Joined
Jul 27, 2023
Messages
10
Today I have two servers: an i7-4790K with 32GB of RAM, running TrueNAS + Docker with 20 containers.
The other machine is an i7-3770 with 16GB of RAM, running XCP-ng with 4 VMs.

Here where I live the price of energy is very high, so I'm thinking about putting XCP-ng on the i7-4790K and running TrueNAS as a VM with 16GB, passing the disks directly to the TrueNAS VM via PCI passthrough.

What do you think of this? Will I regret it?
Or is it worth trying?
 

nickspacemonkey

Dabbler
Joined
Jan 13, 2022
Messages
22
I'm running an identical CPU and memory config lol (4790K). I've toyed with the idea of this, but no, don't. It's not recommended, and when you run into issues, help will be hard to find. I've found it best to just leave the NAS as-is: standalone, no apps running on it. Tune the ARC to use 22GiB of your RAM:
Code:
echo 23622320128 >> /sys/module/zfs/parameters/zfs_arc_max
It's safe to do so now.
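For reference, that number is just 22GiB expressed in bytes, and a sysfs write does not survive a reboot. On SCALE you can reapply it at boot with a post-init script (System Settings > Advanced > Init/Shutdown Scripts); a minimal sketch, assuming the standard OpenZFS module path:
Code:
# 22 GiB in bytes: 22 * 1024^3 = 23622320128
ARC_MAX=$((22 * 1024 * 1024 * 1024))
# Live OpenZFS tunable; resets on reboot, so run this as a post-init task
echo "${ARC_MAX}" > /sys/module/zfs/parameters/zfs_arc_max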

Can I ask what VMs you are running and why? Is it not possible to run those VMs using TrueNAS as the hypervisor? Ignoring my ARC tuning suggestion ofc.

I personally like the separation of the NAS from the rest of my host machines, which are all low-power NUCs/Pis running different things like Jellyfin, Plex, Home Assistant, etc. (through Docker ofc), all of which backs up to the NAS.
 

Finallf

Dabbler
Joined
Jul 27, 2023
Messages
10
Can I ask what VMs you are running and why? Is it not possible to run those VMs using TrueNAS as the hypervisor? Ignoring my ARC tuning suggestion ofc.
These are for web hosting management:
ISPConfig, DNS1 and DNS2, and one more for the DB and the clients' page files.
 

melloa

Wizard
Joined
May 22, 2016
Messages
1,749
I tested it all: bare metal, FreeBSD and bhyve for my VMs, VMware with the *NAS VM, then back to bare metal after I got a better server for my VMs. Bottom line: I never had a problem virtualizing it. I did notice better performance accessing my data on bare metal vs. ESXi. In any circumstance, I always did my tests on a test box first ...
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
These are for web hosting management:
ISPConfig, DNS1 and DNS2, and one more for the DB and the clients' page files.

Are these paying customers? The first thing that comes to mind is running client systems on old, dated hardware...
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
[..] passing the disks directly to the TrueNAS VM via PCI passthrough.
This is contradictory. Do you want to pass through the disks or the controller?

What do you think of this? Will I regret it?
In this combination, yes. It is one of those topics that, for me, fall into the category of "if you need to ask, you should not do it".

To put things into perspective: I am also suffering from high energy prices, and the fact that I run my NAS non-virtualized costs me around 300-400 euros per year. But I am willing to swallow that because of the increased risk and operational effort that virtualization would bring.

Or is it worth trying?
Well, risking business-critical data (at least that is how I interpret this thread) is not something to play around with.

The critical part of the whole discussion is this: as long as things go smoothly, everything is fine, and virtualization is OK too. But what happens if there is a small fluctuation in the voltage of your electrical power? What if a cable is not connected perfectly and moves a little when a heavy truck drives by your house? What if that movement triggers some sort of issue?

If you run into issues, it can be orders of magnitude more difficult to diagnose the root cause on a virtualized system. If, in addition, you need help from others (like this forum), that makes things even more difficult. You basically need to decide whether that is a risk you are willing to accept.
 

Finallf

Dabbler
Joined
Jul 27, 2023
Messages
10
Thank you everyone for your advice and experiences, they were all taken into consideration.

In the meantime, I got a second machine to do a test, practically a clone of my main one. I did all the tests virtualizing TrueNAS; I had some problems getting PCI passthrough going, but I managed to make it work.

Everything worked as expected, without major problems; I didn't experience significant performance losses, maybe 10~15%.
I worked with it for a month in total, under stress tests and extensive copies.

I simulated some problems, such as power outages and sudden shutdowns, in addition to removing one of the 4 HDDs and replacing it with a new one, to see whether I would lose data and how long ZFS would take to resilver.

Well, what I can say is that it works, but the troubleshooting time was always much more than double, not to mention that you never know whether the problem occurred in TrueNAS, in the VM/virtualization layer, or in the hardware itself, which ends up being boring, tiring, and time-consuming until you find out exactly where the problem is.

The verdict is that I'm going to stay on bare metal.

That said, the evolution of virtualization is really good; I believe that in a few more years we could easily run like this, just passing the disks/PCI through to the VM.
But not today, due to the cost and time spent on maintenance and fixing problems.

Well, that's it!
 

Davvo

MVP
Joined
Jul 12, 2022
Messages
3,222
Troubleshooting always sucks, but nailing the issue can be pretty satisfying. Thanks for reporting your experience; could you please elaborate a bit more on the difficulties you encountered?
 

Finallf

Dabbler
Joined
Jul 27, 2023
Messages
10
Troubleshooting always sucks, but nailing the issue can be pretty satisfying. Thanks for reporting your experience; could you please elaborate a bit more on the difficulties you encountered?
Basically, my two biggest problems were:
First, passing certain devices through to the VM, like my UPS manager, which is USB. I tried a lot and couldn't; the PCIe passthrough part is very good, but when you try to pass a USB device to the VM, the fight starts.
This made me very nervous, and it was the decisive factor for me to remain on bare metal for the moment.

Second was the timing management for starting the VMs: if they are not started in the correct order, the shares simply will not come up on their own after a restart.
To resolve this, you have to start the SMB and NFS shares manually, or create a script that checks the VMs (or just waits a fixed time) so that everything works; something like the sketch below.
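A rough sketch of the kind of script I mean, run inside TrueNAS after boot (the pool name "tank" is a placeholder, and "cifs"/"nfs" are the service names the TrueNAS SCALE middleware uses; check them on your version):
Code:
#!/bin/sh
# Wait until the pool is imported, then kick the shares back to life.
until zpool list tank >/dev/null 2>&1; do
    sleep 10
done
midclt call service.restart cifs
midclt call service.restart nfs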

The second problem wouldn't even be that critical, as it is easily overcome as I said, but the first is quite complicated, and from what I studied, depending on the hardware used, there is no easy solution.
 

MrGuvernment

Patron
Joined
Jun 15, 2017
Messages
268
For USB, perhaps a PCIe USB card would work best: pass the entire card through to the VM that needs it...
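XCP-ng also has native USB passthrough these days (since 7.3, if I remember right). Roughly like this from the host shell; just a sketch, with the UUIDs being placeholders you read from the xe output:
Code:
# List physical USB devices and note the UPS's PUSB uuid
xe pusb-list
# Mark that device as eligible for passthrough
xe pusb-param-set uuid=<pusb-uuid> passthrough-enabled=true
# Attach it to the TrueNAS VM via its auto-created USB group
xe vusb-create vm-uuid=<truenas-vm-uuid> usb-group-uuid=<usb-group-uuid>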

You can usually set the startup order on the hypervisor and configure how long to wait between VMs starting up and shutting down... normally...
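On XCP-ng, if I remember right, the VM fields for this are order and start-delay (honored by vApp/HA start sequences); a sketch with example values and placeholder UUIDs:
Code:
# TrueNAS boots first; the sequence then waits 120s before the next VM
xe vm-param-set uuid=<truenas-vm-uuid> order=0 start-delay=120
# App VMs come up afterwards
xe vm-param-set uuid=<app-vm-uuid> order=1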
 