Virtualize TrueNAS Core

Joined
Mar 27, 2014
Messages
5
Dear all,

I would like to know if it makes sense to virtualize TrueNAS Core inside ESXi so that the (virtualized) TrueNAS provides a datastore back to the same ESXi host.

Or if it's better to use one big ESXi datastore, virtualize TrueNAS on it, and have TrueNAS share out storage for external services from virtual ESXi disks.

Or if it's better (again) to run TrueNAS on separate hardware and present NFS/iSCSI datastores to ESXi.

Sorry for my limited experience, and thanks for your ideas and opinions.
 

Patrick M. Hausen

Hall of Famer
Joined
Nov 25, 2013
Messages
7,776
This resource might prove helpful:

Summary: the first option is the way to go, but there are certain constraints you need to consider. Never put TrueNAS data on virtual disks. As a boot device they are of course OK - you have to bootstrap somehow.
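
A quick sanity check that the passthrough is doing its job (a sketch; device names will vary): from the TrueNAS Core shell, the disks should show up as real hardware with readable SMART data, which virtual disks won't give you.

Code:
# From the TrueNAS Core (FreeBSD) shell; da0 is a placeholder device name.
camcontrol devlist     # drives should show their real vendor/model, not "VMware Virtual disk"
smartctl -a /dev/da0   # SMART attributes should be readable on every pool disk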
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
The first option is what most people here (including me) went with. Just be sure to do it the proper way to avoid a lot of pain and tears later on. I myself use Proxmox, but ESXi is actually the one recommended by iXsystems themselves, as well as by the above-posted guide.
 
Joined
Mar 27, 2014
Messages
5
Thanks, everyone,
but what I meant was: apart from whether it is technically possible (I/O passthrough and so on), is it really worthwhile to virtualize TrueNAS and then re-share its storage back to ESXi, where the VMs would be stored?

So, as a hypothesis, the scenario would be this:
- a bare-metal server, a bunch of RAM, an HBA with a certain number of HDDs, and a device acting as the ESXi boot device and small datastore;
- ESXi booting from the small dedicated device;
- the TrueNAS VM stored on the local ESXi datastore (I mean the same device ESXi boots from);
- the TrueNAS VM boots and, through passthrough, has access to the bunch of disks mounted in the server;
- TrueNAS creates a pool with these disks;
- the pool thus created is shared via iSCSI or NFS back to the same ESXi that is running TrueNAS;
- ESXi creates/runs VMs inside the pool shared by TrueNAS.

Aren't there contraindications in terms of performance and the security of the data and/or files of the VMs?
The first problem I see is that ESXi can't see the main datastore until TrueNAS has finished booting...

Apart from consolidation, I don't see any other advantages, but given my experience, I'm most likely wrong...

Wouldn't it be better to go with the classic approach of a separate ESXi server and TrueNAS storage?

Thanks again!
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
It's possible but inconvenient. See warning point #4:


Since you already noted this,

The first problem I see is that ESXi can't see the main datastore until TrueNAS has finished booting...

you have a better than average chance of working out the hackery needed to make this work. iSCSI works somewhat better than NFS.
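
For what it's worth, the usual shape of that hackery is a startup script on the ESXi host that waits for the TrueNAS-backed datastore to appear and then powers on the VMs stored on it. A rough, unsupported sketch; the datastore name and VM ID are placeholders (look up real IDs with vim-cmd vmsvc/getallvms):

Code:
#!/bin/sh
# Appended to /etc/rc.local.d/local.sh on the ESXi host.
# "tank-ds" and VM ID 42 are placeholders for your datastore and your VM.
while [ ! -d /vmfs/volumes/tank-ds ]; do
    sleep 30
    esxcli storage core adapter rescan --all   # re-probe the iSCSI target TrueNAS exports
    vmkfstools -V                              # rescan for VMFS volumes
done
vim-cmd vmsvc/power.on 42                      # start a VM living on the TrueNAS datastore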

a bunch of RAM,

Be aware that you need north of 64GB RAM to do a halfway decent job of this; 64GB will get you a handful of VM's, but you might need more like 128GB or even more if you are doing a substantial amount of I/O.


Wouldn't it be better to go with the classic approach of a separate ESXi server and TrueNAS storage?

It would certainly be easier to maintain and manage separate server and storage instances, yes. The other thing is, depending on your environment and VM inventory, going with a high quality local RAID controller for ESXi DAS storage may just be a better option. A Dell PERC H740P is about $300 and two 4TB Samsung Evo's are about $400 each, meaning you can have 4TB of RAID1 high performance read-optimized SSD storage for about $1100.

Most people who are trying for a hyperconverged TrueNAS/ESXi platform seem to be homelabbers who might not have more than a single hypervisor, and the onerous resource requirements for ZFS ARC and pool free space, plus the "all traffic runs over IP networking" thing, are a real performance killer. Now, if you have some old X9 generation gear and you can find RAM at $469 for 512GB, and other similar deals, then by all means, the economics might be all different for you. But ESXi 8 is deprecating Sandy/Ivy Bridge support, so beware the hidden future costs of a hyperconverged virtualized TrueNAS instance.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I was in a bit of a rush earlier, and wanted to circle back around to this. My own opinion (and since I've written lots of the docs on this, it should be clear I've been doing this for quite some time) is that it is far better to virtualize TrueNAS for general filesharing purposes than for hyperconverged VM hairpin storage.

Maybe it's just me 'cuz I'm old and no longer enjoy hefting 150# servers into racks by myself, but over the years I've kind of settled into building on the Supermicro SC826BA style chassis. This allows you to put in an LSI 3008 to handle 8 of the hotswap bays, and either mainboard SATA or mainboard PCH SCU to handle the other four, which means that you can actually have one or two virtualized TrueNAS hosts on the platform. This makes it easy to do independent primary, backup, and offsite filers with separate virtual instances, rather than playing the "how do we map replication targets" game.

For VM storage, as previously hinted at, I've had good luck with the Dell PERC controllers, and we started replacing the ESXi-deprecated LSI 6Gbps controllers with the PERC H740P about two(?) years ago. The trick is that the card load we have is pretty dense, with no full length cards, so we mount an M.2 SATA SSD module capable of holding 4 or 8 M.2 SSD's inside, behind the fan bulkhead. It looks like this:

[Attached image: upper-deck-storage.png]

This is a very old picture, as you can tell from the PERC H200, Solarflare SFN6122's, LSI 9271CV-8i, and Addonics cards (2x SATA, 1x NVMe on each card).

Our more modern card stack replaces the PERC H200 with an LSI 3008 (Supermicro AOC-S3008L-L8i), replaces the two deprecated Solarflares with a single quad-port Intel X710-DA4 (Supermicro AOC-STG-I4S), pulls the two Addonics cards and the quad M.2 module in favor of carefully redesigned Silverstone SDP11's stacked with standoffs, for a total of eight SATA M.2 SSD's in about the size of a half-height 5.25" HDD, and then, with the remaining three PCIe slots now freed, adds some nice Supermicro AOC-SLG3-2M2 cards.

This lets you get six nonredundant NVMe SSD's and eight SATA SSD's in RAID1 inside a 2U chassis IN ADDITION TO the twelve drives that are part of the NAS VM (/VMs).

It's mostly standard hardware except for the Silverstones, which need some finicky spacers to get the right spacing to stack. You do have to drill three holes to attach them as well, but if anyone cares, I do have a drilling template.

What? You say I'm crazy? Well, that's as may be...
 

Whattteva

Wizard
Joined
Mar 5, 2013
Messages
1,824
What? You say I'm crazy? Well, that's as may be...
Yeah, you're crazy alright.... for posting a full 9.8 MiB image on a forum post! It took me FOREVER and a day to load that crazy image!

I do appreciate the ability for me to zoom in on all the intricate sexiness of the cards and the board.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Yeah, you're crazy alright.... for posting a full 9.8 MiB image on a forum post! It took me FOREVER and a day to load that crazy image!

I do appreciate the ability for me to zoom in on all the intricate sexiness of the cards and the board.

Whattteva.
 

infraerik

Dabbler
Joined
Oct 12, 2017
Messages
24
Aren't there contraindications in terms of performance and the security of the data and/or files of the VMs? [...] Wouldn't it be better to go with the classic approach of a separate ESXi server and TrueNAS storage?
There are a number of use cases where this makes sense and a lot where it doesn't. From a pure performance perspective, there's no real advantage to doing it. However, if you want to leverage some of the features of ZFS, then it makes sense. For example, VMware snapshots are pretty heavyweight operations and you can only have 32 per VM, whereas you can take regular snapshots at the ZFS level if that is useful to you.

One use case (running for close to 10 years for me, across several server upgrades) is when you have a standalone ESXi server but also want data protection without investing in a VMware-specific backup solution. I have a hosted ESXi in a colo with this configuration; it takes hourly snapshots that are then replicated to my office. If the colocated machine dies, I can simply mount the datastore at the office on another ESXi host, register the VMs, and start them up.
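
(TrueNAS drives this through its periodic snapshot and replication tasks in the GUI, but at the raw ZFS level the machinery underneath looks roughly like this; the dataset, snapshot names, and hostname are placeholders.)

Code:
# Hourly snapshot of the VM datastore, sent incrementally to the office box.
zfs snapshot tank/vmstore@hourly-0900
zfs send -i tank/vmstore@hourly-0800 tank/vmstore@hourly-0900 | \
    ssh office-nas zfs receive -F backup/vmstore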

Of course you can get these benefits with a separate TrueNAS server for storage and other machines for hosting the virtual machines, but if you only have space or budget for one server, integrating everything can be helpful. This is definitely an edge case, but if you're in this situation it can make sense.
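
And the failover side of that, sketched from the office ESXi host once the replicated dataset is shared out again over NFS (hostnames, share path, and .vmx path are all placeholders):

Code:
# Mount the replicated share as a datastore, then register and start the VMs.
esxcli storage nfs add --host office-nas --share /mnt/backup/vmstore --volume-name vmstore-dr
vim-cmd solo/registervm /vmfs/volumes/vmstore-dr/somevm/somevm.vmx   # prints the new VM ID
vim-cmd vmsvc/power.on 7   # 7 stands in for the ID printed above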
 
Top