ESXi Datastore Location Question

Status: Not open for further replies.

ViciousXUSMC

Dabbler
Joined
May 12, 2014
Messages
49
I currently have FreeNAS installed bare metal on my Dell R710.
It works like a charm. However, I can't get the performance I need in my VMs: trying to run an NVR system, I can't get the CPU in FreeNAS over 20%, but inside the VM it is maxed out at 100%.

So I am thinking of converting to an ESXi setup and virtualizing my FreeNAS install, then offloading all my VMs (and maybe my jails) to ESXi VMs to better allocate the resources my server has available.

Setting aside the usual "should you virtualize FreeNAS" conversation, my question is actually about the datastore to be used for all the VMs.
Most all-in-one builds have ESXi on a USB stick and a local drive of some sort that holds the FreeNAS VM's datastore; from there, all the other VMs are usually created on zpool storage shared back out from the FreeNAS VM itself.
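For reference, the "share back to ESXi" step in those builds boils down to exporting a dataset from the FreeNAS VM over NFS and mounting it on the host as a datastore. A rough sketch of what I understand that to look like (the IP, dataset path, and datastore name below are just placeholders, not from any actual build):

  # On the FreeNAS VM: create a dataset (e.g. tank/vmstore) and share it over NFS,
  # mapping root so ESXi can write to it.

  # On the ESXi host (with SSH enabled): mount that export as an NFS datastore
  esxcli storage nfs add --host=192.168.1.50 --share=/mnt/tank/vmstore --volume-name=freenas-vms

  # Confirm the datastore shows up
  esxcli storage nfs list

From there the remaining VMs get created on the "freenas-vms" datastore instead of local disk.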

I was wondering, though: is that layout really best practice? If my array is a 6-disk RAIDZ2 setup already running Plex, torrents, general data storage, and an NVR, would the constant I/O of several VMs thrash those disks and possibly result in bad performance? Or can the array soak up that workload without issue, making it the best practice because my VMs would then get all of the redundancy and speed of my array?


I think the two setup options I have are:
500 GB SSD holding ALL VM OS drives
6x 8 TB HDD RAIDZ2 for all data drives
-or-
250 GB SSD for the FreeNAS VM
6x 8 TB HDD RAIDZ2 for all other VMs and all data drives

I may even spring for a second RAID controller so I can have hardware RAID for ESXi and do a RAID 1 SSD setup for the VM OS datastore.
My H200 will be passed through with VT-d (PCI passthrough) to feed the drives directly to FreeNAS for the zpool.
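Once that passthrough is in place, I figure the sanity check would look roughly like this (device names are just examples, not my actual layout):

  # On the ESXi host: confirm the H200 (LSI SAS2008-based) is visible to the host
  esxcli hardware pci list | grep -i LSI

  # Inside the FreeNAS VM after boot: the disks behind the HBA should appear directly
  camcontrol devlist

  # And the existing pool should be visible for import
  zpool import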

Thoughts, Opinions, Real World Adventures?
 

BigDave

FreeNAS Enthusiast
Joined
Oct 6, 2013
Messages
2,479

ViciousXUSMC

Dabbler
Joined
May 12, 2014
Messages
49
That is actually one of the threads I had read before, but I really do not see the information I was seeking in it. It was more of a write-up on building an ESXi box in general, without specifics on the performance potential or best-practice considerations of having dedicated VM storage versus bundling it into your FreeNAS storage.

Of course, with the right setup you could do both together and have more than one array in FreeNAS, but my current hardware configuration does not allow for that.
 

Chris Moore

Hall of Famer
Joined
May 2, 2015
Messages
10,080

ViciousXUSMC

Dabbler
Joined
May 12, 2014
Messages
49
Chris Moore said:
Here is another example:

Build Report: Node 304 + X10SDV-TLN4F [ESXi/FreeNAS AIO]
https://forums.freenas.org/index.ph...node-304-x10sdv-tln4f-esxi-freenas-aio.57116/

Have you seen this one?

No, I have not. Reading it now.
Very nice build, and really expensive too!

I see the idea of using a SLOG; maybe that is something I can look into. I know the new Intel Optane drives were making some headlines, but I would need to work around the M.2 form factor.
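If I did go down the SLOG road later, my understanding is that adding one to an existing pool is a one-liner; a rough sketch, with "tank" and the device name as placeholders for whatever I actually end up with:

  # Attach a fast, power-loss-protected SSD as a separate log (SLOG) device
  zpool add tank log /dev/da6

  # Verify it appears under the "logs" section of the pool
  zpool status tank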

Another interesting thing I got out of that thread was that he was able to boot his FreeNAS VM in ESXi, but also create a clone of the VM and boot it bare metal.

That means I can do the reverse: take my bare-metal install and convert it to a VM without needing to export all my data and rebuild everything from scratch. I have about 15 TB of data right now, so I have nowhere to offload it :)

Most likely I will install and boot ESXi from USB, create my FreeNAS VM on my SSD, do the necessary basic configuration (like VT-d PCI passthrough), and then once FreeNAS is up and running either try to import my volume and see if it finds it without issue, or just import my old configuration file.
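Roughly, I expect that import step inside the new FreeNAS VM to be one of these (pool name "tank" is a placeholder for mine):

  # List pools visible on the passed-through disks
  zpool import

  # Import the existing pool by name (the GUI volume import does the same job)
  zpool import -f tank

  # Or restore the saved config from the old bare-metal install through the web UI
  # (System > General > Upload Config) and let it pick the volume back up.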

I am going to lose my SSDs as FreeNAS storage; they currently hold my VMs and will be moved into ESXi as a datastore, so there will be a bit of loss in the conversion, but if it works it should be worth it.

My only concern right now is losing redundancy for the VMs if they are on a single SSD; I wish ESXi had a way to mirror like RAID 1 without relying on hardware RAID.
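One thing that softens that worry for the FreeNAS VM itself: it is a small, easily re-created install as long as the config database gets backed up onto the redundant pool, so it is really only the other VMs that need datastore redundancy. A rough sketch of the backup I would script (path and pool name are placeholders):

  # From a shell on the FreeNAS VM: copy the config database onto the RAIDZ2 pool
  cp /data/freenas-v1.db /mnt/tank/backups/freenas-config-$(date +%Y%m%d).db

The other VMs could always live on storage shared back from the RAIDZ2 pool if the single-SSD risk turns out to be a deal breaker.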
 