One server - VMWare & TrueNAS

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
Hi All,

There are a lot of old threads about members using VMware ESXi with FreeNAS/TrueNAS installed in a VM that has direct access to the HBA/disks.
Is this still an option today? I'm assuming it is, but I'm wondering whether people doing it today can comment on my would-be setup.
The system will be:
  • Fractal Design Node 804 Black Micro ATX Case, Window, No PSU
  • Corsair CV550 550W Bronze Power Supply
  • Intel M10JNP2SB Motherboard
  • Intel Xeon E-2226G
  • Noctua NH-L9i Low Profile Intel CPU Cooler
  • Kingston 16GB 2666MHz ECC Unbuffered DDR4 X 2
  • Crucial BX500 120GB 2.5" SATA SSD X 2
  • Western Digital WD Red / Red Plus NAS 8TB HDD, WD80EFAX X 8
  • LSI Internal SAS SATA 9211-8i
So basically, my thinking is to mirror the two Crucial 120GB SSDs and install VMware ESXi on them.
This will also house the default datastore where I will install the TrueNAS VM, either passing through the HBA or using RDM (pros & cons?) for a RAIDZ2 pool.
I think I'll start out with 2 vCPUs and 16GB of RAM for the TrueNAS VM. It will mainly be pure file storage (Nextcloud), plus Plex and a few other things.
Is this a valid setup? In theory I don't think there is anything wrong with it, but I'm just wondering whether this is being done in the community currently or if there are any 'gotchas' I need to be aware of.
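In case it helps frame the question, my understanding of the RDM route (happy to be corrected; the paths and device names below are just placeholders) is that you create a raw device mapping pointer file per disk on the datastore and attach those to the VM, roughly:

  ls /vmfs/devices/disks/    # find each disk's identifier
  vmkfstools -z /vmfs/devices/disks/<naa.disk-id> /vmfs/volumes/datastore1/truenas/disk1-rdm.vmdk    # physical-mode RDM pointer

whereas with passthrough you would just toggle the 9211-8i under the host's PCI devices and hand the whole controller to the VM, so TrueNAS sees the disks (and their SMART data) directly.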

Thanks.
Terry
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
LSI Internal SAS SATA 9211-8i
Maybe get the LSI 9207-8i instead since it supports PCIe 3.0 (as does your board) whereas 9211 supports PCIe 2.0
 

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
Maybe get the LSI 9207-8i instead since it supports PCIe 3.0 (as does your board) whereas 9211 supports PCIe 2.0
Unfortunately that is the only piece of hardware that I already have :smile:
I don't believe I will come anywhere close to saturating the PCIe 2.0 slot with my 8 disks anyway, so it shouldn't be too much of an issue.
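Rough numbers: PCIe 2.0 is about 500 MB/s per lane, so the card's x8 link is good for roughly 4 GB/s, while eight WD80EFAX drives at around 200 MB/s sequential each only add up to about 1.6 GB/s.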
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Unfortunately that is the only piece of hardware that I already have :smile:
I don't believe I will come anywhere close to saturating the PCIe 2.0 slot with my 8 disks anyway, so it shouldn't be too much of an issue.
No, it shouldn't. If you already have it, then yes, just use it.
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
I'm not too sure how you plan to mirror the ESXi boot disks. As far as I know, there is no software mirroring in the free license for ESXi 6.7.

Maybe spend a bit more on capacity for the boot disks; you may want to start other VMs in parallel with FreeNAS, like pfSense or something else. I would propose increasing the 120GB to 500GB or so.

For the VM storage that I pass back to ESXi, I took SSDs in order to have a fast and smooth working environment.
Maybe check whether it is more suitable to go with 6x 10TB drives plus 2 SSDs for a VM pool.

Depending on your needs, RAM could be increased. I have 64GB and give 48GB to FreeNAS.
Feel free to check my signature; our setups are probably not too different.
 

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
I've already bought the hardware. The board supports software RAID, so I'm assuming I can build a mirror with the 2 SSDs. As far as 'startup' VMs go, it would most likely just be TrueNAS and maybe pfSense, so I think the ~100GB should suffice.

If I can get some bigger SSDs on the day without waiting a ridiculous amount of time, I may do that when I go to pick up the parts. I need to stick with the storage disks I have, though, as the price jumps up considerably compared to what I paid on Black Friday.

How do you find the performance? Do you pass through the HBA or the individual disks via RDM to the TrueNAS VM? Anything I need to worry about?
 

John Doe

Guru
Joined
Aug 16, 2011
Messages
635
Performance on FreeNAS is maxing out the network on both read and write, and the VMs are very smooth.
I am happy with the performance.

I passed through the entire HBA with 6x 10TB drives and 2x 1TB SSDs to FreeNAS and created 2 pools.

There is nothing in particular to worry about.
Just make sure you have enough time to set up all the networks and basic stuff in ESXi before installing VMs, and just don't give up.

e.g. I have many different networks; sometimes they are connected, sometimes independent.
For instance:
LAN network
WAN network
virtual LAN network for NFS storage passed back to ESXi
DMZ network for servers
network for internal storage, e.g. Plex's access to shares with holiday videos

I hope to increase security with this.

Lessons learned for me: draw a map of your networks, IPs and MAC addresses, and create them all up front. It is easier to deactivate them later than to change the virtual switches (at least for me it was a pain to modify things afterwards).
MAC addresses are important for identifying the different virtual NICs within the VM; otherwise you spend a lot of time pinging.
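For matching them up, inside the FreeNAS VM something like the line below lists each virtual NIC with its MAC address, which you can compare against the adapters shown in the VM's settings in ESXi (the interface names depend on the virtual NIC type you chose):

  ifconfig | grep -E 'flags=|ether'    # interface names plus their MAC addresses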

Without any prior knowledge of ESXi it took me 3 complete days, and I was very close to just dropping the idea, but it is worth spending the time.
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Hey @Tabmowtez,

You listed your power supply twice. Will you have 2 of them, or is it a mistake and you have only 1? At 550W, I would say it is an average one: more powerful than a 300W unit, but some are bigger. What is important is to be sure that it will be powerful enough for your 10 drives.

With 32G of RAM, you are well over the minimum of 8 required by FreeNAS. Still, for a pool of 48T of usable space, that may end up on the low side. For that reason, I would go with FreeNAS on metal here. ESXi needs some RAM for itself, and a VM takes more RAM than a jail. With FreeNAS on metal and as many services as possible running as jails instead of VMs, you would save significant RAM. In all cases, be sure to run as few VMs as possible.

The same principle applies to CPU time. You have only 6 cores. Do you know about CPU Ready? When a VM has X vCPUs configured, ESXi cannot run it until all of them are free and ready to work, even if the VM does not actually need them all at that moment. If you run a FreeNAS VM with 2 vCPUs, ESXi must keep it waiting until 2 CPUs are free before giving it any time. Depending on how many VMs you run at once, CPU Ready can end up costing you a lot of time and a lot of performance. With jails, that would not happen.

It is up to you to do as you wish, but with 6 cores and 32G of RAM, I would do FreeNAS on metal + jails, with VMs as a last resort, instead of ESXi + FreeNAS VM + more VMs.

Have fun with your setup,
 

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
Sounds good. My setup will be a lot simpler and more straightforward, I think, but I will do some mapping before I go and configure everything. The last time I tinkered with VMware was in the 5.x days when I was certified; I know it has come a long way since, but I'm sure a lot of the principles are the same, so it shouldn't take me too long to get everything sorted. That's good news performance-wise, I'll see how I go.

Only 1 PSU, that was a typo. I ran the numbers and it should be plenty for what I need. I am thinking about upgrading to 64GB of RAM straight off the bat, which should give me a lot of future-proofing in TrueNAS and some more headroom for VMs.

I'm not too worried about resource scheduling in VMware. I'm not going to be running anything that requires heavy processing power, and I doubt I will need to oversubscribe the CPU to the point where it becomes a problem.
 

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
Not sure if it is blasphemous to mention it here, but I was doing some reading up on Proxmox, and I wonder if that is a valid alternative as well. I've always been more of a BSD person than a Linux person, but seeing how ZFS support has converged, and since I'm using Linux more day to day in my $work life, it may be an option to look into for what I need. Does anyone have any experience with it?
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
Does anyone have any experience with it?

Every time Proxmox is mentioned, people are reminded that FreeNAS is way better on metal than virtualized, and that ESXi is the only hypervisor for which there is a minimum level of community experience and knowledge about running FreeNAS.

I would not do something like virtual FreeNAS on Proxmox for anything other than tests. And even there, why would I test something I do not trust enough for my prod?
 

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
Every time Proxmox is mentioned, people are reminded that FreeNAS is way better on metal than virtualized, and that ESXi is the only hypervisor for which there is a minimum level of community experience and knowledge about running FreeNAS.

I would not do something like virtual FreeNAS on Proxmox for anything other than tests. And even there, why would I test something I do not trust enough for my prod?
From what I can tell, you wouldn't need to run TrueNAS at all if you used Proxmox, as it supports ZFS out of the box. That's what I was thinking anyway, if I were to PoC it out at some point.
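As far as I can tell (the pool name and disk IDs below are just placeholders on my part), you would build and register the pool on the Proxmox host itself, something like:

  zpool create -o ashift=12 tank raidz2 /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2> ...    # all eight disks by ID
  pvesm add zfspool tank-vm --pool tank    # register it as VM/container storage

with the caveat that Proxmox has no NAS-style sharing UI, so SMB/NFS shares would have to be set up by hand.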
 

Heracles

Wizard
Joined
Feb 2, 2018
Messages
1,401
You can run ZFS from something other than FreeNAS, that's for sure. Here, I chose FreeNAS so I can manage it with TrueCommand, because each of my backend servers is just that, only a backend server, and because I like having the administration interface to configure snapshots, replication and more.

Should you wish to run ZFS from Proxmox, you will be better supported on another forum, though...
 

Inxsible

Guru
Joined
Aug 14, 2017
Messages
1,123
Not sure if it is blasphemous to mention it here, but I was doing some reading up on Proxmox,
Nothing blasphemous about it. You should use the tools that best serve you and your use case.

I use Proxmox and recommend it if someone is more into VMs and the like rather than data storage. FreeNAS/bhyve is just not where Proxmox and ESXi are in terms of virtualization/containerization.

I had a FreeNAS server with a few plugins and eventually I wound up with a Proxmox server and moved all but the Emby jail over to Proxmox containers.

I also built a Proxmox server for a friend of mine who didn't have or want a separate server for data storage. So his Proxmox server now serves as a VM server and also has a separate RAIDZ2 array which is used for his data storage needs.
 

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
It's definitely interesting; the difference between ESXi/TrueNAS vs. Proxmox is really just the one extra VM doing the storage if I go with the former.
Having never really used Proxmox, I'm somewhat reluctant to give it a try on my main server, but once I migrate my data off my existing NAS I might re-purpose that as a Proxmox server and see how I go.
 

ChrisRJ

Wizard
Joined
Oct 23, 2020
Messages
1,919
I've already bought the hardware. The board supports software RAID, so I'm assuming I can build a mirror with the 2 SSDs. [..]
Unfortunately, you can't. These things require drivers and usually only work for non-boot drives, and only on Windows. And even if that limitation did not exist, the reliability question would still be open. Using the "software RAID" that comes with a motherboard is simply asking for trouble. You will find plenty of additional details using Google et al.
 

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
Unfortunately, you can't. These things require drivers and usually only work for non-boot drives, and only on Windows. And even if that limitation did not exist, the reliability question would still be open. Using the "software RAID" that comes with a motherboard is simply asking for trouble. You will find plenty of additional details using Google et al.
Ah, true, thanks for this information. I checked the Intel site and indeed it only has drivers for Windows. I guess my options are buying a hardware RAID card or just installing on a single disk.
 

Herr_Merlin

Patron
Joined
Oct 25, 2019
Messages
200
If you build a RAID1 for your ESXi boot OS, as well as for any VMs that have to start prior to TrueNAS, that's fine.
But ESXi will only recognize "proper" RAID chipsets, so you might need to add a RAID controller.
Additionally, I would put the TrueNAS boot VMDK on that RAID1 and also plug in a USB drive, so you can mirror the boot drive from within the TrueNAS guest. That gives you the option to take the USB drive along with the HBA and its disks and recover the system on pretty much any hardware...
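Roughly what that in-guest mirroring boils down to (the GUI somewhere under System > Boot will do the partitioning for you; the device names here are only placeholders, and on FreeNAS/TrueNAS CORE of this era the boot pool is called freenas-boot, if I remember right):

  zpool status freenas-boot                # confirm the boot pool and its current member
  zpool attach freenas-boot da0p2 da1p2    # attach the USB stick's partition as a mirror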
 

Tabmowtez

Dabbler
Joined
Nov 12, 2020
Messages
36
So, I gave myself a Christmas present and upgraded my CPU and bumped the RAM to 64GB. I've run some memtest passes, upgraded my LSI card to the latest IT-mode firmware, and am waiting to get down to the shop for some Molex-to-SATA power cables so I can power all 8 disks.

I will say that the whole process of upgrading the BIOS & BMC on this Intel board was not fun compared to Supermicro. It was finicky at best, and the BMC remote control is kind of weird; there is no default user enabled either, which means you need console access, which is kind of stupid in my opinion. It's all sorted now at least.

ESXi is installed and running, as well as the TrueNAS VM, and I've kicked off some SMART tests on the 4 disks that I currently have power to. I'll run some further tests and badblocks on all 8 disks once they all have power and are working.
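For reference, the burn-in I have in mind is roughly this per disk (device names are examples, and badblocks -w is destructive, so it all happens before the pool is created):

  smartctl -t long /dev/da0         # long SMART self-test
  smartctl -a /dev/da0              # review the results once it finishes
  badblocks -ws -b 4096 /dev/da0    # full-surface write/verify pass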

Thanks everyone for the input.
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
I'm not too worried about resource scheduling in VMware. I'm not going to be running anything that requires heavy processing power, and I doubt I will need to oversubscribe the CPU to the point where it becomes a problem.
It's not so much processing usage as time slots, and it does get somewhat complicated. Open up the Windows client (unless they have added it to the web interface in later versions) and look for co-stop. It is basically how long a VM with multiple vCPUs has to wait to run because other stuff was using some part of the CPU it needed. While the scheduler is more lenient about allowing skew nowadays, it is still something that can bork performance if you use too many cores.
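You can also watch it live from the host shell with esxtop (it starts in the CPU view; press 'c' to get back to it). The %RDY and %CSTP columns per VM are ready time and co-stop, and a commonly cited rule of thumb is to keep %RDY under roughly 5% per vCPU.

  esxtop    # CPU view: %RDY = ready time, %CSTP = co-stop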

I use 1-core VMs as much as I can get away with to make things easier.

As for boot, you really don't need RAID1 as long as you can redo the ESXi config easily enough, or save it (and you want backups for anything on those drives anyway).
You can put two SSDs in there and have an FN boot drive on each to keep that safer.
 