ESXi Question


kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Two 840 EVOs will be a mirrored vdev in FreeNAS, passed to ESXi for other guests to boot from.
I take it they will be their own pool? A vdev is part of a pool. Once you have the new pool you will still need to make a zvol. This is the block device that will be shared using iSCSI.
One guest will for sure be Ubuntu for automation/syncthing/openvpn/plex.
Linux VMs are relatively lightweight. Feel free to break this up. That way you can reboot and patch one VM without taking down all of your services.
Ubuntu will boot from 840 Evo vdev
Remember, a vdev is part of a pool and a zvol is carved out of a pool.
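If it helps to see the moving parts, the GUI does all of this for you, but underneath it boils down to a couple of ZFS commands, roughly like this (the pool name, device names, and zvol size below are just placeholders):

  # one pool built from a single mirrored vdev (your two EVOs)
  zpool create ssd mirror /dev/ada1 /dev/ada2

  # a zvol carved out of that pool - this is the block device the iSCSI extent points at
  zfs create -s -V 200G ssd/esxi-guests

  # it then shows up as /dev/zvol/ssd/esxi-guests for the iSCSI target to use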

I can't comment on the exact hardware choices, but otherwise dive in and don't expect it to go perfectly the first few times. ;)
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
I take it they will be their own pool? A vdev is part of a pool. Once you have the new pool you will still need to make a zvol. This is the block device that will be shared using iSCSI.

Linux VMs are relatively lightweight. Feel free to break this up. That way you can reboot and patch one VM without taking down all of your services.

Remember, a vdev is part of a pool and a zvol is carved out of a pool.

I can't comment on the exact hardware choices but otherwise dive in and don't expect it to go perfectly the first few times. ;)

Hmm, I guess you're right. The SSDs would be their own pool. My bad. Once again, terminology... I don't do this enough.

I didn't realize I had to carve out zvols for iSCSI passthrough, but that makes sense.

Thankfully, I do have my OG i7 920 rig as an open test bench on my desk, so I can play around with it there before I go full scale, at least to get the basics down.

Thanks for all the help and insight!

[Photo of the open test bench]



 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457

It's not passthrough, it's just over the network. ;)
Cool bench build!
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525

I’ll get it one of these days ¯\_(ツ)_/¯.

And thanks. Still works like a champ even though I overclocked it to the moon(ish) for the first 5-6 years of its life. Such a great motherboard/CPU setup that was.


 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
@kdragon75 so now that I have accepted I will be playing with iSCSI, I have another question, and if my terminology is wrong, well, would you be surprised? Lol.

So, iSCSI is lower level than SMB/NFS etc. Does this inherently allow for a faster, lower-latency connection? My reason for asking is once again syncthing. Would it make sense to run that network share over iSCSI instead of SMB? I am also thinking of using iSCSI for Steam library overflow on my gaming rig, because once you iSCSI, why not iSCSI all the things?

Now I will have to work out on FreeNAS if I can share that volume (zvol?) via both iSCSI AND SMB, since my Windows PC will want access to the syncthing data and I don't think two machines can access the same iSCSI volume (zvol?). If not, like you said, I can always create the SMB share within Ubuntu instead. Ideally I think it would be nice if both that Ubuntu VM and my Windows rig could connect to my /photo directory via iSCSI, but SMB has worked so far, and since I am fairly sure that isn't possible, it's not a huge deal.


 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
Think of each iSCSI share as a hard drive. It will need to be formatted and used as such. This also means only one VM/host/server can use it at any given time, unless it's a clustered file system like VMFS. With that said, you can still share the files from that disk using NFS/SMB/whatever you like.
if I can share that volume (zvol?) via both iSCSI AND SMB
Nope. For the reason above, you can't have a hard drive plugged directly into more than one machine (you can, but let's not go down that rabbit hole). But once Ubuntu mounts an iSCSI drive and formats it as ext3/4/btrfs/whatever, you can use SMB to share the folders on the filesystem on the iSCSI drive. (Technically it's a LUN, but eh... we have enough new terminology here. A LUN, or Logical Unit Number, is a number that identifies a logical unit of storage. Welcome to software-defined storage.)
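For what it's worth, the Ubuntu side of that is only a handful of commands with the open-iscsi package. The IP and target name below are made up, so substitute your own:

  # discover and log in to the FreeNAS target
  sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.50
  sudo iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:vmstore -p 192.168.1.50 --login

  # the LUN appears as a plain disk (say /dev/sdb) - format and mount it like any local drive
  sudo mkfs.ext4 /dev/sdb
  sudo mount /dev/sdb /mnt/vmstore

  # from there it's just a local filesystem, so Samba can share folders off it normally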

Personally, if you already have all your data on a dataset in FreeNAS, I would just use an SMB share mounted to Ubuntu. You shouldn't mix SMB and NFS for the same files unless one is read-only, as you will eventually end up with corrupted data. The two share systems don't see or honor each other's locks, and you may have a case where a file syncs as you save from Windows, or some other odd combination.
To be honest there is no one right way to do it. It all depends on how you want to manage your data and how tied to a VM you want it. I think of VMs as disposable and generally keep application data outside of them or at least on a separate VMDK that I can "plug" into another VM if needed.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Think of each iSCSI share as a hard drive. It will need to be formatted and used as such. This also means only one VM/host/server can use it at any given time, unless it's a clustered file system like VMFS. With that said, you can still share the files from that disk using NFS/SMB/whatever you like.

Got it.

Nope. For the reason above, you can't have a hard drive plugged directly into more than one machine (you can, but let's not go down that rabbit hole). But once Ubuntu mounts an iSCSI drive and formats it as ext3/4/btrfs/whatever, you can use SMB to share the folders on the filesystem on the iSCSI drive. (Technically it's a LUN, but eh... we have enough new terminology here. A LUN, or Logical Unit Number, is a number that identifies a logical unit of storage. Welcome to software-defined storage.)

Personally, if you already have all your data on a dataset in FreeNAS, I would just use an SMB share mounted to Ubuntu. You shouldn't mix SMB and NFS for the same files unless one is read-only, as you will eventually end up with corrupted data. The two share systems don't see or honor each other's locks, and you may have a case where a file syncs as you save from Windows, or some other odd combination.
To be honest there is no one right way to do it. It all depends on how you want to manage your data and how tied to a VM you want it. I think of VMs as disposable and generally keep application data outside of them or at least on a separate VMDK that I can "plug" into another VM if needed.

OK, makes sense. I guess I am just worrying about the overhead of SMB more than I need to be. 10Gb networking really isn't very slow.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
@LIGISTX
I fully understand that you have a ton of questions about ESXi, and I would advise you to try out VMware Workstation Player (free) to get familiar with VMware VMs, if you haven't already used VMware.

As for all your other concerns, take it in small steps. My advice comes from my home use of ESXi over the past several years and VMware Workstation Pro for the past 8 years, and I really like ESXi.

1) All of your VMs need to be stored on a datastore; it is best to put all your main VMs on an ESXi datastore drive, not an iSCSI or FreeNAS shared device. For example, I run FreeNAS, Sophos Firewall, and Ubuntu all the time, so these VMs are stored on the bootable SSD datastore that ESXi boots from.

2) If you run VMs from a FreeNAS-provided datastore, then you MUST shut down the VMs using those datastores before you can shut down your FreeNAS VM. Trust me, you want this worked out.

3) Plan your RAM and CPU usage. RAM usage is hugely important; while ESXi will swap RAM out, doing so slows things down.

4) I would not worry about SLOG or any other aspects at this time; these can easily be added later if needed, but in the beginning it just adds something else that can fail or complicate things when troubleshooting a problem.

5) We here promote passing through the entire controller (HBA); however, you can pass through individual drives, and I currently do that on my backup FreeNAS system (all of my systems run ESXi) without issue. The instructions are fairly easy, but passing an entire HBA is much nicer/cleaner to manage. I'm not the only one here passing individual drives, just probably the only one who will tell you it's very possible and stable.

6) Don't forget about the very important UPS and ensuring that your system will shut down properly upon power failure. This is not as easy as you might think, well, unless things have changed. Remember, if you are running VMs located on a FreeNAS datastore then you must shut down those VMs first; once those are shut down you can shut down the FreeNAS VM.

7) Virtual network switches and such are fairly easy to set up and are very fast internally. Use the VMXNET3 NIC instead of always using the E1000 NIC, but the E1000 is very compatible.

8) For FreeNAS you can start with 8GB of RAM and then increase it if you need to; do not assume you need more RAM unless you are running iSCSI, and then you need lots of RAM, like 32GB or more (just my opinion). Also, your pool design is very important; RAM alone only gets you so far, and the speed of the pool is also a huge factor.

Take a look at my system specs, I have more than enough power/resources to do just about anything I want to do.
 

kdragon75

Wizard
Joined
Aug 7, 2016
Messages
2,457
1) All of your VMs need to be stored on a datastore; it is best to put all your main VMs on an ESXi datastore drive, not an iSCSI or FreeNAS shared device. For example, I run FreeNAS, Sophos Firewall, and Ubuntu all the time, so these VMs are stored on the bootable SSD datastore that ESXi boots from.
I get why one may want this, and I would agree as budget permits, but it's not worth spending much on.
2) If you run VMs from a FreeNAS-provided datastore, then you MUST shut down the VMs using those datastores before you can shut down your FreeNAS VM. Trust me, you want this worked out.
YES THIS. In ESXi you can set an auto start order and timers. This is reversed for shutdown. Assuming you have VMware tools installed on all of your guests this should work smoothly.
3) Plan your RAM and CPU usage. RAM usage is hugely important; while ESXi will swap RAM out, doing so slows things down.
I won't go too deep here, but keep your total configured RAM under 130% of your physical RAM (e.g. with 32GB physical, keep the sum of all VMs' configured RAM under roughly 41GB). If you have a bunch of the same OS running in VMs, you should look into Transparent Page Sharing for all VMs. This can save TONS of RAM. Also, when configuring your VMs, configure the ABSOLUTE minimum number of cores/vCPUs per VM to do the job. More is almost always slower.
5) We here promote passing through the entire controller (HBA); however, you can pass through individual drives, and I currently do that on my backup FreeNAS system (all of my systems run ESXi) without issue. The instructions are fairly easy, but passing an entire HBA is much nicer/cleaner to manage. I'm not the only one here passing individual drives, just probably the only one who will tell you it's very possible and stable.
If doing something like this, the disk in question will not be available for anything but the VM. You should also use physical RDM mode. This will pass the raw SCSI/SATA commands to the disk. On a related side note, RDMs are NOT faster than VMDKs and do have some limitations in clustered systems.
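If anyone wants to try the individual-drive route, the usual approach as I understand it is to create a physical-mode RDM pointer with vmkfstools from the ESXi shell and attach that .vmdk to the FreeNAS VM as an existing disk. The device and datastore names here are only examples:

  # find the disk's identifier
  ls /vmfs/devices/disks/

  # create a physical-mode (pass-through) RDM pointer file on an existing datastore
  vmkfstools -z /vmfs/devices/disks/naa.5000c500a1b2c3d4 /vmfs/volumes/datastore1/freenas/freenas_disk1.vmdk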
6) Don't forget about the very important UPS and ensuring that your system will shut down properly upon power failure. This is not as easy as you might think, well, unless things have changed. Remember, if you are running VMs located on a FreeNAS datastore then you must shut down those VMs first; once those are shut down you can shut down the FreeNAS VM.
So you would want to have that startup/shutdown order set up to address this. As for triggering the shutdown, I'm guessing some hackery with NUT would do the job.
7) Virtual network switches and such are fairly easy to set up and are very fast internally. Use the VMXNET3 NIC instead of always using the E1000 NIC, but the E1000 is very compatible.
This topic is VAST and complicated. I won't get into it here.
8) For FreeNAS you can start with 8GB of RAM and then increase it if you need to; do not assume you need more RAM unless you are running iSCSI, and then you need lots of RAM, like 32GB or more (just my opinion). Also, your pool design is very important; RAM alone only gets you so far, and the speed of the pool is also a huge factor.
I would think 32GB is plenty to give 12 to FreeNAS and the remaining 20 to play with a few small Linux VMs. More is almost always better. Keep in mind a few extra GB of ARC go a long way for a few small VMs.
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
So you would want to have that startup/shutdown order set up to address this. As for triggering the shutdown, I'm guessing some hackery with NUT would do the job.
Yup, NUT was the trick and it required a bit of research for me to get it to work properly.
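For anyone who finds this later, the rough shape of that kind of setup is a NUT client watching the UPS and a SHUTDOWNCMD script that tells the ESXi host to go down. This is only a sketch, not my exact config: the IP, credentials, and host name are placeholders, and it assumes key-based SSH to the host plus the autostart/autostop order above handling the guests first, so test it before trusting it.

  # /etc/nut/upsmon.conf on the machine acting as the NUT client
  MONITOR ups@192.168.1.5 1 upsmon secretpass slave
  SHUTDOWNCMD "/usr/local/bin/esxi-shutdown.sh"

  # /usr/local/bin/esxi-shutdown.sh - ask the ESXi host to power off
  #!/bin/sh
  ssh root@esxi-host "poweroff"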
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
I guess my biggest question now is why boot my VMs from a datastore external to FreeNAS? Would it not be wise to go the iSCSI route for the sole reason that my VMs' boot volumes will be backed up via FreeNAS/ZFS? Or is the argument that the hardware required for non-crappy performance + the hassle isn't worth it (obviously "worth it" is up to the specific person)?

I am also curious if my planned 28GB of RAM will suffice. I believe it would; FreeNAS seems to be plenty happy with 20GB now, and that's with a few jails. I have a feeling running it at 16GB, leaving 12 for ESXi/other VMs, would be fine. I mostly just plan to use Ubuntu Server; I don't know what its RAM needs are, but I won't be doing much at all with the VMs, and if there is an easy way to strip them down I will look into it.


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I also need a SAS expander since my HBA only has two SAS ports. Would an Intel RES2SV240 work well for this? Both SAS ports from HBA to expander, then the remaining 4 SAS ports on the expander to 4x SATA for 16 total drives available to FreeNAS via PCIe passthrough from ESXi to FreeNAS. 10x4TB and two 120GB 840 EVOs for ESXi VM boot storage.

This doesn't quite sound right.

Each of the SAS ports is probably a 4x multi-lane SAS port, so actually you have 8 SAS ports. That's enough to directly drive 8 drives (SATA or SAS); you just need the right cable/backplane.

If you wanted to connect more than 8 drives to your HBA, then you would need a SAS expander.
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
I guess my biggest question now is why boot my VMs from a datastore external to FreeNAS? Would it not be wise to go the iSCSI route for the sole reason that my VMs' boot volumes will be backed up via FreeNAS/ZFS? Or is the argument that the hardware required for non-crappy performance + the hassle isn't worth it (obviously "worth it" is up to the specific person)?

That's the biggest reason. If you just want FreeNAS to be a NAS and store data, then hardware-wise the simplest thing is to just use an SSD in ESXi and store all your virtual disks on that. You do have an issue with how to back up the VMs, though. And perhaps the easiest way to solve that is to solve the backup problem individually inside the VM... so, for example, I use the Veeam backup agent in my Windows VM to back up to an SMB share... which is hosted on FreeNAS. It's the same backup solution I use for all my Windows machines.

Alternatively, you host the VM disks on a FreeNAS NFS or iSCSI mount. Then you have to deal with the performance issues that this generates, but on the positive side, you get ZFS-backed VM disks.

I settled on using iSCSI for my VM disks for the most part, and then I use NFS for ISOs etc. that I want to mount to ESXi. The benefit of NFS is you can see the file system hierarchy directly on the pool, i.e. you can cd/ls inside it from inside FreeNAS etc... so I can just mount my ISOs dataset via NFS to ESXi... BUT iSCSI is more opaque, as basically you're just mounting a disk image over iSCSI... vs mounting a dataset.
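If you go the same way, attaching the NFS dataset as an ESXi datastore is a one-liner from the host shell (the GUI datastore wizard does the same thing; the hostname, export path, and volume name here are just examples):

  esxcli storage nfs add --host=freenas.local --share=/mnt/tank/isos --volume-name=isos
  esxcli storage nfs list   # confirm it mounted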
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
This doesn't quite sound right.

Each of the SAS ports is probably a 4x multi-lane SAS port, so actually you have 8 SAS ports. That's enough to directly drive 8 drives (SATA or SAS); you just need the right cable/backplane.

If you wanted to connect more than 8 drives to your HBA, then you would need a SAS expander.

Correct, I have two SAS->4xSATA plugs now, but I have 10 4TB drives, and may plan on using two more SSDs for VMs if I go the iSCSI route.


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Hmm, I guess you're right. The SSDs would be their own pool. My bad. Once again, terminology... I don't do this enough.

I didn't realize I had to carve out zvols for iSCSI passthrough, but that makes sense.

Thankfully, I do have my OG i7 920 rig as an open test bench on my desk, so I can play around with it there before I go full scale, at least to get the basics down.

Thanks for all the help and insight!

[Photo of the open test bench]




Interesting desk :)
 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
Correct, I have two SAS->4xSATA plugs now, but I have 10 4TB drives, and may plan on using two more SSDs for VMs if I go the iSCSI route.



The Intel RES2SV240 would be a good SAS expander for this case.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
Interesting desk :)

I mean, it's on my desk now. lol. A grass desk would be interesting tho........ Probably super not useful, as things would get lost all the time ;)
 

joeschmuck

Old Man
Moderator
Joined
May 28, 2011
Messages
10,995
I guess my biggest question now is why boot my VMs from a datastore external to FreeNAS? Would it not be wise to go the iSCSI route for the sole reason that my VMs' boot volumes will be backed up via FreeNAS/ZFS? Or is the argument that the hardware required for non-crappy performance + the hassle isn't worth it (obviously "worth it" is up to the specific person)?
Good question. While many of us can tell you the pros/cons of each way to go about this, I think it's best for you to determine this on your own through your personal use. But I will offer my two cents on this.

I'm still assuming that you want to run FreeNAS on ESXi? If yes, then you must have a datastore to place the FreeNAS VM on, so you need at least one local datastore. You could place this all on the ESXi USB flash drive, assuming it fits, but it will be slow. I personally have an SSD datastore that ESXi is installed on, and it houses FreeNAS and other VMs. These are my main VMs that I use all the time. I have also set up iSCSI and SMB shares on FreeNAS to offer up storage for other datastores for VMs that I want to play around with. I could establish long-term VMs this way, and depending on the hardware configuration, these could be very fast VMs, faster than a single SSD datastore. The problem comes when you shut down your computer, be it manually or automatically due to a power failure when your UPS is telling you it's time to power down. Embedded VMs like this can be a real pain to shut down properly to ensure you do not introduce data corruption. Just because you are using FreeNAS does not mean you can't cause issues with your data.

So since I've shut things down improperly many times and I can't seem to learn from my mistakes, running all my VMs on datastores works great for me.

As for backing up my VMs, I've been using XSIBackup-Free, which will back up my VMs to my datastore drives, and to a FreeNAS datastore if I desire, but I don't do that anymore. XSIBackup-Free isn't perfect, but it works for me so far.

As for RAM usage, keep in mind that ESXi uses RAM and CPU resources too, just not a terrible amount.
 

LIGISTX

Guru
Joined
Apr 12, 2015
Messages
525
I'm still assuming that you want to run FreeNAS on ESXi? If yes, then you must have a datastore to place the FreeNAS VM on, so you need at least one local datastore. You could place this all on the ESXi USB flash drive, assuming it fits, but it will be slow. I personally have an SSD datastore that ESXi is installed on, and it houses FreeNAS and other VMs. These are my main VMs that I use all the time. I have also set up iSCSI and SMB shares on FreeNAS to offer up storage for other datastores for VMs that I want to play around with. I could establish long-term VMs this way, and depending on the hardware configuration, these could be very fast VMs, faster than a single SSD datastore. The problem comes when you shut down your computer, be it manually or automatically due to a power failure when your UPS is telling you it's time to power down. Embedded VMs like this can be a real pain to shut down properly to ensure you do not introduce data corruption. Just because you are using FreeNAS does not mean you can't cause issues with your data.

So since I've shut things down improperly many times and I can't seem to learn from my mistakes, running all my VMs on datastores works great for me.

That's a fair enough point. I do plan to use an SSD for ESXi, so I can throw a few VMs on it as well. I have two 120GB EVOs I plan to dedicate to this project; if my mobo supports RAID 1, I may do that with them so at least my ESXi and VMs have redundancy.


 

Stux

MVP
Joined
Jun 2, 2016
Messages
4,419
if my mobo supports RAID 1

Motherboard RAID is actually software RAID supported by drivers in Windows/Linux etc. and will probably not work in ESXi.
 