VMware Tools and FreeNAS 8


cyberjock

Inactive Account
You don't need to.
 

jgreco

Resident Grinch
Um, ... explain that answer?

You don't *have* to, but if you'd *like* to, it brings certain benefits to the VMware game, such as being able to manage your storage server's startup/shutdown more completely via the VMware system...
 

cyberjock

Inactive Account
Ok, I'll rephrase to be more accurate..

You can't.. easily.

The FreeNAS installation doesn't provide an option for adding the VMware tools. Not to mention the fact that the VMware tools are next to useless for FreeNAS. For instance, one of the developers mentioned in the forum months ago that the VMware emulated network driver is already included with FreeNAS. I'm not sure about anything else, but the general consensus among the developers that responded was not to whine about the VMware tools because they didn't add a sizable benefit.
 

jgreco

Resident Grinch
They DON'T add a "sizable" benefit, that much is true, but for those of us running vSphere, the ability to have central management is really nice. If you have it, then you don't have to train staff on how to handle this very special VM whose developers think that their stuff is so great that people should have to manually log into it to initiate a shutdown, which in the end just ends up impressing everybody with how poorly it interoperates.

It's about the only reason we run VMware tools on FreeBSD. The vmxnet driver is complete garbage - absolutely fantastic when it works (as in ~10Gbps!) but utter trash when you do something that would be considered normal with a regular interface and instead it decides to stop passing traffic. So we suffer the performance hit of using E1000 emulation and don't worry about that. I guess the graphics integration is supposed to be pretty awesome but of course that's not an issue for FreeNAS...
 

cyberjock

Inactive Account
Well, I would take the stance that FreeNAS works better when it has direct hardware access. Yes, if you are smart enough and know what you are doing you can pull it off. But the fact that FreeNAS makes it pretty clear that a USB stick is the recommended install medium pretty much proves the developers don't expect their customer base to be using ESXi.

I am of the opinion that if someone comes to the forum and starts asking a question about making FreeNAS work in ESXi they should not be doing it.

Especially as the jail becomes more prevalent and more people start putting more stuff in it, the argument of handing the "idle" CPU power to other VMs isn't going to be particularly compelling in an ESXi implementation.

Honestly, except for home use I'm not sure I'd EVER trust FreeNAS on ESXi alongside some other machines. Most businesses in their right mind that have a use case for ESXi are likely not using FreeNAS as a virtual machine. FreeNAS isn't for everyone. A lot of people would be completely content using either a spare desktop with a file share or an old Windows Server box for a file server. ESXi is also used by those few people that need a bunch of services that run on different OSes (or that you want to run on different OSes). So the number of people that want ESXi AND use FreeNAS is small. Developer resources are at a premium. Just look at the most recent support ticket I submitted.. I was hoping to have it fixed in early Nov but the developers have more important things to do. I'm okay with that choice too.
 

jgreco

Resident Grinch
Quite frankly, running on bare metal is becoming ever more quaint. We used to spend a lot of money running things like KVMoIP, remote power management, etc., wasting watts on individual servers, but when it comes right down to it, for operations both small and large, virtualization is largely where it's at.

I don't know what the USB stick statement proves. What I've seen is that USB sticks eventually fail, and they're often slow and annoying. Give me a 2.1GB vmdk on a RAID1 vmfs datastore any day and it's better and faster than the USB stick in just about every way.

As far as the jail goes, phk had a very specific vision for what was intended to run in jails, and I'm pretty sure the intended workloads were all things capable of running under FreeBSD. Jails are nice, but have very specific design constraints. There are some things you just can't run in a jail. For home users, Plex comes to mind.

As for performance, having just shoved a machine that was on the bench with a USB stick for FreeNAS debugging, into production as an ESXi host, um, really didn't see any noticeable performance change. Oh, wait, yes, it got faster under ESXi because every write to the config file wasn't forcing a storm of writes onto a slow USB stick... really, when you can shove PCI devices into a VM for direct access, talk about the "overhead" of virtualization sounds like just that... talk.

Seriously, there's some sort of performance hit for virtualization, but it isn't that great, especially if you do it right.
 

cyberjock

Inactive Account
I don't know what the USB stick statement proves. What I've seen is that USB sticks eventually fail, and they're often slow and annoying. Give me a 2.1GB vmdk on a RAID1 vmfs datastore any day and it's better and faster than the USB stick in just about every way.

So what you're saying is.. your machine is up in 60 seconds and mine is up in 2 minutes. Yours also potentially shuts down a little faster than mine. Totally worth virtualization for that :P

As for performance, having just shoved a machine that was on the bench with a USB stick for FreeNAS debugging, into production as an ESXi host, um, really didn't see any noticeable performance change. Oh, wait, yes, it got faster under ESXi because every write to the config file wasn't forcing a storm of writes onto a slow USB stick... really, when you can shove PCI devices into a VM for direct access, talk about the "overhead" of virtualization sounds like just that... talk.

You do realize that the config file is only written to when you change settings? My virtual machine has a config file that hasn't changed since I upgraded to 8.3. I have a cron job that backs up my config to my zpool every day at 2300. They are all exactly the same size and have the same SHA-256 checksum.
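
For what it's worth, here's roughly what that looks like as a sketch. The pool name "tank", the directories, and the script path are placeholders to adapt; /data/freenas-v1.db is where FreeNAS 8.x keeps its config database, but double-check that on your own build.

Code:
# Cron entry (added as a cron job in the GUI), runs daily at 23:00:
#   0 23 * * * /mnt/tank/scripts/backup-config.sh

#!/bin/sh
# Copy the config DB to the pool with a dated name, then record a SHA-256
# so identical copies are easy to spot later.
SRC=/data/freenas-v1.db
DST=/mnt/tank/configbackups/freenas-v1-$(date +%Y%m%d).db

cp "$SRC" "$DST"
sha256 "$DST" >> /mnt/tank/configbackups/checksums.txt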

Seriously, there's some sort of performance hit for virtualization, but it isn't that great, especially if you do it right.

My concern is less about the performance hit and more about the added complexity and troubleshooting of FreeNAS machines in ESXi. I've seen so many people start threads off with "I'm using FreeNAS in ESXi" and the thread ends with "damn.. I didn't enable that!" How long until a bunch of these people realize that their zpools are failing because they didn't know what they were doing? And the crappy part is they likely won't even figure out that it was their own undoing and blame ESXi or FreeNAS for their lost data.
 

jgreco

Resident Grinch
So what you're saying is.. your machine is up in 60 seconds and mine is up in 2 minutes. Yours also potentially shuts down a little faster than mine. Totally worth virtualization for that :P

Let's put this in a little context.

In 2005, we built some nice 1U storage servers: Opteron 240EE boxes (paid a hellish premium for the 30W EEs!) that, when loaded with 4 7200RPM drives, were protected by RAID5, had a spare disk, and could saturate a gigE. These units ate approximately 100W each.

Along comes 2012. We're rebuilding those. So I set up a nice X9SCi-LN4F, E3-1230, 32GB. Same general setup. They idle around 90W, but if pressed hard go up to around 150W. Box has to be always on. So I bring this online and stick it under vSphere in a cluster. Immediately it becomes clear that the resources are underutilized. Migrate some VM's. Keep going. More. More. Up to 100 watts. Oops. Look, we can actually turn off one of the OTHER VMware hosts, still only be around 40% CPU load, and be saving 70 watts that the other VMware host was burning.

So I could either be burning around 170 watts or around 100 watts to handle storage plus the VM's on those two hosts.

Which is the smarter way to go?

Do you think that this doesn't make a difference when applied to lots of gear?

You do realize that the config file is only written to when you change settings? My virtual machine has a config file that hasn't changed since I upgraded to 8.3. I have a cron job that backs up my config to my zpool every day at 2300. They are all exactly the same size and have the same SHA-256 checksum.

Yes, I do realize that. I also realize that our configs change a little more often than that. I *also* realize that when I am actively configuring the thing, it is EXTREMELY frustrating for an insanely fast machine with insanely fast storage to take multiple seconds to write out each change, because I'm making something like a hundred configuration settings. Seriously. I just got over my last round of that...

My concern is less about the performance hit and more about the added complexity and troubleshooting of FreeNAS machines in ESXi. I've seen so many people start threads off with "I'm using FreeNAS in ESXi" and the thread ends with "damn.. I didn't enable that!" How long until a bunch of these people realize that their zpools are failing because they didn't know what they were doing? And the crappy part is they likely won't even figure out that it was their own undoing and blame ESXi or FreeNAS for their lost data.

That seems like a poor reason not to support VMware Tools, especially if the vmxnet driver is included already.
 

cyberjock

Inactive Account
That seems like a poor reason not to support VMware Tools, especially if the vmxnet driver is included already.

It's not that I don't support it. It's just that the developers haven't given it much of a priority. I'd argue that if you are choosing to put multiple VMs on 1 physical machine you either are building an overpowered machine or you aren't too concerned with performance.

For most businesses, paying for the electricity is a small price compared to the cost of the technicians having to set up the hardware. Not to mention the cost of the hardware itself. While virtualization has certain potential benefits, for some situations it just isn't a great idea. FreeNAS wasn't designed with the expectation that it is a great idea to virtualize it. If you want good virtualized storage you are probably better off with a real RAID controller and a server OS that doesn't have compelling needs for direct access to the hardware.
 

jgreco

Resident Grinch
It's not that I don't support it. It's just that the developers haven't given it much of a priority. I'd argue that if you are choosing to put multiple VMs on 1 physical machine you either are building an overpowered machine or you aren't too concerned with performance.

The problem is that frequently you maintain capacity to be able to provide service under heavy load, but normal loads are substantially lighter. If you don't want performance to degrade when someone is shoveling stuff at you at gigabit speeds, but your average utilization is only a hundred megabits, do you size the server for the gig or the hundred?

VMware makes it easy to scale loads based on demand; vSphere comes with a feature called DRS which, with Distributed Power Management, can even go so far as to power hosts off and bring them back up when load requires it.

For most businesses, paying for the electricity is a small price compared to the cost of the technicians having to set up the hardware. Not to mention the cost of the hardware itself. While virtualization has certain potential benefits, for some situations it just isn't a great idea. FreeNAS wasn't designed with the expectation that it is a great idea to virtualize it. If you want good virtualized storage you are probably better off with a real RAID controller and a server OS that doesn't have compelling needs for direct access to the hardware.

When the real estate costs $1900/month for just one of your racks, a large fraction of which is due to power, you look to minimize rack footprint and power consumption.

I don't get what "a real RAID controller and a server OS that doesn't have compelling needs for direct access to the hardware" means, by the way.
 

cyberjock

Inactive Account
The problem is that frequently you maintain capacity to be able to provide service under heavy load, but normal loads are substantially lighter. If you don't want performance to degrade when someone is shoveling stuff at you at gigabit speeds, but your average utilization is only a hundred megabits, do you size the server for the gig or the hundred?

Nope, but you don't need a very powerful processor to saturate Gb speeds. My first-gen i3 can saturate two gigabit NICs simultaneously. Besides, we're talking about 30-50 watts at the most. My i3 system with 16GB of RAM and no hard drives ran at 40W.

Also, in my experience, when you are paying for rack space at a monthly rate you aren't paying for the power used, just to "rent" the space.

When the real estate costs $1900/month for just one of your racks, a large fraction of which is due to power, you look to minimize rack footprint and power consumption.

Again though, when you are talking about storing multi-TB of data, you have plenty of other options. Offline backups for instance. Overall, there aren't many situations where throwing a FreeNAS server in a VM is going to provide significant value that would be worth the risk of data loss from misconfigured ESXi. I've seen 2 ESXi systems go down that wiped out all of the data on them despite being allegedly configured correctly. Not sure what the exact failure mode was, because IMO when something goes that wrong it's almost always an administration issue. And if you assume it is, you can't necessarily expect the administrator to identify what he got wrong, because he probably doesn't know himself.

We'll have to agree to disagree, and that's fine. I'm not here to defend their thoughts. It might have been as simple as them hating VMware and choosing not to use their "garbage". I've used VMware Workstation since version 1.0 and I think it has some great value. But it's not good for all circumstances. I'd dismiss you as an idiot if you told me you set up a pfSense machine in a VM. At the end of the day it doesn't matter at all what you or I think of ESXi, virtualization, etc. All that matters is that FreeNAS doesn't really do much to support it. You can take that for what it's worth. They obviously have considered virtualization because they specifically mention using it in the manual for a test setup.


I don't get what "a real RAID controller and a server OS that doesn't have compelling needs for direct access to the hardware" means, by the way.

What I meant is that some OSes are better supported for hardware RAID controllers. Windows Server works decently with hardware RAID, and it somehow has built-in SMART functions without installing anything but the driver for the card. But FreeBSD does not work as well. Don't get me wrong, I'm in the process of looking at upgrading my home server to FreeNAS and getting rid of my aging Windows Server, but some stuff does "just work" under other OSes.

I do have some reservations about ESXi's implementation of disk caching. ZFS makes some big assumptions, namely that any sync write really has been written to the disk(s). Adding things like RAID controller write caching or ESXi's own caching (if it has any.. I'm betting it does for performance reasons) could have dire consequences for ZFS in the event of a power loss or an ESXi crash. There's a setting you can change to disable sync flushes, and the ZFS guides say that while you will see an amazing performance increase you should never, ever disable the sync flushes. I'd wager that disabling sync flushes has the same potential consequences as adding RAID controller write cache or potentially ESXi's caching. Why take that risk? More than likely neither of us knows the answers to these complex questions, and we can't assume that because it works on your VM right now it's smart to do that.
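
For reference, and assuming I'm remembering the knobs correctly for a ZFS v28 / 8.3-era system, the relevant settings are the per-dataset "sync" property and the cache-flush sysctl ("tank" is just a placeholder pool name). The point of the sketch below is to confirm they're at their safe defaults, not to change them:

Code:
# Leave these at their defaults unless you fully understand the consequences.
zfs get sync tank                     # should report "standard"
sysctl vfs.zfs.cache_flush_disable    # should report 0 (flushes enabled)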

BTW, I teased you on another thread a few mins ago. Someone with ESXi and a FreeNAS server with problems :P
 

jgreco

Resident Grinch
Nope, but you don't need a very powerful processor to saturate Gb speeds. My first-gen i3 can saturate two gigabit NICs simultaneously. Besides, we're talking about 30-50 watts at the most. My i3 system with 16GB of RAM and no hard drives ran at 40W.

The X9SCi-LN4F has four gigE ports onboard. Nice to have enough muscle to fill a few ports...

Also, in my experience, when you are paying for rack space at a monthly rate you aren't paying for the power used, just to "rent" the space.

Doesn't work that way in most serious locations. You don't REALLY want a customer bringing in a bunch of massive blade servers and stacking them all in a rack. Power and space at quality facilities are typically sold separately; at cheap facilities they will maybe give you a single 20 amp circuit as "part of their package", but what that really means is that their space isn't engineered to handle higher heat densities. I'm contractually prohibited from discussing pricing, but some other people talk about it anyway; their published numbers give you ballpark a-la-carte pricing for datacenter space and power. In particular it is worth noting that at a facility like Equinix, a rack is just an empty frame, you order power separately, and you can order 120V, 208V, 240V, 20A, 30A, etc., or various DC power options, and they're all basically priced by the kW. Also worth noting that you're "right" that you're not paying for the power USED - you're paying for the POTENTIAL. If you buy a 30A circuit and plug nothing into it, they still charge you.

Again though, when you are talking about storing multi-TB of data, you have plenty of other options. Offline backups for instance.

That'll be great for the ftp server, yeah.

Overall, there aren't many situations where throwing a FreeNAS server in a VM is going to provide significant value that would be worth the risk of data loss from misconfigured ESXi. I've seen 2 ESXi systems go down that wiped out all of the data on them despite being allegedly configured correctly. Not sure what the exact failure mode was, because IMO when something goes that wrong it's almost always an administration issue. And if you assume it is, you can't necessarily expect the administrator to identify what he got wrong, because he probably doesn't know himself.

So something went wrong because someone was inexperienced and made a mistake, so therefore it's rarely worth trying to do it, even if you have a clue.

That's your argument?

What I meant is that some OSes are better supported for hardware RAID controllers. Windows Server works decently with hardware RAID, and it somehow has built-in SMART functions without installing anything but the driver for the card. But FreeBSD does not work as well. Don't get me wrong, I'm in the process of looking at upgrading my home server to FreeNAS and getting rid of my aging Windows Server, but some stuff does "just work" under other OSes.

I do have some reservations about ESXi's implementation of disk caching. ZFS makes some big assumptions, namely that any sync write really has been written to the disk(s). Adding things like RAID controller write caching or ESXi's own caching (if it has any.. I'm betting it does for performance reasons) could have dire consequences for ZFS in the event of a power loss or an ESXi crash. There's a setting you can change to disable sync flushes, and the ZFS guides say that while you will see an amazing performance increase you should never, ever disable the sync flushes. I'd wager that disabling sync flushes has the same potential consequences as adding RAID controller write cache or potentially ESXi's caching. Why take that risk? More than likely neither of us knows the answers to these complex questions, and we can't assume that because it works on your VM right now it's smart to do that.

I think I do know the answer to these complex questions and the answer is that you've got some incorrect assumptions about implementing stuff in ESXi.

You do realize that when you pass a PCI device through in ESXi, ESXi is basically not involved in any significant way? It certainly isn't in the data path and has no way to do any "disk caching." PCI passthrough is an awesome technology that many people have been using for storage for years.

Put differently, when I switched from a USB stick to ESXi, the following significant changes happened from FreeNAS's point of view:

1) Cores dropped from 4 to 1
2) Memory dropped from 32GB to 8GB
3) The LSI RAID controller disappeared
4) The disks attached to the LSI RAID controller also vanished
5) Some VMware driver cruft appeared
6) A 2.1GB "VMware Virtual disk 1.0" appeared

But what stayed the same?

Code:
ahci0: <Intel Cougar Point AHCI SATA controller> port 0x4038-0x403f,0x4030-0x4033,0x4028-0x402f,0x4024-0x4027,0x4000-0x401f mem 0xd9d00000-0xd9d007ff irq 18 at device 0.0 on pci3
ahci0: [ITHREAD]
ahci0: AHCI v1.30 with 6 6Gbps ports, Port Multiplier not supported
ahcich0: <AHCI channel> at channel 0 on ahci0
ahcich0: [ITHREAD]
ahcich1: <AHCI channel> at channel 1 on ahci0
ahcich1: [ITHREAD]
[ ... ]
ada0 at ahcich1 bus 0 scbus4 target 0 lun 0
ada0: <OCZ-AGILITY3 2.15> ATA-8 SATA 3.x device
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 57241MB (117231408 512 byte sectors: 16H 63S/T 16383C)
ada1 at ahcich2 bus 0 scbus5 target 0 lun 0
ada1: <ST3000DM001-1CH166 CC43> ATA-8 SATA 3.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2 at ahcich3 bus 0 scbus6 target 0 lun 0
ada2: <ST3000DM001-1CH166 CC43> ATA-8 SATA 3.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
[ ...etc... ]


See, if VMware is inserting itself in the mix, you'll see the "VMware Virtual Device" foo and then yes caching and other issues could be in play. But this is a VM that's talking directly to the hardware.
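
If you want to check which situation you're in from inside the guest, here's a quick sanity check (the device names in the comments are examples from this box, not something to expect verbatim):

Code:
# Passed-through disks show their real model strings:
camcontrol devlist
#   <OCZ-AGILITY3 2.15>        at scbus4 target 0 lun 0 (ada0,pass0)
#   <ST3000DM001-1CH166 CC43>  at scbus5 target 0 lun 0 (ada1,pass1)
# A datastore-backed disk instead shows up as something like:
#   <VMware Virtual disk 1.0>  at scbus2 target 0 lun 0 (da0,pass0)
# The real AHCI/LSI controller should also be listed on the PCI bus:
pciconf -lv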

So here's the bit that should blow your mind:

I can copy the config, shut down the ESXi host, stick in a FreeNAS USB key, start it up, upload the config, and be up and running. They're the same disks and the same controller (and for good measure the network cards are configured so that the virtual ESXi and the physical non-ESXi layout are identical too).
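
If you're skeptical, the pool itself tells the story. After moving between the VM and bare metal it imports by name with nothing special required ("tank" below stands in for whatever your pool is called):

Code:
zpool import          # lists pools found on the attached disks
zpool import tank     # only needed if the restored config doesn't import it automatically
zpool status tank     # same vdevs either way, because they're the same disks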

Now I'll agree with what you say about ZFS and caching ... if, and only if, you're running things through a VMware datastore, which is most likely pointless, stupid, dangerous, and did I say pointless already? What about dangerous? Not to mention it is kind of like brandishing a loaded shotgun. And you already figured that out a long time ago, which is good, because it should be repeated frequently. But I really want you to see that other options exist.

BTW, I teased you on another thread a few mins ago. Someone with ESXi and a FreeNAS server with problems :P

Mmmm hmm. Nice. :smile:
 

cyberjock

Inactive Account
Ok.. You've made some good points and clarified the PCI passthrough. Now I'm reading up on ESXi. It seems to have changed a lot since I last used it personally, and the few people I talk to that use it regularly are clearly idiots. And since I'm a little disappointed that I can't run FreeNAS and Plex on the same hardware, I'm considering giving it a try with ESXi. Grr!

Really, I'm hoping that ESXi can use something like vmdk files so that I can set up my Plex server and then move it later when I have more powerful hardware. I just downloaded the manual and ISO and I'm about to install it on a VMware Workstation VM to see how it goes. Wish me luck.. ;)
 

pirateghost

Unintelligible Geek
Ok.. you're a total biznatch and you've made some good points and clarified the PCI passthrough. Now I'm reading up on ESXi. It seems to have changed a lot since I last used it personally, and the few people I talk to that use it regularly are clearly idiots. And since I'm a little disappointed that I can't run FreeNAS and Plex on the same hardware, I'm considering giving it a try with ESXi. Grr!

Really, I'm hoping that ESXi can use something like vmdk files so that I can set up my Plex server and then move it later when I have more powerful hardware. I just downloaded the manual and ISO and I'm about to install it on a VMware Workstation VM to see how it goes. Wish me luck.. ;)

ESXi DOES use vmdk files... that's the format that all the virtual disks run in.

VMware also offers a free converter tool that allows you to convert from many different formats to work with ESXi, even converting physical machines to virtual and moving VMs from ESXi host to ESXi host (if you don't want to shell out the bucks for vCenter licenses, but don't expect HA or automatic failover).

I have tried several times over the years to virtualize my NAS and found it woefully unacceptable for my needs. One reason is that I have always built my own boxes and never bought the right hardware to pass physical hardware through, so it was difficult to monitor my hard drives, which I feel I need when running a storage server of any kind.

My preferred method is to have one big storage box that does nothing but serve files. I don't mod my FreeNAS install, and I generally don't deviate from its defaults. I have 3 virtualization hosts that handle all my other needs (domain controllers, test lab, download server, backup server, router, firewall/web filter, MySQL, PBX, etc.), all using my storage server.
 

BobCochran

Contributor
This is exactly my goal for my FreeNAS server: turn it into a virtual machine so I can make better use of the physical hardware that I have. All those processing cores can be put to work running virtual machines, of which FreeNAS is exactly one. I simply can't see any reason for multiple physical boxes when one box and a bunch of drives can run dozens of virtual machines. The money saved can be pumped into lots of other interesting things.
 

jgreco

Resident Grinch
There are significant caveats though.

The first one is that many people would love to use a FreeNAS VM on ESXi to act as storage for ESXi. This doesn't work, at least not correctly, because ESXi expects to be able to find all its datastores when booting, and it can't mount FreeNAS based datastores if the VM isn't running. Classic bootstrap paradox.

The second one is that the temptation will be strong for people who don't have hardware supporting PCI passthrough to create some datastores and make virtual disks to give to FreeNAS; don't do it. Generally speaking, there ARE a lot of good reasons to avoid blindly running FreeNAS as a VM inside ESXi in any random manner you happen to find convenient. FreeNAS *is* designed to sit on top of real hardware. It is a natural side effect of doing a good job of what it was written to do. ZFS doesn't need or want additional layers between it and the hardware, caching or messing with stuff. SMART needs access to chat with the drives. Etc. That means VMware Virtual Disks as ZFS components are Very Bad Ideas. That means VMware Raw Device Mappings are Very Bad Ideas.

That really does put about 90% of the attempted ESXi installs we've seen come through the forums here into Bad/Dangerous/Foolish territory, and it is not shocking that people have lost data, systems, etc. It all seems to work fine until it doesn't, at which point the microscopic bits have already spilled all over the floor.

The key, as I already outlined, is that with PCI passthrough, you are blending the best bits of ESXi with the best bits of FreeNAS. If you don't have a physical FreeNAS setup that's workable, then you probably shouldn't try to add the complexity of a hypervisor to it. This really is just a big FreeNAS box that happens to run ESXi as a management layer. And let me say this again: if ESXi fails on this box for any reason, it really is designed so that I merely need to stick a USB key in and bam, there's FreeNAS without ESXi, the critical bits of the system appear identical to FreeNAS, and your storage pool is there, and life goes on.

One of the weaknesses of FreeNAS is the USB key thing. As of now, I've seen several go screwed up or wonky, and it is wicked inconvenient to be in a situation where one has failed or gotten screwed up 800 miles away, and FreeNAS never seems to notice that there's a problem, until the kernel panics and it reboots and can't load.

The ESXi layer we're using here adds a hardware RAID controller with some 120GB SSD's. The FreeNAS USB key is replaced with a 2.1GB VMware Virtual Disk ("vmdk"), an absolutely awesomely more reliable and flexible way to arrange for FreeNAS to boot. Want a different OS? Create a new VMware Virtual Disk. Or even a whole new VM. ESXi is really pretty competent at what it does. I can sit here at my desk and mess with it all day long, never seeing the hardware...

But there are severe dangers too. You really need to know this stuff and how it all works together. That experience is hard to get unless you're actively doing it, which is kind of a catch-22. What's severe danger? Example: ESXi has been known to gleefully overwrite other operating systems when being installed. So if you have an ESXi host with attached FreeNAS storage on PCI passthrough, and suddenly you were to need to reload ESXi, um, well, it could be dangerous to your data if you didn't detach all the FreeNAS storage first.
 
Joined
Dec 20, 2012
Messages
1
Here's a scenario where having VMware Tools installed is pretty important.
I work at a fairly small software development shop. I am going to create a SQL cluster on Windows Server 2012 in VMware Workstation. There will be a domain controller, 2 cluster servers and a FreeNAS box in it. The cluster servers will connect to the FreeNAS box for all of their shared storage needs (quorum, witness, data, backups). I'll then let every developer copy these VMs to their dev boxes (SSD drive for performance) with some simple instructions on how to use them. The key word is simple. The developers know C++, C#, SQL, and/or ASP. They don't know a damn thing about clustering, iSCSI, PCI passthrough, systems management, or IT (they may think they do, but that's a rant for another day). Getting them to read and follow directions like 'right-click on the VM Team and select Power On' and 'when you want to create a snapshot, you will need to shut down all of the VMs first, right-click the VM Team and select shut down' is going to be hard enough without having to also tell them to go into the console and type 11 to shut it down (if I had a nickel for every time I said RTFM...), as well as having to constantly tell them that it's ok that VMware Tools isn't installed. They'll bring them up, throw their build on the cluster servers, and fix whatever breaks.

This type of build-once, use-many setup is critical for development shops. I don't want to have to set up a new cluster for each and every developer/tester that comes along. I don't have to try to beg for a SAN in Ireland so that the small shop there can have a cluster environment for testing; they just copy the VMs over.

Another reason for having VMware Tools installed is time synchronization, which is important for performance and stability.

So how about we go back to the question, which is 'How do you install VMware Tools on FreeNAS 8 when it's installed as a VM in ESXi?'

With 8.0.4, I used http://www.blackstudio.com/freenas-vmware-tools.html, but with 8.3, that doesn't work (package dependencies).
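
For reference, on a stock FreeBSD 8.x box the usual route is the open-vm-tools package; the package and rc.conf names below are from memory and vary by port version, and the stripped-down FreeNAS appliance image is exactly where this falls over:

Code:
# Plain FreeBSD sketch -- NOT a supported FreeNAS 8.x procedure:
pkg_add -r open-vm-tools-nox11
echo 'vmware_guestd_enable="YES"' >> /etc/rc.conf
# Once the guest daemon is running, the tools also give you time sync:
vmware-toolbox-cmd timesync enable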
 

cyberjock

Inactive Account
You do have me there. But you could also simply pause the VM instead of shutting it down. Not sure how well that would work, but it's an idea. I know I've paused FreeNAS VMs before and they throw an error about the clock advancing when resumed, but that's all.
 

Mordock

Cadet
No one ever did answer the basic question. How do you install the current version of VMware tools in FreeNAS?

My application is training. I want to use FreeNAS to create a NAS with a preset bunch of VMs already created. I will then create linked clones of the master copy of this appliance for each student, where this master copy will likely be cached to SSD on the physical SAN. However, to get VMware guest customization to work, I need the latest version of the tools so that the cloning process can modify the IP address on each of the newly created clones.

FYI, I will not be using ZFS at all, as it requires too much memory and CPU. I need these VMs to be lean and mean. They will only be used for 1 week each and then destroyed, with new copies created for the next week of class as necessary.

The current wave of virtualization is to virtualize storage, where a virtual appliance (VM) provides the front end to any type of back-end storage, whether it be an older SAN without SSD support, a physical FreeNAS server, local DASD, or any other type of storage conceivable. While we are primarily looking at FreeNAS for training and not production, we would be very interested in using it as a front end for our older EqualLogic SANs that do not support SSD. The missing piece of this idea is easy-to-configure failover and replication services to allow one FreeNAS device to fail over to another should the ESXi host fail. This is also an obstacle to physical deployment of FreeNAS, where the server itself (let alone the USB key) is a single point of failure.
 