psbeekeeper (Cadet, joined Dec 4, 2012, 1 message)
Hello,
How do you install VMware Tools on FreeNAS 8 when it is installed on a VM in ESXi?
I don't know what the USB stick statement proves. What I've seen is that USB sticks eventually fail, and they're often slow and annoying. Give me a 2.1GB vmdk on a RAID1 VMFS datastore any day; it's better and faster than the USB stick in just about every way.
As for performance: having just shoved a machine that was on the bench with a USB stick for FreeNAS debugging into production as an ESXi host, I really didn't see any noticeable performance change. Oh, wait, yes, it got faster under ESXi, because every write to the config file wasn't forcing a storm of writes onto a slow USB stick. Really, when you can shove PCI devices into a VM for direct access, talk about the "overhead" of virtualization sounds like just that... talk.
Seriously, there's some sort of performance hit for virtualization, but it isn't that great, especially if you do it right.
So what you're saying is.. your machine is up in 60 seconds and mine is up in 2 minutes. Yours also potentially shuts down a little faster than mine. Totally worth virtualization for that :P
You do realize that the config file is only written to when you are changing settings? My virtual machine has a config file that hasn't changed since I upgraded to 8.3. I have a cron job that backs up my config to my zpool every day at 2300. The backups are all the exact same size and have the same SHA-256 checksum.
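Just to illustrate the kind of check that cron job amounts to, here's a quick Python sketch that hashes the backed-up copies and confirms they're all identical. The /mnt/tank/config-backups directory and the freenas-v1-*.db naming are placeholders for wherever your copies of the config database actually land, not anything FreeNAS sets up for you.

import hashlib
from pathlib import Path

# Hypothetical location where the nightly cron job drops dated copies of the config.
BACKUP_DIR = Path("/mnt/tank/config-backups")

def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# If every nightly copy hashes the same, the config hasn't changed between backups.
digests = {p.name: sha256(p) for p in sorted(BACKUP_DIR.glob("freenas-v1-*.db"))}
for name, digest in digests.items():
    print(f"{digest}  {name}")
print("all identical:", len(set(digests.values())) <= 1)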
My concern is less about the performance hit and more about the added complexity and troubleshooting of FreeNAS machines in ESXi. I've seen so many people start threads off with "I'm using FreeNAS in ESXi" and the thread ends with "damn.. I didn't enable that!" How long until a bunch of these people realize that their zpools are failing because they didn't know what they were doing? And the crappy part is they likely won't even figure out that it was their own undoing and blame ESXi or FreeNAS for their lost data.
That seems like a poor reason not to support VMware Tools, especially if the vmxnet driver is included already.
It's not that I don't support it. It's just that the developers haven't given it much of a priority. I'd argue that if you are choosing to put multiple VMs on one physical machine, you are either building an overpowered machine or you aren't too concerned with performance.
For most businesses, paying for the electricity is a small price compared to the cost of the technicians having to set up the hardware, not to mention the cost of the hardware itself. While virtualization has certain potential benefits, for some situations it just isn't a great idea. FreeNAS wasn't designed with the expectation that it is a great idea to virtualize it. If you want good virtualized storage you are probably better off with a real RAID controller and a server OS that doesn't have compelling needs for direct access to the hardware.
The problem is that frequently you maintain capacity to be able to provide service under heavy load, but normal loads are substantially lighter. If you don't want performance to degrade when someone is shoveling stuff at you at gigabit speeds, but your average utilization is only a hundred megabits, do you size the server for the gig or the hundred?
When the real estate costs $1900/month for just one of your racks, a large fraction of which is due to power, you look to minimize rack footprint and power consumption.
I don't get what "a real RAID controller and a server OS that doesn't have compelling needs for direct access to the hardware" means, by the way.
Nope, but you don't need a very powerful processor to saturate gigabit speeds. My first-gen i3 can saturate two Gb NICs simultaneously. Besides, we're talking about 30-50 watts at the most. My i3 system with 16GB of RAM and no hard drives ran at 40 watts.
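To put a rough number on that (the $0.12/kWh rate below is just an assumed ballpark, not anything from this thread), 40 watts around the clock works out to something like this:

# Ballpark annual electricity cost for an always-on 40 W box.
watts = 40
rate_per_kwh = 0.12                        # assumed rate; varies by region
kwh_per_year = watts / 1000.0 * 24 * 365   # roughly 350 kWh
print("%.0f kWh/year, roughly $%.0f/year" % (kwh_per_year, kwh_per_year * rate_per_kwh))

So even if you double the draw, you're talking tens of dollars a year.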
Also, from my experience, when you are paying for rackspace on a monthly rate you aren't paying for the power used. Just to "rent" the space.
Again though, when you are talking about storing multi-TB of data, you have plenty of other options. Offline backups for instance.
Overall, there aren't many situations where throwing a FreeNAS server in a VM is going to provide significant value that would be worth the risk of data loss from a misconfigured ESXi. I've seen 2 ESXi systems go down that wiped out all of the data on them despite being allegedly configured correctly. Not sure what the exact failure mode was, because IMO when something goes THAT bad it's almost always an administration issue. And if it is, you can't necessarily expect the administrator to identify what he got wrong, because he probably doesn't know himself.
What I meant is that some OSes are better supported for hardware RAID controllers. Windows Server works decently with hardware RAID, and it somehow has built-in SMART functions without installing anything but the driver for the card. But FreeBSD does not work as well. Don't get me wrong, I'm in the process of looking at upgrading my home server to FreeNAS and getting rid of my aging Windows Server, but some stuff does "just work" under other OSes.
I do have some reservations about ESXi's implementation of disk caching. ZFS makes some big assumptions that any sync write is really, truly written to the disk(s). Adding things like RAID controller write caching or ESXi's own caching (if it has one... I'm betting it does, for performance reasons) could have dire consequences for ZFS in the event of a power loss or an ESXi crash. There's a setting you can change to disable sync flushes, and the ZFS guides say that while you will see an amazing performance increase, you should never ever ever disable the sync flushes. I'd wager that disabling sync flushes has the same potential consequences as adding RAID controller write cache, or potentially ESXi's caching. Why take that risk? More than likely neither of us knows the answers to these complex questions, and we can't assume that just because it works on your VM right now it's smart to do that.
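Just to make concrete what a "sync write" means from the application side (a minimal Python sketch, not ZFS internals; the path is made up), the whole chain relies on the flush call only returning once the data is on stable storage. Any caching layer that acknowledges the flush before the data actually hits disk quietly breaks that assumption.

import os

# Minimal sketch of a synchronous write from an application's point of view.
# Everything above the disk (app -> ZFS -> controller -> drive) trusts that
# fsync() only returns once the data is on stable storage; a write cache that
# lies about that breaks the guarantee without anyone noticing until a crash.
fd = os.open("/mnt/tank/important.dat", os.O_WRONLY | os.O_CREAT, 0o644)
os.write(fd, b"data that must survive a power loss\n")
os.fsync(fd)   # blocks until the OS (and supposedly the hardware) has flushed
os.close(fd)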
ahci0: <Intel Cougar Point AHCI SATA controller> port 0x4038-0x403f,0x4030-0x4033,0x4028-0x402f,0x4024-0x4027,0x4000-0x401f mem 0xd9d00000-0xd9d007ff irq 18 at device 0.0 on pci3
ahci0: [ITHREAD]
ahci0: AHCI v1.30 with 6 6Gbps ports, Port Multiplier not supported
ahcich0: <AHCI channel> at channel 0 on ahci0
ahcich0: [ITHREAD]
ahcich1: <AHCI channel> at channel 1 on ahci0
ahcich1: [ITHREAD]
[ ... ]
ada0 at ahcich1 bus 0 scbus4 target 0 lun 0
ada0: <OCZ-AGILITY3 2.15> ATA-8 SATA 3.x device
ada0: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada0: Command Queueing enabled
ada0: 57241MB (117231408 512 byte sectors: 16H 63S/T 16383C)
ada1 at ahcich2 bus 0 scbus5 target 0 lun 0
ada1: <ST3000DM001-1CH166 CC43> ATA-8 SATA 3.x device
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
ada2 at ahcich3 bus 0 scbus6 target 0 lun 0
ada2: <ST3000DM001-1CH166 CC43> ATA-8 SATA 3.x device
ada2: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada2: Command Queueing enabled
ada2: 2861588MB (5860533168 512 byte sectors: 16H 63S/T 16383C)
[ ...etc... ]
BTW, I teased you on another thread a few mins ago. Someone with ESXi and a FreeNAS server with problems :P
Ok.. you're a total biznatch, but you've made some good points and clarified the PCI passthrough. Now I've read up on ESXi. It seems to have changed a lot since I last used it personally, and the few people I talk to that use it regularly are clearly idiots. And since I'm a little disappointed that I can't run FreeNAS and Plex on the same hardware, I'm considering giving it a try with ESXi. Grr!
Really, I'm hoping that ESXi can use something like vmdk files so that I can set up my Plex server and then move it later when I have more powerful hardware. I just downloaded the manual and ISO, and I'm about to install it on a VMware Workstation VM to see how it goes. Wish me luck.. ;)