> The problem is that the FreeNAS project has no clue what is wrong and VMware isn't about to give us access to their code to find out.

Maybe it's just me then... because I did not realize anything was "wrong". It works great for me on sizable workloads, but maybe I just got lucky on my first try. Or maybe I have realistic expectations and knew I'd have to pass through an HBA, give the guest lots of RAM, and even give the guest dedicated CPU reservations.
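For what it's worth, the setup described above (HBA passthrough, reserved RAM, dedicated CPU reservations) roughly maps to a handful of VM settings on ESXi. This is a hedged sketch, not a tested recipe: the sizes, MHz value, and PCI address are all placeholders for whatever your hardware actually shows, and you'd normally set these through the vSphere client rather than editing the .vmx by hand.

```
# Illustrative .vmx fragment (ESXi 5.x era). All values are placeholders.
memSize = "16384"                  # give the guest plenty of RAM for the ZFS ARC
sched.mem.min = "16384"            # reserve all of it (MB) ...
sched.mem.pin = "TRUE"             # ... and don't let the host balloon/swap it
sched.cpu.min = "4000"             # CPU reservation, in MHz
pciPassthru0.present = "TRUE"      # VT-d passthrough of the storage HBA
pciPassthru0.id = "00:0b.0"        # placeholder PCI address of your HBA
```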
> spidernas, The problem with virtualization is, as I have said many times on the forum and as is also in the "please don't virtualize FreeNAS" thread: it works great. It may work great for a day, for a week, or even a year. But one day your system gets pissed at you, the server Gods smite you, and it's all over. It's one of those things that works and then suddenly doesn't. We've had people that did nothing more than reboot their host machine properly and... *poof*... it was over. Never saw their data again.

And you know what? All boxes, virtual or non-virtualized, can one day get "pissed at you". That's not exactly anything new in the world of IT.
> And you know what? All boxes, virtual or non-virtualized, can one day get "pissed at you". That's not exactly anything new in the world of IT.

You're back huh?
> You're back huh?

Nope, I'm not back, I'm just poking the caged tiger....
Yes, but when it happens significantly more often virtualized than non-virtualized, you can't wave off the extra risk with "any box can get pissed at you".
You wouldn't punch a lion in the face, would you? But one might eat you the next time you're on a safari adventure in Africa.
It's all about risk mitigation, something you personally don't seem to understand.
> Please, let's not get this thread locked. It's the only one that has vmxnet3.ko updates. :)

Well, I'm guilty of not deploying 188.8.131.52 so I can't answer for it, but the stock drivers provided with ESXi 5.x have worked fine, except for one or two of the releases in the 9.2.x series where you had to compile the drivers in order to use them (I'm thinking it was maybe 9.2.1 or something like that). I deployed at some level every build in the 9.2.x series up to 184.108.40.206, and once iX backed out the one kernel patch that changed the APIs in the kernel, the stock drivers have worked great. Anyways, let's not lock the thread.
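For anyone landing here for the vmxnet3.ko updates: getting a rebuilt module loaded usually comes down to dropping it somewhere persistent and adding a loader tunable. A hedged sketch following stock FreeBSD conventions; the module path, the `vmxnet3_load` tunable name, and the use of loader.conf.local (so FreeNAS upgrades don't clobber it) are assumptions you should check against your own build.

```
# Hedged sketch: installing and loading a rebuilt vmxnet3.ko on FreeNAS.
cp vmxnet3.ko /boot/modules/                            # persistent location
kldload /boot/modules/vmxnet3.ko                        # load it right now
echo 'vmxnet3_load="YES"' >> /boot/loader.conf.local    # load it at boot
kldstat | grep vmxnet3                                  # confirm it's loaded
```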
> OT: I still don't get this overblown aversion to virtualizing FreeNAS. People seem to want it - why ignore this user base?

It's not an aversion. Feel free to go and do it. I posted a guide to it somewhere. It is even very well thought out, in ways that n00bs tend not to understand.
> I'd love to see more known good formulas, and optimization shown for running virtualized. Good hardware and VT-d really seems to be flawless. Throw in a USB stick and boot to bare metal instantly if you want. I've been doing my best to break it and I can't. I've pulled drives, cut power, loaded it to the nines.... I keep wondering where the gotchas are.

I haven't found any real gotchas except that there's a performance hit, and more danger in performing upgrades. I believe there was some badness upgrading 8->9 due to some MSI issue in FreeBSD 9, but that was a long time ago. In theory, if there were a specific issue that created an incompatibility leading to corruption, you could lose your pool. I think that's the big thing that gives me nightmares, but it's mostly because I'm paranoid.
> We really do appreciate the brain trust around here helping those of us that wander from the straight and narrow occasionally.

We like making y'all our guinea pigs. So there.