Alvin
FreeBSD 9.2 will also contain virtio drivers. That should ease virtualisation on KVM or bhyve. (source: http://forums.freebsd.org/showthread.php?t=41246 )
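Once you're on 9.2, if the virtio drivers aren't already compiled into your kernel you should be able to load them as modules from /boot/loader.conf. Just a sketch, assuming the stock module names; trim the list to whatever devices your hypervisor actually presents:

[CODE]
# /boot/loader.conf -- load the virtio drivers at boot (FreeBSD 9.x guest)
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"       # virtio disks show up as vtbd0, vtbd1, ...
virtio_scsi_load="YES"
if_vtnet_load="YES"         # virtio NIC shows up as vtnet0
virtio_balloon_load="YES"
[/CODE]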
It becomes ever more challenging to figure out the whole VMware ecosystem as they keep adding and changing various features and modules. Has there been any estimate as to when 5.5 will actually hit FCS?
From my PoV, the real win isn't any of that but rather Flash Read Cache (people here, think of it as L2ARC for your ESXi). And I really have to say, about fscking time, VMware.
My questions are:
- With a fibre SAN architecture and NPIV, do I have a better (or different) chance of virtualizing successfully?
- VMware documentation says that I should use RDM to keep vMotion; does that mean I can't follow jgreco's directives without sacrificing vMotion?
- My server has 2 HBAs connected redundantly to the storage through 2 fibre switches. Using PCI passthrough I would map the HBAs directly to a VM, but doesn't that mean my ESXi host can no longer use them to access a LUN for the vSphere datastore?
- I guess the best solution is a third server with 2 other HBAs where I install FreeNAS directly, but then I need to purchase additional hardware and I have no hardware redundancy...
I apologize for the long post and the bad English, but I am really eager to put a fully open environment (except vSphere) into production!
thanks!
sincar
Using RDM in a VMware-blessed manner may also be okay; maybe pbucher will stop in with some comments.
Since you have a real SAN, you should be able to use RDM in a VMware officially supported way, which will help things greatly. The key is that your SAN needs to give each LUN a unique serial number and keep that number stuck to the LUN. This usually isn't a problem, except with locally attached storage using certain HBAs and brands of hard drives (there isn't a list; VMware just forbids the whole setup to be safe).
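A quick way to sanity-check that from the ESXi shell is to list the storage devices and confirm each LUN shows up under a stable naa.* identifier and is flagged as RDM capable. Just a sketch; the exact field names vary a bit between ESXi builds:

[CODE]
# List the devices the host sees; SAN LUNs should appear as naa.* entries
esxcli storage core device list | grep -E "Display Name|Is RDM Capable|Is Local"
[/CODE]

If that identifier ever changed out from under the mapping file, the RDM would break, which is exactly the failure mode you want to avoid.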
I've used RDM physical disks successfully in the past, though once I got the HBA to pass through and work correctly I switched to PCI passthrough of the HBA (in fact I went back and forth a few times without having to rebuild the pool). I've heard other people have had some really bad problems with RDM setups when a disk had to be replaced; I suspect they had run-of-the-mill consumer hardware and had issues with the drive serial numbers. I just brought up an RDM setup using ESXi 5.5 and FN 9.1.1, and it seems to be stable and is holding up OK so far (I hope I will be able to convince the company to buy a server that supports pass-through in the near future).
Far too many folks using RDM are doing it via hacks and, worse, often choose the hack of doing a virtual RDM rather than a physical one.
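For reference, the supported way of wiring it up from the ESXi shell is vmkfstools, which writes a small mapping .vmdk on a VMFS datastore that points at the raw LUN: -z gives you a physical (pass-through) RDM, -r a virtual one. The naa identifier, datastore and paths below are placeholders for your own setup:

[CODE]
# Physical-compatibility RDM -- the one you want for a ZFS pool
vmkfstools -z /vmfs/devices/disks/naa.60a98000646e6f746172656164746869 \
  /vmfs/volumes/datastore1/freenas/freenas-lun0-rdm.vmdk

# Virtual-compatibility RDM -- the "hack" variant to avoid
vmkfstools -r /vmfs/devices/disks/naa.60a98000646e6f746172656164746869 \
  /vmfs/volumes/datastore1/freenas/freenas-lun0-vrdm.vmdk
[/CODE]

You then attach the mapping file to the FreeNAS VM as an existing disk. As far as vMotion goes, a physical RDM on a shared LUN still vMotions fine; what you give up is VM-level snapshots of that disk.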
My M1015 reflashed to IT mode provided the serial number in the device ID when I did an RDM map for a Linux VM; I can't confirm any other configurations myself. Currently I'm doing RDM via the onboard SATA III port on my Supermicro X9SCM-F motherboard with ESXi 5.1, and it definitely has the serial number too. The key thing is whether the drive serial number that VMware counts on for the mapping comes from the physical drive or SAN controller, versus a number assigned by a driver or some firmware sitting in front of the drive.
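If you want to see what your particular controller exposes, the device names ESXi generates usually make it obvious. An illustration only; the exact naming depends on the HBA and driver:

[CODE]
# Local SATA disks typically get a t10.ATA identifier with the model and serial
# embedded, e.g. t10.ATA_____WDC_WD20EFRX2D68AX9N0___________WD2DWMC301234567
# SAN LUNs show up as naa.* devices instead
ls -l /vmfs/devices/disks/
[/CODE]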
Is it fool of SATA drives with some kind of FC convertor
Oops, I meant to say full... but only a fool would put SATA drives in an FC SAN... Please don't fool around, tell us what you really think!
Generally we ("we" being some of the more knowledgeable people here) just say "too bad, so sad" when we get cases of RDM gone bad, because recovery is pretty much impossible from what has been researched on the topic before.