Mostly for the benefit of "everyone else" I offer the following thoughts in response:
For VMware's stance on using RDM to string together any kind of "poor man's" SAN, see
http://kb.vmware.com/selfservice/mi...nguage=en_US&cmd=displayKC&externalId=1017530 whose title (Creating Raw Device Mapping (RDM) is not supported for local storage) pretty much sums it up. RDM is meant to present FC- and iSCSI-attached LUNs to a VM for the purpose of building a Microsoft cluster or something like that.
Hey, totally awesome. I had been unable to find that link for months, because VMware uses such effin' oblique terms and vague handwaving, and I wasn't motivated enough to keep trying every possible combination of search terms. I've updated the OP to include your link. Thanks.
Also note there are two types of RDM mappings: Virtual Mode and Physical Mode. If you are going to play with fire, make sure you use Physical Mode; it does work, and it removes any sort of 2TB limit.
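For reference, the unsupported local-disk RDM hack generally boils down to vmkfstools on the ESXi shell; here's a rough sketch of the two modes (the device identifier and datastore/VM paths below are placeholders, not real names):

```shell
# List local disks to find the device identifier (t10.ATA_____... style)
ls /vmfs/devices/disks/

# Virtual Mode RDM (-r): VM snapshots work, but the old 2TB limit applies
vmkfstools -r /vmfs/devices/disks/t10.EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/freenas/freenas_rdm.vmdk

# Physical Mode RDM (-z): SCSI commands pass through and the 2TB cap
# goes away, but you lose VM snapshots of that disk
vmkfstools -z /vmfs/devices/disks/t10.EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/freenas/freenas_rdm_p.vmdk
```

Either way you then attach the resulting mapping file to the VM as an existing disk; and again, on local storage none of this is supported by VMware.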
For the purposes of my warning, they're both basically bad. I would guess there might actually be a use case for physical RDMs with FreeNAS in an actual SAN environment that was officially supported by VMware. But then the question becomes: why?
1) If you have a ~$10,000+ SAN storage setup for VMware, it likely already offers a good data protection environment and a system for managing failed disks, spares, and replacements. Adding FreeNAS to the mix basically turns that hardware into an overly expensive HBA with unused features. Bleh.
2) A full-size larger FreeNAS system might be:
Chassis - $1000
Mainboard/E5 CPU - $1000
M1015 - $100
128GB RAM - $1000
12 x 4TB SATA - $1800
That's a highly redundant 30TB+ system for about $5,000. Most of the people who genuinely want to run a big FreeNAS have already figured that out. It seems to be mostly the guys trying to make everything run on a single non-HCL lab box at home who are desperate to find a hack to "make it work".
I wasn't very clear; what I meant to say is that the hacks (which can be found) to make an RDM from local storage could very well be discontinued in the next major release. They are clearly unsupported by VMware, and with VT-d becoming widespread it's becoming a non-issue to build things like virtual SANs.
VT-d isn't widespread(*). Unless, that is, you're following VMware's HCL and you aren't budget-constrained. So yes, you and I and guys like us probably only run across the odd platform where VT-d isn't supported. But most of the users here are trying to recycle hardware, or they think they "know what to buy," or they're cost-constrained. I'm tired of explaining to people why they shouldn't buy a $110 ASUS 1155 board with Realtek NICs. We've also seen problems with VT-d on platforms that claim to support it but aren't server boards. Basically, VT-d is a feature I only trust under certain conditions, and I use the term "trust" loosely.
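And "claims to support it" is exactly the trap: BIOS options lie. One quick sanity check on whether the IOMMU is actually initialized and in use (this sketch is for a Linux box; on ESXi you'd instead look at the host's passthrough device list):

```shell
# Intel VT-d shows up as DMAR in the kernel log; AMD-Vi as IOMMU
dmesg | grep -i -e DMAR -e IOMMU

# If the IOMMU is genuinely active, devices get sorted into groups here;
# an empty or missing directory means passthrough won't work
ls /sys/kernel/iommu_groups/
```

If the DMAR lines are absent or the groups directory is empty, the board is one of those "claims it on the spec sheet" platforms, no matter what the BIOS menu says.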
I assume they're not removing RDM support entirely, then? Because that would be a loss. I don't pay enough attention to the VMware world, as you have probably guessed... I've always seen the potential for RDM to be useful in certain environments. I can't imagine that people would generally find substituting VT-d plus a dedicated controller to be acceptable, since that would screw with the number of VMs that could be hosted, with vMotion, etc. So it makes a lot more sense if we're talking strictly about removing RDM (non-)support for local disks rather than RDM in general.