"Unsupportable block size" for disks in ESXi


millenix (Cadet)
Hello everybody,

I just installed FreeNAS-8.3.0-RELEASE-p1-x64 on my HP MicroServer N40L and created a RAID-Z2 (forced 4k sectors) using 5x 3 TB WD30EFRX (WD Red) HDDs.
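(For context, the "forced 4k sectors" option in FreeNAS 8.x is, as far as I understand, the gnop trick under the hood; roughly like this, where ada1..ada5 and the pool name "tank" are placeholders:)

  # create temporary 4096-byte-sector providers so zpool picks ashift=12
  for d in ada1 ada2 ada3 ada4 ada5; do gnop create -S 4096 /dev/$d; done
  zpool create tank raidz2 ada1.nop ada2.nop ada3.nop ada4.nop ada5.nop
  # (afterwards: zpool export, gnop destroy the .nop devices, zpool import)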
Now I want to run FreeNAS as guest operating system in VMware ESXi 5.1 and access the volume from within the VM.
I created a physical RDM (vmkfstools -z) following http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/ and assigned the virtual disks to the VM.
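Roughly, the steps from that guide looked like this (a sketch; the device identifier and datastore path are placeholders, not my exact values):

  # in the ESXi shell: find the device identifiers of the local SATA disks
  ls -l /vmfs/devices/disks/
  # create a physical (pass-through) RDM mapping file on an existing datastore
  vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD30EFRX_example /vmfs/volumes/datastore1/rdms/wd-red-1.vmdk

The resulting .vmdk mapping files are then attached to the VM as existing disks.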
They show up as disks, but I get "Unsupportable block size" errors on the console. Switching to virtual RDM (vmkfstools -r) doesn't work due to VMware limitations, and there's no S.M.A.R.T. data available that way either, so it wouldn't be an option anyway.
I talked to someone in the VMware channel who has the same problem, and I found this thread (http://forums.nas4free.org/viewtopic.php?f=16&t=1020&start=20) describing a similar problem in NAS4Free.
Any ideas appreciated.

Regards,

Thomas
 

jgreco (Resident Grinch)
What exactly is the point of all this? You have additional space for ESXi datastores in your N40L or something?
 

cyberjock (Inactive Account)
The HP N40L, I believe, only supports 8GB of RAM. You shouldn't be using less than 6GB for FreeNAS, so I'm not really sure why you'd want to do this. (I think this is why jgreco is asking the question.)

Also, if you do a lot of research you'll figure out that if your data is important you shouldn't be doing RDM for your disks with FreeNAS. One of the many things that goes horribly wrong is that you lose SMART. I told some guy off a week or two ago in another thread because he didn't get it. The manual states:

NOTE: instead of mixing ZFS RAID with hardware RAID, it is recommended that you place your hardware RAID controller in JBOD mode and let ZFS handle the RAID. According to Wikipedia: “ZFS can not fully protect the user's data when using a hardware RAID controller, as it is not able to perform the automatic self-healing unless it controls the redundancy of the disks and data. ZFS prefers direct, exclusive access to the disks, with nothing in between that interferes. If the user insists on using hardware-level RAID, the controller should be configured as JBOD mode (i.e. turn off RAID-functionality) for ZFS to be able to guarantee data integrity. Note that hardware RAID configured as JBOD may still detach disks that do not respond in time; and as such may require TLER/CCTL/ERC-enabled disks to prevent drive dropouts. These limitations do not apply when using a non-RAID controller, which is the preferred method of supplying disks to ZFS.”
By virtualizing with ESXi you are removing the exclusive disk access.
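For example (a sketch; da1 is a placeholder for however the RDM-backed disk shows up in the guest):

  # on bare metal this prints the full SMART attributes;
  # through an RDM the guest typically gets nothing useful back
  smartctl -a /dev/da1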

I tried to dabble with ESXi alongside jgreco (a bunch of PMs back and forth) and finally gave up. While I could have gotten it working with RDM, the risk wasn't worth the potential reward. When ESXi goes bad it seems to go horribly bad and take out disks with it.
 

millenix (Cadet)
Thanks for your replies.
I'm using 16 GB of ECC RAM in my HP N40L and so far there are no problems with it.
I wanted to compare some setups (non-virtualized vs. virtualized) and check performance, so I needed access to my RAID-Z2 from within FreeNAS running under ESXi.
It would be really neat to have a storage setup that works regardless of whether I use virtualization.
But thanks for pointing out that there are some issues; I'll take a deeper look at it today. I thought there might be a driver update for the storage subsystem that ESXi presents to the guest OS, or something like that. Nevertheless, if there IS a possibility of getting RDM working somehow, I'd be happy if you have a link or something for me.
 

cyberjock (Inactive Account)
millenix said: "Nevertheless, if there IS a possibility of getting RDM working somehow, I'd be happy if you have a link or something for me."
Is there a greater-than-0% chance of you getting it to work? Yes.

Would I ever recommend someone use RDM if access to the data is important (for instance, in a business environment)? Absolutely not.

Would I ever recommend someone use RDM without complete and very thoroughly updated backups? Absolutely not.

The problem is that if you are using FreeNAS, it is most likely for ZFS and its awesomeness. But by going to RDM you are instantly breaking a lot of stuff in ZFS as well as in FreeNAS/FreeBSD. For reliability you are probably better off looking for something else that is better designed for use in a virtualized environment. ZFS just wasn't designed with any thought for virtualization.
 

millenix (Cadet)
I don't want to lose my data, so I'll follow your advice. All the data is still on another machine; right now everything is a big playground for me, nothing more.
Could you tell me the reason for these "Unsupportable block size" messages and how to overcome the issue (via PM, if you like)?
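(In case it helps with diagnosis: the sector size the guest is actually being told about can be checked from the FreeNAS shell with something like the following, where da1 is a placeholder for the RDM-backed disk:)

  # print the disk geometry, including the sector size, as seen by the guest
  diskinfo -v /dev/da1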
 

cyberjock (Inactive Account)
Not a clue how to fix it. Never heard of the error before. But I wouldn't expect much of a response from the ESXi gurus either, because you definitely won't be getting that error if you are using PCIe passthrough (which is pretty much the only recommended way to use ESXi with FreeNAS).
 

millenix (Cadet)
The N36L/N40L doesn't support DirectPath I/O, so that is not an option. I'll stick to multiboot then, with no virtualization, if I really want multiple OSes to access ZFS.
 

jgreco (Resident Grinch)
cyberjock said: "The HP N40L, I believe, only supports 8GB of RAM."

Minor correction: the N40L supports 16GB the same way many Atoms support 8GB... undocumented but works. 16GB is big enough to be ESXi-useful, and you can probably find some trite VMs that you could run alongside FreeNAS.

cyberjock said: "You shouldn't be using less than 6GB for FreeNAS, so why do you want to do this at all? (I think this is why jgreco is asking the question.)"

Right line of thinking, wrong specifics.

Problem 1:

ESXi requires a datastore on which to maintain VM data files and disk images, and that cannot be self-hosted on a virtualized NAS. Chicken-and-egg. ESXi will not use a USB disk or USB flash for a datastore, so my question stands as asked. The N40L has five bays total. It *is* possible to add another controller card such as a BR10i or M1015, or even a plain SATA controller, for ESXi to use for datastores, and there's space in the N40L case for an extra 2.5" disk or two if you use tape. In general, though, the N40L is quite cramped for hosting 5 full-size disks PLUS extra hardware for ESXi datastores.

Problem 2:

HP MicroServer does not support PCI passthrough, which is a damn shame.

Fundamentally, FreeNAS is a real nice system and a real nice concept. However, the USB flash thing has its ups and downs. For a home user, it's probably pretty great. For us, it's real inconvenient to have a USB flash on a system that's half a continent away, and I've seen some failures where the flash has somehow become corrupt... it's great that the data on the FreeNAS server is well protected, but the FreeNAS system itself is poorly protected against failures. Updating it remotely (remember the 1GB->2GB size bump?) or replacing possibly failed devices is a pain.

That's where ESXi could really shine. Stick in an inexpensive ESXi-supported RAID controller. Throw in some small SSDs. Suddenly you have awesome-fast boot (no more minutes to load FreeNAS over USB 1.1!) AND it is redundant AND you can do installs and upgrades easily. With ESXi you basically have a large supply of USB keys that you can switch around without touching the hardware, and you can even run more than one at a time, for that annoying (and inevitable) case where you forgot to set something up and you wish you could have both the old and new server running at the same time so you could see just what you did last time.

That plus the other positives of virtualization make me understand why someone would try to do this. However, from a practical point of view, the Microserver is a poor platform for ESXi. It is lacking many of the features that would make for an awesome hypervisor platform.

Anyways, we're an ESXi4 shop here, so I have no comment on RDM (an ESXi5 feature) other than I'd think it could be made to work. But we've seen a lot of people come through here with tears for their shattered data eaten by their questionable virtualization platform. I'm pretty convinced that the only safe and sane way to virtualize FreeNAS is by starting out with a hardware platform that is absolutely positively designed for virtualization, which means a modern Dell/IBM/HP/Supermicro with a proven and tested server-grade VT-d implementation, the correct Xeon CPU, and the correct PCI-e hardware. noobsauce80 and I spent a little time trying to get ESXi+FreeNAS up and running on something that theoretically supported it, which turned into kind of a horrifying scenario where it kind-of sort-of seemed to work (worked fine, then didn't work, etc). It was bad. I see strong, compelling reasons for making sure that not only the storage devices but also the storage controllers are owned by the FreeNAS kernel - passing the controller itself in seems to be best.
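(As a rough sanity check when a controller really is passed through: the FreeNAS guest should see the HBA as a native PCI device and attach the disks through its own driver, not through VMware's emulated storage controller. From the FreeNAS shell, something like:)

  # the passed-through HBA should appear as a real PCI device in the guest
  pciconf -lv
  # and the disks should attach via the HBA's driver (e.g. mps/mpt), not an emulated controller
  camcontrol devlist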

Me, I'm paranoid, so I've designed a FreeNAS server that also happens to be fully supported by ESXi. Lots of experimentation had already convinced me that our N36L is a waste of watts, because for only ~10 watts more a Xeon E3-1230 platform provides better performance while remaining nearly idle serving files under load. So our old 1U storage servers have been getting upgraded to X9SCI-LN4F's with 32GB, an E3-1230, and an M1015 crossflashed to IR mode to give ESXi a RAID1 datastore for boot and the FreeNAS VM. The difference is that you can actually load up some heavier VMs on the unused capacity of the Xeon and get virtualization-style efficiencies.

So I definitely understand why people want to run ESXi. However, as much as I might like virtualization, the fact remains that noobsauce80, myself, and many others have seen both the relatively few successes (usually with higher end enterprise grade server gear) and the many failures, most especially including the many people who have entirely lost their pools when something went awry.

And I tell you all of this so that you have the full context of why I'm about to say this:

If you have an N40L, it makes a great (if somewhat underpowered) FreeNAS box. Set it up, stick it in a corner, and leave it the hell alone. But don't make it more complicated. It is a poor ESXi platform.
 