FreeNAS on ESXi 5 with virtual hard disks

Status
Not open for further replies.

vanhaakonnen

Dabbler
Joined
Sep 9, 2011
Messages
32
Hello,

I have installed FreeNAS 8.0.2 in a virtual machine (with an LSI Logic parallel SCSI controller for the VM, on a Supermicro server with an Areca ARC-1231 RAID controller and 5x 2TB hard disks in RAID 5). FreeNAS has one 8 GB vdisk for the installation itself and 6x 1TB virtual hard disks for the ZFS RAID (RAID-Z1). At first I used only a striped set of disks in FreeNAS, but over time I got I/O errors on my clients, and a "zpool status" showed CKSUM errors on every single disk. After that I recreated the pool as a RAID-Z1 to get some self-repair capability.
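For reference, the checksum counters can be inspected, repaired, and reset from the FreeNAS shell; a minimal sketch, assuming a pool named "tank" (substitute your own pool name):

    # show per-disk READ/WRITE/CKSUM counters plus any files with unrecoverable errors
    zpool status -v tank

    # read every block in the pool and let RAID-Z1 rewrite the ones with bad checksums
    zpool scrub tank

    # reset the error counters once the scrub comes back clean
    zpool clear tank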

But my question is:
How can this happen with virtual disks? My Ubuntu VMs on the same server show no disk errors at all.

Thanks

VanHaakonnen
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
So you've got a VMware host with a big RAID5 datastore.

On top of that, you create six 1TB virtual hard disks and run RAID-Z1 on them.

And then you're getting I/O errors. Do I have this right?

Do you like beating your head on a brick wall? What's the point of this setup?

If you really want to debug this, you'll need to think about issues such as whether you've created those VMware virtual disks as thin provisioned, or what else you might have done to shoot yourself in the foot. But the complexity level here is not comforting.
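If you want to rule that out, thin provisioning is easy to spot from the ESXi shell; a rough sketch, where "datastore1" and the "freenas" folder are placeholder names for your own datastore and VM directory:

    # allocated blocks (first column) far below the file size means the disk is thin
    ls -ls /vmfs/volumes/datastore1/freenas/*-flat.vmdk

    # inflate a thin disk to fully preallocated (eagerzeroedthick); power the VM off first
    vmkfstools --inflatedisk /vmfs/volumes/datastore1/freenas/disk1.vmdk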
 

vanhaakonnen

Dabbler
Joined
Sep 9, 2011
Messages
32
Hello jgreco,

thanks for your reply. The reason I wanted to try this was to have only one bigger machine running all the time, with several VMs on it.
I have tried some different configs and setups. First of all, ZFS seems to be a problem in my case. I don't know exactly what the problem is related to (maybe 4K sectors? What sector size do 4K HDDs end up with when formatted with VMFS and ZFS on top... :/)
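On the 4K question, you can at least check what alignment ZFS picked; a sketch, assuming a pool named "tank" (the gnop wrapper was the usual workaround on FreeBSD at the time to force 4K alignment, and only applies when creating a new pool):

    # ashift=9 means the pool assumes 512-byte sectors, ashift=12 means 4K
    zdb | grep ashift

    # workaround at creation time: wrap one member disk in a fake 4K-sector device
    gnop create -S 4096 /dev/da1
    zpool create tank raidz /dev/da1.nop /dev/da2 /dev/da3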

I changed from ZFS to UFS and have had no more hard freezes since the change (with ZFS my whole ESXi host froze a few times). Second, ESXi detaches the datastore if the latency of the disk pool/HBA gets too high, and 6x 1TB virtual disks on only 5 physical hard drives was not so intelligent in my case: the latency got very high at times.
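For anyone chasing the same problem, the latencies are visible live with esxtop on the ESXi host; a short sketch (standard esxtop views, nothing host-specific assumed):

    esxtop        # start the live monitor in the ESXi shell
    # press 'u' for the per-device view or 'd' for the per-adapter view
    # DAVG/cmd is device latency in ms, KAVG/cmd is time spent in the VMkernel
    # sustained DAVG/cmd in the hundreds of ms is what triggers datastore drops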

At the moment I have UFS with only 2x 2TB virtual disks in stripe mode, and this seems to work fine for now. UFS has the advantage over ZFS that it requires fewer resources in the VM and performs noticeably better. Okay - no snapshots with UFS... :/

How can I check that my UFS filesystem has no errors? "fsck_ufs /dev/ufs/UFS"?
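For the record, that is basically the right tool; a minimal sketch, assuming the volume really is labeled "UFS" as in the example and is mounted under /mnt (the usual FreeNAS layout):

    # read-only check: report problems but change nothing
    fsck_ufs -n /dev/ufs/UFS

    # to actually repair, unmount first, then let fsck fix what it finds
    umount /mnt/UFS
    fsck_ufs -y /dev/ufs/UFS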
 

fastie81

Cadet
Joined
Jan 4, 2012
Messages
7
Hi,

I have this sort of setup and it works awesome

Setup
ESXi 5
2x 700GB disks set up as datastore disks (these are not part of the NAS). The FreeNAS installation disk is on these disks.
FreeNAS 7.02 as a VM
FreeNAS setup:
Full installation
3x 2TB SATA disks published to the VM as RDMs (you can find out how to do it here: http://vm-help.com/esx40i/SATA_RDMs.php - see the sketch below this list)
ZFS is then set up with the 3x 2TB disks.
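For reference, the linked article boils down to creating an RDM mapping file by hand in the ESXi shell; a hedged sketch, where the vml.* identifier and the datastore path are placeholders you must look up on your own host:

    # find the vml.* identifier of the physical SATA disk
    ls -l /vmfs/devices/disks/

    # create a virtual-compatibility RDM mapping file that points at the raw disk
    vmkfstools -r /vmfs/devices/disks/vml.XXXXXXXX /vmfs/volumes/datastore1/freenas/rdm-disk1.vmdk

    # then attach rdm-disk1.vmdk to the FreeNAS VM as an existing disk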

This way ZFS is using the disks directly, and I can move over to FreeNAS 8 when there is an upgrade path.
I don't have I/O problems, and my datastore is doing its own thing without affecting the NAS operations.

I will document this all on my website; if anyone is interested in my setup, let me know and I can send you the link.

Chris
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I have no idea why ZFS would cause problems on a VM datastore, and I doubt it is actually the root cause of your problems. ZFS is a lot more sensitive than many other filesystems because it makes extra attempts to catch types of problems that many other things don't, so it may be noticing a hardware problem of some sort that hasn't killed your machine outright.
 