Quick how-to for newbies like me: FreeNAS RAIDZ on VMware ESXi


snuffy

Dabbler
Joined
Aug 27, 2012
Messages
10
This took me ages to figure out, but it's actually quite simple to do.

This assumes you want RAIDZ1 and have three physical drives for the virtual disks that FreeNAS will use. This also assumes you already have VMware ESXi set up and running.


  1. Create three datastores in ESXi, one for each of the three separate physical disks you have installed in your NAS box (steps 1 and 4 are sketched as CLI commands after this list)
  2. Follow the instructions in the FreeNAS manual to get the VM up and running...
  3. Create a 4GB VM
  4. Edit the VM settings and add three 100GB virtual disks, placing one on each of the three datastores.
  5. Boot the VM from the FreeNAS ISO and install onto the 4GB VM (the installer may make it look like it has found the ESXi flash drive on the server, if you run ESXi off a flash drive like I do, but it hasn't)
  6. Once installed and rebooted, the disks should be available in the FreeNAS GUI.
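For reference, a rough ESXi command-line equivalent of steps 1 and 4 (the vSphere client GUI does the same thing). The device paths, datastore names and the 'freenas' directory below are placeholder names for illustration only, and this assumes each disk already carries a single empty VMFS partition:

Code:
# Step 1: create one VMFS datastore per physical disk (run from the ESXi shell).
vmkfstools -C vmfs5 -S datastore_disk1 /vmfs/devices/disks/naa.DISK1:1
vmkfstools -C vmfs5 -S datastore_disk2 /vmfs/devices/disks/naa.DISK2:1
vmkfstools -C vmfs5 -S datastore_disk3 /vmfs/devices/disks/naa.DISK3:1

# Step 4: create one 100GB virtual disk on each datastore for the FreeNAS VM.
vmkfstools -c 100G -d zeroedthick /vmfs/volumes/datastore_disk1/freenas/freenas_data1.vmdk
vmkfstools -c 100G -d zeroedthick /vmfs/volumes/datastore_disk2/freenas/freenas_data2.vmdk
vmkfstools -c 100G -d zeroedthick /vmfs/volumes/datastore_disk3/freenas/freenas_data3.vmdk
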
To create a RAID set in FreeNAS:


  1. Volumes > Volume Manager
  2. Add the three disks and select the ZFS RAIDZ options
  3. Set the permissions: leave them at Unix ACL and tick all the Read/Write/Execute options (yes, not good for security, but this is for testing; see the sketch after this list)
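Roughly, the Volume Manager step boils down to the following (use the GUI for real, since creating the pool by hand bypasses the FreeNAS configuration database; the pool and device names here are assumptions, check yours with camcontrol devlist):

Code:
# Assumed names: pool 'tank', disks da1-da3 as seen inside the FreeNAS VM.
zpool create -m /mnt/tank tank raidz da1 da2 da3   # three-disk RAIDZ1 pool
zpool status tank                                  # confirm the raidz1 vdev and all three disks are ONLINE
chmod 0777 /mnt/tank                               # ticking every Read/Write/Execute box amounts to this (testing only)
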
Create a Share:


  1. Windows (CIFS) Shares
  2. Add Windows (CIFS) Share
  3. Path: choose the volume created above
  4. Allow guest access (a quick client-side check is sketched after this list)
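One quick way to confirm the share is reachable as a guest, with the hostname and share name below standing in for whatever was configured:

Code:
smbclient -L //freenas.local -N                # list the exported shares without a password
smbclient -N //freenas.local/archive -c 'ls'   # browse the share as guest
# or from a Windows client:  net use Z: \\freenas.local\archive
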
All done.

Cheers,
Tim.

Caveat: for a production environment I wouldn't put my balls on the line by doing it this way, but in a production environment you wouldn't need to; you'd have the resources for additional hardware. However, for a test/study/home environment, or a 'just to see', it appears to work great.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Not based on his description. And we've seen lots of such setups end in tears, which was the reason for the two virtualization stickies I authored. There are other dangers here too, like a 4GB VM?
 

Thecal

Cadet
Joined
Jan 23, 2014
Messages
6
The one thing I didn't see in the stickies (might have missed it) was how data loss/corruption happens in a setup like this.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
In the setup I outlined? Hasn't been observed. There are, however, more moving parts involved (interrupt routing, etc) for virtualization which could still result in badness. So lots of testing and paranoia are butt-savers.

In the original OP setup? Don't know/don't care. We just know it has happened. Feel free to be the next victim and do an in-depth analysis of what went wrong, 'cuz I'd love to know too.
 

Thecal

Cadet
Joined
Jan 23, 2014
Messages
6
Yeah, I definitely meant OP's setup.

Hell, I might throw it on a box, fill it with unimportant data, throw a decent amount of I/O at it, and try to figure out what happened when things go bad. I've done enough post-mortems at work that I should be pretty good at it by now. :(
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
Feel free. The real train wreck has been RDM, where RDM loses the mappings and then the other volumes seem to ... we don't know quite what, but losing a disk has been bad for the *other* disks.

But we've also observed that there seem to be problems with the OP's setup which you'd think would be just fine, amiright? A separate virtual disk on each of several datastores. Seems equivalent to a bare metal platform. But: Losing a datastore seems to make the disk device driver go insane, and my suspicion is that I/O is piling up or something awaiting the return of the datastore (which never happens). But it could also just be that people trying this are invariably trying it on like 4GB of memory which has other potential pool-loss risks.

That's what I know or guess at. I don't have the time to play hot-rod testing with our NAS storage; the NAS is supposed to be as solid as a Mack truck... it has to store data and not fscking LOSE it. Or it's worthless.

But I fully encourage and support anyone solving mysteries such as the 8GB RAM floor or how to make virtualization work better.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Note that the OP wrote that message in Feb 2013 and hasn't logged into the forums since that post.
 

snuffy

Dabbler
Joined
Aug 27, 2012
Messages
10
Hi, sorry for the delay, I didn't receive any notifications.

I agree with the guys on here: for a production environment I wouldn't put my balls on the line by doing it this way, but in a production environment you wouldn't need to; you'd have the resources for additional hardware. However, for a test/home environment, or a 'just to see', it works great. I still have this set up on my HP MicroServer at home. I don't get great speeds (but I think that's hardware-related rather than the fault of FreeNAS). I have all my stuff backed up, so no big deal if it all caves in.

No pass-through set up was configured. It just worked.

I'll see if I can update my original post to caveat the non-production environment I have it in.

Cheers,
Tim.
 

Thecal

Cadet
Joined
Jan 23, 2014
Messages
6
Interesting. How long has it worked and how much usage are you hitting it with?
 

snuffy

Dabbler
Joined
Aug 27, 2012
Messages
10
I've had it on there since I originally posted. Not a lot of use when I think about it. Mainly for long-term archive storage really, and I don't have it running 24/7. I tend to start it up when I need it and shut it down after. I moved on to other projects before I had time to test breaking it and recovering from 'failed' disks. As I said, I have a backup of everything, so not that critical. I don't have the room/finances for additional hardware, hence trying it like this.
 

Thecal

Cadet
Joined
Jan 23, 2014
Messages
6
Fair enough. I'm honestly looking at running hardware RAID and handing FreeNAS a single UFS volume.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
UFS support is about to be removed from FreeNAS. Because of that I wouldn't recommend you go to UFS at this time.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,681
I would expect that if you were to run ESXi with a RAID card, create a big datastore, and then create a single virtual disk for your data, this would have the same reliability characteristics as the underlying storage. As with any ESXi datastore, you need some way to monitor it for problems. You would retain ZFS's ability to detect bitrot, but since the virtualized hardware isolates ZFS from the underlying disks, it would have no ability to recover a redundant copy or repair the damage.

I would also expect that if you needed to go big, you could even do that with multiple virtual disks, and by laying them out as mirrored or RAIDZ, you would regain the ability to recover from bitrot.
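To make the comparison concrete, a small sketch of those two layouts as seen from inside the FreeNAS VM (device and pool names are made up):

Code:
# Layout 1: one big virtual disk backed by a hardware-RAID datastore.
# ZFS checksums still *detect* bitrot, but there is no second copy inside
# ZFS to repair from.
zpool create tank da1

# Layout 2 (alternative): several virtual disks arranged as RAIDZ or mirrors.
# ZFS holds its own redundancy again, so a scrub can repair detected bitrot.
zpool create tank raidz da1 da2 da3
zpool scrub tank
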

I am *pretty* sure, based on my observations over the last several years as the apparent virtualization guy, that what goes wrong for people in these setups is as outlined several messages above: a nonredundant datastore fails, ESXi blocks I/O to that datastore, and that causes FreeNAS to ... pause? SCSI bus lockup? something? At which point the user freaks, reboots the VM, fails, then reboots ESXi, and maybe some other things happen, possibly involving the user attempting/failing an improper restoration of the failed datastore, or possibly involving devices being reordered, or who knows, so that if they're able to bring it back up, something worse happens next, or they're just not able to get back into the VM.

Someone competent in ESXi and FreeNAS who experienced a failure could easily narrow this down further, of course. I chose not to be that person, because I've taken the pragmatic course of simply making the virtual platform more similar to the bare metal platform, which gives a whole bunch of recoverability benefits, and since I just needed reliable storage, ...
 