Building first FreeNAS


Zort86

Cadet
Joined
Aug 28, 2014
Messages
3
Hey guys, I am working on building my first FreeNAS system.
We are a mid-size company running ESXi 5.5 with iSCSI datastore connections for 98% of our environment.

I was looking for some recommendations on running FreeNAS 9.2 as a virtual machine on ESXi. I will be using it only to provide network shares to our users; I plan to share out about 4.5 TB of CIFS shares.

Thanks for any help or recommendations.
 

Zort86

Cadet
Joined
Aug 28, 2014
Messages
3
We are planning on installing a new Dell Compellent system. Currently we are using iSCSI to connect the ESXi hosts to the storage.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay, to be more clear, then, you're planning to put a FreeNAS VM on top of some Compellent-supplied datastore?

What I'm trying to get at is whether or not what you want to do is safe. The short form is that it is okay to run FreeNAS as a VM in certain configurations but not most of the configurations that your average home/hobbyist/hacker user wants to try. Specifically it is exceedingly risky to run a FreeNAS VM on top of nonredundant storage because ESXi can hang the I/O subsystem if your VM's disk becomes unavailable. Assuming your Compellent has redundancy configured on the datastore, that becomes much less likely.

Confirm for me that that's what you're planning and I will discuss the next part of the equation.
 

Zort86

Cadet
Joined
Aug 28, 2014
Messages
3
Ok, got ya.
Well, our environment consists of seven ESXi 5.0 hosts (only because our servers are not on the HCL for 5.5) managed by vCenter 5.5 and configured in a vSphere datacenter. All the hosts have iSCSI connections to the LUNs presented by the storage (currently an older EMC).
I have vMotion turned on to move guest systems between the hosts.

From a networking/connectivity standpoint, we have 2 iSCSI switches with at least 2 iSCSI connections coming from each host, one connection to each switch. We also have 6 iSCSI connections coming from the storage (EMC), 3 to one switch and 3 to the other.
We are also running 2 ToR network switches. Each host has 2 vMotion network connections and 2 network access connections, spread across the two ToR switches.

We are currently running a CIFS server from the EMC, but we plan to replace it with the new Dell solution. We will have the same connectivity from the storage to the hosts (6 iSCSI connections from the Dell storage, split between two dedicated iSCSI switches).

Thanks
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Okay, so that didn't really answer the question I was asking but it indicates that the answer has gotta be "obviously." I was actually interested in redundancy of the storage on the Compellent. But by the time you've gone to all that work I assume that you're not just exporting JBOD from the Compellent. So you've got RAID-protected storage.

So pay careful attention here because there's a potential gotcha.

ZFS has its own integrity protection. You CAN make a single vmdk disk, share that out using a FreeNAS VM, and it'll seem to be all good. This is most likely what you had in mind, right?

The problem is that if ZFS detects a checksum error in the data, it has no redundancy of its own from which to reconstruct it. It cannot dig through to the Compellent and get the original data blocks to figure it out. So instead it returns an error. This is probably not what you intend.
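To make that concrete, here's a rough sketch of what zpool status -v reports when a single-vdev pool hits unrecoverable corruption (the pool name, device name, and file path here are just made up for illustration):

Code:
# zpool status -v tank
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          da1       ONLINE       0     0     2

errors: Permanent errors have been detected in the following files:

        /mnt/tank/shares/report.docx

The affected file just comes back to the CIFS client as an I/O error, and there is nothing ZFS can do to repair it.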

So what you really want to do is to make two vmdks and mirror them together, giving ZFS a way to recover data if there's bitrot. This may seem like a waste to you since it is already sitting on top of RAID-protected storage, but it is really the best way to go.
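Here's a minimal sketch of that layout, assuming the two vmdk-backed disks show up inside the FreeNAS VM as da1 and da2 and you call the pool "tank" (in FreeNAS 9.2 you'd normally build this through the GUI volume manager, but the equivalent pool looks like this):

Code:
# create the pool as a two-way mirror of the two virtual disks
zpool create tank mirror /dev/da1 /dev/da2

# confirm both sides of the mirror are online
zpool status tank

# periodic scrubs let ZFS find bitrot and repair it from the other copy
zpool scrub tank

With the mirror in place, a checksum failure on one vmdk gets repaired from the second copy instead of being handed back to the user as a read error.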

And if that makes perfect sense to you, then yes, in my opinion at least, all your bases are covered and this is completely fine to do.
 