FreeNAS Project: Data Migration from HFS+

Status
Not open for further replies.

maverix

Cadet
Joined
Jan 25, 2012
Messages
6
DISCLAIMER: I have not used FreeNAS in any way, shape, or form. Please do not harangue me for asking questions that may seem dumb. I just need to see if FreeNAS is the tool for the job.


Hello,

I have a few questions about the FreeNAS server architecture as it relates to the project I am currently working on. Right now I have an ESXi 5.0.0 deployment which manages my SAN (5 TB RAID 5 + 1 hot spare), so FreeNAS would be in a VM.

I have a Mac OS X server that we would like to migrate from, serving the data from a SAN instead so that it is faster and more secure. However, the Marketing department has funny naming conventions, and when a previous attempt was made to transfer the files, some of the names came out different and got all messed up.

I would generally turn to Ubuntu Server to serve up the large share, but Ubuntu doesn't support HFS+ shares above 2 TB due to instability. I was wondering if ZFS would mess up the names of the files and folders transferred from an HFS+ file system. I know this may be an "I don't know, why don't you try it" type of thread, but I was just wondering if anyone else has been in this situation and could offer some advice.

Also, VMware only supports VMDK disks up to 2 TB. I would like to use LVM to "merge" the disks together and create one big share, so that the users only see a 5 TB pool and not 3 disks that they can save to. Does FreeNAS support LVM?

Thank you!

Jake
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
Couple of things:
1) Virtualizing IO is generally a bad idea. FreeNAS is all about IO. I can't really come up with a good reason to do it, especially when an old desktop or the like can usually be found lying around already.
2) You would need to either test the file names that are problematic, or provide some specific examples, to get any kind of useful answer to that.
 

maverix

Cadet
Joined
Jan 25, 2012
Messages
6
So.

First things first.

The IO should not be an issue. Eight paths of iSCSI over Cat 6 cabling going directly into the host from the SAN provide a lot of IO and don't leave room for a bottleneck. I would be running FreeNAS in ESXi 5 on a Dell PowerEdge 1950 with 16 GB of RAM and 2 processors.


I don't understand the comment:

"especially with the cost of an old desktop or the like being around already."

This would be for an enterprise production setup and not run on a flimsy old desktop. Are you not recommending FreeNAS for enterprise setups?

Help me understand.
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
The number of connections to your storage backend doesn't matter. What I'm talking about is that you are virtualizing the IO between FreeNAS and the disk. This is not awesome. Very inefficient. There are no para-virt drivers for FreeBSD with VMware, so you will get slow IO performance no matter how much hardware you throw at the problem. You need a physical solution, not a virtual solution. I played with a setup similar to this (started with an EqualLogic with 6x gig-e connections, then moved to an EMC CX4 with 8G FC). The problem is not the speed of the hardware; it's virtualizing the IO. The whole point of FreeNAS is to provide IO, and virtualizing that slows it down (paravirt drivers would be much faster, but they don't exist).
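If you want to put a number on that overhead, compare a raw write inside the VM against the same test on bare metal. A minimal sketch, assuming a pool mounted at /mnt/tank (the path and file size are hypothetical; make the file bigger than RAM so caching doesn't flatter the result, and disable compression on the test dataset or the zeroes will compress away):

Code:
# Write 8 GiB and report throughput; run on both setups and compare
dd if=/dev/zero of=/mnt/tank/ddtest bs=1M count=8192
rm /mnt/tank/ddtest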


If you want enterprise, you will want some kind of support contract in case there is a problem with your storage, no? I would certainly get a support contract for any storage I put into play at the enterprise level. That being said, I would get a TrueNAS appliance (http://www.ixsystems.com/truenas) and be done with it. You get redundant heads, dedupe, and their support is awesome. You can't go wrong with them.

(No, I don't work for them, but I have been buying their hardware for about 6 years now, across 3 companies.)
 

maverix

Cadet
Joined
Jan 25, 2012
Messages
6
Thank you for elaborating.

So I have 2 development environments now.

One is VMware with FreeNAS installed.
One is FreeNAS installed on a physical server.

I have a DELL MD3220i SAN with 8 iSCSI paths ready and available.

Can these 8 paths connect to the Physical FreeNAS Server and the SAN be configured as a ZFS volume?

The way I have my VMware FreeNAS setup: I have 3 VMDK disks added directly to the FreeNAS VM and then added as 1 ZFS volume in FreeNAS. This worked, by the way, when transferring the data from HFS+ (no names were changed and all the metadata was preserved).
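For reference, what the GUI builds from those three disks is essentially a striped pool; from the shell the equivalent would be roughly the sketch below (the pool name and the da1-da3 device names are assumptions for my setup):

Code:
# Stripe three disks into one pool: capacity adds up, but there is no
# redundancy at the ZFS level (that comes from the SAN's RAID 5 below)
zpool create tank da1 da2 da3
zpool status tank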

So I know that ZFS works for this project. I posted on the VMware forums asking about the paravirtual drivers, just to see if anyone had any experience with this as well.

Someone came back with this:

http://forums.freebsd.org/showthread.php?t=27899

I added the following to my loader.conf:

Code:
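# /boot/loader.conf tunables: turn off MSI/MSI-X interrupt delivery for
# PCI devices (the workaround suggested in the linked FreeBSD thread)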
hw.pci.enable_msi="0"
hw.pci.enable_msix="0"

Has anyone tried this with FreeNAS?
 

louisk

Patron
Joined
Aug 10, 2011
Messages
441
To my knowledge, FreeNAS does not (yet) support being an initiator, only a target. You would need a (supported) iSCSI HBA, and then it should work fine.

I know you can run FreeNAS as a VM, I've done it. What I don't think you'll be able to do is get high performance from it, for example, saturate a 1G network.

I'm glad they're working on paravirt, but in this case, it requires your IO to be going through an LSI card which handles local storage.
 

maverix

Cadet
Joined
Jan 25, 2012
Messages
6
Hmm, so in my case running physical is out of the picture. I really appreciate all your input; it has really made a difference in this project.

Good news is that the 5.1 TB on the SAN won't be directly accessed by clients.

The clients will connect to a share on local hardware RAID 1 drives (600 GB 10K SAS) and work from there. I could make this an Ubuntu server with Netatalk and AFP; it really depends on the performance of FreeNAS in this setting.

Plus, I could make the share HFS+, which the OS X clients will love.
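If I go the Ubuntu/Netatalk route, the AFP share itself is only a couple of lines. A rough sketch for Netatalk 2.x, with a hypothetical path and volume name:

Code:
# AppleVolumes.default: export /srv/marketing over AFP as "Marketing",
# honoring Unix permissions and allowing dot-files
/srv/marketing "Marketing" options:usedots,upriv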

Then at night, incremental backups will be taken and rsync'd to the FreeNAS ZFS store.
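Something like this nightly cron entry is what I have in mind (the paths, host, and schedule are placeholders; the -H/-A/-X flags are there to carry hard links, ACLs, and extended attributes across):

Code:
# 02:00 nightly: mirror the working share to the ZFS dataset on FreeNAS
0 2 * * * rsync -aHAX --delete /srv/marketing/ backup@freenas:/mnt/tank/archive/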

I'm debating buying some 1 TB drives to use as the archive and using the 5.1 TB as the working drive, but for now the 600 GB local drive will be good.

Also!

I switched the LSI SCSI controller in ESXi 5 from LSI Logic Parallel to LSI Logic SAS and saw a significant increase in IO when writing to disk.
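For anyone who wants to confirm which virtual controller the FreeNAS VM actually picked up after a change like this, the boot messages show it (FreeBSD's mpt(4) driver covers both of VMware's LSI variants):

Code:
# Show how the virtual storage controller was probed at boot
dmesg | grep -i mpt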

Thanks again!
 