Please do not run FreeNAS in production as a Virtual Machine!


jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
I'm all for learning. The request is to not run FreeNAS in production as a VM. In the N00bs forum, on top of it. So it is a limited warning in some ways.

If Xen is allowing the use of random Linux variants as the dom0/service console, then, yes, you might be able to go that route.

But let me just say something here. You can run ZFS on Linux. Yes. But part of what makes FreeNAS and some of the other NAS appliances special is that they're designed to be fully baked storage systems. You are presumably wanting to use ZFS for its data protection capabilities. I'm guessing this is because you love and value your data and don't want it to vanish.

FreeNAS, properly done, assists in that by providing facilities to configure ZFS, monitor ZFS, run periodic scrubs, set up SMART reporting, snapshot itself, replicate itself, and let you know when something's gone wrong.

Linux, or FreeBSD, or Solaris, they don't necessarily do those things out of the box. Sure, you /can/ configure most of that, but it can be kind of a pain. Without a structure around you, it may be difficult to know what to do!
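
Just to make it concrete, the hand-rolled equivalent on a plain FreeBSD or Linux box tends to end up as a handful of cron entries like the sketch below (FreeBSD-style paths; the pool name "tank", the disk device, and the mail address are placeholders, not anything FreeNAS itself generates):

  # root's crontab -- a rough sketch of the plumbing FreeNAS sets up for you
  # weekly scrub, Sunday 03:00 (pool name is an example)
  0 3 * * 0  /sbin/zpool scrub tank
  # daily pool health check; mail only if something is wrong
  0 8 * * *  /sbin/zpool status -x | grep -qv 'all pools are healthy' && /sbin/zpool status | mail -s "zpool problem" admin@example.com
  # weekly long SMART self-test on one disk (device is an example)
  0 4 * * 6  /usr/local/sbin/smartctl -t long /dev/ada0

And that still leaves snapshots, replication, and the reporting and alerting that the appliance wires up for you.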

A major reason the senior people sit here and advise n00bs not to do this and not to do that is basically that we don't want you to lose your data. We've seen too many people lose data. WITH FreeNAS. FreeNAS isn't a guarantee of no data loss. It is just an appliance, and can still be inadvertently sabotaged. We're not trying to be discouraging. We're just trying to be responsible with the advice that we hand out.

So, I've explained all that so that I can make the following statement without being taken the wrong way: I would be skeptical of taking some random Linux distro and running it as a Xen dom0 to "provide ZFS facilities," because that is unlikely to be as polished and reliable a solution as FreeNAS, and your desire to use ZFS suggests that you are seeking reliability instead of ease. Your suggestion seems contradictory to me.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Just trying to get my head around all this.

What I would /like/ to have is a single box which runs both Linux and Windows under a hypervisor but also provides NAS functions, maybe limited to providing a ZFS filesystem to those Linux and Windows guests.

Does the advice not to run FreeNAS in a virtual machine apply to running it as dom0 under Xen?


If you want to have any chance of success I suggest trying out the free version of ESXi. You can run FreeNAS very successfully under it, but make sure you read and understand the link that CJ just posted first.
 

rbanaco

Cadet
Joined
Oct 15, 2012
Messages
6
Hello there!

First of all, thank you very much for this post!
Unfortunately I didn't know that RDM was a bad idea! With this in mind, and since so far I haven't run into any problems with my data, I want to save it! Can I install FreeNAS directly now? Will it recognize my RDM data, or do I need to format my drives? I don't have anywhere to store the almost 3 TB that I have... Thanks for the help!
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
I'm all for learning. The request is to not run FreeNAS in production as a VM. In the N00bs forum, on top of it. So it is a limited warning in some ways.

If Xen is allowing the use of random Linux variants as the dom0/service console, then, yes, you might be able to go that route.

But let me just say something here. You can run ZFS on Linux. Yes. But part of what makes FreeNAS and some of the other NAS appliances special is that they're designed to be fully baked storage systems. You are presumably wanting to use ZFS for its data protection capabilities. I'm guessing this is because you love and value your data and don't want it to vanish.

FreeNAS, properly done, assists in that by providing facilities to configure ZFS, monitor ZFS, run periodic scrubs, set up SMART reporting, snapshot itself, replicate itself, and let you know when something's gone wrong.

Linux, or FreeBSD, or Solaris, they don't necessarily do those things out of the box. Sure, you /can/ configure most of that, but it can be kind of a pain. Without a structure around you, it may be difficult to know what to do!

A major reason the senior people sit here and advise n00bs not to do this and not to do that is basically that we don't want you to lose your data. We've seen too many people lose data. WITH FreeNAS. FreeNAS isn't a guarantee of no data loss. It is just an appliance, and can still be inadvertently sabotaged. We're not trying to be discouraging. We're just trying to be responsible with the advice that we hand out.

So, I've explained all that so that I can make the following statement without being taken the wrong way: I would be skeptical of taking some random Linux distro and running it as a Xen dom0 to "provide ZFS facilities," because that is unlikely to be as polished and reliable a solution as FreeNAS, and your desire to use ZFS suggests that you are seeking reliability instead of ease. Your suggestion seems contradictory to me.

Thanks jgreco for your words of advice. I do appreciate them. Let me back up and explain my thinking here.

I've lost several drives over the last couple of years and had some RAM go bad on me, so I'm looking for solutions to make handling this easier in the future. The first step is obviously a FreeNAS box running ZFS, but that can't be the end of the story: it's going to need a secondary remote FreeNAS for backup (a problem I'll think about later), and then there's the issue of how I work with the data on the NAS.

At the moment I run a Windows PC (4 TB NTFS) to support a number of Windows-only photographic applications. I could, in theory, move all the data apart from the OS disk onto the NAS, and it looks like I might be able to get adequate response time from that. But I move around a bit, a problem I solve by having two identical PCs and transporting just the hard drives between them. If I had to have two primary NAS boxes as well, it would seriously complicate my life. Hence my pondering whether I could have my principal PC running Windows /under/ something which supported ZFS without issues, and just the remote FreeNAS for backup.
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
If you want to have any chance of success I suggest trying out the free version of ESXi. You can run FreeNAS very successfully under it, but make sure you read and understand the link that CJ just posted first.


I'm not familiar with ESXi but I'll check it out. Thanks.

i
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
BTW, I've just seen that FreeBSD 10, expected in 2014, will be able to run as a Xen dom0.

That may be, but that's no guarantee FreeNAS will run as dom0. I'd be shocked if that was implemented in FreeNAS because of the small developer group. That is, unless it is absolutely trivial to implement (which I wouldn't hold my breath on).
 

IanWorthington

Contributor
Joined
Sep 13, 2013
Messages
144
That may be, but that's no guarantee FreeNAS will run as dom0. I'd be shocked if that was implemented in FreeNAS because of the small developer group. That is, unless it is absolutely trivial to implement (which I wouldn't hold my breath on).

Ah. So FreeNAS doesn't automatically pick up all the capabilities of FreeBSD?
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
It does in some circumstances. But generally anything that isn't trivially implemented in FreeBSD is not part of FreeNAS. Don't get me wrong, I'm sure a support ticket will be created the second 10.0 hits the market. But I wouldn't expect it to be implemented right away, if ever. Developer resources are short in the FreeNAS project, so time is devoted first to bugs that are serious, bugs that are experienced by a lot of users, and features that a large number of users want. I've only seen one person that I know of even ask about running FreeNAS on Xen, so clearly that user base is very small. ESXi, on the other hand, sees lots of users here, and there are plenty of ESXi requests that are a year+ old and still open. So don't hold your breath on it.

Of course, all it takes is that one guy who really, really wants FreeNAS to run on their Xen system to code it in and upload it to git for the feature to be available.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
FreeBSD is an entire operating system, including a full kernel and userland. It is full of extremely useful and also largely useless things. You probably don't need the USB drivers for a 3G wireless radio built into your FreeBSD box, or drivers for old ISA network cards, right? Well, there are entire classes of things that are simply not NAS-suitable. Further, most of the userland gets stripped away as well, because who really needs a C compiler on an appliance box?

So yes, a lot of "maybe useful to someone somewhere someday" gets tossed out when building FreeNAS.
 

deasmi

Dabbler
Joined
Mar 21, 2013
Messages
14
Just in case it helps anyone, I thought I'd post my working all-in-one build. I spent a lot of time researching this before going ahead, and it has been totally stable (touch wood) since February.

Motherboard: TYAN S5512
  This is on the VMware HCL
  VT-d/x support
  32GB RAM - matched to the board by the reseller
  Very difficult to source in the UK sadly, took a while to get one.
  IPMI 2.0 with KVM, which is great as you can see the console over a dedicated network port, restart, do power management, etc.
  2x GigE
IBM ServeRAID M1015 - reflashed to LSI 9211-IT (straight-through mode, no RAID)
2 additional Intel GigE cards, again on the HCL
Xeon 1240-v2
128GB SSD boot drive for ESXi and the FreeNAS VM
4x 500GB WD RE2 drives as a ZFS striped mirror - for VMs
2x 3TB WD Red drives as a ZFS mirror - for data
Corsair H60 cooler & Corsair TX750 PSU
Fractal Design R4 case

I set up the FreeNAS VM to use 6GB of RAM; 8GB doesn't seem to make any noticeable difference, though I need to find time to benchmark properly one day. There is only 4TB of total storage, after all.

ESXi is configured to boot the FreeNAS instance first and then wait 2 minutes, by which time its NFS share is visible to ESXi.
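
For reference, mounting the FreeNAS NFS export as an ESXi datastore from the ESXi shell looks roughly like the commands below; the address, export path and datastore name are made-up examples, not my actual values:

  # attach the FreeNAS NFS export as a datastore (host/share/name are examples)
  esxcli storage nfs add --host 192.168.1.10 --share /mnt/tank/vmstore --volume-name freenas-nfs
  # confirm it mounted
  esxcli storage nfs list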

L2ARC and ZIL are VM disk images on the SSD. I know this is not ideal at all, but the performance hit is worth it until I can get another SSD. I still have 2 channels free on the M1015, so I will add an SSD later.
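
For anyone copying this, attaching those devices from the shell is just the following; "tank" and the da* device names are placeholders for your own pool and (virtual) disks, and a mirrored SLOG on dedicated SSDs would be the better end state:

  # add a separate intent log (SLOG) and a cache (L2ARC) device to the pool
  zpool add tank log da2
  zpool add tank cache da3
  # verify the new layout
  zpool status tank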

The only issues I've had are with iSCSI presentation to Linux hosts, but this is a common problem with non-virtualised boxes as well: when the ZIL is being flushed it can cause iSCSI timeouts.

Originally this was ESXi 5.0, then 5.1, now 5.5. I will be trying the new vGPU support soon, I hope, to give a native graphics card to a Windows VM.

If anyone has any questions I'm more than happy to answer them.

Edit: vCPu -> vGPU
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
There's no point in L2ARC; it is just going to stress your ARC.
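
If you want to see that for yourself before deciding, the L2ARC's headers are carried in the ARC, and a FreeBSD-based FreeNAS install exposes the relevant counters via sysctl (standard arcstats names, values in bytes):

  # current ARC size and the RAM consumed by L2ARC headers
  sysctl kstat.zfs.misc.arcstats.size
  sysctl kstat.zfs.misc.arcstats.l2_hdr_size
  # ARC hit/miss counters, to judge whether more caching would even help
  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses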
 

hpnas

Dabbler
Joined
May 13, 2012
Messages
29
Great write-up on your VMware configuration! However, what was the rationale for having your boot drive on an SSD?

Sent from my Nexus 7 using Tapatalk
 

vegaman

Explorer
Joined
Sep 25, 2013
Messages
58
Great write-up on your VMware configuration! However, what was the rationale for having your boot drive on an SSD?

Sent from my Nexus 7 using Tapatalk
ESXi doesn't support using a USB stick for datastores, so that means an extra drive for that; it might as well be an SSD, since it can get you some extra performance and a spinner's probably going to be way bigger than you need.
Easier to mount somewhere without wasting a 3.5" bay too :-D
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
ESXi doesn't support using a USB stick for datastores, so that means an extra drive for that; it might as well be an SSD, since it can get you some extra performance and a spinner's probably going to be way bigger than you need.
Easier to mount somewhere without wasting a 3.5" bay too :-D

Bingo... I'm doing the same thing. Sure, it's a bit of a waste, but so is 3.5" spinning rust.

Plus, since I'm skipping RAID for the ESXi boot drive, I feel better knowing that a good quality SSD is less likely to die on me than some spinning rust.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Well:

If you look at http://www.tomshardware.com/reviews/ssd-reliability-failure-rate,2923.html, they concluded that SSDs have a 1.5% failure rate while hard drives are at about 5%.

If you look at SSD rates at http://www.behardware.com/articles/881-7/components-returns-rates-7.html and HDD rates at http://www.behardware.com/articles/881-6/components-returns-rates-6.html, it's very mixed. Some SSDs (OCZ's) have up to a 40% failure rate (gee... I wonder why they're bankrupt...)! WD HDDs have the lowest at 1.48%, but the best SSD (Intel) beats them with 0.45%.

I know that I have all SSDs in all of my machines (except the server) and they are all at least 3 years old. I don't baby them at all, and they all claim to have an EOL estimate of at least 2017. Before SSDs I'd average about 2 hard drive failures a year among all of the machines in my house. So from my (albeit small) sample I totally take SSDs as being more reliable than HDDs.
 

pbucher

Contributor
Joined
Oct 15, 2012
Messages
180
Also, just to clarify things, I'm not keeping anything on the boot disks that will ruin my day if it dies (hence why I saved the money on a 2nd SSD & a RAID controller). Just recently, when I pulled the RAIDed HDDs and put in a blank SSD, it took less than an hour per server to bring the server back up, add it to my vCenter server & configure it. Now if you are in an environment where losing your ESXi boot drive will ruin your life, then RAID it, SSD or HDD; personally I felt I had better uses for a good RAID card, and using an Intel SSD was safe enough for what it will cost me if it dies.
 

loesje

Cadet
Joined
Nov 25, 2013
Messages
4
Jgreco started a very interesting thread on virtualisation.
A lot has already been said about it.
But I will point back to the beginning of this thread, where jgreco mentioned:

1. FreeNAS is designed to run on bare metal
And so is ESXi

As all the members said, yes, virtualisation is possible. It is not too hard to set up raw disks, and yes, it works fantastically, when nothing goes wrong. I have done it with NAS4Free on a USB stick.

But why do you want to do this virtualisation?
Well, because you have all the hardware in front of you, with an operating system on it, and you won't buy a new box for FreeNAS.
You probably just want to add some disks.

Well, in that case, in a home situation, you mount them in your existing OS. And if you insist, you can do logical volume management and/or RAID in that OS.
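
On a typical Linux install, that would be something along the lines of the sketch below; the device names, volume group name and sizes are placeholders, and this is only to show what "doing it in the existing OS" means, not a recommendation over ZFS:

  # mirror two disks with md RAID, then carve logical volumes out of it
  # /dev/sdb and /dev/sdc are placeholder devices -- check yours first
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
  pvcreate /dev/md0
  vgcreate datavg /dev/md0
  lvcreate -L 500G -n photos datavg
  mkfs.ext4 /dev/datavg/photos
  mkdir -p /mnt/photos
  mount /dev/datavg/photos /mnt/photos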

As jgreco said further:
However, its cache is your system's RAM.
Because of that, ZFS needs ECC memory and a motherboard that supports it.
So you cannot just pick up some hardware lying around in the corner of your room. And, as jgreco stated, get as much memory as you can afford. ZFS will eat it all.
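
If you do run ZFS alongside other workloads on the same box, it is worth knowing the knob that keeps it from eating everything: on FreeBSD that is the vfs.zfs.arc_max loader tunable. The value below is only an example; size it for your own machine:

  # /boot/loader.conf -- cap the ZFS ARC so the rest of the system keeps some RAM
  # 8589934592 bytes = 8 GiB, an example value only
  vfs.zfs.arc_max="8589934592"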

So in my opinion, when you are in a home situation with maybe 8 disks, your existing OS can do it reasonably well.
But when you have to manage many more disks (in a production environment), buy yourself decent new hardware, and then FreeNAS is in its place, running on bare metal.
 

hpnas

Dabbler
Joined
May 13, 2012
Messages
29
So if you had 3 disks, you don't see any issues running ESXi and FreeNAS?

Sent from my Nexus 7 using Tapatalk
 