FreeNAS + ESXi5.5 + HP Microserver G8


imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
Hi,
I am planning to use FreeNAS to replace my ageing NAS box (D-Link - please don't laugh) at home. Since I've got a small server, I am hoping to "share" its resources.

The hardware:
- HP Microserver G8, with Xeon 1220, 16GB ECC memory, and its built-in RAID card - HP Dynamic Smart Array B120i Controller
- to add another 2x 3TB drives for FreeNAS usage
- Running ESXi5.5
The plan
- using the RAID card, I will create a new RAID1 volume with the 2 new drives. Not going to use PCI passthrough
- In ESXi, the new drive will be added as a VMFS5 datastore
- Install FreeNAS on one of the local drives (not the new 3TB drives)
- Allocate the new 3TB drive using "Thick Provision Lazy Zeroed" and then configure FreeNAS to use it as its storage (rough sketch below)
- Will not use FreeNAS ZFS
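For reference, this is roughly how I'm assuming the lazy-zeroed data disk gets created on the ESXi side. Everything below - the datastore path, the folder name, and the size - is a placeholder, and the Python wrapper is purely for illustration:

```python
# Rough sketch only, not a tested recipe: create the data vmdk from the ESXi shell.
import subprocess

datastore = "/vmfs/volumes/datastore1"                # placeholder datastore path
vmdk = datastore + "/freenas-data/freenas-data.vmdk"  # placeholder folder/file name

# "-d zeroedthick" is what the vSphere client calls "Thick Provision Lazy Zeroed"
subprocess.run(["vmkfstools", "-c", "2700g", "-d", "zeroedthick", vmdk], check=True)
```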
Sanity check:
1. Is this feasible at all? It's my home NAS, with important data to me
2. I understand ZFS is wonderful, but I've got "hardware" RAID. Can I assume hardware RAID is more stable?
3. What is the implication of having VMFS5 between the hardware and FreeNAS?
4. Since I do not use ZFS, does it make sense to allocate less memory? 4GB?
Appreciate your advice and sorry if this is a ridiculous plan
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525

imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
Ouch, that's harsh. But it's a forum, so I understand. I RTFM'd the link earlier and this is what I thought:
  1. FreeNAS is designed to run on bare metal, without any clever storage systems (UNIX/VMFS filesystem layers, RAID card caches, etc!) getting in the way. Think about this: ZFS is designed to implement the functionality of a RAID controller. However, its cache is your system's RAM, and its processor is your system's CPU, both of which are probably a lot larger and faster than your hardware RAID controller's cache! - But I've got a hardware RAID card, so I do not really need the intelligence of ZFS
  2. Without direct access to the hard drives, FreeNAS lacks the ability to read SMART data and identify other developing problems or storage failures. - I understand my RAID card will handle this (see the sketch after this list)
  3. A lot of the power of FreeNAS comes from ZFS. Passing a single virtual disk to ZFS to be shared out via FreeNAS is relatively safe, except that ZFS will only be able to detect and not actually correct any errors that are found, even if there is redundancy in the underlying storage. - same as #2 above
  4. There is a great temptation to create multiple virtual disks on top of nonredundant datastores in order to gain "MOAR SPACE!!!". This is dangerous. Some specific issues to concern yourself with: The data is unretrievable without the hypervisor software, the hypervisor might be reordering data on the way out (which makes the pool at least temporarily inconsistent), and the hypervisor almost certainly handles device failures non-gracefully, resulting in problems from locked up VM to unbootable VM, plus interesting challenges once you've replaced the failed device. - which is why I am dedicating the new drive only to FreeNAS, plus it is redundant because of the RAID configuration
  5. Passing your hard disks to ZFS as RDM to gain the benefits of ZFS *and* virtualization seems like it would make sense, except that the actual experiences of FreeNAS users is that this works great, right up until something bad happens, at which point usually more wrong things happen, and it becomes a nightmare scenario to work out what has happened with RDM, and in many instances, users have lost their pool. VMware does not support using RDM in this manner, and relying on hacking up your VM config file to force it to happen is dangerous and risky. - irrelevant. not using RDM
  6. FreeNAS with hardware PCI passthrough of the storage controller (Intel VT-d) is a smart idea, as it actually addresses the three points above. However, PCI passthrough on most consumer and prosumer grade motherboards is unlikely to work reliably. VT-d for your storage controller is dangerous and risky to your pool. A few server manufacturers seem to have a handle on making this work correctly, but do NOT assume that your non-server-grade board will reliably support this (even if it appears to). - irrelevant too
  7. Virtualization tempts people to under-resource a FreeNAS instance. FreeNAS can, and will, use as much RAM as you throw at it, for example. Making a 4GB FreeNAS VM may leave you 12GB for other VM's, but is placing your FreeNAS at a dangerously low amount of RAM. 8GB is the floor, the minimum. - true. However, the question is: since I do not use ZFS, can I allocate less memory? I've got a poor man's server
  8. The vast majority of wannabe-virtualizers seem to want to run FreeNAS in order to provide additional reliable VM storage. Great idea, except that virtualization software typically wants its datastores to all be available prior to powering on VM's, which creates a bootstrap paradox. Put simply, this doesn't work, at least not without lots of manual intervention, timeouts during rebooting, and other headaches. (2013 note, ESXi 5.5 may offer a way around this.) - irrelevant since I'm planning to use FreeNAS to store pictures and stuff
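To check my own understanding of #2 and #3: this is roughly the kind of per-disk health poll that smartd/FreeNAS would be doing on bare metal, and that a vmdk can't answer. The device names are assumptions and the snippet is only a sketch:

```python
# Minimal sketch of a per-disk SMART health check; against a vmdk there is no
# real SMART data to read, which is the point made in #2 above.
import subprocess

disks = ["/dev/ada0", "/dev/ada1"]                   # hypothetical FreeBSD device names

for disk in disks:
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True)
    lines = result.stdout.strip().splitlines()
    # the last line of "smartctl -H" output is the overall health assessment
    print(disk, "->", lines[-1] if lines else "no SMART data available")
```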
Appreciate any feedback. Constructive preferred.. sorry
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
I'm sorry, but I'm not going to explain this in too much detail. But, I'll say this.

Many have tried various things, just like you're trying to figure out how to avoid the "problems". It seems to always work, to a point. Unless you plan to keep daily religious backups, I wouldn't even consider doing it your way. Even the people that don't read these warnings usually have success at first. Then they shut down the ESXi server for a weekend while they go on vacation and come back to realize it's just not working anymore.. with no reason to be found. We don't even understand why it works, then suddenly ceases to work.

UFS is about to go bye-bye. And x86 support is about to go bye-bye as well. So your dreams of a VM with 2GB of RAM are about to go out the door. x64 pretty much requires 4GB of RAM even if you were wanting to run UFS. x64 code and software uses more RAM, even for the same tasks being performed with x86 code. That's the reality of it.

I'd never even consider what you are doing, nor would I ever do it even if you paid me to do it. It's not worth tarnishing my name with someone that lost data because they wanted to do what isn't recommended. It's like you asking me to build you a house, then demanding that the foundation be made of styrofoam. No homebuilder would do it for any kind of money. They don't want their name being tarnished with that kind of irresponsibility.

Here on the forums, we take a strict stance with VMs. Either you don't do it, or you are totally alone on the island if something goes wrong. We have no guilt about seeing you cry that your data isn't mounting and ignoring, closing, or deleting your post regarding it. We don't recommend it, we don't sanction it, and we don't even give the illusion that we support it. We don't do anything with VMs at all. If you think you can handle anything that *might* go wrong, have a go at it. But, I'll warn you from an experienced person to a noob person... this is going to end with spilt tears at some point. It might not be today, and it might not even be tomorrow. But inevitably, things go bad and you are going to look like a clown when you show up in IRC or the forums asking for help. We'll laugh at you, point you to the sticky that says "don't do it", and leave you to your own misery.

We've created that sticky I linked to in hopes people listen. Most do, some don't. They think they have some bulletproof solution that ignores all of the complaints we're discussing. The problem is that our warnings often only say "don't do this" and don't give an actual technical basis for why these setups fail. We just know that some methods don't work as well as others. Hardware seems to be a big factor, and there's no hardware that's been proven 100% safe because we don't know the mechanism for the failure modes. Doing FreeNAS in a VM, even if you do everything exactly how we tell you to, still carries significant risk. I never recommend VMs for data that is important or where uptime matters at all.
 

imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
Hi cyberjock, thanks for trying to explain. I've got my old NAS as my FreeNAS mirror/backup.

Can you possibly point me to some posts where users had issues after running FreeNAS in ESXi for a couple of weeks or months? I had to take this path since I can't afford a dedicated box. Understanding the problems they faced can hopefully guide me to avoid the pitfalls.

I'm also trying to understand your point above. Are you saying ESXi is unstable? Or that it does not work well with FreeNAS due to FreeNAS' architecture?

I believe virtualisation is the way of the future, and having a dedicated device performing a dedicated function (especially storage) seems primitive?

Sorry if my question annoys you
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
Yeah.. I've got no links specifically. I basically close my browser the second I see someone even mention using FreeNAS in a VM. Feel free to search for your own though. Frankly, the fact that I even read the rest of your post once you mentioned ESXi was strictly by chance. I'll never understand why someone wants a "very reliable server" and then does something so stupid and unreliable as putting it in a VM. There's just zero logic in it on any planet I'm from.

I'm saying ESXi doesn't seem to play well with FreeNAS. No clue if the problem is with FreeNAS or ESXi. But, the symptoms people have when using FreeNAS on ESXi have never happened on bare metal, so the only information I can give with absolute accuracy is that "the relationship between FreeNAS and ESXi could use some work". The FreeNAS devs aren't trying to find the problem because frankly, we don't care. If someone found the problem and provided a patch, it would probably be accepted. But since ESXi is very secretive about how their stuff works internally, there's about 0% chance of that happening. ZFS was designed for bare metal, and 99% of users use ZFS. Hence we're walking away from x86 and UFS because x86 isn't an option for ZFS users anyway.

Virtualization is the future... for some situations. It will never cover all situations and there are significant limitations that aren't going to be technologically overcome in the next 10-20 years. File servers are one of those situations that aren't going to be overcome anytime soon. Especially if you want to use ZFS safely. So there will always be dedicated machines. I think you should understand this and maybe you'll be a little less "gung-ho" about virtualizing everything. Frankly, just the fact you want to virtualize your data storage is a clear indication in my book that you do not understand all of the relationships between the hardware and the software. But that's okay. You'll either decide not to do this, or you'll learn it the hard way. You've given up a primary indicator for a drive that is failing by choosing to use vmdks.. SMART.

Good luck to you!
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
While I'm a firm believer in server virtualization, I don't believe storage should be virtualized.

I believe virtualisation is the way of the future, and having a dedicated device performing a dedicated function (especially storage) seems primitive?
 

imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
I respect your dedication to the FreeNAS community. For that, kudos!

You obviously shut yourself down when someone mentions VMs. Well, FYI, I am using vmdks on the HP Dynamic Smart Array B120i Controller.. SMART!
 

imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
While I'm a firm believer in server virtualization, I don't believe storage should be virtualized.


Interesting read. Taken from
http://go.nutanix.com/rs/nutanix/images/IndustryResearch_Forrester_Do_You_Really_Need_San.pdf

Forrester defines application-centric storage as storage architecture managed by a business application, such as Exchange, or an infrastructure application, such as a database or hypervisor. The back-end storage subsystem would be relatively bare bones, just offering storage media in a cabinet, along with basic cache and I/O support, and possibly hardware RAID. The application would deliver all the intelligence to manage capacity, snapshots, distance replication, reporting, and high-value features such as thin provisioning and deduplication.
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
FreeNAS works in ESXi, even with disks via a datastore. I used one with 512MB RAM and ZFS when I needed to dump some files. Worked fine for that short time, but I only used a 250GB VMDK.
The problem is when it stops working for seemingly no reason.
What works today may not work the next time. And with all those layers there is no practical way to get data back. You can't even be sure you can add the HDD into another machine with data intact.
There is no easy fix or goto answer when things go wrong, other than "told you so".
And nobody around here is going to spend hours trying to rescue a setup that should not have been used in the first place.

I have had a bad sector (or something, I really don't know) on a datastore disk. Moving the VMDK didn't work and reading from it was buggy at best.
IIRC I ended up cloning it with clonezilla and moved the VM off that datastore disk. No way to check the disk in ESXi and VMFS upgrade made no difference.
I can imagine the headache with large VMDKs.

I have thought about adding disks to my current ESXi box doing something similar to your description, but I quickly found many reasons against that. Many are already mentioned.
Tempting, but as I have no way to fix it if something should happen I'd rather look for another solution.

However, I will probably be doing my new NAS with ESXi and an HBA in passthrough, making sure the FreeNAS install will work on bare metal if need be. Not as reliable as pure bare metal, but close enough if done with care.
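As a minimal sketch of why passthrough keeps the recovery path sane (the pool name "tank" is just an example, and the commands below would run on different machines): because FreeNAS sees the raw disks, the pool can be exported and re-imported on whatever box can see those same disks, bare metal or otherwise.

```python
# Sketch only: the export/import round trip that an HBA-passthrough setup allows.
import subprocess

pool = "tank"                                        # example pool name

# On the old box, if it still boots cleanly:
subprocess.run(["zpool", "export", pool], check=True)

# On the replacement box (bare metal, or another hypervisor with the HBA passed
# through): list importable pools, then import the one you want by name.
subprocess.run(["zpool", "import"])
subprocess.run(["zpool", "import", pool], check=True)
```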
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Listen to no_connection. Really, I wrote the virtualization stickies to save your data. You can think you're exempt if you want to, and you are likely to be successful for awhile.

Asking us to document cases of failure is ridiculous; I already spend far too much time just talking about it, as in the post that I just did yesterday describing my impression of the worst problem vectors for using virtual disks.

http://forums.freenas.org/index.php...eenas-raidz-on-vmware-esxi.11294/#post-101753

If you cannot even be bothered to read the current discussions on the topic, then you lack the foundation to do this successfully and reliably. In particular, being dismissive of the PCI passthrough strategy to give FreeNAS actual access to the actual controller hardware is not going to be a good idea, because that little gem of a strategy simultaneously fixes several major issues, including - most importantly to me - recoverability in the event of hypervisor failure.

I suggest you take advantage of the forum search feature and do your own search for the failures. They tend to be further back in time these days because there was a time we were seeing people come in at least weekly, which was what prompted me to write the original sticky discouraging virtualization. Along the way you may become sufficiently comfortable with the documented method for virtualization that you actually go ahead and do it. At which point you're quite likely still on your own, but at least it has been engineered to have a good chance of being recoverable.
 

imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
Thanks jgreco, you explained what was observed in the link, which is really helpful; I appreciate it. I tried searching/googling for failed FreeNAS on ESXi but did not find what I wanted. From the search, most problems were due to non-redundant hardware failures, configuration errors, and users not being competent with virtualisation technology.

PCI passthrough is definitely an interesting proposition, which gives the ability to run FreeNAS on "bare metal" if ESXi breaks. I'm checking if I can configure PCI passthrough per disk instead of for the whole controller.


 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,525
PCI passthrough on a per-disk basis is called "RDM".. LOL
 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi imnoob,

I read your post #3 above and got the impression that the factor driving your desire to use the RAID card is simply because you have a RAID card, and this is not the kind of thinking that drives a good decision.

-Will
 

imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
Hi Will,
In my post #1, I asked whether I can assume hardware RAID is more stable. Please forgive my lack of depth on FreeNAS; I'm still trying to read up.

Are you suggesting I dump the hardware RAID and use FreeNAS RAID instead?


 

survive

Behold the Wumpus
Moderator
Joined
May 28, 2011
Messages
875
Hi imnoob,

"more stable" is pretty broad....a good hardware RAID card can certainly do a perfectly good job driving a RAID array, but using a hardware RAID card with ZFS is not an ideal configuration because ZFS expects to have direct control of the disk drives. Keep in mind that ZFS was designed specifically to not have any sort of intelligent device between it and the disks, you really want the fastest, dumbest disk controller you can get to drive the disks. When you add in the extra layers of visualization between ZFS and the hardware ZFS doesn't have the control over the disks that it wants (needs) to have to do it's job.

That said, you have some of the sharpest guys in the forum telling you this is a bad idea....these are the guys you will be turning to for help when things go all pear shaped and (to be blunt) if you are going to do what you plan to do they really won't be able to help.

One thing, you haven't explained why you want to use ESXi....is it possible you could do what you want (run what you need) via FreeNAS' jail system?

-Will
 

imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
Hi Will,
I get that ZFS needs direct access to the drives to monitor SMART and to perform all the nice features that come with ZFS, which is why in post #1 I mentioned I won't be using ZFS. I am expecting the hardware RAID to perform all the drive management, and to use FreeNAS as a NAS provider rather than a drive manager.

I'm definitely not ignoring the advice and stickies; I am trying to weigh the pros and cons.

So far this is my understanding
Scenario 1: using FreeNAS on bare metal - life is good, everyone's happy, simple and clean. If the server fails, yank out the drives, attach them to a new server, reconfigure FreeNAS, and I will be able to retrieve all my data.

Scenario 2: using ESXi and virtualised FreeNAS + storage. If the server fails, yank out my drives, reinstall ESXi on a new server, reattach the drives, reconfigure FreeNAS, attach the existing vmdks to FreeNAS, and retrieve the data.

Scenario 3: hybrid - ESXi and virtualised FreeNAS + PCI passthrough for the drives. If ESXi fails, boot a USB key with FreeNAS (bare metal), configure FreeNAS, and recover all data, but lose ESXi.

Please let me know if my understanding above is wrong. Acknowledging that there is no foolproof method, and weighing the pros and cons of each, I am keen to explore scenario 2. I believe I can handle the virtualisation layer. Note that I will create a few manageable vmdks and will not configure striping across them in FreeNAS. I might choose a different option if I had different hardware, but with what I have I guess 2 works for me. Which brings me back to the initial questions:

1. Is this feasible at all? It's my home NAS, with important data to me - from the responses I get, yes, but not a great idea unless I know what I'm doing?
2. I understand ZFS is wonderful, but I've got "hardware" RAID. Can I assume hardware RAID is more stable? - no answer??
3. What is the implication of having VMFS5 between the hardware and FreeNAS? - more complexity, which may hamper/complicate data recovery if the server, hypervisor, or even FreeNAS goes down
4. Since I do not use ZFS, does it make sense to allocate less memory? 4GB? - found an answer somewhere - yes

Thanks to all who tried to help, posted your thoughts, and showed your enthusiasm :)


 

imnoob

Dabbler
Joined
Feb 18, 2014
Messages
13
Will, I missed one of your questions. I am using ESXi for my home lab server and SDN lab.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
Hardware RAID is more stable than RAIDZ in the same way that a VW Bug is more survivable than a Ford Expedition.

Really, these resources are available here and I'm not going to go into any depth. But:

1) RAID controllers typically involve some lock-in in terms of brand, etc., due to the configuration metadata invariably stored on disk. ZFS just wants disks.

2) RAID controllers have meager to nonexistent resources to work with. ZFS has your CPU and system RAM, giving it massive cache and CPU resources.

3) RAID controllers typically have no way to detect bitrot. ZFS does, and will recover your data automatically from whatever redundancy is available. Unless that's been buried behind a hardware RAID controller, in which case you merely get sad messages about your data being corrupted (see the sketch below).

etc etc.
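As a rough illustration of point 3 (the pool name "tank" is just an example): run a scrub and ask ZFS what it found.

```python
# Sketch: scrub a pool and report its health afterwards.
import subprocess

pool = "tank"                                        # example pool name

# Kick off a scrub; it runs in the background and can take hours on large pools.
subprocess.run(["zpool", "scrub", pool], check=True)

# Once the scrub completes, "-x" prints "all pools are healthy" unless ZFS found
# errors it could not repair.
status = subprocess.run(["zpool", "status", "-x", pool],
                        capture_output=True, text=True)
print(status.stdout)
```

On a pool with ZFS-level redundancy, the scrub repairs what it finds; on a single vdev sitting on top of a hardware RAID volume, it can only tell you which files are now garbage.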
 

no_connection

Patron
Joined
Dec 15, 2013
Messages
480
Scenario 2: using ESXi and virtualised FreeNAS + storage. If the server fails, yank out my drives, reinstall ESXi on a new server, reattach the drives, reconfigure FreeNAS, attach the existing vmdks to FreeNAS, and retrieve the data.
Reattaching disks may or may not work depending on the controller, so that may be an eventful recovery if you need to do it. Or it might work fine.
If you intend to go that route, I would urge you to test your recovery procedure to make sure it works.

There is no hard cut-off point when going below 8GB. For some usages it could work fine; for others you might lose your pool.
The point is, above 8GB things are known to work, but below things can become complicated.
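Something like this is the kind of recovery drill I mean, as a sketch only (the share path and manifest location are assumptions): record checksums of a sample of files before you tear the setup down, then verify them after you have re-attached the disks or vmdks.

```python
# Sketch of a simple recovery drill: checksum files before, verify them after.
import hashlib
import json
import pathlib

SHARE = pathlib.Path("/mnt/tank/pictures")           # hypothetical mounted share
MANIFEST = pathlib.Path("recovery-manifest.json")    # where to keep the checksums

def sha256(path):
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record():
    manifest = {str(p): sha256(p) for p in SHARE.rglob("*") if p.is_file()}
    MANIFEST.write_text(json.dumps(manifest, indent=2))

def verify():
    manifest = json.loads(MANIFEST.read_text())
    bad = [p for p, expected in manifest.items() if sha256(p) != expected]
    print("all %d files match" % len(manifest) if not bad
          else "MISMATCH in %d files" % len(bad))

# Call record() before the teardown and verify() after the re-import/re-attach.
```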
 