To ZIL or not to ZIL (specifically based on my usage)


mattlach

Patron
Joined
Oct 14, 2012
Messages
280
To ZIL or not to ZIL

Hey all! I know this topic keeps coming up, but everyone's usage scenario is a little bit different, so I'd appreciate your comments on my implementation.

Q1


I'm currently running my FreeNAS install on an ESXi server (see sig for details). Using a large dd test (to avoid distorted results from cache) from /dev/zero to disk and back from disk to /dev/null, the 8-disk volume reads at about 520MB/s and writes at about 485MB/s. It currently consists of 4x4TB WD Reds and 4x3TB WD Greens, which I am swapping out one by one as I can afford new Reds.
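For reference, the test was essentially the following sketch (the pool name and file path are just placeholders, and the file needs to be several times larger than RAM so the ARC doesn't skew the numbers):

    # 'tank' is a placeholder pool name; FreeBSD dd uses lowercase size suffixes
    dd if=/dev/zero of=/mnt/tank/ddtest bs=1m count=65536
    # read the same file back, discarding the output
    dd if=/mnt/tank/ddtest of=/dev/null bs=1m
    rm /mnt/tank/ddtest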

I don't work with defined data sets very often, and I don't run VM images or intense databases from it. Instead my usage will involve accessing random files across the volume, so I don't think an L2ARC is the way to go.

Activity on the volume will look something like this:

- Home file server: 3-4 computers, copying files, saving downloads, playing mp3's, playing movies, etc.

- MythTV backend recording as many as 6 HD streams (~18Mbit/s each) at the same time and playing them back to clients

- Symform P2P cloud backup. Initially I plan on contributing ~5TiB to pay for my 2.4TiB of backups

So, with all of these going on at the same time there will be a decent amount of simultaneous load.

While my benchmarks say I can write 485MB/s, that benchmark isn't necessarily very useful, for a number of reasons: writing all zeros is highly compressible, which skews the numbers upward, and with multiple files being written at the same time, seek times will drastically reduce throughput from its sequential maximum.

What worries me the most are the MythTV recordings, as they are recorded off the cable in real time and can't sustain as much of a delay as, say, a backup. Video playback also needs to keep up.
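Back-of-the-envelope, the raw recording bandwidth itself is modest:

    6 streams x 18 Mbit/s = 108 Mbit/s ≈ 13.5 MB/s of continuous writes

so my real worry is the seeking caused by everything else hitting the pool at the same time, not the throughput per se.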

So this type of workload looks like it will make good use of an SSD ZIL, allowing the volume to group writes for better speeds when the volume is less busy. The question is, is my workload heavy enough to warrant a ZIL, or will I be wasting my money? What do you guys think?



Q2

As a follow-up, if I do go with a ZIL, I understand it has an impact on RAM use. Once I am done swapping out my drives for Reds, my system should have eight 4TB drives, resulting in a total usable space of 24TB, which is ~21.8TiB. My server maxes out at 32GB RAM (at least until 16GB modules become available) and I have a few other VMs on it, leaving 25GB as the most I can afford to give FreeNAS, so I am already getting close to the recommended 1GB of RAM per 1TiB of storage. How much should I expect my RAM needs to increase by adding a ZIL?



Q3

How large an SSD should I use, and what type? Old guides and forum posts all seem to recommend SLC drives, but SSDs have changed a lot in the last few years. MLC write endurance has improved a lot, and these days even enterprise server users often use MLC drives. My understanding is that, more or less, current MLC is the new SLC and current TLC is the new MLC.

Based on this, I was leaning toward getting something like a 128GB Samsung 840 Pro for my ZIL. How does this sound?


Q4

Most guides recommend mirroring the ZIL. I understand this used to be more of an issue than it is today: in the past, ZIL failure could result in data loss, whereas it no longer does. These days the motivation seems to be to avoid performance degradation in case of a drive failure in the ZIL. If I can live with an hour of performance degradation while I run down to MicroCenter and buy another SSD, is there really any reason to mirror it?


Anyway, I apologize for my long-winded post on what must be a tired subject at this point, but it would help to clarify all this with current information rather than relying on outdated advice.

Thanks,
Matt
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
My answers are inline below each quoted section...

Hey all! I know this topic keeps coming up, but everyone's usage scenario is a little bit different, so I'd appreciate your comments on my implementation.

Q1


I'm currently running my FreeNAS install on an ESXi server (see sig for details). Using a large dd test (to avoid distorted results from cache) from /dev/zero to disk and back from disk to /dev/null, the 8-disk volume reads at about 520MB/s and writes at about 485MB/s. It currently consists of 4x4TB WD Reds and 4x3TB WD Greens, which I am swapping out one by one as I can afford new Reds.

I don't work with defined data sets very often, and I don't run VM images or intense databases from it. Instead my usage will involve accessing random files across the volume, so I don't think an L2ARC is the way to go.

ZILs are for sync writes and L2ARCs are for reads... don't confuse the two... they do two totally different things.

Activity on the volume will look something like this:

- Home file server: 3-4 computers, copying files, saving downloads, playing mp3's, playing movies, etc.

- MythTV backend recording as many as 6 HD streams (~18Mbit/s each) at the same time and playing them back to clients

- Symform P2P cloud backup. Initially I plan on contributing ~5TiB to pay for my 2.4TiB of backups

So, with all of these going on at the same time there will be a decent amount of simultaneous load.

While my benchmarks say I can write 485MB/s, that benchmark isn't necessarily very useful, for a number of reasons: writing all zeros is highly compressible, which skews the numbers upward, and with multiple files being written at the same time, seek times will drastically reduce throughput from its sequential maximum.

What worries me the most are the MythTV recordings, as they are recorded off the cable in real time and can't sustain as much of a delay as, say, a backup. Video playback also needs to keep up.

So this type of workload looks like it will make good use of an SSD ZIL, allowing the volume to group writes for better speeds when the volume is less busy. The question is, is my workload heavy enough to warrant a ZIL, or will I be wasting my money? What do you guys think?

The ZIL will only help *if* the writes are sync writes. If they aren't sync writes, your ZIL won't help your workload. This is something you will need to figure out, and it will probably take a few hours of research. If your MythTV traffic is going through ESXi it will certainly be sync writes. If not, you are on your own to figure out what your configuration actually is.
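As a starting point, something like this shows how ZFS is treating sync requests on a dataset and lets you adjust it per dataset (pool and dataset names are just examples; whether your clients actually request sync writes is the part you still have to dig into yourself):

    # 'tank' and 'tank/mythtv' are placeholder names
    zfs get sync tank tank/mythtv          # shows standard / always / disabled
    zfs set sync=always tank/mythtv        # treat every write on the dataset as a sync write
    zfs set sync=standard tank/mythtv      # honour only what the client requests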



Q2

As a follow-up, if I do go with a ZIL, I understand it has an impact on RAM use. Once I am done swapping out my drives for Reds, my system should have eight 4TB drives, resulting in a total usable space of 24TB, which is ~21.8TiB. My server maxes out at 32GB RAM (at least until 16GB modules become available) and I have a few other VMs on it, leaving 25GB as the most I can afford to give FreeNAS, so I am already getting close to the recommended 1GB of RAM per 1TiB of storage. How much should I expect my RAM needs to increase by adding a ZIL?

Do not be fooled by the 1GB of RAM per TB of storage. You aren't doing light duty if you are running VMs. There's no telling how much RAM you will need. I will tell you that my rule is not to consider a ZIL or L2ARC until you have 64GB of RAM. It is entirely possible to add either one with less RAM and see performance go down. I know someone who has 128GB of RAM with only 20TB of disk space. They have to do that because of their VMs and their workload. The second you say you want to run VMs you can expect the cost of your server to increase, drastically.
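If you want to see how far that 25GB is actually getting you before you add anything, these FreeBSD sysctls are a reasonable place to look (a rough sketch, not a full tuning guide):

    # current ARC size and its configured ceiling, in bytes
    sysctl kstat.zfs.misc.arcstats.size
    sysctl vfs.zfs.arc_max

If the ARC is already pinned at its ceiling, more RAM will help you long before an L2ARC or SLOG does.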


Q3

How large an SSD should I use, and what type? Old guides and forum posts all seem to recommend SLC drives, but SSDs have changed a lot in the last few years. MLC write endurance has improved a lot, and these days even enterprise server users often use MLC drives. My understanding is that, more or less, current MLC is the new SLC and current TLC is the new MLC.

Based on this, I was leaning toward getting something like a 128GB Samsung 840 Pro for my ZIL. How does this sound?

You don't need a big ZIL. If you are running a 10Gb LAN, even 10GB for a ZIL is impossibly large. But the drive does need to have good performance and reliability. The only drives I recommend at present are the Intel enterprise-class SSDs.
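The rough math, assuming the default ~5 second transaction group interval, looks something like this:

    10GbE:        ~1.25 GB/s x ~5 s  ≈ 6 GB of in-flight sync data, worst case
    2x1GbE LACP:  ~0.25 GB/s x ~5 s  ≈ 1.25 GB

so on your network a handful of GB of SLOG is already generous; the rest of the SSD is unused space (which at least helps wear levelling).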


Q4

Most guides recommend mirroring the ZIL. I understand this used to be more of an issue than it is today: in the past, ZIL failure could result in data loss, whereas it no longer does. These days the motivation seems to be to avoid performance degradation in case of a drive failure in the ZIL. If I can live with an hour of performance degradation while I run down to MicroCenter and buy another SSD, is there really any reason to mirror it?

In the past you could not mount your pool if you lost your ZIL. Today, if you do not run a mirrored ZIL and your ZIL fails at an inopportune time you will lose data in the ZIL. You will have to force the pool to mount from the CLI and you *will* lose some data. Since you are running VMs the result could be anything from a few corrupt files to the VMs no longer booting. You still have redundancy in your pool, but you do not have redundancy for your ZIL. Does that sound like a good idea to you?

In short, if your data is important you should be mirroring it with two high-reliability drives. That means no OCZ, no TLC-based SSDs, and no drive model known to have reliability problems.
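For what it's worth, attaching (and later removing) a mirrored log device is a one-liner; the device names below are just placeholders, and on FreeNAS you'd normally do this through the GUI volume manager rather than the CLI:

    # add a mirrored SLOG to the pool ('tank' and the gpt labels are placeholders)
    zpool add tank log mirror gpt/slog0 gpt/slog1
    zpool status tank            # the mirror shows up under "logs"
    zpool remove tank mirror-1   # log vdevs can be removed again (use the name from zpool status)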

On an unrelated note, if you think you are going to run VMs and get good performance, you'd better stop and go back to the drawing board. I already discussed the fact that 32GB of RAM probably isn't going to cut it. But RAIDZ2 is a major no-no for VMs. If you want to run VMs you *will* have to do mirrors. PERIOD. You need the high I/O of multiple vdevs. If this isn't palatable then you get to buy something like 256GB of RAM, 1TB+ of L2ARC, etc., so you can offset that serious performance penalty.

Like I said..... the second you say you want to run VMs the cost increases significantly.
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Thank you for taking the time to type out your responses. I think you misunderstood my usage scenario though.

I use VMware ESXi to HOST my FreeNAS install as a guest, but FreeNAS is not used as an ESXi datastore. None of the guests run off of the array.

The VMware guests are relatively small and all fit on a high-performance SSD connected directly to ESXi (which I back up regularly; even if they fail, all I've lost is configuration, which doesn't take long to recreate and is usually backed up anyway).

The array itself is merely for file storage, but this file storage is accessed by the guests on ESXi as if they were remote networked clients.

My guests and what they do with the FreeNAS storage:

1.) pfSense: This is my house router. It never mounts anything from FreeNAS.

2.) FreeNAS: This resides as a guest. For the sake of stability it has an IBM M1015 SAS controller direct-forwarded to it for the array, and doesn't use VMware's virtualized file system except for the 1GB install, which resides on the main ESXi box SSD. It has a dual-port Intel Pro/1000 PT NIC also direct-forwarded, connected via 802.3ad LACP to my switch. There is also an internal 10gig virtual VMXNET3 NIC which links it to my Ubuntu Server guest below.

3.) Ubuntu Server: Since I prefer working with modern Linux over BSD, and didn't want to bother with the jail, my Ubuntu Server install is what I use for all SSH-related tasks and daemons (be they wgets, various backups, etc.). It accesses FreeNAS internally via the virtual 10gig adapter, only for file storage. No databases or other intensive activities hit FreeNAS.

4.) (To be added in the near future) MythTV backend based on Mythbuntu. I plan on running this as a separate server from my main Ubuntu server due to the complex dependencies involved and wanting to keep it isolated from everything else. It, too, will access FreeNAS via the internal 10gig vmnic. FreeNAS will be used only to store TV recording video files. The database it uses will be on its virtual image on the ESXi SSD.

So I hope this clarifies my usage. I consider my FreeNAS pools a "Home File Server 'plus'" install. Most of the data is just plain dumb files being stored.

The MythTV setup is essentially also just plain dumb files, just with a lot of concurrent slow sequential writes.

The unknown is the Symform p2p cloud backup, which I plan on running inside the Ubuntu server. It will pull data off of FreeNAS through a reverse EncFS over NFS, and then upload it to my cloud backup. The beauty of Symform is that you pay by sharing your storage, so other peers back up their data on your system. What I don't know is how much disk activity I am going to see by sharing 5TB of space with the p2p cloud.
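The plumbing I have in mind on the Ubuntu side is roughly this (the hostname and paths are made up, and I'm glossing over where the EncFS config file lives):

    # mount the FreeNAS dataset in the Ubuntu guest over the internal 10gig link
    sudo mount -t nfs freenas-internal:/mnt/tank/data /mnt/nas
    # present an encrypted view of the plaintext for Symform to pick up
    encfs --reverse /mnt/nas /mnt/nas-cipher

so Symform only ever sees ciphertext, while the pool itself stays plain.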

Anyway, I hope this clarifies things.

Summary:
- No VmWare datastores on the pool
- Trying to avoid the array getting so busy with shared p2p backup traffic that it can't write the real-time TV streams fast enough. Not sure if it is fast enough as-is to handle that.



PS.

Re-reading your guide has scared me a little. When I originally built my setup, I intended to use ECC. I read that the memory controller on the CPU supported ECC, and assumed it would work, but unfortunately the BIOS doesn't recognize it as ECC, and thus just runs it as plain old RAM. This may be something I have to rectify, but man, server motherboards are so expensive!
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
If you want to run FreeNAS as a guest you are totally on your own. We do not support or condone FreeNAS as a guest at all.... we have 2 stickies on why you should never consider doing this and let's just say "we'll leave it at that".
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
If you want to run FreeNAS as a guest you are totally on your own. We do not support or condone FreeNAS as a guest at all.... we have 2 stickies on why you should never consider doing this and let's just say "we'll leave it at that".



Understood, but I read the guide on that, and I am not concerned for my usage.

Essentially the gist of it is, "Don't run it as a guest because you might be tempted to do stupid things like using virtual hard drives for your pool, or using FreeNAS as a datastore for the host it is running on, creating a circular dependency," but I am not doing any of those stupid things.

With a direct-forwarded SAS controller, it is really no different than running it on a dedicated machine, except that it shares some RAM bandwidth and occasionally (but rarely in my setup) some CPU load. It's been perfectly stable and performed well as a file server for 4 years, and I have confidence in the setup. I'm simply using it for consolidation, as it beats having multiple servers all sucking power with expensive components in my basement.

The fact that it is running as a VMware guest is irrelevant to my performance questions. I'm not asking for help with the "getting it to run as a guest" side of things. I have that part of it all figured out.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
Right, we don't really discuss it here at all. If we start telling you things then 500 other newbies will think it must be safe and a great idea. It's not, and unless you are pro enough to handle it on your own, you shouldn't be trying to do it.

Sorry, but that's all I'm willing to say about this topic. We've had far too many problems for me to go much further.
 

Nick2253

Wizard
Joined
Apr 21, 2014
Messages
1,633
To make a long story short: you probably won't see a benefit from a ZIL in your particular use case, assuming you are connecting the FreeNAS as a network drive to the MythTV "box." A single stream of uncompressed HDTV needs about 150MB/s, and I doubt you'll be doing it uncompressed.
 

empsis

Cadet
Joined
Oct 2, 2014
Messages
3
Hi all!

This is a bit off topic. I saw in Matt's post that he is using Symform p2p backup service on FreeNAS.

Matt, maybe you could let me know where I could find any guidance on how to use Symform on FreeNAS? I don't see that there is a FreeBSD version available. Did you use the Linux version somehow?

Thank you!
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Hi all!

This is a bit off topic. I saw in Matt's post that he is using Symform p2p backup service on FreeNAS.

Matt, maybe you could let me know where I could find any guidance on how to use Symform on FreeNAS? I don't see that there is a FreeBSD version available. Did you use the Linux version somehow?

Thank you!

Hey,

That was actually a "will be", as in a future state. I have not installed it yet.

I don't foresee any problems, as I run FreeNAS virtualized (yes, I know the documentation recommends against it, but if you don't do anything dumb it works very very well, and is rock stable).

So I plan on setting up a small dedicated Ubuntu guest on my server that communicates using VMware's VMXNET3 paravirtual network driver, which I have benched at about 30Gbit/s using iperf. So speeds will be like local speeds, but Symform will actually be running headless in an Ubuntu guest.
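For the curious, that figure is just from a plain iperf run between the two guests over the internal link (the address below is specific to my setup):

    # on the FreeNAS guest
    iperf -s
    # on the Ubuntu guest
    iperf -c 10.0.1.2 -t 30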
 

empsis

Cadet
Joined
Oct 2, 2014
Messages
3
Hey,

That was actually a "will be", as in a future state. I have not installed it yet.

I don't foresee any problems, as I run FreeNAS virtualized (yes, I know the documentation recommends against it, but if you don't do anything dumb it works very very well, and is rock stable).

So I plan on setting up a small dedicated Ubuntu guest on my server that communicates using VMware's VMXNET3 paravirtual network driver, which I have benched at about 30Gbit/s using iperf. So speeds will be like local speeds, but Symform will actually be running headless in an Ubuntu guest.

Hey,

Thank you for your clarification! (Apologies for not writing back sooner.) I understand now; I initially thought that you had made the Linux Symform client run on FreeNAS/FreeBSD. I have actually voted for this feature to be implemented (i.e. Symform for FreeNAS to be released - https://community.symform.com/entries/45047448-FreeNAS-support-); perhaps it will take time to get it done.

As for running FreeNAS virtualized - have you installed the hypervisor (ESXi) on a dedicated disk, or have you used the same disks that you later use for your FreeNAS storage? (I understand moderators / the FreeNAS community might be unhappy with me posting such questions since such a setup is not supported - apologies guys; still, it would be helpful.)

Thanks again!
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
Hey,

Thank you for your clarification! (Apologies for not writing back sooner.) I understand now; I initially thought that you had made the Linux Symform client run on FreeNAS/FreeBSD. I have actually voted for this feature to be implemented (i.e. Symform for FreeNAS to be released - https://community.symform.com/entries/45047448-FreeNAS-support-); perhaps it will take time to get it done.

NP.

If any cloud storage company were to make a FreeNAS compatible client, it would be Symform due to how their storage sharing model works. The rest worry about this due to their unlimited plans :p Many even block external storage from being backed up.

As for running FreeNAS virtualized - have you installed the hypervisor (ESXi) on a dedicated disk, or have you used the same disks that you later use for your FreeNAS storage? (I understand moderators / the FreeNAS community might be unhappy with me posting such questions since such a setup is not supported - apologies guys; still, it would be helpful.)

Thanks again!

You are correct, FreeNAS recommends against any virtualization at all and refuses to support it, but thus far they haven't been censoring discussions on the topic.

To me, the arguments against virtualization the team has laid out essentially boil down to this: "Don't virtualize FreeNAS, because you might be tempted to do something stupid, like under-provision RAM, or use disk images as drives, etc. etc." which seems like throwing out the baby with the bathwater to me. There are ways to build reliable and stable virtualized FreeNAS solutions that carry no greater risk than a bare-metal install. The way I have done it (direct I/O forwarding of a known-forwardable SAS controller, which is then used to attach the disks) is one of them.

There are A LOT of people in the homebrew ESXi community who use their ZFS pools as a datastore on the same machine they are hosted on. Most of them use Napp-IT or ZFS on Linux instead of FreeNAS, presumably because FreeBSD has historically had some performance-related inefficiencies when running as a guest, and Napp-IT is based on OpenIndiana (I believe).

With FreeBSD 9 (which current FreeNAS is based on) a lot of those inefficiencies have been solved, and they are said to be completely gone in FreeBSD 10, which will be great when FreeNAS gets there.

The way they do it seems to be to boot ESXi off of a USB stick and create a small datastore on that USB stick, on which only the ZFS-based guest (could be FreeNAS, but usually Napp-IT) is installed. Then the ZFS pool is shared to the ESXi host as a datastore using NFS. At boot time the ZFS guest autostarts first, with a delay corresponding to its full startup time, and once it is online, the remaining guests, stored on the ZFS pool, can boot up.
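On the ESXi side, attaching that NFS export as a datastore is (if I have the syntax right) a couple of esxcli calls; the hostname, export path, and datastore name below are made up, and the ZFS guest just needs a normal NFS share of the dataset first (in FreeNAS that would be set up through the Sharing section of the GUI):

    esxcli storage nfs add --host=zfsguest --share=/mnt/tank/vmstore --volume-name=zfs-datastore
    esxcli storage nfs list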

I did not go this route for a few reasons:

1.) I thought the almost-circular references between Host -> Guest -> Host -> Guest seemed a little bit risky, and like asking for trouble.
2.) All of my guests are either BSD or Linux based, so they use very little hard drive space, and none contain any large databases. A single 128GB SSD is MORE than large enough to contain all of them.
3.) A single 128GB SSD is much faster to run guest operating systems off of, than any RAID array, due to the low seek times.
4.) All my critical files are stored on the zfs pool. I do back up the guest disk images frequently, but even if they were lost, it's easy to re-install Ubuntu, so I don't even bother mirroring the main datastore SSD.

Many of my guest operating systems DO access my ZFS pool on FreeNAS for files and data storage, but they do so using NFS internally to the guests. The ESXi datastore holding the image files the guests boot from is entirely on the one SSD (which is also what ESXi itself boots from).

Your needs may vary, though. If you need very large images (due to big databases, or running Windows-based operating systems as your guests, which tend to gobble up storage space for no reason), this solution might not work for you.

Don't let me scare you off from trying. I know many people who do host ZFS as a guest in ESXi and then share the pool back as a datastore, and have done so reliably for years, primarily using Napp-IT (though I see no reason FreeNAS should be any worse a choice; I, for one, like it better). For me it just added complexity I didn't need without really offering any benefit (in fact it would have slowed me down).

I hope this helps. Please let me know if you have any further questions.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
To me, the arguments against virtualization the team has laid out essentially boil down to this: "Don't virtualize FreeNAS, because you might be tempted to do something stupid, like under-provision RAM, or use disk images as drives, etc. etc." which seems like throwing out the baby with the bathwater to me. There are ways to build reliable and stable virtualized FreeNAS solutions that carry no greater risk than a bare-metal install. The way I have done it (direct I/O forwarding of a known-forwardable SAS controller, which is then used to attach the disks) is one of them.

That's not it at all. It has nothing to do with doing stupid things. It has to do with the fact that you can do all the right things and still lose your pool. Remember, we don't understand the fail mechanism very well. Obviously users that are doing ESXi are more tempted to do stupid things, but that's not an ESXi problem. That's PEBKAC. If it were only a PEBKAC problem I'd be happy to discuss virtualizing FreeNAS in detail and support it. People do stupid things with FreeNAS all the time, be it hardware RAID, insufficient RAM, incompatible hardware, etc. But the solution isn't to outright ban FreeNAS. It's about educating the masses. But you can't educate the masses against things that aren't explained. If they were understood, there would definitely be a slideshow written by me as an add-on to the "noobie presentation". But again, you can't teach what isn't understood.

With FreeBSD 9 (which current FreeNAS is based on) a lot of those inefficiencies have been solved, and they are said to be completely gone in FreeBSD 10, which will be great when FreeNAS gets there.

No clue what you are talking about there, so I can't comment at all.

The way they do it seems to be to boot ESXi off of a USB stick and create a small datastore on that USB stick, on which only the ZFS-based guest (could be FreeNAS, but usually Napp-IT) is installed. Then the ZFS pool is shared to the ESXi host as a datastore using NFS. At boot time the ZFS guest autostarts first, with a delay corresponding to its full startup time, and once it is online, the remaining guests, stored on the ZFS pool, can boot up.

ESXi won't let you use any USB device for a datastore. I tried. It is nothing but fail. There is a way to hack it, but let's face it... nobody who cares about their VMs is going to do such things. This goes back to the "if you don't know what you are doing with ESXi, you shouldn't be doing it". Bad idea. ;)
 

empsis

Cadet
Joined
Oct 2, 2014
Messages
3

Thank you so much for taking the time to write such a comprehensive guide!

Cheers
 

mattlach

Patron
Joined
Oct 14, 2012
Messages
280
That's not it at all. It has nothing to do with doing stupid things. It has to do with the fact that you can do all the right things and still lose your pool. Remember, we don't understand the fail mechanism very well. Obviously users that are doing ESXi are more tempted to do stupid things, but that's not an ESXi problem. That's PEBKAC. If it were only a PEBKAC problem I'd be happy to discuss virtualizing FreeNAS in detail and support it. People do stupid things with FreeNAS all the time, be it hardware RAID, insufficient RAM, incompatible hardware, etc. But the solution isn't to outright ban FreeNAS. It's about educating the masses. But you can't educate the masses against things that aren't explained. If they were understood, there would definitely be a slideshow written by me as an add-on to the "noobie presentation". But again, you can't teach what isn't understood.

I've heard you make these comments about people losing their pools on ESXi a lot, and I am actually curious. Do you have any good examples?

The reason I ask is that it contradicts my personal experience a lot, and I don't think I've ever seen an instance of this happening where it wasn't user error.

As long as a system is properly set up (using a direct I/O forwarded SAS controller, sufficient ECC RAM, and a UPS), I haven't seen a single pool failure in the forums I hang out in, where this type of setup is very common, that wasn't caused by user stupidity/ignorance and wouldn't also have happened on bare metal.


No clue what you are talking about there, so I can't comment at all.

It's a little bit outside of my area of expertise. I am just regurgitating information I have read about BSD and virtualization in the past. I will try to dig up a link.

ESXi won't let you use any USB device for a datastore. I tried. It is nothing but fail. There is a way to hack it, but let's face it... nobody who cares about their VMs is going to do such things. This goes back to the "if you don't know what you are doing with ESXi, you shouldn't be doing it". Bad idea. ;)

I could be wrong about this point, as I haven't set up this configuration myself (mine boots from and has its datastore on a 128GB SSD). That being said, I feel like I remember lots of folks doing this. I too haven't had any luck getting ESXi to see USB mass storage devices and add them as datastores, but I feel like this might work if - and only if - ESXi is installed on a USB mass storage device, and the datastore resides on that same USB mass storage device.
 

cyberjock

Inactive Account
Joined
Mar 25, 2012
Messages
19,526
I've heard you make these comments about people losing their pools on ESXi a lot, and I am actually curious. Do you have any good examples?

The reason I ask is that it contradicts my personal experience a lot, and I don't think I've ever seen an instance of this happening where it wasn't user error.

As long as a system is properly set up (using a direct I/O forwarded SAS controller, sufficient ECC RAM, and a UPS), I haven't seen a single pool failure in the forums I hang out in, where this type of setup is very common, that wasn't caused by user stupidity/ignorance and wouldn't also have happened on bare metal.
Sure, search the forums. I don't search for this stuff because I don't feel like it's my job to spend hours finding that one thread in 1000 that mentions ESXi and ended up with a dead pool. You can take my advice or leave it. I read every thread here, and I don't really care what you do with your server or data; I just put the warnings up. I've done quite a few TeamViewer sessions with the exact people you are wanting me to link to, though. So it's not like I'm just making this stuff up.

I could be wrong about this point, as I haven't set up this configuration myself (mine boots from and has its datastore on a 128GB SSD). That being said, I feel like I remember lots of folks doing this. I too haven't had any luck getting ESXi to see USB mass storage devices and add them as datastores, but I feel like this might work if - and only if - ESXi is installed on a USB mass storage device, and the datastore resides on that same USB mass storage device.

Read the ESXi docs. They tell you it's not supported. I know... I thought I could do this with a 16GB USB stick (which is what my ESXi box uses right now). I was able to validate in 5 minutes of Googling that this is still the case (unless you want to hack the OS).
 