Virtualization again. "Don't do it" and stickies are not enough.

Status
Not open for further replies.

m00nraker

Dabbler
Joined
Apr 12, 2015
Messages
13
Hi.

I opened the thread about issues with LSI 9300 passthrough to FreeNAS 9.3 under Proxmox/KVM. Before I go back to that thread, I wanted to ask something general about FreeNAS and virtualization:

I read the stickies and a fair bit more in this forum. I also read a lot about ZFS and its background. Of course this doesn't turn me into a specialist in FreeNAS, ZFS and virtualization. The experts here always preach not to virtualize it. Up to now I really haven't found a proper answer as to why it's a bad idea and where exactly the risk of data loss lies in this scenario. I always read things like: don't do it, but if you do it, it will eventually end with you losing your pool, and we don't want crying users.

Sometimes it sounds as if FreeNAS can work as a guest, but it seems to me that there is a big random factor in the game. A big crash could come one day, but it doesn't have to. Elsewhere you read that a crash will come for sure and it's only a question of when. The experts say people could do the wrong things when virtualizing FreeNAS, so they should give up on this approach.
For example, they could use non-ECC RAM. I understand why it's a bad idea not to use ECC, and that even pool backups are no protection against data loss when using non-ECC RAM. That's clear, and personally I see no reason not to use ECC, so that is not a question for me, assuming money is not a showstopper. When they say "don't do it", they are probably thinking I could do something wrong. ECC is one example. Then there is hardware: I could use non-server hardware, even if it is labeled as supporting VT-d. I also understand that a VT-d label on the box doesn't mean VT-d is well implemented inside, see the consumer market. So a potential FreeNAS virtualization user could get into trouble with his PCI passthrough setup, or he could buy server hardware and then make mistakes setting up the software. He could, for example, configure virtual hard disks, which is a no-go. Maybe FreeNAS then gets wrong SMART data and doesn't see that a disk is beginning to die, and so on. These many ways of doing things wrong may be one reason for advising against virtualization. I could understand that as a message for uninformed users: better keep your hands off, you could make mistakes. But are there other important reasons I don't see at the moment?
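To make the SMART point concrete: as far as I understand it, a passthrough setup can at least be sanity-checked from inside the guest by asking each disk for its SMART health, which a virtual disk usually cannot report. A rough sketch of such a check (the device names and the reliance on smartctl are only my assumptions for illustration, not taken from any guide):

Code:
import subprocess

# Illustrative sanity check only -- adjust the device names to whatever
# the FreeNAS guest actually shows under /dev.
DISKS = ["/dev/ada0", "/dev/ada1"]

for disk in DISKS:
    # smartctl -H prints the overall SMART health assessment for a device.
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True)
    if "PASSED" in result.stdout:
        print(disk, "-> real SMART health visible, passthrough looks plausible")
    elif result.returncode != 0 or "Unavailable" in result.stdout:
        print(disk, "-> no usable SMART data, possibly a virtual disk")
    else:
        print(disk, "-> unclear, inspect the full smartctl output:")
        print(result.stdout)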

There are many variables in the whole game. Let's suppose we have good hardware for the job and we make no design mistakes when setting up the virtual environment (HBA passthrough, no virtual disks, server board, ECC, FreeNAS can access SMART data, etc.). What else can lead to a complete loss of the pool data, so that the pool isn't accessible anymore, when using a virtualized FreeNAS? Because basically that is exactly what is always preached here.

Let me quote a few people from the forum.

gpsguy:
Even I don't want to take a risk virtualizing FreeNAS in production.
...
Even with the stickies, etc., we often see users virtualize incorrectly. If they are lucky, all they have to do is find a place to back up their data, blow away the pool and start over. Other times, the data is gone forever, often without any backups.
cyberjock:
Then let me give you some advice. Give up on trying to virtualize FreeNAS RIGHT NOW!
HoneyBadger:
Careful, it's addictive. One day you're just running on a little old HP with 8GB and a pair of mirrors, then the next thing you know there's a rackmount SuperMicro in your basement with enterprise Sun flash cards for SLOG ;) I'm glad we could steer you away from FreeNAS-on-ESXi and towards bare metal

I could quote some others, but you know what I mean. I think these guys know what they are talking about. But I still haven't found the answer to why they always say "don't do it". Is the only reason for that what I tried to describe above in my own words?

My last point is: what happens when a natively installed FreeNAS crashes, compared to a virtualized FreeNAS crash?

Does a FreeNAS crash generally lead to data loss or pool loss? I don't think so, because you can protect against some cases. Maybe I'm wrong. On a native setup the motherboard could fail, the power could go out, the HBA could fail, an HDD could fail, the ECC RAM could fail... So there are several things that can make FreeNAS crash even when it runs natively. You can protect against some of these situations: redundant data, a UPS, ECC, maybe a redundant power supply, mirrored SLOG devices. But there is always room for a complete system failure, even when FreeNAS is installed natively. FreeNAS itself is praised as rock solid, but maybe ugly bugs are still in there, who knows. The same goes for ZFS. I think some hypervisors are also rock solid. Big systems are running on top of them. A virtualized FreeNAS has this as an additional layer between itself and the hardware. Now, if I trust the FreeNAS and ZFS code, why shouldn't I trust my hypervisor code? Is the advice not to do it given only because of the existence of this additional software layer, or is there some other reason? Using a hypervisor leads to a higher probability of system failure compared to not using one, but this higher probability is relative, because there is always some probability of system failure, even with a native FreeNAS setup.
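Just to put that last sentence into made-up numbers (the per-year failure probabilities below are purely invented for illustration, they are not measurements of anything):

Code:
# Purely illustrative numbers: rough chance per year that a layer bites you.
p_hw = 0.02          # hardware: board, PSU, HBA, disks beyond the redundancy
p_freenas = 0.01     # a fatal FreeNAS/ZFS problem
p_hypervisor = 0.01  # extra risk added by the hypervisor/passthrough layer

def at_least_one_fails(*probs):
    """Probability that at least one independent layer fails."""
    ok = 1.0
    for p in probs:
        ok *= 1.0 - p
    return 1.0 - ok

print("native:      %.1f%%" % (100 * at_least_one_fails(p_hw, p_freenas)))
print("virtualized: %.1f%%" % (100 * at_least_one_fails(p_hw, p_freenas, p_hypervisor)))
# Roughly 3.0% vs. 4.0% per year: higher with the extra layer,
# but never zero without it.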

Now, what do you think: am I building my own little world here, or is there a bit of truth in it? I read that many users here have lost their data because of virtualization. Are there reproducible steps that explain how it comes to data loss?

I would like some more information about data corruption in a virtualized FreeNAS environment. "Don't do it" is not reason enough not to do it. I'm not resistant to your advice, I just want to learn something about it.

Maybe you can help me find my answers.
Kai
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112
Hi Kai,

You have a good understanding of the issue when you say "we're worried you might do something wrong" because FreeNAS as a VM is a very delicate balancing act. I'll tag in and quote @jgreco here from the Big Virtualization Sticky:

The viability of virtualization is a function both of the technical issues and of the opinion and expertise of whoever is installing it.

It is very easy to do something wrong when virtualizing FreeNAS. You're also asking about doing it on Proxmox/KVM which isn't nearly as well understood or supported as ESXi, so you would very much be cutting your own trail doing this.

With all of that said, you seem to have done a lot of research and have a thorough technical understanding of the Do's and Don'ts. So don't let us stop you. We will warn you, absolutely, but ultimately the decision of whether you consider the potential risk of making a mistake worth it is up to you.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
HoneyBadger covered most of what I would have said.

The quotes you included were from a recent thread where the OP doesn't have any experience (yet) with FreeNAS or VMware. In his case, I think we all agreed that he shouldn't virtualize right now.

If our users followed jgreco's virtualization guides to the letter, they'd have a chance of success. Virtualization aside, we have a lot of new users show up with problems because they followed some random website and/or video, but didn't read the official documentation.

Now, throw in virtualization. "If I can run Windows Server 2012 R2 with 1GB of RAM, that ought to be plenty for FreeNAS 9.3." Nope, one really needs 8GB at a minimum. And then there are other issues, like whether the hardware supports VT-d ...

At the end of the day, the forum doesn't have the manpower to support virtualization questions. We are just (unpaid) volunteers.
 

HoneyBadger

actually does care
Administrator
Moderator
iXsystems
Joined
Feb 6, 2014
Messages
5,112

@m00nraker - See this post for an example of "FreeNAS as a VM" being done wrong:
https://forums.freenas.org/index.php?threads/wasted-storage-space.30200/
Unless this system is strictly for learning the GUI/appearance of FreeNAS, it is absolutely asking for disaster.
 

m00nraker

Dabbler
Joined
Apr 12, 2015
Messages
13
Dear HoneyBadger, thank you for your attention. I read Jake's post, but I'm not familiar with VMware, so I have to ask: at first sight I would say there are three issues: VMware Player is a bad choice, configuring a stripe set in FreeNAS is the second bad choice, and, last but not least, I suppose this isn't a passthrough setup. Is that what you mean?
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
Dear HoneyBadger, thank you for your attention. I read Jake's post, but I'm not familiar with VMware, so I have to ask: at first sight I would say there are three issues: VMware Player is a bad choice, configuring a stripe set in FreeNAS is the second bad choice, and, last but not least, I suppose this isn't a passthrough setup. Is that what you mean?

The last point really is the big issue. At that point, why use FreeNAS and ZFS, anyway?
 

m00nraker

Dabbler
Joined
Apr 12, 2015
Messages
13
The last point really is the big issue. At that point, why use FreeNAS and ZFS, anyway?
I know why virtual disks are the wrong way, but passthrough is a virtualization-only issue. My point is that for this example you could just as well ask: why use ZFS without using redundancy at the same time? A stripe set doesn't provide any kind of redundancy, so ZFS has no benefit when data security is important. This is an example where some people would say "don't virtualize it, that's the wrong way" because the user decided not to pass the disks through. Why is virtualization always the point? You could also say: better not use FreeNAS at all, because you don't know what you are doing and have no clue about ZFS. I hope I found the right words for what I mean.
 

Ericloewe

Server Wrangler
Moderator
Joined
Feb 15, 2014
Messages
20,194
I know why virtual disks are the wrong way, but passthrough is a virtualization-only issue. My point is that for this example you could just as well ask: why use ZFS without using redundancy at the same time? A stripe set doesn't provide any kind of redundancy, so ZFS has no benefit when data security is important. This is an example where some people would say "don't virtualize it, that's the wrong way" because the user decided not to pass the disks through. Why is virtualization always the point? You could also say: better not use FreeNAS at all, because you don't know what you are doing and have no clue about ZFS. I hope I found the right words for what I mean.

Yeah, you're right of course.
 

mjws00

Guru
Joined
Jul 25, 2014
Messages
798
I have yet to see ANY reproducible problem with virtualization as outlined in @jgreco's guides. In fact, I doubt we will find a condition that crashes a VT-d pool that would not cause similar damage on a bare-metal pool. It is possible we may isolate a specific manufacturer's poor VT-d implementation, or locate a bug within ESXi... but there is similar risk within the BSD, FreeNAS, and OpenZFS code bases.

The issue is one of risk management. If you embrace the notion that FreeNAS is intended as enterprise-level ZFS storage with end-to-end data protection, and that its primary function is data protection alone, then any additional risk added by introducing a hypervisor is unnecessary and contrary to its intended purpose. One could take that further and even claim that the middleware layers in FreeNAS present unnecessary risk. However, in real life things are not so black and white. We all have different priorities and agendas; for some, the flexibility far outweighs the small risks.

I think the forum as a whole has evolved to a point beyond 'Virtualization == EVIL'. But the primary and best course of action is to get inexperienced users to pause, and think about potential consequences. We see very few super 'dumb' virtualization scenarios proceed. We also see very few pools lost to virtualized systems that follow the guides.

In the case of Proxmox/KVM, the knowledge base is minuscule at best. Not to mention you already have ZFS on Linux running; why virtualize BSD and ZFS on top of that? We also have known BSD-on-Hyper-V/Proxmox issues to contend with, some of which are known to be fixed in FreeBSD 10, but the FreeNAS stable codebase is not on that kernel. So it is a case of blazing a trail that has a very limited purpose.

A secondary consideration is the level of support available. We have some great virtualization expertise (ESXi) available... but the system quickly becomes so complex that the man-hours just don't exist for us to troubleshoot all the possible places to screw up. It is a difficult environment to troubleshoot on a forum, with inexperienced end users, especially when there is a whole additional FreeNAS/ZFS learning curve added.

Anyway, that's my two cents. I'm quite pleased to see progress in the community on the topic, and I hope it continues.
 

gpsguy

Active Member
Joined
Jan 22, 2012
Messages
4,472
Ditto this comment.

But the primary and best course of action is to get inexperienced users to pause, and think about potential consequences. We see very few super 'dumb' virtualization scenarios proceed. We also see very few pools lost to virtualized systems that follow the guides.

As an example, I recently went the extra mile with a user about 2,000 miles away from me: a bunch of PMs over the course of a week, a couple of phone calls, and a screen-sharing session. He seemed to have a networking problem, so I asked him to do some pings to/from the server. Unbeknownst to me, he thought that meant browsing to http://ipaddress of the web GUI. After working with him to reinstall FreeNAS and giving him an overview of what he'd need to reconfigure (he didn't have a backup of his configuration file), he finally got his system up and going again.

It is a difficult environment to troubleshoot on a forum, with inexperienced end users, ...
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
last but not least, I suppose this isn't a passthrough setup

Which isn't exactly an issue, as I point out in the Virtually FreeNAS sticky. There's often more than one way to accomplish a task. You just have to understand what you're doing and why.

If you embrace the notion that FreeNAS is intended as enterprise-level ZFS storage with end-to-end data protection, and that its primary function is data protection alone ...

Yes, but you can go mad down that line of reasoning. Down that line of reasoning lies RAIDZ3 and enterprise class drives and other costly insanity that doesn't necessarily translate to good return on investment.

I understand you're not making that argument, by the way; I just wanted to expand upon your comment.

I think the forum as a whole has evolved to a point beyond 'Virtualization == EVIL'. But the primary and best course of action is to get inexperienced users to pause, and think about potential consequences. We see very few super 'dumb' virtualization scenarios proceed. We also see very few pools lost to virtualized systems that follow the guides.

I'm not sure we've seen ANY lost, but that includes proper burn-in and validation as part of the process. I know we've seen some dodgy setups that looked like they might work but failed pretty quickly.

With all that said, back to the first post.

Don't do it, but if you do it, it will eventually end with you losing your pool, and we don't want crying users.

s/will/could/

And as long as you own your disaster, we're fine with it, and maybe we'll even buy you a beer.

it sounds as if FreeNAS can work as a guest

I think it can

but it seems to me that there is a big random factor in the game.

Which is introduced by the arrogant user-who-knows-better-than-me that sits in front of his computer with a six pack of beer and expects that he isn't subject to fate's cruel sense of humor.

Ironically it may not be hard to know-better-than-me, since I don't necessarily claim any special expertise.

However, I've made a living in this business by assuming things will find a way to bite me in the ass, so, y'know, often things get burned in for many months prior to being put into production, or get tested alongside existing solutions, etc., so I can realistically evaluate them. You'll notice that the main virtualization guide is a thinly disguised guide as to how to get RID of the virtualization layer in the event of problems.

If you plan for disasters and problems, fate is less likely to come knocking at your door.

What else can lead to a complete loss of the pool data, so that the pool isn't accessible anymore, when using a virtualized FreeNAS?

I think the scariest possibility is some unexpected interaction.

Like what happens if you reinstall the hypervisor and it cheerfully sees all that "unused" disk space out there and formats it for you? Scary quote from VMware documentation:

By default, all visible blank internal disks are formatted with VMFS, so you can store virtual machines on the disks. In ESXi Embedded, all visible blank internal disks with VMFS are also formatted by default.

Caution
ESXi overwrites any disks that appear to be blank. Disks are considered to be blank if they do not have a valid partition table or partitions. If you are using software that uses such disks, in particular if you are using logical volume manager (LVM) instead of, or in addition to, conventional partitioning schemes, ESXi might cause local LVM to be reformatted. Back up your system data before you power on ESXi for the first time.


I could quote some others, but you know what I mean. I think these guys know what they are talking about. But I still haven't found the answer to why they always say "don't do it". Is the only reason for that what I tried to describe above in my own words?

There are only a few cases:

1) You're not a big boy and you get scared off. You're safe in that case.

2) You're not a big boy but you think you are. You eventually get spanked by some bad event. That's sad. If you don't get spanked, you're really frickin' lucky.

3) You're a big boy and you own the result of your efforts. Could be great success, could be total data loss, could be anything in between. I, at least, am happy to share knowledge with other big boys. Which is why I published a laundry list of the known issues.

I think some hypervisors are also rock solid. Big systems are running on top of them.

What hypervisors would those be? Because I encourage you to look at the changelog between versions. This might give you a better idea of what "rock solid" is.....n't.

A virtualized FreeNAS has this as an additional layer between itself and the hardware. Now, if I trust the FreeNAS and ZFS code, why shouldn't I trust my hypervisor code?

A person that trusts both is twice the fool. I wouldn't trust either one. Decades of experience coding and administering systems convinces me that there are plenty of problems in both.
 

depasseg

FreeNAS Replicant
Joined
Sep 16, 2014
Messages
2,874
To me, the warnings against virtualizing FN are intended to keep (and have been effective at keeping) people who don't work with virtualization (dare I say at an enterprise level) on a daily basis from following some guide and getting in way over their heads with no way to recover.

Can a storage system run virtualized safely? Absolutely - just look into Nutanix. It's a VM running on either ESX or Hyper-V (and maybe Xen) that passes the controller through to the VM. Rock solid. No issues. Because there are limited configuration choices and users don't have the ability to go in and muck it up. FreeNAS doesn't have that level of control. Just look at the number of 4GB RAM systems that people are reporting issues with.

I equate it to ski areas that have Green, Blue and Black Diamond slopes for varying degrees of terrain. And then, just to hammer home the warning, in the really, really tough terrain, they rope off the entrance to a narrow path with HUGE signs so whoever proceeds realizes that should a skier fall, there is an extreme risk of permanent injury or death. If you are an awesome skier, go for it, but if you have some hesitation, then that slope probably isn't for you.
 

m00nraker

Dabbler
Joined
Apr 12, 2015
Messages
13
Thank you very much for participating and sharing your thoughts and skills.

@gpsguy:
The quotes you included were from a recent thread where the OP doesn't have any experience (yet) with FreeNAS or VMware. In his case, I think we all agreed that he shouldn't virtualize right now. Now, throw in virtualization. "If I can run Windows Server 2012 R2 with 1GB of RAM, that ought to be plenty for FreeNAS 9.3." Nope, one really needs 8GB at a minimum. And then there are other issues, like whether the hardware supports VT-d ...
At the end of the day, the forum doesn't have the manpower to support virtualization questions. We are just (unpaid) volunteers.
But what about when people want to use FreeNAS without understanding ZFS at all? You can't be successful with FreeNAS if you don't understand the filesystem. Using 1GB or 8GB is more a ZFS issue than a virtualization issue. There are other things to understand when using FreeNAS as well. That's what I mean. Maybe a user is an expert in virtualization but doesn't have any ZFS skills.

@mjws00
...
In the case of Proxmox/KVM, the knowledge base is minuscule at best. Not to mention you already have ZFS on Linux running; why virtualize BSD and ZFS on top of that? We also have known BSD-on-Hyper-V/Proxmox issues to contend with, some of which are known to be fixed in FreeBSD 10, but the FreeNAS stable codebase is not on that kernel. So it is a case of blazing a trail that has a very limited purpose.

A secondary consideration is the level of support available. We have some great virtualization expertise (ESXi) available... but the system quickly becomes so complex that the man-hours just don't exist for us to troubleshoot all the possible places to screw up. It is a difficult environment to troubleshoot on a forum, with inexperienced end users, especially when there is a whole additional FreeNAS/ZFS learning curve added.
In that case ZFS on Linux is only for the host and its VM storage; the Proxmox setup creates a pool for itself and for VM storage. If I shared this host ZFS pool with a virtualized FreeNAS, FreeNAS wouldn't have control over the pool, so I would lose some of the data protection functions FreeNAS offers. That's why I would give FreeNAS control over its own bunch of HDDs and let FreeNAS create its own pool. I think it's wrong to ask why I would virtualize BSD and ZFS on top of a ZFS on Linux pool. Proxmox needs a filesystem, so why not ZFS on Linux? It's easy to set up a redundant mirror or RAIDZ1 for the host system. FreeNAS controls its own ZFS pool, so why shouldn't the host?

You say that there isn't enough manpower to troubleshoot virtualization. I accept this, of course; you do it in your spare time. But even if there is a steeper learning curve, that alone shouldn't be the reason to say "that's the wrong way, keep your hands off".

@jgreco
Which isn't exactly an issue, as I point out in the Virtually FreeNAS sticky. There's often more than one way to accomplish a task. You just have to understand what you're doing and why.
Can you please explain it again? I don't understand what you mean. In your sticky you wrote:
The only sane way to do that is to attach the disks to the FreeNAS VM directly, via PCI-Passthrough, as documented here.
Now you write that not passing through an HDD isn't exactly an issue. That doesn't fit together, or am I wrong? Without passing through an HDD, FreeNAS won't have complete control over the pool and the HDD, right? What is the other way you mentioned?

And as long as you own your disaster, we're fine with it, and maybe we'll even buy you a beer.
What about this: it depends on where the pub is and how far I have to go. I will probably take you up on your offer and gratefully accept the beer without creating any disaster. Promise. On that occasion I will bring you my server to administer, because you're the expert, and I'll look over your shoulder to learn a few things. Agreed? :)

Which is introduced by the arrogant user-who-knows-better-than-me that sits in front of his computer with a six pack of beer and expects that he isn't subject to fate's cruel sense of humor.
I hope you don't think I'm that kind of user. :)

You'll notice that the main virtualization guide is a thinly disguised guide as to how to get RID of the virtualization layer in the event of problems.
OK, understood.

Like what happens if you reinstall the hypervisor and it cheerfully sees all that "unused" disk space out there and formats it for you? Scary quote from VMware documentation:
LOL, funny example. Not so funny for the user losing his pool. I understand what you mean. As always: those who can read (and understand) have a clear advantage.

What hypervisors would those be? Because I encourage you to look at the changelog between versions. This might give you a better idea of what "rock solid" is.....n't.
Huuuu, right, maybe I used the wrong expression in that context; "rock solid" may be the wrong term. But I think a hypervisor like ESXi is stable enough to use as a virtualization platform for a storage system. In reality I don't know that, of course; that's my subjective opinion. I read the ESXi 5.5 release notes:
https://www.vmware.com/support/vsph...55u2-release-notes.html#resolvedissuesstorage
So there are many known open issues. But what makes software rock solid? There will always be issues, bugs, whatever... Look at kernel code.

A person that trusts both is twice the fool. I wouldn't trust either one. Decades of experience coding and administering systems convinces me that there are plenty of problems in both.
Oh man. Please tell me, what is the alternative when you don't trust anything? If you don't even trust FreeNAS itself, not talking about virtualization at this point, what else can you do? Do you trust the kernel, your HDD, your bank? You have no choice other than putting your trust in things. If you don't trust a virtualization layer, then you must trust the FreeNAS code, right?


@depasseg
To me, the warnings against virtualizing FN are intended to keep (and have been effective at keeping) people who don't work with virtualization (dare I say at an enterprise level) on a daily basis from following some guide and getting in way over their heads with no way to recover.
OK, but consider that there is also a learning curve for ZFS. People who are not used to it can make design mistakes in their storage systems, especially when they work at the enterprise level.

Can a storage system run virtualized safely? Absolutely ... Because there are limited configuration choices
HoneyBadger already said it: "Unless this system is strictly for learning the GUI/appearance of FreeNAS, it is absolutely asking for disaster."
I mean, if a user has no clue about data integrity, he could set up RAID0 storage and think he is safe because he has heard that putting data on a RAID system is a good choice, while having no idea about the different RAID levels. So even a simple GUI with limited configuration choices doesn't protect you against foolishness. In all respects you have to know what you are doing; I don't think virtualization has an exceptional position in this game.

Using limited RAM (4GB) also has nothing to do with virtualization. What about using 1GB with a native FreeNAS? I could imagine you would run into trouble there, too.
 

m00nraker

Dabbler
Joined
Apr 12, 2015
Messages
13
Overall, I now have a better understanding of what you all think about FreeNAS virtualization and what the problems are. I have personally decided to try it for home use. If it fails one day, I won't complain. Promise. My preferred platform is Proxmox and KVM. I will give it a try with FreeNAS, even if I still have a setup issue. We will see; maybe I'll get some help setting up my system in my other thread.
So thank you all. I like the forum.
 

jgreco

Resident Grinch
Joined
May 29, 2011
Messages
18,680
@jgreco

Can you please explain it again? I don't understand what you mean. In your sticky you wrote:

Now you write that not passing through an HDD isn't exactly an issue. That doesn't fit together, or am I wrong? Without passing through an HDD, FreeNAS won't have complete control over the pool and the HDD, right? What is the other way you mentioned?

The main virtualization sticky is a general warning for the people who come in here thinking they're going to heap eight 4TB drives onto a 16GB ESXi box, pass them through as nonredundant datastores, run FreeNAS as a 4GB VM, and self-serve iSCSI back to the VMware host platform in order to provide space for their other VM's. Or any combination of that foolishness. I mean, yes, it's a perfectly obvious thing to want to do. It just leads to tears.

The alternative virtualization sticky touches an entirely different use model, one that assumes storage reliability is guaranteed, managed and mitigated through the conventional vSphere tools. It isn't practical for large amounts of storage, but would work for a small office needing to do some basic CIFS document sharing, etc.

There are other theoretically workable scenarios, and I can virtually guarantee that some really smart people have gone through the documentation I've provided, said "I can mitigate this problem by doing X, and this doesn't apply to my specific case because of Y, and I have to fix Z" and come up with some equally workable solutions. That's why I talk about the problems (and solutions) at a somewhat abstract level.

Oh man. Please tell me, what is the alternative when you don't trust anything? If you don't even trust FreeNAS itself, not talking about virtualization at this point, what else can you do? Do you trust the kernel, your HDD, your bank? You have no choice other than putting your trust in things. If you don't trust a virtualization layer, then you must trust the FreeNAS code, right?

No. Assume something's out to get you. Test, validate, replicate, and back up. The practical reality is that you cannot prove to me that one of us won't be dead in five minutes. We won't even see the plane coming, or the meteorite, or the hidden structural defect in the building we're sitting in, or the electrical defect in the device we're typing messages on. For the most part, we can recognize the potential of risks without also being paralyzed into inaction.
 

Tywin

Contributor
Joined
Sep 19, 2014
Messages
163
The practical reality is that you cannot prove to me that one of us won't be dead in five minutes. We won't even see the plane coming, or the meteorite, or the hidden structural defect in the building we're sitting in, or the electrical defect in the device we're typing messages on.

We call this someone's "bus factor"; if an individual were to be hit by a bus on their way home, how deeply would it affect the organization's capability? The deeper the impact, the higher the bus factor :)
 