Copyright © 2017 K Carpenter Associates Inc. All Rights Reserved.

# When Do You Know Enough? – Part 2

About a year ago, I wrote a blog entry entitled, “When Do You Know Enough?” The point of that piece was that you can be sure of the need to take action (and sometimes what action to take) even when you do not have certainty about what the results of those actions will be. Many companies and individuals waste time and resources trying to “prove the future,” which is a futile effort. Rather, we need to have the nerve and the good sense to take reasonable risks in the face of uncertainty.

But surely, *sometimes* it makes sense to wait, doesn’t it? Yes, of course: it may make sense to postpone a decision if we anticipate acquiring information during the delay *and if the following criteria are met:*

- There is a reasonable probability that the information will cause us to change our decision
- The net value added by the information is positive: its value exceeds its total cost, including any costs or lost value associated with delaying the decision
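These two criteria are easy to state as arithmetic. Here is a minimal sketch in Python, with invented numbers (none come from the article): we wait only if the information has a real chance of changing the decision and its expected gain exceeds its full cost, including the cost of delay.

```python
# Invented illustrative numbers; not from the article.
p_change = 0.30            # probability the information changes our decision
value_if_changed = 500_000 # value gained when changing turns out to be right
info_cost = 40_000         # cost of acquiring the information
delay_cost = 60_000        # value lost by postponing the decision

expected_gain = p_change * value_if_changed
net_value = expected_gain - (info_cost + delay_cost)

# Both criteria must hold: the information can change the decision,
# and its net value is positive.
worth_waiting = p_change > 0 and net_value > 0
```

With these numbers the expected gain is 150,000 against a total cost of 100,000, so waiting for the information passes both tests.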

This may sound like common sense, but you would be surprised how often these criteria are violated, ignored, or rationalized away.

People *like* having more information, especially in complex situations with lots of uncertainty. If you ask someone how much information they should collect prior to making a major decision, a common answer is, “As much as possible.” In today’s world of Analytics 3.0 (a.k.a. “Big Data”), that might be quite a lot of information, indeed. Sending a team back to gather more data and do more analysis is also a common way for business managers to postpone making tough decisions.

But much of the information potentially available is likely to fail one or both of the above tests. I’m not talking about the obviously irrelevant (like the shoe sizes of prospective patrons of a hardware store we’re considering opening); I’m talking about information like the results of focus groups, for instance, which may or may not be representative of our prospective clientele. Small sample sizes and sampling bias make much of that type of information extremely unreliable. If considered properly, unreliable information will usually not cause us to change our decision.

So how reliable is “reliable enough?” Enough to meet the two criteria listed above! Admittedly, that is circular logic: We will only potentially change our decision if the information is reliable, and the way we determine reliability is by checking whether it has the potential to cause us to change our decision. The paradox is resolved by calculating (estimating, usually) several factors:

- The probability of a positive outcome if we go ahead and make the decision without the additional information
- The expected value (EV) of that positive outcome
- The EV of a negative outcome
- The “positive reliability” of the information
- The “negative reliability” of the information
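As a starting point for combining these factors, here is a minimal sketch with invented numbers: the baseline is simply the expected value of deciding now, without the additional information. The two reliabilities enter later, through the Bayesian inversion described below.

```python
# Invented illustrative numbers; not from the article.
p_success = 0.4          # P(positive real-world outcome) if we decide now
ev_positive = 1_000_000  # expected value of a positive outcome
ev_negative = -400_000   # expected value of a negative outcome

# Baseline: expected value of proceeding without more information.
ev_without_info = p_success * ev_positive + (1 - p_success) * ev_negative
```

Any information we buy has to beat this baseline by more than it costs.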

With those last two factors, it is important not to put the cart before the horse. The reason we want more information is we are uncertain about some real-world parameter (like how strong the market will be for our product, whether there is a gold vein under our property, or whether our new drug will be effective in curing halitosis). The probability of getting a positive indication from our test *depends on the state of that real-world parameter*, not the other way around. So the probability of getting an encouraging result from our marketing survey depends on how strong the market for our product really is; the probability of seismic data indicating a gold vein depends on whether or not there actually is a gold vein; the probability of a positive result from a clinical trial depends on whether or not the drug actually is effective. Conditions in the real world cause positive or negative test results; test results do not change conditions in the real world. Therefore, we must always start with “Given a real-world situation of X, what is the probability of seeing test result Y?”

But that’s not what we want to know. We don’t care what the probability of a positive test result is, given a positive real-world scenario; we want to know what the probability of a positive real-world scenario is, given a positive test result. We need to convert the actual cause-and-effect order of events (the real-world situation is X, which causes a test result of Y) into the order in which events happen in our world (first we run the test and get the results; only later do we learn what the real-world situation is). We do this with what is called a Bayesian inversion. The math is actually fairly simple; structuring the problem correctly takes a bit of practice.
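To show how simple the math is, here is a minimal sketch of the inversion in Python, with invented numbers: `pos_rel` and `neg_rel` are the positive and negative reliabilities defined above, and the prior is our probability of a positive real-world outcome before testing.

```python
# Minimal Bayesian inversion for a two-state uncertainty.
# All numbers are invented for illustration.
prior = 0.4    # P(world is positive) before any test
pos_rel = 0.8  # P(test positive | world positive), the "positive reliability"
neg_rel = 0.7  # P(test negative | world negative), the "negative reliability"

# Step 1: total probability of observing a positive test result.
p_test_pos = prior * pos_rel + (1 - prior) * (1 - neg_rel)

# Step 2: Bayes' rule reverses the conditioning, giving
# P(world positive | test positive) from P(test positive | world positive).
posterior = prior * pos_rel / p_test_pos

# Invented outcome values, used to price the information:
ev_positive, ev_negative = 1_000_000, -400_000

# With the information: proceed only on a positive test; otherwise walk away (EV 0).
ev_with_info = p_test_pos * (posterior * ev_positive + (1 - posterior) * ev_negative)

# Without the information: decide now on the prior alone.
ev_without_info = prior * ev_positive + (1 - prior) * ev_negative

voi = ev_with_info - ev_without_info  # gross value of the information
```

With these numbers, the posterior probability rises from 0.4 to 0.64 after a positive test, and the gross value of the information works out to 88,000, which would then be weighed against the cost of acquiring it and of delaying the decision.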

As a public service, Decision Strategies recently uploaded a value-of-information (VOI) tool to our web site that is free for the downloading. Judging from the number of downloads so far, people are interested. The tool takes you through the logic step by step and automates the Bayesian inversion for simple VOI situations – those in which you have only two or three decision choices, and in which your uncertainty can only have two or three values (or can be simplified to be represented as such). It has other features to accommodate typical VOI complications, but rather than describe them here, I would recommend that you take a look at the tool itself at http://www.decisionstrategies.com/toolbox.