All decisions, in one way or another, deal with numbers. Making good choices is about understanding things like risk calculations, statistical patterns, and basic probability. And though many people turn and run at the sound of these words, there’s no need to. You can easily apply these number-based concepts in order to improve your decision making.
Here, we offer you a set of tools that can improve the way you think about numbers in decision making. We will discuss three of the most common mistakes related to numbers and decisions. Then you’ll learn about specific strategies to counteract these errors. In no time, your understanding of the power of numbers will help you make the most logical, unbiased decisions.
As always, our team of psychology and neuroscience PhDs has reviewed over 20 papers in decision science, cognitive neuroscience, and psychology to provide you with these suggestions.
Why numbers are so important in making good choices
Numerical and statistical literacy isn’t a trivial skill. It’s crucial for optimizing decisions, helping with things like:
- Applying statistics and likelihood estimates as evidence in arguments.
- Persuading and convincing others about the importance of an idea or concept.
- Reading and interpreting numbers presented in surveys, statements, and graphs.
- Seeing the difference between correlation and causality to know which course of action demands more of your attention and time.
- Sticking to long-term plans and goals for better project success.
Yet despite this, most of us carry memories from our earliest days in grade school, where our teachers led us to believe that you either “get” numbers and math, or you don’t – and that there isn’t much you can do about it. Not true. The way you process numbers can be learned and improved, regardless of your baseline skill level, and regardless of what your Grade 8 teacher told you.
Let’s start the training.
So how good do you think you are with numbers? A simple online tool (link below) can assess your basic statistical literacy in decision making. More specifically, it’s a test of your probability reasoning. The test has between 2 and 4 questions and should only take you a couple of minutes. You’ll see your percentile ranking on the last page after submitting your answers. Click here to start, and come back when you’re done.
How did you do? Are you surprised by your results? This will give you an indication of how much you stand to improve your numerical and statistical literacy in making good choices, especially choices involving uncertainty and risk.
Below, we show you how to tackle 3 of the most common pitfalls related to statistical literacy in decision making. You’ll learn how to:
- Tell the difference between relative and absolute risk.
- Recognize the disposition effect (AKA “know when to fold ‘em”).
- Recognize false positive and false negative information related to your decisions.
Don’t be tricked by relative risk estimates
There are two types of risk – absolute and relative. Absolute risk is the overall size of the risk. Let’s say, for example, you don’t want to roll a 3 on a die because you’ll lose all your money in a bet. When you roll the die, the risk that you’ll roll a 3 is ⅙, or 16.67%.
Relative risk, on the other hand, is how much a risk increases or decreases compared to some reference condition. Continuing our example, if you now roll 2 dice (still hoping to avoid a 3), the number of losing faces in play doubles – a relative increase of 100% (there are two ‘3’s in the set). But the absolute risk per die remains the same: 2 out of 12 faces, which is still 1 out of 6, or 16.67%.
These principles have real-world implications. Many of us have to make decisions about how to invest our time, weighing risk, return, and uncertain outcomes. For example, when you have a presentation to give, you need to decide how much time to invest in preparing for it. Is the return on your investment worth it? Will your boss recognise your efforts? Will you get a raise? In such circumstances, it’s rare to think in terms of absolute risk; instead, we tend to reason in relative terms. The problem is that relative risk values tend to be over-inflated and are more easily skewed.
For example, let’s say existing knowledge and evidence tells us that a particular investment decision in a certain market leads to negative outcomes, on average, 1 out of 50 times. Now, shifting market pressures cause the negative outcome to happen not once (out of 50 instances), but twice. But the only information that you hear through the grapevine is that it’s twice as risky – a relative increase of 100% (a relative risk estimate).
This information alone might deter you, because a two-fold, or 100%, increase in risk sounds unnerving. But in terms of absolute risk, it’s an increase of just 1 case in 50 – from 1 in 50 (2%) to 2 in 50 (4%) – a minor bump you don’t need to lose sleep over.
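To make the distinction concrete, here is a minimal sketch in Python, using the numbers from the example above (the variable names are ours), showing how the same change looks in relative versus absolute terms:

```python
# Relative vs. absolute risk for the investment example above:
# the chance of a negative outcome moves from 1 in 50 to 2 in 50.

old_risk = 1 / 50   # 2% chance of a negative outcome
new_risk = 2 / 50   # 4% chance of a negative outcome

relative_increase = (new_risk - old_risk) / old_risk   # 1.0  -> "twice as risky"
absolute_increase = new_risk - old_risk                # 0.02 -> 2 percentage points

print(f"Relative increase: {relative_increase:.0%}")   # Relative increase: 100%
print(f"Absolute increase: {absolute_increase:.0%} (from {old_risk:.0%} to {new_risk:.0%})")
# Absolute increase: 2% (from 2% to 4%)
```

The same underlying change prints as an alarming “100%” in relative terms and as a modest two percentage points in absolute terms.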
The same flaw in forecasting about future unknowns can happen for positive outlooks as well. Whereas the above example might cause unnecessary anxieties, similar reasoning can promote unrealistic hopes and over-the-top optimism.
Here are some tips to keep in mind when evaluating risks. They will help you improve your numerical and statistical literacy for making better choices:
Be suspicious of relative risk values. Whenever you come across data or numbers reported as percentages, ask yourself:
- Is the risk evaluation based on percentage data?
- Is the number presented in relation to other numbers? For example “Our company is 80 times more likely to go bankrupt … ”
If yes, pay extra attention to the following steps:
1. Once you know the number you’re looking at is relative, look for more information (the “small-print” details that aren’t so obvious). What does the number actually represent? Does the person or group providing it have an agenda? Would they have reason to intentionally skew the data in their favor? Your goal is to translate the relative risk into an absolute risk for greater transparency.
2. For example, if you see “80 times more likely to go bankrupt …” and it ends there, always ask the follow-up: “80 times more likely than what? Than whom?” Perhaps it’s “80 times more likely to go bankrupt than the largest competitors in the market.” That prompts you to find out how likely it is for the big players in that space to go bankrupt. Perhaps their risk of bankruptcy is a mere 0.05%, which puts the estimate for your company at 4%. That 4% is the absolute risk – and it sounds much less disastrous than the “80 times” initially presented as a relative estimate.
3. Also, look for the baseline whenever you come across percentage estimates. For example, if you see “Sales went up by 30% …”, that’s a signal to find out the starting point so you get a true read of the before and after. Perhaps it’s: “Sales went up by 30% from Q1 2014 to now.” Even better, get the raw unit counts to see how many items were sold then versus now: “The number of units sold in Q1 2014 was 375k. The number of units sold last quarter was 488k.” (A minimal sketch of these translations follows this list.)
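As a rough sketch of steps 2 and 3, here is how those two translations work out in Python, using the hypothetical figures from the examples above (variable names are ours):

```python
# Step 2: turn a relative claim ("80 times more likely to go bankrupt than
# the largest competitors") into an absolute risk, given the comparison baseline.
relative_multiplier = 80
competitor_risk = 0.0005            # the competitors' 0.05% bankruptcy risk
absolute_risk = relative_multiplier * competitor_risk
print(f"Absolute bankruptcy risk: {absolute_risk:.0%}")   # 4%

# Step 3: check a percentage change against the raw counts it came from.
units_q1_2014 = 375_000
units_last_quarter = 488_000
growth = (units_last_quarter - units_q1_2014) / units_q1_2014
print(f"Sales growth: {growth:.0%} ({units_q1_2014:,} -> {units_last_quarter:,})")
# Sales growth: 30% (375,000 -> 488,000)
```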
When making your decision, don’t be swayed by the relative numbers. Ignore them! Instead, focus on the absolute values you identified in the steps above. If you can’t find the additional information needed to translate from relative to absolute risk, stay skeptical until you can.
Bonus tip: Any time you read or hear about things framed as percentage estimates, probabilities, or higher versus lower likelihoods, question whether it’s the relative risk or the absolute risk being reported. And be aware: most media and journal outlets (even the reputable ones) report relative rather than absolute figures, because relative figures are more impactful – and often more misleading.
Know when to hold ‘em, know when to fold ‘em
Many decisions you face come down to either giving up on a plan or holding onto it. The choice rests on expected utility – the positive gain you expect from sticking to the current plan versus abandoning it. This gives rise to the disposition effect: the tendency to stick to the plan when you expect positive gains and to abandon it when you expect little or no gain.
Because of the fear of uncertainty and loss, the disposition effect can skew our thinking and lead us to make wrong decisions. People are risk averse when facing potential gains, and risk tolerant when facing potential losses. This means that we’re much more likely to “stick it out” and take the risk with a plan even when, all else being equal, it appears to be ineffective.
It’s during these times that we wrongly believe there’s a better chance that things will turn around. It’s an unconscious bias akin to wishful thinking, where we hope that any initial losses can be eventually recouped with just a little bit of patience and waiting. At the heart of it, however, it’s a flaw in probabilistic reasoning.
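At its core, the comparison is an expected-value calculation: weigh each possible outcome by its probability and compare the totals, rather than leaning on the hope of a turnaround. Here is a minimal sketch in Python; all the probabilities and payoffs are made-up numbers for illustration, not figures from any study:

```python
# Expected utility of sticking with a struggling plan vs. abandoning it.
# All probabilities and payoffs below are hypothetical, for illustration only.

def expected_value(outcomes):
    """Sum of payoff * probability over all possible outcomes."""
    return sum(payoff * prob for payoff, prob in outcomes)

# Sticking it out: a small chance of a turnaround, a large chance of deeper losses.
stick = [(+50_000, 0.20), (-80_000, 0.80)]

# Abandoning now: a certain, but limited, loss (the sunk costs already incurred).
abandon = [(-20_000, 1.00)]

print(f"Stick:   {expected_value(stick):+,.0f}")    # Stick:   -54,000
print(f"Abandon: {expected_value(abandon):+,.0f}")  # Abandon: -20,000
# The disposition effect pulls us toward "stick", even though the numbers favor "abandon".
```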
Why?
It’s all in your brain. A broad set of brain areas is responsible for recognizing utility in decision making, and these areas become more active when your decision involves potential gains.
However, if the decision involves potential losses, those same regions show decreased activation. What is interesting is that these areas are also responsible for processing negative emotions in decision making. So, as a result of the dampened brain activity, a potential loss doesn’t come with the negative emotional signal it should. With little anxiety or fear to hold you back, you’re more motivated to “stick it out.” That’s not good.
How do you deal with it?
The best antidote is a predefined, rules-based approach to the decision of whether to abandon or stick to a plan. Follow these steps:
1. In the beginning stages of a plan, create a set of rationally defined rules. Be specific! These rules define what counts as success and serve as your benchmark for evaluating whether the project is worth continuing. Make sure they are based on concrete numbers – this is where the power of statistical literacy kicks in. Use the numbers to draw the benchmark line that will indicate success, and use absolute, not relative, numbers to quantify what a successful project looks like.
2. Agree on these rules well in advance, ideally with at least two other people for improved objectivity. You can refer to the project proposal and the goals and timeline you specified there. That way there’s no ambiguity down the road when questions or issues arise.
3. Make the criteria you established in Steps 1 and 2 known, and share them with the team. Keep a “running project” whiteboard in the office where you list the criteria and revisit them at set points in the project timeline. By doing so, you make them official, and you cannot ‘back out’ and change your rules if the project takes a turn.
4. Agree with your team on a firm date on which you will decide whether to carry on with the project.
5. If the criteria aren’t met by that date, abandon the plan or revise it as necessary.
6. Don’t be fooled into thinking you can reason properly about future uncertainties in the middle phases of a project or plan – that’s when you’re most susceptible to the disposition effect. Implement a series of checks and balances beforehand to keep yourself from falling prey to this bias. (A minimal sketch of such a check follows this list.)
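To make the rules-based idea concrete, here is a minimal sketch in Python of what a pre-agreed go/no-go check might look like at the review date. The metric names and thresholds are hypothetical examples, not a prescription:

```python
# A predefined, rules-based go/no-go check, agreed on before the project starts.
# Metric names and thresholds are hypothetical examples only.

benchmarks = {
    "units_sold": 10_000,        # absolute target, not "20% better than last time"
    "active_customers": 2_500,
    "support_tickets_max": 300,  # upper bound: fewer is better
}

def review(actuals, benchmarks):
    """Return 'continue' only if every pre-agreed benchmark is met."""
    failures = []
    if actuals["units_sold"] < benchmarks["units_sold"]:
        failures.append("units_sold")
    if actuals["active_customers"] < benchmarks["active_customers"]:
        failures.append("active_customers")
    if actuals["support_tickets"] > benchmarks["support_tickets_max"]:
        failures.append("support_tickets")
    return ("continue", failures) if not failures else ("abandon or revise", failures)

# At the agreed review date, plug in the real numbers -- no renegotiating the rules.
decision, missed = review(
    {"units_sold": 8_200, "active_customers": 2_700, "support_tickets": 410},
    benchmarks,
)
print(decision, missed)   # abandon or revise ['units_sold', 'support_tickets']
```

The point of writing the check down in advance is exactly the point of steps 1 to 5: the criteria are fixed before the disposition effect has a chance to argue for “just a little more time.”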
Recognize false information
In making good choices, the notion of false positives and false negatives is critical. Generally, two types of misses or errors can occur, and both are relevant to many kinds of decisions in the workplace:
- Thinking something is true when in fact it is false (a false positive).
- Thinking something is false when in fact it is true (a false negative).
The first step to reasoning properly about these outcomes is to see them as part of a broader statistical pattern that follows the laws of probability, rather than as one-off instances. The mistake is to act on either kind of miss without a more general understanding of all possible outcomes.
For example, let’s say your team has gathered initial research evidence supporting a positive product-market fit in your new business line, but months down the road you learn this initial information was wrong and the product launch was a complete flop (false positive = wasted money and resources).
By contrast, let’s say your team decides there’s no solid reason to invest more money in an underdeveloped new technology, only to find out five years down the road that a spin-off of that technology brought a competitor a reported $100M in combined sales (false negative = missed opportunity).
To avoid acting on single one-off instances (and being biased by either type of miss), we recommend self-checking methods that keep individual false negatives and false positives from driving your choices.
An effective strategy is publishing and pooling the data and outcomes of ALL projects, not just those that work. Why? People have a natural tendency to publicize what works (positive results) and hide what doesn’t (negative results). Yet there is a lot of information to be gained from looking at which ideas and projects did not work out as a whole.
Keep your organization and your team on their toes by asking questions like:
- How many promising ideas have we dismissed in the past year? (false negative reasoning)
- Why were they dismissed? (false negative reasoning)
- Have these ideas found business models elsewhere? (false negative reasoning)
- How many big bets have we lost and why? (false positive reasoning)
- What caused the product/service to fail the market tests? (false positive reasoning)
A great way to represent the accumulation of all this evidence is to tally the results in a 2x2 table. This lets you assess the overall probabilities of all the different outcomes (four in total) and get a better grasp of the possible unknowns of future decisions. For our purposes, we’ll focus on just the two misses/errors. For example:

| | Product actually succeeds | Product actually flops |
| --- | --- | --- |
| Market test predicts success | True positive | False positive: 3% |
| Market test predicts failure | False negative: 25% | True negative |
Here we see that the false positive rate is very good – a low 3%. In other words, when the market test results indicate that a product will do well, only 3% of the time is the test wrong and the product actually turns out to be a poor product/market fit.
But look at the high false negative rate of 25%. This is bad: for every four tests that are run, one says the results aren’t going to be good when in fact the product ends up being a win. That is a huge lost opportunity.
Use this framework to see the accumulation of outcomes across your business’s decisions. The unknowns of the future can be assessed with far greater confidence when the probabilities are considered properly.
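As a rough sketch of the tallying itself, suppose (hypothetically) you have a record of what each past market test predicted and how the product actually fared. Counting the four cells and computing the two error rates – defined the way the paragraphs above describe them – might look like this in Python; the `records` data is made up for illustration:

```python
from collections import Counter

# Hypothetical records: (what the market test predicted, what actually happened).
records = [
    ("success", "success"), ("success", "success"), ("success", "failure"),
    ("failure", "success"), ("failure", "failure"), ("success", "success"),
    # ... one entry per past project
]

cells = Counter(records)   # the four cells of the 2x2 table

false_positives = cells[("success", "failure")]   # test said go, product flopped
false_negatives = cells[("failure", "success")]   # test said no, product would have won

predicted_success = sum(n for (pred, _), n in cells.items() if pred == "success")
total_tests = sum(cells.values())

# False positive rate: of the tests that predicted success, how many were wrong.
print(f"False positive rate: {false_positives / predicted_success:.0%}")
# False negative rate: of all tests run, how many wrongly predicted failure.
print(f"False negative rate: {false_negatives / total_tests:.0%}")
```

The printed rates depend entirely on the records you pool, which is exactly why publishing the outcomes of all projects, not just the winners, matters.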
Recap on using numbers in making good choices
The common misconception is that some people are good with numbers, and some are not. However, everyone has the potential to master the tools necessary to use numerical and statistical literacy in making good choices. In this post, you learned simple tricks and tips on how to evaluate numbers related to your decision making.
Here is a recap:
- Recognize whether a risk is stated as an absolute or a relative value, so you’re better informed and can use the number appropriately.
- Know when it’s time to abandon a project by evaluating its predicted long-term success against predefined criteria.
- Pool all the misses and failed outcomes in order to get a clearer estimate of how to decide next time around.
With these tactics, you can improve your ability to work with numbers in a whole host of decision making contexts. Making good choices will be as easy as 1-2-3.