Pandemics produce epistemological crises: we are no longer sure what we know, because so much of what we knew is no longer true.
It’s happening now, as it has in every pandemic going back (at least) to the Black Death of the 1300s.
The Black Death was caused by plague (the bacterium Yersinia pestis). Like COVID-19, it originated in Asia and spread around the world through international trade and travel. Rulers of the era desperately sought information to understand it, sending spies and foreign agents to report on how other realms were coping. Social distancing was important, and yes, there was a shortage of PPE.
We have gone through something similar with our pandemic. The first deluge of information was about how to keep ourselves, our families and our businesses healthy. Now we are trying to discern what happens next. Will there be a new normal? Will customer trends and needs change? The information is fragmented and spotty.
Today’s newsletter addresses how to make sense of this fragmented and spotty information and offers suggestions on what to do about it.
Make Sure Information is Not Being Blocked by Your Organization
So how does one get the information that matters without being overwhelmed? The key is sampling. Make time to meet with your customers, employees and suppliers (all online, most likely), not to mention keeping an eye on the competition.
That said, remember that sampling comes with its own set of gotchas, so be sure to keep a few things in mind…
Be Careful With Small and/or Biased Samples
“Path forking” is a small-sample problem that occurs when people look too hard for their holy grail.
This snag first became clear to drug companies, which spent lots of money on tests and trials that all looked great until the big confirmatory trial failed.
What happened was that in the search for the next cure, researchers kept slicing and dicing data until they found something they thought would work. And it did work, in that very small sample. Unfortunately, that sample was in no way representative of the general population in question. It was as if they had gone down a path of so many “if-then” forks that the result became nonsensical. Fundamentally, this is a risk with any data sample, and especially with small ones.
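The forking-paths problem is easy to demonstrate with a quick simulation. The sketch below (plain Python; every number is invented for illustration) generates data with no real effect at all, then slices it into small subgroups. The best-looking subgroup almost always shows an impressive “effect” that a single large sample honestly refutes.

```python
import random
import statistics

random.seed(42)

def trial(n):
    """Mean difference (treatment - control) for n pure-noise samples each.
    The true effect is zero by construction."""
    treat = [random.gauss(0, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    return statistics.mean(treat) - statistics.mean(ctrl)

# Slice the "population" into 20 small subgroups of 10 subjects each
# and report the best-looking one -- exactly the forking-paths mistake.
diffs = [trial(10) for _ in range(20)]
best = max(diffs)
print(f"True effect: 0.00, best subgroup 'effect': {best:.2f}")

# One honest large sample tells a very different story.
print(f"One large sample (n=2000): {trial(2000):.2f}")
```

The point is not the exact numbers but the mechanism: search across enough small slices of noise and something will always look like a cure.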
Make Sure Your Data Is Truly Representative
- Buy data, if available. Many services track all sorts of industry and consumer data. That data typically comes from a much larger data pool than is available to an individual company.
- Look for trends. If you see a trend with one customer, or supplier, etc., look to see if it holds true with all (or at least most) customers. Look to see that the trend lasts more than a week or a month.
- Test when possible. Years ago, when I worked at Kraft Foods, Packaging developed a squeezable container for Miracle Whip. When it came time to make product samples, rather than rolling them straight out to a test market, they were handed out to employees to use for 60 days. Short answer: big fail. If the squeeze jar was dropped, it blew up on landing, sending Miracle Whip everywhere! So, test new ideas if you can.
- Look for contradictory examples. In other words, look for when your model is wrong and the unexpected happens. I am not talking about financial models here. Rather, I am referring to the models we all have about how things work. Dig in and seek to understand the variance. Is it idiosyncratic? Random? Customer-specific? Or is it systemic and part of a larger trend? As William C. Wimsatt observed, “We come to understand how things work by studying how, when and where they break down.”
Enough Ruminating. What to Do?
First, consider the consequences of any decision you might make. Is it easily reversible? Will it have a broad or narrow impact? Pull the trigger quickly on decisions with small, reversible consequences; move more deliberately on those with large, irreversible ones.
For example, when I was in the rent-a-car business, we had a problem of way too many cars getting into accidents, all involving local residents. We didn’t have the data to identify zip codes with statistically significant bad accident rates; the samples were too small to mean anything at the zip-code level. Since the cost to us of a wreck was sky high, we punted and raised rates for all Maryland locals to discourage them from renting: no return flight out of BWI, no rental car. It worked. Even when you don’t have enough data, don’t be afraid to act when the cost of not acting is too high.
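To see why zip-code-level data could never have settled the question, here is a rough power calculation using the standard two-proportion sample-size formula. The 2% baseline and 4% “bad zip” accident rates are invented for illustration, not the company’s actual figures.

```python
import math

def n_per_group(p1, p2):
    """Approximate rentals needed per zip code to detect a jump in
    accident rate from p1 to p2 (two-sided alpha=0.05, power=0.80,
    two-proportion z-test)."""
    z_a, z_b = 1.96, 0.84  # z-values for alpha=0.05 (two-sided) and 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Baseline accident rate 2%, suspected "bad" zip code at 4%:
needed = n_per_group(0.02, 0.04)
print(f"Rentals needed per zip code: {needed}")
```

With a requirement of well over a thousand rentals per zip code, and each zip code producing at most a few dozen rentals a month, you would wait years for a statistically meaningful answer. Acting on the aggregate signal was the only practical option.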
Second, look for robustness.
This refers to making decisions when there is no ironclad data to prove you will be right (AKA, most business decisions). Robustness is the existence of multiple, at least partially independent, means of determining something.
This may include several analyses that use different data sources or methods and that uncover similar (or even identical) common threads. For example, if some customers are telling you about a trend and you are also seeing that trend elsewhere — particularly with other customers — you can have some confidence that the trend is real.
Third, be wary of mind traps.
These come in many forms…
- Anchoring. The tendency to make a “preliminary,” or strawman, first-impression decision and then stick with it, ignoring the rest of the data even when that data does not support the initial decision.
- Projecting. This refers to placing our own value or belief systems onto other organizations regarding how they will act. This trap helped drive the Cuban Missile Crisis, when Kennedy and his advisors assumed Soviet thinking and values mirrored our own. But it plays out in business, too, when we assume competitors, suppliers and customers will react to events in the same way our company would.
- Delusions of clairvoyance. Assuming we know what others are thinking. Whether it’s the TSA’s Screening of Passengers by Observation Techniques (SPOT) program, an effort that uncovered zero terrorists over a five-year period in which 2 billion passengers were screened, or one business assuming it knows what’s going on inside another, research has shown that despite our best efforts, we can’t do this with any degree of reliability.
- Confirmation bias. This involves focusing on information that confirms our existing beliefs and rejecting data that contradicts them. Confirmation bias can make anchoring bias worse.
- Pattern recognition. While our ability to see patterns in information based on experience is valuable in any number of situations, if we don’t check the underlying data, it’s easy to identify the wrong problem or its cause. Pattern recognition without data verification often results from a lack of time to make proper decisions, from sloppiness, or from simple overconfidence.
- Model reuse. A complementary trap to pattern recognition is reusing old models in new situations while assuming the inputs haven’t changed.
As the pandemic rages on, we are all operating in a world that, while not new in the history of civilization, is new to us. Our environment is rapidly changing, and the ways in which we adapt ourselves and our businesses matter more than ever.
Moving forward, don’t be afraid to make bold decisions. But do so thoughtfully and with care.