Great People & Poor Decisions
Why do intelligent, experienced leaders with a consistent track record of making the right calls sometimes make poor decisions?
Models are needed not only for timely decisions, but also to extrapolate our experiences to new situations. Sometimes our models fail spectacularly - what can we do to mitigate this?
The ability to extrapolate is an important benefit of building models. Those of us who played with fire as a child, and got burnt, know from first-hand experience that flames are hot. When we encounter a different flame - say, a Bunsen burner at school for the first time - we are instinctively hesitant, even if we have never touched that particular type of flame before.
It is worth noting that all models are simplifications, and therefore must omit some information about the thing they are modelling. Good models omit information that is less likely to be material; however, it is always possible that edge cases will occur that depend on the information that has been omitted.
Furthermore, as the world is always changing, any model built from a point in time will become less useful over time.
A model, for models
Here is a hypothesis for how individuals form models:
Tabula rasa
We are all born with a very limited amount of knowledge - a newborn knows very little about the world, as a newborn has very little experience of the world.
Establish
Over time, and individually, we create models to help us make sense of our observations. When we encounter a new situation, we use our past experiences to bet on what we should do.
Exploit
Those of us with models that predict more accurately, or offer some unique insight that others do not have, get ahead.
Rot
The world is continually changing. Those who become more successful focus on exploiting their model, and do not always notice its divergence from reality.
Collapse
At some point, an important decision is called for - one with material outcomes. The individual either is not aware of the divergence, or is left unable to make sense of the situation.
A common consulting challenge
A while back, on a transformational programme, I walked into a meeting room in the midst of a heated exchange between a Client and the team:
Client: "You consultants are all the same. You don't understand our domain, yet you come in, and give us advice that doesn't work for us. Why should I listen to you?"
Awkward silence ensues.
Variations of the above seem to crop up every now and then. The Client recognises a problem, and sometimes even that they lack the capabilities to resolve it without outside help. External help is brought in, and after the initial analysis, the recommendation is not what the Client had in mind - sometimes it is the diametric opposite!
Me: "You are where you are, because of the choices you have taken, and the perspectives you have adopted. If you want to be somewhere different, you will need to choose differently."
My response carries risk, but one I would happily take. Consultants are hired to solve a problem, and the measure of success should be overcoming (or getting around) the challenge, not saying agreeable things. The above response (and variations thereof) has not always had the desired effect; however, exiting for the right reasons is preferable to sticking around for the wrong ones.
Overcoming model rot
Validation - the act of checking our modelled outcomes against real-life observations, and adjusting where necessary - is just as important as the ability to distil an advantageous model in the first place.
Here are three tactics to help us overcome the emotional attachment to a trusted model, and to realise that a model is only as good as the predictions it is about to make, not the past predictions it has made:
Welcome disagreement
Disagreements (at least those in good faith) and differing points of view are potential hints of model rot. I now actively excuse myself from conversations where there is unanimous agreement - in such situations, I have nothing to contribute or gain, except the risk of creating a false sense of security.
Continuously validate
Test models for divergence frequently, while there is little cost to getting things wrong. Small surprises are often easier to deal with than a single insurmountable one.
Question the model
When faced with an important decision, we should recognise that any simplification, by definition, cannot account for all situations. The key questions are "what is the reasoning for this extrapolation?" and "what are the material differences?".
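The "continuously validate" tactic can be made concrete. Below is a minimal, illustrative sketch in Python: a trusted model is checked against a stream of real observations, and an alert is raised when recent prediction error drifts well beyond its historical level. All names and thresholds here (`detect_rot`, `DRIFT_THRESHOLD`, `WINDOW`) are hypothetical choices for the sake of the example, not a prescribed method.

```python
# Illustrative sketch of continuous model validation.
# DRIFT_THRESHOLD and WINDOW are arbitrary assumptions for this example.

DRIFT_THRESHOLD = 2.0   # flag when recent error far exceeds historical error
WINDOW = 5              # number of most recent observations to compare

def model(x):
    """A trusted model, distilled when the world behaved like y = 2x."""
    return 2 * x

def detect_rot(observations):
    """Compare recent prediction error against long-run prediction error.

    observations: list of (input, actual_outcome) pairs, in time order.
    Returns True when the model appears to be diverging from reality.
    """
    errors = [abs(model(x) - y) for x, y in observations]
    if len(errors) <= WINDOW:
        return False  # not enough history to judge divergence yet
    historical = errors[:-WINDOW]
    recent = errors[-WINDOW:]
    hist_avg = sum(historical) / len(historical)
    recent_avg = sum(recent) / len(recent)
    # Small, frequent checks: cheap to run, and cheap to act upon.
    return recent_avg > DRIFT_THRESHOLD * max(hist_avg, 1e-9)
```

For example, if the world quietly shifts from y = 2x to y = 3x, the historical error stays near zero while the recent error grows, and the check fires - a small surprise caught early, rather than a single insurmountable one later.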
Where technology is involved, models tend to go out of sync with reality at alarming rates. Fifteen years ago, if you had told me that large-scale data processing (e.g. data science) would be done in an interpreted language (Python), I would have laughed at you - yet here we are. Hard-earned experience of optimising C++ code (e.g. when to inline functions, intricate knowledge of STL structures and algorithms) is no longer a relevant topic of the domain.
It is not that models go wrong sometimes; when technology is involved, model divergence compounds and accelerates over time.
The issue is not with having a flawed model - models are simplifications, and therefore limited in one way or another. Models are necessary for us to apply principles learned from past experiences to new situations. Issues arise when we fail to continuously validate our models against reality - and let what we believe the world "should be" override what it actually is.