How hard can it be?

Why do we keep getting seduced by big bold ideas? Because we are experts at seeing the world as simpler than it really is. Read Trust Work's latest blog on the implications of this human trait.

‘How hard can it be?’ were famously David Cameron’s first words when he assumed the role of Prime Minister of the UK in 2010. (His last words as PM in 2016 were somewhat less breezy.) Why does it seem to be part of human nature to downplay, or simply not see, the complexities of the choice we are about to make? Is it to speed up evolution: the unlucky or most foolish will eradicate themselves, and the few lucky ones will prosper? Is it to avoid stasis: who would get married, change jobs, discover a new continent, or fly to the moon, if all possible ways in which it could go wrong were known at the time? 

I honestly don’t know, but it strikes me that, especially when it comes to important political decisions but also when considering organisational change, we seem to be more attracted by big bold visions than by well thought-out and thoroughly evidenced concepts. We seem to prefer placing all our chips on ‘23 red’ rather than carefully spreading our risk. 

It is not difficult to call up a few examples from British politics in recent years: Brexit, anyone? Transforming Rehabilitation? HS2? But to be fair, this is not just a weakness of politicians. 

Of course, big bold ideas appeal more to the imagination than a proposal for incremental improvement. We would rather dream of a sleek, shiny, super-fast train slicing through the countryside than of putting up poles to electrify a nearly forgotten part of the rail network in the backcountry. 

Perhaps it also has something to do with the concept of ‘loss aversion’: the tendency of human beings to stick with what they know (i.e. avoid a possible loss) unless the alternative offers at least twice the gain. So, if you want to mobilise people and win their support, you had better promise something big. 

Or perhaps there are other elements at play. Not only do we often find it hard to see the downsides and complexities of the choice we are about to make, but once we have made it, we find it almost impossible to see that we may have got it wrong. Human nature is an expert at twisting logic as well as facts, helping us ‘explain’ why our decision was not only right when we made it, but is actually still right today. 

Take this famous example from 1954, when Marian Keech, a charismatic housewife from Chicago, proclaimed that she had been in touch with an alien civilisation that had informed her the world would come to an end on 21st December of that year. Friends, relatives and many others believed her assertion that only true believers would be saved by this alien force. Of course, 21st December came and went, but did the believers lose faith in Marian and her story? Not a bit of it. If anything, their faith became even stronger. They redefined the prophecy, claimed that their faith had convinced the alien force to give Earth a second chance, and went on to recruit even more believers. 

The lesson from this example is that rather than interpreting the evidence for what it is and realising we got it wrong, we are more likely to reframe the evidence, which allows us to come up with new interpretations and clarifications. Sadly, this happens all too often, and it probably happens to all of us to some extent. It seems we are all hardwired to see the world as simpler than it really is. 

This latter point was brought home to me the other day when I read ‘All Out War’ by Tim Shipman, which tells the story of how and why Britain ended up voting 52%-48% in the Brexit referendum. As I was reading the book, it all felt so obvious, so logical. All these interrelated events, decisions and mishaps seemed to lead so inexorably to the UK leaving the European Union. Why didn’t anyone do something as these events happened?

Apparently, there is a term for this too: the narrative fallacy. Identified by the philosopher Nassim Nicholas Taleb, it refers to our tendency to invent simplified stories after the fact to make sense of the events we have just witnessed. This is far from uncommon: just watch Match of the Day on a Saturday night to see pundits very clearly explain what just happened on the football pitch and why, and what the manager should have done differently. Or witness the average political or economic commentator on Newsnight. 

Which raises the question: if it is so easy to explain, why didn’t the people in charge foresee the problems in the first place and prevent them? The fallacy lies in the fact that whatever is presented is hugely simplified and benefits from a generous dose of hindsight. 

OK, so no wonder, then, that we fall so easily into the trap of proposing big bold ideas, ignoring the evidence of our plans failing, and presenting a good story afterwards to explain that what happened was actually a good thing. But can this be avoided? 

The main lessons here are to be learnt from nature, science and industry. As we all know, change in nature happens gradually, through evolution and adaptation. Sometimes a meteor strikes and the system incurs a shock, but this is the exception that proves the rule. There are plenty of examples of industry following a similar route. James Dyson famously developed more than 5,000 prototypes before his cyclone technology was good enough to become a mass-produced and hugely successful vacuum cleaner. Trial and error, defining hypotheses and measuring results to evidence or disprove them, is at the heart of many an industrial innovation. In more recent times it has found its way into Google’s A/B testing, in which different versions of a web page are let loose on the public to see which works better in practice (and at scale). 
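For readers who like to see the mechanics, here is a minimal sketch of the A/B-testing idea in Python. It assumes nothing about how Google actually runs its experiments: the visitor numbers, conversion counts and the `two_proportion_z_test` helper are all hypothetical, and the calculation shown is simply a standard two-proportion z-test for whether the observed difference between two versions is bigger than chance alone would explain.

```python
# A minimal, hypothetical sketch of an A/B test: two page versions, counted
# conversions, and a check of whether the difference looks like more than noise.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under "no real difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return p_a, p_b, z, p_value

# Made-up results: version A and version B each shown to 10,000 visitors.
p_a, p_b, z, p = two_proportion_z_test(conv_a=520, n_a=10_000, conv_b=460, n_b=10_000)
print(f"A: {p_a:.1%}, B: {p_b:.1%}, z = {z:.2f}, p = {p:.3f}")
# A small p-value (say below 0.05) suggests the difference is unlikely to be chance.
```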

As I write this blog in the week that the first coronavirus vaccinations are taking place, it seems only fitting to also shine a light on the scientific practice of randomised controlled trials to identify what really works. This is the process in which two groups of people are treated for the same condition: one with the real drug and one with a placebo. Neither patients nor researchers know who gets what, so any significant difference in outcomes between the two groups is unambiguous evidence of efficacy. 
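By way of illustration only, the toy sketch below mimics the random allocation at the heart of such a trial. The participant names, group sizes and seed are all invented; the point is simply that chance, not the researcher, decides who ends up in which arm.

```python
# A toy illustration of the random allocation behind a randomised controlled trial.
# Everything here (names, group size, seed) is made up for the sketch.
import random

random.seed(2020)  # fixed seed so the illustration is reproducible

participants = [f"participant_{i:03d}" for i in range(200)]
random.shuffle(participants)                      # the randomisation step
drug_arm, placebo_arm = participants[:100], participants[100:]

# In a double-blind trial the allocation sits behind neutral codes, so neither
# patients nor researchers know who is in which arm until the trial ends.
allocation = {p: code for code, arm in (("A", drug_arm), ("B", placebo_arm)) for p in arm}

print(len(drug_arm), "participants on the drug,", len(placebo_arm), "on the placebo")
```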

So how does any of this help when considering the decisions about organisational change or public policy we were talking about earlier? Well, although I am the first to admit that I can still get quite excited by big bold ideas, I am also more aware of the need to break them down into small workable parts that can be safely tried and tested. Rather than going for the big bang, I believe this type of change is more likely to be successful and ‘sticky’, especially when produced bottom-up, in an incremental way, and with the blessing and ownership of the people who will have to live with the consequences. Trying things out, measuring results against a control group, and going from there: only then are we likely to avoid reframing and spin, or losing the real impact of a measure in the complexities of real life. 

There will always be moments when a randomised controlled trial is not an option (a marriage proposal springs to mind) and a leap of faith is called for. But in most other cases we should be able to ‘do change’ much better. With a wealth of data, knowledge and experience, coming up with an informed answer to that ‘How hard can it be?’ question should then not be that hard.


