“The fox knows many things, but the hedgehog knows one big thing.” ~ Archilochus
In 2005, Philip Tetlock published a widely acclaimed book, “Expert Political Judgment: How Good Is It? How Can We Know?”, which presented the findings of a study of a diverse group of almost 300 individuals, examining their decision-making processes over a number of years. The group was made up of high-profile, influential experts. And the findings were rather damning. As one review summarises it:
These are people who appear as experts on television, get quoted in newspaper articles, advise governments and businesses, and participate in punditry roundtables. And they are no better than the rest of us. When they’re wrong, they’re rarely held accountable, and they rarely admit it, either. They insist that they were just off on timing, or blindsided by an improbable event, or almost right, or wrong for the right reasons. They have the same repertoire of self-justifications that everyone has, and are no more inclined than anyone else to revise their beliefs about the way the world works, or ought to work, just because they made a mistake.
Tetlock studied these experts over almost two decades, reviewing over 80,000 judgements made across the group. He continuously asked questions about how the experts reached their judgements, how they responded to new contradictory information, how they thought about rival perspectives, and how often they changed their minds when their decisions were proved to be wrong. Following the philosopher Isaiah Berlin’s famous distinction, he identified two broad categories of thinkers, ‘foxes’ and ‘hedgehogs’ (see a 2013 Why Dev post on Berlin’s distinction and its relevance to development workers). Interestingly, Tetlock didn’t start off with this classification – it emerged from his data.
Hedgehog thinkers “know one big thing” and tend to extend the explanatory reach of that one big thing into new domains. They “relate everything to a single central vision. …in terms of which all that they say has significance.” They often oversimplify, and don’t use diverse data sources. Tellingly, Tetlock found that “the accuracy of an expert’s predictions actually has an inverse relationship to his or her self-confidence, renown, and, beyond a certain point, depth of knowledge.”
“Ironically, the more famous the expert, the less accurate his or her predictions tended to be. The less successful forecasters tended to have one big, beautiful idea that they loved to stretch, sometimes to the breaking point. They tended to be articulate and very persuasive as to why their idea explained everything… they are more entertaining… The media loves them… Experts in demand were more overconfident than their colleagues who eked out existences far from the limelight…”
(Would it be a cheap shot to pause at this point, and ask if any specific development thinkers come to readers’ minds?)
By contrast, foxes are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hoc-ery” that require stitching together diverse sources of information, and are rather more modest about their own forecasting and decision-making prowess. They emphasise instead learning by doing, and “pursue many ends, often unrelated and even contradictory….entertain ideas that are centrifugal rather than centripetal;…..without seeking to fit them into, or exclude them from, any one all-embracing inner vision.”
In his analysis of how these different styles mapped onto effective decision making, Tetlock was able to draw a number of fascinating conclusions:
- that there was no significant correlation between how experts think and what their politics are. Both hedgehogs and foxes were liberal as well as conservative – although hedgehogs were more likely to be at the political extremes, right or left
- that over-simplification was one of the gravest errors: “experts go wrong when they try to fit simple models to complex situations.”
- that hedgehogs performed worse in areas in which they specialized, while foxes actually enjoyed a modest benefit from expertise
- that hedgehogs routinely over-predicted, but that when they were right, they were often ‘spectacularly right’
- that overall the foxes outstripped the hedgehogs in terms of their decision-making accuracy and effectiveness: “The better [forecasters] were eclectic thinkers who were willing to update their beliefs when faced with contrary evidence, were doubtful of grand schemes and were rather modest about their predictive ability.”
- that the best decisions and forecasts were made by those who used a variety of formal ‘aggregating models’ to clarify and simulate the problems they faced: “whereas the best humans were hard-pressed to predict more than 20 percent of the total variability in outcomes… models explained on average 47 percent of the variance…”
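To make the idea of an ‘aggregating model’ concrete, here is a minimal, hypothetical sketch (not Tetlock’s actual models, and the numbers are simulated, not his data): it illustrates the general intuition that combining several noisy probability forecasts often beats even the best single forecaster, because individual biases and noise partly cancel out.

```python
# Hypothetical illustration of forecast aggregation, using only the
# standard library. All experts, biases, and noise levels are invented
# for the example; they do not come from Tetlock's study.
import random

random.seed(42)

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes
    (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Simulate 1,000 events, each with a true underlying probability,
# and the binary outcomes those probabilities generate.
events = [random.random() for _ in range(1000)]
outcomes = [1 if random.random() < p else 0 for p in events]

def noisy_expert(bias):
    # Each expert sees the true probability through individual bias and noise,
    # clipped to the valid [0, 1] range.
    return [min(1.0, max(0.0, p + random.gauss(bias, 0.2))) for p in events]

experts = [noisy_expert(b) for b in (-0.1, 0.0, 0.1)]

# A very simple aggregating model: average the experts' forecasts per event.
aggregate = [sum(fs) / len(fs) for fs in zip(*experts)]

individual_scores = [brier_score(e, outcomes) for e in experts]
print("best individual Brier score:", round(min(individual_scores), 3))
print("aggregate Brier score:     ", round(brier_score(aggregate, outcomes), 3))
```

Even this crude averaging typically scores better than any single simulated expert; real aggregating models (weighted combinations, statistical ensembles) push the same logic further, which is consistent with the gap between human and model performance quoted above.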
Relevance and ‘so whats’ for development? Three things stand out for me.
First, that there is an obvious tendency for hedgehogs to dominate the upper echelons of the development system, both intellectually and politically. What are the implications? Should we be worried?
Second, that the pathway to better decisions – which, as I read it, is to be humble, eclectic, and to use better models – very much resonates with the key tenets of applying complex adaptive systems thinking to development.
Third, that we are seldom explicit about the way in which we make decisions within the aid system: so many things are so very opaque. More transparency in aid shouldn’t solely be in the form of data, tables and figures – though of course these are important – but also in justifications and explanations. As Tetlock himself puts it: “we as a society would be better off if participants in policy debates stated their beliefs in testable forms… monitored [their] performance, and honored their reputational bets.”