When does crowdsourcing work best? New research from the Institute for Human Development provides answers that may be relevant to aid projects and programmes.
There has been a lot written, spoken and blogged about the power of crowds in making decisions. In James Surowiecki’s bestselling Wisdom of Crowds, published in 2004, the central thesis was that diverse groups are likely to make certain types of decisions and predictions better than individuals – even those with specialist expertise. As Surowiecki noted:
…under the right circumstances, groups are remarkably intelligent, and are often smarter than the smartest people in them.
The six years since the Wisdom of Crowds was published have seen the rise and rise of online social networking and related technologies. Social media and the power of the crowd have been at the heart of everything from political resistance movements to presidential elections (and indeed, resistance movements following presidential elections). The term crowdsourcing was coined in 2006 to describe an organisational approach that harnesses the creative solutions of a distributed network of individuals. As one of the originators put it:
Simply defined, crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the large network of potential laborers.
There is a growing – some would say evangelistic – enthusiasm for crowdsourcing as the answer to a whole range of problems. Just a few initiatives off the top of my head: fundraising for socially responsible films, the development of transit planning in urban areas, combating corruption, creating markets for innovations, expanding scientific peer review processes. A quick Google search illustrates just how expansive this agenda is.
The potential for crowdsourcing to contribute to international aid has also attracted a lot of attention, with perhaps the most prominent example being the role of new innovative technologies in the aftermath of disasters. The following is a typical example of the arguments made by the ‘pro-crowd’ camp:
The rapid proliferation of broadband, wireless and cell phones, coupled with new crowdsourcing technology, is completely changing the face of disaster relief. Everyone with a computer can provide crucial assistance, sifting through satellite photos, translating messages or updating maps, and most people are happy to do this free of charge — contributing to life-saving relief efforts is a powerful motivator… At a fraction of the cost of most relief budgets, crowdsourcing can solve coordination problems on the ground.
As many readers will be aware, crowdsourcing in disaster responses has been the focus of a passionate, sometimes vehement, and at times rather distracting debate.
My intention isn’t to retread ground that has already been well covered – and occasionally angrily stamped on – elsewhere. Instead, I want to explore evidence that tries to explain – following Surowiecki – the specific conditions under which a crowd is effective. Does recent research on decision-making yield any lessons or ideas worth a closer look?
Certainly, some of the crowdsourcing argument is borne out by the evidence. Research from numerous disciplines – anthropology, cognitive psychology, evolutionary biology – suggests that collective decision making can help group members cope more effectively with unfamiliar contexts, and it is almost a cliché to say that humanitarian disasters are the archetypal unfamiliar context. However, reviews of this literature suggest many of these studies lack testable, well-structured concepts and hypotheses to explain exactly what collective decision making involves when compared to other kinds of decision making. They also often fail to examine the implications of different kinds of decision-making processes for the accuracy of decisions. These issues echo the challenges that have been put to the crowdsourcing community.
One recent exception to the above is simulation-based research that has been undertaken by analysts at the Institute for Human Development in Berlin. This work looks at a range of decision making processes, and suggests that there are two distinct ways in which groups can work to provide solutions to a problem.
First, individuals can follow specific ‘leaders’ in the crowd. This usually means drawing on those experts with information particularly relevant to the decision at hand. This is comparable to the typical aid decision-making process.
Second, crowds can work to aggregate information from their members, which is then made available to the crowd itself or to a third party. This enables decision making to be enhanced through ‘collective cognition’, a concept that underpins many of the arguments for crowdsourcing. This collective cognition can be an unconscious emergent property, or it might be facilitated consciously through network interactions within the crowd.
The Institute’s work suggests a number of findings which are pertinent to the aid crowdsourcing debates:
- a number of conditions influence whether groups use ‘follow an expert’ or ‘wisdom of the crowd’ strategies. Specifically, the researchers found that the diversity of the group, the quality of individual information and the size of the group all had a bearing on which approach is chosen;
- in so-called single-shot decisions, experts are almost always more accurate than the collective across a range of conditions. However, for repeated decisions – where individuals are able to consider the success of previous decision outcomes – the collective’s aggregated information is almost always superior;
- regardless of the decision-making approach taken, groups must be able to acquire information through social interaction, respond positively to those who possess pertinent information, and update their approaches based on the success of previous decisions;
- ephemeral and unstable social groups that make collective decisions only occasionally tend to follow the most informed individual, whereas stable social groups that encounter repeated decision points would do well to use some information-aggregating process.
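The intuition behind these findings can be sketched in a toy simulation. To be clear, this is my own illustrative model, not the Institute’s actual one: all the parameters (group size, skill levels, number of rounds) are assumptions. A group repeatedly estimates an unknown quantity; the ‘follow an expert’ strategy backs whichever member has the best track record so far, while the ‘wisdom of the crowd’ strategy aggregates everyone’s guesses, weighted by their past accuracy – the feedback loop the research points to.

```python
import random
import statistics

# Toy sketch of the two strategies -- NOT the Institute's actual model.
# All parameters (skill spread, group size, rounds) are illustrative.
random.seed(1)

TRUE_VALUE = 100.0   # the unknown quantity the group must estimate
N = 30               # group size (assumed)
ROUNDS = 100         # repeated decision points (assumed)

# Hypothetical skill spread: one genuine expert (low noise), the rest noisier.
sds = [4.0] + [random.uniform(10.0, 30.0) for _ in range(N - 1)]
random.shuffle(sds)  # nobody knows in advance who the expert is

track_record = [1e-6] * N  # accumulated absolute error per member
follow_errs, aggregate_errs = [], []

for _ in range(ROUNDS):
    guesses = [random.gauss(TRUE_VALUE, sd) for sd in sds]

    # Strategy 1, 'follow an expert': back the member with the best
    # record so far. Early on the record is uninformative.
    best = min(range(N), key=lambda i: track_record[i])
    follow_errs.append(abs(guesses[best] - TRUE_VALUE))

    # Strategy 2, 'wisdom of the crowd' with feedback: aggregate all
    # guesses, weighting each member by inverse cumulative error.
    weights = [1.0 / e for e in track_record]
    agg = sum(w * g for w, g in zip(weights, guesses)) / sum(weights)
    aggregate_errs.append(abs(agg - TRUE_VALUE))

    # Feedback loop: update every member's record with this outcome.
    for i, g in enumerate(guesses):
        track_record[i] += abs(g - TRUE_VALUE)

print(f"follow-the-expert mean abs error:  {statistics.mean(follow_errs):.2f}")
print(f"weighted-aggregate mean abs error: {statistics.mean(aggregate_errs):.2f}")
```

In a single round neither strategy has any feedback to work with, which is where a known expert would have the edge; over repeated rounds the weighted aggregate sharpens as the track records accumulate – a crude analogue of the repeated-decision finding above.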
At the risk of over-generalising, the above suggests an emerging hypothesis – that for many simple or complicated issues where only one attempt is needed – ‘puzzles’ or ‘problems’, as a previous Aid on the Edge post put it – there is potential for experts to outperform crowds. The best illustration is to point out all those problems Malcolm Gladwell covered in Blink – detecting whether a work of art was a fake, whether a teenager was carrying a gun, or whether a fire would cause a building to collapse.
In complex problems that require ‘multiple shots’, crowds can help augment expert perspectives by developing emergent solutions to evolving problems. The processes of information aggregation, transparent decision-making and effective feedback loops are essential here – all concepts which will be familiar to those interested in complex systems thinking.
Although the research is narrow, preliminary and based mostly on theoretical simulations, the Institute’s work does point towards a more structured way of understanding the limits and possibilities of crowdsourcing. As such, it could be a constructive way to start to navigate some of the entrenched debates we have seen to date. Ultimately the research suggests that we shouldn’t be asking ‘does crowdsourcing work or not?’, but rather ‘when does it work, why, how, and with what benefits?’
This is not to say the answers will always be clear-cut or unambiguous, but asking the right questions will surely get us closer.
Now all we need is for some aid researchers to pick these concepts and questions up and run with them.
Or maybe an aid crowd would be better?