“Bias is an inescapable element of research, especially in fields… that strive to isolate cause–effect relations in complex systems in which relevant variables and phenomena can never be fully identified or characterized…”
The field Sarewitz is writing about here is biomedicine, but he could easily be describing development or humanitarian work. The fundamental problem, as he sees it, is that biases are not random but systemic: “if biases were random, then multiple studies ought to converge on truth [but] evidence is mounting that biases are not random.”
This claim is not new, of course. As the piece argues, systematic positive bias was identified in clinical trials funded by the pharmaceutical industry back in the mid-1990s. More recently, reviews of so-called ‘landmark’ studies in fields such as cancer research have shown that positive results could only be replicated in a minority of cases.
However, these previous assessments tended to assume that the problem was not with science per se, but rather with those forces that sought to co-opt it: industry, government, special interests, and so on. Reduce the influence of these interests, the argument went, and you would eradicate such biases.
But it is now emerging that there are some serious underlying problems within science itself. The cases are wide-ranging across biomedicine: “evidence of systematic positive bias [is] turning up in research ranging from basic to clinical, and on subjects ranging from genetic disease markers to testing of traditional Chinese medical practices.”
The two major faultlines, according to Sarewitz, are the methodological narrowness of the approaches employed to generate evidence, and the culture and incentives of scientists and science funders.
The first one is pertinent for readers of this blog. Researchers seek to reduce bias “through tightly controlled experimental investigations. In doing so, however, they are also moving farther away from the real-world complexity in which scientific results must be applied to solve problems.” Ironically, “the canonical tenets of ‘scientific excellence'” are threatening to undermine the whole enterprise. One rather shocking (for me, at least) example relates to the latest developments in research on mice, where a lot of resources and funds have been poured into the cloning of genetically identical animals, in order to enable fully controlled, replicable experiments and rigorous hypothesis-testing. Any sense of moral repugnance aside, perhaps the worst thing about this endeavour is that the findings of the research subsequently undertaken have turned out to be useless when applied in the real world.
Sarewitz also writes about the lack of incentives to ‘report negative results, replicate experiments or recognize inconsistencies, ambiguities and uncertainties’. There are also challenges around the various cultural and attitudinal positions taken toward science among funders, scientists, the media and the public at large. Sound familiar?
It should – such issues are not a problem for biomedicine alone:
[they are] likely to be prevalent in any field that seeks to predict the behaviour of complex systems — economics, ecology, environmental science, epidemiology and so on. The cracks will be there, they are just harder to spot because it is harder to test research results through direct technological applications… and straightforward indicators of desired outcomes…
Sarewitz closes with one potential solution, which may also be of relevance for work in development and humanitarian fields:
Scientists rightly extol the capacity of research to self-correct. But the lesson coming from biomedicine is that this self-correction depends… on the close ties between science and its application that allow society to push back against biased and useless results.
So what can we in the aid sector do about such bias, if indeed it is present in our work?
The first idea is the one that Sarewitz suggests: “societal push back”. Sadly, despite the rhetoric and growing practice of participation, the scope for Southern stakeholders – especially aid recipients – to ‘push back’ against useless results in development and humanitarian research is still severely limited. This doesn’t mean we should stop the effort, however, and perhaps new technologies and feedback processes can help us here.
The second strategy might be to address the incentives and cultures that perpetuate such biases. But we seem to be far too preoccupied with the incentives and motivations of developing country actors to examine those within our own organisations. As one participant at a recent ODI event put it: “why do we always say that developing country leaders have mixed motives at best whereas the motives of donors [and other aid actors] are always considered impeccable?” We should find a way to ensure that these aid “physicians” first heal themselves.
The final course of action is to try to expand and adapt the concepts and models used in our work. This effort (of which this blog is one small part) is still very much a work-in-progress, but the growing interest among researchers and practitioners should give us some small cause for hope. After all, the key to paradigm shifts in science – and in other fields – is not just logical argument and experimental proof. In the words of Thomas Kuhn:
“as in political revolutions, so in paradigm choice—there is no standard higher than the assent of the relevant community.”