This week a breaking story in the UK focused on how unemployed jobseekers are being forced to complete bogus psychometric tests designed by the government’s Behavioural Insights Team (commonly known as the “nudge” unit). The story raises important issues for ethical experimentation that are highly pertinent to aid efforts.
The Guardian reported the story as follows:
The test called My Strength… has been exposed by bloggers as a sham with results having no relation to the answers given. Some of the 48 statements on the DWP test include: “I never go out of my way to visit museums,” and: “I have not created anything of beauty in the last year.” People are asked to grade their answers from “very much like me” to “very much unlike me”. When those being tested complete the official online questionnaire, they are assigned a set of five positive “strengths” including “love of learning” and “curiosity” and “originality”. However, those taking the supposed psychological survey have found that by clicking on the same answer repeatedly, users will get the same set of personality results as those entering a completely opposite set of answers.
The Behavioural Insights Team, meanwhile, argues that its intervention was based on sound evidence and good intentions, and produced decent results. The latter include the finding from randomised controlled trials (RCTs) that the survey led to ‘building psychological resilience and wellbeing for those who are still claiming after 8 weeks through “expressive writing” and strengths identification’.
For many critics, however, any positive effects of the exercise were undermined by the fact that jobseekers were warned that the survey was compulsory and that failing to complete it would lead to their allowances being curtailed. Instead of building wellbeing, the exercise simply gave the unemployed something else to worry about.
Clearly, there are some fundamental ethical problems with the way that this whole effort was designed and implemented. And of course, this is not unique to ‘nudge’ efforts, but extends to all kinds of social policy interventions. But the experimental approach of nudge interventions does open up a range of ethical quandaries that we need to be looking at more closely.
What the admirable efforts of the UK blogger community highlight for me is that aid recipients in developed countries do at least have some means of addressing their grievances about such experimental processes – even if (as in this case) those means are indirect and work through informal rather than formal channels of accountability.
However, the poor in developing countries have few such channels for voicing their grievances and issues. As one statistician put it back in 2010 in a review of RCTs:
In conducting research with people, the need for guidance and adherence to ethical standards is of the utmost importance. Most areas of research involving human subjects have compulsory or voluntary codes of conduct and ethical rules, and many countries have strict processes in place to ensure that ethical standards are met by any research involving human experimental units. There seems to be a gap, however, in research that involves human subjects carried out in the context of international development. We do not have a system of checks and balances that ensures adherence to high ethical standards. This may be because the jurisdiction of research committees does not extend to the areas where some of this research is conducted.
When RCTs are proposed for impact evaluation, the issue of consent from participants is not discussed. Telling a group of people that they will be included in an experiment, but not implementing a development intervention that might benefit them, is something that most people working in international development would find difficult.