The international development sector has been in a tug of war around the ‘results agenda’ for the past few months. This post explores the tensions and suggests a way to bring the sides together by focusing on the relevance and appropriateness of different approaches.*
I: The Results Tug of War
The question of development results is one of many areas where discussion and debate seem increasingly polarised. On one side of the results tug of war are those calling for more and better results, more rigour in analysis and more discipline in reporting. The failure of development, they argue, is basically a failure to focus on results. ‘Modern management techniques’, especially those embodied in ‘results-based management’, are seen as the answer.
On the other side are those who argue for a ‘push back’ against this approach. They see such reductionist approaches as suitable only for certain kinds of development interventions, and argue that, at their worst, these approaches inhibit the creativity and innovation needed to achieve results in the first place. The danger here is that we throw out the results baby with the reductionist bathwater (see here for a previous Aid on the Edge post on this).
What is increasingly evident is that, in the diverse and dynamic aid landscape we face today, all agencies attempting to genuinely strengthen accountability and learning face a number of common challenges. This is a preliminary list, I am sure readers will be able to think of more.
- Data availability, coverage and quality are perennial problems
- Participation and ownership – as Robert Chambers might ask: ‘whose results count?’
- Incentives and disincentives to use information and results, especially when they run counter to individual and institutional interests
- Bureaucratic inertia: all too often results-related work is placed on top of and increases the already considerable bureaucratic and administrative burden on aid agencies, rather than simplifying and reducing it
- Risks and fear of failure: how can we manage and be transparent about the different kinds of risk and failure inherent to development projects and programmes?
- Many conflicting imperatives: learning vs accountability, policy vs operations, domestic vs international
The key point is that these apply equally to both sides of the results tug of war. As a result, a lot of effort is being wasted, with problems being dealt with in entrenched intellectual silos rather than in a collective manner.
So what to do to move beyond the ‘tug of war’? I would argue that a first step would be to think about how to bring the different results approaches together to establish a more constructive dialogue. What is needed is a more flexible and differentiated approach to results, one which takes account of the diversity of the development and humanitarian portfolio.
II: A Draft ‘Portfolio of Results’ Framework
What might such a portfolio-based approach look like? There are a number of useful approaches from academia, civil society and business strategy that can help here. These include Brenda Zimmerman’s simple-complicated-complex distinction, the Cynefin framework of Cognitive Edge, work done by Alnoor Ebrahim at Harvard University, work done by Eliot Stern on relevance of different approaches to impact assessment and finally a recent model put forward by Patrick Moriarty of IRC.
All of these suggest in their different ways that appropriate strategic approaches (and by extension, results approaches) need to be based on:
(a) the nature of the intervention we are looking at, and
(b) the context in which it is being delivered.
Reading across these approaches we can suggest a preliminary framework which may prove useful in bringing together different results approaches in a productive and mutually beneficial way.
First, imagine an agency’s projects and programmes being distributed across a spectrum of the ‘nature of interventions’, placing relatively simple interventions at one end and more complex ones at the other.

Then let’s add a vertical axis on context. Again, think of a spectrum, this time from stable / identical to dynamic / diverse.
This gives us a 2 by 2 framework for analysing and mapping different development interventions – in effect, this is a draft ‘portfolio of results’ framework. Where an intervention is positioned on this framework has implications for the kinds of results orientation we can take, as shown below.
In the top left corner of simple interventions in identical, stable settings is the Plan and Control zone – here ‘traditional’ results-based management approaches, conventional value for money analyses and randomised control trials work well.

The bottom right corner of complex interventions in diverse, dynamic settings is what I have termed Managing Turbulence – here the philosophy is less ‘Ready, Aim, Fire’ (as in the Plan and Control zone) and more ‘Fire, Ready, Aim’. Here we need to learn from the work of professional crisis managers, the military and others working in dynamic and fluid contexts.
In between is what I have called Adaptive Management, where either because of the nature of the intervention or the nature of the context, multiple parallel experiments need to be undertaken, with real-time learning to check their relative effectiveness, scaling up those that work and scaling down those that don’t.
III: Applying a Portfolio of Results Approach: A health-focused illustration
By way of illustration, let’s look at three health interventions – vaccines, HIV/AIDS, and rebuilding national health systems. I would argue that they could be distributed on the matrix something like this.
So if we are looking at simple interventions in a stable / identical environment, or what might be called the plan and control domain, randomised control trials, traditional cost-based ‘value for money’ analyses and results-based management approaches work well. Vaccines are perhaps the best example here. And as the ongoing MSF campaign on reforming GAVI suggests, a focus on numbers and bean-counting can be of vital importance to ensuring effectiveness.
But we may find ourselves managing interventions that are more complex, in stable contexts. We can also think about situations where the intervention is simple but the context is dynamic. In both of these cases we may need to move away from blueprints towards a more adaptive management approach: trying out multiple parallel experiments, monitoring progress and rates of success, and adapting to context. In HIV/AIDS responses, the optimal mix of responses is key and almost always locally determined (see previous Aid on the Edge post here). Also increasingly relevant are global malaria responses, which need to adapt to changing patterns of incidence and the evolution of resistance (ditto here).
Finally, in environments where our interventions are complex and the context is dynamic and diverse, we have to take a leaf out of the book of those who work in high risk environments – professional crisis managers, military and so on. Programmes to rebuild health systems, especially in fragile states, are a good example here. Here we need to be doing action research, real-time assessments and learning by doing.
This is not a rigid framework and there is overlap between the different areas. But different approaches to results can be shown to be more or less effective in different domains. In general terms, you can do a detailed RCT in the bottom right quadrant, but it may be a thankless task and not the best use of resources. You can do an RCT in the top right quadrant, but it could well prove to be a necessary but not sufficient condition for success. And so on.
(This also helps think about the concerns of one side of the tug of war – that there is a pressure to push development to the top left domain, and a widespread misapplication of the top-left tools for the other domains.)
Obviously this is a preliminary framework based on reflection and discussion, and is open to critique and debate. The key principle is that a more nuanced approach to results will have to be based on a systematic assessment of, at a minimum, our interventions and the context we are working within.
IV: Taking the Results 2.0 agenda forward
This kind of framework can also be used to think strategically about our overall portfolio of projects and programmes. How is our overall spend allocated between these ‘domains’? What are the implications for risk? I think there is a useful analogy with how investment portfolio managers diversify their portfolios in order to reduce their exposure (see diagram below).
We urgently need to develop new ways of analysing the different elements of our portfolio. Through this we can start to unpack and understand the diversity of our efforts, and ensure we don’t take a ‘one-size-fits-all’ approach to results and all that entails.
There are a number of follow-on issues about how we might take this area of work forward.
- We will need to refine or adjust the draft ‘portfolio of results’ framework, based on more in-depth analysis, discussion and debate. Of course, we may need something completely different to what is proposed here (all feedback, however critical, is warmly welcomed!), but the key is that we need something to bring diverse constituencies and approaches together.
- We need to think about which sectors are amenable to a portfolio-type approach to results, where we can pilot a ‘Results 2.0’ process, and what new kinds of tools and methods might be required. I think health would be a great sector to start on.
- Different kinds of interventions will need different kinds of information, which will call for different tools for managing this information. New kinds of tools and techniques will be necessary. Importantly, these should help to consolidate and simplify, rather than just increase, the reporting and administrative burden on the sector.
- We urgently need to think about how this affects development communications, and how we can start to develop more sophisticated framing and messaging of positive and negative results, based on the different elements of our portfolio. This will perhaps be the hardest part of this new results agenda, as it means that we will have to tell our key stakeholders things like ‘we don’t know’, or even worse, ‘we failed’. This may mean rolling with the punches in the short term. But it will also mean thinking hard about what different stakeholders’ expectations are, and how they can best be met. The overall legitimacy and sustainability of such efforts demands greater involvement of national governments, civil society and poor communities.
I want to close with this thought from a cross-country study of results-based management looking at Western countries – that results are not an end in themselves, but are a means by which to establish trust in the system. I would add: and within the system.
Because we do so many different things in development, we have to do different things to earn the trust of our diverse constituencies. (We may also have to accept that in some quarters, trust will never be established, but that is another story.) What we cannot do is move forward without finding ways of trusting each other, whatever our methodological or conceptual backgrounds and prejudices.
Bringing our diverse opinions and ideas together to test their relevance and appropriateness seems like an essential first step.
* This is the summary of a talk I gave at the June 2011 IDS-ODI roundtable on results with the UK Secretary of State Andrew Mitchell, revised following useful comments from participants. Special thanks go to Robert Chambers and Simon Maxwell for thoughtful and constructive feedback.
Fellow participants have also blogged on the meeting: