Update 01/02/2011: the first evaluation cited is specific to UN agencies, the second to donors. Have also clarified the specific references in the UN report. Thanks to Michael Keizer for pointing this out.

One of the recurring themes of this blog is the idea that aid agencies need to become more flexible and responsive, both to changing contexts and to the needs of developing countries. Unfortunately the ongoing focus on ‘results’ – as it is currently being shaped and implemented – seems to be in direct conflict with this idea. Moreover, the repeated criticisms that have been made of the ‘results agenda’ don’t seem to affect its influence or its broad approach. How are we to move beyond this situation?

There is an engineering theory that underlies many ‘modern management’ approaches, which involves taking a “reductionist approach to problem solving.” The aim is to decompose a complicated problem into more manageable and well-defined sub-problems that can be separated and dealt with as discrete issues.

The benefits of this reductionist approach are that it is methodical, conceptually intuitive, and it allows any given operating environment to be defined with precision and accuracy. It supports transparency, it benefits hierarchical decision-making, it aids trouble-shooting, and it facilitates planning and control.

It is also, in quite a few cases, complete nonsense.

Even in the best case scenario, such approaches can only be followed loosely because real-world systems cannot be divided up and controlled in neat and tidy ways. The mismatches between reality and the managerial assumptions manifest themselves in numerous ways. Two are especially relevant to current debates on improving aid:

  • First, when failure occurs, where it is acknowledged, it becomes common and logical to highlight the precise points in the linear chain where the failure occurred. The implicit assumption is that through such precise narratives and their subsequent take-up, future failures can be avoided. Those with access to these narratives just need to make the necessary modifications during design or planning stages. Failures that indicate wider systemic issues may be highlighted from time to time. But they are usually found to be too challenging to address in the context of ongoing work, and are therefore dismissed as ‘just not practical’.
  • Second, when faced with uncertainty, the value of standard operating procedures and plans tends to diminish considerably. But the need for pre-defined purposes and actions makes design principles such as ‘be adaptable to novel conditions’ seem ambiguous and vague. The common way around this is to develop contingencies and build these into strategies and plans. This involves the paradoxical attempt to pre-define the kinds of responses that will be followed when faced with unanticipated changes. Such contingencies are next to useless because they usually do not allow for re-orientation of resources around these changed realities. They are also increasingly meaningless in a world characterised by dynamic, turbulent change.

Such reductionist approaches are hard-wired into the international aid system. We see them manifested in the form of logical frameworks, results-based management approaches, and the more recent interest in value-for-money. The Paris Declaration for aid effectiveness and related documents repeatedly state the importance of ‘modern management’ techniques, essentially bringing New Public Management (with all its ails) to bear on issues of global poverty and crises.

There is a lot of anecdotal evidence about these approaches – ranging from vaguely negative to downright hysterical. Even some of the major donors who support them have admitted: ‘we don’t pretend these things describe reality’. But what systematic evidence is there on how well these approaches are doing? Do they show aid agencies navigating the challenges outlined above, finding ways to make these approaches relevant to a dynamic and interconnected world? Are these results-based approaches succeeding on their own terms?

Sadly, it appears not. An evaluation of results-based management approaches among UN agencies identifies a range of concerns with the approach at both conceptual and practical levels. It found that:

  • the formalistic approach to codifying how to achieve outcomes, inherent to RBM, can stifle the innovation and flexibility required to achieve those outcomes
  • results-based management as implemented across the UN takes no account of the fact that outcomes are influenced by multiple actors and external risk factors
  • results-based management processes have been found to have made virtually no contribution to strategic decisions in any of the reviewed organisations
  • the determination of development success does not lend itself to impartial, transparent and precise measurement
  • many of the results planned for have been expressed in a self-serving manner, lack credible methods for verification, and involve reporting based on subjective judgement
  • although aspirational results are used to justify approval of budgets, the actual attainment or non-attainment of results is of no discernible consequence to subsequent resource allocation or other decision-making.

It concluded damningly that “…RBM in the United Nations has been an administrative chore of little value to accountability and decision-making…”

It doesn’t stop there. Andrew Natsios, former head of USAID, has argued convincingly that measurability of the kind propagated by existing RBM systems is inversely proportional to development relevance. And an earlier report by the OECD-DAC on RBM in donors found that:

    there are dangers in designing performance measurement systems too much from the top-down. Unless there is a sense of ownership or “buy-in” by project/program management and partners, the performance data are unlikely to be used in operational decision-making. Moreover, imposed, top-down systems may lack relevance to actual project/program results, may not sufficiently capture their diversity, and may even lead to program distortions as managers try to do what is measurable rather than what is best. Field managers need some autonomy if they are going to manage-for-results. Some operational level flexibility is needed for defining, measuring, reporting, and using results data that are appropriate to the specific project/program and to its country setting.

The abject failures outlined here do not appear to have been a major cause for concern for those promoting and supporting such approaches. As the UN review cited above glumly concluded, despite these conceptual and practical shortcomings, the RBM agenda is ‘here to stay’. Ironically, results-based management and its accompanying top-down control processes do not themselves appear to need to show results in order to be championed and implemented with ever-greater enthusiasm.

This supports the findings of social complexity thinkers, who have found that when faced with ‘wicked problem’-style challenges to the engineering mindset, the predominant tendency is to try to apply the existing mindset more firmly: “to stay safe within the existing cognitive framework and try to find a solution within it.”

The danger is that being wedded to a particular approach in the face of repeated failure risks the overall legitimacy and relevance of what is being done. This problem extends beyond management to scientific thought.

In his famous work on paradigms in the natural sciences, Thomas Kuhn noted dryly: “the research scientist is not an innovator but a solver of puzzles, and the puzzles upon which he concentrates are just those which he believes can be both stated and solved within the existing scientific tradition.”

How to move beyond this problem? Kuhn argues that it is the slow accumulation of anomalies that leads to breakthroughs in scientific thinking. When weaknesses in the old paradigm are revealed and resolved (or not), this highlights the problems with the existing way of doing things. This slowly builds up to the point that the old paradigm reaches crisis point and is supplanted.

So the answer is not to throw the results baby out with the reductionist bathwater. Instead, if the results agenda is here to stay, the key may be to reform it to make it more relevant to the complex, ambiguous world we live and work in.

Nancy Birdsall put her thoughts forward succinctly in a blog last year:

For a country to get results might not require more money but a reconfiguration of local politics, the cleaning up of bureaucratic red tape, local leadership in setting priorities or simply more exposure to the force of local public opinion.

Let aid be more closely tied to well-defined results that recipient countries are aiming for; let donors and recipients start reporting those results to their own citizens;

let there be continuous evaluation and learning about the mechanics of how recipient countries and societies get those results…

[let’s focus on] their institutional shifts, their system reforms, their shifting politics and priorities…

And to add one more point in closing: let’s also make sure we demand the same results of the ‘results agenda’ as it demands of every other agenda.

Join the conversation! 18 Comments

  1. Thanks for this post – an important counterbalance to the perceived divide in our sector. (For example, see today’s Global Dashboard, “In Praise of Results” – see link below.) I don’t think the sides are really that far apart. An adage I follow: no stories without numbers, no numbers without stories.


  2. Great write up, as always!

    Your comment:

    “when faced with ‘wicked problem’-style challenges to the engineering mindset, the predominant tendency is to try to apply the existing mindset more firmly: ‘to stay safe within the existing cognitive framework and try to find a solution within it.’”

    reminds me of how people often react when speaking a language others don’t manage well. When someone doesn’t understand them, they keep repeating the same phrase in a louder and/or slower voice thinking eventually they will be understood.

    A better solution is often re-wording the idea until they come upon a way to say it that is understood.

  3. I find the way that you cite the UN report more than a bit misleading.

    First of all, it was not an “evaluation of results-based management approaches among international agencies”, but an evaluation of its introduction specifically and solely within the UN system.

    Secondly, of the critiques you say are mentioned in the report, only the first one was suggested as a general critique on RBM; the others related to the specific implementation that was under consideration and not to RBM in general.

    As far as I am concerned, this cavalier treatment of at least one source sadly invalidates this article.

  4. Thanks for the comments so far.

    JL, thanks for the link and the adage. SS, the metaphor of language is a great one.

    MK, thanks for pointing this out, I have updated the post accordingly – believe me it was not my conscious intent to mislead.

    As for the points about RBM as a concept vs RBM as implementation, again, I have clarified this where appropriate.

    However, I think this report is useful because it does seem to resonate so strongly with findings from other reviews – for example, among donor agencies. There is also quite a lot of overlap with reviews of the LFA in NGOs too. Finally, the findings also chime with how RBM-style approaches have been applied in government departments dealing with domestic issues in developed countries.


  5. This discussion points to the solution, but perhaps I can take the argument a step further. Any development intervention is premised on the idea that it will make a difference, through some sequence of events (from activities to outputs to outcomes to impacts, in the parlance). The problem is that the sequence is rarely articulated in any detail – so it is not possible to go out and find out whether it is unfolding in the way that was anticipated.

    People working in development in the field have this sequence or logic in their heads, of course – but getting it down on paper in sufficient detail as a results chain (not only as a logframe) is always valuable. Firstly, it provides an opportunity to get everyone in the team on the same page (quite literally) – staff, partners and others.

    Secondly, it shows more clearly what the assumptions are – and enables the team to go out and see if they are valid. If the whole programme is based on the idea that x will lead to y, and that y will lead to z, surely it is important to go out and see if indeed x is leading to y etc… If people got trained, are they doing anything differently as a result, for example.

    This is not a one-off exercise, but needs to be updated regularly in the light of changing circumstances, increasing understanding and knowledge. It is the way that development people in the field can regain the results measurement initiative; the alternative is having a method imposed on them, that they generally don’t like very much.

  6. Hi Ben,

    Great post. I’ve been using the conclusion of the UN report you mention as an example of the pitfalls of unthinking RBM for years. Super powerful!

    To my mind, we need to develop practical alternatives for aid managers, to help them do their difficult job better. I recently wrote up a ten point management agenda, building on a similar critique of RBM and a different definition of what ‘performance’ means for NGOs: contributing to other people’s efforts to improve their lives and societies. It’s all up at http://www.ngoperformance.org

    In particular, we really need better ways of assessing performance. Feedback systems seem to offer some encouraging and responsive approaches. I’m just starting to wonder if they can be married up to Cash On Delivery Aid – wouldn’t that be fascinating?

  7. Thanks for a very interesting posting.

    As a former UN staff member who had to oversee the design and application of dozens of project logframes, I have come to the conclusion that the main weakness of the RBM approach is the consistent lack of validation of the assumptions underpinning the construction of the logframe (the theory of change behind the project design), both at the project design stage and during its implementation. Monitoring processes do not review/revise assumptions, and therefore adaptive management changes during project implementation are usually superficial. Project participants should be willing to consider that the project design may have been based on incomplete or wrong assumptions, and should have the courage to face the budgetary, reporting, etc. consequences. This is hardly the case because both donors and agency managers do not want to hear about failures.

  8. […] a Comment This is a familiar refrain of this blog, but I was reminded of it by Ben Ramalingan’s dissection of the failures of Results-Based Management in (UN funded) aid projects. He makes some good and depressing points about the inflexibility in approach and other limitations […]

  9. One of the main issues I had with the process was that managing change was so bureaucratic. You start up a project and hit a snag. The whole business seems to be geared to keeping your head down and not going for fast changes.

    It is not the need for results that stifle, it is the need to get permission.

    Donors should ask for access to the (internal) quarterly reports in order to be able to push for the needed changes when things go wrong.

  10. I am lately very much interested in the Scrum methodology for project management. You have very short cycles of work – feedback – adjust – plan – work. This allows teams to start working fast and adapt along the way. Crossing the river like an elephant. Just as with the controversy between autocracy and democracy, I find a feedback-based adaptive methodology, with rapid response to failure, in the longer term more predictable than the stop-start long-term strategic framework alternative.

  11. […] is a familiar refrain of this blog, but I was reminded of it by Ben Ramalingan’s dissection of the failures of Results-Based Management in (UN funded) aid projects. He makes some good and depressing points about the inflexibility in approach and other limitations […]

  12. You have crystallized the essence of the issues. After 35+ years in the trade, beginning with Logical Frameworks in the 1970s, I have seen the avoidance of learning simply produce more frameworks with the same non-“results” and lack of follow-up discussion on the uses of the frameworks.

    With a lot of experience in interviewing, one can perhaps in 30 minutes or less detect which organizations (Ministries or research organizations or NGOs or communities or even donor officers) are potentially willing to learn and produce some substantial results, and report honestly on progress, screw-ups and changes in tactics. These are relatively few. Alas, irrespective of which “results” framework is used.

    At the essential macro level — government (sorry, governance) — one can read the local newspapers, talk to people – ministers to communities – read about and discuss their political history, review for example the EIU, etc. Better still, live there and engage, a habit the chattering and research classes in Washington seem to avoid. Improvements in overall governance are, in any country including the USA or Britain, impossible to framework.

  13. […] Ben Ramalingan was talking much more generally than about the failure of a single project when he criticised the Results-Based Management framework that I discussed in my previous post, the wrong management framework is another good candidate. […]

  14. […] activities and not enough into activities that try to improve systems and rebalance power? Ben Ramalingam shared a useful graph that mapped the results context by (1) the nature of the intervention (simple […]

  15. […] On the other side are those who argue for a ‘push back’ against this approach. Such reductionist approaches are seen as only suitable for certain kinds of development interventions, and at their worst, these approaches inhibit the creativity and innovation needed to achieve results in the first place. The danger here is that we throw out the results baby with the reductionist bathwater (see here for a previous Aid on the Edge post on this). […]

  16. […] Ben Ramalingam pointed out the other day, the results agenda can be conceived in ways which are linear and formalistic, and so stifle the […]

  17. […] and reacting to results.  Natsios himself calls for ‘a new measurement system’. But – as Ben argued  last year – we must ensure that the results agenda is applied in a way which is relevant to the complex, […]

  18. […] and reacting to results.  Natsios himself calls for ‘a new measurement system’. But – as Ben argued  last year – we must ensure that the results agenda is applied in a way which is relevant to the complex, […]



About Ben Ramalingam

I am a researcher and writer specialising on international development and humanitarian issues. I am currently working on a number of consulting and advisory assignments for international agencies. I am also writing a book on complexity sciences and international aid which will be published by Oxford University Press. I hold Senior Research Associate and Visiting Fellow positions at the Institute of Development Studies, the Overseas Development Institute, and the London School of Economics.


Accountability, Evaluation, Innovation, Institutions, Knowledge and learning, Public Policy, Strategy