Managing better for results, not just measuring them better: lessons on complexity for the results agenda

Written by Harry Jones

Explainer

Recent reforms at DFID, USAID and elsewhere attempt to improve the quality of aid by stressing a more robust focus on results. So far, this has largely translated into a more rigorous measurement of impact, tying impact assessments to existing systems that shape implementation. Unfortunately, lessons from elsewhere suggest this will be insufficient to ensure ‘smarter aid’ delivers the envisaged increase in effectiveness, unless agencies start to rethink the underlying systems and accountabilities themselves. More specifically, agencies need to recognise the complexity of the many problems they face, and adjust implementation structures accordingly.

Initiatives under the ‘results agenda’ often rest on the judgement that evaluation in development has paid insufficient attention – or insufficiently rigorous attention – to the effect a programme has on its surroundings, and to the wellbeing of people in developing countries. This judgement is valid and well founded (as evidenced by an ODI study of evaluation in development agencies), and it will direct resources and political capital to an area where they are much needed. However, a good deal of the debate – my own contributions included – has focused on which methodology to use to measure results, rather than on the broader question of how information on results feeds into systems for decision-making and accountability.

By and large, this heightened focus on measurement and evaluation is being integrated into the same elaborate planning mechanisms and models of results-based management. Although these tools are highly appropriate for some policy problems, a growing body of evidence shows that they have systematic side effects when applied to complex problems. Examples include: delivering effective health or education services across large, diverse developing countries; promoting institutional change and ‘good governance’; and enabling economic growth. These complex problems pose particular challenges for implementation: the capacities to address them are often distributed; change may be inherently uncertain and causality context-specific; and change processes frequently require the management and negotiation of conflicting perspectives and aims.

The prevailing models used to shape implementation do not recognise the challenges of complexity. Incentives and rewards within agencies are geared towards presenting figures that show neat, quantified progress, while more difficult messages, and ambiguous, frequently political judgements and trade-offs, are ‘swept under the carpet’, as Ros Eyben has argued. My colleague Alina Rocha Menocal recently argued that programming for short-term visible results does not always provide the foundations to support effective, resilient and responsive states and institutions in the long term. Hiding complexity leaves to chance the processes, skills and work that are crucial to making a real difference on complex challenges. Formal processes become less relevant and useful to the real work of development and, as Andrew Natsios has argued, we are already seeing a widening divide between those involved in development programming and those charged with managing and supporting it. As David Booth writes, accountability to funders can only be nominally served in this situation, at the cost of the quality of the intervention.

But, having recently completed research on how to implement policies and programmes in the face of complex problems, I believe there are alternatives:

  1. Agencies should take a ‘networked governance’ approach to meet the challenge of distributed capacities, devolving responsibilities and powers to various levels. Units and organisations could be held accountable for fulfilling their mission and functional role, as well as for delivering outcomes that may lie outside their direct control. Collaboration should take the form of co-management and power-sharing rather than purely contractual relationships.
  2. In order to manage units in the face of uncertainty, emphasis should shift from elaborate ex-ante assessments towards accountability for how they adapt and learn throughout an intervention. Responsibilities can be tied to clear principles for action, rather than just results or fixed plans, and programmes may need to work towards learning objectives as well as performance goals.
  3. To meet the challenge of multiple forms of knowledge and competing perspectives, individuals and organisations must become more responsible in how they manage the interaction between knowledge and power in policy-making and implementation. Key decisions need to be made in an inclusive, deliberative way, with judgements made by peers as well as through detached technical assessments.

These principles, and some examples of how they are being applied in practice, are elaborated in my recent working paper. While they do not amount to a single comprehensive model, they mark out the key dimensions of a solution, and there are pieces of the jigsaw puzzle waiting to be put into place by innovative and forward-thinking decision-makers in donor organisations. Furthermore, building and sharing workable models would allow agencies to begin to provide the right checks and incentives for working with complexity.

Attempts are being made to build capacity and discussions are underway regarding the selection of appropriate tools. If those responsible for supporting implementation take the initiative to seize on these approaches, development agencies could be at the forefront of modern public service delivery, rather than following behind domestic institutions. Right now, many countries are seeing a ‘high water mark’ of public and political commitment to aid, which should give agencies the confidence to grasp the nettle of complexity.