Development evaluation: The institutional challenge

Time (GMT +00): 13:15 - 14:30


Marta Foresti - Research Fellow, Poverty and Public Policy Group, ODI

David Peretz - Chair, DFID's Independent Advisory Committee on Development Impact (IACDI)

Alison Evans - Director of Programmes, Poverty and Public Policy Group, ODI

Nick York - Head, Evaluation Department, DFID

The first presentation by Marta Foresti focused on the findings of the comparative study on evaluation policies and practices commissioned by the French Development Agency and their implications for the institutional gap in development evaluation. The overall findings of the study suggest that:

  • Development agencies’ evaluation units share common ‘paradigms’, similar mandates and challenges; all face increased complexity and all share a commitment to harmonisation and joint evaluation. Yet there is considerable diversity in practice, in terms of approaches, roles, products, management, etc.
  • Development evaluation can be described as a ‘function in search of identity’: in some agencies the search is more advanced than in others.
  • Observation of the internal structures and functioning of development agencies’ evaluation units suggests a disconnect between the rhetoric on the strategic importance of development evaluation and actual practice in development agencies.
  • Taken together, these findings point to an ‘institutional gap’ in development evaluation. A clarification of the institutional role of evaluation, at different levels, is called for, including greater attention to the quality of evaluations and the use of evaluation findings.

David Peretz described the role of the Independent Advisory Committee on Development Impact (IACDI) recently established at DFID. This includes:

  • To assure the independence and quality of evaluations at DFID, and to agree a programme of independent evaluations
  • To try to ensure that recommendations and lessons of evaluations are followed up
  • Meeting these responsibilities is a challenge for a part-time committee that meets three times a year, although the committee is a highly expert and talented group.

David proposed a number of questions for the audience to discuss. A major concern of IACDI is the independence of evaluation. The ODI report suggests that DFID is an outlier, with EvD more embedded in the management structure than most other evaluation units, and that relying on external consultants does not guarantee independence. Other factors, such as rules for budget allocation and the appointment of staff, are also important. What do others think? Is EvD’s independence and “clout” compromised? Even if not, does appearance matter? How can independence be achieved without isolation?

Comments and questions raised during the discussion

There is a good news story worth emphasising here in that:

  • There is evidence that evaluation systems are converging around certain evaluation criteria and approaches. Professionalisation of evaluation units and their roles within agencies is moving apace. This is coupled with evidence of a tightening of accountabilities and management responses (which has been a challenge in the past).
  • There is also evidence of evaluation moving up the value chain, with more strategic evaluations being undertaken (and not necessarily to the exclusion of good quality project evaluation).
  • There is also an acknowledgement of the importance of plurality of method (more by default than by design) and recognition of a range of methodologies, which is potentially good news.
  • There is increased emphasis on communications and learning, with good disclosure policies and new evaluation communications products (i.e. shorter, punchier and better targeted) that are reaching key stakeholders. However, internal communications and learning within agencies remains a challenge – the internal feedback loop is lacking.
  • There are some international initiatives to join up evaluation work, with a recent effort in particular on impact evaluation. Existing fora include three donor networks: at the OECD-DAC, the UN, and amongst the multilateral development banks. 3IE is a new initiative led by the CGD (with finance from the Hewlett and Gates Foundations) to direct more funding to impact evaluation. There is also a new ‘network of networks’ on impact evaluation, which started to develop about 18 months ago and which has made progress on southern capacity development, as it now has 15 or 20 new members from LDCs. This needs to be complemented by a much better connection between the rich body of experience and expertise in northern (global and regional) evaluation associations and the south. This also raises an immediate question regarding acceptable approaches and standards, which are not necessarily the same in an LDC context.

However, the study and associated discussions also illustrated some areas that are missing or need more attention:

  • The study is (deliberately) very focused on improving the quality and standard of evaluation units in individual agencies. What is missing is coverage of the international evaluation architecture and the potential savings or synergies to be realised by working in a more collaborative, international manner (with due attention paid to ensuring that transaction costs, typically very high, remain manageable). Joint evaluations have not really broken through on this issue and it is not clear that they are the answer. While efforts to undertake more peer reviews may help here, standards need to be set for these to avoid collective action dilemmas.
  • The policy environment is changing and development evaluation has to raise its game accordingly. For instance, where is development evaluation in terms of helping us on the MDGs and the provision of global public goods and whole of government approaches? Overall, there is a sense in which the emphasis on units within agencies misses a trick in terms of response to global public policy challenges.
  • A particularly depressing area is the limited attention paid to evaluation capacity development in southern countries (excepting some good work at the World Bank and DANIDA) and, more fundamentally, to building constituencies in the south that use evaluation outputs, since there is very limited demand for evaluation in LDCs. This is a clear failure of the development community in addressing what is a core Paris Declaration principle (i.e. ownership). Moreover, it is much easier to talk about than to address substantively.
  • While the evaluation criteria themselves seem relatively well established, there is a need within the evaluation community to establish some clearer ground rules on the use of mixed methods, including a conscious effort to link evaluation design to a programme’s theory, i.e. there must be a much clearer link to the programme’s underlying logic (what one discussant described as a ‘theory of action’). Ground rules for combining methodologies are needed: we must be prepared to triangulate between approaches (e.g. between quantitative and qualitative approaches) and to demonstrate how this has been done. We need to be more willing to consider counterfactuals (not just in the strong experimental design sense but also through intelligent consideration of alternatives). The evaluation community is very good at logical framework type approaches that posit very simple, linear logical relationships but very bad at dealing with the types of feedback loops encountered in complex development contexts.
  • A further fundamental problem identified was the weak take up of the lessons from evaluations in feeding back into programme design. This is related to a lack of internal institutional incentives within agencies for useful, robust and utilised internal evaluations. Agencies need to be a lot more serious about follow up and this will require measures to address underlying incentives.

More specifically, the role of DFID’s new Independent Advisory Committee on Development Impact (IACDI) was also discussed:

  • Different agencies are setting up different advisory committees to tighten their national evaluation practices. Perhaps there should be a single international committee on evaluation, but if anything at the moment the trend is in the opposite direction, towards a fragmented set of committees and approaches. With all due respect to the role of IACDI, what is its agenda around the global evaluation issues highlighted? What about the broader agenda and connection with global initiatives on development evaluation?
  • The most important thing for IACDI will be to address today’s evaluation challenges and not to rehash past debates based on an aid world which is passing away (new evaluation tools and approaches are demanded by the Paris Declaration and General Budget Support for example). As regards the tension identified between independence and integration of evaluators, the question is no longer about bias, but about whether you are prepared to be strategic. Few recent high level evaluations have kept their eye on the long term strategic questions because they responded too much to the operational needs of donor and country offices and too little to the question at hand. A further pressure which has tended to weaken evaluation efforts is the need to comply with old fashioned rules (the DAC Evaluation framework for example is a seriously deadening influence), thereby preventing evaluators from rising to the strategic questions which provide the genuine challenge of evaluation today.
  • Ironically, the two questions that senior politicians in the UK ask (“Does aid work?” and “Is DFID any good at it?”) are typically not addressed by evaluators. The former is really a research question, while the comparative question is never asked openly as it is too politically sensitive. Still, with DFID’s budget growing at 11% per year, we need to be very careful to ensure we are getting value for money – IACDI can certainly help here.


The question of whether ‘aid works’ is currently at the heart of the political debate. On the one hand, we have seen commitments to increase aid, designed to accelerate the achievement of the MDGs, accompanied by some evidence that actual flows are beginning to rise. On the other hand, public expenditure is facing tight constraints in developed countries, not least as the outlook for the global economy becomes somewhat more pessimistic. It is not surprising that development ministers, and the public who elected them, are facing tough questions about impact and value for money. Expectations are higher than ever before with regard to the knowledge, evidence and expertise that development evaluation should deliver in support of policy decision-making and resource allocation.

A recent comparative study conducted by ODI on behalf of the French Development Agency found that despite agreed overall frameworks and common standards, practice in development agencies varies substantially depending on a number of factors, including: the amount of resources invested, the independence of the evaluation function, and the methodologies adopted, to name a few. Is it time for a new, more consistent institutional approach to development evaluation? If so, what would this imply for UK actors?

At this ODI event, Marta Foresti will present on the main findings of the comparative study and their implications for the institutional gap in development evaluation. David Peretz will focus on the role of the Independent Advisory Committee on Development Impact (IACDI) recently established at DFID.