The question of whether ‘aid works’ is at the heart of the political debate. On the one hand, we have seen commitments to increase aid, designed to achieve the MDGs, and some evidence that the actual flows are beginning to rise. New instruments to deliver aid and support developing countries are also being introduced, for example the airline tax proposal pioneered by France. On the other hand, public expenditure is facing tight constraints in all western countries, not least as the outlook for the global economy becomes somewhat more pessimistic. It is not surprising that development ministers and the public who elected them are facing hard questions about impact and value for money.
Evaluation is clearly a strategic feature of this debate. In particular, expectations are higher than ever before with regard to the knowledge, evidence and expertise that evaluation should deliver in support of policy decision-making and resource allocation.
The fact of the matter is that, despite agreed overall frameworks and common standards, such as the DAC criteria for evaluation, evaluation practice in development agencies varies considerably, depending on a number of factors: the resources invested, the independence of the evaluation function, and the methodologies adopted, to name a few.
This study was commissioned to map and compare evaluation practices across development agencies, primarily with a view to stimulating an internal debate within AFD on its evaluation systems during a time of reform. However, we believe the report can also stimulate a wider debate within the development evaluation community, as it complements the existing literature on donor policies and the individual case studies of good practice.
Nine agencies were reviewed as part of this study. The results show that there is indeed variation among the strategies and practices adopted by different evaluation units, in terms of both internal arrangements and the roles and responsibilities fulfilled. This is partly explained by the fact that units are increasingly expected to fulfil a variety of roles and to engage in a wide range of activities. However, evaluation units share a number of common features and challenges, and are on a similar journey (although at different stages): from a relatively straightforward model of project evaluation aimed at internal management and accountability, towards a more complex model of policy, country and partnership-led evaluation, which requires new skills, roles and organisational arrangements.
Summing up, the report describes an apparent disconnect between the rhetoric on the strategic and growing importance of development evaluation and evaluation practice in many development agencies. This "institutional evaluation gap" calls for greater attention to institutional approaches to evaluation, arrangements and capacity, and perhaps for a more collective effort among the key players in development evaluation.
The study has been a fruitful collaboration between the Agence Française de Développement (AFD) and the Overseas Development Institute (ODI): we were able to draw on our respective experience and expertise, and we hope that other agencies will find the results interesting.
Chief Economist, Agence Française de Développement
Director, Overseas Development Institute