This Opinion argues that the dominance of experimental or quasi-experimental approaches to impact evaluation (IE) could undermine learning and accountability.
First, the types of IE seen as ‘the gold standard’ are typically based on a ‘counter-factual’ methodology, comparing what has happened with what would have happened without the programme. Proponents present this as the only objective way to evaluate interventions, while dismissing other methods as merely collecting ‘opinions’.
Second, these impact evaluations are commissioned mainly for upwards accountability and ‘legitimation’ purposes (Raitzer and Winkel, 2005; Jones et al., 2009). Proving to donors that an intervention has had some impact protects existing funding and improves the chances of future funding. Conversely, where projects, programmes and even whole sectors struggle to demonstrate impact, they may lose funding.
Donors and those commissioning evaluations need a more balanced view of ‘rigour’ and ‘evidence’. Experimental methodologies are only one way of looking at the impact of an intervention, and other methodologies can be just as rigorous and objective. Counter-factual evaluation is only one way of examining causality, and is applicable to fewer than 25% of policy areas. It is crucial to recognise that experimental and quasi-experimental IEs are just one approach among many, and that, despite promises to demonstrate impact in line with agency goals, this is not always possible.