A different and potentially radical approach to increasing aid effectiveness has been taken by Abhijit Banerjee and colleagues at the Abdul Latif Jameel Poverty Action Lab at MIT, the main tenets of which are discussed in 'Making Aid Work'. Focusing less on process and more on outcomes, Banerjee and colleagues argue that aid should be subject to the rigours of Randomised Controlled Trials (RCTs) to increase efficiency and efficacy. Aid thinking, Banerjee suggests, is lazy thinking: limited numbers of weak evaluations contribute to a lack of consensus around that simplest of questions – what works?
Aid interventions, so the Banerjee argument goes, are too diffuse to be effective, with anecdotal and suggestive findings from poorly conducted evaluations often framed as social scientific fact. Such lazy thinking, and the costs imposed by weak evaluative tools, inappropriate methods and flawed inference, only reinforce the arguments of aid pessimists. Banerjee and colleagues, self-professed aid optimists who sit at MIT across from Massachusetts General Hospital, believe they have an answer for developing a more robust evidence base for aid: importing the medical model of RCTs into the aid business.
So, what is an RCT? A randomised controlled trial in social science is an evaluation of a public policy intervention. Research is structured to answer a counterfactual question: how would participants' welfare have differed if the intervention had not taken place? This can involve 'before and after' and 'with and without' comparisons. The former are not dissimilar to more conventional evaluation tools that use baseline data, and may struggle to isolate the effects of an intervention from wider societal changes. The latter construct a comparison group that is not directly exposed to the intervention, and whose outcomes would have been similar to participants' had the intervention not taken place. Such 'with and without' comparisons allow researchers to estimate the average effect of the intervention across the participant group. The main difficulty lies in minimising selection bias between the two groups – hence the importance of randomisation.
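The 'with and without' logic can be illustrated with a toy simulation – entirely hypothetical numbers, not drawn from any study discussed here. Participants are randomised into treatment and control groups by coin flip, and the difference in mean outcomes estimates the average treatment effect:

```python
import random
import statistics

random.seed(0)

# Hypothetical setup: assume a true treatment effect of 2.0 units of
# 'welfare' for those who receive the intervention.
TRUE_EFFECT = 2.0
population = list(range(1000))

# Randomisation: half the population is assigned to treatment at
# random, which (on average) removes selection bias between groups.
treatment = set(random.sample(population, 500))

def outcome(person):
    baseline = random.gauss(10.0, 3.0)  # welfare without the intervention
    return baseline + (TRUE_EFFECT if person in treatment else 0.0)

results = {p: outcome(p) for p in population}
treated = [results[p] for p in population if p in treatment]
control = [results[p] for p in population if p not in treatment]

# 'With and without' comparison: the difference in mean outcomes
# between the two groups estimates the average treatment effect.
ate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated average treatment effect: {ate:.2f}")
```

Because assignment is random, the control group's mean outcome stands in for the counterfactual – what would have happened to the treated group without the intervention – which is exactly the comparison a 'before and after' design struggles to make cleanly.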
There are certainly merits to an RCT approach. Increasing the evidence base for pro-poor decision-making must be welcomed: for example, 'with and without' RCT comparisons that use a naturally occurring experimental design allow for a very straightforward interpretation of project interventions. However, Banerjee and colleagues go further than this: they imply that all aid disbursements should be based on RCT evidence.
Above and beyond questions around contextual specificity, the most salient arguments in the invited responses to Banerjee's essay can be clustered under four headings: the scale and reach of evaluations; technical concerns; moral and ethical issues; and political dimensions, for both donors and governments.
Scale and reach of evaluations: Whilst RCTs appear ideally suited to small-scale projects, they cannot evaluate broad policy changes: macro-economic policies such as exchange rate or trade regimes are not amenable to RCTs (Goldin et al., Bhagwati); nor are labour market reforms, investments in infrastructure – such as the creation of a power plant or road construction – or the provision of basic services in health and education (White).
Technical concerns: Whilst RCTs may increase efficiency in the allocation of aid flows – getting the most bang for your buck, if you like – 'before and after' RCTs require baseline data that may not be obtainable, and the time required to ensure interventions are firmly embedded may conflict with the short time horizons of donors and governments (Goldin et al.).
Moral and ethical issues: The case for RCTs in the aid industry must confront moral concerns similar to those faced in medicine some time ago: path dependence, institutional histories, and an aversion to accepting that some interventions just don't work (see Ben Goldacre's column). However, the ethical case against assigning sections of the population to a control group may be strong: can we justify withholding an intervention from potential recipients as part of a trial when the intervention may save lives? (Goldin et al.).
Political dimensions: The case for RCTs on efficiency grounds may ignore some wider political factors shaping aid flows: donors have strategic interests, and a country's status may be enhanced through giving aid, potentially undermining attention to efficiency (Moore). More importantly, aid flows now involve very little project expenditure: direct budget support and a shift towards working on governance and institutional processes severely limit the potential for RCT-style evaluations (Moore). A further political angle is highlighted by Bhagwati: RCTs may increase aid effectiveness, but how does this intersect with country 'ownership'? If an intervention is adjudged to be highly cost-effective using an RCT, but is rejected by government on political grounds, the effort spent on rigour and comparison may come to nothing. Such conundrums highlight how the challenge of aid is not just technocratic, but a social and political endeavour: for example, how can RCTs contribute to efforts to enhance social inclusion and rights? Such questions open up potentially fruitful lines of enquiry: is it possible to combine RCTs with qualitative and participatory modes of research? If so, how? And on the basis of what philosophical standpoints?
The responses to Banerjee's pitch for randomised controlled trials are astute, but highly varied. It is clear that RCTs are not the panacea Banerjee intimates, but they certainly offer a means of strengthening evidence-based decision-making for particular policies at particular times. Locating the spaces and places where RCTs are most suitable, and establishing how they can be combined and iterated with wider evaluation tools, are key questions that development scholars and practitioners should address – both to enhance the effectiveness of aid and to counter the arguments of aid pessimists.