‘What works’? Systematic reviews in international development research and policy

Written by Richard Mallett

Explainer

Although well established in the natural sciences, systematic reviews are relatively new to the world of international development research. But they are increasingly being promoted by the likes of the UK Department for International Development (DFID) and AusAID as an important step in strengthening evidence-informed policy-making amongst aid agencies.

Recently described by The Guardian’s Ben Goldacre as the ‘cleanest form of research summary’, systematic reviews involve synthesising and assessing all available evidence in order to answer tightly focused research questions, usually on the outcomes or impacts of specific programmes. A review done well is considered ‘the most reliable and comprehensive statement about what works’ in terms of programme effectiveness. This matters particularly now, when donor governments are under pressure to demonstrate value for money in an economic climate defined largely by austerity and cutbacks.

The idea behind systematic reviews is hugely appealing: researchers follow a carefully designed review protocol, identify relevant studies from a broad range of sources and grade their quality against pre-determined scales, before synthesising their findings and drawing an objective conclusion about programme effectiveness. In theory, they should help decision-makers identify ‘what works’ in generating positive outcomes for beneficiaries – and therefore have the potential to shape future spending choices.

However, a new briefing paper by the Secure Livelihoods Research Consortium (SLRC) suggests that systematic reviews may not be all they’re cracked up to be. Drawing on researchers’ shared experiences of conducting eight systematic reviews into the impacts of a range of development interventions – from cash transfers to school feeding – the paper identifies a number of ways in which the approach can become problematic, and suggests that its use within international development research demands more careful consideration than it has so far received.

Practical constraints

While the theory sounds good, in practice systematic reviews are not straightforward. In addition to being heavily resource-intensive exercises – in terms of both cost and time – one of the major problems relates to researchers’ ability to objectively identify and retrieve all relevant evidence. The experiences of some SLRC and ODI researchers suggest that a great deal of the literature on intervention impact in developing countries is located beyond peer-reviewed journals, which means that manually ‘hand-searching’ institutional websites – a more subjective practice – becomes just as important as, if not more important than, plugging pre-defined search terms into academic databases. Identification of relevant material is further complicated by the remarkable prominence of vague, unclear study titles and abstracts within the development studies literature (something noted by Duncan Green last year).

Thus, although systematic reviews are considered objective and rigorous, there is no guarantee that they – or rather the individuals conducting them – will successfully identify every relevant study, meaning that subsequent conclusions may only partially reflect the true evidence base.

What evidence counts?

Although understandable, the desire to assess evidence against uniform quality scales (such as the one described by Lawrence Sherman and colleagues) is both concerning and problematic. Systematic reviews tend to privilege one kind of method over another: full-blown randomised controlled trials (RCTs) often represent the ‘gold standard’ of methodology, while in-depth qualitative evidence is rarely given the credit it deserves. Yet considerations of political economy, social relations and institutions are essential to understanding why particular interventions work in particular places at particular times. By privileging research that aims to measure impact by introducing laboratory-like conditions in the field – effectively abstracting the intervention from its context – systematic reviews do not necessarily help us understand these important mediating factors.

The future

Systematic reviews can ultimately add value to development research: in addition to reducing researcher bias and increasing rigour, they place empirics centre-stage within literature reviews. But their strengths must be balanced against a number of practical and fundamental limitations, such as those outlined here.

Perhaps most importantly, we need to avoid turning discussions on the use of systematic reviews in international development – and indeed the social sciences more broadly – into an overly technical niche topic that only speaks to reviewers themselves. There are far broader implications of the systematic review debate which should be of interest to a much wider audience.

For example, the growing prevalence of systematic reviews opens up a number of questions about the nature and process of evidence-building: about how we, as researchers, reach and construct narratives on programme impact, and about how decision-makers engage with those narratives. Their use also reminds us that questions of impact and effectiveness should always be driven by empirical evidence – not by anecdotes and received wisdom.

The big point is this: while it is now fairly well recognised that good international development policy relies, to a large extent, on good research and evidence, before we can reach sound conclusions about ‘what works’ in terms of programme effectiveness we must first wrestle with the question of ‘what works’ in evidence-building. Systematic reviews may well have a central role to play, but they must be handled with sense and diligence.