
From M&E to monitoring and learning

Image: Yogyakarta, Indonesia, 2011. Credit: Nugroho Nurdikiawan Sunjoyo/World Bank

This chapter provides a practical monitoring approach that builds reflective and evaluative practice into the work of influencing policy, to support decision-making and demonstrate progress.

Lewis Carroll points to an important consideration when embarking on a learning exercise: if we want to prove or improve our work then we need to be able to describe clear intentions to direct our learning.

We need to care where we are going. Without clear intentions, we are at liberty to define success in any way we like. This may sound appealing to some but is more likely to result in repetitive circles than learning.

The impact of academic research is traditionally evaluated via peer review to assess quality, relevance and accuracy; and citation analysis to assess uptake and reach. While both of these are important, neither helps us discover what influence the research may have had on policy (assuming it had an intention to do so), whether the research was worth undertaking and hence how to make it more effective. Consequently, all we learn is how to make our research more attractive to other researchers.

Traditional M&E approaches – which rely on a simple feedback model with predefined indicators, collecting data and assessing progress towards pre-set objectives – are simply not adequate in the context of policy-influencing interventions. As explored in Diagnose the problem, many of the results we are looking for cannot be projected ahead of time in a linear fashion.

The reality of the distributed capacities, divergent goals and uncertain change pathways that pervade many policy contexts means measuring progress along a predefined course is insufficient for monitoring.

Effective M&E requires a careful combination of sensing shifts in the wider context (policy, politics, economics, environment, society), monitoring the relationships and behaviours of diverse actors, weighing up different sources of evidence, remaining open to unexpected effects and making sense of data through collaborative enquiry. This kind of monitoring may seem challenging, but it doesn't have to be.

ROMA aims to shift the emphasis away from evaluation and towards ‘sense-making’ of monitoring information. This fits with current management practices, ensuring that decisions on how to respond to an unpredictable situation are evidence-based and widely owned.

Monitoring and learning principles

The principles that underlie ROMA's monitoring and learning approach can be summarised as:

  • Appropriate to purpose, scale and context. 

In ROMA, the primary driver for monitoring is its intended users and how they will use the data and insights. But scale and context are also determinants: a small-scale intervention will require much lighter monitoring than a long-term, multi-strategy one. If you are not sure of the scale of the intervention at this stage, this chapter will guide you through the planning process.

Context works in a similar way: simple problems will require only routine monitoring and performance management, whereas problems exhibiting one or more signs of complexity will need more sophisticated, responsive and multi-purpose monitoring systems. If you are unsure about the level of complexity, Diagnose the problem will introduce you to three clear signs to look for.

  • Defines realistic results within the sphere of influence.

The influence of an intervention has definite limits, set by resources, time, reach, politics and so on. Beyond the sphere of influence lies the sphere of concern, which is where the results that really matter sit (such as better education, quality health care and secure livelihoods).

However, you have to rely on others to influence these results. ROMA considers results only within your sphere of influence. These are the ones that can be measured and can guide strategy and engagement.

The planning stages in the previous chapter as well as the monitoring areas and measures in this section are used to define the intervention and its sphere of influence. They point to the priority areas to monitor.

  • Focuses on actors and graduated change.

Much policy-influencing work revolves around people. It follows that monitoring policy influence should also revolve around people. In ROMA, an intervention is monitored through its effect on key stakeholders – those people or organisations within the sphere of influence of the intervention and whom the intervention seeks to influence directly or indirectly. 

ROMA recognises that effects can come in many guises and it is important to be able to pick up a broad spectrum – the simple, immediate responses that show you are on the right track as well as substantive commitments that indicate you are close to your goal.

  • Reasoned judgement rather than statistical significance.

ROMA is an inductive approach that seeks to generate evidence we can use to deepen our understanding of our effect on policy. It does not seek to produce a statistical, numerical measure of policy influence.

This chapter will go on to explore the following:

What to monitor and why

This first section describes why and what to monitor. Deciding what to monitor is an important first step in building a strong M&E plan. In order to choose which indicators to monitor, you need to know why you are doing it: the purposes behind M&E are usually viewed in terms of learning and accountability. But we need to be more specific.

How to monitor – collecting and managing data

The second section of this chapter divides data collection into real-time collection methods and retrospective study methods. Find out about which methods are particularly suited to monitoring policy influence and engagement.

Making sense of learning and decision-making

In the final part of this chapter, we explore why the information and insight gathered through monitoring and evaluation are of little value unless you create the space and time to make sense of your findings and decide what to do with them.

The ROMA guide to monitoring and learning - Simon Hearn