Strategy Development: Most Significant Change (MSC)


The Most Significant Change (MSC) technique is a form of participatory monitoring and evaluation. It is participatory because many project stakeholders are involved both in deciding the sorts of change to be recorded and in analysing the data. It is a form of monitoring because it occurs throughout the programme cycle and provides information to help people manage it. MSC contributes to evaluation because it provides data on impact and outcomes which can be used to help assess the performance of the programme as a whole.

Essentially, the process involves the collection of significant change (SC) stories emanating from the field level, and the systematic selection of the most important of these by panels of designated stakeholders or staff. The designated staff and stakeholders are initially involved by 'searching' for project impact. Once changes have been captured, various people sit down together, read the stories aloud and have regular and often in-depth discussions about the value of the reported changes. When the technique is successfully implemented, whole teams of people begin to focus their attention on programme impact.

MSC has had several names since it was conceived, each emphasising a different aspect. Examples are: 'Monitoring-without-indicators' - MSC does not make use of predefined indicators, especially ones which have to be counted and measured; or the 'story approach' - the answers to the central question about change are often in the form of stories of who did what, when and why, and the reasons the event was important.

Detailed description of the process

  • The first step in MSC generally involves introducing a range of stakeholders to MSC and fostering interest in and commitment to participating. The next step is to identify the domains of change to be monitored. This involves selected stakeholders identifying broad domains - for example, 'changes in people's lives' - that are not precisely defined in the way performance indicators are, but are deliberately left loose, to be defined by the actual users. The third step is to decide how frequently to monitor changes taking place in these domains.
  • SC stories are collected from those most directly involved, such as participants and field staff. The stories are gathered by asking a simple question such as: 'during the last month, in your opinion, what was the most significant change that took place for participants in the programme?' It is initially up to respondents to allocate a domain category to their stories. In addition to this, respondents are encouraged to report why they consider a particular change to be the most significant.
  • The stories are then analysed and filtered up through the levels of authority typically found within an organisation or programme. Each level of the hierarchy reviews a series of stories sent to them by the level below and selects the single most significant account of change within each of the domains. Each group then sends the selected stories up to the next level of the programme hierarchy, and the number of stories is whittled down through a systematic and transparent process. Every time stories are selected, the criteria used to select them are recorded and fed back to all interested stakeholders, so that each subsequent round of story collection and selection is informed by feedback from previous rounds. The organisation is effectively recording and adjusting the direction of its attention - and the criteria it uses for valuing the events it sees there.
  • After this process has been underway for some time, perhaps a year, a document is produced including all stories selected at the uppermost organisational level in each domain of change over the given period. The stories are accompanied by the reasons for selection. The programme funders are asked to assess the stories in the document and select those which best represent the sort of outcomes they wish to fund. They are also asked to document the reasons for their choice. This information is fed back to project managers.
  • The selected stories can then be verified by visiting the sites where the described events took place. The purpose of this is twofold: to check that stories have been reported accurately and honestly, and to provide an opportunity to gather more detailed information about events seen as especially significant. If conducted some time after the event, a visit also offers a chance to see what has happened since the event was first documented.
  • The next step is quantification, which can take place at two stages. When an account of change is first described, it is possible to include quantitative information as well as qualitative information. It is also possible to quantify the extent to which the most significant changes identified in one location have taken place in other locations within a specific period. The next step after quantification is monitoring the monitoring system itself, which can include looking at who participated and how they affected the contents, and analysing how often different types of changes are reported. The final step is to revise the design of the MSC process to take into account what has been learned as a direct result of using it and from analysing its use.
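The whittling-down process described in the steps above can be sketched as a simple data flow. This is a minimal illustration only, with hypothetical offices, domains and stories, and a stand-in selection rule where a real MSC panel would discuss and choose; what matters is the shape of the process - stories grouped by domain, one selected per domain at each level, and the selection criteria recorded for feedback.

```python
# A minimal sketch of MSC's hierarchical story selection.
# Offices, domains, stories and the selection rule are all
# hypothetical stand-ins for illustration.

from collections import defaultdict

# Each story carries its domain, the change described, and the
# storyteller's reason for considering it significant.
stories = [
    {"office": "A", "domain": "changes in people's lives",
     "text": "Family resolved a conflict", "reason": "Lasting household change"},
    {"office": "A", "domain": "changes in people's lives",
     "text": "Woman opened a small shop", "reason": "New income source"},
    {"office": "B", "domain": "changes in people's lives",
     "text": "Village formed a savings group", "reason": "Collective action"},
]

def select_most_significant(batch, criteria_log, panel):
    """Each panel reviews the stories sent up to it, selects one per
    domain, and records the criteria used so they can be fed back."""
    by_domain = defaultdict(list)
    for story in batch:
        by_domain[story["domain"]].append(story)
    selected = []
    for domain, group in by_domain.items():
        # Stand-in rule: take the first story. A real panel reads the
        # stories aloud, discusses them and chooses collectively.
        choice = group[0]
        criteria_log.append((panel, domain, choice["reason"]))
        selected.append(choice)
    return selected

criteria_log = []

# Level 1: each office panel filters its own stories.
office_picks = []
for office in ("A", "B"):
    batch = [s for s in stories if s["office"] == office]
    office_picks += select_most_significant(batch, criteria_log, f"office {office}")

# Level 2: the head office selects from the offices' picks.
final = select_most_significant(office_picks, criteria_log, "head office")

# One story per domain survives at the top; every selection left a
# recorded criterion behind for feedback to the level below.
for panel, domain, reason in criteria_log:
    print(f"{panel} | {domain} | {reason}")
```

Each entry in `criteria_log` corresponds to one act of selection, which is what gets communicated back down the hierarchy so later rounds of story collection are informed by earlier rounds.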

In sum, the kernel of the MSC process is a question along the lines of: 'Looking back over the last month, what do you think was the most significant change in [particular domain of change]?' A similar question is posed when the answers to the first question are examined by another group of participants: 'From among all these significant changes, what do you think was the most significant change of all?'.

Key points/practical tips
MSC is an emerging technique, and many adaptations have already been made. These are discussed in Davies and Dart (2005). In sum, there are 10 steps:

  • How to start and raise interest
  • Defining the domains of change
  • Defining the reporting period
  • Collecting SC stories
  • Selecting the most significant of the stories
  • Feeding back the results of the selection process
  • Verification of stories
  • Quantification
  • Secondary analysis and meta-monitoring
  • Revising the system

Example: MSC in Bangladesh
In 1994, Rick Davies was faced with the job of assessing the impact of an aid project on 16,500 people in the Rajshahi zone of western Bangladesh. The idea of getting everyone to agree on a set of indicators was quickly dismissed, as there was too much diversity and too many conflicting views. Instead, Rick devised an evaluation method which relied on people telling stories of significant changes they had witnessed as a result of the project. The storytellers also explained why they thought their story was significant.

If Rick had left it there, the project would have had a nice collection of stories, but the key stakeholders' appreciation of the project's impact would have been minimal. Rick needed to engage the stakeholders - primarily the region's decision makers and the ultimate project funders - in a process that would help them see (and maybe even feel) the change. His solution was to get groups of people at different levels of the project's hierarchy to select the stories they thought were most significant and explain why they made that selection.

Each of the four project offices collected a number of stories and was asked to submit one story for each of the four areas of interest to the head office in Dhaka. The Dhaka head office staff then selected one story from the 16 submitted. The selected stories and reasons for selection were communicated back to the level below and the original storytellers. Over time, the stakeholders began to understand the impact they were having and the project's beneficiaries began to understand what the stakeholders believed was important. People were learning from each other. The approach, MSC, systematically developed an intuitive understanding of the project's impact that could be communicated in conjunction with the hard facts.

Rick's method was highly successful: participation in the project increased; assumptions and world views were surfaced, in one case helping to resolve an intra-family conflict over contraceptive use; the stories were used extensively in publications, educational material and videos; and positive changes were identified and reinforced.

To date, the application of MSC has been mostly confined to NGO programmes and other not-for-profit organisations, but corporations are also recognising that issues such as culture change, communities of practice, learning initiatives and leadership development could benefit from an MSC approach.

This tool first appeared in the ODI Toolkit, Tools for Knowledge and Learning: A Guide for Development and Humanitarian Organisations.