Monitoring Tools: Researcher Checklist

Toolkit/guidelines

Effective use of research has the potential to improve public policy, enhance public services and contribute to the quality of public debate. Further, knowledge of when and how funded research makes a difference should enable research funders to make better decisions about how and where they allocate research funds. At the same time it is important that non-instrumental social and economic research is also valued and supported.

The routes and mechanisms through which research is communicated to places where it can make a difference are many and varied. The ways in which research is then used are also complex and multifaceted. For example, research may directly influence changes in policy, practices and behaviour. Or it may, in more subtle ways, change people's knowledge, understanding and attitudes towards social issues. Tracking these subtle changes is difficult, but is perhaps more important in the long run.

The RURU report of an ESRC symposium on assessing the non-academic impact of research addresses these issues. It lays out the reasons why we might want to examine the difference that research can make. It then explores different ways of approaching this problem, outlining the core issues and choices that arise when seeking to assess research impact. The paper raises a wide range of key questions, and consideration of these should help those wishing to develop work in this area. An aide-memoire for impact assessors is outlined below.

Initial questions for consideration when designing impact assessment

  • Who are the key stakeholders for research impact assessments, and why do they want information assessing specifically the non-academic impacts of research?
  • Is the assessment for summative or formative purposes? How will the information gleaned feed into decision-making?
  • Will any impact assessment be primarily for learning (hence examinations of process may need to be emphasised)? Or will the assessment be primarily to enable judgements to be made (hence examinations of output and outcomes will necessarily be privileged)?
  • Will the dominant mode of assessment be quantitative or qualitative - and what are the implications of this?
  • For any programme of research work, what impacts are desired, expected, or reasonable, and can impact assessments be framed in the light of these expectations?
  • Should all research have identifiable impacts? What about the notion that individual studies should primarily feed into other academic work or into research synthesis?

Questions arising from more nuanced concepts of research use

  • What types of research use/impacts are of most interest (e.g. instrumental or conceptual; immediate or longer-term)? And what steps can be taken to guard against a bias towards privileging those impacts that are most instrumental, up-front and readily identifiable?
  • What settings for (potential) research use are to be examined? Who are the actual and potential research users? Can we identify them all, even tracking through unexpected avenues of diffusion?
  • What are the implications of casting the net close or wide when assessing potential impacts?
  • Assessing impacts on policy choices may be especially problematic as research that feeds into policy choices is often synthesised, integrated with other research/knowledge/expert opinion, and digested. How will this be addressed?
  • In such complex circumstances, how can we disentangle the specific impacts of research, pay attention to non-linearity of effects, address issues of attribution, and identify the additionality of any research contribution?

Further questions arising from a consideration of research use models

  • Are we interested primarily in outputs (what is produced by the research), impact processes (how research outputs are used), impacts per se (the initial consequences of research use in various decision arenas), or outcomes (the subsequent consequences of changes in decision arenas for clients or public)?
  • Can we identify research usage at the individual, organisational and system level?
  • Can we track all types of research impact, both expected and unexpected?
  • Should we try to identify and examine unintended and/or dysfunctional impacts, such as the misuse of research?
  • How will we access the hidden or tacit use of research?

Questions to ask that acknowledge the importance of context

  • Should impacts be assessed in the absence of initiatives to increase research uptake, or only in tandem with known effective approaches?
  • Should we judge/value research on its actual or on its potential impacts?
  • How can we take into account the receptivity of context, not just in terms of the concomitant strategies used to increase uptake but also in terms of the political acceptability of findings or propitiousness of message/timing?
  • In making judgements about impacts, how can we acknowledge the role played by serendipity and the opening up of windows of opportunity?

Further questions that reflect key methodological choices

  • What are the relative advantages/disadvantages of tracking forwards from research to impacts, or backwards from change to antecedent research?
  • Research impacts may be far removed temporally from the source research - so when should impacts be assessed? What timeframes are most appropriate, given the competing pressures of waiting long enough for impacts reasonably to occur, but not so long that the trail traversed by the research goes cold?
  • How can we balance qualitative descriptions and subjective assessments of impacts with quantitative and more objective measures?
  • When does scoring the extent of impacts become a useful tool, and what are its potential dangers?
  • How can we aggregate across different sorts of impact?
  • How can (or indeed, should) impacts be valued?

Strategic questions for impact assessors

  • How can we draw policy implications from impact assessments?
  • What are the resource implications of carrying out impact assessments? How will we know what level of investment in impact assessment is worthwhile?
  • Could the need to demonstrate 'impact' influence funding bodies so that they alter priorities or even the nature of funded research?
  • Will researchers' knowledge of the role of impact assessments influence the nature of the questions posed and the methods applied, e.g. to ensure the production of readily absorbed 'policy messages' that challenge little but can readily be tracked through to impact?
  • Will the processes of impact assessment introduce new incentives or changed behaviours into the system, such as gaming or misrepresentation? For example, will savvy researchers begin to employ not just professional communicators but also media relations consultants?
  • Will our systems of impact assessment be subtle enough to identify and discount inappropriate impacts, e.g. the tactical use of research in support of pre-existing views, the application of findings beyond their realm of applicability, or 'halo' effects whereby famous studies are cited without real purpose?

This tool first appeared in the ODI Toolkit, Successful Communication, A Toolkit for Researchers and Civil Society Organisations.