The importance - and absence - of good governance indicators

Verena Fritz is a former ODI Research Fellow. She now works as a Consultant on Poverty Reduction and Public Sector Governance for the World Bank in Washington DC. The views expressed are her own.

As governance indicators have proliferated in recent years, so has their use – and the controversy that surrounds them. As a growing number of voices point out, existing indicators – many of them developed and launched in the 1990s – have a number of flaws. This is particularly disquieting at a time when governance is at the very top of the development agenda.

Many questions of crucial importance to the development community – such as issues around the relationship between governance and (inclusive) growth, or about the effectiveness of aid in different contexts – are impossible to answer with confidence as long as we do not have good enough indicators, and hence data, on governance.

The litany of problems concerning existing governance indicators has been growing:

  1. Indicators produced by certain NGOs (e.g. the Heritage Foundation), but also by commercial risk rating agencies (such as the PRS Group), are biased towards particular types of policies; consequently, the assessment of governance becomes mingled with the assessment of policy choices;
  2. Many indicators rely on surveys of business people (e.g. the World Economic Forum's Executive Opinion Survey). While business people have important insights into governance challenges, given their interaction with government bureaucracies, the views of other stakeholders remain underrepresented, as do governance concerns of less relevance to the business community (e.g. civil and human rights);
  3. The other main methodology is indicators produced by individuals or small groups of external experts – for example, the World Bank's Country Policy and Institutional Assessment (CPIA), Bertelsmann's Transformation Index, and the French Development Agency's Institutional Profiles. This entails the risk that different experts 'feed' on each other's ratings; and the depth to which external raters are able to explore the dimensions they are rating can vary;
  4. Many of the underlying questions are rather imprecise or cannot be competently answered by respondents (be they individual experts rating an issue, or people responding to public opinion surveys). For example, the ICRG rating methodology acknowledges the different dimensions of corruption (bribes vs. patronage/favours-for-favours types of exchange) but nonetheless assigns one overall number for the degree of corruption. The Bertelsmann Transformation Index asks experts to assign a rating to very broad questions such as: "are democratic institutions, including the administrative and judicial systems, capable of performing?" The Public Expenditure & Financial Accountability (PEFA) assessments are a good contrasting example of a set of indicators that are specific as well as based on in-depth analysis;
  5. Some of the best known governance indicators (the World Bank Institute's Governance Indicators/WGI, and Transparency International's Corruption Perceptions Index/CPI) are aggregates of indicators that already exist. This technique has been used to generate indicators with universal coverage when no universal primary data gathering effort was available. While transparency about the underlying indicators and aggregation methods has much improved in recent years, aggregate indicators obviously inherit the flaws of the underlying data. There are important issues with comparability over time as additional data sources are added to generate the aggregate figures; and, not least, sharpness of focus and conceptual clarity are further muddled by the aggregation process.

Given the scarcity of (public) governance indicators in the mid-1990s, and the limited opportunity to fund work on such indicators at the time, the WGI and the CPI made very important contributions by raising the profile of governance issues and by encouraging a more systematic look at governance. However, as the relative importance of governance to development thinking has grown, there is now a need to go a step further.

In light of the diagnosed flaws, some individuals and agencies oppose internationally comparable governance indicators on principle. But this amounts to giving up a potentially powerful tool for analysis and policy guidance.

Rather, we should focus on developing better indicators. No indicators, however scrupulously produced, will be perfect. An abstract and qualitative concept such as governance – and its various dimensions – will remain difficult to measure. However, improvements are possible, and better governance indicators would be an important global public good for development research and policy.

I propose that such indicators should be based on four key principles:

  1. A focus on primary data collection rather than aggregation;
  2. Ratings that are squarely focused on the most relevant concepts and questions from a development perspective;
  3. At least to some degree, ratings that are based on the views and perceptions of domestic stakeholders;
  4. Resources that are adequate to ensure that the ratings can be properly implemented and validated.

To generate such sets of governance indicators, development agencies should create a pool of funding (as they have done to assess the quality of Public Financial Management through PEFA assessments). Ideally, the actual task of designing and implementing the indicators should be given to an independent body – a university or independent think tank – to ensure wide credibility. This body could, in turn, be obliged to ensure transparency and accessibility of the resulting data. At least initially, existing governance indicators based on aggregations would most likely continue to be produced in parallel – and would, in fact, draw on the new primary data set(s).

Such an effort could include, or be combined with, sector and issue specific indicators, such as measuring the quality of the civil service, or of public investment spending. Specific indicators are needed to guide actual interventions, and to monitor progress in specific areas. This would help to support systematic and comparative lesson learning on many governance interventions – an aspect that has also been an important gap to date.

The international development community should act on these issues – and sooner rather than later – if it is serious about governance and its role in development processes. Building up data-sets of new indicators, and particularly the construction of time-series of such data, takes time – but it is an important investment to make.

Note: Together with partners from the University of Florida, ODI has been producing governance indicators for a small sample of countries. These indicators follow some of the principles outlined above – they reflect the views of a broad range of domestic stakeholders regarding a set of 36 questions (grouped around 6 principles). The indicators are based on in-country surveys of a range of 'well informed persons', including representatives of government, NGOs, media, business, academia, and others. For more information, see: http://www.odi.org.uk/WGA_governance/.

Useful resources on governance indicators: