Unpacking the real value of data for public finance

Expert comment

Written by James Stewart

Image credit: Markus Spiske / Unsplash

There is no shortage of enthusiasm around the potential of better data to formulate, implement and improve fiscal policy.

Simpler flows of data promise improved controls to ensure that money goes where it should, while a greater range of tools and skills to analyse that data should lead to better insights into whether funds are having the desired impact. Bringing these together offers further opportunities to identify how money can be used more efficiently.

'Through digitalization, government can potentially conduct current fiscal policy more effectively — doing what we do now, but better — and perhaps before too long, design policy in new ways — doing things, that is, that we do not, and cannot, do now. They can have better information, build better systems, and design and implement better policies.' (International Monetary Fund)

While the promise is clear, the reality is rarely so simple.

The gap between the promise and realities of better data

Too often, we get caught up in apparent tensions around the architecture of information systems; for example, whether they should be centralised or decentralised.

Highly centralised systems provide an integrated picture, but they are inflexible, capture generic data, and often lag behind as they wait for people to report through tools that are not a necessary part of their day-to-day work. It is only natural for people to deprioritise tasks they see as non-critical bureaucracy, especially when those tasks are not easy to perform, even if this leaves colleagues in another part of the organisation stuck.

Distributed systems are more tailored to the specific needs of line ministries or local governments, and so can provide a more detailed picture. Individually, however, they do not let you zoom out to see wider connections and opportunities, or provide simple ways to see who is following the rules.

And whichever extreme is chosen, most systems have slow change cycles. They are designed as closed systems that absorb data defined in certain ways, rather than being optimised to respond to new opportunities or expectations.

The traps we fall into trying to get better data

Standard responses to these challenges often fall into two categories: data warehousing and a quest for interoperability.

Data warehousing is the work of regularly extracting data from day-to-day systems into a new system where broader analyses can take place. The appeal is clear, but in practice it is high maintenance and carries high overheads. It can be hard to bring a common structure to very diverse datasets, and there are many challenges in integrating with existing systems and keeping up to date as they change. Even once a warehouse is up and running, improving data quality means tracing every inaccuracy back to where it came from and effecting change in many parts of an organisation.
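To make that extraction overhead concrete, here is a minimal, hypothetical sketch of the kind of recurring extract job a warehouse depends on. The table names, column names and file paths are invented for illustration; the point is that every source system needs its own hand-written mapping onto the warehouse's common schema, and every mapping has to be maintained as the source changes.

```python
# A hypothetical nightly extract from one line-ministry system into a warehouse.
# All table, column and file names are illustrative, not from any real system.
import sqlite3

def extract_payments(source_path: str, warehouse_path: str) -> None:
    source = sqlite3.connect(source_path)
    warehouse = sqlite3.connect(warehouse_path)

    # Each source structures its data differently, so each extract needs its own
    # mapping onto the warehouse schema - this is where the maintenance burden
    # accumulates as source systems change.
    rows = source.execute(
        "SELECT payment_ref, supplier_name, amount_cents, paid_on "
        "FROM payments WHERE paid_on >= date('now', '-1 day')"
    ).fetchall()

    warehouse.executemany(
        "INSERT INTO fact_payments (source_ref, supplier, amount_cents, paid_date) "
        "VALUES (?, ?, ?, ?)",
        rows,
    )
    warehouse.commit()
    source.close()
    warehouse.close()
```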

Interoperability is about making sure that systems work together to the greatest degree possible, and that different systems can exchange data as and when needed with as little work as possible. It is an important topic and a common discussion in many fields (Public Digital prepared a guide on it for leaders in the UK’s National Health Service), but all too often the discussion operates in the abstract – ‘Do we have an interoperability strategy?’, ‘Is our new system interoperable?’, and so on – rather than focusing on the specific types of data that need to be exchanged (and to what end), or on how to change your technology (and vendors) to actually support the standards you need. Good examples of interoperability exist – the internet and the web being the ones we see most often – but most interoperability initiatives end in a lot of meetings and documents, and very little progress.

Breaking out of these traps

To close the gap between the promises and realities of better data, we must break out of these two traps – data warehousing and the quest for interoperability. But doing so will require us to think in new and different ways.

Here are five concepts that might help:

  1. Registers: Well-defined, well-maintained lists of core data (like a list of companies registered in a given jurisdiction) give us the essential reference points that other data can connect with. Such registers are increasingly recognised as a core component of any data strategy. They are a deceptively simple concept: easy to talk about, but harder to put in place, as they require effort to get their definitions right and to ensure that they work within clear boundaries, stay up to date, and can be trusted. That is why the UK's work on ‘registers’ used the term to mean the lists themselves, but referred to registrars and registries as the people and organisations responsible for making them work.

  2. Flow of data: While having clear homes for particular registers and processes is vital, it is equally important that the flow of data is given at least as much prominence. We tend to think of data in terms of where it lives – in databases or data warehouses – and how we protect those repositories. But where data is first created or first stored is rarely where it will have the most utility, so we need to think about how to make sure data flows to where it is needed. The value of a definitive list of registered companies is not that it exists somewhere; it is that people and tools can connect to it and pull it through for a wide variety of uses (see the first sketch after this list). For example, the eGov Foundation’s iFIX platform has helped data flow between a central department and local bodies in a way that lets the right people take timely, corrective actions.

  3. Common foundations: For expediency – and to build trust – we need common foundations provided at the right level of granularity. Traditional interoperability or data exchange mechanisms often try to solve all the problems in one go, which can make them hard to implement, or even embed approaches and technologies that hamper progress (listen to Jen Pahlka talk about service buses, an outmoded approach to interoperability that is still too often mandated). These foundations include common authentication and identity standards, so we can be clear where some data came from or who is asking for it; clear expectations of how data will be encrypted in transit; and common ways of letting people know when data may have changed. Conveniently, most of these building blocks already exist as well-established internet standards with mature open-source implementations. Widely used standards like OAuth 2.0, OpenID Connect and Security Assertion Markup Language (SAML) underpin many of our day-to-day experiences, from employer sign-in systems to logging into a website with a Google or Facebook account (see the second sketch after this list).

  4. Data as a service: To move beyond competing incentives for producers and consumers of data, we need to think in terms of data as a service. Data as a service means empowering teams whose role is to understand the downstream users of data (whether public servants, certain groups or others) and to ensure that upstream data collection or production supports that use. The best ‘data as a service’ teams do that by helping to ensure it is easier to produce high-quality, trustworthy data than not to, and by showing the value generated when data is able to flow. Paul Downey – who leads the Planning Data service in the UK’s Department for Levelling Up, Housing and Communities – expressed this in a recent blog post, where he talks about his team’s focus on supporting people who build planning tools by providing ‘data they can understand, use and trust enough to build into services to inform important decisions for their users’.

  5. Iteration: To get started and maintain momentum, we need to embrace iteration. We will never have perfect knowledge of what is needed today, much less in the next moment of crisis. During the Covid-19 pandemic, a clear pattern emerged: the governments best placed to respond digitally were those that had already invested in their digital teams and ways of working.
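To make the first two concepts concrete, here is a minimal, hypothetical sketch of a register acting as a reference point, with a downstream dataset pulling the authoritative details through rather than keeping its own copy. The identifiers, names and fields are invented for illustration and do not come from any real register.

```python
# A hypothetical company register and a downstream dataset that links to it.
# All identifiers and field names are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class CompanyEntry:
    company_id: str    # the stable identifier other datasets link against
    name: str
    jurisdiction: str

# A stand-in for a well-maintained register service.
company_register = {
    "C-000123": CompanyEntry("C-000123", "Example Construction Ltd", "England and Wales"),
}

# A payments record stores only the identifier...
payment = {"payment_ref": "P-0042", "company_id": "C-000123", "amount": 25_000}

# ...and pulls the authoritative details through from the register when needed,
# rather than keeping its own copy that can drift out of date.
supplier = company_register[payment["company_id"]]
print(f"Paid {payment['amount']} to {supplier.name} ({supplier.jurisdiction})")
```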
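And as a sketch of the ‘common foundations’ point, the snippet below shows an OAuth 2.0 client credentials exchange of the kind that could sit in front of a register API, so a consuming system can prove who it is before pulling data through. The token URL, client credentials and API endpoint are placeholders, not real services.

```python
# A hypothetical OAuth 2.0 client credentials flow (RFC 6749, section 4.4) used
# to call a register API. URLs and credentials are placeholders, not real services.
import requests

TOKEN_URL = "https://auth.example.gov/oauth2/token"        # hypothetical
REGISTER_API = "https://registers.example.gov/companies"   # hypothetical

def fetch_companies(client_id: str, client_secret: str) -> list:
    # Exchange the client credentials for a short-lived access token.
    token_response = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    token_response.raise_for_status()
    access_token = token_response.json()["access_token"]

    # Present the token as a standard bearer credential; because the pattern is
    # a shared standard, the same approach works across many systems and vendors.
    api_response = requests.get(
        REGISTER_API,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    api_response.raise_for_status()
    return api_response.json()
```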

Where to start

When we think about making technological or data changes in public finance, we often start with the big things, like a major overhaul of the chart of accounts and a full-scale Integrated Financial Management Information System replacement. Big changes can be necessary, but if we really want to keep getting value from data, we need to embed an iterative mindset. We do that by picking one simple place to start: one register or one foundational standard that can be implemented and would make a difference. We then move on to the next one and build from there.

Many of the concepts highlighted above, and this type of iterative thinking, are behind our upcoming webinar, which explores the concept of standards for fiscal data exchange. We have put this webinar together because we believe that by thinking more intentionally, starting small and iterating on purposeful innovations, we can truly identify better technologies and approaches for using data in public finance. Standards represent just one area where we can start doing this today.