Has AI ushered in an existential crisis of trust in democracy?

Expert comment

Written by Stephanie Diepeveen

Image: A data analyst using AI (Deemerwha studio/Shutterstock)

Disinformation and misinformation together constitute the most immediate global risk of the next two years, according to a recent survey of global policy-makers, private-sector leaders and risk experts. False information and conspiracy theories are a ‘global menace’.

Multiple media and advocacy bodies suggest that artificial intelligence (AI) could supercharge disinformation. The conjunction of generative AI (enabling more and better-targeted false information), political polarisation and dissatisfaction with democracy appears to be creating a perfect storm.

Some estimates suggest a 1,000% increase in AI-generated false articles between May and December 2023. Government-sponsored information campaigns are being used to shape online debate: the number of governments deploying actors to manipulate online discussions has doubled in the last 10 years. Governments have also used misinformation as a justification for restricting internet access, further constraining people’s ability to reach open and trustworthy information. This is all the more concerning this year, as one-third of the world’s population goes to the polls.

Why is generative AI such a concern? Scholars point out that the presence of false information is not in itself enough to determine what people consume, or how it influences their views or decisions. The reasons people consume and engage with information are context-specific, shaped by values, experiences and interactions at individual and community levels. Labelling information as false or AI-generated is not necessarily sufficient, nor aligned with the factors that actually make information compelling.

The use of generative AI in politics does not necessarily equate to a decline in the trustworthiness of the public sphere. In fact, the uses and impact of technological innovations are difficult to predict. But this does not mean there is no cause for concern. Rather, it compels a rethink of the bigger challenges facing democratic processes and institutions. AI-generated disinformation is a risk because it accentuates and complicates wider challenges to citizens’ trust and engagement in democratic processes.

While most people still seem to want to live in a democracy, mistrust in the ability and intentions of democratic governments and leaders to act in citizens’ best interests is growing. A 30-country survey commissioned by the Open Society Foundations found that citizens do not trust politicians to work in their best interests and, in some cases, doubt the efficacy of existing laws to protect them. Climate and geopolitical crises, from Russia’s invasion of Ukraine to Israel’s invasion of Gaza, accentuate the challenges democracies face in delivering concrete outcomes for their citizens.

Global inequalities add another layer of challenge to democratic processes. Growing inequality tends to correlate with declining political trust. The 2008 financial crisis and the Covid-19 pandemic both appear to have deepened inequality globally, across income, wealth, gender and ecological measures. Inequalities extend to power over AI innovation: countries are highly skewed in their investment in, innovation with and use of AI, as well as in investment in moderating harmful content. Tech companies pay less attention to the majority languages and contexts of the Global South.

Mitigating the risk

Calls for action to mitigate the risks of AI-generated disinformation fall into two broad areas. First, there is a focus on the role of tech companies: the development and enforcement of policies on election-related content, both by firms that develop and deploy AI systems and by the social media platforms where content is shared. OpenAI recently published policies for the use of its applications in elections, while concern persists about the commitment and resources social media platforms devote to moderating AI-generated content.

Second, there are appeals to governments and inter-governmental bodies to better regulate AI, from the EU’s AI Act to the US Executive Order to the G7’s AI Principles and Code of Conduct. Some tech leaders, too (e.g. OpenAI), have called on governments to set guidelines for AI innovation, putting the onus on governments to define the limits of their activities.

Both are important for setting guardrails on the scale and scope of AI-generated content in relation to democratic processes. However, both focus on disinformation per se and on managing the visibility and spread of AI-generated content, not on the underlying factors behind citizens’ mistrust of and disengagement from democracies, which predate generative AI. The potential for AI to ‘supercharge’ disinformation is in turn ‘supercharged’ by these underlying challenges to democratic trust and delivery.

Countering the threat of AI-generated disinformation to the functioning and outcomes of democratic elections requires addressing deeper issues of mistrust and inequality in democracy, and refocusing on the experience, agency and wellbeing of citizens. Democracies that do not deliver give citizens little reason to engage. Economic and political structures that make it difficult for citizens to gain understanding and control over their economic and political lives foster disengagement and mistrust.

There are steps that tech firms and governments can take to begin mitigating these challenges, or at least to avoid accentuating them with technology: for example, greater transparency in content moderation across country and language contexts. But mitigating the risk of AI-generated disinformation also means looking at the wider experiences that shape citizens’ trust, agency and engagement with democracy.
