Why social media law-making needs to be more youth inclusive

Written by Kathryn Nwajiaku-Dahou, Louise Shaxson, Aaron Bailey-Athias

Image credit: Group of men protesting against the Anti-Social Media Bill brought before the Nigerian Parliament, in Lagos, Nigeria, November 2019.

In an increasingly digital world, legislatures and governments across the globe are grappling with the challenge of how to effectively tackle mis/disinformation, hate speech and online harms. A barrage of new laws, some of which contravene constitutional provisions (as in France) and even international human rights law, has become increasingly widespread. The European Commission’s Digital Services Act also seeks to put in place an overarching digital regulatory framework.

In Germany, the government’s NetzDG (Network Enforcement) law has been heavily criticised for setting ‘a dangerous precedent’ by forcing companies to censor on the government’s behalf, despite subsequent efforts to strengthen user rights. Last June, the Brazilian Senate approved a ‘fake news’ bill which, although not yet enshrined in law, has drawn a backlash from activists across the political spectrum due to its focus on ‘traceability’ and users’ identity.

And in the UK, identity verification for social media accounts has gained traction, including a recent petition which MPs will debate in parliament, as the government tries to monitor online platforms more stringently.

There are two main problems with the current race to regulate the social media space. First is the threat that regulating online content, through more legislation, poses for human rights, freedom of speech and data privacy. Second, knee-jerk regulatory responses run the risk of being ineffective because they do not sufficiently reflect the views and perspectives of the main users: young people.

Young people lead on social media – involving them meaningfully and systematically in policy-making processes to address online harms and mis/disinformation could not only spearhead innovation but also produce policies that are more relevant and more effective.

The dangers of regulation for human rights

From a human rights perspective, the now popular proposal in the UK which would require identity verification to set up a social media account is problematic for three reasons.

First, while user anonymity can pave the way to online abuse and harm, it also enables freedom of expression in situations where the risks of being identified are high. Think of whistleblowers, but also members of vulnerable groups such as LGBTQI+ individuals or people living in contexts where residency rights are insecure and contested. Many online platforms flourished precisely because of user anonymity. While anonymity can of course create problems, it’s hard to imagine an online environment without it.

Second, requiring identification to engage on social media would restrict access solely to those who are documented. It would discriminate against people who, for whatever reason, don’t have or can’t obtain appropriate documentation. This includes refugees and migrants, many of whom already struggle to access basic services.

Third, poorly thought-through approaches to regulation around digital identity could solve one problem while opening up a host of others, particularly in contexts where internet freedoms are already curbed, media diversity is low, or state news is perceived as fake news. Regulation might be tempting, but in a fluid and fast-changing technological environment, rushing to regulate could stifle precisely those initiatives that might help address online harms while also protecting digital privacy.

While advocates of more regulation tend to put the spotlight on the problem of online harms and mis/disinformation, human rights defenders quite rightly put the spotlight on the creeping and insidious problem of data privacy which has received comparatively less attention. This cuts to the core of the question of how we tackle online harms without infringing on the right to freedom of opinion and expression.

Most social media users are unaware of the extent of the risks to their data privacy. Low levels of digital literacy, combined with weak or ‘porous’ data protection laws and few incentives for social media companies to share terms and conditions in local languages, mean there is little transparency over how personal data is gathered, used, stored and shared.

While governments appear unwilling to act, data privacy is a new corporate battleground: Apple and Google are currently changing their approaches to ensuring data privacy, in effect becoming the world’s most powerful privacy regulators, though social media companies are fighting back and some commentators are suspicious of the real reasons behind the tech giants’ calls for data privacy laws.

A human rights-informed approach to digital governance, as outlined in initiatives like the Christchurch Call, necessarily challenges governments and companies to ‘put their own houses in order’ first and to defend the fundamental right of internet users to privacy, rather than simply policing them. The question is whether we are content to let private global corporations, whose business model is premised on the ability to sell user data, battle it out on our behalf, or whether we as individuals need to engage more actively, purposefully bringing youth voices into the debate.

Regulation needs to be informed by social media’s main users: young people

Laws introduced to regulate online behaviour have failed to effectively engage (at least upstream) with the main users, most of whom are young and tend to be stigmatised as ‘instigators of violence and online harm’ rather than seen as part of the solution to the problem.

In 2019, over half of all internet users across the world were young people under the age of 34 – admittedly a highly diverse socio-economic and political category.

But over the pandemic, while the digital divide persists within and between low-, middle- and high-income countries, some evidence suggests that young people are more connected with government services than they were previously, and that they increasingly engage more directly with positive messaging and social issues online. There is in fact a long history of young people using tech in innovative ways for positive activism, working out for themselves how to build trust-based communities that link online and offline behaviours.

So if poorly designed regulation is problematic, then getting regulation right means involving those whom it will affect most – young people of diverse backgrounds living in different geographies, who use social media the most but who tend to be excluded from policy and decision-making about the future of digital spaces and how to address online harms.

Effective approaches to regulation require not just consulting young people but giving them agency: going to where they are having conversations about the issues that concern them most, and creating the conditions that enable them to build up the evidence base, resources and tools they need to combat online harms and mis/disinformation on social media.

Broad coalitions with youth to tackle the digital challenges of our time

Researchers and those invested in improving online experiences for all need to listen to and support the efforts of young people to articulate the challenges they face and craft their own narratives. This means a commitment to understanding what data they think is relevant and to collecting that data, not just data on their social media consumption patterns. Doing this will help to build young people’s analytical infrastructure and support their efforts to bring about positive social transformation. A good deal of work is already being done to mobilise youth voices around jobs and the world of work: we need a similar level of effort invested in listening to what young people themselves have to say about social media and how best to regulate it.