The data privacy and security of Rohingya refugees in Bangladesh have reportedly been jeopardised by the UN Refugee Agency. In an exposé published on 15 June by Human Rights Watch (HRW), UNHCR stands accused of improperly collecting the Rohingya’s biometric information and later sharing it with the Myanmar government without the Rohingya’s consent. Refugees said they had been told to register to receive aid, but the risks of sharing their biometrics had not been discussed, and the possibility that this information would be shared with Myanmar was not mentioned.
The potential harm of sharing information with a regime that has a long history of manipulating registration systems to exclude and marginalise Rohingya populations is obvious. That biometrics are involved makes it worse. Unlike names or other personal information, biometrics are sticky – they’re not something you can change or escape.
HRW’s report has rightly triggered outcry, although UNHCR insists it followed proper procedures. Yet these issues are far from unique to the Rohingya crisis. As biometric registration of refugees has been rolled out in more and more places over the past decade, questionable practices around informed consent, data sharing and accountability have been a troublingly consistent feature.
Just how informed is consent?
A basic principle of any kind of data collection, informed consent means that people should be able to make a meaningful choice about whether or not to share their data. To do this, they need to understand how the information will be used and be confident that they won’t face negative consequences if they decline. This formula is complicated by extreme power imbalances – like those characterising the relationship between service providers and refugees. Here, gaining consent may need to involve extensive trust-building to be meaningful – it’s harder to say ‘no’ to something when the person asking for your data also has a say over whether you eat tomorrow.
In Bangladesh, refugees either didn’t understand how their data would be used, or felt they didn’t have a choice. Syrian refugees in Jordan – where biometric registration started in 2012 – reported similar experiences in recent interviews with HPG. Again, there was a startling lack of information available: as one Syrian explained, ‘They didn’t tell us what it was for, and we didn’t ask’. None of the people we spoke to knew of anyone who had refused to offer their biometrics because, as another put it, ‘I will give my information if it means assistance’.
Sloppy or outright coercive practices around informed consent have been documented elsewhere: in four out of five countries reviewed in a 2016 UNHCR internal audit, inadequate information was provided to refugees. In Ethiopia, refugees who didn’t provide their data were cut out of aid distributions, leading some to return to their country of origin.
Concerns around sharing biometrics are deeply linked with how that data might be used
The nature of registration as a joint exercise between UNHCR and host governments means that data frequently feeds into wider identification and surveillance systems. In Kenya, the biometric system was purposefully designed to crossmatch between national and humanitarian databases. According to HPG’s forthcoming research, UNHCR’s data is fully accessible to the Government of Jordan, which crosschecks it against its counter-terrorism data.
For many refugees, having their personal information fall into the hands of repressive regimes back home is a worst-case scenario. Yet the fact that this possibility is often a feature of registration processes – and the risks that this may involve – is rarely discussed with those affected.
Why does this all keep happening?
Partially, it’s a question of who biometrics are really for. Despite assertions that biometric IDs can make receiving aid more efficient and more dignified, not only does this not necessarily hold true (as HPG’s research in Jordan suggests), but it is rarely the main driver for roll-out. Instead, fraud reduction, efficiency savings, security requirements, or simply path dependence – ‘this is how we do things now’ – play a much more prominent role. These considerations are much more squarely aligned with the interests of aid agencies, donors, host governments and technology providers than with refugees.
This top-down imposition of biometrics as something done to crisis-affected people rather than for them is exacerbated by the wider challenge of accountability within the humanitarian system.
With little meaningful power or representation, refugees have time and again been shut out of decisions about how biometrics should be rolled out, what data gets shared with whom, what the risks are, and how they should best be mitigated. During registration processes, even the basic step of explaining what is going on does not appear to be a priority, let alone a wider consultative process.
Re-centring refugees in the process
It’s likely that biometrics are here to stay for the foreseeable future. This makes it all the more important to make changes now, so that the same problems don’t become more deeply ‘locked in’ to how things are done.
Organisations involved in biometric registration need to make sure their commitments to data responsibility make the jump from policy documents to practice. This means re-centring the process around the needs of refugees.
Refugees should be involved in deciding whether biometrics are appropriate at all – as is the case with cash transfer programming – rather than being provided with individual opt-outs once the process has already started. This may not always be possible as, increasingly, the use of biometrics in refugee registration is becoming a requirement of host governments. In such cases, approaches need to move away from ‘sensitisation’ and towards co-design, so that refugees have a meaningful say in ensuring systems respond to their own priorities.