CSOs Call for a Full Integration of Human Rights in the Deployment of Digital Identification Systems

TECHNOLOGY AND HUMAN RIGHTS

The Principles on Identification for Sustainable Development (the Principles), whose creation was facilitated by the World Bank’s Identification for Development (ID4D) initiative in 2017, represent one of the few attempts at global standard-setting for the development of digital identification systems. They are endorsed by many global and regional organizations (the “Endorsing Organizations”) that are active in funding, designing, developing, and deploying digital identification programs across the world, especially in developing and least developed countries.

Digital identification programs are emerging across the world in various forms and will have long-term impacts on the lives and rights of the individuals enrolled in them. Engagement with civil society can help ensure that the lived experiences of people affected by these programs inform the Principles and the practices of International Organizations.

Access Now, Namati, and the Open Society Justice Initiative co-organized a Civil Society Organization (CSO) consultation in August 2020 that brought together over 60 civil society organizations from across the world for dialogue with the World Bank’s ID4D Initiative and Endorsing Organizations. The consultation occurred alongside the first review and revision of the Principles, which has been led by the Endorsing Organizations during 2020. 

The consultation provided a platform for civil society feedback on revisions to the Principles, as well as for dialogue around the roles of International Organizations (IOs) and Civil Society Organizations in developing rights-respecting digital identification programs.

This new civil society-drafted report presents a summary of the top-level comments and discussions that took place in the meeting, including recommendations such as: 

  1. There is an urgent need for human rights criteria to be recognized as a tool for evaluation and oversight of existing and proposed digital identification systems, including throughout the Principles document;
  2. Endorsing Organizations should commit to applying the Principles in practice, including an affirmation that their support will extend only to identification programs that align with the Principles;
  3. CSOs need to be formally recognized as partners with governments and corporations in designing and implementing digital identification systems, including greater country-level engagement with CSOs from the earliest stages of potential digital identification projects through to monitoring ongoing implementation; and
  4. Digital identification systems across the globe are already being deployed in ways that enable repression through enhanced censorship, exclusion, and surveillance, but centering transparent and democratic processes as drivers of the development and deployment of these systems can mitigate these and other risks.

Following the consultation, and in line with this new report, we welcome the opportunity to further integrate the principles of the Universal Declaration of Human Rights and other sources of human rights in international law into the Principles on Identification and into the design, deployment, and monitoring of digital identification systems in practice. We encourage the establishment of permanent and formal structures for engaging civil society organizations in global and national-level processes related to digital identification, in order to ensure that identification technologies are used in service of human agency and dignity and to prevent further harms to the exercise of fundamental rights.

We call on United Nations and regional human rights mechanisms, including the High Commissioner for Human Rights, treaty bodies, and Special Procedures, to take up the severe human rights risks posed by digital identification systems as an urgent agenda item under their respective mandates.

We welcome further dialogue and engagement with the World Bank’s ID4D Initiative and other Endorsing Organizations and promoters of digital identification systems in order to ensure oversight and guidance towards human rights-aligned implementation of those systems.

This post was originally published as a press release on December 17, 2020.

  1. Access Now
  2. AfroLeadership
  3. Asociación por los Derechos Civiles (ADC)
  4. Collaboration on International ICT Policy for East and Southern Africa (CIPESA)
  5. Derechos Digitales
  6. Development and Justice Initiative 
  7. Digital Welfare State and Human Rights Project, Center for Human Rights and Global Justice
  8. Haki na Sheria Initiative 
  9. Human Rights Advocacy and Research Foundation (HRF)
  10. Myanmar Centre for Responsible Business (MCRB) 
  11. Namati

Statements of the Digital Welfare State & Human Rights Project do not purport to represent the views of NYU or the Center, if any.

Digital Paternalism: A Recap of our Conversation about Australia’s Cashless Debit Card with Eve Vincent

TECHNOLOGY & HUMAN RIGHTS

On November 23, 2020, the Center for Human Rights and Global Justice’s Digital Welfare State and Human Rights Project hosted the third virtual conversation in its series “Transformer States: A Conversation Series on Digital Government and Human Rights.” Christiaan van Veen and Victoria Adelmant interviewed Eve Vincent, senior lecturer in the Department of Anthropology at Macquarie University and author of a crucial report on the lived experiences of one of the first Cashless Debit Card trials in Ceduna, South Australia.

The Cashless Debit Card is a debit card which is currently used in parts of Australia to deliver benefit income to welfare recipients. Vitally, it is a tool of compulsory income management: the card “quarantines” 80% of a recipient’s payment, preventing this 80% from being withdrawn as cash and blocking attempted purchases of alcohol or gambling products. It is similar to, and intensifies, a previous scheme of debit card-based income management, known as the “Basics Card.” This earlier card was introduced after a 2007 report into child sexual abuse in indigenous communities in Australia’s Northern Territory which identified alcoholism, substance abuse, and gambling as major causes of such abuse. One of the measures taken was the requirement that indigenous communities’ benefit income be received on a Basics Card which quarantined 50% of benefit payments. The Basics Card was later extended to non-indigenous welfare recipients, but it remained disproportionately targeted at indigenous communities.

Following a 2014 report by mining magnate Andrew Forrest on inequality between indigenous and non-indigenous groups in Australia, the government launched the Cashless Debit Card to gradually replace the Basics Card. The Cashless Debit Card would quarantine 80% of benefit income on the card, and the card would block spending where alcohol is sold or where gambling takes place. Initial trials were targeted, again, at remote indigenous areas. The communities in the first trials were presented as parasitic on the welfare state and in crisis with regard to alcohol abuse, assault, and gambling. It was argued that drastic intervention was warranted: the government should step in to take care of these communities as they were unable to look after themselves. Income management would assist in this paternalistic intervention, fostering responsibility and curbing alcoholism and gambling by blocking such purchases. Many of Eve’s research participants found these justifications offensive and infantilizing. The Cashless Debit Card is now being trialed in more populous areas with more non-indigenous people, and the narrative has shifted: justifications for cards for non-indigenous people have focused more on the need to teach financial literacy and budgeting skills.

Beyond the humiliating underlying stereotypes, the Cashless Debit Card itself leaves cardholders feeling stigmatized. While the non-acceptance of Basics Cards at certain shops had led to prominent “Basics Card not accepted here” signs, the Cashless Debit Card was intended to be more subtle. It is integrated with EFTPOS technology, meaning it can theoretically be used in any shop with one of these ubiquitous card-reading devices. EFTPOS terminals in casinos or pubs are blocked, but these establishments can arrange with the government to have some discretion: a pub can arrange to allow Cashless Debit Card-holders to pay for food but not alcohol, for example, thereby not excluding them entirely. Despite this purported subtlety, individuals reported feeling anxious about using the card as the technology proved unreliable and inconsistent, accepted one day but not the next. When the card was declined, sometimes seemingly at random, this was deeply humiliating. Card-holders would have to gather their shopping and return it to the shelves under the judging gaze of others, potentially of people they know.
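For readers who think in code, the card’s restriction logic can be pictured as a small set of rules. The sketch below is purely illustrative: the 80/20 split and the food-but-not-alcohol exemption come from the scheme as described above, but the venue and product categories are hypothetical simplifications, and the real payment infrastructure is far more complex than a category lookup.

```python
# Illustrative sketch of the Cashless Debit Card's restriction logic.
# Venue/product categories are hypothetical; only the 80/20 split and the
# exemption mechanism reflect the scheme as described in the text above.

QUARANTINED_SHARE = 0.80                       # locked to the card, not withdrawable as cash
BLOCKED_VENUES = {"pub", "casino", "bottle_shop"}
RESTRICTED_PRODUCTS = {"alcohol", "gambling"}

def split_payment(benefit_payment: float) -> tuple[float, float]:
    """Split a benefit payment into a quarantined (card-only) portion
    and a freely withdrawable cash portion."""
    quarantined = round(benefit_payment * QUARANTINED_SHARE, 2)
    return quarantined, round(benefit_payment - quarantined, 2)

def purchase_allowed(venue: str, product: str,
                     venue_exemptions: set[str] | None = None) -> bool:
    """Approve or decline a card transaction. Restricted products are always
    declined; blocked venues are declined unless they have arranged an
    exemption for that product (e.g. a pub allowed to sell food)."""
    if product in RESTRICTED_PRODUCTS:
        return False
    if venue in BLOCKED_VENUES:
        return product in (venue_exemptions or set())
    return True

print(split_payment(500.00))                         # (400.0, 100.0)
print(purchase_allowed("pub", "food", {"food"}))     # True: exempted venue
print(purchase_allowed("pub", "alcohol", {"food"}))  # False: always blocked
print(purchase_allowed("supermarket", "groceries"))  # True: ordinary purchase
```

In this framing, the inconsistent declines that interviewees described would be failures in the authorization layer rather than in the rules themselves, which is precisely why they felt so arbitrary to card-holders.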

Separately, some card-holders had to use public computers to log into their accounts to check their cards’ balance, highlighting the reliance of such schemes on strong digital infrastructure and on individuals’ access to connected devices. But some Cashless Debit Card-holders were quite positive about the card: there is, of course, a diversity of opinions and experiences. Some found that the card’s fortnightly cycle had helped them with budgeting and thought the app upon which they could check their balance was a user-friendly and effective budgeting tool.

The Cashless Debit Card scheme is run by a company named Indue, continuing decades-long trends of outsourcing welfare delivery. Many participants in Eve’s research spoke positively of their experience with Indue, finding staff on helplines to be helpful and efficient. But many objected in principle to the card’s privatized administration and to profits being made from their poverty. The Cashless Debit Card costs AUD 10,000 per participant per year to administer: many card-holders were outraged that such an expense is outlaid to try to control how they spend their very meager income. Recently, the four biggest banks in Australia and the government-owned Australia Post have been in talks about taking over the management of the scheme. This raises an interesting parallel with South Africa, where social grants were originally paid through a private provider but, following a scandal regarding the tender process and the financial exploitation of poor grant recipients, public providers stepped in again.

As an anthropologist, Eve’s research takes as a starting point the importance of listening to the people affected and foregrounding their lived experience, resonating with a common approach to human rights research. Interestingly, many Cashless Debit Card-holders used the language of human rights to express indignation about the scheme and what it represents. Reminiscent of Sally Engle Merry’s work on the ‘vernacularization’ of human rights, card-holders invoked human rights in a manner quite specific to the Aboriginal Australian context and history. Eve’s research participants often compared the Cashless Debit Card trials to the past, when the wages of indigenous peoples had been stolen and their access to money was tightly controlled. They referred to that time as the “time before rights”; before legislative equal citizen rights had been gained. Today, they argued, now that indigenous communities have rights, this kind of intervention and control of communities by the government is unacceptable. As one of Eve’s research participants put it, the government has through the Cashless Debit Card “taken away our rights.”

December 4, 2020. Victoria Adelmant, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

“We are not Data Points”: Highlights from our Conversation on the Kenyan Digital ID System

TECHNOLOGY AND HUMAN RIGHTS

On October 28, 2020, the Digital Welfare State and Human Rights Project held a virtual conversation with Nanjala Nyabola for the second in the Transformer States Conversation Series on the topic of inclusion and exclusion in Kenya’s digital ID system. Nanjala is a writer, political analyst, and activist based in Nairobi and author of Digital Democracy, Analogue Politics: How the Internet Era is Transforming Politics in Kenya. Through an energetic and enlightening conversation with Christiaan van Veen and Victoria Adelmant, Nanjala explained the historical context of the Huduma Namba system, Kenya’s latest digital ID scheme, and pointed out a number of pressing concerns with the project.

Kenya’s new digital identity system, known as Huduma Namba, was announced in 2018 and involved the establishment of the Kenyan National Integrated Identity Management System (NIIMS). According to its enabling legislation, NIIMS is intended to be a comprehensive national registration and identity system to promote efficient delivery of public services, by consolidating and harmonizing the law on the registration of persons. This ‘master database’ would, according to the government, become the ‘single source of truth’ on Kenyans. A “Huduma Namba” (a unique identifying number) and “Huduma Card” (a biometric identity card) would be assigned to Kenyan citizens and residents.

Huduma Namba is the latest in a long series of biometric identity systems in Kenya that began with colonization. Kenya has had a form of mandatory identification since the British colonial government’s Native Registration Ordinance of 1915 established the Kipande system, which required black men over the age of 16 to be fingerprinted and to carry identification that effectively restricted their freedom of movement and association. Non-compliance carried the threat of criminal punishment and forced labor. Rather than repealing this “cornerstone of the colonial project” upon independence, the government instead embraced and further formalized the Kipande system, making it mandatory for all men over 18. New ID systems were introduced, but always maintained several core elements: biometrics, the collection of ethnic data, and punishment. ID remained necessary for accessing certain buildings, opening bank accounts, buying or selling property, and moving freely both within and out of Kenya. The fact that women were not included in the national ID system until 1978 further reveals the exclusionary nature of such systems, in this instance along gendered lines.

While, in theory, these ID systems have been mandatory, such that anyone should be able to demand and receive an ID, in practice, Kenyans from border communities must be “vetted” before receiving their ID. They must return to their paternal family village to be “vetted” by the local chief as to their community membership. Given the contested nature of Kenya’s borders, many Kenyans who may be ethnically Somali or Maasai can face significant difficulty in proving they are “Kenyan” and obtaining the necessary ID. The vetting process can also significantly delay applications. Nanjala explained that some ethnically Somali Kenyans who struggled to gain access to legal identification, and were therefore excluded from basic entitlements, had resorted to registering as refugees in order to access services.

Given the history of legal identity systems in Kenya, Huduma Namba may offer a promising break from the past and may serve to better include marginalized groups. Huduma Namba is supposed to give a “360 degree legal identity” to Kenyan citizens and residents; it includes women and children; and it is more than just a legal identity: it is also a form of entitlement. For example, Huduma Namba has been said to provide the enabling conditions for universal healthcare, to “facilitate adequate resource allocation” and to “enable citizens to get government services”. However, Nanjala also emphasized that Huduma Namba does not address any of the pre-existing exclusions experienced by certain Kenyans, especially those from border communities. Nanjala noted that the Huduma Namba is “layered over a history of exclusion,” and preserves many of the discriminatory practices experienced under previous systems. As residents must present existing identity documents in order to obtain a Huduma Card, vetting practices will still hinder border communities’ access to the new system, and thereby hinder access to the services to which Huduma Namba will be tied.

Over the course of the conversation, Nanjala drew on her rich knowledge and experience to highlight what she sees as a number of ‘red flags’ raised by the Huduma Namba project. These point to the need to properly examine the true motivations behind such digital ID schemes and the actors who promote them. In brief, these are:

  • The false promise of the efficiency argument: that “efficient” technological solutions and data will fix social problems. This argument ignores the social, political, and historical context and complexities of governing a state, and merely perpetuates the ‘McKinseyfication’ of government (the increasing pervasiveness of management consultancy in development). Further, there is little evidence that such solutions actually work, as was seen with the Integrated Financial Management Information System (IFMIS) rolled out in Kenya in 2013. Such arguments distract attention from examining why problems such as poor infrastructure, healthcare, or education systems have arisen or have not been addressed. Nanjala noted that the ongoing COVID-19 pandemic has made the risks of this clear: while the Kenyan government has spent over $6 million on the Huduma Namba system, the country has only 518 ICU beds.
  • The fact that the government is relying on threats and intimidation to “encourage” citizens to register for Huduma Namba. Nanjala posited that if a government is offering citizens a real service or benefit, it should be able to articulate a strong case for adoption such that citizens will see the benefit and willingly sign up.
  • The lack of clear information and analysis available to citizens or researchers, including any cost-benefit analysis or clear articulation of the why and how of the Huduma Namba system.
  • The complex political motivations behind the government’s actions, which hinge primarily on the current administration’s campaign promises and eye to the next election, rather than centering longer-term benefits to the population.
  • The risks associated with unchecked data collection, which include improper use and monetization of citizens’ data by government.

While much of the conversation addressed clear concerns with the Huduma Namba project, Nanjala also discussed how human rights law, movements, and actors can help bring about more positive developments in this area. Firstly, in a case brought by the Kenya Human Rights Commission, the Kenya National Commission on Human Rights, and the Nubian Rights Forum, the Kenyan High Court held this year that the Huduma Namba scheme could not proceed without appropriate data protection and privacy safeguards: an inspiring example of the effectiveness of grassroots activism and rights-based litigation.

Further, this case provided an example of how human rights frameworks can enable transnational conversations about rights issues. Nanjala reminded us to question why the UK can vote to avoid digital ID systems while British companies simultaneously deploy digital ID technologies in the developing world; that is, why digital ID might be seen as good enough for the colonized, but not for the colonizers. And as digital ID systems are being widely promulgated by the World Bank throughout the Global South, Nanjala pointed to the successful south-south collaboration and knowledge exchange between Indian and Kenyan activists, lawyers, and scholars in relation to India’s widely criticized digital ID system, Aadhaar. By learning from the Indian experience, Kenyan organizations were able to push back more effectively against some of the particular concerns with Huduma Namba. Looking at the severe harms that have arisen from the centralized biometric system in India can also help demonstrate the risks of such schemes.

Digital ID systems risk reducing humanity to mere data points and, to the extent that they do so, should be resisted. We are not just data points, and considering data as the “new” gold or oil positions our identities as resources to be exploited by companies and governments as they see fit. Nanjala explained that the point of government is not to oversimplify or exploit the human experience, but rather to leverage the resources that government collects to maximize the human experience of its residents. In the context of ever-increasing intrusions into privacy cloaked in claims of making life “easier”, Nanjala’s comments and critique provided a timely reminder to focus on the humans at the center of ongoing debates about our digital lives, identities, and rights.

Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.

Silencing and Stigmatizing the Disabled Through Social Media Monitoring

TECHNOLOGY & HUMAN RIGHTS

In 2019, the United States’ Social Security program comprised 23% of the federal budget. Apart from retirement benefits, the program provides Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI), disability benefits for individuals unable to work. A multimillion-dollar disability fraud case in 2014 prompted the Social Security Administration to evaluate its controls for identifying and preventing disability fraud. The review found that social media played a ‘critical role’ in this case, “as disability claimants were seen in photos on their personal accounts, riding on jet skis, performing physical stunts in karate studios, and driving motorcycles”. Although Social Security disability fraud is rare, the Social Security Administration has since adopted social media monitoring tools that use social media posts as a factor in determining whether disability fraud is being committed. Although human rights advocates have examined how such digitally enabled fraud detection tools violate privacy rights, few have explored the other human rights violations resulting from new digital tools employed by governments in the fight against benefit fraud.

To help fill this gap, this summer I conducted interviews with disabled individuals applying for and receiving Social Security disability benefits, whose experiences are largely invisible in society. From these interviews, it became clear that automated tools such as social media monitoring perpetuate the stigmatization of disabled people. Interviewees reported that, when aware of being monitored on social media, they felt compelled to modify their behavior to fit the stigma associated with how disabled people should look and behave. These behavior modifications prevent disabled individuals from integrating into society and accessing services necessary to their survival.

Since the creation of social benefits, disabled people have been stigmatized in society, oftentimes being viewed as either incapable or unwilling to work. Those who work are perceived as incapable employees, while those who are unable to work are viewed as lazy. Social media monitoring is the product of that stigma, as it relies on assumptions about how a disabled person should look and act. One individual I interviewed recounted that when they sought advice on the application process, people told them, “You can never post anything on social media of you having fun ever. Don’t post pictures of you smiling, not until after you are approved and even then, you have to make sure you’re careful and keep it on private.” Being unable to smile or outwardly express happiness reflects how family members and professionals underestimate disabled individuals’ quality of life. This underestimation can lead to the assumption that “real” disabled people have a poor quality of life and are unable to be happy.

The social media monitoring tool’s methodology relies on potentially inaccurate data because social media does not give a comprehensive view into a person’s life. People typically present an exaggerated, positive lens on their lives on social media which glosses over more difficult elements. Schwartz and Halegoua describe this perception as the “spatial self”, which refers to how individuals “document, archive, and display their experience and/or mobility within space and place in order to represent or perform aspects of their identity to others.” Scholars of social media activity have published numerous studies on how people use images, videos, status updates, and comments on social media to present themselves in a very curated way.

Contrary to the positive spin most individuals put on their social media, disabled individuals feel compelled to “curate” their social media activity in a way that presents them as weak and incapable, to fit the narrative of who deserves disability benefits. For them, receiving disability benefits is crucial to survive and pay for basic necessities.

The individuals I interviewed shared how such surveillance tools not only modify their behavior but also prevent them from exercising a whole range of human rights through social media. These rights are essential for all people, but particularly for disabled individuals, because the silencing of their voices strips away their ability to advocate for their community and form social relationships. Although social media offers avenues for socialization and political engagement to all users, it opens up especially significant opportunities for disabled individuals. Participants expressed that without social media they would be unable to form these relationships offline, where accommodations for their disability do not exist. Disabled individuals greatly value sharing on social media as the medium enables them to highlight aspects of their identity beyond being disabled. One individual told me how important social media is for socializing, particularly during the Covid-19 pandemic: “I use Facebook mostly as a method of socializing especially right now with the pandemic going on, and occasionally political engagement.” Participants also described feeling the need to modify their behavior on social media, with one saying, “I don’t think anybody feels good being monitored all the time and that’s essentially what I feel like now post-disability. I can’t have fun or it will be taken away.” This is fundamentally a human rights issue.

These human rights issues include equality in social life and the ability to participate in the broader community online. In the long term, these inequalities can harm disabled people’s human rights, as their voices and experiences are not taken into account by people outside the disability community. Reports on the disability community broadly agree that the exclusion of disabled people and their input undermines their well-being. Ignoring or silencing the voices of disabled people prevents them from advocating for themselves and participating in decisions involving their lives, making them vulnerable to disability discrimination, exclusion, violence, poverty, and untreated health problems. For example, a participant I interviewed shared how the process reinforces disability discrimination through behavior modification:

There was no room for me to focus on anything I could still do. Because the disability process is exactly that, it’s finding out what you can’t do. You have to prove that your life sucks. That adds to the disability shame and stigma too. So anyways, dehumanizing.

In addition to the social and economic rights mentioned above, social media monitoring also impacts the enjoyment of civil and political rights for disabled individuals applying for and receiving Social Security disability benefits. Richards and Hartzog write, “Trust within information relationships is critical for free expression and a precursor to many kinds of political engagement.” They highlight how the Internet and social media have been used both for access to political information and for political engagement, which has a large impact on politics in general. Participants revealed to me that they used social media as a primary method of engaging in activism and contributing to political thought. The individuals I interviewed shared that they use social media to engage with political representatives on disability-related legislation and to bring disability-related issues to their representatives’ attention. By restricting freedom of expression, social media monitoring can shut disabled individuals out of the political sphere and prevent them from exercising other civil and political rights.

I am a disabled person who recently qualified for disability benefits, so I personally understand this pressure to prove I deserve the benefits and accommodations allocated to people who are “actually” disabled. Social media monitoring perpetuates the harmful narrative that disabled individuals applying for and receiving disability benefits need to prove their eligibility by modifying their behavior to fit disability stereotypes. This behavior modification restricts our ability to form meaningful relationships, push back against disability stigma, and advocate for ourselves through political engagement. As social media monitoring pushes us off of social media platforms, our voices are silenced, and this exclusion leads to further social inequalities. As disability rights activism continues to transform in the United States, I hope that this research will inspire future studies into disability rights, experiences of applying for and receiving SSI and SSDI, and how they may intersect with human rights beyond privacy rights.

October 29, 2020. Sarah Tucker, Columbia University Human Rights graduate program. She uses her experiences as a disabled woman working in tech to advocate for the Disability community.

Digital Identification and Inclusionary Delusion in West Africa

TECHNOLOGY & HUMAN RIGHTS

Over 1 billion persons worldwide have been categorized as invisible, of whom about 437 million are reported to be in sub-Saharan Africa. In West Africa alone, the World Bank has identified a huge “identification gap,” and different identification projects are underway to identify millions of invisible West Africans.[1] These individuals are regarded as invisible not because they are unrecognizable or non-existent, but because they do not fit a certain measure of visibility that matches the existing or new database(s) of an identifying institution[2], such as the State or international bodies.

One existing digital identification project in West Africa is the West Africa Unique Identification for Regional Integration and Inclusion (WURI) program initiated by the World Bank under its Identification for Development initiative. The WURI program is to serve as an umbrella under which West African States can collaborate with the Economic Community of West African States (ECOWAS) to design and build a digital identification system, financed by the World Bank, that would create foundational IDs (fID)[3] for all persons in the ECOWAS region.[4] Many West African States that have had past failed attempts at digitizing their identification systems have embraced assistance via WURI. The goal of WURI is to enable access to services for millions of people and ensure “mutual recognition of identities” across countries. The promise of digital identification is that it will facilitate development by promoting regional integration, security, social protection of aid beneficiaries, financial inclusion, reduction of poverty and corruption, and healthcare insurance and delivery, and by acting as a stepping stone to an integrated digital economy in West Africa. This way, millions of invisible individuals would become visible to the state and become financially, politically, and socially included.

Nevertheless, WURI’s outlook and development agencies’ reliance on digital IDs reflect techno-solutionism: a reliance on technologies as the approach to dealing with institutional challenges and developmental goals in West Africa. This reliance on digital technologies does not address some of the major root causes of developmental delays in these countries and may instead worsen the state of things by excluding the many people who are either unable to be identified or are excluded by virtue of technological failures. This exclusion emerges in a number of ways, including through the service-based structure and/or mandatory nature of many digital identification projects, which adopt a stance of exclusion first, inclusion later. This means that where access to services and infrastructures, such as opening a bank account, registering SIM cards, getting healthcare, or receiving government aid and benefits, is made subject to registration and possession of a national ID card or unique identification number (UIN), individuals are excluded unless they register for and possess the national ID card or UIN.

There are three contexts in which exclusion may arise. Firstly, an individual may be unable to register for an fID. For instance, in Kenya, many individuals without identity verification documents like birth certificates were excluded from the registration process for the country’s fID, the Huduma Namba. A second context arises where an individual is unable to obtain an fID card or unique identification number (UIN) after registration. This is the case in Nigeria, where the National Identity Management Commission has been unable to deliver ID cards to the majority of those who have registered under the identity program. The risk of exclusion may increase in Nigeria when the government conditions access to services on possession of an fID card or UIN.

A third scenario involves the inability of an individual to access infrastructures after obtaining an fID card or UIN, due to the breakdown or malfunctioning of the identifying institution’s authentication technology. In Tanzania, for example, although some individuals have the fID card or UIN, they are unable to proceed with their SIM registration process due to breakdowns of the data storage systems. There are also numerous reports of people not getting access to services in India because of technology failures. This leaves a large group of individuals vulnerable, particularly where the fID is required to access key services such as SIM card registration. An unpublished 2018 poll carried out in Côte d’Ivoire reveals that over 65% of those who registered for the National ID used it to apply for SIM card services and about 23% for financial services.[5]

The mandatory or service-based model of most identification systems in West Africa takes away individuals’ powers or rights of access to, and control of, resources and identity, and confers them on the State and private institutions, thereby raising human rights concerns for those who are unable to fit the criteria for registration and identification. Thus, a person who would ordinarily move around freely, shop at a grocery store, open a bank account, or receive healthcare from a hospital can, upon commencement of mandatory use of the fID, only do so through possession of the fID card or UIN. In Nigeria, for instance, the new national computerized identity card is equipped with a microprocessor designed to host and store multiple e-services and applications, like biometric e-ID, electronic ID, payment application, and travel document, and to serve as the national identity card of individuals. A Thales publication also states that in a second phase for the Nigerian fID, driver’s license, eVoting, eHealth, or eTransport applications are to be added to the cards. This is a long list of e-services for a country where only about 46% of the population is reported to have access to the internet. A person who loses this ID card or is unable to provide the UIN that digitally represents them would potentially be excluded from all the services and infrastructures to which the fID card or UIN serves as a gateway. This exclusion risk is intensified by the fact that identifying institutions in remote or local areas may lack authentication technologies or an electronic connection to the ID database to verify the existence of individuals at all times they seek to be identified, make a payment, receive healthcare, or travel.

It is important to note that exclusion does not stem only from mandatory fID systems or voluntary but service-integrated ID systems. There are also risks with voluntary ID systems where adequate measures are not taken to protect the data and interests of all those who are registered. Adequate data storage facilities, data protection designs, and data privacy regulation to protect individuals’ data are required; otherwise, individuals face increased risks of identity theft, fraud, and cybercrime, which would exclude and shut them off from fundamental services and infrastructures.

The history of political instability, violence and extremism, ethnic and religious conflicts, and disregard for the rule of law in many West African countries also heightens the risk of exclusion of individuals. Different instances of this abound, such as religious extremism, insurgencies, and armed conflicts in Northern Nigeria, civilian attacks and unrest in some communities in Burkina Faso, crises and terrorist attacks in Mali, election violence, and military intervention in State governance. An OECD report records over 3,317 violent events in West Africa between 2011 and 2019, with fatalities exceeding 11,911 over that period. A UN report also puts the number of deaths in Burkina Faso in 2019 at over 1,800, with over 25,000 persons displaced in the same year. This instability can act as a barrier to registration for an fID and lead to exclusion where certain groups of persons are targeted and profiled by state and/or non-state (illegal) actors.

In addition to cases where registration is mandatory or where individuals are highly dependent on the infrastructures and services they wish to access, there may also be situations where people opt to rely less on the fID, or decide not to register at all, due to worries about surveillance, identity theft, or targeted disciplinary control, thereby excluding themselves from resources they would ordinarily have been able to access. In Nigeria, only about 20% of the population is reported to have registered for the National Identification Number (NIN) (this was about 6% in 2017). Similarly, though implementation of WURI program objectives in Guinea and Côte d’Ivoire commenced in 2018, registration and identification output in both countries remains marginal to date.

World Bank findings and lessons from Phase I reveal that digital identification can exacerbate exclusion and marginalization, while diminishing privacy and control over data, despite the benefits it may carry. Some of the challenges identified by the World Bank resonate with the major concerns listed here, including risks of surveillance, discrimination, inequality, distrust between the State and individuals, and legal, political, and historical differences among countries. The solutions proposed under the WURI program objectives to address these problems – consultations, dialogues, ethnographic studies, provision of additional financing and capacity – are laudable but insufficient to deal with the root causes. On the contrary, the solutions offered might reveal the inadequacies of a digitized State in West Africa, where a large share of West Africans are digitally illiterate, lack the means to access digital platforms, or operate largely in the informal sector.

Practically, the task of addressing the root causes of most of the problems mentioned above, particularly the major ones involving political instability, institutional inadequacies, corruption, conflicts, and capacity building, is an arduous one which may require a more domestic, grassroots, bottom-up approach. However, the solution to these challenges is either unknown, difficult, or less desirable than the “quick fix” offered by techno-solutionism and reliance on digital identification.

  1. It is uncertain why the conventional wisdom is that West African countries, many of which have functional IDs, specifically need a national digital ID card system, while some of their developed counterparts in Europe and North America lack a national ID card but rely on different functional IDs.
  2. Identifying institution is used here to refer to any institution that seeks to authenticate the identity of a person based on the ID card or number that person possesses.
  3. A foundational identity system is an identity system which enables the creation of identities or unique identification numbers used for general purposes, such as national identity cards. A functional identity system is one that is created for or evolves out of a specific use case but may be suitable for use across other sectors, such as driver’s licenses, voter’s cards, bank numbers, insurance numbers, insurance records, credit histories, health records, and tax records.
  4. Member States of ECOWAS are the Republic of Benin, Burkina Faso, Cape Verde, Côte d’Ivoire, the Gambia, Ghana, Guinea, Guinea-Bissau, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, and Togo.
  5. See Savita Bailur, Helene Smertnik & Nnenna Nwakanma, End User Experience with identification in Côte d’Ivoire. Unpublished Report by Caribou Digital.

October 19, 2020. Ngozi Nwanta, JSD program, NYU School of Law with research interests in systemic analysis of national identification systems, governance of credit data, financial inclusion, and development.

User-friendly Digital Government? A Recap of Our Conversation About Universal Credit in the United Kingdom

TECHNOLOGY & HUMAN RIGHTS

On September 30, 2020, the Digital Welfare State and Human Rights Project hosted the first in its series of virtual conversations entitled “Transformer States: A Conversation Series on Digital Government and Human Rights,” exploring the digital transformation of governments around the world. In this first iteration of the series, Christiaan van Veen and Victoria Adelmant interviewed Richard Pope, part of the founding team at the UK Government Digital Service and author of Universal Credit: Digital Welfare. By interviewing a technologist who worked with policy and delivery teams across the UK government to redesign government services, the event sought to explore the promise and realities of digitalized benefits.

Universal Credit (UC), the main working-age benefit for the UK population, represents at once a major political reform and an ambitious digitization project. UC is a “digital by default” benefit in that claims are filed and managed via an online account, and calculations of recipients’ entitlements are also reliant on large-scale automation within government. The Department for Work and Pensions (DWP), the department responsible for welfare in the UK, repurposed the taxation office’s Real-Time Information (RTI) system, which already collected information about employees’ earnings for the purposes of taxation, in order to feed this data about wages into an automated calculation of individual benefit levels. The amount a recipient receives each month from UC is calculated on the basis of this “real-time feed” of information about her earnings as well as on the basis of a long list of data points about her circumstances, including how many children she has, her health situation and her housing. UC is therefore ‘dynamic,’ as the monthly payment that recipients receive fluctuates. Readers can find a more comprehensive explanation of how UC works in Richard’s report.
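To make this ‘dynamic’ calculation concrete, the sketch below models one assessment period in broad strokes. It is a deliberately simplified illustration, not the DWP’s actual implementation: the monetary figures are hypothetical placeholders (though the 63% taper reflects the published rate in force around 2020), and the real calculation involves many more elements, caps, and deductions.

```python
# A minimal, illustrative model of a UC-style monthly award calculation.
# Monetary figures below are hypothetical placeholders, not actual DWP rates.

def monthly_award(standard_allowance: float,
                  elements: float,
                  rti_earnings: float,
                  work_allowance: float = 0.0,
                  taper_rate: float = 0.63) -> float:
    """Compute a simplified award for one monthly assessment period.

    Maximum entitlement = standard allowance + elements (for children,
    housing, health, etc.). The award is then reduced by a fixed share
    (the taper) of earnings above any work allowance.
    """
    maximum_entitlement = standard_allowance + elements
    countable_earnings = max(0.0, rti_earnings - work_allowance)
    return max(0.0, maximum_entitlement - taper_rate * countable_earnings)

# Earnings arrive via the "real-time feed" each assessment period,
# so the award fluctuates from month to month:
for earnings in (0.0, 600.0, 1200.0):
    award = monthly_award(standard_allowance=400.0, elements=350.0,
                          rti_earnings=earnings, work_allowance=290.0)
    print(f"earnings {earnings:7.2f} -> award {award:6.2f}")
```

Even this toy version hints at why the system is so data-hungry: every input is a data point the state must collect and keep current for each claimant, every month.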

One “promise” surrounding UC was that it would make interaction with the British welfare system more user-friendly. The 2010 White Paper launching the reforms noted that it would “cut through the complexity of the existing system” by introducing online systems which would be “simpler and easier to understand” and “intuitive.” Richard explained that the design of UC was influenced by broader developments surrounding the government’s digital transformation agenda, whereby “user-centered design” and “agile development” became the norm across government in the design of new digital services. This approach seeks to place the needs of users first and to design around those needs. It also favors an “agile,” iterative way of working, rather than designing an entire system upfront (the “waterfall” approach).

Richard explained that DWP designs the UC software itself and releases updates to the software every two weeks: “They will do prototyping, they will do user research based on that prototyping, they will then deploy those changes, and they will then write a report to check that it had the desired outcome,” he said. Through this iterative, agile approach, government has more flexibility and is better able to respond to “unknowns.” One such ‘unknown’ was the Covid-19 pandemic: as the UK “locked down” in March, almost a million new claims for UC were successfully processed in the space of just two weeks. The old, pre-UC system would have been unlikely to cope with this surge, and the response also compared very favorably with the failures seen in some US states, where some New Yorkers, for example, were required to fax their applications for unemployment benefits.

The conversation then turned to the reality of UC from the perspective of recipients. For example, half of claimants were unable to make their claim online without help, and DWP was recently required by a tribunal to release figures which show that hundreds of thousands of claims are abandoned each year. The ‘digital first’ principle as applied to UC, in effect requiring all applicants to claim online and offering inadequate alternatives, has been particularly harmful in light of the UK’s ‘digital divide.’ Richard underlined that there is an information problem here – why are those applications being abandoned? We cannot be certain that the sole cause is a lack of digital skills. Perhaps people are put off by the large quantity of information about their lives they are required to enter into the digital system, or people get a job before completing the application, or they realize how little payment they will receive, or that they will have to wait around five weeks to receive any payment.

But had the UK government not been overly optimistic about future UC users’ access to and ability to use digital systems? For example, the 2012 DWP Digital Strategy stated that “most of our customers and claimants are already online and more are moving online all the time,” while only half of all adults with an annual household income between £6,000 and £10,000 have an internet connection via broadband or smartphone. Richard agreed that the government had been over-optimistic, but pointed again to the fact that we do not know why users abandon applications or struggle with the claim, such that it is “difficult to unpick which elements of those problems are down to the technology, which elements are down to the complexity of the policy, and which elements are down to a lack of digital skills.”

This question of attributing problems to policy rather than to the technology was a crucial theme throughout the conversation. Organizations such as the Child Poverty Action Group have pointed to instances in which the technology itself causes problems, identifying ways in which the UC interface is not user-friendly, for example. CPAG was commended in the discussion for having “started to care about design” and for proposing specific design changes in its reports. Richard noted that the elements which were not incorporated into the digital design of UC, or which were not automated at all, highlight the choices that have been made. For example, the system does not display information about additional entitlements, such as transport passes or free prescriptions and dental care, for which UC applicants may be eligible. That the system’s technological design omits information about these entitlements demonstrates the importance and power of design choices, but it is unclear whether such design choices were the result of political decisions or simply omissions by technologists.

Richard noted that some of the political aims towards which UC is directed are in tension with the attempt to use technology to reduce administrative burdens on claimants and to make the welfare state more user-friendly. Though the ‘design culture’ among civil servants genuinely seeks to make things easier for the public, political priorities push in different directions. UC is “hyper means-tested”: it demands a huge amount of data points to calculate a claimant’s entitlement, and it seeks to reward or punish certain behaviors, such as rewarding two-parent families. If policymakers want a system that demands this level of control and sorting of claimants, then the system will place additional administrative burdens on applicants as they have more paperwork to find, they have to contact their landlord to get a signed copy of their lease, and so forth. Wanting this level of means-testing will result in a complex policy and “there is only so much a designer can do to design away that complexity”, as Richard underlined. That said, Richard also argued that part of the problem here is that government has treated policy and the delivery of services as separate. Design and delivery teams hold “immense power” and designers’ choices will be “increasingly powerful as we digitize more important, high-stakes public services.” He noted, “increasingly, policy and delivery are the same thing.”

Richard therefore promotes “government as a platform.” He highlighted the need for a rethink about how the government organizes its work and argued that government should prioritize shared reusable components and definitive data sources. It should seek to break down data silos between departments and have information fed to government directly from various organizations or companies, rather than asking individuals to fill out endless forms. If such an approach were adopted, Richard claimed, digitalization could hugely reduce the burdens on individuals. But, should we go in that direction, it is vital that government become much more transparent around its digital services. There is, as ever, an increasing information asymmetry between government and individuals, and this transparency will be especially important as services become ever-more personalized. Without more transparency about technological design within government, we risk losing a shared experience and shared understanding of how public services work and, ultimately, the capacity to hold government accountable.

October 14, 2020. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

UN Special Rapporteur on Extreme Poverty and Human Rights

INEQUALITIES

Philip Alston served as UN Special Rapporteur on extreme poverty and human rights from June 2014 to April 2020. The Special Rapporteur is an independent expert appointed by the UN Human Rights Council to monitor, advise, and report on how government policies are realizing the rights of people in poverty around the world.

During his mandate, Professor Alston carried out 11 official country visits and authored 12 thematic reports to the UN General Assembly and Human Rights Council. His thematic and country reports are available below. He also issued a large body of press releases and communications to states and other actors.

Nothing is Inevitable! Main Takeaways from an Event on “Techno-Racism and Human Rights: A Conversation with the UN Special Rapporteur on Racism”

TECHNOLOGY & HUMAN RIGHTS

On July 23, 2020, the Digital Welfare State and Human Rights Project hosted a virtual event on techno-racism and human rights. The immediate reason for organizing this conversation was a recent report to the Human Rights Council by the United Nations Special Rapporteur on Racism, Tendayi Achiume, on the racist impacts of emerging technologies. The event sought to further explore these impacts and to question the role of international human rights norms and accountability mechanisms in efforts to address them. Christiaan van Veen moderated the conversation between the Special Rapporteur, Mutale Nkonde, CEO of AI for the People, and Nanjala Nyabola, author of Digital Democracy, Analogue Politics.

This event and Tendayi’s report come at a moment of multiple international crises, including a global wave of protests and activism against police brutality and systemic racism after the killing of George Floyd, and a pandemic which, among many other tragic impacts, has laid bare how deeply embedded inequality, racism, xenophobia, and intolerance are within our societies. Just last month, as Tendayi explained during the event, the Human Rights Council held a historic urgent debate on systemic racism and police brutality in the United States and elsewhere, which would have been inconceivable just a few months ago.

The starting point for the conversation was an attempt to define techno-racism and provide varied examples from across the globe. This global dimension was especially important as so many discussions on techno-racism remain US-centric. Speakers were also asked to discuss not only private use of technology or government use within the criminal justice area, but to address often-overlooked technological innovation within welfare states, from social security to health care and education.

Nanjala started the conversation by defining techno-racism as the use of technology to lock in power disparities that are predicated on race. Such techno-racism can occur within states: Mutale discussed algorithmic hiring decisions and facial recognition technologies used in housing in the United States, while Tendayi mentioned racist digital employment systems in South America. But techno-racism also has a transnational dimension: technologies entrench power disparities between States that are building technologies and States that are buying them; Nanjala called this “digital colonialism.”

The speakers all agreed that emerging technologies are consistently presented as agnostic and neutral, despite being loaded with the assumptions of their builders (disproportionately white males educated at elite universities) about how society works. For example, the technologies increasingly used in welfare states are designed with the idea that people living in poverty are constantly attempting to defraud the government; Christiaan and Nanjala discussed an algorithmic benefit fraud detection tool used in the Netherlands, which was found by a Dutch court to be exclusively targeting neighborhoods with low-income and minority residents, as an excellent example of this.

Nanjala also mentioned the ‘Huduma Namba’ digital ID system in Kenya as a powerful example of the politics and complexity underneath technology. She explained the racist history of ID systems in Kenya – designed by colonial authorities to enable the criminalization of black people and the protection of white property – and argued that digitalizing a system that was intended to discriminate “will only make the discrimination more efficient”. This exacerbation of discrimination is also visible within India’s ‘Aadhaar’ digital ID system, through which existing exclusions have been formalized, entrenched, and anesthetized, enabling those in power to claim that exclusion, such as the removal of hundreds of thousands of people from food distribution lists, simply results from the operation of the system rather than from political choices.

Tendayi explained that she wrote her report in part to address her “deep frustration” with the fact that race and non-discrimination analyses are often absent from debates on technology and human rights at the UN. Though she named a report by the Center’s Faculty Director, Philip Alston, prepared in cooperation with the Digital Welfare State and Human Rights Project, as one of the few exceptions, discussions within the international human rights field remain focused on privacy and freedom of expression and marginalize questions of equality. But techno-racism should not be an afterthought in these discussions, especially as emerging technologies often exacerbate pre-existing racism and enable discrimination on an entirely different scale.

Given the centrality of Tendayi’s Human Rights Council report to the conversation, Christiaan asked the speakers whether and how international human rights frameworks and norms can help us evaluate the implications of techno-racism, and what potential advantages global human rights accountability mechanisms can offer relative to domestic legal remedies. Mutale said that we need to ask, “who is human in human rights?” She noted that the racist design of these technologies arises from the notion that Black people are not human. Tendayi argued that there is, therefore, also a pressing need to change existing ways of thinking about who violates human rights. During the aforementioned urgent debate in the Human Rights Council, for example, European States and Australia worked to water down a powerful draft resolution and blocked the establishment of a Commission of Inquiry to investigate systemic racism specifically in the United States, on the grounds that it is a liberal democracy. Mutale described this as another indication that police brutality against Black people in a Western country like the United States is too easily dismissed as not of international concern.

Tendayi concurred and expressed her misgivings about the UN’s human rights system. She explained that the human rights framework is deeply implicated in transnational racially discriminatory projects of the past, including colonialism and slavery, and noted that powerful institutions (including governments, the UN, and international human rights bodies) are often “ground zero” for systemic racism. Mutale echoed this and urged the audience to consider how international human rights organs like the Human Rights Council may constitute a political body for sustaining white supremacy as a power system across borders.

Nanjala also expressed concerns with the human rights regime and its history, but identified three potential benefits of the human rights framework in addressing techno-racism. First, the human rights regime provides another pathway outside domestic law for demanding accountability and seeking redress. Second, it translates local rights violations into international discourse, thus creating potential for a global accountability movement and giving victims around the world a powerful and shared rights-based language. Third, because of its relative stability since the 1940s, human rights legal discourse helps advocates develop genealogies of rights violations, document repeated institutional failures, and establish patterns of rights violations over time, allowing advocates to amplify domestic and international pressure for accountability. Tendayi added that she is “invested in a future that is fundamentally different from the present,” and that human rights can potentially contribute to transforming political institutions and undoing structures of injustice around the world.

In addressing an audience question about technological responses to COVID-19, Mutale described how an algorithm designed to allocate scarce medical equipment, such as ventilators, systematically discounted Black patients’ viability. Noting that health outcomes around the world are consistently correlated with poverty and life experiences (including the “weathering effects” suffered by racial and ethnic minorities), she warned that, by feeding algorithms data from past hospitalizations and health outcomes, “we are training these AI systems to deem that Black lives are not viable.” Tendayi echoed this, suggesting that our “baseline assumption” should be that new technologies will have discriminatory impacts simply because of how they are made and the assumptions that inform their design.

In response to an audience member’s concern that governments and private actors will adopt racist technologies regardless, Nanjala countered that “nothing is inevitable” and “everything is a function of human action and agency.” San Francisco’s decision to ban the use of facial recognition software by municipal authorities, for example, demonstrates that the use of these technologies is not inevitable, even in Silicon Valley. Tendayi, in her final remarks, noted that “worlds are being made and remade all of the time” and that it is vital to listen to voices, such as those of Mutale, Nanjala, and the Center’s Digital Welfare State Project, which are “helping us to think differently.” “Mainstreaming” the idea of techno-racism can help erode the presumption of “tech neutrality” that has made political change related to technology so difficult to achieve in the past. Tendayi concluded that this is why it is so vital to have conversations like these.

We couldn’t agree more!

To reflect that this was an informal conversation, first names are used in this story. 

July 29, 2020. Victoria Adelmant and Adam Ray.

Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a master’s degree from Yale University and previously worked as the CFO of Songkick.

Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

 

Global Justice Clinic and Human Rights Organizations Call on Government of Haiti to Cancel a Planned Raid

HUMAN RIGHTS MOVEMENT

Global Justice Clinic and Human Rights Organizations Call on Government of Haiti to Cancel a Planned Raid

The Global Justice Clinic, twenty-three other human rights organizations, and a number of individuals signed a letter calling on the government of Haiti to cancel a planned gang raid that it announced on Friday, April 24, 2020.

In a statement to the press, Haiti’s Minister of Justice and Public Security said that residents of the impoverished community of Village de Dieu in Port-au-Prince had 72 hours to evacuate their homes and their neighborhood. The government would then conduct a gang raid and, it indicated, beyond the 72-hour window it absolved itself of responsibility for whatever happened in the area. There is extreme and understandable concern within Haiti that the gang raid may turn into indiscriminate violence. As the letter explains, the government has been implicated in massacres against civilians in the past two years. Further, there is evidence that a former police officer who allegedly perpetrated past massacres has been coordinating with the Haitian National Police to carry out Monday’s raid. The signatory organizations and individuals call on the government of Haiti to cancel the raid and to protect the human rights and physical safety of all Haitian people.

As of Wednesday, April 29, 2020, the raid has not occurred. However, human rights organizations in Haiti and beyond continue to pressure the Haitian government to publicly declare that it will cancel the raid and that it will address insecurity in a way that respects the human rights of the Haitian people, particularly the most vulnerable.

Profiling the Poor in the Dutch Welfare State

TECHNOLOGY AND HUMAN RIGHTS

Profiling the Poor in the Dutch Welfare State

Report on court hearing in litigation in the Netherlands about digital welfare fraud detection system (‘SyRI’)

On Tuesday, October 29, 2019, I attended a hearing before the District Court of The Hague (the Netherlands) in litigation brought by a coalition of Dutch civil society organizations challenging the Dutch government’s System Risk Indication (“SyRI”). The Digital Welfare State and Human Rights Project at NYU Law, which I direct, recently collaborated with the United Nations Special Rapporteur on extreme poverty and human rights in preparing an amicus brief to the District Court. The Special Rapporteur became involved in this case because SyRI has been used exclusively to detect welfare fraud and other irregularities in poor neighborhoods in four Dutch cities, and it affects the rights to social security and privacy of the poorest members of Dutch society. This litigation may also set a highly relevant legal precedent, with impact beyond Dutch borders, in an area that has received relatively little judicial scrutiny to date.

Lies, damn lies, and algorithms

What is SyRI? The formal answer can be found in legislation and implementing regulations from 2014. Since that year, in order to coordinate government action against the illicit use of government funds and benefits in the areas of social security, tax benefits, and labor law, Dutch law has allowed the sharing of data between municipalities, welfare authorities, tax authorities, and other relevant government authorities. A total of 17 categories of data held by government authorities may be shared in this context, from employment and tax data to benefit data, health insurance data, and enforcement data, among other categories of digitally stored information. Government authorities wishing to cooperate in a concrete SyRI project request the Minister for Social Affairs and Employment to deploy the SyRI tool, which pools the relevant data from the various authorities and analyzes it using an algorithmic risk model.

The Minister has outsourced the tasks of pooling and analyzing the data to a private foundation, somewhat unfortunately named the ‘Intelligence Agency’ (‘Inlichtingenbureau’). The Intelligence Agency pseudonymizes the data pool, analyzes the data using an algorithmic risk model, and creates a file on each individual (or corporation) deemed to be at a higher risk of being involved in benefit fraud and other irregularities. The Minister then analyzes these files and notifies the cooperating government authorities of those individuals (or corporations) who are considered at higher risk of committing benefit fraud or other irregularities (a ‘risk notification’). Risk notifications are included in a register for two years. Those who are included in the register are not actively notified of the registration, but they can access their information in the register upon specific request.
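
To make this pipeline easier to picture, the sketch below traces, in simplified Python, the flow the legislation describes: records from cooperating authorities are pooled under pseudonyms, scored against a risk model, and flagged entries become ‘risk notifications’. Everything here, from the field names to the risk indicator, is an invented assumption for illustration; the real system’s risk models and data handling remain secret.

```python
# Hypothetical sketch of the SyRI data flow described in the legislation.
# All names, fields, and the risk indicator are illustrative assumptions.
import hashlib

def pseudonymize(citizen_id: str) -> str:
    """Replace a direct identifier with a stable pseudonym."""
    return hashlib.sha256(citizen_id.encode()).hexdigest()[:12]

def pool_records(*sources):
    """Merge records from several authorities, keyed by pseudonym."""
    pooled = {}
    for source in sources:
        for record in source:
            key = pseudonymize(record["citizen_id"])
            pooled.setdefault(key, {}).update(
                {k: v for k, v in record.items() if k != "citizen_id"}
            )
    return pooled

def at_higher_risk(profile: dict) -> bool:
    """One invented, pre-defined indicator: benefits claimed alongside wages."""
    return bool(profile.get("claims_benefits")) and profile.get("reported_wages", 0) > 0

# Records held by different cooperating authorities (invented data).
welfare_data = [{"citizen_id": "A-1", "claims_benefits": True}]
tax_data = [{"citizen_id": "A-1", "reported_wages": 12000}]

pooled = pool_records(welfare_data, tax_data)
# 'Risk notifications': pseudonyms flagged for possible further investigation.
notifications = [key for key, profile in pooled.items() if at_higher_risk(profile)]
print(notifications)
```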

The preceding understanding of how the system works can be derived from the legislative texts and history, but a surprising amount of uncertainty remains about how exactly SyRI works in practice. This became abundantly clear at the hearing in the SyRI case before the District Court of The Hague on October 29. The court is assessing the plaintiffs’ claim that SyRI, as legislated in 2014, violates norms of applicable international law, including the rights to privacy, data protection, and a fair trial recognized in the European Convention on Human Rights, the Charter of Fundamental Rights of the European Union, the International Covenant on Civil and Political Rights, and the EU General Data Protection Regulation. In a courtroom packed with representatives of the eight plaintiffs, reporters, and concerned citizens from areas where SyRI has been used, the three-judge panel’s first question sought to clarify the radically different views held by the plaintiffs and the Dutch State as to what exactly SyRI is.

According to the State, SyRI merely compares data from different government databases, operated by different authorities, in order to find simple inconsistencies. Although this analysis is undertaken with the assistance of an algorithm, the State underlined that this algorithm operates on the basis of pre-defined indicators of risk and that the algorithm is not of the ‘learning’ type. The State further emphasized that SyRI is not a Big Data or data-mining system, but that it employs a targeted analysis on the basis of a limited dataset with a clearly defined objective. It also argued that a risk notification by SyRI is merely a – potential – starting point for further investigations by individual government authorities and does not have any direct and automatic legal consequences such as the imposition of a fine or the suspension or withdrawal of government benefits or assistance.
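
If the State’s characterization is accurate, the underlying logic would resemble a fixed set of cross-database consistency checks rather than a model trained on data. The toy sketch below, built entirely on my own assumptions, illustrates that distinction: every rule is hard-coded in advance and nothing is learned from historical records. SyRI’s actual indicators have not been made public.

```python
# Invented, pre-defined indicators of the kind the State describes. Nothing
# here is estimated from data; a 'learning' system would instead fit its
# rules or weights to historical outcomes.
RULES = [
    ("benefits claimed while wages are reported",
     lambda r: r.get("claims_benefits") and r.get("reported_wages", 0) > 0),
    ("registered as living alone, but multiple residents on file",
     lambda r: r.get("claims_single_household") and r.get("registered_residents", 1) > 1),
]

def flag_inconsistencies(record: dict) -> list:
    """Return a description of every fixed rule this combined record trips."""
    return [description for description, rule in RULES if rule(record)]

combined_record = {
    "claims_benefits": True,
    "reported_wages": 0,
    "claims_single_household": True,
    "registered_residents": 2,
}
print(flag_inconsistencies(combined_record))
# ['registered as living alone, but multiple residents on file']
```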

But plaintiffs strongly contested the State’s characterization of SyRI. They claimed that SyRI is not narrowly targeted but aims at entire (poor) neighborhoods, that diverse and unconnected categories of personal data are brought together in SyRI projects, and that the resulting data exchange and analysis occur on a large scale. In their view, SyRI projects can therefore be qualified as problematic uses of Big Data, data-mining, and profiling. They also made clear that it is exceedingly difficult for them, or for the District Court, to assess what SyRI actually is or is not doing, because key elements of the system remain secret and the relevant legislation does not restrict the methods used: the request by cooperating authorities to undertake a SyRI project, the risk model used, and the ways in which personal data can be processed all remain hidden from outside scrutiny.

Game the system, leave your water tap running

The District Court asked a series of probing and critical follow-up questions in an attempt to clarify the exact functioning of SyRI and to understand the justification for the secrecy surrounding it. One can sympathize with the court’s attempt to grasp the basic facts about SyRI so that it can carry out its task of judicial oversight. Pushed by the District Court to explain why the State could not be more open about the functioning of SyRI, the attorney for the State warned about welfare beneficiaries ‘gaming the system’. The attorney pointed to a pilot project pre-dating SyRI, in which welfare authority data about individuals claiming low-income benefits was matched with usage data held by publicly-owned drinking water companies, in order to identify beneficiaries who committed fraud by falsely claiming to live alone while actually living together (and thereby claiming a higher benefit level). If it became known that water usage is a ‘risk indicator’, the attorney claimed, beneficiaries might leave their taps running to avoid detection. Some individuals attending the hearing could be heard snickering at this prediction.
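
On the hearing’s account of that pilot, the matching logic is simple to reconstruct in outline. The sketch below runs on invented data and an invented consumption threshold; the actual cutoff, fields, and procedure were never disclosed, which is precisely the secrecy the plaintiffs object to. The assumed indicator is that implausibly low water usage at a registered address suggests the claimant does not actually live there alone.

```python
# Toy reconstruction, on invented data, of the pre-SyRI water-usage pilot.
# Assumption: implausibly low annual consumption at a registered address
# suggests the benefit claimant does not actually live there (alone).
MIN_PLAUSIBLE_USAGE_M3 = 25  # hypothetical annual minimum for one occupant

benefit_claims = [
    {"address": "Kanaalstraat 1", "claims_living_alone": True},
    {"address": "Kanaalstraat 2", "claims_living_alone": True},
]
annual_water_usage_m3 = {"Kanaalstraat 1": 4, "Kanaalstraat 2": 45}

flagged = [
    claim["address"]
    for claim in benefit_claims
    if claim["claims_living_alone"]
    and annual_water_usage_m3.get(claim["address"], 0) < MIN_PLAUSIBLE_USAGE_M3
]
print(flagged)  # ['Kanaalstraat 1']
```

Seen in these terms, the State’s ‘gaming’ worry is that anyone who learned the threshold could simply leave a tap running until their consumption cleared it.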

Another fascinating exchange between the judges and the attorney for the State dealt with the standards applied by the Minister when assessing a request for a SyRI project by municipal and other government authorities. According to the State’s attorney, what commonly happens is that a municipality has a ‘problem neighborhood’ and wants to tackle its problems, which are presumed to include welfare fraud and other irregularities, through SyRI. The request to the Minister is typically based ‘on the law, experience and logical thinking’, according to the State. Unsatisfied with this reply, the District Court pressed the State for a more concrete justification of the use of SyRI and for the precise standards applied to justify it: ‘In Bloemendaal (one of the richest municipalities of the Netherlands) a lot of people enjoy going to classical concerts; in a problem neighborhood, there are a lot of people who receive government welfare benefits; why is that a justification for the use of SyRI?’, the Court asked. The attorney for the State had to admit that specific neighborhoods were targeted because those areas housed more people on welfare benefits and that, while participating authorities usually have no specific evidence of higher levels of benefit fraud in those neighborhoods, this higher proportion of people on benefits is considered reason enough to use SyRI.

Finally, and of great relevance to the intensity of the Court’s judicial scrutiny, the gravity of the invasion of human rights – more specifically, the right to privacy – was a central topic of the hearing. The State argued that the data being shared and analyzed was existing data, not new data. It furthermore argued that those individuals whose data was shared and analyzed, but who were not considered a ‘higher risk’, suffered no harm at all: their data had been pseudonymized and was removed after the analysis. Plaintiffs countered that the government-held data shared and analyzed in SyRI was not originally collected for the specific purpose of enforcement. They also argued that – given the wide categories of data that could be shared and analyzed in SyRI – a very intimate profile could be made of individuals in targeted neighborhoods: ‘This is all about profiling and creating files on people’.

Judgment expected in early 2020

The District Court announced that it expects to publish its judgment in this case on 29 January 2020. There are many questions to be answered by the Court. In non-legal language, they include at least the following: How does SyRI work exactly? Does it matter whether SyRI uses a relatively straightforward ‘decision-tree’ type of algorithm or, instead, machine learning algorithms? What is the harm in pooling previously siloed government data? What is the harm in classifying an individual as ‘high risk’? Does SyRI discriminate on the basis of socio-economic status, migrant status, race or color? Does the current legislation underpinning SyRI give sufficient clarity and adequate legal standards to meaningfully curb the use of State power to the detriment of individual rights? Can current levels of secrecy be maintained in a democracy based on the rule of law?

In light of the above, many eyes will be focused on the Netherlands in January, when a potentially groundbreaking legal precedent may be set in the debate on digital welfare states and human rights.

November 1, 2019.  Christiaan van Veen, Digital Welfare State & Human Rights Project (2019-2022), Center for Human Rights and Global Justice at NYU School of Law.