Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System

TECHNOLOGY & HUMAN RIGHTS

Aadhaar is the largest national biometric digital identification program in the world, with over 1.2 billion registered users. While the poor have been used as a “marketing strategy” for this program, the “real agenda” is the pursuit of private profit.

Over the past months, the Digital Welfare State and Human Rights Project’s “Transformer States” conversations have highlighted the tensions and deceits that underlie attempts by governments around the world to digitize welfare systems and wider attempts to digitize the state. On January 27, 2021, Christiaan van Veen and Victoria Adelmant explored the particular complexities and failures of Aadhaar, India’s digital identification system, in an interview with Dr. Usha Ramanathan, a recognized human rights expert.

What is Aadhaar?

Aadhaar is the largest national digital identification program in the world; over 1.2 billion Indian residents are registered and have been given unique Aadhaar identification numbers. To create an Aadhaar identity, individuals must provide biometric data, including fingerprints, iris scans, and facial photographs, as well as demographic information such as name, date of birth, and address. Once an individual is enrolled in the Aadhaar system (a process that can be complicated, depending on whether their biometric data can be gathered easily, where they live, and their mobility), they can use their Aadhaar number to access public and, increasingly, private services. In many instances, accessing food rations, opening a bank account, and registering a marriage all require an individual to authenticate through Aadhaar. Authentication is mainly done by scanning one's fingerprint or iris, though One-Time Passcodes or QR codes can also be used.

The welfare “façade”

Unique Identification Authority of India (UIDAI) is the government agency responsible for administering the Aadhaar system. Its vision, mission, and values include empowerment, good governance, transparency, efficiency, sustainability, integrity and inclusivity. UIDAI has stated that Aadhaar is intended to facilitate “inclusion of the underprivileged and weaker sections of the society and is therefore a tool of distributive justice and equality.” Like many of the digitization schemes examined in the Transformer States series, the Aadhaar project promised all Indians formal identification that would better enable them to access welfare entitlements. In particular, early government statements claimed that many poorer Indians did not have any form of identification, therefore justifying Aadhaar as a way for them to access welfare. However, recent research suggests that less than 0.03% of Indian residents did not have formal identification such as birth certificates.

Although most Indians now have an Aadhaar “identity,” the Aadhaar system fails to live up to its lofty promises. The main issues preventing Indians from effectively claiming their entitlements are:

  • Shifting the onus of establishing authorization and entitlement onto citizens. A system that is supposed to make accessing entitlements and complying with regulations “straightforward” or “efficient” often results in frustrating and disempowering rejections or denials of services. The government asserts that the system is “self-cleaning,” which means that individuals have to fix their identity record themselves. For example, they must manually correct errors in their name or date of birth, despite not always having resources to do so.
  • Concerns with biometrics as a foundation for the system. When the project started, there was limited data or research on the effectiveness of biometric technologies for accurately establishing identity in the context of developing countries. However, the last decade of research reveals that biometric technologies do not work well in India. It can be impossible to reliably provide a fingerprint in populations with a substantial proportion of manual laborers and agricultural workers, and in hot and humid environments. Given that biometric data is used for both enrolment and authentication, these difficulties frustrate access to essential services on an ongoing basis.

Given these issues, Usha expressed concern that the system, initially presented as a voluntary program, is now effectively compulsory for those who depend on the state for support.

Private motives against the public good

The Aadhaar system is therefore failing the very individuals it was purportedly designed to help. The poorest are used as a “marketing strategy,” but it is clear that private profit is, and always was, the main motivation. From the outset, the Aadhaar “business model” was designed to benefit private companies by growing India’s “digital economy” and creating a rich and valuable dataset. In particular, it was envisioned that the Aadhaar database could be used by banks and fintech companies to develop products and services, which further propelled the drive to get all Indians onto the database. Given its breadth and reach, the database is an attractive asset for profit-making private enterprises and is seen as providing the foundation for an “Indian Silicon Valley.” Tellingly, the acronym “KYC,” used by UIDAI to assert that Aadhaar would help the government “know your citizen,” is now understood as “know your customer.”

Protecting the right to identity

The right to identity must not be conflated with identification. Usha notes that “identity is complex and cannot be reduced to a number or a card,” because doing so empowers the data controller or data system to effectively choose whether to recognize the person seeking identification, or to “paralyse” their life by rejecting, or even deleting, their identification number. History shows the disastrous effects of using population databases to control and persecute individuals and communities, such as during the Holocaust and the Yugoslav Wars. Further, risks arise from the fact that identification systems like Aadhaar “fix” a single identity for individuals. Parts of a person’s identity that they may wish to keep separate—for example, their status as a sex worker, health information, or socio-economic status—are combined in a single dataset and made available in a variety of contexts, even if that data may be outdated, irrelevant, or confidential.

Usha concluded that there is a compelling need to reconsider and redraw attempts at developing universal identification systems to ensure they are transparent, democratic, and rights-based. They must, from the outset, prioritize the needs and welfare of people over claims of “efficiency,” which, in reality, have been attempts to obtain profit and control.

February 15, 2021. Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.

GJC Issues Statement on the Constitutional and Human Rights Crisis in Haiti

HUMAN RIGHTS MOVEMENT

The Global Justice Clinic, the International Human Rights Clinic at Harvard Law School, and the Lowenstein International Human Rights Clinic at Yale Law School issued a statement on February 13, 2021 expressing grave concern about the deteriorating human rights situation in Haiti. Credible evidence shows that President Jovenel Moïse has engaged in a pattern of conduct to create a Constitutional crisis and consolidate power that undermines the rule of law in the country. The three clinics call on the U.S. government to denounce recent acts by President Moïse that have escalated the constitutional crisis. They urge the U.S. to halt all deportation and expulsion flights to Haiti in this fragile time; to condemn recent violence against protestors and journalists; and to call for the release of those arbitrarily detained. With long experience working in solidarity with Haitian civil society, the clinics urge the U.S. government to recognize the right of the Haitian people to self-determination by neither insisting on nor supporting elections without evidence of concrete measures to ensure that they are free, fair, and inclusive.

The Clinics also sent a letter expressing similar concerns to the member states of the United Nations Security Council ahead of their meeting on February 22, 2021, which is expected to include a briefing on Haiti from the Special Representative of the Secretary-General and head of the UN Integrated Office in Haiti (BINUH).

February 14, 2021

This post reflects the statement of the Global Justice Clinic, and not necessarily the views of NYU, NYU Law, or the Center for Human Rights and Global Justice.

On the Frontlines of the Digital Welfare State: Musings from Australia

TECHNOLOGY & HUMAN RIGHTS

Welfare beneficiaries are in danger of losing their payments to “glitches” or because they lack internet access. So why is digitization still seen as the shiny panacea to poverty?

I sit here in my local pub in South Australia using the Wi-Fi, wondering whether this will still be possible next week. A month ago, we were in lockdown, but my routine for writing required me to leave the house because I did not have reliable internet at home.

Not having internet may seem alien to many. When you are in a low-income bracket, things people take for granted become huge obstacles to navigate. This is becoming especially apparent as social security systems are increasingly digitized. Not having access to technologies can mean losing access to crucial survival payments.

A working phone with internet data is required to access the Australian social security system. Applicants must generally apply for payments through the government website, which is notorious for crashing. When the pandemic hit, millions of newly unemployed people were outraged that they could not access the website. Those of us already receiving payments just smiled wryly; we are used to this. We are told to use the website, but then it crashes, so we call and are put on hold for an hour. Then we get cut off and have to call back. This is normal. You also need a phone to fulfill reporting obligations. If you don’t have a working phone, or your battery dies, or your phone credit runs out, your payment can be suspended on the assumption that you’re deliberately shirking your reporting obligations.

In the last month, I was booted off my social security disability employment service. Although I had a certified disability affecting my job-seeking ability, the digital system had unceremoniously dumped me onto the regular job-seeking system, which punishes people for missing appointments. Unfortunately, the system had “glitched,” a popular term used by those in power for when payment systems fail. After I narrowly missed a scheduled phone appointment, my payment was suspended indefinitely. Phone calls of over an hour didn’t resolve it; I didn’t even get to speak to a person who could have resolved the issue. This is the danger of trusting digital technology over humans.

This is also the huge flaw in Income Management (IM), the “banking system” through which social security payments are controlled. I put “banking system” in quotation marks because it’s not run by a bank; there are none of the consumer protections of financial institutions, nor the choice to move if you’re unhappy with the service. The cashless welfare card is a tool for such IM: beneficiaries on the card can only withdraw 20% of their payment as cash, and the card restricts how the remaining 80% can be spent (for example, purchases of alcohol and online retailers like eBay are restricted). IM was introduced in certain rural areas of Australia deemed “disadvantaged” by the government.

The cashless welfare card is operated by Indue, a company contracted by the Australian government to administer social security payments. This is not a company with a good reputation for dealing with vulnerable populations. It is a monolith that is almost impossible to fight. Indue’s digital system can’t recognize rent cycles, meaning after a certain point in the month, the ‘limit’ for rent can be reached and a rent debit rejected. People have had to call and beg Indue to let them pay their landlords; others have been made homeless when the card stopped them from paying rent. They are stripped of agency over their own lives. They can’t use their own payments for second-hand school uniforms, or community fêtes, or buying a second-hand fridge. When you can’t use cash, avenues of obtaining cheaper goods are blocked off.

Certain politicians tout the cashless welfare card as a way to stop the poor from spending on alcohol and drugs. In reality, the vast majority affected by this system have no such problems with addiction. But when you are on the card, you are automatically classified as someone who cannot be trusted with your own money; an addict, a gambler, a criminal.

Politicians claim it’s like any other card, but this is a lie. It makes you a pariah in the community and is a tacit license for others to judge you. When you are at the whim and mercy of government policy, when you are reliant on government payments controlled by a third party, you are on the outside looking in. You’re automatically othered; you’re made to feel ashamed, stupid, and incapable.

Beyond this stigma, there are practical issues too. The cashless welfare card system assumes you have access to a smartphone and internet to check your account balance, which can be impossible for those with low incomes. Pandemic restrictions close the pubs, universities, cafes, and libraries which people rely on for internet access. Those without access are left by the wayside. “Glitches” are also common in Indue accounts: money can go missing without explanation. This ruins account-holders’ plans and forces them to waste hours having non-stop arguments with brick-wall bureaucracy and faceless people telling them they don’t have access to their own money.

Politicians recently had the opportunity to reject this system of brutality. The “Cashless Welfare Card trials” were slated to end on December 31, 2020, and a bill was voted on to determine if these “trials” would continue. The people affected by this system already told politicians how much it ruins their lives. Once again, they used their meager funds to call politicians’ offices and beg them to see the hell they’re experiencing. They used their internet data to email and rally others to do the same. I personally delivered letters to two politicians’ offices, complete with academic studies detailing the problems with IM. For a split second, it seemed like the politicians listened and some even promised to vote to end the trials. But a last-minute backroom deal meant that these promises were broken. Lived experiences of welfare recipients did not matter.

The global push to digitize welfare systems must be interrogated. When the most vulnerable in society are in danger of losing their payments to “glitches” or because they lack internet access, it raises the question: why is digitization still seen as the shiny panacea to poverty?

February 1, 2021. Nijole Naujokas, an Australian activist and writer who is passionate about social justice rights for the vulnerable. She is the current Secretary of the Australian Unemployed Workers’ Union, and is doing her Bachelor of Honors in Creative Writing at The University of Adelaide.

CSOs Call for a Full Integration of Human Rights in the Deployment of Digital Identification Systems

TECHNOLOGY AND HUMAN RIGHTS

The Principles on Identification for Sustainable Development (the Principles), the creation of which was facilitated by the World Bank’s Identification for Development (ID4D) initiative in 2017, provide one of the few attempts at global standard-setting for the development of digital identification systems across the world. They are endorsed by many global and regional organizations (the “Endorsing Organizations”) that are active in funding, designing, developing, and deploying digital identification programs across the world, especially in developing and less developed countries.

Digital identification programs are emerging across the world in various forms and will have long-term impacts on the lives and rights of the individuals enrolled in them. Engagement with civil society can help ensure that the lived experience of people affected by these identification programs informs the Principles and the practices of International Organizations.

Access Now, Namati, and the Open Society Justice Initiative co-organized a Civil Society Organization (CSO) consultation in August 2020 that brought together over 60 civil society organizations from across the world for dialogue with the World Bank’s ID4D Initiative and Endorsing Organizations. The consultation occurred alongside the first review and revision of the Principles, which has been led by the Endorsing Organizations during 2020. 

The consultation provided a platform for civil society feedback towards revisions to the Principles as well as dialogue around the roles of International Organizations (IOs) and Civil Society Organizations in developing rights-respecting digital identification programs. 

This new civil society-drafted report presents a summary of the top-level comments and discussions that took place in the meeting, including recommendations such as: 

  1. There is an urgent need for human rights criteria to be recognized as a tool for evaluation and oversight of existing and proposed digital identification systems, including throughout the Principles document 
  2. Endorsing Organizations should commit to the application of these Principles in practice, including an affirmation that their support will extend only with identification programs that align with the Principles 
  3. CSOs need to be formally recognized as partners with governments and corporations in designing and implementing digital identification systems, including greater country-level engagement with CSOs from the earliest stages of potential digital identification projects through to monitoring ongoing implementation
  4. Digital identification systems across the globe are already being deployed in a manner that enables repression through enhanced censorship, exclusion, and surveillance, but centering transparent and democratic processes as drivers of the development and deployment of these systems can mitigate these and other risks

Following the consultation and in line with this new report, we welcome the opportunity to further integrate the principles of the Universal Declaration of Human Rights and other sources of human rights in international law into the Principles on Identification and into the design, deployment, and monitoring of digital identification systems in practice. We encourage the establishment of permanent and formal structures for the engagement of civil society organizations in global and national-level processes related to digital identification, in order to ensure that identification technologies are used in service of human agency and dignity and to prevent further harm to the exercise of fundamental rights in their deployment.

We call on United Nations and regional human rights mechanisms, including the High Commissioner on Human Rights, treaty bodies, and Special Procedures, to take up the severe human rights risks involved in the context of digital identification systems as an urgent agenda item under their respective mandates.

We welcome further dialogue and engagement with the World Bank’s ID4D Initiative and other Endorsing Organizations and promoters of digital identification systems in order to ensure oversight and guidance towards human rights-aligned implementation of those systems.

This post was originally published as a press release on December 17, 2020.

  1. Access Now
  2. AfroLeadership
  3. Asociación por los Derechos Civiles (ADC)
  4. Collaboration on International ICT Policy for East and Southern Africa (CIPESA)
  5. Derechos Digitales
  6. Development and Justice Initiative 
  7. Digital Welfare State and Human Rights Project, Center for Human Rights and Global Justice
  8. Haki na Sheria Initiative 
  9. Human Rights Advocacy and Research Foundation (HRF)
  10. Myanmar Centre for Responsible Business (MCRB) 
  11. Namati

Statements of the Digital Welfare State & Human Rights Project do not purport to represent the views of NYU or the Center, if any.

Silencing and Stigmatizing the Disabled Through Social Media Monitoring

TECHNOLOGY & HUMAN RIGHTS

In 2019, the United States’ Social Security program comprised 23% of the federal budget. Apart from retirement benefits, the Social Security program provides Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI), disability benefits for disabled individuals who are unable to work. A multimillion-dollar disability fraud case in 2014 prompted the Social Security Administration to evaluate its controls for identifying and preventing disability fraud. The review found that social media played a “critical role” in this case, “as disability claimants were seen in photos on their personal accounts, riding on jet skis, performing physical stunts in karate studios, and driving motorcycles.” Although Social Security disability fraud is rare, the Social Security Administration has since adopted social media monitoring tools that use social media posts as a factor in determining whether an individual is committing disability fraud. Although human rights advocates have examined how such digitally enabled fraud detection tools violate privacy rights, few have explored the other human rights violations resulting from new digital tools employed by governments in the fight against benefit fraud.

To help fill this gap, this summer I conducted research to provide a voice to disabled individuals applying for and receiving Social Security disability benefits, whose experiences are largely invisible in society. From these interviews, it became clear that automated tools such as social media monitoring perpetuate the stigmatization of disabled people. Interviewees reported that, when aware of being monitored on social media, they felt compelled to modify their behavior to fit within the stigma associated with how disabled people should look and behave. These behavior modifications prevent disabled individuals from integrating into society and accessing services necessary to their survival.

Since the creation of social benefits, disabled people have been stigmatized in society, often viewed as either incapable or unwilling to work. Those who work are perceived as incapable employees, while those who are unable to work are viewed as lazy. Social media monitoring is the product of that stigma, as it relies on assumptions about how a disabled person should look and act. One individual I interviewed recounted that when they sought advice on the application process, people told them, “You can never post anything on social media of you having fun ever. Don’t post pictures of you smiling, not until after you are approved and even then, you have to make sure you’re careful and keep it on private.” The pressure not to smile or outwardly express happiness reflects the tendency of family members and professionals to underestimate a disabled individual’s quality of life. This underestimation feeds the assumption that “real” disabled people have a poor quality of life and cannot be happy.

The social media monitoring tool’s methodology relies on potentially inaccurate data because social media does not give a comprehensive view into a person’s life. People typically present an exaggerated, positive version of their lives on social media, which glosses over more difficult elements. Schwartz and Halegoua describe this presentation as the “spatial self,” which refers to how individuals “document, archive, and display their experience and/or mobility within space and place in order to represent or perform aspects of their identity to others.” Scholars of social media activity have published numerous studies on how people use images, videos, status updates, and comments on social media to present themselves in a highly curated way.

Contrary to the positive spin most individuals put on their social media, disabled individuals feel compelled to “curate” their social media activity in a way that presents them as weak and incapable, to fit the narrative of who deserves disability benefits. For them, receiving disability benefits is crucial to survive and pay for basic necessities.

The individuals I interviewed shared how such surveillance tools not only modify their behavior but also prevent them from exercising a whole range of human rights through social media. These rights are essential for all people, but particularly for disabled individuals, because the silencing of their voices strips away their ability to advocate for their community and form social relationships. Although social media offers avenues for socialization and political engagement to all its users, it opens up especially significant opportunities for disabled individuals. Participants expressed that without social media they would be unable to form these relationships, since offline spaces often lack accommodations for their disability. Disabled individuals greatly value sharing on social media, as the medium enables them to highlight aspects of their identity beyond being disabled. One individual expressed to me how important social media is for socializing, particularly during the Covid-19 pandemic: “I use Facebook mostly as a method of socializing especially right now with the pandemic going on, and occasionally political engagement.” Participants also expressed that they feel the need to modify their behavior on social media, with one saying, “I don’t think anybody feels good being monitored all the time and that’s essentially what I feel like now post-disability. I can’t have fun or it will be taken away.” This is fundamentally a human rights issue.

These human rights issues include equality in social life and the ability to participate in the broader community online. Over the long term, these inequalities harm disabled people’s human rights, as their voices and experiences are not taken into account by people outside the disability community. Across many reports on the disability community, there is broad consensus that excluding disabled people and their input undermines their well-being. Ignoring or silencing the voices of disabled people prevents them from advocating for themselves and participating in decisions involving their lives, making them vulnerable to disability discrimination, exclusion, violence, poverty, and untreated health problems. For example, a participant I interviewed shared how the process reinforces disability discrimination through behavior modification:

There was no room for me to focus on anything I could still do. Because the disability process is exactly that, it’s finding out what you can’t do. You have to prove that your life sucks. That adds to the disability shame and stigma too. So anyways, dehumanizing.

In addition to the social and economic rights mentioned above, social media monitoring also impacts the enjoyment of civil and political rights for disabled individuals applying for and receiving Social Security disability benefits. Richards and Hartzog write, “Trust within information relationships is critical for free expression and a precursor to many kinds of political engagement.” They highlight how the Internet and social media have been used both for access to political information and political engagement, which has a large impact on politics in general. Participants revealed to me that they used social media as a primary method for engaging in activism and contributing to political thought. The individuals I interviewed shared that they use social media to engage with political representatives on disability-related legislation and to bring awareness of disability-related issues to their political representatives. Social media monitoring restricting freedom of expression can remove disabled individuals from participating in the political sphere and exercising other civil and political rights.

I am a disabled person who recently qualified for disability benefits, so I personally understand this pressure to prove I deserve the benefits and accommodations allocated to people who are “actually” disabled. Social media monitoring perpetuates the harmful narrative that disabled individuals applying for and receiving disability benefits must prove their eligibility by modifying their behavior to fit disability stereotypes. This behavior modification restricts our ability to form meaningful relationships, push back against disability stigma, and advocate for ourselves through political engagement. As social media monitoring pushes us off social media platforms, our voices are silenced, and this exclusion leads to further social inequalities. As disability rights activism continues to transform in the United States, I hope that this research will inspire future studies into disability rights, experiences of applying for and receiving SSI and SSDI, and how they may intersect with human rights beyond privacy rights.

October 29, 2020. Sarah Tucker, Columbia University Human Rights graduate program. She uses her experiences as a disabled woman working in tech to advocate for the Disability community.

Digital Identification and Inclusionary Delusion in West Africa

TECHNOLOGY & HUMAN RIGHTS

Over 1 billion persons worldwide have been categorized as invisible, of whom about 437 million are reported to be from sub-Saharan Africa. In West Africa alone, the World Bank has identified a huge “identification gap,” and different identification projects are underway to identify millions of invisible West Africans.[1] These individuals are regarded as invisible not because they are unrecognizable or non-existent, but because they do not fit a certain measure of visibility that matches the existing or new database(s) of an identifying institution[2], such as the State or international bodies.

One existing digital identification project in West Africa is the West Africa Unique Identification for Regional Integration and Inclusion (WURI) program, initiated by the World Bank under its Identification for Development initiative. WURI serves as an umbrella under which West African States can collaborate with the Economic Community of West African States (ECOWAS) to design and build a digital identification system, financed by the World Bank, that would create foundational IDs (fIDs)[3] for all persons in the ECOWAS region.[4] Many West African States with past failed attempts at digitizing their identification systems have embraced assistance via WURI. The goal of WURI is to enable access to services for millions of people and ensure “mutual recognition of identities” across countries. The promise of digital identification is that it will facilitate development by promoting regional integration, security, social protection of aid beneficiaries, financial inclusion, reduction of poverty and corruption, and healthcare insurance and delivery, and by acting as a stepping stone to an integrated digital economy in West Africa. In this way, millions of invisible individuals would become visible to the state and become financially, politically, and socially included.

Nevertheless, the outlook of WURI and development agencies’ embrace of digital IDs amount to techno-solutionism: treating technology as the answer to institutional challenges and developmental goals in West Africa. This reliance on digital technologies does not address some of the major root causes of developmental delays in these countries, and may instead make matters worse by excluding the vast majority of people who either cannot be identified or are shut out by technological failures. This exclusion emerges in a number of ways, including through the service-based structure and/or mandatory nature of many digital identification projects, which adopt a stance of exclusion before inclusion. Where access to services and infrastructures, such as opening a bank account, registering SIM cards, obtaining healthcare or receiving government aid and benefits, is made conditional on registration for and possession of a national ID card or unique identification number (UIN), individuals are excluded by default unless they register for and possess the national ID card or UIN.

There are three contexts in which exclusion may arise. First, an individual may be unable to register for a fID. For instance, in Kenya, many individuals without identity verification documents like birth certificates were excluded from the registration process for its fID, the Huduma Namba. A second context arises where an individual is unable to obtain a fID card or UIN after registration. This is the case in Nigeria, where the National Identity Management Commission has been unable to deliver ID cards to the majority of those who have registered under the identity program. The risk of exclusion may increase in Nigeria if the government conditions access to services on possession of a fID card or UIN.

A third scenario involves the inability of an individual to access infrastructures after obtaining a fID card or UIN, due to the breakdown or malfunctioning of the identifying institution’s authentication technology. In Tanzania, for example, some individuals who hold a fID card or UIN are unable to complete their SIM registration due to breakdowns of the data storage systems. There are also numerous reports of people in India being denied access to services because of technology failures. This leaves a large group of individuals vulnerable, particularly where the fID is required to access key services such as SIM card registration. An unpublished 2018 poll carried out in Côte d’Ivoire reveals that over 65% of those who registered for a national ID used it to apply for SIM card services and about 23% for financial services.[5]

The mandatory or service-based model of most identification systems in West Africa takes away individuals’ powers or rights of access to and control over resources and identity, and confers them on the State and private institutions, raising human rights concerns for those who cannot meet the criteria for registration and identification. Thus, a person who would ordinarily move around freely, shop at a grocery store, open a bank account or receive healthcare at a hospital can do so, once mandatory use of the fID commences, only through possession of the fID card or UIN. In Nigeria, for instance, the new national computerized identity card is equipped with a microprocessor designed to host and store multiple e-services and applications, such as biometric e-ID, electronic ID, a payment application and a travel document, and to serve as the individual’s national identity card. A Thales publication also states that in a second phase for the Nigerian fID, driver’s license, eVoting, eHealth or eTransport applications are to be added to the cards. This is a long list of e-services for a country where only about 46% of the population is reported to have access to the internet. A person who loses this ID card or is unable to provide the UIN that digitally represents them could be excluded from all the services and infrastructures to which the fID card or UIN serves as a gateway. This exclusion risk is intensified by the fact that identifying institutions in remote or local areas may lack authentication technologies or an electronic connection to the ID database to verify individuals’ existence whenever they seek to be identified, make a payment, receive healthcare, or travel.

It is important to note that exclusion does not stem only from mandatory fID systems or voluntary but service-integrated ID systems. There are also risks with voluntary ID systems where adequate measures are not taken to protect the data and interests of all those who are registered. Adequate data storage facilities, data protection by design and data privacy regulation are required to protect individuals’ data; otherwise individuals face increased risks of identity theft, fraud and cybercrime, which could shut them out of fundamental services and infrastructures.

The history of political instability, violence and extremism, ethnic and religious conflicts, and disregard for the rule of law in many West African countries also heightens the risk of exclusion. Instances abound: religious extremism, insurgencies and armed conflicts in Northern Nigeria, civilian attacks and unrest in some communities in Burkina Faso, crises and terrorist attacks in Mali, election violence, and military intervention in State governance. An OECD report records over 3,317 violent events in West Africa between 2011 and 2019, with more than 11,911 fatalities over that period. A UN report also puts the number of deaths in Burkina Faso in 2019 at over 1,800, with over 25,000 persons displaced in the same year. This instability can act as a barrier to registration for a fID and lead to exclusion where certain groups of persons are targeted and profiled by state and/or non-state (illegal) actors.

Beyond cases where registration is mandatory or where individuals depend heavily on the infrastructures and services they wish to access, people might also choose to rely less on the fID, or decide not to register at all, out of worries about surveillance, identity theft or targeted disciplinary control, thereby excluding themselves from resources they would ordinarily have accessed. In Nigeria, only about 20% of the population is reported to have registered for the National Identification Number (NIN) (up from about 6% in 2017). Similarly, though implementation of WURI program objectives in Guinea and Côte d’Ivoire commenced in 2018, registration and identification output in both countries remains marginal to date.

World Bank findings and lessons from Phase I reveal that digital identification can exacerbate exclusion and marginalization, and diminish privacy and control over data, despite the benefits it may carry. Some of the challenges identified by the World Bank resonate with the major concerns listed here, including risks of surveillance, discrimination, inequality, distrust between the State and individuals, and legal, political and historical differences among countries. The solutions proposed under the WURI program objectives to address these problems – consultations, dialogues, ethnographic studies, provision of additional financing and capacity – are laudable but insufficient to deal with the root causes. On the contrary, the solutions offered may expose the inadequacies of a digitized State in a region where a large proportion of West Africans lack digital literacy, lack the means to access digital platforms, or operate largely in the informal sector.

Practically, the task of addressing the root causes of most of the problems mentioned above, particularly the major ones involving political instability, institutional inadequacies, corruption, conflict and capacity building, is an arduous one that may require a more domestic, grassroots, bottom-up approach. However, the solution to these challenges is either unknown, difficult or less desirable than the “quick fix” offered by techno-solutionism and reliance on digital identification.

  1. It is uncertain why the conventional wisdom is that West African countries, many of which have functional IDs, specifically need a national digital ID card system, while some of their developed counterparts in Europe and North America lack a national ID card and rely instead on different functional IDs.
  2. Identifying institution is used here to refer to any institution that seeks to authenticate the identity of a person based on the ID card or number that person possesses.
  3. A foundational identity system is an identity system which enables the creation of identities or unique identification numbers used for general purposes, such as national identity cards. A functional identity system is one that is created for or evolves out of a specific use case but may likely be suitable for use across other sectors such as driver’s license, voter’s card, bank number, insurance number, insurance records, credit history, health record, tax records.
  4. Member States of ECOWAS include the Republic of Benin, Burkina Faso, Cape Verde, the Gambia, Ghana, Guinea, Guinea Bissau, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, Togo.
  5. See Savita Bailur, Helene Smertnik & Nnenna Nwakanma, End User Experience with identification in Côte d’Ivoire. Unpublished Report by Caribou Digital.

October 19, 2020. Ngozi Nwanta, JSD program, NYU School of Law, with research interests in systemic analysis of national identification systems, governance of credit data, financial inclusion, and development.

User-friendly Digital Government? A Recap of Our Conversation About Universal Credit in the United Kingdom

TECHNOLOGY & HUMAN RIGHTS


On September 30, 2020, the Digital Welfare State and Human Rights Project hosted the first in its series of virtual conversations entitled “Transformer States: A Conversation Series on Digital Government and Human Rights” exploring the digital transformation of governments around the world. In this first iteration of the series, Christiaan van Veen and Victoria Adelmant interviewed Richard Pope, part of the founding team at the UK Government Digital Service and author of Universal Credit: Digital Welfare. In interviewing a technologist who worked with policy and delivery teams across the UK government to redesign government services, the event sought to explore the promise and realities of digitalized benefits.

Universal Credit (UC), the main working-age benefit for the UK population, represents at once a major political reform and an ambitious digitization project. UC is a “digital by default” benefit in that claims are filed and managed via an online account, and calculations of recipients’ entitlements are also reliant on large-scale automation within government. The Department for Work and Pensions (DWP), the department responsible for welfare in the UK, repurposed the taxation office’s Real-Time Information (RTI) system, which already collected information about employees’ earnings for the purposes of taxation, in order to feed this data about wages into an automated calculation of individual benefit levels. The amount a recipient receives each month from UC is calculated on the basis of this “real-time feed” of information about her earnings as well as on the basis of a long list of data points about her circumstances, including how many children she has, her health situation and her housing. UC is therefore ‘dynamic,’ as the monthly payment that recipients receive fluctuates. Readers can find a more comprehensive explanation of how UC works in Richard’s report.
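To make the “dynamic” calculation concrete, here is a purely illustrative sketch in Python. The function name, field names, work allowance, and taper rate below are hypothetical stand-ins, not the DWP’s actual rules or rates; the point is only that the monthly award fluctuates as the real-time earnings feed changes.

```python
# Illustrative only: a toy model of a dynamic, means-tested monthly benefit.
# All figures and names are hypothetical, not the DWP's actual parameters.

def monthly_award(standard_allowance: float,
                  child_element: float,
                  housing_element: float,
                  reported_earnings: float,
                  work_allowance: float = 300.0,   # hypothetical threshold
                  taper_rate: float = 0.63) -> float:  # hypothetical taper
    """Compute one month's award from that month's reported earnings."""
    maximum = standard_allowance + child_element + housing_element
    # Earnings above the work allowance reduce the award at the taper rate.
    excess = max(0.0, reported_earnings - work_allowance)
    return max(0.0, maximum - taper_rate * excess)

# The same claimant receives different amounts as the earnings feed changes:
print(monthly_award(320, 280, 500, reported_earnings=0))    # a month with no earnings
print(monthly_award(320, 280, 500, reported_earnings=900))  # a part-time month
```

Because the award is recomputed each assessment period from the latest earnings data, the same claimant can receive noticeably different amounts from month to month.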

One “promise” surrounding UC was that it would make interaction with the British welfare system more user-friendly. The 2010 White Paper launching the reforms noted that it would ‘cut through the complexity of the existing system’ through introducing online systems which would be “simpler and easier to understand” and “intuitive.” Richard explained that the design of UC was influenced by broader developments surrounding the government’s digital transformation agenda, whereby “user-centered design” and “agile development” became the norm across government in the design of new digital services. This approach seeks to place the needs of users first and to design around those needs. It also favors an “agile,” iterative way of working rather than designing an entire system upfront (the “waterfall” approach).

Richard explained that DWP designs the UC software itself and releases updates to the software every two weeks: “They will do prototyping, they will do user research based on that prototyping, they will then deploy those changes, and they will then write a report to check that it had the desired outcome,” he said. Through this iterative, agile approach, government has more flexibility and is better able to respond to “unknowns.” One such ‘unknown’ was the Covid-19 pandemic: as the UK “locked down” in March, almost a million new claims for UC were successfully processed in the space of just two weeks. Not only would the old, pre-UC system likely have been unable to meet this surge; UC also compared very favorably with the failures seen in some US states—some New Yorkers, for example, were required to fax their applications for unemployment benefits.

The conversation then turned to the reality of UC from the perspective of recipients, which has often fallen short of this promise. Half of claimants, for example, were unable to make their claim online without help, and DWP was recently required by a tribunal to release figures showing that hundreds of thousands of claims are abandoned each year. The ‘digital first’ principle as applied to UC, in effect requiring all applicants to claim online while offering inadequate alternatives, has been particularly harmful in light of the UK’s ‘digital divide.’ Richard underlined that there is an information problem here – why are those applications being abandoned? We cannot be certain that the sole cause is a lack of digital skills. Perhaps people are put off by the large quantity of information about their lives they are required to enter into the digital system; perhaps they get a job before completing the application; or perhaps they realize how little payment they will receive, or that they will have to wait around five weeks to receive any payment.

But had the UK government not been overly optimistic about future UC users’ access to and ability to use digital systems? For example, the 2012 DWP Digital Strategy stated that “most of our customers and claimants are already online and more are moving online all the time,” while only half of all adults with an annual household income between £6,000 and £10,000 have an internet connection via either broadband or smartphone. Richard agreed that the government had been over-optimistic, but pointed again to the fact that we do not know why users abandon applications or struggle with the claim, such that it is “difficult to unpick which elements of those problems are down to the technology, which elements are down to the complexity of the policy, and which elements are down to a lack of digital skills.”

This question of attributing problems to policy rather than to the technology was a crucial theme throughout the conversation. Organizations such as the Child Poverty Action Group have pointed to instances in which the technology itself causes problems, identifying ways in which the UC interface is not user-friendly, for example. CPAG was commended in the discussion for having “started to care about design” and proposing specific design changes in its reports. Richard noted that certain elements which were not incorporated into the digital design of UC, and elements which were not automated at all, highlight choices which have been made. For example, the system does not display information about additional entitlements, such as transport passes or free prescriptions and dental care, for which UC applicants may be eligible. The fact that the technological design of the system did not feature information about these entitlements demonstrates the importance and power of design choices, but it is unclear whether such design choices were the result of political decisions, or simply omissions by technologists.

Richard noted that some of the political aims towards which UC is directed are in tension with the attempt to use technology to reduce administrative burdens on claimants and to make the welfare state more user-friendly. Though the ‘design culture’ among civil servants genuinely seeks to make things easier for the public, political priorities push in different directions. UC is “hyper means-tested”: it demands a huge amount of data points to calculate a claimant’s entitlement, and it seeks to reward or punish certain behaviors, such as rewarding two-parent families. If policymakers want a system that demands this level of control and sorting of claimants, then the system will place additional administrative burdens on applicants as they have more paperwork to find, they have to contact their landlord to get a signed copy of their lease, and so forth. Wanting this level of means-testing will result in a complex policy and “there is only so much a designer can do to design away that complexity”, as Richard underlined. That said, Richard also argued that part of the problem here is that government has treated policy and the delivery of services as separate. Design and delivery teams hold “immense power” and designers’ choices will be “increasingly powerful as we digitize more important, high-stakes public services.” He noted, “increasingly, policy and delivery are the same thing.”

Richard therefore promotes “government as a platform.” He highlighted the need for a rethink about how the government organizes its work and argued that government should prioritize shared reusable components and definitive data sources. It should seek to break down data silos between departments and have information fed to government directly from various organizations or companies, rather than asking individuals to fill out endless forms. If such an approach were adopted, Richard claimed, digitalization could hugely reduce the burdens on individuals. But, should we go in that direction, it is vital that government become much more transparent around its digital services. There is, as ever, an increasing information asymmetry between government and individuals, and this transparency will be especially important as services become ever-more personalized. Without more transparency about technological design within government, we risk losing a shared experience and shared understanding of how public services work and, ultimately, the capacity to hold government accountable.

October 14, 2020. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Nothing is Inevitable! Main Takeaways from an Event on “Techno-Racism and Human Rights: A Conversation with the UN Special Rapporteur on Racism”

TECHNOLOGY & HUMAN RIGHTS

Nothing is Inevitable! Main Takeaways from an Event on Techno-Racism and Human Rights

A Conversation with the UN Special Rapporteur on Racism

On July 23, 2020, the Digital Welfare State and Human Rights Project hosted a virtual event on techno-racism and human rights. The immediate reason for organizing this conversation was a recent report to the Human Rights Council by the United Nations Special Rapporteur on Racism, Tendayi Achiume, on the racist impacts of emerging technologies. The event sought to further explore these impacts and to question the role of international human rights norms and accountability mechanisms in efforts to address these. Christiaan van Veen moderated the conversation between the Special Rapporteur, Mutale Nkonde, CEO of AI for the People, and Nanjala Nyabola, author of Digital Democracy, Analogue Politics.

This event and Tendayi’s report come at a moment of multiple international crises, including a global wave of protests and activism against police brutality and systemic racism after the killing of George Floyd, and a pandemic which, among many other tragic impacts, has laid bare how deeply embedded inequality, racism, xenophobia, and intolerance are within our societies. Just last month, as Tendayi explained during the event, the Human Rights Council held a historic urgent debate on systemic racism and police brutality in the United States and elsewhere, which would have been inconceivable just a few months ago.

The starting point for the conversation was an attempt to define techno-racism and provide varied examples from across the globe. This global dimension was especially important as so many discussions on techno-racism remain US-centric. Speakers were also asked to discuss not only private use of technology or government use within the criminal justice area, but to address often-overlooked technological innovation within welfare states, from social security to health care and education.

Nanjala started the conversation by defining techno-racism as the use of technology to lock in power disparities that are predicated on race. Such techno-racism can occur within states: Mutale discussed algorithmic hiring decisions and facial recognition technologies used in housing in the United States, while Tendayi mentioned racist digital employment systems in South America. But techno-racism also has a transnational dimension: technologies entrench power disparities between States that are building technologies and States that are buying them; Nanjala called this “digital colonialism.”

The speakers all agreed that emerging technologies are consistently presented as agnostic and neutral, despite being loaded with the assumptions of their builders (disproportionately white males educated at elite universities) about how society works. For example, the technologies increasingly used in welfare states are designed with the idea that people living in poverty are constantly attempting to defraud the government; Christiaan and Nanjala discussed an algorithmic benefit fraud detection tool used in the Netherlands, which was found by a Dutch court to be exclusively targeting neighborhoods with low-income and minority residents, as an excellent example of this.

Nanjala also mentioned the ‘Huduma Namba’ digital ID system in Kenya as a powerful example of the politics and complexity underneath technology. She explained the racist history of ID systems in Kenya – designed by colonial authorities to enable the criminalization of black people and the protection of white property – and argued that digitalizing a system that was intended to discriminate “will only make the discrimination more efficient”. This exacerbation of discrimination is also visible within India’s ‘Aadhaar’ digital ID system, through which existing exclusions have been formalized, entrenched, and anesthetized, enabling those in power to claim that exclusion, such as the removal of hundreds of thousands of people from food distribution lists, simply results from the operation of the system rather than from political choices.

Tendayi explained that she wrote her report in part to address her “deep frustration” with the fact that race and non-discrimination analyses are often absent from debates on technology and human rights at the UN. Though she named a report by the Center Faculty Director Philip Alston, prepared in cooperation with the Digital Welfare State and Human Rights Project, as one of few exceptions, discussions within the international human rights field remain focused upon privacy and freedom of expression and marginalize questions of equality. But techno-racism should not be an afterthought in these discussions, especially as emerging technologies often exacerbate pre-existing racism and enable a completely different scale of discrimination.

Given the centrality of Tendayi’s Human Rights Council report to the conversation, Christiaan asked the speakers whether and how international human rights frameworks and norms can help us evaluate the implications of techno-racism, and what potential advantages global human rights accountability mechanisms can bring relative to domestic legal remedies. Mutale expressed that we need to ask, “who is human in human rights?” She noted that the racist design of these technologies arises from the notion that Black people are not human. Tendayi argued that there is, therefore, also a pressing need to change existing ways of thinking about who violates human rights. During the aforementioned urgent debate in the Human Rights Council, for example, European States and Australia had worked to water down a powerful draft resolution and blocked the establishment of a Commission of Inquiry to investigate systemic racism specifically in the United States, on the grounds that it is a liberal democracy. Mutale described this as another indication that police brutality against Black people in a Western country like the United States is too easily dismissed as not of international concern.

Tendayi concurred and expressed her misgivings about the UN’s human rights system. She explained that the human rights framework is deeply implicated in transnational racially discriminatory projects of the past, including colonialism and slavery, and noted that powerful institutions (including governments, the UN, and international human rights bodies) are often “ground zero” for systemic racism. Mutale echoed this and urged the audience to consider how international human rights organs like the Human Rights Council may constitute a political body for sustaining white supremacy as a power system across borders.

Nanjala also expressed concerns with the human rights regime and its history, but identified three potential benefits of the human rights framework in addressing techno-racism. First, the human rights regime provides another pathway outside domestic law for demanding accountability and seeking redress. Second, it translates local rights violations into international discourse, thus creating potential for a global accountability movement and giving victims around the world a powerful and shared rights-based language. Third, because of its relative stability since the 1940s, human rights legal discourse helps advocates develop genealogies of rights violations, document repeated institutional failures, and establish patterns of rights violations over time, allowing advocates to amplify domestic and international pressure for accountability. Tendayi added that she is “invested in a future that is fundamentally different from the present,” and that human rights can potentially contribute to transforming political institutions and undoing structures of injustice around the world.

In addressing an audience question about technological responses to COVID-19, Mutale described how an algorithm designed to assign scarce medical equipment such as ventilators systematically discounted black patient viability. Noting that health outcomes around the world are consistently correlated with poverty and life experiences (including the “weathering effects” suffered by racial and ethnic minorities), she warned that, by feeding algorithms data from past hospitalizations and health outcomes, “we are training these AI systems to deem that black lives are not viable.” Tendayi echoed this, suggesting that our “baseline assumption” should be that new technologies will have discriminatory impacts simply because of how they are made and the assumptions that inform their design.

In response to an audience member’s concern that governments and private actors will adopt racist technologies regardless, Nanjala countered that “nothing is inevitable” and “everything is a function of human action and agency.” San Francisco’s decision to ban the use of facial recognition software by municipal authorities, for example, demonstrates that the use of these technologies is not inevitable, even in Silicon Valley. Tendayi, in her final remarks, noted that “worlds are being made and remade all of the time” and that it is vital to listen to voices, such as those of Mutale, Nanjala, and the Center’s Digital Welfare State Project, which are “helping us to think differently.” “Mainstreaming” the idea of techno-racism can help erode the presumption of “tech neutrality” that has made political change related to technology so difficult to achieve in the past. Tendayi concluded that this is why it is so vital to have conversations like these.

We couldn’t agree more!

To reflect that this was an informal conversation, first names are used in this story. 

July 29, 2020. Victoria Adelmant and Adam Ray.

Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a Masters degree from Yale University and previously worked as the CFO of Songkick.

Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

 

Global Justice Clinic and Human Rights Organizations Call on Government of Haiti to Cancel a Planned Raid

HUMAN RIGHTS MOVEMENT


The Global Justice Clinic, twenty-three other human rights organizations, and a number of individuals signed on to a letter calling for the government of Haiti to cancel a planned gang raid that it announced on Friday April 24, 2020.

In a statement to the press, Haiti’s Minister of Justice and Public Security said that residents of the impoverished community of Village de Dieu in Port-au-Prince had 72 hours to evacuate their homes and their neighborhood. The government announced that it would conduct a gang raid and indicated that, beyond the 72-hour window, it absolved itself of responsibility for what happened in the area. There is extreme and understandable concern within Haiti that the gang raid may turn into indiscriminate violence. As the letter explains, in the past two years the government has been implicated in massacres against civilians. Further, there is evidence that a former police officer who allegedly perpetrated past massacres has been coordinating with the Haitian National Police to carry out Monday’s raid. The signatory organizations and individuals call on the government of Haiti to cancel the raid and to protect the human rights and physical safety of all Haitian people.

As of Wednesday, April 29, 2020, the raid has not occurred. However, human rights organizations in Haiti and beyond continue to pressure the Haitian government to publicly declare that it will cancel the raid, and that it will address insecurity in a way that respects the human rights of the Haitian people, particularly the most vulnerable.

Profiling the Poor in the Dutch Welfare State

TECHNOLOGY AND HUMAN RIGHTS


Report on court hearing in litigation in the Netherlands about digital welfare fraud detection system (‘SyRI’)

On Tuesday, October 29, 2019, I attended a hearing before the District Court of The Hague (the Netherlands) in litigation by a coalition of Dutch civil society organizations challenging the Dutch government’s System Risk Indication (“SyRI”). The Digital Welfare State and Human Rights Project at NYU Law, which I direct, recently collaborated with the United Nations Special Rapporteur on extreme poverty and human rights in preparing an amicus brief to the District Court. The Special Rapporteur became involved in this case because SyRI has exclusively been used to detect welfare fraud and other irregularities in poor neighborhoods in four Dutch cities and affects the right to social security and to privacy of the poorest members of Dutch society. This litigation may also set a highly relevant legal precedent with impact beyond Dutch borders in an area that has received relatively little judicial scrutiny to date.

Lies, damn lies, and algorithms

What is SyRI? The formal answer can be found in legislation and implementing regulations from 2014. Since then, in order to coordinate government action against the illicit use of government funds and benefits in the areas of social security, tax benefits and labor law, Dutch law has allowed the sharing of data between municipalities, welfare authorities, tax authorities and other relevant government authorities. A total of 17 categories of data held by government authorities may be shared in this context, from employment and tax data, to benefit data, health insurance data and enforcement data, among other categories of digitally stored information. Government authorities wishing to cooperate in a concrete SyRI project request the Minister for Social Affairs and Employment to use the SyRI tool by pooling and analyzing the relevant data from the various authorities using an algorithmic risk model.

The Minister has outsourced the tasks of pooling and analyzing the data to a private foundation, somewhat unfortunately named ‘The Intelligence Agency’ (‘Inlichtingenbureau’). The Intelligence Agency pseudonymizes the data pool, analyzes the data using an algorithmic risk model and creates a file for those individuals (or corporations) who are deemed to be at a higher risk of being involved in benefit fraud and other irregularities. The Minister then analyzes these files and notifies the cooperating government authorities of those individuals (or corporations) who are considered at higher risk of committing benefit fraud or other irregularities (a ‘risk notification’). Risk notifications are included in a register for two years. Those who are included in the register are not actively notified of their registration, but they can gain access to their information in the register upon specific request.

The preceding understanding of how the system works can be derived from the legislative texts and history, but a surprising amount of uncertainty remains about how exactly SyRI works in practice. This became abundantly clear in the hearing in the SyRI case before the District Court of The Hague on October 29. The court is assessing the plaintiffs’ claim that SyRI, as legislated in 2014, violates norms of applicable international law, including the rights to privacy, data protection and a fair trial recognized in the European Convention on Human Rights, the Charter of Fundamental Rights of the European Union, the International Covenant on Civil and Political Rights and the EU General Data Protection Regulation. In a courtroom packed with representatives of the 8 plaintiffs, reporters and concerned citizens from areas where SyRI has been used, the three-judge panel’s first question asked the parties to clarify their radically different views as to what exactly SyRI is.

According to the State, SyRI merely compares data from different government databases, operated by different authorities, in order to find simple inconsistencies. Although this analysis is undertaken with the assistance of an algorithm, the State underlined that this algorithm operates on the basis of pre-defined indicators of risk and that the algorithm is not of the ‘learning’ type. The State further emphasized that SyRI is not a Big Data or data-mining system, but that it employs a targeted analysis on the basis of a limited dataset with a clearly defined objective. It also argued that a risk notification by SyRI is merely a – potential – starting point for further investigations by individual government authorities and does not have any direct and automatic legal consequences such as the imposition of a fine or the suspension or withdrawal of government benefits or assistance.

But plaintiffs strongly contested the State’s characterization of SyRI. They claimed instead that SyRI is not narrowly targeted but aims at entire (poor) neighborhoods, that diverse and unconnected categories of personal data are brought together in SyRI projects, and that the resulting data exchange and analysis occur on a large scale. In their view, SyRI projects could therefore be qualified as projects involving problematic uses of Big Data, data-mining and profiling. They also made clear that it is exceedingly difficult for them or the District Court to assess what SyRI actually is or is not doing, because key elements of the system remain secret and the relevant legislation does not restrict the methods used: the requests by cooperating authorities to undertake a SyRI project, the risk model used, and the ways in which personal data can be processed all remain hidden from outside scrutiny.

Game the system, leave your water tap running

The District Court asked a series of probing and critical follow-up questions in an attempt to clarify the exact functioning of SyRI and to understand the justification for the secrecy surrounding it. One can sympathize with the court’s attempt to grasp the basic facts about SyRI in order to enable it to undertake its task of judicial oversight. Pushed by the District Court to clarify why the State could not be more open about the functioning of SyRI, the attorney for the State warned about welfare beneficiaries ‘gaming the system’. The attorney referred to a pilot project pre-dating SyRI, in which welfare authority data about individuals claiming low-income benefits was matched with usage data held by publicly owned drinking water companies, in order to identify beneficiaries who committed fraud by falsely claiming they were living alone while actually living together (so as to claim a higher benefit level). Making it known that water usage is a ‘risk indicator’, the attorney argued, could lead beneficiaries to leave their taps running to avoid detection. Some individuals attending the hearing could be heard snickering when this prediction was made.

Another fascinating exchange between the judges and the attorney for the State dealt with the standards applied by the Minister when assessing a request for a SyRI project by municipal and other government authorities. According to the State’s attorney, what would commonly happen is that a municipality has a ‘problem neighborhood’ and wants to tackle its problems, which are presumed to include welfare fraud and other irregularities, through SyRI. The request to the Minister is typically based ‘on the law, experience and logical thinking’ according to the State. Unsatisfied with this reply, the District Court probed the State for a more concrete justification of the use of SyRI and the precise standards applied to justify its use: ‘In Bloemendaal (one of the richest municipalities of the Netherlands) a lot of people enjoy going to classical concerts; in a problem neighborhood, there are a lot of people who receive government welfare benefits; why is that a justification for the use of SyRI?’, the Court asked. The attorney for the State had to admit that specific neighborhoods were targeted because those areas housed more people who were on welfare benefits and that, while participating authorities usually have no specific evidence that there are high(er) levels of benefit fraud in those neighborhoods, this higher proportion of people on benefits is enough reason to use SyRI.

Finally, and of great relevance to the intensity of the Court’s judicial scrutiny, the question of the gravity of the invasion of human rights – more specifically, the right to privacy – was a central topic of the hearing. The State argued that the data being shared and analyzed was existing data and not new data. It furthermore argued that for those individuals whose data was shared and analyzed, but who were not considered a ‘higher risk’, there was no harm at all: their data had been pseudonymized and was removed after the analysis. The opposing view by plaintiffs was that the government-held data that was shared and analyzed in SyRI was not originally collected for the specific purpose of enforcement. Plaintiffs also argued that – due to the wide categories of data that were potentially shared and analyzed in SyRI – a very intimate profile could be made of individuals in targeted neighborhoods: ‘This is all about profiling and creating files on people’.

Judgment expected in early 2020

The District Court announced that it expects to publish its judgment in this case on 29 January 2020. There are many questions to be answered by the Court. In non-legal language, they include at least the following: How does SyRI work exactly? Does it matter whether SyRI uses a relatively straightforward ‘decision-tree’ type of algorithm or, instead, machine learning algorithms? What is the harm in pooling previously siloed government data? What is the harm in classifying an individual as ‘high risk’? Does SyRI discriminate on the basis of socio-economic status, migrant status, race or color? Does the current legislation underpinning SyRI give sufficient clarity and adequate legal standards to meaningfully curb the use of State power to the detriment of individual rights? Can current levels of secrecy be maintained in a democracy based on the rule of law?

In light of the above, there will be many eyes focused on the Netherlands in January when a potentially groundbreaking legal precedent will be set in the debate on digital welfare states and human rights.

November 1, 2019.  Christiaan van Veen, Digital Welfare State & Human Rights Project (2019-2022), Center for Human Rights and Global Justice at NYU School of Law.