
TECHNOLOGY & HUMAN RIGHTS

Silencing and Stigmatizing the Disabled Through Social Media Monitoring

In 2019, the United States’ Social Security program accounted for 23% of the federal budget. Beyond retirement benefits, the program provides Supplemental Security Income (SSI) and Social Security Disability Insurance (SSDI), benefits for disabled individuals who are unable to work. A multimillion-dollar disability fraud case in 2014 prompted the Social Security Administration to evaluate its controls for identifying and preventing disability fraud. The review found that social media played a “critical role” in that case, “as disability claimants were seen in photos on their personal accounts, riding on jet skis, performing physical stunts in karate studios, and driving motorcycles.” Although Social Security disability fraud is rare, the Social Security Administration has since adopted social media monitoring tools that use social media posts as a factor in determining whether an ineligible individual is committing disability fraud. And although human rights advocates have examined how such digitally enabled fraud detection tools violate privacy rights, few have explored the other human rights violations that result from the new digital tools governments employ in the fight against benefit fraud.

To help fill this gap, this summer I conducted interviews with disabled individuals applying for and receiving Social Security disability benefits, whose experiences are largely invisible in society, in order to give voice to their perspectives. From these interviews, it became clear that automated tools such as social media monitoring perpetuate the stigmatization of disabled people. Interviewees reported that, when they knew they were being monitored on social media, they felt compelled to modify their behavior to fit the stigmatized image of how disabled people should look and behave. These behavior modifications prevent disabled individuals from integrating into society and from accessing services necessary to their survival.

Since the creation of social benefits, disabled people have been stigmatized in society, often viewed as either incapable of working or unwilling to work. Those who work are perceived as incapable employees, while those who cannot work are viewed as lazy. Social media monitoring is the product of that stigma, as it relies on assumptions about how a disabled person should look and act. One individual I interviewed recounted that when they sought advice on the application process, people told them, “You can never post anything on social media of you having fun ever. Don’t post pictures of you smiling, not until after you are approved and even then, you have to make sure you’re careful and keep it on private.” The pressure not to smile or outwardly express happiness reflects the tendency of family members and professionals to underestimate a disabled individual’s quality of life, an underestimation that feeds the assumption that “real” disabled people have a poor quality of life and cannot be happy.

Social media monitoring also relies on potentially inaccurate data, because social media does not give a comprehensive view of a person’s life. People typically present their lives through an exaggerated, positive lens that glosses over more difficult elements. Schwartz and Halegoua describe this as the “spatial self,” which refers to how individuals “document, archive, and display their experience and/or mobility within space and place in order to represent or perform aspects of their identity to others.” Scholars of social media have published numerous studies on how people use images, videos, status updates, and comments to present themselves in a highly curated way.

Contrary to the positive spin most individuals put on their social media, disabled individuals feel compelled to “curate” their social media activity in a way that presents them as weak and incapable, to fit the narrative of who deserves disability benefits. For them, receiving disability benefits is crucial to survive and pay for basic necessities.

The individuals I interviewed shared how such surveillance tools not only modify their behavior but also prevent them from exercising a whole range of human rights through social media. These rights are essential for all people, but particularly for disabled individuals, because the silencing of their voices strips away their ability to advocate for their community and to form social relationships. Although social media offers avenues for socialization and political engagement to all of its users, it opens up especially significant opportunities for disabled individuals. Participants explained that without social media they would be unable to form these relationships offline, where accommodations for their disability do not exist. Disabled individuals greatly value sharing on social media because the medium enables them to highlight aspects of their identity beyond being disabled. One individual told me how important social media is for socializing, particularly during the Covid-19 pandemic: “I use Facebook mostly as a method of socializing especially right now with the pandemic going on, and occasionally political engagement.” Yet participants also felt they needed to modify their behavior on social media, with one saying, “I don’t think anybody feels good being monitored all the time and that’s essentially what I feel like now post-disability. I can’t have fun or it will be taken away.” This is fundamentally a human rights issue.

These rights include equality in social life and the ability to participate in the broader community online. In the long term, these inequalities deepen when the voices and experiences of disabled people are not taken into account by those outside the disability community. Reports on the disability community broadly agree that excluding disabled people and their input undermines their well-being. Ignoring or silencing disabled people prevents them from advocating for themselves and participating in decisions about their own lives, making them vulnerable to disability discrimination, exclusion, violence, poverty, and untreated health problems. For example, one participant I interviewed shared how the process reinforces disability discrimination through behavior modification:

There was no room for me to focus on anything I could still do. Because the disability process is exactly that, it’s finding out what you can’t do. You have to prove that your life sucks. That adds to the disability shame and stigma too. So anyways, dehumanizing.

In addition to the social and economic rights mentioned above, social media monitoring also impacts the enjoyment of civil and political rights by disabled individuals applying for and receiving Social Security disability benefits. Richards and Hartzog write, “Trust within information relationships is critical for free expression and a precursor to many kinds of political engagement.” They highlight how the Internet and social media have been used both to access political information and to engage politically, with large impacts on politics in general. Participants told me that social media was their primary method for engaging in activism and contributing to political thought. The individuals I interviewed use social media to engage with political representatives on disability-related legislation and to bring disability-related issues to their representatives’ attention. By restricting freedom of expression, social media monitoring can shut disabled individuals out of the political sphere and prevent them from exercising other civil and political rights.

I am a disabled person who recently qualified for disability benefits, so I personally understand the pressure to prove that I deserve the benefits and accommodations allocated to people who are “actually” disabled. Social media monitoring perpetuates the harmful narrative that disabled individuals applying for and receiving disability benefits must prove their eligibility by modifying their behavior to fit disability stereotypes. This behavior modification restricts our ability to form meaningful relationships, push back against disability stigma, and advocate for ourselves through political engagement. As social media monitoring pushes us off social media platforms, our voices are silenced and this exclusion leads to further social inequalities. As disability rights activism continues to transform in the United States, I hope that this research will inspire future studies into disability rights, experiences of applying for and receiving SSI and SSDI, and how these may intersect with human rights beyond privacy rights.

October 29, 2020. Sarah Tucker, Columbia University Human Rights graduate program. She uses her experiences as a disabled woman working in tech to advocate for the Disability community.


TECHNOLOGY & HUMAN RIGHTS

Digital Identification and Inclusionary Delusion in West Africa 

Over 1 billion people worldwide have been categorized as invisible, of whom about 437 million are reported to be in sub-Saharan Africa. In West Africa alone, the World Bank has identified a huge “identification gap,” and different identification projects are underway to identify millions of invisible West Africans.[1] These individuals are regarded as invisible not because they are unrecognizable or non-existent, but because they do not fit a certain measure of visibility that matches the existing or new database(s) of an identifying institution[2], such as the State or international bodies.

One existing digital identification project in West Africa is the West Africa Unique Identification for Regional Integration and Inclusion (WURI) program initiated by the World Bank under its Identification for Development initiative. The WURI program is intended to serve as an umbrella under which West African States can collaborate with the Economic Community of West African States (ECOWAS) to design and build a digital identification system, financed by the World Bank, that would create foundational IDs (fIDs)[3] for all persons in the ECOWAS region.[4] Many West African States with past failed attempts at digitizing their identification systems have embraced assistance via WURI. The goal of WURI is to enable access to services for millions of people and to ensure “mutual recognition of identities” across countries. The promise of digital identification is that it will facilitate development by promoting regional integration, security, social protection of aid beneficiaries, financial inclusion, reduction of poverty and corruption, and healthcare insurance and delivery, and by acting as a stepping stone to an integrated digital economy in West Africa. In this way, millions of invisible individuals would become visible to the state and be included financially, politically, and socially.

Nevertheless, the outlook of WURI and development agencies’ reliance on digital IDs reflect techno-solutionism: a reliance on technologies as the approach to institutional challenges and developmental goals in West Africa. This reliance on digital technologies does not address some of the major root causes of developmental delays in these countries and may instead worsen matters by excluding the many people who either cannot be identified or are excluded by technological failures. This exclusion emerges in a number of ways, including through the service-based structure and/or mandatory nature of many digital identification projects, which adopt a stance of exclusion before inclusion. Where access to services and infrastructures, such as opening a bank account, registering SIM cards, obtaining healthcare, or receiving government aid and benefits, is made conditional on registration and possession of a national ID card or unique identification number (UIN), individuals are excluded by default unless they register for and possess the national ID card or UIN.

There are three contexts in which exclusion may arise. First, an individual may be unable to register for an fID. For instance, in Kenya, many individuals without identity verification documents like birth certificates were excluded from the registration process for its fID, the Huduma Namba. A second context arises where an individual is unable to obtain an fID card or unique identification number (UIN) after registration. This is the case in Nigeria, where the National Identity Management Commission has been unable to deliver ID cards to the majority of those who have registered under the identity program. The risk of exclusion may increase in Nigeria if the government conditions access to services on possession of an fID card or UIN.

A third scenario involves the inability of an individual to access infrastructures after obtaining an fID card or UIN, due to the breakdown or malfunctioning of the identifying institution’s authentication technology. In Tanzania, for example, some individuals who hold an fID card or UIN are unable to complete their SIM registration because of breakdowns in the data storage systems. There are also numerous reports of people being denied access to services in India because of technology failures. This leaves a large group of individuals vulnerable, particularly where the fID is required to access key services such as SIM card registration. An unpublished 2018 poll carried out in Côte d’Ivoire reveals that over 65% of those who registered for a national ID used it to apply for SIM card services and about 23% for financial services.[5]

The mandatory or service-based model of most identification systems in West Africa takes rights of access to and control over resources and identity away from individuals and confers them on the State and private institutions, raising human rights concerns for those unable to meet the criteria for registration and identification. Thus, a person who would ordinarily move around freely, shop at a grocery store, open a bank account, or receive healthcare from a hospital can, once mandatory use of the fID commences, do so only through possession of the fID card or UIN. In Nigeria, for instance, the new national computerized identity card is equipped with a microprocessor designed to host and store multiple e-services and applications, such as biometric e-ID, electronic ID, a payment application, and a travel document, in addition to serving as the national identity card. A Thales publication also states that in a second phase of the Nigerian fID, driver’s license, eVoting, eHealth, or eTransport applications are to be added to the cards. This is a long list of e-services for a country where only about 46% of the population is reported to have access to the internet. Where a person loses this ID card or is unable to provide the UIN that digitally represents them, that person would potentially be excluded from all the services and infrastructures to which the fID card or UIN serves as a gateway. This exclusion risk is intensified by the fact that identifying institutions in remote or local areas may lack authentication technologies or an electronic connection to the ID database to verify the existence of individuals whenever they seek to be identified, make a payment, receive healthcare, or travel.

It is important to note that exclusion does not stem only from mandatory fID systems or voluntary but service-integrated ID systems. There are also risks with voluntary ID systems where adequate measures are not taken to protect the data and interests of all those who are registered. Adequate data storage facilities, data protection designs, and data privacy regulation are required to protect individuals’ data; otherwise, individuals face increased risks of identity theft, fraud, and cybercrime, which can exclude them and shut them off from fundamental services and infrastructures.

The history of political instability, violence and extremism, ethnic and religious conflicts, and disregard for the rule of law in many West African countries also heightens the risk of exclusion. Instances abound: religious extremism, insurgencies, and armed conflicts in Northern Nigeria; civilian attacks and unrest in communities in Burkina Faso; crises and terrorist attacks in Mali; election violence; and military intervention in State governance. An OECD report records over 3,317 violent events in West Africa between 2011 and 2019, with fatalities rising above 11,911 over that period. A UN report also puts the number of deaths in Burkina Faso at over 1,800 in 2019, with over 25,000 persons displaced in the same year. This instability can act as a barrier to registration for an fID and lead to exclusion where certain groups are targeted and profiled by state and/or non-state (illegal) actors.

Beyond cases where registration is mandatory or where individuals are highly dependent on the infrastructures and services they wish to access, people may also opt to rely less on the fID, or decide not to register at all, due to worries about surveillance, identity theft, or targeted disciplinary control, thereby excluding themselves from resources they would ordinarily have accessed. In Nigeria, only about 20% of the population is reported to have registered for the National Identification Number (NIN) (about 6% in 2017). Similarly, although implementation of WURI program objectives in Guinea and Côte d’Ivoire commenced in 2018, registration and identification output in both countries remains very low to date.

World Bank findings and lessons from Phase I reveal that digital identification can exacerbate exclusion and marginalization, and diminish privacy and control over data, despite the benefits it may carry. Some of the challenges identified by the World Bank resonate with the major concerns listed here, including risks of surveillance, discrimination, inequality, distrust between the State and individuals, and legal, political, and historical differences among countries. The solutions proposed under the WURI program objectives to address these problems – consultations, dialogues, ethnographic studies, and the provision of additional financing and capacity – are laudable but insufficient to deal with the root causes. On the contrary, the solutions offered might reveal the inadequacies of a digitized State in a West Africa where a large share of the population lacks digital literacy, lacks the means to access digital platforms, or operates largely in the informal sector.

Practically, the task of addressing the root causes of most of the problems mentioned above, particularly the major ones involving political instability, institutional inadequacies, corruption, conflicts, and capacity building, is an arduous one that may require a more domestic, grassroots, bottom-up approach. However, the solution to these challenges is either unknown, difficult, or less desirable than the “quick fix” offered by techno-solutionism and reliance on digital identification.

  1. It is uncertain why the conventional wisdom is that West African countries, many of which have functional IDs, specifically need a national digital ID card system, while some of their developed counterparts in Europe and North America lack a national ID card and rely instead on different functional IDs.
  2. Identifying institution is used here to refer to any institution that seeks to authenticate the identity of a person based on the ID card or number that person possesses.
  3. A foundational identity system enables the creation of identities or unique identification numbers used for general purposes, such as national identity cards. A functional identity system is created for, or evolves out of, a specific use case but may be suitable for use across other sectors; examples include a driver’s license, voter’s card, bank number, insurance number, insurance records, credit history, health records, and tax records.
  4. Member States of ECOWAS include the Republic of Benin, Burkina Faso, Cape Verde, the Gambia, Ghana, Guinea, Guinea Bissau, Liberia, Mali, Niger, Nigeria, Senegal, Sierra Leone, Togo.
  5. See Savita Bailur, Helene Smertnik & Nnenna Nwakanma, End User Experience with identification in Côte d’Ivoire. Unpublished Report by Caribou Digital.

October 19, 2020. Ngozi Nwanta, JSD program, NYU School of Law with research interests in systemic analysis of national identification systems, governance of credit data, financial inclusion, and development.


TECHNOLOGY & HUMAN RIGHTS

User-friendly Digital Government? A Recap of Our Conversation About Universal Credit in the United Kingdom

On September 30, 2020, the Digital Welfare State and Human Rights Project hosted the first in its series of virtual conversations entitled “Transformer States: A Conversation Series on Digital Government and Human Rights,” exploring the digital transformation of governments around the world. In this first iteration of the series, Christiaan van Veen and Victoria Adelmant interviewed Richard Pope, part of the founding team at the UK Government Digital Service and author of Universal Credit: Digital Welfare. In interviewing a technologist who worked with policy and delivery teams across the UK government to redesign government services, the event sought to explore the promise and realities of digitalized benefits.

Universal Credit (UC), the main working-age benefit for the UK population, represents at once a major political reform and an ambitious digitization project. UC is a “digital by default” benefit in that claims are filed and managed via an online account, and calculations of recipients’ entitlements are also reliant on large-scale automation within government. The Department for Work and Pensions (DWP), the department responsible for welfare in the UK, repurposed the taxation office’s Real-Time Information (RTI) system, which already collected information about employees’ earnings for the purposes of taxation, in order to feed this data about wages into an automated calculation of individual benefit levels. The amount a recipient receives each month from UC is calculated on the basis of this “real-time feed” of information about her earnings as well as on the basis of a long list of data points about her circumstances, including how many children she has, her health situation and her housing. UC is therefore ‘dynamic,’ as the monthly payment that recipients receive fluctuates. Readers can find a more comprehensive explanation of how UC works in Richard’s report.
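To make the mechanics concrete, here is a minimal sketch of how such a dynamic, means-tested calculation might look in code. The figures, element names, and taper logic below are illustrative placeholders, not DWP’s actual rules or parameters; the point is simply that each month’s award is recomputed from the real-time earnings feed together with data about the claimant’s circumstances.

```python
# Illustrative sketch of a dynamic, means-tested monthly award in the style
# of UC. All amounts and rules are hypothetical placeholders, not DWP's
# actual parameters: the award is recomputed each month from a real-time
# earnings feed plus data points about household circumstances.

def monthly_award(earnings: float, children: int, housing_costs: float,
                  standard_allowance: float = 400.0,
                  child_element: float = 250.0,
                  work_allowance: float = 300.0,
                  taper_rate: float = 0.63) -> float:
    """Compute one month's award from that month's reported earnings."""
    # Entitlement builds up from a standard allowance plus per-circumstance elements.
    entitlement = standard_allowance + children * child_element + housing_costs
    # Earnings above the work allowance reduce the award at the taper rate.
    tapered = max(0.0, earnings - work_allowance) * taper_rate
    return max(0.0, entitlement - tapered)

# The award fluctuates month to month as the earnings feed changes:
print(monthly_award(earnings=0.0, children=2, housing_costs=600.0))     # no work that month
print(monthly_award(earnings=1200.0, children=2, housing_costs=600.0))  # part-time work
```

Even this toy version shows why the system is “dynamic”: a change in a single data point in the feed changes the next month’s payment automatically.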

One “promise” surrounding UC was that it would make interaction with the British welfare system more user-friendly. The 2010 White Paper launching the reforms stated that it would “cut through the complexity of the existing system” by introducing online systems that would be “simpler and easier to understand” and “intuitive.” Richard explained that the design of UC was influenced by broader developments in the government’s digital transformation agenda, whereby “user-centered design” and “agile development” became the norm across government in the design of new digital services. This approach seeks to place the needs of users first and to design around those needs. It also favors an “agile,” iterative way of working rather than designing an entire system upfront (the “waterfall” approach).

Richard explained that DWP designs the UC software itself and releases updates every two weeks: “They will do prototyping, they will do user research based on that prototyping, they will then deploy those changes, and they will then write a report to check that it had the desired outcome,” he said. Through this iterative, agile approach, government has more flexibility and is better able to respond to “unknowns.” One such unknown was the Covid-19 pandemic: as the UK “locked down” in March, almost a million new claims for UC were successfully processed in the space of just two weeks. The old, pre-UC system would have been unlikely to withstand this surge, and the response also compared very favorably with the failures seen in some US states—some New Yorkers, for example, were required to fax their applications for unemployment benefits.

The conversation then turned to the reality of UC from the perspective of recipients. For example, half of claimants were unable to make their claim online without help, and DWP was recently required by a tribunal to release figures which show that hundreds of thousands of claims are abandoned each year. The ‘digital first’ principle as applied to UC, in effect requiring all applicants to claim online and offering inadequate alternatives, has been particularly harmful in light of the UK’s ‘digital divide.’ Richard underlined that there is an information problem here – why are those applications being abandoned? We cannot be certain that the sole cause is a lack of digital skills. Perhaps people are put off by the large quantity of information about their lives they are required to enter into the digital system, or people get a job before completing the application, or they realize how little payment they will receive, or that they will have to wait around five weeks to receive any payment.

But had the UK government not been overly optimistic about future UC users’ access to, and ability to use, digital systems? The 2012 DWP Digital Strategy, for example, stated that “most of our customers and claimants are already online and more are moving online all the time,” yet only half of all adults with an annual household income between £6,000 and £10,000 have an internet connection via broadband or smartphone. Richard agreed that the government had been over-optimistic, but pointed again to the fact that we do not know why users abandon applications or struggle with the claim, such that it is “difficult to unpick which elements of those problems are down to the technology, which elements are down to the complexity of the policy, and which elements are down to a lack of digital skills.”

This question of attributing problems to policy rather than to technology was a crucial theme throughout the conversation. Organizations such as the Child Poverty Action Group (CPAG) have pointed to instances in which the technology itself causes problems, identifying ways in which the UC interface is not user-friendly. CPAG was commended in the discussion for having “started to care about design” and for proposing specific design changes in its reports. Richard noted that certain elements that were not incorporated into the digital design of UC, or were not automated at all, highlight choices that have been made. For example, the system does not display information about additional entitlements, such as transport passes or free prescriptions and dental care, for which UC applicants may be eligible. That the system’s technological design omits this information demonstrates the importance and power of design choices, though it is unclear whether such choices were the result of political decisions or simply omissions by technologists.

Richard noted that some of the political aims towards which UC is directed are in tension with the attempt to use technology to reduce administrative burdens on claimants and to make the welfare state more user-friendly. Though the ‘design culture’ among civil servants genuinely seeks to make things easier for the public, political priorities push in different directions. UC is “hyper means-tested”: it demands a huge amount of data points to calculate a claimant’s entitlement, and it seeks to reward or punish certain behaviors, such as rewarding two-parent families. If policymakers want a system that demands this level of control and sorting of claimants, then the system will place additional administrative burdens on applicants as they have more paperwork to find, they have to contact their landlord to get a signed copy of their lease, and so forth. Wanting this level of means-testing will result in a complex policy and “there is only so much a designer can do to design away that complexity”, as Richard underlined. That said, Richard also argued that part of the problem here is that government has treated policy and the delivery of services as separate. Design and delivery teams hold “immense power” and designers’ choices will be “increasingly powerful as we digitize more important, high-stakes public services.” He noted, “increasingly, policy and delivery are the same thing.”

Richard therefore promotes “government as a platform.” He highlighted the need for a rethink about how the government organizes its work and argued that government should prioritize shared reusable components and definitive data sources. It should seek to break down data silos between departments and have information fed to government directly from various organizations or companies, rather than asking individuals to fill out endless forms. If such an approach were adopted, Richard claimed, digitalization could hugely reduce the burdens on individuals. But, should we go in that direction, it is vital that government become much more transparent around its digital services. There is, as ever, an increasing information asymmetry between government and individuals, and this transparency will be especially important as services become ever-more personalized. Without more transparency about technological design within government, we risk losing a shared experience and shared understanding of how public services work and, ultimately, the capacity to hold government accountable.

October 14, 2020. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 


TECHNOLOGY & HUMAN RIGHTS

Nothing is Inevitable! Main Takeaways from an Event on Techno-Racism and Human Rights

A Conversation with the UN Special Rapporteur on Racism

On July 23, 2020, the Digital Welfare State and Human Rights Project hosted a virtual event on techno-racism and human rights. The immediate reason for organizing this conversation was a recent report to the Human Rights Council by the United Nations Special Rapporteur on Racism, Tendayi Achiume, on the racist impacts of emerging technologies. The event sought to further explore these impacts and to question the role of international human rights norms and accountability mechanisms in efforts to address these. Christiaan van Veen moderated the conversation between the Special Rapporteur, Mutale Nkonde, CEO of AI for the People, and Nanjala Nyabola, author of Digital Democracy, Analogue Politics.

This event and Tendayi’s report come at a moment of multiple international crises, including a global wave of protests and activism against police brutality and systemic racism after the killing of George Floyd, and a pandemic which, among many other tragic impacts, has laid bare how deeply embedded inequality, racism, xenophobia, and intolerance are within our societies. Just last month, as Tendayi explained during the event, the Human Rights Council held a historic urgent debate on systemic racism and police brutality in the United States and elsewhere, which would have been inconceivable just a few months ago.

The starting point for the conversation was an attempt to define techno-racism and provide varied examples from across the globe. This global dimension was especially important as so many discussions on techno-racism remain US-centric. Speakers were also asked to discuss not only private use of technology or government use within the criminal justice area, but to address often-overlooked technological innovation within welfare states, from social security to health care and education.

Nanjala started the conversation by defining techno-racism as the use of technology to lock in power disparities that are predicated on race. Such techno-racism can occur within states: Mutale discussed algorithmic hiring decisions and facial recognition technologies used in housing in the United States, while Tendayi mentioned racist digital employment systems in South America. But techno-racism also has a transnational dimension: technologies entrench power disparities between States that are building technologies and States that are buying them; Nanjala called this “digital colonialism.”

The speakers all agreed that emerging technologies are consistently presented as agnostic and neutral, despite being loaded with the assumptions of their builders (disproportionately white males educated at elite universities) about how society works. For example, the technologies increasingly used in welfare states are designed with the idea that people living in poverty are constantly attempting to defraud the government; Christiaan and Nanjala discussed an algorithmic benefit fraud detection tool used in the Netherlands, which was found by a Dutch court to be exclusively targeting neighborhoods with low-income and minority residents, as an excellent example of this.

Nanjala also mentioned the ‘Huduma Namba’ digital ID system in Kenya as a powerful example of the politics and complexity underneath technology. She explained the racist history of ID systems in Kenya – designed by colonial authorities to enable the criminalization of black people and the protection of white property – and argued that digitalizing a system that was intended to discriminate “will only make the discrimination more efficient”. This exacerbation of discrimination is also visible within India’s ‘Aadhaar’ digital ID system, through which existing exclusions have been formalized, entrenched, and anesthetized, enabling those in power to claim that exclusion, such as the removal of hundreds of thousands of people from food distribution lists, simply results from the operation of the system rather than from political choices.

Tendayi explained that she wrote her report in part to address her “deep frustration” with the fact that race and non-discrimination analyses are often absent from debates on technology and human rights at the UN. Though she named a report by the Center’s Faculty Director Philip Alston, prepared in cooperation with the Digital Welfare State and Human Rights Project, as one of the few exceptions, discussions within the international human rights field remain focused on privacy and freedom of expression and marginalize questions of equality. But techno-racism should not be an afterthought in these discussions, especially as emerging technologies often exacerbate pre-existing racism and enable discrimination at a completely different scale.

Given the centrality of Tendayi’s Human Rights Council report to the conversation, Christiaan asked the speakers whether and how international human rights frameworks and norms can help us evaluate the implications of techno-racism, and what potential advantages global human rights accountability mechanisms can bring relative to domestic legal remedies. Mutale expressed that we need to ask, “who is human in human rights?” She noted that the racist design of these technologies arises from the notion that Black people are not human. Tendayi argued that there is, therefore, also a pressing need to change existing ways of thinking about who violates human rights. During the aforementioned urgent debate in the Human Rights Council, for example, European States and Australia had worked to water down a powerful draft resolution and blocked the establishment of a Commission of Inquiry to investigate systemic racism specifically in the United States, on the grounds that it is a liberal democracy. Mutale described this as another indication that police brutality against Black people in a Western country like the United States is too easily dismissed as not of international concern.

Tendayi concurred and expressed her misgivings about the UN’s human rights system. She explained that the human rights framework is deeply implicated in transnational racially discriminatory projects of the past, including colonialism and slavery, and noted that powerful institutions (including governments, the UN, and international human rights bodies) are often “ground zero” for systemic racism. Mutale echoed this and urged the audience to consider how international human rights organs like the Human Rights Council may constitute a political body for sustaining white supremacy as a power system across borders.

Nanjala also expressed concerns with the human rights regime and its history, but identified three potential benefits of the human rights framework in addressing techno-racism. First, the human rights regime provides another pathway outside domestic law for demanding accountability and seeking redress. Second, it translates local rights violations into international discourse, thus creating potential for a global accountability movement and giving victims around the world a powerful and shared rights-based language. Third, because of its relative stability since the 1940s, human rights legal discourse helps advocates develop genealogies of rights violations, document repeated institutional failures, and establish patterns of rights violations over time, allowing advocates to amplify domestic and international pressure for accountability. Tendayi added that she is “invested in a future that is fundamentally different from the present,” and that human rights can potentially contribute to transforming political institutions and undoing structures of injustice around the world.

In addressing an audience question about technological responses to COVID-19, Mutale described how an algorithm designed to assign scarce medical equipment such as ventilators systematically discounted Black patients’ viability. Noting that health outcomes around the world are consistently correlated with poverty and life experiences (including the “weathering effects” suffered by racial and ethnic minorities), she warned that, by feeding algorithms data from past hospitalizations and health outcomes, “we are training these AI systems to deem that black lives are not viable.” Tendayi echoed this, suggesting that our “baseline assumption” should be that new technologies will have discriminatory impacts simply because of how they are made and the assumptions that inform their design.

In response to an audience member’s concern that governments and private actors will adopt racist technologies regardless, Nanjala countered that “nothing is inevitable” and “everything is a function of human action and agency.” San Francisco’s decision to ban the use of facial recognition software by municipal authorities, for example, demonstrates that the use of these technologies is not inevitable, even in Silicon Valley. Tendayi, in her final remarks, noted that “worlds are being made and remade all of the time” and that it is vital to listen to voices, such as those of Mutale, Nanjala, and the Center’s Digital Welfare State Project, which are “helping us to think differently.” “Mainstreaming” the idea of techno-racism can help erode the presumption of “tech neutrality” that has made political change related to technology so difficult to achieve in the past. Tendayi concluded that this is why it is so vital to have conversations like these.

We couldn’t agree more!

To reflect that this was an informal conversation, first names are used in this story. 

July 29, 2020. Victoria Adelmant and Adam Ray.

Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a master’s degree from Yale University and previously worked as the CFO of Songkick.

Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

 


HUMAN RIGHTS MOVEMENT

Global Justice Clinic and Human Rights Organizations Call on Government of Haiti to Cancel a Planned Raid

The Global Justice Clinic, twenty-three other human rights organizations, and a number of individuals signed a letter calling on the government of Haiti to cancel a planned gang raid that it announced on Friday, April 24, 2020.

In a statement to the press, Haiti’s Minister of Justice and Public Security said that residents of the impoverished community of Village de Dieu in Port-au-Prince had 72 hours to evacuate their homes and their neighborhood. The government would then conduct a gang raid, and it indicated that beyond the 72-hour window it absolved itself of responsibility for what happened in the area. There is extreme and understandable concern within Haiti that the gang raid may turn into indiscriminate violence. As the letter explains, in the past two years the government has been implicated in massacres against civilians. Further, there is evidence that a former police officer who allegedly perpetrated past massacres has been coordinating with the Haitian National Police to carry out Monday’s raid. The signatory organizations and individuals call on the government of Haiti to cancel the raid and to protect the human rights and physical safety of all Haitian people.

As of Wednesday, April 29, 2020, the raid has not occurred. However, human rights organizations in Haiti and beyond continue to pressure the Haitian government to publicly declare that it will cancel the raid and that it will address insecurity in a way that respects the human rights of the Haitian people, particularly the most vulnerable.


TECHNOLOGY & HUMAN RIGHTS

Profiling the Poor in the Dutch Welfare State

Report on court hearing in litigation in the Netherlands about digital welfare fraud detection system (‘SyRI’)

On Tuesday, October 29, 2019, I attended a hearing before the District Court of The Hague (the Netherlands) in litigation by a coalition of Dutch civil society organizations challenging the Dutch government’s System Risk Indication (“SyRI”). The Digital Welfare State and Human Rights Project at NYU Law, which I direct, recently collaborated with the United Nations Special Rapporteur on extreme poverty and human rights in preparing an amicus brief to the District Court. The Special Rapporteur became involved in this case because SyRI has exclusively been used to detect welfare fraud and other irregularities in poor neighborhoods in four Dutch cities and affects the right to social security and to privacy of the poorest members of Dutch society. This litigation may also set a highly relevant legal precedent with impact beyond Dutch borders in an area that has received relatively little judicial scrutiny to date.

Lies, damn lies, and algorithms

What is SyRI? The formal answer can be found in legislation and implementing regulations from 2014. In order to coordinate government action against illicit use of government funds and benefits in the areas of social security, tax benefits, and labor law, Dutch law has, since 2014, allowed the sharing of data between municipalities, welfare authorities, tax authorities, and other relevant government authorities. A total of 17 categories of government-held data may be shared in this context, from employment and tax data to benefit data, health insurance data, and enforcement data, among other categories of digitally stored information. Government authorities wishing to cooperate in a concrete SyRI project request the Minister for Social Affairs and Employment to use the SyRI tool, pooling and analyzing the relevant data from the various authorities using an algorithmic risk model.

The Minister has outsourced the tasks of pooling and analyzing the data to a private foundation, somewhat unfortunately named the ‘Intelligence Agency’ (‘Inlichtingenbureau’). The Intelligence Agency pseudonymizes the data pool, analyzes the data using an algorithmic risk model, and creates a file for those individuals (or corporations) who are deemed to be at a higher risk of being involved in benefit fraud and other irregularities. The Minister then analyzes these files and notifies the cooperating government authorities of those individuals (or corporations) considered at higher risk of committing benefit fraud or other irregularities (a ‘risk notification’). Risk notifications are included in a register for two years. Those who are included in the register are not actively notified of this registration, but they can access their information in the register upon specific request.

The preceding understanding of how the system works can be derived from the legislative texts and history, but a surprising amount of uncertainty remains about how exactly SyRI works in practice. This became abundantly clear at the hearing before the District Court of The Hague on October 29. The court is assessing the plaintiffs’ claim that SyRI, as legislated in 2014, violates applicable norms of international law, including the rights to privacy, data protection, and a fair trial recognized in the European Convention on Human Rights, the Charter of Fundamental Rights of the European Union, the International Covenant on Civil and Political Rights, and the EU General Data Protection Regulation. In a courtroom packed with representatives of the eight plaintiffs, reporters, and concerned citizens from areas where SyRI has been used, the first question from the three-judge panel sought to clarify the radically different views held by the plaintiffs and the Dutch State as to what SyRI actually is.

According to the State, SyRI merely compares data from different government databases, operated by different authorities, in order to find simple inconsistencies. Although this analysis is undertaken with the assistance of an algorithm, the State underlined that this algorithm operates on the basis of pre-defined indicators of risk and that the algorithm is not of the ‘learning’ type. The State further emphasized that SyRI is not a Big Data or data-mining system, but that it employs a targeted analysis on the basis of a limited dataset with a clearly defined objective. It also argued that a risk notification by SyRI is merely a – potential – starting point for further investigations by individual government authorities and does not have any direct and automatic legal consequences such as the imposition of a fine or the suspension or withdrawal of government benefits or assistance.
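To illustrate the distinction the State was drawing, the sketch below shows what a fixed, rule-based cross-database check looks like, as opposed to a ‘learning’ model. Since SyRI’s actual risk model and indicators are secret, every field, rule, and threshold here is hypothetical; the example only captures the general idea of comparing pooled government data against pre-defined indicators.

```python
# Hypothetical sketch of a rule-based risk check with pre-defined indicators.
# SyRI's real risk model is secret, so none of these fields, rules, or
# thresholds are real. The sketch only contrasts a fixed, "decision-tree"
# style check with a learning model, which would instead derive its rules
# from historical data.

from dataclasses import dataclass

@dataclass
class PooledRecord:
    person_id: str                      # pseudonymized identifier
    claims_living_alone: bool           # welfare authority data
    registered_residents: int           # municipal registry data
    income_declared: float              # declared to the welfare authority
    income_reported_by_employer: float  # tax authority data

def risk_flags(record: PooledRecord) -> list[str]:
    """Apply pre-defined indicators; any flag could trigger a 'risk notification'."""
    flags = []
    # Indicator 1: household composition differs across databases.
    if record.claims_living_alone and record.registered_residents > 1:
        flags.append("household_inconsistency")
    # Indicator 2: declared income diverges from employer-reported income.
    if abs(record.income_declared - record.income_reported_by_employer) > 1000.0:
        flags.append("income_inconsistency")
    return flags

# Example: a record that trips the household indicator.
example = PooledRecord("pseudonym-123", True, 3, 0.0, 0.0)
print(risk_flags(example))  # ['household_inconsistency']
```

Whether SyRI in fact works anything like this simple comparison of databases is precisely what the plaintiffs dispute.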

But plaintiffs strongly contested the State’s characterization of SyRI. They claimed instead that SyRI is not narrowly targeted but instead aims at entire (poor) neighborhoods, that diverse and unconnected categories of personal data are brought together in SyRI projects, and that the resulting data exchange and analysis occur on a large scale. In their view, SyRI projects could therefore be qualified as projects involving problematic uses of Big Data, data-mining and profiling. They also made clear that it is exceedingly difficult for them or the District Court to assess what SyRI actually is or is not doing, because key elements of the system remain secret and the relevant legislation does not restrict the methods used, including the request to cooperating authorities to undertake a SyRI project, the risk model used, and the ways in which personal data can be processed.  All of these elements remain hidden from outside scrutiny.

Game the system, leave your water tap running

The District Court asked a series of probing and critical follow-up questions in an attempt to clarify the exact functioning of SyRI and to understand the justification for the secrecy surrounding it. One can sympathize with the court’s attempt to grasp the basic facts about SyRI so that it can undertake its task of judicial oversight. Pushed by the District Court to explain why the State could not be more open about the functioning of SyRI, the attorney for the State warned about welfare beneficiaries ‘gaming the system’. The attorney referred to a pilot project pre-dating SyRI in which welfare authority data about individuals claiming low-income benefits was matched with usage data held by publicly owned drinking water companies, in order to identify beneficiaries who committed fraud by falsely claiming to live alone while actually living together (so as to claim a higher benefit level). Making it known that water usage is a ‘risk indicator’, the State’s attorney claimed, could lead beneficiaries to leave their taps running to avoid detection. Some individuals attending the hearing could be heard snickering at this prediction.

Another fascinating exchange between the judges and the attorney for the State dealt with the standards applied by the Minister when assessing a request for a SyRI project by municipal and other government authorities. According to the State’s attorney, what would commonly happen is that a municipality has a ‘problem neighborhood’ and wants to tackle its problems, which are presumed to include welfare fraud and other irregularities, through SyRI. The request to the Minister is typically based ‘on the law, experience and logical thinking’ according to the State. Unsatisfied with this reply, the District Court probed the State for a more concrete justification of the use of SyRI and the precise standards applied to justify its use: ‘In Bloemendaal (one of the richest municipalities of the Netherlands) a lot of people enjoy going to classical concerts; in a problem neighborhood, there are a lot of people who receive government welfare benefits; why is that a justification for the use of SyRI?’, the Court asked. The attorney for the State had to admit that specific neighborhoods were targeted because those areas housed more people who were on welfare benefits and that, while participating authorities usually have no specific evidence that there are high(er) levels of benefit fraud in those neighborhoods, this higher proportion of people on benefits is enough reason to use SyRI.

Finally, and of great relevance to the intensity of the Court’s judicial scrutiny, the question of the gravity of the invasion of human rights – more specifically, the right to privacy – was a central topic of the hearing. The State argued that the data being shared and analyzed was existing data and not new data. It furthermore argued that for those individuals whose data was shared and analyzed, but who were not considered a ‘higher risk’, there was no harm at all: their data had been pseudonymized and was removed after the analysis. The opposing view by plaintiffs was that the government-held data that was shared and analyzed in SyRI was not originally collected for the specific purpose of enforcement. Plaintiffs also argued that – due to the wide categories of data that were potentially shared and analyzed in SyRI – a very intimate profile could be made of individuals in targeted neighborhoods: ‘This is all about profiling and creating files on people’.

Judgment expected in early 2020

The District Court announced that it expects to publish its judgment in this case on 29 January 2020. There are many questions to be answered by the Court. In non-legal language, they include at least the following: How does SyRI work exactly? Does it matter whether SyRI uses a relatively straightforward ‘decision-tree’ type of algorithm or, instead, machine learning algorithms? What is the harm in pooling previously siloed government data? What is the harm in classifying an individual as ‘high risk’? Does SyRI discriminate on the basis of socio-economic status, migrant status, race or color? Does the current legislation underpinning SyRI give sufficient clarity and adequate legal standards to meaningfully curb the use of State power to the detriment of individual rights? Can current levels of secrecy be maintained in a democracy based on the rule of law?

In light of the above, there will be many eyes focused on the Netherlands in January when a potentially groundbreaking legal precedent will be set in the debate on digital welfare states and human rights.

November 1, 2019.  Christiaan van Veen, Digital Welfare State & Human Rights Project (2019-2022), Center for Human Rights and Global Justice at NYU School of Law. 


TECHNOLOGY & HUMAN RIGHTS

Human Rights in the Digital Age: Can they Make a Difference?

This event brought together international policymakers, human rights practitioners, leading academics and representatives from technology companies to discuss the relevance of the international human rights law framework in a world increasingly dominated by digital technologies.

In only a few decades, we have witnessed tremendous change through digital innovation, from personal computers, a globe-spanning Internet, and ubiquitous smartphones, to rapid advances in Artificial Intelligence. As we express ever more of ourselves digitally, the economy is built around the data generated, which is then used to predict and nudge our future behavior. Surveillance capitalism (Zuboff, 2019) is being matched by the digitization of government, whether in national security, policing, immigration or court systems. And postwar welfare states are rapidly becoming digital welfare states (Alston & Van Veen, 2019).

The speed, magnitude, and complexity of these developments have left little or no time for reflection, let alone resistance, on the part of most of those affected. Only now is the world waking up to the value-choices implicit in embracing many of these technological changes. And many of the initiatives designed to curb the excesses of the digital age are entirely voluntary, based on malleable conceptions of ethics, and themselves reliant upon technological solutions promoted by the very Big Tech firms these initiatives are supposed to regulate.

This event focused on the role of law, democratic institutions and human rights in the digital age. Can the societal impacts of digital technologies be meaningfully addressed in the language of rights? What difference does it make to insist on applying the lens of human rights law? What difference can international and domestic human rights accountability mechanisms make in the technology debate? Whose voices and issues are neglected in this debate and how can human rights law empower those on the margins of society?

The keynote speaker was Michelle Bachelet, United Nations High Commissioner for Human Rights; and the panel moderated by Ed Pilkington, Chief Reporter, Guardian US, featured:

  • Philip Alston, United Nations Special Rapporteur on extreme poverty and human rights and John Norton Pomeroy Professor of Law, New York University School of Law
  • Michelle Bachelet, United Nations High Commissioner for Human Rights
  • Chris Hughes, Co-founder of Facebook and Co-Chair of the Economic Security Project and Senior Advisor, Roosevelt Institute
  • Kumi Naidoo, Secretary General, Amnesty International
  • Shoshana Zuboff, Charles Edward Wilson Professor Emerita, Harvard Business School and author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019)

October 17, 2019. This event was co-hosted by the UN Special Rapporteur on extreme poverty and human rights, the Center for Human Rights and Global Justice at New York University School of Law and Amnesty International with the Guardian as a media partner.


HUMAN RIGHTS MOVEMENT

GJC’s Ellie Happel Expert Witness in Case Blocking Trump Administration from Terminating TPS For Haiti

On Thursday, April 11, 2019, Judge Kuntz of the Eastern District of New York issued a nationwide preliminary injunction that blocks the Trump Administration from terminating TPS for Haiti. Global Justice Clinic Haiti Project Director Ellie Happel was the first witness called by the plaintiffs in the case. Ellie’s expert testimony was based both on her experience living in Haiti during the period under consideration (2010–2017) and on the facts presented in the Global Justice Clinic report, Extraordinary Conditions: A Statutory Analysis of Haiti’s Qualification for TPS.

The Trump Administration ended TPS for Haiti in November 2017. Judge Kuntz ruled that the decision by the Department of Homeland Security (DHS) to terminate TPS for Haiti was improperly influenced by the White House. The decision was “reverse engineered” to “get to no,” Judge Kuntz ruled, finding that the plaintiffs were likely to succeed on claims brought under both the Administrative Procedure Act (APA) and the Equal Protection Clause of the U.S. Constitution. The judge found significant evidence that the decision to terminate was a “preordained outcome,” including evidence suggesting that, in fewer than 30 minutes, a DHS employee reworked a memo that favored extending TPS for Haiti into one that supported termination. The Court found that the plaintiffs’ Equal Protection claim raises “serious concerns”: “Based on the facts on this record, and under the [relevant legal framework], there is both direct and circumstantial evidence [that] a discriminatory purpose of removing non-white immigrants from the United States was a motivating factor behind the decision to terminate TPS for Haiti.” Judge Kuntz concluded that “absent injunctive relief, Plaintiffs, as well as 50,000 to 60,000 Haitian TPS beneficiaries and their 30,000 U.S. Citizen children stand to suffer serious harm.”

In addition to Ellie’s role as an expert witness in this case, the Global Justice Clinic was involved in a FOIA lawsuit that resulted in the disclosure of relevant records from the Department of Homeland Security (DHS) and the State Department. These records were integral to this case and to others challenging the Trump Administration’s termination of TPS for Haiti. Professor Margaret Satterthwaite served as a plaintiff in the FOIA lawsuit.

April 16, 2019.


CLIMATE & ENVIRONMENT

Guyanese Indigenous Council Rejects Canadian Mining Company’s Flimsy Environmental and Social Impact Assessment, Calls for Rejection of Mining Permit

The Global Justice Clinic has been working with the South Rupununi District Council (SRDC) since 2016. Through the clinic, students have provided data analysis and legal support for monitoring activity undertaken by the SRDC. 

Last week, the South Rupununi District Council (SRDC), a legal representative institution for the Wapichan people, released a statement forcefully denouncing the procedurally and substantively defective environmental and social impact assessment (ESIA) submitted by a Canadian mining company (Guyana Goldstrike) seeking to begin large-scale mining operations on Marutu Taawa through its Guyanese subsidiary (Romanex). Marutu Taawa, also known as Marudi Mountain, stands deep in the traditional territory of the Wapichan and holds historical, cultural, spiritual, and biological significance for the entire region. Because Marutu Taawa sits at a critical watershed, the environmental impact of large-scale mining operations would threaten the ability of the Wapichan people to continue living in the ancestral lands they have called home for centuries. Notwithstanding the threat to the Wapichan people posed by large-scale mining, the SRDC finds that Guyana Goldstrike’s ESIA relies on incomplete, inaccurate, or decades-old information to ignore the substantial environmental, public health, and cultural consequences that would occur if such mining operations were allowed to proceed. The SRDC also strongly condemns the mining company’s failure to consult the Council as a legal representative institution of the Wapichan people. This failure to meaningfully consult stands in direct violation of both Guyanese and international law.

Given the inadequacy of the ESIA and Guyana Goldstrike’s flouting of domestic and international law, the SRDC has strongly encouraged the Guyanese Environmental Protection Agency (EPA) to deny the Canadian company’s subsidiary the environmental permit needed to initiate large-scale operations in the territory. The SRDC also calls on the EPA to oversee a process that ensures that Guyana Goldstrike and Romanex adhere to Guyanese and international law and best practices in the international mining sector.

This post was originally published as a press release on September 28, 2018.


HUMAN RIGHTS MOVEMENT

NYU Clinics File Lawsuit Seeking Disclosure of Trump Policy Behind Termination of TPS for Haitians

On Thursday, January 25, 2018, the National Immigration Project of the National Lawyers’ Guild and Margaret Satterthwaite, NYU School of Law professor and director of the Global Justice Clinic (GJC), filed a Freedom of Information Act (FOIA) lawsuit against the U.S. Department of Homeland Security, U.S. Department of State, and U.S. Immigration and Customs Enforcement to obtain records documenting the reasons behind the U.S. government’s decision to terminate Temporary Protected Status (TPS) for Haitians. NYU School of Law’s Immigrant Rights Clinic provided legal counsel.

On November 20, 2017, the Trump Administration terminated TPS for Haiti, stating that the conditions caused by the 2010 earthquake no longer exist. Many reports, including Extraordinary Conditions: A Statutory Analysis of Haiti’s Qualification for TPS, published by the GJC in October, show that families in Haiti continue to face displacement, homelessness, one of the worst cholera epidemics in the world, hunger, and other challenges that make Haiti unsafe for return. The termination will affect an estimated 58,000 Haitian TPS holders and their families. TPS is set to terminate in July 2019.

President Trump’s recent racist statements towards certain foreign nations, including Haiti, make the public’s right to access information that influenced the decision to terminate TPS that much more urgent.

January 25, 2018. 

Communications from NYU clinics do not represent the institutional views of NYU School of Law or the Center, if any.