Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

TECHNOLOGY & HUMAN RIGHTS

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.

On October 27, 2021, we hosted the eighth episode in our Transformer States Series on Digital Government and Human Rights, in an event entitled “Artificial Borders? The Digital and Extraterritorial Protection of ‘Fortress Europe.’” Christiaan van Veen and Ngozi Nwanta interviewed Petra Molnar about the European Union’s introduction of digital technologies into its border control and migration management operations. The video and transcript of the event, along with additional reading materials, can be found below. This blog post outlines key themes from the conversation.

Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostat machines with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making, to social media screening, and a pilot AI-driven “lie detector.”

In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report entitled “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are not only being used at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practiced “border externalization,” where it shifts its border control operations ever-further away from its physical territory, partly through contracting non-Member States to try to prevent migration. New technologies are increasingly instrumental in these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modeling. Further, borders are increasingly “located” on our smartphones and in enormous databases as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.

Ignoring human experience and impacts

But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.” The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.

In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed to be “expert knowledge” by policymakers in Brussels, but it is vital to expose the human realities and counter the sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reported that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.” People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, differential treatment is baked into these technological systems, as they enable and exacerbate discriminatory inferences along racialized lines. As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, some algorithmic “risk assessments” of migrants have been criticized as a form of racial profiling.

Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.”

Technological experimentation and pilot programs in “gray zones”

Crucially, these developments are occurring within largely-unregulated spaces. A central theme of this Transformer States conversation—mirroring the title of Petra’s report, “Technological Testing Grounds”—was the notion of experimentation within the “gray zones” of border control and migration management. Not only are non-citizens and stateless persons accorded fewer rights and protections than EU citizens, but immigration and asylum decision-making is also an area of law which is highly discretionary and contains fewer legal safeguards.

This low-rights, high-discretion environment makes it ripe for testing new technologies. This is especially the case in “external” spaces far from European territory which are subject to even less regulation. Projects which would not be allowed in other spaces are being tested on populations who are literally at the margins, as refugee camps become testing zones. The abovementioned “lie detector,” whereby an “avatar” border guard flagged “biomarkers of deceit,” was “merely” a pilot program. It has since been fiercely criticized, including by the European Parliament, and challenged in court.

Experimentation is deliberately occurring in these zones because refugees and migrants have limited opportunities to challenge it. The UN Special Rapporteur on Racism has noted that digital technologies in this area are therefore “uniquely experimental.” This has parallels with our work, where we consistently see governments and international organizations piloting new technologies on marginalized and low-income communities. In a previous Transformer States conversation, we discussed Australia’s Cashless Debit Card system, in which technologies were deployed upon Aboriginal people through a pilot program. In the UK, radical reform of the welfare system through digitalization was also piloted, with low-income groups serving as test subjects, to “catastrophic” effect.

Where these developments are occurring within largely-unregulated areas, human rights norms and institutions may prove useful. As Petra noted, the human rights framework requires courts and policymakers to focus upon the human impacts of these digital border technologies, and highlights the discriminatory lines along which their effects are felt. The UN Special Rapporteur on Racism has outlined how human rights norms require mandatory impact assessments, moratoria on surveillance technologies, and strong regulation to prevent discrimination and harm.

November 23, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

TECHNOLOGY & HUMAN RIGHTS

Social Credit in China: Looking Beyond the “Black Mirror” Nightmare

The Chinese government’s Social Credit program has received much attention from Western media and academics, but misrepresentations have led to confusion over what it truly entails. Such mischaracterizations unhelpfully distract from the real dangers and impacts of Social Credit. On March 31, 2021, Christiaan van Veen and I hosted the sixth event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Dr. Chenchen Zhang, Assistant Professor at Queen’s University Belfast, to explore the much-discussed but little-understood Social Credit program in China.

Though the Chinese government’s Social Credit program has received significant attention from Western media and rights organizations, much of this discussion has often misrepresented the program. Social Credit is imagined as a comprehensive, nation-wide system in which every action is monitored and a single score is assigned to each individual, much like a Black Mirror episode. This is in fact quite far from reality. But this image has become entrenched in the West, as discussions and some academic debate have focused on abstracted portrayals of what Social Credit could be. In addition, the widely-discussed voluntary, private systems run by corporations, such as Alipay’s Sesame Credit or Tencent’s WeChat score, are often mistakenly conflated with the government’s Social Credit program.

Jeremy Daum has argued that these widespread misrepresentations of Social Credit serve to distract from examining “the true causes for concern” within the systems actually in place. They also distract from similar technological developments occurring in the West, which seem acceptable by comparison. An accurate understanding is required to acknowledge the human rights concerns that this program raises.

The crucial starting point here is that the government’s Social Credit system is a heterogeneous assemblage of fragmented and decentralized systems. Central government, specific government agencies, public transport networks, municipal governments, and others are experimenting with diverse initiatives with different aims. Indeed, xinyong, the term which is translated as “credit” in Social Credit, encompasses notions of financial creditworthiness, regulatory compliance, and moral trustworthiness, therefore covering programs with different visions and narratives. A common thread across these systems is a reliance on information-sharing and lists to encourage or discourage certain behaviors, including blacklists to “shame” wrongdoers and “redlists” publicizing those with a good record.

One national-level program called the Joint Rewards and Sanctions mechanism shares information across government agencies about companies which have violated regulations. Once a company is included on one agency’s blacklist for having, for example, failed to pay migrant workers’ wages, other agencies may also sanction that company and refuse to grant it a license or contract. But blacklisting mechanisms also affect individuals: the People’s Court of China maintains a list of shixin (dishonest) people who default on judgments. Individuals on this list are prevented from accessing “non-essential consumption” (including travel by plane or high-speed train) and their names are published, adding an element of public shaming. Other local or sector-specific “credit” programs aim at disciplining individual behavior: anyone caught smoking on the high-speed train is placed on the railway system’s list of shixin persons and subjected to a 6-month ban from taking the train. Localized “citizen scoring” schemes are also being piloted in a dozen cities. Currently, these resemble “club membership” schemes with minor benefits and have low sign-up rates; some have been very controversial. In 2019, in response to controversies, the National Development and Reform Commission issued guidelines stating that citizen scores must only be used for incentivizing behavior and not as sanctions or to limit access to basic public services. Presently, each of the systems described here are separate from one another.

But even where generalizations and mischaracterizations of Social Credit are dispelled, many aspects nonetheless raise significant concerns. Such systems will, of course, exacerbate problems surrounding privacy, chilling effects, discrimination, and disproportionate punishment. These have been explored at length elsewhere, but this conversation with Chenchen raised additional important issues.

First, a stated objective behind the use of blacklists and shaming is the need to encourage compliance with existing laws and regulations, since non-compliance undermines market order. This is not a unique approach: the US Department of Labor names and shames corporations that violate labor laws, and the World Bank has a similar mechanism. But the laws which are enforced through Social Credit exist in and constitute an extremely repressive context, and these mechanisms are applied to individuals. An individual can be arrested for protesting labor conditions or for speaking about certain issues on social media, and systems like the People’s Court blacklist amplify the consequences of these repressive laws. Mechanisms which “merely” seek to increase legal compliance are deeply problematic in this context.

Second, as with so many of the digital government initiatives discussed in the Transformer States series, Social Credit schemes exhibit a technological solutionism which invisibilizes the causes of the problems they seek to address. Non-payment of migrant workers’ wages, for example, is a legitimate issue which must be tackled. But in turning to digital solutions such as an app which “scores” firms based on their record of wage payments, a depoliticized technological fix is promised to solve systemic problems. In the process, this obscures the structural reasons behind migrant workers’ difficulties in accessing their wages, including a differentiated citizenship regime that denies them equal access to social provisions.

Separately, there are disparities in how individuals in different parts of the country are affected by Social Credit. Around the world, governments’ new digital systems are consistently trialed on the poorest or most vulnerable groups: for example, smartcard technology for quarantining benefit income in Australia was first introduced within indigenous communities. Similarly, experimentation with Social Credit systems is unequally targeted, especially on a geographical basis. There is a hierarchy of cities in China with provincial-level cities like Beijing at the top, followed by prefectural-level cities, county-level cities, then towns and villages. A pattern is emerging whereby smaller or “lower-ranked” cities have adopted more comprehensive and aggressive citizen scoring schemes. While Shanghai has local legislation that defines the boundaries of its Social Credit scheme, less-known cities seeking to improve their “branding” are subjecting residents to more arbitrary and concerning practices.

Of course, the biggest concern surrounding Social Credit relates to how it may develop in the future. While this is currently a fragmented landscape of disparate schemes, the worry is that these may be consolidated. Chenchen stated that a centralized, nationwide “citizen scoring” system remains unlikely and would not enjoy support from the public or the Central Bank which oversees the Social Credit program. But it is not out of the question that privately-run schemes such as Sesame Credit might eventually be linked to the government’s Social Credit system. Though the system is not (yet) as comprehensive and coordinated as has been portrayed, its logics and methodologies of sharing ever-more information across siloes to shape behaviors may well push in this direction, in China and elsewhere.

April 20, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Nothing is Inevitable! Main Takeaways from an Event on “Techno-Racism and Human Rights: A Conversation with the UN Special Rapporteur on Racism”

TECHNOLOGY & HUMAN RIGHTS

Nothing is Inevitable! Main Takeaways from an Event on “Techno-Racism and Human Rights: A Conversation with the UN Special Rapporteur on Racism”

On July 23, 2020, the Digital Welfare State and Human Rights Project hosted a virtual event on techno-racism and human rights. The immediate reason for organizing this conversation was a recent report to the Human Rights Council by the United Nations Special Rapporteur on Racism, Tendayi Achiume, on the racist impacts of emerging technologies. The event sought to further explore these impacts and to question the role of international human rights norms and accountability mechanisms in efforts to address these. Christiaan van Veen moderated the conversation between the Special Rapporteur, Mutale Nkonde, CEO of AI for the People, and Nanjala Nyabola, author of Digital Democracy, Analogue Politics.

This event and Tendayi’s report come at a moment of multiple international crises, including a global wave of protests and activism against police brutality and systemic racism after the killing of George Floyd, and a pandemic which, among many other tragic impacts, has laid bare how deeply embedded inequality, racism, xenophobia, and intolerance are within our societies. Just last month, as Tendayi explained during the event, the Human Rights Council held a historic urgent debate on systemic racism and police brutality in the United States and elsewhere, which would have been inconceivable just a few months ago.

The starting point for the conversation was an attempt to define techno-racism and provide varied examples from across the globe. This global dimension was especially important as so many discussions on techno-racism remain US-centric. Speakers were also asked to discuss not only private use of technology or government use within the criminal justice area, but to address often-overlooked technological innovation within welfare states, from social security to health care and education.

Nanjala started the conversation by defining techno-racism as the use of technology to lock in power disparities that are predicated on race. Such techno-racism can occur within states: Mutale discussed algorithmic hiring decisions and facial recognition technologies used in housing in the United States, while Tendayi mentioned racist digital employment systems in South America. But techno-racism also has a transnational dimension: technologies entrench power disparities between States that are building technologies and States that are buying them; Nanjala called this “digital colonialism.”

The speakers all agreed that emerging technologies are consistently presented as agnostic and neutral, despite being loaded with the assumptions of their builders (disproportionately white males educated at elite universities) about how society works. For example, the technologies increasingly used in welfare states are designed with the idea that people living in poverty are constantly attempting to defraud the government; Christiaan and Nanjala discussed an algorithmic benefit fraud detection tool used in the Netherlands, which was found by a Dutch court to be exclusively targeting neighborhoods with low-income and minority residents, as an excellent example of this.

Nanjala also mentioned the ‘Huduma Namba’ digital ID system in Kenya as a powerful example of the politics and complexity underneath technology. She explained the racist history of ID systems in Kenya – designed by colonial authorities to enable the criminalization of black people and the protection of white property – and argued that digitalizing a system that was intended to discriminate “will only make the discrimination more efficient”. This exacerbation of discrimination is also visible within India’s ‘Aadhaar’ digital ID system, through which existing exclusions have been formalized, entrenched, and anesthetized, enabling those in power to claim that exclusion, such as the removal of hundreds of thousands of people from food distribution lists, simply results from the operation of the system rather than from political choices.

Tendayi explained that she wrote her report in part to address her “deep frustration” with the fact that race and non-discrimination analyses are often absent from debates on technology and human rights at the UN. Though she named a report by the Center’s Faculty Director Philip Alston, prepared in cooperation with the Digital Welfare State and Human Rights Project, as one of few exceptions, discussions within the international human rights field remain focused upon privacy and freedom of expression and marginalize questions of equality. But techno-racism should not be an afterthought in these discussions, especially as emerging technologies often exacerbate pre-existing racism and enable a completely different scale of discrimination.

Given the centrality of Tendayi’s Human Rights Council report to the conversation, Christiaan asked the speakers whether and how international human rights frameworks and norms can help us evaluate the implications of techno-racism, and what potential advantages global human rights accountability mechanisms can bring relative to domestic legal remedies. Mutale expressed that we need to ask, “who is human in human rights?” She noted that the racist design of these technologies arises from the notion that Black people are not human. Tendayi argued that there is, therefore, also a pressing need to change existing ways of thinking about who violates human rights. During the aforementioned urgent debate in the Human Rights Council, for example, European States and Australia had worked to water down a powerful draft resolution and blocked the establishment of a Commission of Inquiry to investigate systemic racism specifically in the United States, on the grounds that it is a liberal democracy. Mutale described this as another indication that police brutality against Black people in a Western country like the United States is too easily dismissed as not of international concern.

Tendayi concurred and expressed her misgivings about the UN’s human rights system. She explained that the human rights framework is deeply implicated in transnational racially discriminatory projects of the past, including colonialism and slavery, and noted that powerful institutions (including governments, the UN, and international human rights bodies) are often “ground zero” for systemic racism. Mutale echoed this and urged the audience to consider how international human rights organs like the Human Rights Council may constitute a political body for sustaining white supremacy as a power system across borders.

Nanjala also expressed concerns with the human rights regime and its history, but identified three potential benefits of the human rights framework in addressing techno-racism. First, the human rights regime provides another pathway outside domestic law for demanding accountability and seeking redress. Second, it translates local rights violations into international discourse, thus creating potential for a global accountability movement and giving victims around the world a powerful and shared rights-based language. Third, because of its relative stability since the 1940s, human rights legal discourse helps advocates develop genealogies of rights violations, document repeated institutional failures, and establish patterns of rights violations over time, allowing advocates to amplify domestic and international pressure for accountability. Tendayi added that she is “invested in a future that is fundamentally different from the present,” and that human rights can potentially contribute to transforming political institutions and undoing structures of injustice around the world.

In addressing an audience question about technological responses to COVID-19, Mutale described how an algorithm designed to assign scarce medical equipment such as ventilators systematically discounted Black patients’ viability. Noting that health outcomes around the world are consistently correlated with poverty and life experiences (including the “weathering effects” suffered by racial and ethnic minorities), she warned that, by feeding algorithms data from past hospitalizations and health outcomes, “we are training these AI systems to deem that black lives are not viable.” Tendayi echoed this, suggesting that our “baseline assumption” should be that new technologies will have discriminatory impacts simply because of how they are made and the assumptions that inform their design.
In response to an audience member’s concern that governments and private actors will adopt racist technologies regardless, Nanjala countered that “nothing is inevitable” and “everything is a function of human action and agency.” San Francisco’s decision to ban the use of facial recognition software by municipal authorities, for example, demonstrates that the use of these technologies is not inevitable, even in Silicon Valley. Tendayi, in her final remarks, noted that “worlds are being made and remade all of the time” and that it is vital to listen to voices, such as those of Mutale, Nanjala, and the Center’s Digital Welfare State Project, which are “helping us to think differently.” “Mainstreaming” the idea of techno-racism can help erode the presumption of “tech neutrality” that has made political change related to technology so difficult to achieve in the past. Tendayi concluded that this is why it is so vital to have conversations like these.

We couldn’t agree more!

To reflect that this was an informal conversation, first names are used in this story. 

July 29, 2020. Victoria Adelmant and Adam Ray.

Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a master’s degree from Yale University and previously worked as the CFO of Songkick.

Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

 

False Promises and Multiple Exclusion: Summary of Our RightsCon Event on Uganda’s National Digital ID System

TECHNOLOGY & HUMAN RIGHTS

False Promises and Multiple Exclusion: Summary of Our RightsCon Event on Uganda’s National Digital ID System 

Despite its promotion as a tool for social inclusion and development, Uganda’s National Digital ID System is motivated primarily by national security concerns. As a result, the ID system has generated both direct and indirect exclusion, particularly affecting women and older persons.

On June 10, 2021, the Center for Human Rights and Global Justice at NYU School of Law co-hosted the panel “Digital ID: what is it good for? Lessons from our research on Uganda’s identity system and access to social services” as part of RightsCon, the leading summit on human rights in the digital age. The panelists included Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER), Dorothy Mukasa, Team Leader of Unwanted Witness, Grace Mutung’u, Research Fellow at the Centre for IP and IT Law at Strathmore University, and Christiaan van Veen, Director of the Digital Welfare State & Human Rights Project at the Center. This blog summarizes highlights of the panel discussion. A recording and transcript of the conversation, as well as additional readings, can be found below.

Uganda’s national digital ID system, known as Ndaga Muntu, was introduced in 2014 through a mass registration campaign. The government aimed to collect the biographic and biometric information, including photographs and fingerprints, of every adult in the country, to record this data in a centralized database known as the National Identity Register, and to issue a national ID card and unique ID number to each adult. Since its introduction, having a national ID has become a prerequisite to access a whole host of services, from registering for a SIM card and opening a bank account, to accessing health services and social protection schemes.

This linkage of Ndaga Muntu to public services has raised significant human rights concerns and is serving to lock millions of people in Uganda out of critical services. Seven years after its inception, it is clear that the national digital ID is a tool for exclusion rather than for inclusion. Drawing on the joint report by the Center, ISER, and Unwanted Witness, this event made clear that Ndaga Muntu was grounded in false promises and is resulting in multiple forms of exclusion.

The False Promise of Inclusion

The Ugandan government argued that this digital ID system would enhance social inclusion by allowing Ugandans to prove their identity more easily. Having this proof of identity would facilitate access to public services such as healthcare, enable people to sign up for private services such as bank accounts, and allow people to move freely throughout Uganda. The same rhetoric of inclusion was used to sell Aadhaar, India’s digital ID system, to the Indian public.

But for many Ugandans this was a false promise. From the very outset, Ndaga Muntu was developed chiefly as a tool for national security. The powerful Ugandan military had long pushed for the collection of sensitive identity information and biometric data: in the context of a volatile region, a centralized information database is appealing because of its ability to verify identity and indicate who is “really Ugandan” and who is not. Therefore, the national ID project was housed in the Ministry of Internal Affairs, overseen by prominent members of the Ugandan People’s Defense Force, and designed to serve only those who succeeded in completing a rigorous citizenship verification process.

The panelist from Kenya, Grace Mutung’u, shared how Kenya’s hundred-year-old national identification system was similarly rooted in a colonial regime that focused on national security and exclusion. Those design principles created a system that sought only to “empower the already empowered” and not to extend benefits beyond already-privileged constituencies. The result in both Kenya and Uganda was the same: digital ID systems that are designed to ensure that certain individuals and groups remain excluded from political, economic, and social life.

Proliferating Forms of Exclusion

Beyond the fact that Ndaga Muntu was designed to directly exclude anyone not entitled to access public services, those who are entitled are also being excluded in the millions. For ordinary Ugandans, accessing Ndaga Muntu is a nightmarish process rife with problems every step of the way. These problems, such as corruption, incorrect data entry, and technical errors, have impeded Ugandans’ access to the ID. Vulnerable populations who rely on social protection programs that require proof of ID bear the brunt of such errors. For example, one older woman was told that the national ID registration system could not capture her picture because of her grey hair. Other elderly Ugandans have had trouble with fingerprint scanners that could not capture fingerprints worn away from years of manual labor.

The many individuals who have not succeeded in registering for Ndaga Muntu are therefore being left out of the critical services which are increasingly linked to the ID. At least 50,000 of the 200,000 eligible persons over the age of 80 in Uganda were unable to access potentially lifesaving benefits such as the Senior Citizens’ Grant cash transfer program. Women have been similarly disproportionately impacted by the national ID requirement; for instance, pregnant women have been refused services by healthcare workers for failing to provide ID. To make matters worse, ID requirements are increasingly ubiquitous in Uganda: proof of ID is often required to book transportation, to vote, to access educational services, healthcare, social protection grants, and food donations. Having a national ID has become necessary for basic survival, especially for those who live in extreme poverty.

Digital ID systems should not prevent people from living their lives and utilizing basic services that should be universally accessible, particularly when they are justified on the basis that they will improve access to services. Not only was the promise of inclusion for Ndaga Muntu false, but the rollout of the system has also been incompetent and faulty, leading to even greater exclusion. The profound impact of this double exclusion in Uganda demonstrates that such digital ID systems and their impacts on social and economic rights warrant greater and urgent attention from the human rights community at large.

June 12, 2021. Madeleine Matsui, JD program, Harvard Law School; intern with the Digital Welfare State and Human Rights Project.

User-friendly Digital Government? A Recap of Our Conversation About Universal Credit in the United Kingdom

TECHNOLOGY & HUMAN RIGHTS

User-friendly Digital Government? A Recap of Our Conversation About Universal Credit in the United Kingdom

On September 30, 2020, the Digital Welfare State and Human Rights Project hosted the first in its series of virtual conversations entitled “Transformer States: A Conversation Series on Digital Government and Human Rights,” exploring the digital transformation of governments around the world. In this first iteration of the series, Christiaan van Veen and Victoria Adelmant interviewed Richard Pope, part of the founding team at the UK Government Digital Service and author of Universal Credit: Digital Welfare. In interviewing a technologist who worked with policy and delivery teams across the UK government to redesign government services, the event sought to explore the promise and realities of digitalized benefits.

Universal Credit (UC), the main working-age benefit for the UK population, represents at once a major political reform and an ambitious digitization project. UC is a “digital by default” benefit in that claims are filed and managed via an online account, and calculations of recipients’ entitlements are also reliant on large-scale automation within government. The Department for Work and Pensions (DWP), the department responsible for welfare in the UK, repurposed the taxation office’s Real-Time Information (RTI) system, which already collected information about employees’ earnings for the purposes of taxation, in order to feed this data about wages into an automated calculation of individual benefit levels. The amount a recipient receives each month from UC is calculated on the basis of this “real-time feed” of information about her earnings as well as on the basis of a long list of data points about her circumstances, including how many children she has, her health situation and her housing. UC is therefore ‘dynamic,’ as the monthly payment that recipients receive fluctuates. Readers can find a more comprehensive explanation of how UC works in Richard’s report.
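For readers curious how such a “dynamic” calculation might work in principle, the following is a minimal, purely illustrative sketch. The allowance amounts, child element, housing treatment, and taper rate are invented placeholders for illustration, not actual DWP parameters or the real UC rules.

```python
# Illustrative sketch only: a toy "dynamic" monthly award calculation loosely
# modeled on the logic described above. All figures, element names, and the
# taper rate are hypothetical placeholders, not actual DWP rules or rates.

def monthly_entitlement(earnings, num_children, housing_costs,
                        standard_allowance=320.0, child_element=235.0,
                        taper_rate=0.6):
    """Return a toy monthly award based on one month's reported earnings."""
    maximum_award = standard_allowance + num_children * child_element + housing_costs
    reduction = earnings * taper_rate           # award tapers as reported wages rise
    return max(maximum_award - reduction, 0.0)  # award cannot fall below zero

# Because the calculation re-runs on each month's "real-time" earnings feed,
# the same claimant can receive a different payment every month.
for month, wages in [("June", 400.0), ("July", 950.0), ("August", 0.0)]:
    print(month, round(monthly_entitlement(wages, num_children=2, housing_costs=500.0), 2))
```

Even this toy version shows how many separate data points feed into a single payment, and why fluctuations in reported earnings translate directly into fluctuating awards.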

One “promise” surrounding UC was that it would make interaction with the British welfare system more user-friendly. The 2010 White Paper launching the reforms noted that it would ‘cut through the complexity of the existing system’ through introducing online systems which would be “simpler and easier to understand” and “intuitive.” Richard explained that the design of UC was influenced by broader developments surrounding the government’s digital transformation agenda, whereby “user-centered design” and “agile development” became the norm across government in the design of new digital services. This approach seeks to place the needs of users first and to design around those needs. It also favors an “agile,” iterative way of working rather than designing an entire system upfront (the “waterfall” approach).

Richard explained that DWP designs the UC software itself and releases updates to the software every two weeks: “They will do prototyping, they will do user research based on that prototyping, they will then deploy those changes, and they will then write a report to check that it had the desired outcome,” he said. Through this iterative, agile approach, government has more flexibility and is better able to respond to “unknowns.” One such “unknown” was the COVID-19 pandemic: as the UK “locked down” in March, almost a million new claims for UC were successfully processed in the space of just two weeks. Not only would the old, pre-UC system likely have been unable to meet this surge; this also compared very favorably with the failures seen in some US states—some New Yorkers, for example, were required to fax their applications for unemployment benefit.

The conversation then turned to the reality of UC from the perspective of recipients. For example, half of claimants were unable to make their claim online without help, and DWP was recently required by a tribunal to release figures which show that hundreds of thousands of claims are abandoned each year. The ‘digital first’ principle as applied to UC, in effect requiring all applicants to claim online and offering inadequate alternatives, has been particularly harmful in light of the UK’s ‘digital divide.’ Richard underlined that there is an information problem here – why are those applications being abandoned? We cannot be certain that the sole cause is a lack of digital skills. Perhaps people are put off by the large quantity of information about their lives they are required to enter into the digital system, or people get a job before completing the application, or they realize how little payment they will receive, or that they will have to wait around five weeks to receive any payment.

But had the UK government not been overly optimistic about future UC users’ access and ability to use digital systems? For example, the 2012 DWP Digital Strategy stated that “most of our customers and claimants are already online and more are moving online all the time,” while only half of all adults with an annual household income between £6,000 and £10,000 have an internet connection either via broadband or smartphone. Richard agreed that the government had been over-optimistic, but pointed again to the fact that we do not know why users abandon applications or struggle with the claim, such that it is “difficult to unpick which elements of those problems are down to the technology, which elements are down to the complexity of the policy, and which elements are down to a lack of digital skills.”

This question of attributing problems to policy rather than to the technology was a crucial theme throughout the conversation. Organizations such as the Child Poverty Action Group have pointed to instances in which the technology itself causes problems, identifying ways in which the UC interface is not user-friendly, for example. CPAG was commended in the discussion for having “started to care about design” and proposing specific design changes in its reports. Richard noted that certain elements which were not incorporated into the digital design of UC, and elements which were not automated at all, highlight choices which have been made. For example, the system does not display information about additional entitlements, such as transport passes or free prescriptions and dental care, for which UC applicants may be eligible. The fact that the technological design of the system did not feature information about these entitlements demonstrates the importance and power of design choices, but it is unclear whether such design choices were the result of political decisions, or simply omissions by technologists.

Richard noted that some of the political aims towards which UC is directed are in tension with the attempt to use technology to reduce administrative burdens on claimants and to make the welfare state more user-friendly. Though the ‘design culture’ among civil servants genuinely seeks to make things easier for the public, political priorities push in different directions. UC is “hyper means-tested”: it demands a huge amount of data points to calculate a claimant’s entitlement, and it seeks to reward or punish certain behaviors, such as rewarding two-parent families. If policymakers want a system that demands this level of control and sorting of claimants, then the system will place additional administrative burdens on applicants as they have more paperwork to find, they have to contact their landlord to get a signed copy of their lease, and so forth. Wanting this level of means-testing will result in a complex policy and “there is only so much a designer can do to design away that complexity”, as Richard underlined. That said, Richard also argued that part of the problem here is that government has treated policy and the delivery of services as separate. Design and delivery teams hold “immense power” and designers’ choices will be “increasingly powerful as we digitize more important, high-stakes public services.” He noted, “increasingly, policy and delivery are the same thing.”

Richard therefore promotes “government as a platform.” He highlighted the need for a rethink about how the government organizes its work and argued that government should prioritize shared reusable components and definitive data sources. It should seek to break down data silos between departments and have information fed to government directly from various organizations or companies, rather than asking individuals to fill out endless forms. If such an approach were adopted, Richard claimed, digitalization could hugely reduce the burdens on individuals. But, should we go in that direction, it is vital that government become much more transparent around its digital services. There is, as ever, an increasing information asymmetry between government and individuals, and this transparency will be especially important as services become ever-more personalized. Without more transparency about technological design within government, we risk losing a shared experience and shared understanding of how public services work and, ultimately, the capacity to hold government accountable.

October 14, 2020. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Social rights disrupted: how should human rights organizations adapt to digital government?

TECHNOLOGY & HUMAN RIGHTS

Social rights disrupted: how should human rights organizations adapt to digital government?

As the digitalization of government is accelerating worldwide, human rights organizations who have not historically engaged with questions surrounding digital technologies are beginning to grapple with these issues. This challenges these organizations to adapt both their substantive focus and working methods while remaining true to their values and ideals.

On September 29, 2021, Katelyn Cioffi and I hosted the seventh event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER) in Uganda, about how socioeconomic rights organizations are having to adapt to respond to issues arising from the digitalization of government. In this blog post, I outline parts of the conversation. The event recording, transcript, and additional readings can be found below.

Questions surrounding digital technologies are often seen as issues for “digital rights” organizations, which generally focus on a privileged set of human rights issues such as privacy, data protection, free speech online, or cybersecurity. But, as governments everywhere enthusiastically adopt digital technologies to “transform” their operations and services, these developments are starting to be confronted by actors who have not historically engaged with the consequences of digitalization.

Digital government as a new “core issue”

The Initiative for Social and Economic Rights (ISER) in Uganda is one such human rights organization. Its mission is to improve respect, recognition, and accountability for social and economic rights in Uganda, focusing on the rights to health, education, and social protection. It had never worked on government digitalization until recently.

But, through its work on social protection schemes, ISER was confronted with the implications of Uganda’s national digital ID program. While monitoring the implementation of the Senior Citizens’ Grant, under which persons over 80 years old receive cash grants, ISER staff frequently encountered people who were clearly over 80 but were not receiving grants. This program had been linked to Uganda’s national identification scheme, which holds individuals’ biographic and biometric information in a centralized electronic database called the National Identity Register and issues unique IDs to enrolled individuals. Many older persons had struggled to obtain IDs because their fingerprints could not be captured. Many other older persons had obtained national IDs, but the wrong birthdates were entered into the ID Register. In one instance, a man’s birthdate was wrong by nine years. In each case, the Senior Citizens’ Grant was not paid to eligible beneficiaries because of faulty or missing data within the National Identity Register. Witnessing these significant exclusions led ISER to become actively involved in research and advocacy surrounding the digital ID. They partnered with CHRGJ’s Digital Welfare State team and Ugandan digital rights NGO Unwanted Witness, and the collective work culminated in a joint report. This has now become a “core issue” for ISER.

Key challenges

While moving into this area of work, ISER has faced some challenges. First, digitalization is spreading quickly across various government services. From the introduction of online education despite significant numbers of people having no access to electricity or the internet, to the delivery of COVID-19 relief via mobile money when only 71% of Ugandans own a mobile phone, exclusions are arising across multiple government initiatives. As technology-driven approaches are being rapidly adopted and new avenues of potential harm are continually materializing, organizations can find it difficult to keep up.

The widespread nature of these developments means that organizations are finding themselves making the same argument again and again to different parts of government. It is often proclaimed that digitized identity registers will enable integration and interoperability across government, and that introducing technologies into governance “overcomes bureaucratic legacies, verticality and silos.” But ministries in Uganda remain fragmented and are each separately linking their services to the national ID. ISER must go to different ministries whenever new initiatives are announced to explain, yet again, the significant level of exclusion that using the National Identity Register entails. While fragmentation was a pre-existing problem, the rapid proliferation of initiatives across government is leaving organizations “firefighting.”

Second, organizations face an uphill battle in convincing the government to slow down in their deployment of technology. Government officials often see enormous potential in technologies for cracking down on security threats and political dissent. Digital surveillance is proliferating in Uganda, and the national ID contributes to this agenda by enabling the government to identify individuals. Where such technologies are presented as combating terrorism, advocating against them is a challenge.

Third, powerful actors are advocating the benefits of government digitalization. International agencies such as the World Bank are providing encouragement and technical assistance and are praising governments’ digitalization efforts. Salima noted that governments take this seriously, and if publications from these organizations are “not balanced enough to bring out the exclusionary impact of the digitalization, it becomes a problem.” Civil society faces an enormous challenge in countering overly-positive reports from influential organizations.

Lessons for human rights organizations

In light of these challenges, several key lessons arise for human rights organizations who are not used to working on technology-related problems but who are witnessing harmful impacts from digital government.

One important lesson is that organizations will need to adopt new and different methods in dealing with challenges arising from the rapid spread of digitalization; they should use “every tool available to them.” ISER is an advocacy organization which only uses litigation as a last resort. But when the Ugandan Ministry of Health announced that national ID would be required to access COVID-19 vaccinations, “time was of the essence”, in Salima’s words. Together with Unwanted Witness, it immediately launched litigation seeking an injunction, arguing that this would exclude millions, and the policy was reversed.

ISER’s working methods have changed in other ways. ISER is not a service provision charity. But, in seeing countless people unable to access services because they were unable to enroll in the ID Register, ISER felt obliged to provide direct assistance. Staff compiled lists of people without ID, provided legal services, and helped individuals to navigate enrolment. Advocacy organizations may find themselves taking on such roles to assist those who are left behind in the transition to digital government.

Another key lesson is that organizations have much to gain from sharing their experiences with practitioners who are working in different national contexts. ISER has been comparing its experiences and sharing successful advocacy approaches with Kenyan and Indian counterparts and has found “important parallels.”

Last, organizations must engage in active monitoring and documentation to create an evidence base which can credibly show how digital initiatives are, in practice, affecting some of the most vulnerable. As Salima noted, “without evidence, you can make as much noise as you like,” but it will not lead to change. From taking videos and pictures, to interviewing and writing comprehensive reports, organizations should be working to ensure that affected communities’ experiences can be amplified and reflected to demonstrate the true impacts of government digitalization.

October 19, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

TECHNOLOGY & HUMAN RIGHTS

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

Singapore’s smart city initiative creates an interconnected web of digital infrastructures which promises citizens safety, convenience, and efficiency. But the smart city is experienced differently by individuals at the margins, particularly migrant workers, on whom new technologies are tested at the forefront of innovation.

On February 23, 2022, we hosted the tenth event of the Transformer States Series on Digital Government and Human Rights, titled “Surveillance of the Poor in Singapore: Poverty in ‘Smart City’.” Christiaan van Veen and Victoria Adelmant spoke with Dr. Monamie Bhadra Haines about the deployment of surveillance technologies as part of Singapore’s “smart city” initiative. This blog outlines the key themes discussed during the conversation.

The smart city in the context of institutionalized racial hierarchy

Singapore has consistently been hailed as the world’s leading smart city. For a decade, the city-state has been covering its territory with ubiquitous sensors and integrated digital infrastructures with the aim, in the government’s words, of collecting information on “everyone, everything, everywhere, all the time.” But these smart city technologies are layered on top of pre-existing structures and inequalities, which mediate how these innovations are experienced.

One such structure is an explicit racial hierarchy. As an island nation with a long history of multi-ethnicity and migration, Singapore has witnessed significant migration from Southern China, the Malay Peninsula, India, and Bangladesh. Borrowing from the British model of race-based regulation, this multi-ethnicity is governed by the post-colonial state through the explicit adoption of four racial categories – Chinese, Malay, Indian and Others (or “CMIO” for short) – which are institutionalized within immigration policies, housing, education and employment. As a result, while migrant workers from South and Southeast Asia are the backbone of Singapore’s blue-collar labor market, they occupy the bottom tier of the racial hierarchy; are subject to stark precarity; and have become the “objects” of extensive surveillance by the state.

The promise of the smart city

Singapore’s smart city initiative is “sold” to the public through narratives of economic opportunities and job creation in the knowledge economy, improving environmental sustainability, and increasing efficiency and convenience. Through collecting and inter-connecting all kinds of “mundane” data – such as electricity patterns, data from increasingly-intrusive IoT products, and geo-location and mobility data – into centralized databases, smart cities are said to provide more safety and convenience. Singapore’s hyper-modern technologically-advanced society promises efficient and seamless public services, and the constant technology-driven surveillance and the loss of a few civil liberties are viewed by many as a small price to pay for such efficiency.

Further, the collection of large quantities of data from individuals is promised to enable citizens to be better connected with the government; while governments’ decisions, in turn, will be based upon the purportedly objective data from sensors and devices, thereby freeing decision-making from human fallibility and rendering it more neutral.

The realities: disparate impacts of smart city surveillance on migrant workers

However, smart cities are not merely economic or technological endeavors, but techno-social assemblages that create and impact different publics differently. As Monamie noted, specific imaginations and imagery of Singapore as a hyper-modern, interconnected, and efficient smart city can obscure certain types of racialized physical labor, such as the domestic labor of female Southeast-Asian migrant workers.

Migrant workers are uniquely impacted by increasing digitalization and datafication in Singapore. For years, these workers have been housed in dormitories with occupancy often exceeding capacity, located in the literal “margins” or outskirts of the city: migrant workers have long been physically kept separate from the rest of Singapore’s population within these dormitory complexes. They are stereotyped as violent or frequently inebriated, and the dormitories have for years been surveilled through digital technologies including security cameras, biometric sensors, and data from social media and transport services.

The pandemic highlighted and intensified the disproportionate surveillance of migrant workers within Singapore. Layered on top of the existing technological surveillance of migrants’ dormitories, a surveillance assemblage for COVID-19 contact tracing was created. Measures in the name of public health were deployed to carefully surveil these workers’ bodies and movements. Migrant workers became “objects” of technological experimentation as they were required to use a multitude of new mobile-based apps that integrated immigration data and work permit data with health data (such as body temperature and oximeter readings) and Covid-19 contact tracing data. The permissions required by these apps were also quite broad – including access to Bluetooth services and location data. All the data was stored in a centralized database.

Even though surveillant contact-tracing technologies were later rolled out across Singapore and normalized around the world, the important point here is that these systems were deployed exclusively on migrant workers first. Some apps, Monamie pointed out, were indeed only required by migrant workers, while citizens did not have to use them. This use of interconnected networks of surveillance technologies thus highlights the selective experimentation that underpins smart city initiatives. While smart city initiatives are, by their nature, premised on large-scale surveillance, we often see that policies, apps, and technologies are tried on individuals and communities with the least power first, before spilling out to the rest of the population. In Singapore, the objects of such experimentation are migrant workers who occupy “exceptional spaces” – of being needed to ensure the existence of certain labor markets, but also of needing to be disciplined and regulated. These technological initiatives, in subjecting specific groups at the margins to more surveillance than the rest of the population and requiring them to use more tech-based tools than others, serve to exacerbate the “othering” and isolation of migrant workers.

Forging eddies of resistance

While Monamie noted that “activism” is “still considered a dirty word in Singapore,” there have been some localized efforts to challenge some of the technologies within the smart city, in part due to the intensification of surveillance spurred by the pandemic. These efforts, and a rapidly-growing recognition of the disproportionate targeting and disparate impacts of such technologies, indicate that the smart city is also a site of contestation with growing resistance to its tech-based tools.

March 18, 2022. Ramya Chandrasekhar, LLM program, NYU School of Law, whose research interests relate to data governance, critical infrastructure studies, and critical theory. She previously worked with technology policy organizations and at a reputable law firm in India.

Risk Scoring Children in Chile

TECHNOLOGY & HUMAN RIGHTS

Risk Scoring Children in Chile

On March 30, 2022, Christiaan van Veen and Victoria Adelmant hosted the eleventh event in our “Transformer States” interview series on digital government and human rights. In conversation with human rights expert and activist Paz Peña, we examined the implications of Chile’s “Childhood Alert System,” an “early warning” mechanism which assigns risk scores to children based on their calculated probability of facing various harms. This blog picks up on the themes of the conversation. The video recording and additional readings can be found below.

The deaths of over a thousand children in privatized care homes in Chile between 2005 and 2016 have, in recent years, pushed the issue of child protection high onto the political agenda. The country’s limited legal and institutional protections for children have been consistently critiqued in the past decade, and calls for more state intervention, to reverse the legacies of Pinochet-era commitments to “hands-off” government, have been intensifying. On his first day in office in 2018, former president Sebastián Piñera promised to significantly strengthen and institutionalize state protections for children. He launched a National Agreement for Childhood and established local “childhood offices” and an Undersecretariat for Children; a law guaranteeing children’s rights was passed; and the Sistema Alerta Niñez (“Childhood Alert System”) was developed. This system uses predictive modelling software to calculate children’s likelihood of facing harm or abuse, dropping out of school, and other such risks.

Predictive modelling calculates the probabilities of certain outcomes by identifying patterns within datasets. It operates through a logic of correlation: where persons with certain characteristics experienced harm in the past, those with similar characteristics are likely to experience harm in the future. Developed jointly by researchers at Auckland University of Technology’s Centre for Social Data Analytics and the Universidad Adolfo Ibáñez’s GobLab, the Childhood Alert predictive modelling software analyzes existing government databases to identify combinations of individual and social factors which are correlated with harmful outcomes, and flags children accordingly. The aim is to “prioritize minors [and] achieve greater efficiency in the intervention.”
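To make this correlational logic concrete, here is a minimal illustrative sketch in Python: a simple classifier is trained on historical administrative records and then scores new cases. The feature names, data, and choice of model are assumptions made for the sake of illustration; they are not drawn from the actual Childhood Alert software.

```python
# Illustrative sketch of risk scoring by correlation; not the actual
# Childhood Alert model. Features and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [family_on_social_assistance, prior_protective_service_contact,
#                      neighborhood_unemployment_rate], and whether harm was later recorded.
X_history = np.array([
    [1, 1, 0.12],
    [1, 0, 0.09],
    [0, 0, 0.04],
    [1, 1, 0.15],
    [0, 0, 0.03],
])
y_history = np.array([1, 0, 0, 1, 0])  # 1 = harm recorded in the past

model = LogisticRegression().fit(X_history, y_history)

# New children with characteristics resembling past flagged cases receive
# higher scores: the model can only reproduce patterns in the historical data.
new_cases = np.array([[1, 1, 0.14], [0, 0, 0.05]])
risk_scores = model.predict_proba(new_cases)[:, 1]
print(risk_scores)
```

Because a model like this can only reproduce patterns found in the records it is trained on, the composition of those records determines whom it is capable of flagging at all, a limitation that becomes important below.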

A skewed picture of risk

But the Childhood Alert System is fundamentally skewed. The tool analyzes databases about the beneficiaries of public programs and services, such as Chile’s Social Information Registry. It thereby only examines a subset of the population of children—those whose families are accessing public programs. Families in higher socioeconomic brackets—who do not receive social assistance and thus do not appear in these databases—are already excluded from the picture, despite the fact that children from these groups can also face abuse. Indeed, the Childhood Alert system’s developers themselves acknowledged in their final report that the tool has “reduced capability for identifying children at high risk from a higher socioeconomic level” due to the nature of the databases analyzed. The tool, from its inception and by its very design, is limited in scope and completely ignores wealthier groups.

The analysis then proceeds on a problematic basis, whereby socioeconomic disadvantage is equated with risk. Selected variables include: the social programs of which the child’s family are beneficiaries; the family’s educational background; socioeconomic measures from Chile’s Social Registry of Households; and a whole host of geographical variables, including the number of burglaries, the percentage of single-parent households, and the unemployment rate in the child’s neighborhood. Each of these variables is a direct measure of poverty. Through this design, children in poorer areas can be expected to receive higher risk scores. This is likely to perpetuate over-intervention in certain neighborhoods.

Economic and social inequalities, including significant regional disparities in living conditions, persist in Chile. As elsewhere, poverty and marginalization do not fall evenly. Women, migrants, those living in rural areas, and indigenous groups are more likely to live in poverty, with indigenous groups facing Chile’s highest poverty rates. As the Alert System is skewed towards low-income populations, it will likely disproportionately flag children from indigenous groups, raising issues of racial and ethnic bias. Furthermore, the datasets used will themselves reflect inequalities and biases. Public datasets about families’ previous interactions with child protective services, for example, are populated through social workers’ inputs. Biases against indigenous families, young mothers, or migrants—reflected in disproportionate investigations or stereotyped judgments about parenting—will be fed into the database.

The developers of this predictive tool wrote in their evaluation that concerns about racial disparities “have been expressed in the context of countries like the United States, where there are greater challenges related to racism. In the local Chilean context, we frankly don’t see similar concerns about race.” As Paz Peña points out, this dismissal is “difficult to understand” in light of the evidence of racism and racialized poverty in Chile.

Predictive systems such as these are premised on linking individuals’ characteristics and circumstances with the incidence of harm. As Abeba Birhane puts it, such approaches by their nature “force determinability [and] create a world that resembles the past” through reinforcing stereotypes, because they attach risk factors to certain individual traits.

The global context

These issues of bias, disproportionality, and determinacy in predictive child welfare tools have already been raised in other countries. Public outcry, ethical concerns, and evidence that these tools simply do not work as intended have led many such systems to be scrapped. In the United Kingdom, a local authority’s Early Help Profiling System, which “translates data on families into risk profiles [of] the 20 families in most urgent need,” was abandoned after it had “not realized the expected benefits.” The U.S. state of Illinois’ child welfare agency strongly criticized and scrapped its predictive tool, which had flagged hundreds of children as 100% likely to be injured while failing to flag any of the children who did tragically die from mistreatment. And in New Zealand, the Social Development Minister prevented the deployment of a predictive tool on ethical grounds, reportedly remarking: “These are children, not lab rats.”

But while predictive tools are being scrapped on grounds of ethics and ineffectiveness in certain contexts, these same systems are spreading across the Global South. Indeed, the Chilean case demonstrates this trend especially clearly. The team of researchers who developed Chile’s Childhood Alert System is the very same team whose modelling was halted by the New Zealand government due to ethical questions, and whose predictive tool for the U.S. state of Pennsylvania was the subject of high-profile and powerful critique by many actors including Virginia Eubanks in her 2018 book Automating Inequality.

As Paz Peña noted, it should come as no surprise that systems which are increasingly deemed too harmful in some Global North contexts are proliferating in the Global South. These spaces are often seen as an “easier target,” with lower chances of backlash than places like New Zealand or the United States. In Chile, weaker institutions resulting from the legacies of military dictatorship and the staunch commitment to a “subsidiary” (streamlined, outsourced, neoliberal) state may be deemed to provide more fertile ground for such systems. Indeed, the tool’s developers wrote in a report that achieving acceptance of the system in Chile would be “simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits.”

This highlights the indispensability of international comparison, cooperation, and solidarity. Those of us working in this space must pay close attention to developments around the world as these systems continue to be hawked at breakneck speed. Identifying parallels, sharing information, and collaborating across constituencies is vital to support the organizations and activists who are working to raise awareness of these systems.

April 20, 2022. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Regulating Artificial Intelligence in Brazil

TECHNOLOGY & HUMAN RIGHTS

Regulating Artificial Intelligence in Brazil

On May 25, 2023, the Center for Human Rights and Global Justice’s Technology & Human Rights team hosted an event entitled Regulating Artificial Intelligence: The Brazilian Approach, in the fourteenth episode of the “Transformer States” interview series on digital government and human rights. This in-depth conversation with Professor Mariana Valente, a member of the Commission of Jurists created by the Brazilian Senate to work on a draft bill to regulate artificial intelligence, raised timely questions about the specificities of ongoing regulatory efforts in Brazil. These developments may have significant global implications, potentially inspiring more creative, rights-based, and socio-economically grounded regulation of emerging technologies elsewhere in the Global South.

In recent years, numerous initiatives to regulate and govern Artificial Intelligence (AI) systems have arisen in Brazil. First, there was the Brazilian Strategy for Artificial Intelligence (EBIA), launched in 2021. Second, legislation known as Bill 21/20, which sought to specifically regulate AI, was approved by the House of Representatives in 2021. And in 2022, a Commission of Jurists was appointed by the Senate to draft a substitute bill on AI. This latter initiative holds significant promise. While the EBIA and Bill 21/20 were heavily criticized for giving little weight to public input despite the participatory and multi-stakeholder mechanisms available, the Commission of Jurists took specific steps to be more open to such input. Its proposed draft legislation, which is grounded in Brazil’s socio-economic realities and legal tradition, may inspire further legal regulation of AI, especially in the Global South, given Brazil’s prominent position in other discussions of internet and technology governance.

Bill 21/20 was the first bill directed specifically at AI. But it was a very minimal bill; it effectively established that regulating AI should be the exception. It was also based on a decentralized model, meaning that each economic sector would regulate its own applications of AI: for example, the federal agency dedicated to regulating the healthcare sector would regulate AI applications in that sector. There were no specific obligations or sanctions for companies developing or deploying AI, only some guidelines for the government on how it should promote the development of AI. Overall, the bill aligned closely with the private sector’s preference for the most minimal regulation possible. It was quickly approved in the House of Representatives, without public hearings or much public attention.

It is important to note that this bill does not exist in isolation. Other legislation applies to AI in the country, such as consumer law and data protection law, as well as the Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet). These existing laws have been leveraged by civil society to protect people from AI harms. For example, the Instituto Brasileiro de Defesa do Consumidor (IDEC), a consumer rights organization, successfully brought a public civil action under consumer protection legislation against Via Quatro, the private company responsible for subway line 4-Yellow in São Paulo. The company was fined R$500,000 for collecting and processing individuals’ biometric data for advertising purposes without informed consent.

But, given that Bill 21/20 sought to specifically address the regulation of AI, academics and NGOs raised concerns that it would reduce the legal protections afforded in Brazil: it “gravely undermines the exercise of fundamental rights such as data protection, freedom of expression and equality” and “fails to address the risks of AI, while at the same time facilitating a laissez-faire approach for the public and private sectors to develop, commercialize and operate systems that are far from trustworthy and human-centric (…) Brazil risks becoming a playground for irresponsible agents to attempt against rights and freedoms without fearing for liability for their acts.”

As a result, the Senate decided that instead of voting on Bill 21/20, they would create a Commission of Jurists to propose a new bill.

The Commission of Jurists and the new bill

The Commission of Jurists was established in April 2022 and delivered its final report in December 2022. Even though the establishment of the Commission was considered a positive development, it was not exempt from criticism from civil society, which pointed to the lack of racial and regional diversity in the Commission’s membership and to the need for different areas of knowledge to contribute to the debate. This criticism reflects Brazil’s socio-economic realities: it is one of the most unequal countries in the world, and its inequalities are intersectional, cutting across race, gender, income, and territorial origin. AI applications will therefore have different effects on different segments of the population. This is already clear from the use of facial recognition in public security: more than 90% of the individuals arrested on the basis of this technology were Black. Another example is the use of an algorithm to evaluate requests for emergency aid during the pandemic, in which many vulnerable people had their benefits denied based on incorrect data.

During its mandate, the Commission of Jurists held public hearings, invited specialists from different areas of knowledge, and developed a public consultation mechanism allowing for written proposals. Following this process, the proposed new bill differed from Bill 21/20 in several respects. First, it borrows from the EU’s AI Act in adopting a risk-based approach: obligations vary according to the level of risk an AI system poses. However, following the Brazilian tradition of structuring regulation around individual and collective rights, the new bill merges the European risk-based approach with a rights-based approach, conferring individual and collective rights that apply in relation to all AI systems, regardless of the level of risk they pose.

Secondly, the new bill includes additional obligations for the public sector, given its differential impact on people’s rights. For example, there is a ban on the processing of racial information, and there are provisions on public participation in decisions regarding the adoption of these systems. Importantly, though the Commission discussed including a complete ban on facial recognition technologies in public spaces for public security purposes, this proposal was not adopted: instead, the bill establishes a moratorium, requiring that a law regulating such use be approved first.

What the future holds for AI regulation in Brazil

After the Commission submitted its report, the president of the Senate presented, in May 2023, a new bill for AI regulation replicating the Commission’s proposal. On August 16, 2023, the Senate established a temporary internal commission to discuss the different proposals for AI regulation that have been presented in the Senate to date.

It is difficult to predict what will happen once the internal commission concludes its work, as political decisions will shape the next developments. However, it is important to bear in mind how far the discussion has progressed: from an initial bill that was minimal in scope and endorsed the idea of minimal regulation, to one that is much more protective of individual and collective rights and considerate of Brazil’s particular socio-economic realities. Brazil has historically played an important progressive role in global discussions on the regulation of emerging technologies, for example in the debates around its Marco Civil da Internet. As Mariana Valente put it, “Brazil has had in the past a very strong tradition of creative legislation for regulating technologies.” The Commission of Jurists’ proposal repositions Brazil in that role.

September 28, 2023. Marina Garrote, LLM program, NYU School of Law, whose research interests lie at the intersection of digital rights and social justice. Marina holds bachelor’s and master’s degrees from the Universidade de São Paulo and previously worked at Data Privacy Brazil, a civil society association dedicated to public interest research on digital rights.

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System

TECHNOLOGY & HUMAN RIGHTS

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System 

Aadhaar is the largest national biometric digital identification program in the world, with over 1.2 billion registered users. While the poor have been used as a “marketing strategy” for this program, the “real agenda” is the pursuit of private profit.

Over the past months, the Digital Welfare State and Human Rights Project’s “Transformer States” conversations have highlighted the tensions and deceits that underlie attempts by governments around the world to digitize welfare systems and wider attempts to digitize the state. On January 27, 2021, Christiaan van Veen and Victoria Adelmant explored the particular complexities and failures of Aadhaar, India’s digital identification system, in an interview with Dr. Usha Ramanathan, a recognized human rights expert.

What is Aadhaar?

Aadhaar is the largest national digital identification program in the world; over 1.2 billion Indian residents are registered and have been given unique Aadhaar identification numbers. In order to create an Aadhaar identity, individuals must provide biometric data, including fingerprints, iris scans, and facial photographs, as well as demographic information including name, birthdate, and address. Once an individual is set up in the Aadhaar system (which can be complicated, depending on whether the individual’s biometric data can be gathered easily, where they live, and how mobile they are), they can use their Aadhaar number to access public and, increasingly, private services. In many instances, accessing food rations, opening a bank account, and registering a marriage all require an individual to authenticate through Aadhaar. Authentication is mainly done by scanning one’s finger or iris, though One-Time Passcodes or QR codes can also be used.
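In highly simplified terms, biometric authentication of this kind amounts to a yes/no match of a live sample against an enrolled template. The Python sketch below illustrates that idea only; the function names, threshold, and flow are assumptions for illustration and do not represent UIDAI’s actual interfaces.

```python
# Hypothetical, highly simplified authentication flow; not UIDAI's actual API.

def match_score(sample: bytes, template: bytes) -> float:
    """Placeholder for a biometric matching algorithm (assumed for illustration)."""
    return 1.0 if sample == template else 0.0

def authenticate(aadhaar_number: str, fingerprint_sample: bytes,
                 enrolled_templates: dict[str, bytes],
                 threshold: float = 0.8) -> bool:
    """Yes/no check: the relying service learns only whether the sample matched."""
    template = enrolled_templates.get(aadhaar_number)
    if template is None:
        return False  # no enrolment record found: access to the service fails
    return match_score(fingerprint_sample, template) >= threshold

# A ration shop, bank, or registry would make a call along these lines:
# granted = authenticate("123412341234", scanned_fingerprint, central_biometric_store)
```

When a worn or damaged fingerprint cannot produce a sufficient match score, the check fails and the service is simply denied, which is the failure mode described in the bullet points below.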

The welfare “façade”

The Unique Identification Authority of India (UIDAI) is the government agency responsible for administering the Aadhaar system. Its stated vision, mission, and values include empowerment, good governance, transparency, efficiency, sustainability, integrity, and inclusivity. UIDAI has stated that Aadhaar is intended to facilitate “inclusion of the underprivileged and weaker sections of the society and is therefore a tool of distributive justice and equality.” Like many of the digitization schemes examined in the Transformer States series, the Aadhaar project promised all Indians formal identification that would better enable them to access welfare entitlements. In particular, early government statements claimed that many poorer Indians did not have any form of identification, thereby justifying Aadhaar as a way for them to access welfare. However, recent research suggests that less than 0.03% of Indian residents lacked formal identification such as birth certificates.

Although most Indians now have an Aadhaar “identity,” the Aadhaar system fails to live up to its lofty promises. The main issues preventing Indians from effectively claiming their entitlements are:

  • Shifting the onus of establishing authorization and entitlement onto citizens. A system that is supposed to make accessing entitlements and complying with regulations “straightforward” or “efficient” often results in frustrating and disempowering rejections or denials of services. The government asserts that the system is “self-cleaning,” which means that individuals have to fix their identity records themselves. For example, they must manually correct errors in their name or date of birth, despite not always having the resources to do so.
  • Concerns with biometrics as a foundation for the system. When the project started, there was limited data or research on the effectiveness of biometric technologies for accurately establishing identity in the context of developing countries. However, the last decade of research reveals that biometric technologies do not work well in India. It can be impossible to reliably provide a fingerprint in populations with a substantial proportion of manual laborers and agricultural workers, and in hot and humid environments. Given that biometric data is used for both enrolment and authentication, these difficulties frustrate access to essential services on an ongoing basis.

Given these issues, Usha expressed concern that the system, initially presented as a voluntary program, is now effectively compulsory for those who depend on the state for support.

Private motives against the public good

The Aadhaar system is therefore failing the very individuals it was purportedly designed to help. The poorest are used as a “marketing strategy,” but it is clear that private profit is, and always was, the main motivation. From the outset, the Aadhaar “business model” was intended to benefit private companies by growing India’s “digital economy” and creating a rich and valuable dataset. In particular, it was envisioned that the Aadhaar database could be used by banks and fintech companies to develop products and services, which further propelled the drive to get all Indians onto the database. Given its breadth and reach, the database is an attractive asset for profit-seeking private enterprises and is seen as providing the foundation for an “Indian Silicon Valley.” Tellingly, the acronym “KYC,” used by UIDAI to assert that Aadhaar would help the government “know your citizen,” is now understood as “know your customer.”

Protecting the right to identity

The right to identity must not be confused with identification. Usha notes that “identity is complex and cannot be reduced to a number or a card,” because doing so empowers the data controller or data system to effectively choose whether to recognize the person seeking identification, or to “paralyse” their life by rejecting, or even deleting, their identification number. History shows the disastrous effects of using population databases to control and persecute individuals and communities, such as during the Holocaust and the Yugoslav Wars. Further, risks arise from the fact that identification systems like Aadhaar “fix” a single identity for individuals. Parts of a person’s identity that they may wish to keep separate—for example, their status as a sex worker, health information, or socio-economic status—are combined in a single dataset and made available in a variety of contexts, even if that data is outdated, irrelevant, or confidential.

Usha concluded that there is a compelling need to reconsider and redesign attempts at developing universal identification systems to ensure they are transparent, democratic, and rights-based. They must, from the outset, prioritize the needs and welfare of people over claims of “efficiency,” which, in reality, have been attempts to secure profit and control.

February 15, 2021. Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.