The Center at NYC Climate Week 2025

CLIMATE AND ENVIRONMENT

The Center for Human Rights & Global Justice at NYC Climate Week 2025

The Center for Human Rights and Global Justice will host a weeklong series of events for NYC Climate Week 2025. Organized by the Earth Rights Research and Action (TERRA) Program, these events will bring together global thought leaders, advocates, and discipline-spanning scholars to center pressing questions of rights-based climate action and more-than-human rights at this year’s NYC Climate Week. Join us for an engaging series of panels, conversations, film screenings, and more. 

Taking place at NYU Law, this dynamic week will run from Tuesday, September 23 to Thursday, September 25.

Tuesday, September 23, 2025

4:00-6:00 pm ET | English only
NYU School of Law, Wilf Hall, Room 512, 139 MacDougal Street

Join us for the official launch of Climate Change on Trial: Mobilizing Human Rights Litigation to Accelerate Climate Action, the new open-access book authored by Professor of Law César Rodríguez-Garavito and published by Cambridge University Press. This landmark publication traces the global rise of rights-based climate litigation and explores its transformative impacts on climate justice movements worldwide.

More than just a book, Climate Change on Trial is part of a comprehensive educational offering — including multimedia content, video explainers, and teaching resources — designed to support learning and action at the intersection of human rights and climate change.

Following a brief presentation of the book, an expert panel featuring César Rodríguez-Garavito (NYU Climate Law Accelerator), Elisa Morgera (UN Special Rapporteur on human rights and climate change), and Lisa Vanhala (Professor of Political Science at University College London), moderated by Ashley Otilia Nemeth (CLX Director of Programs), will discuss the evolving role of human rights law in the climate emergency. From courtroom strategies to global governance shifts to loss and damage, the conversation will highlight how the law has adapted — and must continue to adapt, urgently, creatively, and at a planetary scale — to remain relevant in the Anthropocene.

This event is hosted by NYU Climate Law Accelerator at the Center for Human Rights & Global Justice.

6:00-8:00 pm ET | English only
NYU School of Law, Wilf Hall, Room 512, 139 MacDougal Street

Signed in March 2024 by the late Māori King from Aotearoa (New Zealand) and other Pacific leaders, He Whakaputanga Moana is grounded in Te Ao Māori teachings and Polynesian values, which recognize whales as both ancestors and sentient beings. Join us to hear from Hinemoana Halo, the Indigenous-led organization behind the Declaration, for a conversation on the rights of whales with legal experts from NYU MOTH and scientists from Project CETI.

Panelists:

  • Aperahama Edwards
  • Simon Mitchell
  • César Rodríguez-Garavito
  • David Gruber

This event is hosted by Hinemoana Halo, Project CETI, and the NYU More-Than-Human Life (MOTH) Program at the Center for Human Rights and Global Justice.

6:00-8:00 pm ET | English, Spanish
NYU School of Law, Vanderbilt Hall, Room 206

As biodiversity faces mounting threats, it is now tracked by billions of data points worldwide. Yet today’s technology prioritizes global aggregation, severing ties between data and the communities who have been stewarding and caring for these biodiverse lands and waters. The accelerating pace of technological change only entrenches this extractive norm. At NYC Climate Week 2025, Local Contexts, the Indigenous Data Exchange, Center CIRCL, and the NYU MOTH Program seek to interrupt this cycle. Join us to reimagine and put into practice Indigenous-led biodiversity data infrastructures. Our focus is on building a future that connects and empowers rather than extracts and disconnects.

Panelists:

  • Stephanie Carroll (Ahtna)
  • Jane Anderson
  • Lydia Jennings (Pasqua Yaqui)
  • Darren Ranco (Penobscot)
  • Suzanne Greenlaw (Maliseet)
  • José Gualinga (Kichwa de Sarayaku)
  • Maheata White Davies (Tahiti)
  • Erin Robinson
  • Neil Davies

This event is hosted in collaboration with Local Contexts, the Indigenous Data Exchange, Center CIRCL, and the NYU MOTH Program at the Center for Human Rights and Global Justice.

Wednesday, September 24, 2025

6:00-8:00 pm ET | English, Spanish
NYU School of Law, Tishman Auditorium, 40 Washington Square South

Join Sarayaku, the NYU MOTH Program, Selvas Producciones, Local Contexts, Fungi Foundation, Cosmo Sheldrake, SPUN, and 070 for an exclusive screening of Allpa Ukundi, Ñukanchi Pura (Underground, Around, and Among Us), a powerful new documentary directed by Natalia Arenas, Diego Forero, and Eriberto Gualinga that traces a groundbreaking collaboration between the Sarayaku People of the Ecuadorian Amazon and a global alliance of scientists, artists, and advocates. The film captures the alliance’s efforts to advance Indigenous sovereignty and defend the Sarayaku people’s ancestral territory against extractive threats by documenting fungal and sonic life in the Amazon, all set against the backdrop of Sarayaku’s pioneering Kawsak Sacha (Living Forest) proposal. Following the screening, participants can stay for a discussion with members of the collaboration to explore how science, sound, and sovereignty intersect in the fight to defend the Living Forest.

Panelists:

  • José Gualinga
  • Samai Gualinga
  • César Rodríguez-Garavito
  • Eriberto Gualinga
  • Jane Anderson
  • Adriana Corrales
  • Carlos Andrés Baquero Díaz

This event is hosted by TAYJASARUTA (Kichwa Indigenous People of Sarayaku), Selvas Producciones, Local Contexts, Fungi Foundation, Cosmo Sheldrake, SPUN, 070, and the NYU MOTH Program at the Center for Human Rights and Global Justice.

Thursday, September 25, 2025

1:30-3:00 pm ET | English only
NYU School of Law, Wilf Hall 5th Floor, Room 512, 139 MacDougal Street

Join the Amazon Conservation Team (ACT) and the NYU MOTH Program to view Intangible Zone, learn more about the ACT’s work with Indigenous peoples living in voluntary isolation, and discuss the rights of these nations and the legal and political mechanisms available for their protection. About the film: In the Colombian Amazon, the Indigenous reserve Resguardo Curare Los Ingleses survives and fights to protect the Intangible Zone, a thriving territory inhabited by isolated Indigenous peoples, from the perils of the outside world. 

Panelists:

  • Brian Hettler
  • Juana Hofman
  • Daniel Aristizábal

This event is hosted by the Amazon Conservation Team and the NYU MOTH Program at the Center for Human Rights and Global Justice.

3:00-4:30 pm ET | English only
NYU School of Law, Wilf Hall 5th Floor, Room 512, 139 MacDougal Street

Join us for a compelling conversation with Lisa Vanhala, author of Governing the End: The Making of Climate Change Loss and Damage, about the global politics of climate justice, the unequal impacts of climate change, and the challenges of turning international commitments into meaningful action. Drawing on in-depth research into the UN climate negotiations, Vanhala sheds light on how countries navigate loss, responsibility, and power in the face of planetary crisis.

Panelists:

  • Lisa Vanhala (Pro Vice-Provost for the Grand Challenge Theme of the Climate Crisis, University College London)
  • César Rodríguez-Garavito

This event is hosted by NYU Climate Law Accelerator at the Center for Human Rights & Global Justice.

MOTH Festival of Ideas & FORGE Gathering 2025

EVENTS

MOTH Festival of Ideas & FORGE Gathering at NYU Law

A dynamic week of interdisciplinary exploration, innovation, and collaboration.

The Center for Human Rights and Global Justice hosted a weeklong series of events through the More-Than-Human Life (MOTH) Festival of Ideas 2025 and the Future of Human Rights and Governance (FORGE) Gathering 2025. These global gatherings brought together thought leaders, advocates, and scholars to explore the most pressing issues of our time, from ecological emergencies to technological disruption to geopolitical shifts. Taking place at NYU Law, this dynamic week of exploration, innovation, and collaboration ran from March 10 to March 15, 2025.

Hosted by the Earth Rights Research and Action (TERRA) and FORGE programs, both gatherings included closed-door, interactive scholar-practitioner sessions, as well as evening sessions open to the public.

With creativity and interdisciplinarity at their heart, the open sessions included keynote talks, interviews, film screenings, book launches, poetry readings, interdisciplinary performances, and concerts, as well as an exhibit on display throughout the week.

NYU Law is honored to host these pivotal gatherings that bring together bold ideas, diverse voices, and meaningful action. At a time when global justice faces unprecedented challenges, we are committed to fostering a space for creative thinking and forward-looking solutions.

César Rodríguez-Garavito
Chair, Center for Human Rights & Global Justice.

About the MOTH 2025 Festival

The MOTH Festival of Ideas featured over 100 thinkers and doers from around the world advancing the rights, interests, and well-being of nonhumans, humans, and the web of life that sustains us all. Practitioners and scholars from a wide range of disciplines—including law, ecology, philosophy, biology, journalism, the arts, and well beyond—are pursuing efforts to bring the more-than-human world into the ambit of moral, legal, and social concern, and the Festival gathered those at the cutting edge of this rich and rapidly evolving field.

About the FORGE 2025 Gathering

Designed as two days fostering a solutions-oriented community of legal experts, social scientists, governance professionals, and community-based practitioners, the FORGE Gathering reflects the program’s dedication to uncovering new approaches and solutions that reimagine rights and governance at a critical time for global justice.

With the hope of building momentum toward a brighter future, the MOTH 2025 Festival of Ideas and FORGE 2025 Gathering seek to transform perceptions, inspire a transnational community of practice with new ideas about global justice and more-than-human rights, and encourage experimentation with new actions and approaches.

Conversation

An Exploration of More-Than-Human Rights

  • César Rodríguez-Garavito
    Founding Director, NYU MOTH Program

Poetry

Las piedras que son vivas

  • Fátima Vélez
    Storyteller & Poet

Talk

Voices of the Forest: Indigenous Visions for the Future

  • José María Gualinga
    Kawsak Sacha Initiative
    Sarayaku Indigenous People

Poetry

Bichos / Beasties

  • Ezequiel Zaidenwerg
    Writer, Translator, Educator & Photographer

Excerpts from Bichos / Beasties (Caracol/Snail; Avispa/Wasp; Polilla/Moth; Grillo/Cricket; Mariposa/Butterfly; Alacrán/Scorpion)

Conversation

Entangled Worlds: Conversation on the Wisdom of Fungi and Ecology 

  • Jonathan Watts
    Global Environment Editor, The Guardian
  • Merlin Sheldrake
    Biologist & Author of Entangled Life

Poetry

ECHOLOLOGY 

  • angela rawlings
    Interdisciplinary Artist & Researcher

Excerpts from ECHOLOLOGY (I; WHOSE WHO; OWLUTION vs. WOLVOLUTION)

Talk

The Power of Story: Quiet Revolutions, Creativity and Cultural Transformation

  • Sol Guy
    Producer & Co-founder, Quiet

Conversation

Exploring Non-Human Intelligence through Whale Communication

  • César Rodríguez-Garavito
    Founding Director, NYU MOTH Program
  • David Gruber
    Founder & President of Project CETI
  • Johanna Chao Kreilick
    Senior Fellow, Center for Human Rights & Global Justice  

Poetry

Mauve Sea-Orchids 

  • Lila Zemborain
    Poet, Critic & NYU Clinical Professor

Conversation

Sand: Kinship Beyond Humans

  • Christine Winter
    Senior Lecturer, University of Otago

Book Launch

Mother Other: The Living Word, Creativity, and Belonging 

  • Elena Landinez
    Visual Artist & NYU MOTH Art Fellow
  • Fátima Vélez
    Storyteller & Poet
  • Jackie Gallant
    Director of Programs, NYU MOTH Program

Poetry

Symbiosis
in the woodworking trades

  • Neronessa
    Poet & Impact Entrepreneur

Conversation

The Current Geopolitics of Human Rights & Justice

  • César Rodríguez-Garavito
    Founding Director, NYU FORGE Program
  • Elisa Morgera
    UN Special Rapporteur on Climate Change & Human Rights
  • Margaret Satterthwaite
    UN Special Rapporteur on the Independence of Judges and Lawyers

Conversation

Creative Resistance in the Polycrisis

  • Azita Ardakani Walton
    Entrepreneur, Creative Strategist, and Philanthropist
  • Danielle Celermajer
    Multispecies Justice Project, University of Sydney
  • Jack Saul
    Psychologist, Artist, and Founding Director of the International Trauma Studies Program

Conversation

Art and Environmental Justice

  • Dylan McGarry
    Artist & Co-Founder of Empatheatre 
  • Elisa Morgera
    UN Special Rapporteur on Climate Change & Human Rights

Remarks

Troy McKenzie

  • Troy McKenzie
    Dean, New York University School of Law

Book Launch

The Many Lives of James Lovelock

  • Genevieve Guenther
    Founding Director, End Climate Silence
  • Jonathan Watts
    Global Environment Editor, The Guardian
  • Andrew C. Revkin
    Environmental Journalist & Author

Poetry

Do plants imagine flowers?

  • Eliana Hernández-Pachón
    Writer, Educator & Author of The Brush

Poetry

WET DREAM

  • Erin Robinsong
    Poet & Author of Rag Cosmology and Wet Dream

Excerpts from WET DREAM (THE FORCES THE FORMS; QUEEN OF HEAVEN; LUBE OF YOUR EYE)

Talk

Narrative Change: Why Stories Matter

  • Stephen Duncombe
    New York University & the Center for Artistic Activism

Center Chair gives keynote talk in Brazil Supreme Court’s seminar on structural litigation

CLIMATE AND ENVIRONMENT

Center Chair gives keynote talk in Brazil Supreme Court’s seminar on structural litigation

On October 7, 2024, as part of the Center for Human Rights and Global Justice’s ongoing academic exchange with Brazil’s Supreme Federal Court (STF), Professor César Rodríguez-Garavito gave a keynote talk at the seminar “Structural Litigation: Advances and Challenges” in Brasilia.

The event was organized by STF Chief Justice Luís Roberto Barroso as well as other high-ranking Brazilian judges, including STF’s Deputy Chief Justice Edson Fachin and the Federal High Court’s Chief Justice Antonio Herman Benjamin. 

In his opening remarks, Chief Justice Barroso highlighted the significance of structural litigation—that is, constitutional cases addressing systemic policy issues that affect the rights of large groups. Among ongoing structural cases before the Brazilian Supreme Court are those dealing with violations of Indigenous rights in the Amazon, prison overcrowding, and police violence in informal settlements. He underscored that this emerging area is central to the Brazilian judiciary, urging judges to proactively identify such issues and ensure that relevant governmental institutions develop and implement effective solutions. Other judges on the panel echoed the importance of the judiciary’s authority to act in these matters and emphasized the need for effective monitoring of structural court decisions.

The seminar also featured discussions on the judiciary’s role in resolving complex structural conflicts. Professor Rodríguez-Garavito shared insights on how structural cases are handled in comparative law, focusing on their impacts and potential applications to climate litigation. He highlighted the STF’s contributions to the protection of constitutional rights through structural rulings and suggested ways forward to ensure the legitimacy and effective implementation of the Court’s rulings. 

The Center for Human Rights and Global Justice’s participation in this seminar is one of many initiatives planned with high courts from around the world for the upcoming year, underscoring the Center’s commitment to supporting judicial engagement in innovative legal areas while protecting rights and advancing justice for all. 

Poor Enough for the Algorithm? Exploring Jordan’s Poverty Targeting System

TECHNOLOGY AND HUMAN RIGHTS

Poor Enough for the Algorithm? Exploring Jordan’s Poverty Targeting System

The Jordanian government is using an algorithm to rank social protection applicants from least poor to poorest, as part of a poverty alleviation program. While helpful to those individuals who receive aid, the system excludes beneficiaries in need because it fails to accurately reflect the complex realities of poverty. It uses an outdated poverty measure, weights imperfect indicators—such as utility consumption—and relies on a static view of socioeconomic status.

On November 28, 2023, the Digital Welfare State and Human Rights project hosted the sixteenth episode in the Transformer States conversation series on Digital Government and Human Rights. Victoria Adelmant and Katelyn Cioffi interviewed Hiba Zayadin, a senior researcher in the Middle East and North Africa division at Human Rights Watch (HRW), about a report published by HRW on the Jordanian government’s use of an algorithmic system to rank applicants for a welfare program based on their poverty level, using data like electricity usage and car ownership. This blog highlights key issues related to the system’s inability to reflect the complexities of poverty and its algorithmic exclusion of individuals in need.

The context behind Jordan’s poverty targeting program 

‘Poverty targeting’ is generally understood to mean directing social program benefits towards those most in need, with the aim of efficiently using limited government resources and improving living conditions for the poorest individuals. This approach entails the collection of wide-ranging information about socioeconomic circumstances, often through in-depth surveys and interviews, to enable means testing or proxy means testing. Some governments have adopted an approach in which beneficiaries are ‘ranked’ from richest to poorest, targeting aid only at those falling below a certain threshold. The World Bank has long advocated for poverty targeting in social assistance. For example, since 2003, the World Bank has supported Brazil’s Bolsa Família program, which is targeted at the poorest 40% of the population.

Increasingly, the World Bank has turned to new technologies to seek to improve the accuracy of poverty targeting programs. It has provided funding to many countries for data-driven, algorithm-enabled solutions to enhance targeting. Similar programs have been implemented in countries including Jordan, Mauritania, Palestine, Morocco, Iraq, Tunisia, Egypt, and Lebanon.

Launched in 2019 with World Bank support, Jordan’s Takaful program, an automated cash transfer program, provides monthly support to families (roughly US $56 to $192) to mitigate poverty. Managed by the National Aid Fund, the program targets the more than 24% of Jordan’s population that falls under the poverty line. The Takaful program has been especially welcome in Jordan, in light of rising living costs. However, policy choices underpinning this program have excluded many individuals who are in need: eligibility restrictions limit access solely to Jordanian nationals, such that the program does not cover registered Syrian refugees, Palestinians without Jordanian passports, migrant workers, and the non-Jordanian families of Jordanian women—since Jordanian women cannot pass on citizenship to their children. Initial phases of the program entailed broader eligibility, but criteria were tightened in subsequent iterations.

Mismatch between the Takaful program’s indicators and the reality of people’s lives

In addition, further exclusions have arisen because of the operation of the algorithmic system used in the program. When a person applies to Takaful, the system first determines eligibility by checking whether an applicant is a citizen and whether they are under the poverty line. It subsequently employs an algorithm, relying on 57 socioeconomic indicators, to rank people from least poor to poorest. The National Aid Fund uses existing databases as well as applicants’ answers to a questionnaire that they must fill out online. Indicators include household size, geographic location, utilities consumption, ownership of businesses, and car ownership. It is unclear how these indicators are weighted, but the National Aid Fund has admitted that some indicators will lead to the automatic exclusion of applicants from the Takaful program. Applicants who own a car that is less than five years old or a business valued at over 3,000 Jordanian Dinars, for instance, are automatically excluded.
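To make these mechanics concrete, here is a minimal sketch of how such a two-stage filter-and-rank pipeline might look. This is not the National Aid Fund’s actual code: only the two automatic-exclusion thresholds (a car under five years old; a business valued above 3,000 Jordanian Dinars) come from HRW’s reporting, while every field name, indicator, and weight below is hypothetical.

```python
# Hypothetical sketch of a Takaful-style eligibility and ranking pipeline.
# Only the two automatic-exclusion thresholds are drawn from HRW's report;
# all other names, indicators, and weights are invented for illustration.

def is_eligible(applicant: dict) -> bool:
    """Stage 1: hard filters (citizenship, poverty line, automatic exclusions)."""
    if not applicant["is_jordanian_citizen"]:
        return False
    if not applicant["below_poverty_line"]:
        return False
    # Automatic exclusions reported by HRW:
    if applicant["car_age_years"] is not None and applicant["car_age_years"] < 5:
        return False
    if applicant["business_value_jod"] > 3000:
        return False
    return True

def poverty_score(applicant: dict, weights: dict) -> float:
    """Stage 2: weighted sum over socioeconomic indicators (57 in the real
    system; the actual weights are secret, so these are placeholders)."""
    return sum(w * applicant["indicators"][k] for k, w in weights.items())

def rank_applicants(applicants: list, weights: dict) -> list:
    """Rank eligible applicants from poorest (lowest score) upward."""
    eligible = [a for a in applicants if is_eligible(a)]
    return sorted(eligible, key=lambda a: poverty_score(a, weights))

applicant = {
    "is_jordanian_citizen": True,
    "below_poverty_line": True,
    "car_age_years": 3,      # e.g., a three-year-old car that no longer runs
    "business_value_jod": 0,
    "indicators": {"household_size": 0.7, "utilities_consumption": 0.4},
}
print(is_eligible(applicant))  # False: one hard rule excludes them outright
```

Even in this toy form, the design choice is visible: a single hard rule can exclude an applicant outright, no matter what the remaining indicators say about their actual circumstances.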

In its recent report, HRW highlights a number of shortcomings of the algorithmic system deployed in the Takaful program, critiquing its inability to reflect the complex and dynamic nature of poverty. The system, HRW argues, uses an outdated poverty measure, and embeds many problematic assumptions. For example, the algorithm gives some weight to whether an applicant owns a car. However, there are cars in people’s names that they do not actually own; some people own cars that broke down long ago, but they cannot afford to repair them. Additionally, the algorithm assumes that higher electricity and water consumption indicates that a family is less vulnerable. However, poorer households in Jordan in many cases actually have higher consumption—a 2020 survey showed that almost 75% of low- to middle-income households lived in apartments with poor thermal insulation.

Furthermore, this algorithmic system is designed on the basis of a single assessment of socioeconomic circumstances at a fixed point in time. But poverty is not static; people’s lives change and their level of need fluctuates. Another challenge is the unpredictability of aid: in this conversation with CHRGJ’s Digital Welfare State and Human Rights team, Hiba shared the story of a new mother who had been suddenly and unexpectedly cut off from the Takaful program, precisely when she was most in need.

At a broader level, introducing an algorithmic system such as this can also exacerbate information asymmetries. HRW’s report highlights issues concerning opacity in algorithmic decision-making—both for government officials themselves and those subject to the algorithm’s decisions—such that it is more difficult to understand how decisions are being made within this system.

Recommendations to improve the Takaful program

Given these wide-ranging implications, HRW’s primary recommendation is to move away from poverty targeting algorithms and toward universal social protection, which could cost under 1% of the country’s GDP. This could be funded through existing resources, tackling tax avoidance, implementing progressive taxes, and leveraging the influence of the World Bank to guide governments towards sustainable solutions. 

When asked during this conversation whether the algorithm used in the Takaful program could be improved, Hiba noted that a technically perfect algorithm executing a flawed policy will still lead to negative outcomes. She argued that it is the policy itself – the attempt to rank people from least poor to poorest – that is prone to exclusion errors, and warned that technology may seem shiny, promising to make targeting accurate, effective, and efficient, but can also be a distraction from the policy issues at hand.

Thus, instead of flattening economic realities and leading to the exclusion of people who are, in reality, in immense need, Hiba recommended that support be provided inclusively and universally—to everyone during vulnerable stages of life, regardless of their income or wealth. Rather than focusing on technology that enables ever-more precise targeting, Jordan should focus on embracing solutions that allow for more universal social protection.

Rebecca Kahn, JD program, NYU School of Law, and Human Rights Scholar at the Digital Welfare State & Human Rights project. Her research interests relate to responsible AI governance, digital rights, and consumer protection. She previously worked in the U.S. House and Senate as a legislative staffer.

Regulating Artificial Intelligence in Brazil

TECHNOLOGY & HUMAN RIGHTS

Regulating Artificial Intelligence in Brazil

On May 25, 2023, the Center for Human Rights and Global Justice’s Technology & Human Rights team hosted an event entitled Regulating Artificial Intelligence: The Brazilian Approach, the fourteenth episode of the “Transformer States” interview series on digital government and human rights. This in-depth conversation with Professor Mariana Valente, a member of the Commission of Jurists created by the Brazilian Senate to work on a draft bill to regulate artificial intelligence, raised timely questions about the specificities of ongoing regulatory efforts in Brazil. These developments may have significant global implications, potentially inspiring more creative, rights-based, and socio-economically grounded regulation of emerging technologies in the Global South.

In recent years, numerous initiatives to regulate and govern Artificial Intelligence (AI) systems have arisen in Brazil. First, there was the Brazilian Strategy for Artificial Intelligence (EBIA), launched in 2021. Second, legislation known as Bill 21/20, which sought to specifically regulate AI, was approved by the House of Representatives in 2021. And in 2022, a Commission of Jurists was appointed by the Senate to draft a substitute bill on AI. This latter initiative holds significant promise. While the EBIA and Bill 21/20 were heavily criticized for giving little weight to public input, despite available participatory and multi-stakeholder mechanisms, the Commission of Jurists took specific precautions to be more open to public input. Its proposed draft legislation, which is grounded in Brazil’s socio-economic realities and legal tradition, may inspire further legal regulation of AI, especially in the Global South, given Brazil’s position in other discussions related to internet and technology governance.

Bill 21/20 was the first bill directed specifically at AI. But it was a very minimal bill; it effectively established that regulating AI should be the exception. It was also based on a decentralized model, meaning that each economic sector would regulate its own applications of AI: for example, the federal agency dedicated to regulating the healthcare sector would regulate AI applications in that sector. There were no specific obligations or sanctions for the companies developing or employing AI, only some guidelines for the government on how it should promote the development of AI. Overall, the bill was very friendly to the private sector’s preference for the most minimal regulation possible. The bill was quickly approved in the House of Representatives, without public hearings or much public attention.

It is important to note that this bill does not exist in isolation. There is other legislation that applies to AI in the country, such as consumer law and data protection law, as well as the Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet). These existing laws have been leveraged by civil society to protect people from AI harms. For example, Instituto Brasileiro de Defesa do Consumidor (IDEC), a consumer rights organization, successfully brought a public civil action using consumer protection legislation against Via Quatro, the private company responsible for subway line 4-Yellow in São Paulo. The company was fined R$500,000 for collecting and processing individuals’ biometric data for advertising purposes without informed consent.

But, given that Bill 21/20 sought to specifically address the regulation of AI, academics and NGOs raised concerns that it would reduce the legal protections afforded in Brazil: it “gravely undermines the exercise of fundamental rights such as data protection, freedom of expression and equality” and “fails to address the risks of AI, while at the same time facilitating a laissez-faire approach for the public and private sectors to develop, commercialize and operate systems that are far from trustworthy and human-centric (…) Brazil risks becoming a playground for irresponsible agents to attempt against rights and freedoms without fearing for liability for their acts.”

As a result, the Senate decided that instead of voting on Bill 21/20, they would create a Commission of Jurists to propose a new bill.

The Commission of Jurists and the new bill

The Commission of Jurists was established in April 2022 and delivered its final report in December 2022. Even though the establishment of the Commission was considered a positive development, it was not exempt from criticism from civil society, both for the lack of racial and regional diversity in the Commission’s membership and for the absence of other areas of knowledge from the debate. This criticism reflects the socio-economic realities of Brazil: it is one of the most unequal countries in the world, and those inequalities are intersectional, cutting across race, gender, income, and territorial origin. AI applications will therefore have different effects on different segments of the population. This is already clear from the use of facial recognition in public security: more than 90% of the individuals arrested through this technology were Black. Another example is the use of an algorithm to evaluate requests for emergency aid amid the pandemic, in which many vulnerable people had their benefits denied based on incorrect data.

During its mandate, the Commission of Jurists held public hearings, invited specialists from different areas of knowledge, and developed a public consultation mechanism allowing for written proposals. Following this process, the new proposed bill had several elements that were very different from Bill 21/20. First, the new bill borrows from the EU’s AI Act by adopting a risk-based approach: obligations are distinguished according to the risks they pose. However, the new bill, following the Brazilian tradition of structuring regulation from the perspective of individual and collective rights, merges the European risk-based approach with a rights-based approach. The bill confers individual and collective rights that apply in relation to all AI systems, independent of the level of risk they pose.

Secondly, the new bill includes some additional obligations for the public sector, considering its differential impact on people’s rights. For example, there is a ban on the processing of racial information, and provisions on public participation in decisions regarding the adoption of these systems. Importantly, though the Commission discussed the inclusion of a complete ban on facial recognition technologies in public spaces for public security, this proposal was not included: instead, the bill includes a moratorium, establishing that a law must be approved to regulate this use.

What the future holds for AI regulation in Brazil

After the Commission submitted its report, in May 2023 the president of the Senate presented a new bill for AI regulation replicating the Commission’s proposal. On August 16, 2023, the Senate established a temporary internal commission to discuss the different proposals for AI regulation that have been presented in the Senate to date.

It is difficult to predict what will happen following the end of the internal commission’s work, as political decisions will shape the next developments. What is important to keep in mind, however, is how far the discussion has progressed: from an initial bill that was minimal in scope and supported the idea of minimal regulation, to one that is much more protective of individual and collective rights and considerate of Brazil’s particular socio-economic realities. Brazil has historically played an important progressive role in global discussions on the regulation of emerging technologies, for example with the discussions of its Marco Civil da Internet. As Mariana Valente put it, “Brazil has had in the past a very strong tradition of creative legislation for regulating technologies.” The Commission of Jurists’ proposal repositions Brazil in such a role.

September 28, 2023. Marina Garrote, LLM program, NYU School of Law, whose research interests lie at the intersection of digital rights and social justice. Marina holds bachelor’s and master’s degrees from the Universidade de São Paulo and previously worked at Data Privacy Brazil, a civil society association dedicated to public interest research on digital rights.

Risk Scoring Children in Chile

TECHNOLOGY & HUMAN RIGHTS

Risk Scoring Children in Chile

On March 30, 2022, Christiaan van Veen and Victoria Adelmant hosted the eleventh event in our “Transformer States” interview series on digital government and human rights. In conversation with human rights expert and activist Paz Peña, we examined the implications of Chile’s “Childhood Alert System,” an “early warning” mechanism which assigns risk scores to children based on their calculated probability of facing various harms. This blog picks up on the themes of the conversation.

The deaths of over a thousand children in privatized care homes in Chile between 2005 and 2016 have, in recent years, pushed the issue of child protection high onto the political agenda. The country’s limited legal and institutional protections for children have been consistently critiqued in the past decade, and calls for more state intervention, to reverse the legacies of Pinochet-era commitments to “hands-off” government, have been intensifying. On his first day in office in 2018, former president Sebastián Piñera promised to significantly strengthen and institutionalize state protections for children. He launched a National Agreement for Childhood and established local “childhood offices” and an Undersecretariat for Children; a law guaranteeing children’s rights was passed; and the Sistema Alerta Niñez (“Childhood Alert System”) was developed. This system uses predictive modelling software to calculate children’s likelihood of facing harm or abuse, dropping out of school, and other such risks.

Predictive modelling calculates the probabilities of certain outcomes by identifying patterns within datasets. It operates through a logic of correlation: where persons with certain characteristics experienced harm in the past, those with similar characteristics are likely to experience harm in the future. Developed jointly by researchers at Auckland University of Technology’s Centre for Social Data Analytics and the Universidad Adolfo Ibáñez’s GobLab, the Childhood Alert predictive modelling software analyzes existing government databases to identify combinations of individual and social factors which are correlated with harmful outcomes, and flags children accordingly. The aim is to “prioritize minors [and] achieve greater efficiency in the intervention.”
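As a rough illustration of this correlation logic (a sketch only: the Childhood Alert System’s actual features, training data, and model are not public), a predictive risk tool of this general kind might be built as follows. Every feature, value, and threshold below is invented, and scikit-learn is assumed purely for convenience.

```python
# Illustrative sketch of correlation-based risk scoring, assuming scikit-learn.
# This is NOT the Childhood Alert System's actual model: the features and data
# are invented to show the pattern, which is past cases in, risk scores out.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: rows are children, columns are administrative features
# (e.g., number of benefits received, school attendance rate, prior contacts
# with protective services).
X_past = np.array([
    [1, 0.60, 2],
    [0, 0.90, 0],
    [2, 0.40, 3],
    [0, 0.95, 0],
])
y_past = np.array([1, 0, 1, 0])  # 1 = a harm outcome was recorded

model = LogisticRegression().fit(X_past, y_past)

# A new child is scored by resemblance to past patterns: the more they look
# like previously harmed children, the higher the probability, and children
# above some cut-off are flagged for intervention.
X_new = np.array([[1, 0.55, 1]])
risk = model.predict_proba(X_new)[0, 1]
print(f"predicted risk: {risk:.2f}")
```

The sketch also shows the structural limit discussed below: the model can only find risk within patterns present in its training records, so any skew in the underlying databases flows directly into the scores.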

A skewed picture of risk

But the Childhood Alert System is fundamentally skewed. The tool analyzes databases about the beneficiaries of public programs and services, such as Chile’s Social Information Registry. It thereby only examines a subset of the population of children—those whose families are accessing public programs. Families in higher socioeconomic brackets—who do not receive social assistance and thus do not appear in these databases—are already excluded from the picture, despite the fact that children from these groups can also face abuse. Indeed, the Childhood Alert system’s developers themselves acknowledged in their final report that the tool has “reduced capability for identifying children at high risk from a higher socioeconomic level” due to the nature of the databases analyzed. The tool, from its inception and by its very design, is limited in scope and completely ignores wealthier groups.

The analysis then proceeds on a problematic basis, whereby socioeconomic disadvantage is equated with risk. Selected variables include: social programs of which the child’s family are beneficiaries; families’ educational backgrounds; socioeconomic measures from Chile’s Social Registry of Households; and a whole host of geographical variables, including the number of burglaries, percentage of single-parent households, and unemployment rate in the child’s neighborhood. Each of these variables is a direct measure of poverty. Through this design, children in poorer areas can be expected to receive higher risk scores. This is likely to perpetuate over-intervention in certain neighborhoods.

Economic and social inequalities, including significant regional disparities in living conditions, persist in Chile. As elsewhere, poverty and marginalization do not fall evenly. Women, migrants, those living in rural areas, and indigenous groups are more likely to live in poverty—indigenous groups have Chile’s highest poverty rates. As the Alert System is skewed towards low-income populations, it will likely disproportionately flag children from indigenous groups, raising issues of racial and ethnic bias. Furthermore, the datasets used will also reflect inequalities and biases. Public datasets about families’ previous interactions with child protective services, for example, are populated through social workers’ inputs. Biases against indigenous families, young mothers, or migrants—reflected through disproportionate investigations or stereotyped judgments about parenting—will be fed into the database.

The developers of this predictive tool wrote in their evaluation that, while concerns about racial disparities “have been expressed in the context of countries like the United States, where there are greater challenges related to racism. In the local Chilean context, we frankly don’t see similar concerns about race.” As Paz Peña points out, this dismissal is “difficult to understand” in light of the evidence of racism and racialized poverty in Chile.

Predictive systems such as these are premised on linking individuals’ characteristics and circumstances with the incidence of harm. As Abeba Birhane puts it, such approaches by their nature “force determinability [and] create a world that resembles the past” through reinforcing stereotypes, because they attach risk factors to certain individual traits.

The global context

These issues of bias, disproportionality, and determinacy in predictive child welfare tools have already been raised in other countries. Public outcry, ethical concerns, and evidence that these tools simply do not work as intended, have led many such systems to be scrapped. In the United Kingdom, a local authority’s Early Help Profiling System which “translates data on families into risk profiles [of] the 20 families in most urgent need” was abandoned after it had “not realized the expected benefits.” The U.S. state of Illinois’ child welfare agency strongly criticized and scrapped its predictive tool which had flagged hundreds of children as 100% likely to be injured while failing to flag any of the children who did tragically die from mistreatment. And in New Zealand, the Social Development Minister prevented the deployment of a predictive tool on ethical grounds, purportedly noting: “These are children, not lab rats.”

But while predictive tools are being scrapped on grounds of ethics and ineffectiveness in certain contexts, these same systems are spreading across the Global South. Indeed, the Chilean case demonstrates this trend especially clearly. The team of researchers who developed Chile’s Childhood Alert System is the very same team whose modelling was halted by the New Zealand government due to ethical questions, and whose predictive tool for the U.S. state of Pennsylvania was the subject of high-profile and powerful critique by many actors including Virginia Eubanks in her 2018 book Automating Inequality.

As Paz Peña noted, it should come as no surprise that systems which are increasingly deemed too harmful in some Global North contexts are proliferating in the Global South. These spaces are often seen as an “easier target,” with lower chances of backlash than places like New Zealand or the United States. In Chile, weaker institutions resulting from the legacies of military dictatorship and the staunch commitment to a “subsidiary” (streamlined, outsourced, neoliberal) state may be deemed to provide more fertile ground for such systems. Indeed, the tool’s developers wrote in a report that achieving acceptance of the system in Chile would be “simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits.”

This highlights the indispensability of international comparison, cooperation, and solidarity. Those of us working in this space must pay close attention to developments around the world as these systems continue to be hawked at breakneck speed. Identifying parallels, sharing information, and collaborating across constituencies is vital to support the organizations and activists who are working to raise awareness of these systems.

April 20, 2022. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

TECHNOLOGY & HUMAN RIGHTS

Singapore’s “smart city” initiative: one step further in the surveillance, regulation and disciplining of those at the margins

Singapore’s smart city initiative creates an interconnected web of digital infrastructures which promises citizens safety, convenience, and efficiency. But the smart city is experienced differently by individuals at the margins, particularly migrant workers, who are experimented on at the forefront of technological innovation.

On February 23, 2022, we hosted the tenth event of the Transformer States Series on Digital Government and Human Rights, titled “Surveillance of the Poor in Singapore: Poverty in ‘Smart City’.” Christiaan van Veen and Victoria Adelmant spoke with Dr. Monamie Bhadra Haines about the deployment of surveillance technologies as part of Singapore’s “smart city” initiative. This blog outlines the key themes discussed during the conversation.

The smart city in the context of institutionalized racial hierarchy

Singapore has consistently been hailed as the world’s leading smart city. For a decade, the city-state has been covering its territory with ubiquitous sensors and integrated digital infrastructures with the aim, in the government’s words, of collecting information on “everyone, everything, everywhere, all the time.” But these smart city technologies are layered on top of pre-existing structures and inequalities, which mediate how these innovations are experienced.

One such structure is an explicit racial hierarchy. As an island nation with a long history of multi-ethnicity and migration, Singapore has witnessed significant migration from Southern China, the Malay Peninsula, India, and Bangladesh. Borrowing from the British model of race-based regulation, this multi-ethnicity is governed by the post-colonial state through the explicit adoption of four racial categories – Chinese, Malay, Indian and Others (or “CMIO” for short) – which are institutionalized within immigration policies, housing, education and employment. As a result, while migrant workers from South and Southeast Asia are the backbone of Singapore’s blue-collar labor market, they occupy the bottom tier of the racial hierarchy; are subject to stark precarity; and have become the “objects” of extensive surveillance by the state.

The promise of the smart city

Singapore’s smart city initiative is “sold” to the public through narratives of economic opportunities and job creation in the knowledge economy, improved environmental sustainability, and increased efficiency and convenience. Through collecting and inter-connecting all kinds of “mundane” data – such as electricity patterns, data from increasingly intrusive IoT products, and geo-location and mobility data – into centralized databases, smart cities are said to provide more safety and convenience. Singapore’s hyper-modern, technologically advanced society promises efficient and seamless public services, and the constant technology-driven surveillance and the loss of a few civil liberties are viewed by many as a small price to pay for such efficiency.

Further, the collection of large quantities of data from individuals promises to better connect citizens with the government, while governments’ decisions, in turn, will be based upon purportedly objective data from sensors and devices, thereby freeing decision-making from human fallibility and rendering it more neutral.

The realities: disparate impacts of smart city surveillance on migrant workers

However, smart cities are not merely economic or technological endeavors, but techno-social assemblages that create and impact different publics differently. As Monamie noted, specific imaginations and imagery of Singapore as a hyper-modern, interconnected, and efficient smart city can obscure certain types of racialized physical labor, such as the domestic labor of female Southeast-Asian migrant workers.

Migrant workers are uniquely impacted by increasing digitalization and datafication in Singapore. For years, these workers have been housed in dormitories with occupancy often exceeding capacity, located in the literal “margins” or outskirts of the city: migrant workers have long been physically kept separate from the rest of Singapore’s population within these dormitory complexes. They are stereotyped as violent or frequently inebriated, and the dormitories have for years been surveilled through digital technologies including security cameras, biometric sensors, and data from social media and transport services.

The pandemic highlighted and intensified the disproportionate surveillance of migrant workers within Singapore. Layered on top of the existing technological surveillance of migrants’ dormitories, a surveillance assemblage for COVID-19 contact tracing was created. Measures in the name of public health were deployed to carefully surveil these workers’ bodies and movements. Migrant workers became “objects” of technological experimentation as they were required to use a multitude of new mobile-based apps that integrated immigration data and work permit data with health data (such as body temperature and oximeter readings) and Covid-19 contact tracing data. The permissions required by these apps were also quite broad – including access to Bluetooth services and location data. All the data was stored in a centralized database.

Even though surveillant contact-tracing technologies were later rolled out across Singapore and normalized around the world, the important point here is that these systems were deployed exclusively on migrant workers first. Some apps, Monamie pointed out, were indeed only required by migrant workers, while citizens did not have to use them. This use of interconnected networks of surveillance technologies thus highlights the selective experimentation that underpins smart city initiatives. While smart city initiatives are, by their nature, premised on large-scale surveillance, we often see that policies, apps, and technologies are tried on individuals and communities with the least power first, before spilling out to the rest of the population. In Singapore, the objects of such experimentation are migrant workers who occupy “exceptional spaces” – of being needed to ensure the existence of certain labor markets, but also of needing to be disciplined and regulated. These technological initiatives, in subjecting specific groups at the margins to more surveillance than the rest of the population and requiring them to use more tech-based tools than others, serve to exacerbate the “othering” and isolation of migrant workers.

Forging eddies of resistance

While Monamie noted that “activism” is “still considered a dirty word in Singapore,” there have been some localized efforts to challenge some of the technologies within the smart city, in part due to the intensification of surveillance spurred by the pandemic. These efforts, and a rapidly-growing recognition of the disproportionate targeting and disparate impacts of such technologies, indicate that the smart city is also a site of contestation with growing resistance to its tech-based tools.

March 18, 2022. Ramya Chandrasekhar, LLM program at NYU School of Law, whose research interests relate to data governance, critical infrastructure studies, and critical theory. She previously worked with technology policy organizations and at a leading law firm in India.

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

TECHNOLOGY AND HUMAN RIGHTS

Chosen by a Secret Algorithm: Colombia’s top-down pandemic payments

The Colombian government was applauded for delivering payments to 2.9 million people in just two weeks during the pandemic, thanks to a big-data-driven approach. But this new approach represents a fundamental change in social policy, shifting away from political participation and from a notion of rights.

On Wednesday, November 24, 2021, the Digital Welfare State and Human Rights Project hosted the ninth episode in the Transformer States conversation series on Digital Government and Human Rights, in an event entitled “Chosen by a secret algorithm: A closer look at Colombia’s Pandemic Payments.” Christiaan van Veen and Victoria Adelmant spoke with Joan López, researcher at the Global Data Justice Initiative and at the Colombian NGO Fundación Karisma, about Colombia’s pandemic payments and their reliance on data-driven technologies and prediction. This blog highlights some core issues related to taking a top-down, data-driven approach to social protection.

From expert interviews to a top-down approach

The System of Possible Beneficiaries of Social Programs (SISBEN in Spanish) was created to assist in the targeting of social programs in Colombia. This system classifies the Colombian population along a spectrum of vulnerability through the collection of information about households, including health data, family composition, access to social programs, financial information, and earnings. This data is collected through nationwide interviews conducted by experts. Beneficiaries are then scored, through a simple algorithm, on a scale from 0 to 100, with 0 as the least prosperous and 100 as the most prosperous. SISBEN therefore aims to identify and rank “the poorest of the poor.” This centralized classification system is used by 19 different social programs to determine eligibility: each social program chooses its own cut-off score along this scale as a threshold for eligibility.
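The cut-off mechanism can be sketched as follows. This is an illustration of the general pattern only, not SISBEN’s actual code: the program names and thresholds are invented, and the real system serves 19 programs rather than three.

```python
# Hypothetical illustration of SISBEN-style threshold eligibility.
# Each program sets its own cut-off on the 0-to-100 score; lower scores
# mean greater vulnerability. All names and numbers here are invented.

PROGRAM_CUTOFFS = {
    "housing_subsidy": 40.0,
    "health_coverage": 55.0,
    "cash_transfer": 30.0,
}

def eligible_programs(sisben_score: float) -> list:
    """Return the programs a household qualifies for: eligibility is simply
    falling below each program's chosen cut-off."""
    return [name for name, cutoff in PROGRAM_CUTOFFS.items()
            if sisben_score < cutoff]

print(eligible_programs(35.0))  # ['housing_subsidy', 'health_coverage']
```

Because a single centralized score gatekeeps many unrelated programs at once, any change to the scoring algorithm, like the 2016 recalculation described next, ripples across the entire social protection system.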

But in 2016, the National Development Office – the Colombian entity in charge of SISBEN – changed the calculation used to determine the profile of the poorest. It introduced a new and secret algorithm which would create a profile based on predicted income generation capacity. Experts collecting data for SISBEN through interviews had previously looked at the realities of people’s conditions: if a person had access to basic services such as water, sanitation, education, health, and/or employment, the person was not deemed poor. But the new system sought instead to create detailed profiles of what a person could earn, rather than what a person had. This approach sought, through modelling, to predict households’ situations rather than to document beneficiaries’ realities.

A new approach to social policy

During the pandemic, the government launched a new system of payments called the Ingreso Solidario (meaning “solidarity income”). This system would provide monthly payments to people who were not covered by any other existing social program that relied on SISBEN; the ultimate goal of Ingreso Solidario was to send money to 2.9 million people who needed assistance due to the crisis caused by COVID-19. The Ingreso Solidario was, in some ways, very effective. People did not have to apply for this program: if they were selected as eligible, they would automatically receive a payment. Many people received the money immediately into their bank accounts, and payments were made very rapidly, within just a few weeks. Moreover, the Ingreso Solidario was an unconditional transfer and did not condition the receipt of money on the fulfillment of certain requirements.

But the Ingreso Solidario was based on a new approach to social policy, driven by technology and data sharing. The government entered agreements with private companies, including Experian and TransUnion, to access their databases. Agreements were also made between different government agencies and departments. Through data-sharing arrangements across 34 public and private databases, the government cross-checked the information provided in the interviews with information in dozens of databases to find inconsistencies and exclude anyone deemed not to require social assistance. In relying on cross-checking databases to “find” people who are in need, this approach depends heavily on enormous data collection, and it increases the government’s reliance on the private sector.
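The cross-checking logic can be sketched roughly as follows. This is a simplified illustration of the pattern, not the government’s actual system: the databases, field names, and values are hypothetical, and the real process spanned 34 sources.

```python
# Illustrative sketch of exclusion-by-cross-checking across databases.
# All sources, fields, and values are invented; the point is the pattern:
# declared information is compared against external records, and any
# mismatch counts against the applicant.

def find_inconsistencies(declared: dict, external_sources: list) -> list:
    """Collect every field where an external record disagrees with what
    the household declared."""
    flags = []
    for source in external_sources:
        for field, declared_value in declared.items():
            if field in source and source[field] != declared_value:
                flags.append(field)
    return flags

household = {"monthly_income": 300_000, "owns_vehicle": False}
credit_bureau = {"monthly_income": 1_200_000}   # e.g., a private database
vehicle_registry = {"owns_vehicle": True}       # e.g., a public registry

flags = find_inconsistencies(household, [credit_bureau, vehicle_registry])
if flags:
    print("excluded; inconsistent fields:", flags)
```

Notice the design choice embedded even in this toy version: any disagreement between databases is resolved against the applicant, with no step at which the person is asked to explain the discrepancy.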

The implications of this new approach

This new approach to social policy, as implemented through the Ingreso Solidario, has fundamental implications. First, this system is difficult to challenge. The algorithm used to profile vulnerability, to predict income generating capacity, and to assign a score to people living in poverty, is confidential. The Government consistently argued that disclosing information about the algorithm would lead to a macroeconomic crisis because if people knew how the system worked, they would try to cheat the system. Additionally, SISBEN has been normalized. Though there are many other ways that eligibility for social programs could be assessed, the public accepts it as natural and inevitable that the government has taken this arbitrary approach reliant on numerical scoring and predictions. Due to this normalization, combined with the lack of transparency, this new approach to determining eligibility for social programs has therefore not been contested.

Second, in adopting an approach which relies on cross-checking and analyzing data, the Ingreso Solidario is designed to avoid any contestation in the design and implementation of the algorithm. This is a thoroughly technocratic endeavor. The idea is to use databases and avoid going to, and working with, the communities. The government was, in Joan’s words, “trying to control everything from a distance” to “avoid having political discussions about who should be eligible.” There were no discussions and negotiations between the citizens and the Government to jointly address the challenges of using this technology to target poor people. Decisions about who the extra 2.9 million beneficiaries should be were taken unilaterally from above. As Joan argued, this was intentional: “The mindset of avoiding political discussion is clearly part of the idea of Ingreso Solidario.”

Third, because people were unaware that they were going to receive money, those who received a payment felt like they had won the lottery. Thus, as Joan argued, people saw this money not “as an entitlement, but just as a gift that this person was lucky to get.” This therefore represents a shift away from a conception of assistance as something we are entitled to by right. But in re-centering the notion of rights, we are reminded of the importance of taking human rights seriously when analyzing and redesigning these kinds of systems. Joan noted that we need to move away from an approach of deciding what poverty is from above, and instead move towards working with communities. We must use fundamental rights as guidance in designing a system that will provide support to those in poverty in an open, transparent, and participatory manner which does not seek to bypass political discussion.

María Beatriz Jiménez, LLM program, NYU School of Law, with a research focus on digital rights. She previously worked for the Colombian government in the Ministry of Information and Communication Technologies and the Ministry of Trade.

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

TECHNOLOGY & HUMAN RIGHTS

Pilots, Pushbacks, and the Panopticon: Digital Technologies at the EU’s Borders

The European Union is increasingly introducing digital technologies into its border control operations. But conversations about these emerging “digital borders” are often silent about the significant harms experienced by those subjected to these technologies, their experimental nature, and their discriminatory impacts.

On October 27, 2021, we hosted the eighth episode in our Transformer States Series on Digital Government and Human Rights, in an event entitled “Artificial Borders? The Digital and Extraterritorial Protection of ‘Fortress Europe.’” Christiaan van Veen and Ngozi Nwanta interviewed Petra Molnar about the European Union’s introduction of digital technologies into its border control and migration management operations. The video and transcript of the event, along with additional reading materials, can be found below. This blog post outlines key themes from the conversation.

Digital technologies are increasingly central to the EU’s efforts to curb migration and “secure” its borders. Against a background of growing violent pushbacks, surveillance technologies such as unpiloted drones and aerostats equipped with thermo-vision sensors are being deployed at the borders. The EU-funded “ROBORDER” project aims to develop “a fully-functional autonomous border surveillance system with unmanned mobile robots.” Refugee camps on the EU’s borders, meanwhile, are being turned into a “surveillance panopticon,” as the adults and children living within them are constantly monitored by cameras, drones, and motion-detection sensors. Technologies also mediate immigration and refugee determination processes, from automated decision-making to social media screening to a pilot AI-driven “lie detector.”

In this Transformer States conversation, Petra argued that technologies are enabling a “sharpening” of existing border control policies. As discussed in her excellent report “Technological Testing Grounds,” completed with European Digital Rights and the Refugee Law Lab, new technologies are being used not only at the EU’s borders, but also to surveil and control communities on the move before they reach European territory. The EU has long practiced “border externalization,” shifting its border control operations ever further from its physical territory, partly by contracting non-Member States to prevent migration. New technologies are increasingly instrumental to these aims. The EU is funding African states’ construction of biometric ID systems for migration control purposes; it is providing cameras and surveillance software to third countries to prevent travel towards Europe; and it supports efforts to predict migration flows through big data-driven modeling. Further, borders are increasingly “located” on our smartphones and in enormous databases, as data-based risk profiles and pre-screening become a central part of the EU’s border control agenda.

Ignoring human experience and impacts

But all too often, discussions about these technologies are sanitized and depoliticized. People on the move are viewed as a security problem, and policymakers, consultancies, and the private sector focus on the “opportunities” presented by technologies in securitizing borders and “preventing migration.” The human stories of those who are subjected to these new technological tools and the discriminatory and deadly realities of “digital borders” are ignored within these technocratic discussions. Some EU policy documents describe the “European Border Surveillance System” without mentioning people at all.

In this interview, Petra emphasized these silences. She noted that “human experience has been left to the wayside.” First-person accounts of the harmful impacts of these technologies are not deemed “expert knowledge” by policymakers in Brussels, but they are vital to exposing the human realities and countering sanitized policy discussions. Those who are subjected to constant surveillance and tracking are dehumanized: Petra reports that some are left feeling “like a piece of meat without a life, just fingerprints and eye scans.” People are being forced to take ever-deadlier routes to avoid high-tech surveillance infrastructures, and technology-enabled interdictions and pushbacks are leading to deaths. Further, differential treatment is baked into these technological systems, which enable and exacerbate discriminatory inferences along racialized lines. As UN Special Rapporteur on Racism E. Tendayi Achiume writes, “digital border technologies are reinforcing parallel border regimes that segregate the mobility and migration of different groups” and are being deployed in racially discriminatory ways. Indeed, critics have argued that some algorithmic “risk assessments” of migrants amount to racial profiling.

Policy discussions about “digital borders” also do not acknowledge that, while the EU spends vast sums on technologies, the refugee camps at its borders have neither running water nor sufficient food. Enormous investment in digital migration management infrastructures is being “prioritized over human rights.” As one man commented, “now we have flying computers instead of more asylum.”

Technological experimentation and pilot programs in “gray zones”

Crucially, these developments are occurring within largely unregulated spaces. A central theme of this Transformer States conversation, echoing the title of Petra’s report “Technological Testing Grounds,” was the notion of experimentation within the “gray zones” of border control and migration management. Not only are non-citizens and stateless persons accorded fewer rights and protections than EU citizens, but immigration and asylum decision-making is also a highly discretionary area of law with fewer legal safeguards.

This low-rights, high-discretion environment makes these spaces ripe for testing new technologies. This is especially true in “external” spaces far from European territory, which are subject to even less regulation. Projects that would not be allowed elsewhere are being tested on populations who are literally at the margins, as refugee camps become testing zones. The abovementioned “lie detector,” in which an “avatar” border guard flagged “biomarkers of deceit,” was “merely” a pilot program; it has since been fiercely criticized, including by the European Parliament, and challenged in court.

Experimentation is deliberately occurring in these zones because refugees and migrants have limited opportunities to challenge it. The UN Special Rapporteur on Racism has noted that digital technologies in this area are therefore “uniquely experimental.” This has parallels with our own work, in which we consistently see governments and international organizations piloting new technologies on marginalized and low-income communities. In a previous Transformer States conversation, we discussed Australia’s Cashless Debit Card system, in which technologies were deployed on Aboriginal people through a pilot program. In the UK, radical reform of the welfare system through digitalization was likewise piloted on low-income groups, with “catastrophic” effects.

Where these developments occur within largely unregulated areas, human rights norms and institutions may prove useful. As Petra noted, the human rights framework requires courts and policymakers to focus on the human impacts of these digital border technologies, and it highlights the discriminatory lines along which their effects are felt. The UN Special Rapporteur on Racism has outlined how human rights norms require mandatory impact assessments, moratoria on surveillance technologies, and strong regulation to prevent discrimination and harm.

November 23, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.

Social rights disrupted: how should human rights organizations adapt to digital government?

TECHNOLOGY & HUMAN RIGHTS


As the digitalization of government accelerates worldwide, human rights organizations that have not historically engaged with questions surrounding digital technologies are beginning to grapple with these issues. This challenges them to adapt both their substantive focus and their working methods while remaining true to their values and ideals.

On September 29, 2021, Katelyn Cioffi and I hosted the seventh event in the Transformer States conversation series, which focuses on the human rights implications of the emerging digital state. We interviewed Salima Namusobya, Executive Director of the Initiative for Social and Economic Rights (ISER) in Uganda, about how socioeconomic rights organizations are having to adapt to respond to issues arising from the digitalization of government. In this blog post, I outline parts of the conversation. The event recording, transcript, and additional readings can be found below.

Questions surrounding digital technologies are often seen as issues for “digital rights” organizations, which generally focus on a privileged set of human rights issues such as privacy, data protection, free speech online, or cybersecurity. But, as governments everywhere enthusiastically adopt digital technologies to “transform” their operations and services, these developments are starting to be confronted by actors who have not historically engaged with the consequences of digitalization.

Digital government as a new “core issue”

The Initiative for Social and Economic Rights (ISER) in Uganda is one such human rights organization. Its mission is to improve respect for, recognition of, and accountability for social and economic rights in Uganda, focusing on the rights to health, education, and social protection. Until recently, it had never worked on government digitalization.

But, through its work on social protection schemes, ISER was confronted with the implications of Uganda’s national digital ID program. While monitoring the implementation of the Senior Citizens grant, under which persons over 80 years old receive cash payments, ISER staff frequently encountered people who were clearly over 80 but were not receiving grants. The program had been linked to Uganda’s national identification scheme, which holds individuals’ biographic and biometric information in a centralized electronic database, the National Identity Register, and issues unique IDs to enrolled individuals. Many older persons had struggled to obtain IDs because their fingerprints could not be captured. Many others had obtained national IDs, but the wrong birthdates were entered into the Register; in one instance, a man’s birthdate was wrong by nine years. In each case, the Senior Citizens grant was not paid to eligible beneficiaries because of faulty or missing data in the National Identity Register. Witnessing these significant exclusions led ISER to become actively involved in research and advocacy surrounding the digital ID. It partnered with CHRGJ’s Digital Welfare State team and the Ugandan digital rights NGO Unwanted Witness, and the collective work culminated in a joint report. This has now become a “core issue” for ISER.
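As a purely illustrative sketch (the IDs, field names, and dates are invented), the following Python snippet shows how an eligibility check keyed to registry data, rather than to a person’s actual circumstances, reproduces the exclusions ISER documented:

```python
# Hypothetical sketch of why faulty registry data excludes eligible
# people: the grant check trusts the National Identity Register's
# recorded birthdate, not the person's actual age.

from datetime import date

REGISTER = {
    # Suppose this person was actually born in 1938 (age 83), but the
    # birthdate was mis-entered by nine years, echoing the real case.
    "UG-001": {"birth_date": date(1947, 3, 1)},
    # "UG-002" is absent entirely: their fingerprints could not be
    # captured, so they were never enrolled in the register.
}

def senior_grant_eligible(national_id, today=date(2021, 9, 29)):
    """Pay the Senior Citizens grant only if the *registered* age is 80+."""
    record = REGISTER.get(national_id)
    if record is None:
        return False  # missing from the register -> automatically excluded
    age = (today - record["birth_date"]).days // 365
    return age >= 80

print(senior_grant_eligible("UG-001"))  # False: wrong birthdate on record
print(senior_grant_eligible("UG-002"))  # False: no fingerprints, no record
```

In both hypothetical cases the person is genuinely over 80, but the system never asks that question; it only consults the database.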

Key challenges

While moving into this area of work, ISER has faced some challenges. First, digitalization is spreading quickly across various government services. From the introduction of online education despite significant numbers of people having no access to electricity or the internet, to the delivery of COVID-19 relief via mobile money when only 71% of Ugandans own a mobile phone, exclusions are arising across multiple government initiatives. As technology-driven approaches are being rapidly adopted and new avenues of potential harm are continually materializing, organizations can find it difficult to keep up.

The widespread nature of these developments means that organizations find themselves making the same argument again and again to different parts of government. It is often proclaimed that digitized identity registers will enable integration and interoperability across government, and that introducing technologies into governance “overcomes bureaucratic legacies, verticality and silos.” But ministries in Uganda remain fragmented, and each is separately linking its services to the national ID. ISER must go to different ministries whenever new initiatives are announced to explain, yet again, the significant level of exclusion that reliance on the National Identity Register entails. While fragmentation was a pre-existing problem, the rapid proliferation of initiatives across government is leaving organizations “firefighting.”

Second, organizations face an uphill battle in convincing the government to slow its deployment of technology. Government officials often see enormous potential in technologies for cracking down on security threats and political dissent. Digital surveillance is proliferating in Uganda, and the national ID contributes to this agenda by enabling the government to identify individuals. Where such technologies are presented as combating terrorism, advocating against them is a challenge.

Third, powerful actors are advocating the benefits of government digitalization. International agencies such as the World Bank are providing encouragement and technical assistance and are praising governments’ digitalization efforts. Salima noted that governments take this seriously, and if publications from these organizations are “not balanced enough to bring out the exclusionary impact of the digitalization, it becomes a problem.” Civil society faces an enormous challenge in countering overly-positive reports from influential organizations.

Lessons for human rights organizations

In light of these challenges, several key lessons arise for human rights organizations who are not used to working on technology-related problems but who are witnessing harmful impacts from digital government.

One important lesson is that organizations will need to adopt new and different methods to deal with challenges arising from the rapid spread of digitalization; they should use “every tool available to them.” ISER is an advocacy organization that uses litigation only as a last resort. But when the Ugandan Ministry of Health announced that a national ID would be required to access COVID-19 vaccinations, “time was of the essence,” in Salima’s words. Together with Unwanted Witness, ISER immediately launched litigation seeking an injunction, arguing that the requirement would exclude millions, and the policy was reversed.

ISER’s working methods have changed in other ways. ISER is not a service-provision charity, but, seeing countless people unable to access services because they could not enroll in the ID Register, it felt obliged to provide direct assistance. Staff compiled lists of people without ID, provided legal services, and helped individuals navigate enrollment. Advocacy organizations may find themselves taking on such roles to assist those left behind in the transition to digital government.

Another key lesson is that organizations have much to gain from sharing their experiences with practitioners who are working in different national contexts. ISER has been comparing its experiences and sharing successful advocacy approaches with Kenyan and Indian counterparts and has found “important parallels.”

Last, organizations must engage in active monitoring and documentation to create an evidence base which can credibly show how digital initiatives are, in practice, affecting some of the most vulnerable. As Salima noted, “without evidence, you can make as much noise as you like,” but it will not lead to change. From taking videos and pictures to interviewing and writing comprehensive reports, organizations should work to ensure that affected communities’ experiences are amplified and reflected, demonstrating the true impacts of government digitalization.

October 19, 2021. Victoria Adelmant, Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law.