TECHNOLOGY AND HUMAN RIGHTS

Artificial Intelligence in the Digital State

In recent years, alarm has rightly grown about the potential implications of artificial intelligence, prompting a proliferation of regulatory initiatives in many countries. Concerns have centered especially on the idea that AI poses an “existential threat” to humanity. But this kind of messaging diverts attention toward hypothetical future risks and toward the private sector, which has led the charge in building some of the more advanced forms of AI. In our work, we focus on the impacts of AI deployment in specific, tangible contexts where harms are already being experienced today, and in particular on the ways in which public sector uses of AI can give rise to human rights concerns.

We have contributed to policy initiatives, created a repository of information to share knowledge about these harms, and built a community of practice that brings together activists, practitioners, and scholars to facilitate information-sharing and to advance collaborations that bring human rights concerns into regulatory processes.

In partnership with Amnesty Tech’s Algorithmic Accountability Lab, we hosted an interdisciplinary strategy session on AI and the digital state with civil society actors from around the world, discussing how human rights concerns surrounding governments’ AI deployment can be prioritized in spaces where policymakers seek to regulate AI technologies.

We have since launched a community of practice around Artificial Intelligence and the Digital State. Through events and collaborations, we are bringing together dozens of digital-rights-focused NGOs and academics from around the world to discuss, strategize, and plan collective mobilization to ensure that the impacts of AI in social protection, healthcare, immigration, and housing are taken seriously within efforts to regulate AI. If you would like to join this community of practice, please sign up here.

As part of our “Transformer States” conversation series, we host in-depth interviews with practitioners and scholars exploring governments’ adoption of emerging technologies and the implications for the human rights of marginalized groups. Through these events, we have generated a large repository of information on dozens of case studies at the intersection of digital government and human rights, featuring video recordings, summary blogs, transcripts, and many additional reading materials. Many of these case studies focus on AI and machine learning in the digital state:

We also curate and publish guest blog posts from scholars and affected individuals. Blogs on AI in the digital state include:

Our team contributed to the UN Special Rapporteur on Racism’s report on the racist impacts of emerging technologies, and hosted an event at the report’s launch to discuss the implications of AI and other emerging technologies for minoritized racial groups.

Our team submitted expert commentary to the United States White House’s Blueprint for an AI Bill of Rights initiative, which has laid the groundwork for regulatory efforts to assess, manage, and prevent the risks posed by AI in the United States and abroad. Responding to the White House’s Request for Information on biometrics, our submission discussed the implications of AI-driven biometric technologies for human rights law, democracy, and the rule of law, and compiled comparative evidence of human rights harms that have arisen in other national contexts. The submission also set the AI Bill of Rights effort in its global context in light of the “AI arms race” and US companies’ role in normalizing the use of biometric technologies. When the United States Internal Revenue Service announced that it would introduce facial recognition-based identification technologies, we published an op-ed in Slate pointing to the human rights implications of such AI-driven technologies.

During ongoing efforts in Brazil to regulate the development and use of AI, we hosted a virtual event with a member of the expert commission of jurists who co-drafted the Bill to regulate AI in Brazil. The event aimed to raise awareness of the Bill among an international audience of human rights scholars and practitioners, and to analyze how human rights law is shaping the legislation. After the event, a Brazilian member of our team published an op-ed, and we shared additional reading materials.

Our team also submitted an amicus brief in a landmark case before the Dutch High Court challenging a system that deployed machine learning to detect welfare fraud in low-income neighborhoods in the Netherlands. Our amicus brief drew attention to the discriminatory targeting of this AI system at certain communities only. We contributed human rights analysis to the court, and this became the first case in which an algorithmic welfare system was struck down on human rights grounds.

In our work with the International Organizations Clinic, we research multilateral development banks’ projects relating to new digital technologies, including artificial intelligence. We examine specific “digital” projects, and we ask: do these banks’ existing policies and processes for evaluating risks adequately capture the impacts that could arise from these technologies? Our recent work with the Clinic has entailed analysis of the risks arising from development banks’ investments in AI-driven biometric technologies and satellite imaging technologies.

Given our website migration, resources including case documents, hearing and briefing documents, and reports will soon be made available through our resources library. If you need to access these materials before then, please email us at chrgj@nyu.edu.