The Chilean government, like many others, has deployed predictive modelling software to assess children's risk of facing harm or abuse. The Childhood Alert System, an 'early warning system' based on algorithmic predictions, assigns 'risk scores' to children and adolescents. But, like many such systems, it consistently and disproportionately targets low-income families, and critics have described it as little more than an exercise in 'poverty profiling.' This conversation will explore the implications of this system for children's rights in Chile. We will examine how algorithmic risk prediction, far from being a neutral exercise, can stigmatize and criminalize families living in poverty, exacerbate harmful interventions in children's lives, and render other risks invisible. We will ask what it means to introduce predictive analytics into child welfare decisions, and what stories these risk scores are really telling.
- Paz Peña, founder of Not My AI; independent expert on human rights, intersectionality, and digital technologies
- Christiaan van Veen, Technology and Human Rights at the Center for Human Rights and Global Justice
- Victoria Adelmant, Technology and Human Rights at the Center for Human Rights and Global Justice