The deaths of over a thousand children in privatized care homes in Chile between 2005 and 2016 have, in recent years, pushed child protection high up the political agenda. The country’s limited legal and institutional protections for children have been consistently critiqued over the past decade, and calls for more state intervention, to reverse the legacies of Pinochet-era commitments to “hands-off” government, have been intensifying. On his first day in office in 2018, then-president Sebastián Piñera promised to significantly strengthen and institutionalize state protections for children. He launched a National Agreement for Childhood and established local “childhood offices” and an Undersecretariat for Children; a law guaranteeing children’s rights was passed; and the Sistema Alerta Niñez (“Childhood Alert System”) was developed. This system uses predictive modelling software to calculate children’s likelihood of facing harm or abuse, dropping out of school, and other such risks.
Predictive modelling calculates the probabilities of certain outcomes by identifying patterns within datasets. It operates through a logic of correlation: where people with certain characteristics experienced harm in the past, those with similar characteristics are predicted to experience harm in the future. Developed jointly by researchers at Auckland University of Technology’s Centre for Social Data Analytics and the Universidad Adolfo Ibáñez’s GobLab, the Childhood Alert predictive modelling software analyzes existing government databases to identify combinations of individual and social factors that are correlated with harmful outcomes, and flags children accordingly. The aim is to “prioritize minors [and] achieve greater efficiency in the intervention.”
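To make that logic concrete, here is a minimal sketch of how a correlational risk model of this general kind can be built: a classifier is fitted to historical administrative records and then scores new children by their resemblance to past cases. Everything in it is an illustrative assumption; the features, data, and flagging threshold are invented and do not describe the actual Childhood Alert System.

```python
# Minimal sketch of correlational risk scoring (illustrative only).
# The feature names, data, and threshold are invented assumptions;
# the real Sistema Alerta Niñez model is not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical administrative records: one row per child, labelled by
# whether a harmful outcome was recorded afterwards.
# Columns: [receives_benefits, prior_protective_services_contact,
#           neighborhood_unemployment_rate_%]
X_train = np.array([
    [1, 0, 12.3],
    [0, 1, 8.1],
    [1, 1, 15.0],
    [0, 0, 4.2],
])
y_train = np.array([1, 0, 1, 0])  # 1 = harm recorded in the past

model = LogisticRegression().fit(X_train, y_train)

# The fitted model assigns high scores to any new child whose
# characteristics resemble past cases of harm: the "logic of
# correlation" described above.
new_child = np.array([[1, 0, 14.7]])
risk = model.predict_proba(new_child)[0, 1]
if risk > 0.7:  # arbitrary flagging threshold
    print(f"Flagged for prioritization (risk score {risk:.2f})")
```

Note that a model of this kind has no notion of causation; it can only restate whatever correlations are present in the records it was trained on.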
A skewed picture of risk
But the Childhood Alert System is fundamentally skewed. The tool analyzes databases about the beneficiaries of public programs and services, such as Chile’s Social Information Registry. It therefore examines only a subset of the population of children: those whose families are accessing public programs. Families in higher socioeconomic brackets, who do not receive social assistance and thus do not appear in these databases, are excluded from the picture from the outset, even though children from these groups can also face abuse. Indeed, the Childhood Alert System’s developers themselves acknowledged in their final report that the tool has “reduced capability for identifying children at high risk from a higher socioeconomic level” due to the nature of the databases analyzed. From its inception and by its very design, the tool is limited in scope and effectively ignores wealthier groups.
The analysis then proceeds on a problematic basis, whereby socioeconomic disadvantage is equated with risk. Selected variables include: the social programs of which the child’s family are beneficiaries; the family’s educational background; socioeconomic measures from Chile’s Social Registry of Households; and a whole host of geographical variables, including the number of burglaries, the percentage of single-parent households, and the unemployment rate in the child’s neighborhood. Each of these variables measures poverty, directly or by proxy. By this design, children in poorer areas can be expected to receive higher risk scores, which is likely to perpetuate over-intervention in certain neighborhoods.
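A simple worked example, using invented weights and values, shows how this design plays out: because every feature is a poverty proxy, the score is driven almost entirely by neighborhood deprivation, while a child from a wealthier area (who, as noted above, barely appears in the underlying databases anyway) receives a negligible score.

```python
# Illustration of how poverty-proxy features translate into risk scores.
# All weights and values here are invented for the example; they are
# not taken from the Childhood Alert System.
import math

def risk_score(features, weights, bias=-4.0):
    """Logistic score over poverty-proxy features."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical weights for:
# [n_social_programs, burglaries_per_1000_residents,
#  single_parent_household_%, neighborhood_unemployment_%]
weights = [0.6, 0.05, 0.03, 0.10]

poorer_neighborhood    = [3, 40, 35, 14]
wealthier_neighborhood = [0,  5, 10,  4]

print(risk_score(poorer_neighborhood, weights))     # ~0.90: likely flagged
print(risk_score(wealthier_neighborhood, weights))  # ~0.05: effectively invisible
```

The point is not the particular numbers but the structure: when every input measures disadvantage, a high risk score is close to a restatement of poverty.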
Economic and social inequalities, including significant regional disparities in living conditions, persist in Chile. As elsewhere, poverty and marginalization do not fall evenly. Women, migrants, people living in rural areas, and indigenous groups are more likely to live in poverty; indigenous groups have Chile’s highest poverty rates. Because the Alert System is skewed towards low-income populations, it will likely disproportionately flag children from indigenous groups, raising issues of racial and ethnic bias. Furthermore, the datasets used will themselves reflect inequalities and biases. Public datasets about families’ previous interactions with child protective services, for example, are populated through social workers’ inputs. Biases against indigenous families, young mothers, or migrants, reflected in disproportionate investigations or stereotyped judgments about parenting, will be fed into the database.
The developers of this predictive tool wrote in their evaluation that concerns about racial disparities “have been expressed in the context of countries like the United States, where there are greater challenges related to racism. In the local Chilean context, we frankly don’t see similar concerns about race.” As Paz Peña points out, this dismissal is “difficult to understand” in light of the evidence of racism and racialized poverty in Chile.
Predictive systems such as these are premised on linking individuals’ characteristics and circumstances with the incidence of harm. As Abeba Birhane puts it, such approaches by their nature “force determinability [and] create a world that resembles the past”: by attaching risk to certain individual traits, they reinforce stereotypes.
The global context
These issues of bias, disproportionality, and determinacy in predictive child welfare tools have already been raised in other countries. Public outcry, ethical concerns, and evidence that these tools simply do not work as intended have led many such systems to be scrapped. In the United Kingdom, a local authority’s Early Help Profiling System, which “translates data on families into risk profiles [of] the 20 families in most urgent need,” was abandoned after it had “not realized the expected benefits.” The child welfare agency of the U.S. state of Illinois strongly criticized and scrapped its predictive tool, which had flagged hundreds of children as 100% likely to be injured while failing to flag any of the children who tragically did die from mistreatment. And in New Zealand, the Social Development Minister blocked the deployment of a predictive tool on ethical grounds, reportedly remarking: “These are children, not lab rats.”
But while predictive tools are being scrapped on grounds of ethics and ineffectiveness in certain contexts, these same systems are spreading across the Global South. Indeed, the Chilean case demonstrates this trend especially clearly. The team of researchers who developed Chile’s Childhood Alert System is the very same team whose modelling was halted by the New Zealand government over ethical questions, and whose predictive tool for Allegheny County, Pennsylvania, was the subject of high-profile and powerful critique by many actors, including Virginia Eubanks in her 2018 book Automating Inequality.
As Paz Peña noted, it should come as no surprise that systems which are increasingly deemed too harmful in some Global North contexts are proliferating in the Global South. These spaces are often seen as an “easier target,” with lower chances of backlash than places like New Zealand or the United States. In Chile, weaker institutions resulting from the legacies of military dictatorship and the staunch commitment to a “subsidiary” (streamlined, outsourced, neoliberal) state may be deemed to provide more fertile ground for such systems. Indeed, the tool’s developers wrote in a report that achieving acceptance of the system in Chile would be “simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits.”
This highlights the indispensability of international comparison, cooperation, and solidarity. Those of us working in this space must pay close attention to developments around the world as these systems continue to be hawked at breakneck speed. Identifying parallels, sharing information, and collaborating across constituencies are vital to support the organizations and activists working to raise awareness of these systems.