TECHNOLOGY AND HUMAN RIGHTS

Profiling the Poor in the Dutch Welfare State

Report on court hearing in litigation in the Netherlands about digital welfare fraud detection system (‘SyRI’)

On Tuesday, October 29, 2019, I attended a hearing before the District Court of The Hague (the Netherlands) in litigation by a coalition of Dutch civil society organizations challenging the Dutch government’s System Risk Indication (“SyRI”). The Digital Welfare State and Human Rights Project at NYU Law, which I direct, recently collaborated with the United Nations Special Rapporteur on extreme poverty and human rights in preparing an amicus brief to the District Court. The Special Rapporteur became involved in this case because SyRI has been used exclusively to detect welfare fraud and other irregularities in poor neighborhoods in four Dutch cities, and affects the rights to social security and privacy of the poorest members of Dutch society. This litigation may also set a highly relevant legal precedent with impact beyond Dutch borders in an area that has received relatively little judicial scrutiny to date.

Lies, damn lies, and algorithms

What is SyRI? The formal answer can be found in legislation and implementing regulations from 2014. Since 2014, Dutch law has allowed the sharing of data between municipalities, welfare authorities, tax authorities and other relevant government authorities in order to coordinate government action against illicit use of government funds and benefits in the areas of social security, tax benefits and labor law. A total of 17 categories of data held by government authorities may be shared in this context, from employment and tax data to benefit data, health insurance data and enforcement data, among other categories of digitally stored information. Government authorities wishing to cooperate in a concrete SyRI project request that the Minister for Social Affairs and Employment use the SyRI tool, which pools and analyzes the relevant data from the various authorities using an algorithmic risk model.

The Minister has outsourced the tasks of pooling and analyzing the data to a private foundation, somewhat unfortunately named ‘The Intelligence Agency’ (‘Inlichtingenbureau’). The Intelligence Agency pseudonymizes the data pool, analyzes the data using an algorithmic risk model and creates a file for those individuals (or corporations) who are deemed to be at a higher risk of being involved in benefit fraud and other irregularities. The Minister then analyzes these files and notifies the cooperating government authorities of those individuals (or corporations) who are considered at higher risk of committing benefit fraud or other irregularities (a ‘risk notification’). Risk notifications are included in a register for two years. Those who are included in the register are not actively notified of this registration, but they can obtain access to their information in the register upon specific request.
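The workflow described above — pooling data, pseudonymizing it, applying a risk model with pre-defined indicators, and keeping risk notifications in a register for two years — can be sketched conceptually. Everything in the sketch below is hypothetical: the actual indicators, thresholds, data fields and pseudonymization scheme are secret (a central complaint in this litigation), so this is only a minimal illustration of the kind of pre-defined, non-learning risk model the State describes, not the real system.

```python
import hashlib
from datetime import date, timedelta

# Risk notifications remain in the register for two years.
RETENTION = timedelta(days=730)

def pseudonymize(citizen_id: str, salt: str = "project-salt") -> str:
    """Replace a direct identifier with a stable pseudonym (invented scheme)."""
    return hashlib.sha256((salt + citizen_id).encode()).hexdigest()[:16]

def risk_indicators(record: dict) -> list:
    """Pre-defined, non-'learning' indicators; these rules are invented."""
    flags = []
    if record.get("benefit_claimed") and record.get("employment_income", 0) > 0:
        flags.append("benefit/income inconsistency")
    if record.get("claims_single_household") and record.get("water_usage_m3", 0) > 120:
        flags.append("household-size inconsistency")
    return flags

def run_syri_project(records: list, today: date) -> list:
    """Pool, pseudonymize and analyze records; emit risk notifications."""
    register = []
    for rec in records:
        flags = risk_indicators(rec)
        if flags:
            register.append({
                "pseudonym": pseudonymize(rec["citizen_id"]),
                "flags": flags,
                "expires": today + RETENTION,
            })
        # Data on subjects who are not flagged is discarded after the analysis.
    return register
```

The point of the sketch is structural: a decision-rule model of this kind simply checks hand-written conditions against pooled records, which is what distinguishes the State's characterization (simple cross-checks on pre-defined indicators) from the plaintiffs' characterization (large-scale data-mining and profiling).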

The preceding understanding of how the system works can be derived from the legislative texts and history, but a surprising amount of uncertainty remains about how exactly SyRI works in practice. This became abundantly clear in the hearing in the SyRI case before the District Court of The Hague on October 29. The court is assessing the plaintiffs’ claim that SyRI, as legislated in 2014, violates norms of applicable international law, including the rights to privacy, data protection and a fair trial recognized in the European Convention on Human Rights, the Charter of Fundamental Rights of the European Union, the International Covenant on Civil and Political Rights and the EU General Data Protection Regulation. In a courtroom packed with representatives of the eight plaintiffs, reporters and concerned citizens from areas where SyRI has been used, the first question by the three-judge panel was to clarify the radically different views held by the plaintiffs and the Dutch State as to what SyRI is exactly.

According to the State, SyRI merely compares data from different government databases, operated by different authorities, in order to find simple inconsistencies. Although this analysis is undertaken with the assistance of an algorithm, the State underlined that this algorithm operates on the basis of pre-defined indicators of risk and that the algorithm is not of the ‘learning’ type. The State further emphasized that SyRI is not a Big Data or data-mining system, but that it employs a targeted analysis on the basis of a limited dataset with a clearly defined objective. It also argued that a risk notification by SyRI is merely a – potential – starting point for further investigations by individual government authorities and does not have any direct and automatic legal consequences such as the imposition of a fine or the suspension or withdrawal of government benefits or assistance.

But the plaintiffs strongly contested the State’s characterization of SyRI. They claimed instead that SyRI is not narrowly targeted but aims at entire (poor) neighborhoods, that diverse and unconnected categories of personal data are brought together in SyRI projects, and that the resulting data exchange and analysis occur on a large scale. In their view, SyRI projects could therefore be qualified as projects involving problematic uses of Big Data, data-mining and profiling. They also made clear that it is exceedingly difficult for them or the District Court to assess what SyRI actually is or is not doing, because key elements of the system remain secret and the relevant legislation does not restrict the methods used, including the request to cooperating authorities to undertake a SyRI project, the risk model used, and the ways in which personal data can be processed. All of these elements remain hidden from outside scrutiny.

Game the system, leave your water tap running

The District Court asked a series of probing and critical follow-up questions in an attempt to clarify the exact functioning of SyRI and to understand the justification for the secrecy surrounding it. One can sympathize with the court’s attempt to grasp the basic facts about SyRI in order to enable it to undertake its task of judicial oversight. Pushed by the District Court to clarify why the State could not be more open about the functioning of SyRI, the attorney for the State warned that welfare beneficiaries might ‘game the system’. The attorney referred to a pilot project pre-dating SyRI in which welfare authority data about individuals claiming low-income benefits was matched against usage data held by publicly owned drinking water companies, in order to identify beneficiaries who committed fraud by falsely claiming to live alone (and thereby claim a higher benefit level) while actually living with someone else. Making it known that water usage is a ‘risk indicator’, the attorney argued, could lead beneficiaries to leave their taps running to avoid detection. Some individuals attending the hearing could be heard snickering at this prediction.

Another fascinating exchange between the judges and the attorney for the State dealt with the standards applied by the Minister when assessing a request for a SyRI project by municipal and other government authorities. According to the State’s attorney, what would commonly happen is that a municipality has a ‘problem neighborhood’ and wants to tackle its problems, which are presumed to include welfare fraud and other irregularities, through SyRI. The request to the Minister is typically based ‘on the law, experience and logical thinking’ according to the State. Unsatisfied with this reply, the District Court probed the State for a more concrete justification of the use of SyRI and the precise standards applied to justify its use: ‘In Bloemendaal (one of the richest municipalities of the Netherlands) a lot of people enjoy going to classical concerts; in a problem neighborhood, there are a lot of people who receive government welfare benefits; why is that a justification for the use of SyRI?’, the Court asked. The attorney for the State had to admit that specific neighborhoods were targeted because those areas housed more people who were on welfare benefits and that, while participating authorities usually have no specific evidence that there are high(er) levels of benefit fraud in those neighborhoods, this higher proportion of people on benefits is enough reason to use SyRI.

Finally, and of great relevance to the intensity of the Court’s judicial scrutiny, the question of the gravity of the invasion of human rights – more specifically, the right to privacy – was a central topic of the hearing. The State argued that the data being shared and analyzed was existing data and not new data. It furthermore argued that for those individuals whose data was shared and analyzed, but who were not considered a ‘higher risk’, there was no harm at all: their data had been pseudonymized and was removed after the analysis. The opposing view by plaintiffs was that the government-held data that was shared and analyzed in SyRI was not originally collected for the specific purpose of enforcement. Plaintiffs also argued that – due to the wide categories of data that were potentially shared and analyzed in SyRI – a very intimate profile could be made of individuals in targeted neighborhoods: ‘This is all about profiling and creating files on people’.

Judgment expected in early 2020

The District Court announced that it expects to publish its judgment in this case on 29 January 2020. There are many questions to be answered by the Court. In non-legal language, they include at least the following: How does SyRI work exactly? Does it matter whether SyRI uses a relatively straightforward ‘decision-tree’ type of algorithm or, instead, machine learning algorithms? What is the harm in pooling previously siloed government data? What is the harm in classifying an individual as ‘high risk’? Does SyRI discriminate on the basis of socio-economic status, migrant status, race or color? Does the current legislation underpinning SyRI give sufficient clarity and adequate legal standards to meaningfully curb the use of State power to the detriment of individual rights? Can current levels of secrecy be maintained in a democracy based on the rule of law?

In light of the above, there will be many eyes focused on the Netherlands in January when a potentially groundbreaking legal precedent will be set in the debate on digital welfare states and human rights.

November 1, 2019.  Christiaan van Veen, Digital Welfare State & Human Rights Project (2019-2022), Center for Human Rights and Global Justice at NYU School of Law.