TECHNOLOGY & HUMAN RIGHTS

Regulating Artificial Intelligence in Brazil

On May 25, 2023, the Center for Human Rights and Global Justice’s Technology & Human Rights team hosted an event entitled Regulating Artificial Intelligence: The Brazilian Approach, the fourteenth episode of the “Transformer States” interview series on digital government and human rights. This in-depth conversation with Professor Mariana Valente, a member of the Commission of Jurists created by the Brazilian Senate to work on a draft bill to regulate artificial intelligence, raised timely questions about the specificities of ongoing regulatory efforts in Brazil. These developments may have significant global implications, potentially inspiring more creative, rights-based, and socio-economically grounded regulation of emerging technologies in the Global South.

In recent years, numerous initiatives to regulate and govern Artificial Intelligence (AI) systems have arisen in Brazil. First, there was the Brazilian Strategy for Artificial Intelligence (EBIA), launched in 2021. Second, legislation known as Bill 21/20, which sought to specifically regulate AI, was approved by the House of Representatives in 2021. And in 2022, a Commission of Jurists was appointed by the Senate to draft a substitute bill on AI. This latter initiative holds significant promise. While the EBIA and Bill 21/20 were heavily criticized for giving little weight to public input despite the participatory and multi-stakeholder mechanisms available, the Commission of Jurists took specific precautions to be more open to public input. Its proposed alternative draft legislation, which is grounded in Brazil’s socio-economic realities and legal tradition, may inspire further legal regulation of AI, especially in the Global South, given Brazil’s prominent position in other discussions of internet and technology governance.

Bill 21/20 was the first bill directed specifically at AI. But it was a very minimal bill; it effectively established that regulating AI should be the exception. It was also based on a decentralized model, meaning that each economic sector would regulate its own applications of AI: for example, the federal agency dedicated to regulating the healthcare sector would regulate AI applications in that sector. There were no specific obligations or sanctions for the companies developing or employing AI, only some guidelines for the government on how it should promote the development of AI. Overall, the bill was very friendly to the private sector’s preference for the most minimal regulation possible. It was quickly approved in the House of Representatives, without public hearings or much public attention.

It is important to note that this bill does not exist in isolation. Other legislation applies to AI in the country, such as consumer law and data protection law, as well as the Marco Civil da Internet (Brazilian Civil Rights Framework for the Internet). These existing laws have been leveraged by civil society to protect people from AI harms. For example, Instituto Brasileiro de Defesa do Consumidor (IDEC), a consumer rights organization, successfully brought a public civil action under consumer protection legislation against Via Quatro, the private company responsible for the 4-Yellow subway line in São Paulo. The company was fined R$500,000 for collecting and processing individuals’ biometric data for advertising purposes without informed consent.

But, given that Bill 21/20 sought to specifically address the regulation of AI, academics and NGOs raised concerns that it would reduce the legal protections afforded in Brazil: it “gravely undermines the exercise of fundamental rights such as data protection, freedom of expression and equality” and “fails to address the risks of AI, while at the same time facilitating a laissez-faire approach for the public and private sectors to develop, commercialize and operate systems that are far from trustworthy and human-centric (…) Brazil risks becoming a playground for irresponsible agents to attempt against rights and freedoms without fearing for liability for their acts.”

As a result, the Senate decided that instead of voting on Bill 21/20, they would create a Commission of Jurists to propose a new bill.

The Commission of Jurists and the new bill

The Commission of Jurists was established in April 2022 and delivered its final report in December 2022. Even though the establishment of the Commission was considered a positive development, it was not exempt from criticism from civil society, which pointed to the lack of racial and regional diversity in the Commission’s membership, as well as to the need for different areas of knowledge to contribute to the debate. This criticism reflects the socio-economic realities of Brazil, one of the most unequal countries in the world, where inequalities are intersectional, cutting across race, gender, income, and territorial origin. AI applications will therefore have different effects on different segments of the population. This is already clear from the use of facial recognition in public security: more than 90% of the individuals arrested through this technology were Black. Another example is the use of an algorithm to evaluate requests for emergency aid during the pandemic, which denied many vulnerable people their benefits based on incorrect data.

During its mandate, the Commission of Jurists held public hearings, invited specialists from different areas of knowledge, and developed a public consultation mechanism allowing for written proposals. Following this process, the proposed new bill differed from Bill 21/20 in several respects. First, the new bill borrows from the EU’s AI Act by adopting a risk-based approach: obligations are graduated according to the risks an AI system poses. However, following the Brazilian tradition of structuring regulation from the perspective of individual and collective rights, the new bill merges the European risk-based approach with a rights-based approach: it confers individual and collective rights that apply in relation to all AI systems, independently of the level of risk they pose.

Secondly, the new bill includes additional obligations for the public sector, considering its differential impact on people’s rights. For example, there is a ban on the processing of racial information, and there are provisions on public participation in decisions regarding the adoption of these systems. Importantly, though the Commission discussed including a complete ban on facial recognition technologies in public spaces for public security, this proposal was not adopted: the bill instead establishes a moratorium, requiring that a specific law be approved to regulate this use.

What the future holds for AI regulation in Brazil

After the Commission submitted its report, in May 2023 the president of the Senate presented a new bill for AI regulation replicating the Commission’s proposal. On August 16, 2023, the Senate established a temporary internal commission to discuss the different proposals for AI regulation that have been presented in the Senate to date.

It is difficult to predict what will happen once the internal commission concludes its work, as political decisions will shape the next developments. What is important to bear in mind, however, is how far the discussion has progressed: from an initial bill that was minimal in scope and endorsed minimal regulation, to one that is much more protective of individual and collective rights and considerate of Brazil’s particular socio-economic realities. Brazil has historically played an important progressive role in global discussions on the regulation of emerging technologies, for example in the discussions around its Marco Civil da Internet. As Mariana Valente put it, “Brazil has had in the past a very strong tradition of creative legislation for regulating technologies.” The Commission of Jurists’ proposal repositions Brazil in such a role.

September 28, 2023. Marina Garrote, LLM program, NYU School of Law, whose research interests lie at the intersection of digital rights and social justice. Marina holds bachelor’s and master’s degrees from Universidade de São Paulo and previously worked at Data Privacy Brazil, a civil society association dedicated to public interest research on digital rights.

TECHNOLOGY & HUMAN RIGHTS

Putting Profit Before Welfare: A Closer Look at India’s Digital Identification System 

Aadhaar is the largest national biometric digital identification program in the world, with over 1.2 billion registered users. While the poor have been used as a “marketing strategy” for this program, the “real agenda” is the pursuit of private profit.

Over the past months, the Digital Welfare State and Human Rights Project’s “Transformer States” conversations have highlighted the tensions and deceits that underlie attempts by governments around the world to digitize welfare systems and wider attempts to digitize the state. On January 27, 2021, Christiaan van Veen and Victoria Adelmant explored the particular complexities and failures of Aadhaar, India’s digital identification system, in an interview with Dr. Usha Ramanathan, a recognized human rights expert.

What is Aadhaar?

Aadhaar is the largest national digital identification program in the world; over 1.2 billion Indian residents are registered and have been given unique Aadhaar identification numbers. In order to create an Aadhaar identity, individuals must provide biometric data, including fingerprints, iris scans, and facial photographs, and demographic information, including name, birthdate, and address. Once an individual is set up in the Aadhaar system (which can be complicated, depending on how easily the individual’s biometric data can be gathered, where they live, and how mobile they are), they can use their Aadhaar number to access public and, increasingly, private services. In many instances, accessing food rations, opening a bank account, and registering a marriage all require an individual to authenticate through Aadhaar. Authentication is mainly done by scanning one’s finger or iris, though One-Time Passcodes or QR codes can also be used.
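
To make this flow concrete, the sketch below models the enrollment-and-authentication cycle described above in Python. It is a simplified illustration, not UIDAI’s actual system: the field names, registry structure, and matching logic are all assumptions.

    from dataclasses import dataclass

    @dataclass
    class Resident:
        aadhaar_number: str
        fingerprint_template: bytes  # stands in for fingerprints, iris scans, photos
        name: str
        birthdate: str
        address: str

    registry: dict[str, Resident] = {}

    def enroll(resident: Resident) -> None:
        # Enrollment stores biometric and demographic data under a unique number.
        registry[resident.aadhaar_number] = resident

    def biometric_match(template: bytes, sample: bytes) -> bool:
        # Placeholder for a probabilistic matcher; real fingerprint matching
        # yields similarity scores and can fail when prints are worn.
        return template == sample

    def authenticate(aadhaar_number: str, sample: bytes) -> bool:
        # Authentication compares a fresh scan against the stored template;
        # a False here can mean no rations, no bank account, no marriage registration.
        resident = registry.get(aadhaar_number)
        return resident is not None and biometric_match(resident.fingerprint_template, sample)

    enroll(Resident("1234-5678-9012", b"ridge-pattern", "Asha", "1970-01-01", "Delhi"))
    print(authenticate("1234-5678-9012", b"ridge-pattern"))  # True
    print(authenticate("1234-5678-9012", b"worn-print"))     # False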

The welfare “façade”

Unique Identification Authority of India (UIDAI) is the government agency responsible for administering the Aadhaar system. Its vision, mission, and values include empowerment, good governance, transparency, efficiency, sustainability, integrity, and inclusivity. UIDAI has stated that Aadhaar is intended to facilitate “inclusion of the underprivileged and weaker sections of the society and is therefore a tool of distributive justice and equality.” Like many of the digitization schemes examined in the Transformer States series, the Aadhaar project promised all Indians formal identification that would better enable them to access welfare entitlements. In particular, early government statements claimed that many poorer Indians did not have any form of identification, thereby justifying Aadhaar as a way for them to access welfare. However, recent research suggests that fewer than 0.03% of Indian residents lacked formal identification such as birth certificates.

Although most Indians now have an Aadhaar “identity,” the Aadhaar system fails to live up to its lofty promises. The main issues preventing Indians from effectively claiming their entitlements are:

  • Shifting the onus of establishing authorization and entitlement onto citizens. A system that is supposed to make accessing entitlements and complying with regulations “straightforward” or “efficient” often results in frustrating and disempowering rejections or denials of services. The government asserts that the system is “self-cleaning,” which means that individuals have to fix their identity record themselves. For example, they must manually correct errors in their name or date of birth, despite not always having resources to do so.
  • Concerns with biometrics as a foundation for the system. When the project started, there was limited data or research on the effectiveness of biometric technologies for accurately establishing identity in the context of developing countries. However, the last decade of research reveals that biometric technologies do not work well in India. It can be impossible to reliably provide a fingerprint in populations with a substantial proportion of manual laborers and agricultural workers, and in hot and humid environments. Given that biometric data is used for both enrolment and authentication, these difficulties frustrate access to essential services on an ongoing basis.

Given these issues, Usha expressed concern that the system, initially presented as a voluntary program, is now effectively compulsory for those who depend on the state for support.

Private motives against the public good

The Aadhaar system is therefore failing the very individuals it was purported to be designed to help. The poorest are used as a “marketing strategy,” but it is clear that private profit is, and always was, the main motivation. From the outset, the Aadhaar “business model” was meant to benefit private companies by growing India’s “digital economy” and creating a rich and valuable dataset. In particular, it was envisioned that the Aadhaar database could be used by banks and fintech companies to develop products and services, which further propelled the drive to get all Indians onto the database. Given its breadth and reach, the database is an attractive asset for private profit-making and is seen as providing the foundation for the creation of an “Indian Silicon Valley.” Tellingly, the acronym “KYC,” used by UIDAI to assert that Aadhaar would help the government “know your citizen,” is now understood as “know your customer.”

Protecting the right to identity

The right to identity must not be confused with identification. Usha notes that “identity is complex and cannot be reduced to a number or a card,” because doing so empowers the data controller or data system to effectively choose whether to recognize the person seeking identification, or to “paralyse” their life by rejecting, or even deleting, their identification number. History shows the disastrous effects of using population databases to control and persecute individuals and communities, such as during the Holocaust and the Yugoslav Wars. Further risks arise from the fact that identification systems like Aadhaar “fix” a single identity for individuals. Parts of a person’s identity that they may wish to keep separate—for example, their status as a sex worker, health information, or socio-economic status—are combined in a single dataset and made available in a variety of contexts, even if that data may be outdated, irrelevant, or confidential.

Usha concluded that there is a compelling need to reconsider and redraw attempts at developing universal identification systems to ensure they are transparent, democratic, and rights-based. They must, from the outset, prioritize the needs and welfare of people over claims of “efficiency,” which in reality, have been attempts to obtain profit and control.

February 15, 2021. Holly Ritson, LLM program, NYU School of Law; and Human Rights Scholar with the Digital Welfare State and Human Rights Project.

TECHNOLOGY & HUMAN RIGHTS

On the Frontlines of the Digital Welfare State: Musings from Australia

Welfare beneficiaries are in danger of losing their payments to “glitches” or because they lack internet access. So why is digitization still seen as the shiny panacea to poverty?

I sit here in my local pub in South Australia using the Wi-Fi, wondering whether this will still be possible next week. A month ago, we were in lockdown, but my routine for writing required me to leave the house because I did not have reliable internet at home.

Not having internet may seem alien to many. When you are in a low-income bracket, things people take for granted become huge obstacles to navigate. This is becoming especially apparent as social security systems are increasingly digitized. Not having access to technologies can mean losing access to crucial survival payments.

A working phone with internet data is required to access the Australian social security system. Applicants must generally apply for payments through the government website, which is notorious for crashing. When the pandemic hit, millions of the newly-unemployed were outraged that they could not access the website. Those of us already receiving payments just smiled wryly; we are used to this. We are told to use the website, but then it crashes, so we call and are put on hold for an hour. Then we get cut off and have to call back. This is normal. You also need a phone to fulfill reporting obligations. If you don’t have a working phone, or your battery dies, or your phone credit runs out, your payment can be suspended on the assumption that you’re deliberately shirking your reporting obligations.

In the last month, I was booted off my social security disability employment service. Although I have a certified disability affecting my ability to seek work, the digital system unceremoniously dumped me onto the regular job-seeking system, which punishes people for missing appointments. The system, unfortunately, had “glitched,” a popular term used by those in power for when payment systems fail. After narrowly missing a scheduled phone appointment, my payment was suspended indefinitely. Phone calls of over an hour didn’t resolve it; I didn’t even get to speak to a person who could have resolved the issue. This is the danger of trusting digital technology above humans.

This is also the huge flaw in Income Management (IM), the “banking system” through which social security payments are controlled. I put “banking system” in quotation marks because it’s not run by a bank; there are none of the consumer protections of financial institutions, nor the choice to move if you’re unhappy with the service. The cashless welfare card is a tool for such IM: beneficiaries on the card can only withdraw 20% of their payment as cash, and the card restricts how the remaining 80% can be spent (for example, purchases of alcohol are blocked, as are purchases from online retailers like eBay). IM was introduced in certain rural areas of Australia deemed “disadvantaged” by the government.
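
To illustrate, here is a minimal sketch of that quarantining logic in Python. Indue’s actual systems are not public; the 20/80 split and the restricted categories below simply restate the figures above, and everything else is assumed.

    RESTRICTED_CATEGORIES = {"alcohol", "gambling", "online_retail"}

    def split_payment(amount: float) -> tuple[float, float]:
        # Only 20% of a payment is withdrawable as cash; the remaining 80%
        # stays on the card, subject to merchant restrictions.
        cash = round(amount * 0.20, 2)
        return cash, round(amount - cash, 2)

    def card_purchase_allowed(category: str) -> bool:
        # The card blocks whole categories of spending outright.
        return category not in RESTRICTED_CATEGORIES

    cash, quarantined = split_payment(500.00)
    print(cash, quarantined)                   # 100.0 400.0
    print(card_purchase_allowed("groceries"))  # True
    print(card_purchase_allowed("alcohol"))    # False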

The cashless welfare card is operated by Indue, a company contracted by the Australian government to administer social security payments. This is not a company with a good reputation for dealing with vulnerable populations. It is a monolith that is almost impossible to fight. Indue’s digital system can’t recognize rent cycles, meaning after a certain point in the month, the ‘limit’ for rent can be reached and a rent debit rejected. People have had to call and beg Indue to let them pay their landlords; others have been made homeless when the card stopped them from paying rent. They are stripped of agency over their own lives. They can’t use their own payments for second-hand school uniforms, or community fêtes, or buying a second-hand fridge. When you can’t use cash, avenues of obtaining cheaper goods are blocked off.

Certain politicians tout the cashless welfare card as a way to stop the poor from spending on alcohol and drugs. In reality, the vast majority affected by this system have no such problems with addiction. But when you are on the card, you are automatically classified as someone who cannot be trusted with your own money; an addict, a gambler, a criminal.

Politicians claim it’s like any other card, but this is a lie. It makes you a pariah in the community and is a tacit license for others to judge you. When you are at the whim and mercy of government policy, when you are reliant on government payments controlled by a third party, you are on the outside looking in. You’re automatically othered; you’re made to feel ashamed, stupid, and incapable.

Beyond this stigma, there are practical issues too. The cashless welfare card system assumes you have access to a smartphone and internet to check your account balance, which can be impossible for those with low incomes. Pandemic restrictions close the pubs, universities, cafes, and libraries which people rely on for internet access. Those without access are left by the wayside. “Glitches” are also common in Indue accounts: money can go missing without explanation. This ruins account-holders’ plans and forces them to waste hours having non-stop arguments with brick-wall bureaucracy and faceless people telling them they don’t have access to their own money.

Politicians recently had the opportunity to reject this system of brutality. The “Cashless Welfare Card trials” were slated to end on December 31, 2020, and a bill was voted on to determine whether these “trials” would continue. The people affected by this system had already told politicians how much it ruins their lives. Once again, they used their meager funds to call politicians’ offices and beg them to see the hell they’re experiencing. They used their internet data to email and to rally others to do the same. I personally delivered letters to two politicians’ offices, complete with academic studies detailing the problems with IM. For a split second, it seemed like the politicians listened, and some even promised to vote to end the trials. But a last-minute backroom deal meant that these promises were broken. The lived experiences of welfare recipients did not matter.

The global push to digitize welfare systems must be interrogated. When the most vulnerable in society are in danger of losing their payments to “glitches” or because they lack internet access, it raises the question: why is digitization still seen as the shiny panacea to poverty?

February 1, 2021. Nijole Naujokas, an Australian activist and writer who is passionate about social justice for the vulnerable. She is the current Secretary of the Australian Unemployed Workers’ Union and is completing her Honours degree in Creative Writing at The University of Adelaide.

TECHNOLOGY & HUMAN RIGHTS

Marketizing the digital state: the failure of the ‘Verify’ model in the United Kingdom

Verify, the UK government’s digital identity program, sought to construct a market for identity verification in which companies would compete. But the assumption that companies should be positioned between government and individuals who are trying to access services has gone unquestioned.

The story of the UK government’s Verify service has been told as one of outright failure and a colossal waste of money. Intended as the single digital portal through which individuals accessing online government services would prove their identity, Verify underperformed for years and is now effectively being replaced. But accounts of its demise often focus on technical failures and inter-departmental politics, rather than evaluating the underlying political vision that Verify represents. This is a vision of market creation, whereby the government constructs a market for identity verification within which private companies can compete. As Verify is replaced and the UK government’s ‘digital transformation’ continues, the failings of this model must be examined.

Whether an individual wants to claim a tax refund from Her Majesty’s Revenue and Customs, renew her driver’s license through the Driver and Vehicle Licensing Agency, or receive her welfare payment from the Department for Work and Pensions, the government’s intention was that she could prove her identity to any of these bodies through a single online platform: Verify. This was a flagship project of the Government Digital Service (GDS), a unit working across departments to lead the government’s digital transformation. Much of GDS’s work was driven by the notion of ‘government as a platform’: government should design and build “supporting infrastructure” upon which others can build.

Squarely in line with this idea, Verify provides a “platform for identity.” GDS technologists wrote the software for the Verify platform, while the government then accredits companies as ‘identity providers’ (IdPs) which ‘plug into’ the platform to compete. An individual who seeks to access a government service online will see Verify on her screen and will be prompted by Verify to choose an identity provider. She will be redirected to that IdP’s website and must enter information such as her passport number or bank details. The IdP then checks this information against public and private databases before confirming her identity to the government service being requested. The individual therefore leaves the government website to verify her identity with a separate, private entity.
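
The sketch below renders this redirect-and-verify architecture in Python. It is a schematic illustration of the flow just described, not the real Verify integration: the provider, records, and function names are hypothetical.

    KNOWN_RECORDS = {"passport:925076473", "bank:12345678"}

    class IdentityProvider:
        # A private company accredited as an 'IdP' and plugged into the platform.
        def __init__(self, name: str):
            self.name = name

        def verify(self, evidence: list[str]) -> bool:
            # The IdP checks the user's evidence against public and private databases.
            return all(item in KNOWN_RECORDS for item in evidence)

    def access_service(evidence: list[str], idp: IdentityProvider) -> str:
        # The user leaves the government website, is verified by the chosen
        # company, and regains access only if the IdP confirms their identity.
        if idp.verify(evidence):
            return f"{idp.name}: identity confirmed, service granted"
        return f"{idp.name}: verification failed, service blocked"

    print(access_service(["passport:925076473"], IdentityProvider("ExampleIdP")))
    print(access_service(["passport:000000000"], IdentityProvider("ExampleIdP")))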

As GDS “didn’t think there was a market,” it aimed to support “the development of a digital identity market that spans both public and private sectors” so that users could “use their verified identity accounts for private sector transactions as well as government services.” After Verify went live in 2016, the government accredited seven IdPs, including credit reporting agency Experian and Barclays bank. Government would pay IdPs per user, with the price per user decreasing as user volumes increased. GDS intended Verify to become self-funding: government funding would end in Spring 2020, at which point the companies would take over responsibility. GDS was confident that the IdPs would “keep investing in Verify” and would “ensure the success of the market.”

But a market failed to emerge. The government spent over £200 million on Verify and lowered its estimate of its financial benefits by 75%. Though IdPs were supposed to take over responsibility for Verify, almost every company withdrew. After April 2020, new users could register with either the (privatized) Post Office or Digidentity, the only two remaining IdPs. But the Post Office is “a ‘white-label’ version of Digidentity that runs off the same back-end identity engine.” Rather than creating a market, a monopoly effectively emerged.

This highlights the flaws of the underlying approach. Government paid to develop and maintain the software, and then paid companies to use that software. Government also bore most of the risk: companies could enter the scheme, be paid tens of millions, then withdraw if the service proved less profitable than expected, without having invested in building or maintaining the infrastructure. This is reminiscent of the UK government’s decision to bear the costs of maintaining railway tracks while having private companies profit from running trains on these tracks. Government effectively subsidizes profit.

GDS had been founded as a response to failings in government IT outsourcing: instead of procuring overpriced technologies, GDS would write software itself. But this prioritization of in-house development was combined with an ideological notion that government technologists’ role is to “jump-start and encourage private sector investment” and to build digital infrastructure while relying on the market to deliver services using that infrastructure. This ideal of marketizing the digital state represents a new “orthodoxy” for digital government; the National Audit Office has highlighted the lack of “evidence underpinning GDS’s assumptions that a move to a private sector-led model [was] a viable option for Verify.”

These assumptions are particularly troubling here, as identity verification is an essential moment within state-to-individual interactions. Companies were positioned between government and individuals, and effectively became gatekeepers. An individual trying to access an online government service was disrupted, as she was redirected and required to go through a company. Equal access to services was splintered into a choice of corporate gateways.

This is significant as the rate of successful identity verifications through Verify hovered around 40-50%, meaning over half of attempts to access online government services failed. More worryingly, the verification rate depended on users’ demographic characteristics, with only 29% of Universal Credit (welfare benefits) claimants able to use Verify. If claimants were unable to prove their identity to the system, their benefits applications were often delayed. They had to wait longer to access payments to which they were entitled by right. Indeed, record numbers of claimants have been turning to food banks while they wait for their first payment. It is especially important to question the assumption that a company needed to be inserted between individuals and government services when the stakes – namely further deprivation, hunger, and devastating debt – are so high.

Verify’s replacement became inevitable, with only two IdPs remaining. Indeed, the government is now moving ahead with a new digital identity framework prototype. This arose from a consultation which focused on “enabling the use of digital identity in the private sector” and fostering and managing “the digital identity market.” A Cabinet Office spokesperson has stated that this framework is intended to work “for government and businesses.”

The government appears to be pushing on with the same model, despite recurrent warning signs throughout the Verify story. As the government’s digital transformation continues, it is vital that the assumptions underlying this marketization of the digital state are fundamentally questioned.

March 30, 2021. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

TECHNOLOGY & HUMAN RIGHTS

Locked In! How the South African Welfare State Came to Rely on a Digital Monopolist

The South African Social Security Agency provides “social grants” to 18 million citizens. In using a single private company with its own biometric payment system to deliver grants, the state became dependent on a monopolist and exposed recipients to debt and financial exploitation.

On February 24, 2021, the Digital Welfare State and Human Rights Project hosted the fifth event in its “Transformer States” conversation series, which focuses on the human rights implications of the emerging digital state. In this conversation, Christiaan van Veen and Victoria Adelmant explored the impacts of outsourcing at the heart of South Africa’s social security system with Lynette Maart, the National Director of the South African human rights organization the Black Sash. This blog summarizes that conversation.

Delivering the right to social security

Section 27(1)(c) of the 1996 South African Constitution guarantees everyone the “right to have access” to social security. In the early years of the post-Apartheid era, the country’s nine provincial governments administered social security grants to fulfill this constitutional social right. In 2005, the South African Social Security Agency (SASSA) was established to consolidate these programs. The social grant system has expanded significantly since then, with about 18 million of South Africa’s roughly 60 million citizens receiving grants. The system’s growth and coverage has been a source of national pride. In 2017, the Constitutional Court remarked that the “establishment of an inclusive and effective program of social assistance” is “one of the signature achievements” of South Africa’s constitutional democracy.

Addressing logistical challenges through outsourcing

Despite SASSA’s progress in expanding the right to social security, its grant programs remain constrained by the country’s physical, digital, and financial infrastructure. Millions of impoverished South Africans live in rural areas lacking proper access to roads, telecommunications, internet connectivity, or banking, which makes the delivery of cash transfers difficult and expensive. Instead of investing in its own cash transfer delivery capabilities, SASSA awarded an exclusive contract in 2012 to Cash Paymaster Services (CPS), a subsidiary of the South African technology company Net1, to administer all of SASSA’s cash transfers nationwide. This made CPS a welfare delivery monopolist overnight.

SASSA selected CPS in large part because its payment system, which included a smart card with an embedded fingerprint-based chip, could reach the poorest and most remote parts of the country. Because it lacked a banking license of its own, CPS partnered with Grindrod Bank and opened 10 million new bank accounts for SASSA recipients. Cash transfers could be made via the CPS payment system to smart cards without the need for internet or electricity. CPS rolled out a nationwide network of 10,000 “paypoints” where social grant payments could be withdrawn, ensuring that recipients were never further than 5km from a paypoint.

Thanks to its position as sole deliverer of SASSA grants and its autonomous payment system, CPS also had unique access to the financial data of millions of the poorest South Africans. Other Net1 subsidiaries, including Moneyline (a lending group), Smartlife (a life insurance provider), and Manje Mobile (a mobile money service), were able to exploit this “customer base” to cross-sell services. Net1 subsidiaries were soon marketing loans, insurance, and airtime to SASSA recipients. These “customers” were particularly attractive because fees could be automatically deducted from SASSA grants the very moment they were paid out on CPS’s infrastructure. Recipients became a lucrative, practically risk-free market for lenders and other service providers due to these immediate automatic deductions from government transfers. The Black Sash has found that women were going to paypoints at 4.30am in their pajamas to try to withdraw their grants before deductions left them with hardly any of the grant.
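
The mechanics of deduction at source can be sketched as follows. This is an illustrative Python model built only from the description above, with invented amounts; it is not Net1’s actual system.

    def pay_grant(balance: float, grant: float, deductions: list[float]) -> float:
        # Credit the monthly grant, then immediately apply loan, insurance,
        # and airtime deductions before the recipient can withdraw anything.
        balance += grant
        for amount in deductions:
            balance -= min(amount, balance)  # on shared infrastructure, a deduction never bounces
        return balance

    # A hypothetical R1,600 grant with loan, insurance, and airtime deductions:
    print(pay_grant(0.0, 1600.0, [450.0, 99.0, 55.0]))  # 996.0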

Through its “Hands off Our Grants” advocacy campaign, the Black Sash showed that these deductions were often unauthorized and unlawful. Lynette told the story of Ma Grace, an elderly pensioner who was sold airtime even though she did not own a mobile phone, and whose avenues to recourse were all but blocked off. She explained that telephone helplines were not free but required airtime (which poor people often did not have), and that they “deflected calls” and exploited language barriers to ensure customers “never really got an answer in the language of their choice.”

“Lockin” and the hollowing out of state capacity

Net1’s exploitation of SASSA beneficiaries is only part of the story. This is also about multidimensional governmental failure stemming from SASSA’s outright dependence on CPS. As academic Keith Breckenridge has written, the Net1/SASSA relationship involves “vendor lockin,” a situation in which “the state must confront large, perhaps unsustainable, switching costs to break free of its dependence on the company for grant delivery and data processing.” There are at least three key dimensions of this lockin dynamic which were explored in the conversation:

  • SASSA outsourced both cash transfer delivery and program oversight to CPS. CPS’s “foot soldiers” wore several hats: the same person might deliver grant payments at paypoints, field complaints as local SASSA representatives, and sell loans or airtime. Commercial activity and benefits delivery were conflated.
  • The program’s structure resulted in acute regulatory failures. Because CPS (not Grindrod Bank) ultimately delivered SASSA funds to recipients via its payment infrastructure outside the National Payment System, the payments were exempt from normal oversight by banking regulators. Accordingly, the regulators were blind to unauthorized deductions by Net1 subsidiaries from recipients’ payments.
  • SASSA was entirely reliant on CPS and unable to reach its own beneficiaries. Though the Constitutional Court declared SASSA’s 2012 contract with CPS unconstitutional due to irregularities in the procurement process, it ruled that the contract should continue because SASSA could not yet deliver the grants without CPS. In 2017, Net1 co-founder and former CEO Serge Belamant boasted that SASSA would “need to use pigeons” to deliver social grants without CPS. While this was an exaggeration, when SASSA finally transitioned to a partnership with the South African Post Office in 2018, it had to reduce the number of paypoints from 10,000 to 1,740. As Lynette observed, SASSA now has a weaker footprint in rural areas, and rural recipients therefore “bear the costs of transport and banking fees in order to withdraw their own money.”

This story of SASSA, CPS, and social security grants in South Africa shows not only how outsourced digital delivery of welfare can lead to corporate exploitation and stymied access to social rights, but also how reliance on private technologies can induce “lockin” that undermines the state’s ability to perform basic and vital functions. As the Constitutional Court stated in 2017, the exclusive contract between SASSA and CPS led to a situation in which “the executive arm of government admits that it is not able to fulfill its constitutional and statutory obligations to provide for the social assistance of its people.”

March 11, 2021. Adam Ray, JD program, NYU School of Law; Human Rights Scholar with the Digital Welfare State & Human Rights Project in 2020. He holds a master’s degree from Yale University and previously worked as the CFO of Songkick.

TECHNOLOGY & HUMAN RIGHTS

I don’t see you, but you see me: asymmetric visibility in Brazil’s Bolsa Família Program

Brazil’s Bolsa Família Program, the world’s largest conditional cash transfer program, is indicative of broader shifts in data-driven social security. While its beneficiaries are becoming “transparent” as their data is made available, the way the State uses beneficiaries’ data is increasingly opaque.

“She asked a lot of questions and started filling out the form. When I asked her about when I was going to get paid, she said, ‘That’s up to the Federal Government.’” This experience of applying for Brazil’s Bolsa Família Program (“Programa Bolsa Família” in Portuguese, or PBF), the world’s largest conditional cash transfer program, hints at the informational asymmetries between individuals and the State. Such asymmetries have long existed, but information and communications technologies (ICTs) can exacerbate these imbalances. ICTs enable States to handle an increasing amount of personal data, and this is especially true in the PBF. In June 2020, 14.2 million Brazilian families living in poverty – 43.7 million individuals – were beneficiaries of the Bolsa Família program.

At the core of the PBF’s structure is a register called CadÚnico, which is used for more than 20 social policies. It includes detailed data on heads of households and less granular data on other family members. The law designates women as heads of household and thereby as the main PBF beneficiaries. Information is collected about income, the number of people living together, level of education and literacy, housing conditions, access to work, disabilities, and ethnic group. This data is used to select PBF beneficiaries and to monitor their compliance with the conditions on which the maintenance of the benefit depends, such as requirements that children attend school. The federal government also uses the CadÚnico to identify multidimensional vulnerabilities, grant other benefits, and enable research. Although different programs feed the CadÚnico, the PBF is its most important information provider due to its colossal size. In March 2021, the CadÚnico comprised 75.2 million individual entries from 28.9 million families: PBF beneficiaries make up half of them.
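
As a rough illustration of the kind of record CadÚnico holds, consider the Python sketch below. The fields are a small, hypothetical subset of the register’s actual contents, and the eligibility rule is an invented simplification of income-based selection.

    from dataclasses import dataclass

    @dataclass
    class HouseholdRecord:
        head_of_household: str  # by law, a woman where possible
        family_size: int
        monthly_income: float
        education_level: str
        housing_conditions: str
        has_disability: bool
        ethnic_group: str

    def eligible_for_pbf(record: HouseholdRecord, per_capita_cutoff: float) -> bool:
        # Selection compares per-capita household income with a program
        # cutoff; the real rules and conditionalities are far more detailed.
        return record.monthly_income / record.family_size <= per_capita_cutoff

    family = HouseholdRecord("Maria", 4, 600.0, "primary", "rented", False, "parda")
    print(eligible_for_pbf(family, 178.0))  # True: R$150 per capita is below this cutoff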

The person responsible for the family unit within the PBF must answer all of the entries of the “main form,” which consists of 77 questions with varying degrees of detail and sensitivity. All these data points expose the sensitive personal information and vulnerabilities of low-income individuals.

The scope of this large and comprehensive dataset is celebrated by social policy experts because it enables the State to target needs for other policies. Indeed, the CadÚnico has been used to identify the relevant beneficiaries for policies ranging from electricity tariff discounts to higher education subsidies. Holding huge amounts of information about low-income individuals can allow States to proactively target needs-based policies.

But when the State is not guided by the principle of data minimization (i.e. collecting only the necessary data and no more), this appetite for information increases and places the burden of risks on individuals. They are transparent to the State, while the State becomes increasingly opaque to them.

Upon registering for the PBF, citizens are not informed about what will happen to the information they provide. For example, the training materials for officials registering beneficiaries only note that officials must warn potential beneficiaries of their liability for providing false or inaccurate information; they do not state that officials must tell beneficiaries how their data will be used, nor inform them of their data rights, nor give any details about when or whether they might receive their cash transfer. The emphasis therefore lies on the responsibilities of the potential beneficiary rather than of the State. This lack of transparency about how people’s data will be used reduces citizens’ ability to exercise their rights.

In addition to the increased visibility of recipients to the State, the PBF also releases the beneficiaries’ data to the public due to strict transparency requirements. Though CadÚnico data is generally confidential, PBF recipients’ personal data is publicly available through different paths:

  • The Federal Government’s Transparency Portal publishes a monthly list containing the beneficiary’s name, municipality, NIS (social security number) and the amounts paid.
  • The portal of Caixa Econômica Federal, the public bank that administers social benefits, allows anyone to check the status of a benefit by entering the beneficiary’s name, NIS, and CPF (taxpayer identity number).
  • The NIS of any citizen can be queried at the Citizen’s Consultation Portal CadÚnico by providing name, mother’s name, and birth date.

Because a person’s status as a PBF beneficiary is so easily accessible, the (mostly female) beneficiaries suffer a lack of privacy from all sides and are stigmatized. Not only are they surveilled by the State as it closely monitors the program’s conditionalities, but they are also monitored by fellow citizens. Citizens have made complaints to the PBF about beneficiaries they believe should not receive cash transfers. At InternetLab, we used the Brazilian Access to Information Law to gain access to some of these complaints: 60% of them contained personal identification information about the accused beneficiary, suggesting that citizens are monitoring and reporting their “undeserving” neighbors and using the above portals to check the databases.

The availability of this data has further worrying consequences: at InternetLab, we have witnessed several instances of fraud and electoral propaganda directed at PBF beneficiaries’ phones, and it is not clear where this contact data came from. Different actors are profiling and targeting Brazilian citizens according to their socio-economic vulnerabilities.

The public availability of beneficiaries’ data is backed by law and arises from a desire to fight corruption in Brazil. This requires government spending, including on social programs, to be transparent. But spending on social programs has become more controversial in recent years amidst an economic crisis and the rise of conservative political majorities, and misplaced ideas of “corrupted beneficiaries” have mingled with anti-corruption sentiments. The emphasis has been placed on making beneficiaries “transparent,” rather than government.

Anti-corruption laws do not adequately differentiate between transparency practices that confront corruption and favor democracy, and those which, by focusing on recipients of social programs, disproportionately reinforce vulnerabilities and inequalities. Public contracts, public employees’ salaries, and beneficiaries of social benefits are all exposed on the same grounds. But these are substantially different uses of public resources, and exposure of these different kinds of data has very unequal impacts, with beneficiaries more likely to be harmed by this “transparency.”

The personal data of social program beneficiaries should be treated with more care, and we should question whether disclosing so much information about them is necessary. In the wake of Brazil’s General Data Protection Law, which came into force last year, it is vital that the work to increase the transparency of the State continues while the privacy of the vulnerable is protected, not the other way around.

May 3, 2021. Nathalie Fragoso and Mariana Valente.
Nathalie Fragoso, Head of Research, Privacy and Surveillance, InternetLab.
Mariana Valente, Associate Director, InternetLab.

TECHNOLOGY & HUMAN RIGHTS

Fearing the future without romanticizing the past: the role for international human rights law(yers) in the digital welfare state to be

Universal Credit is one of the foremost examples of a digital welfare system and the UK’s approach to digital government is widely copied. What can we learn from this case study for the future of international human rights law in the digital welfare state?

Last week, Victoria Adelmant and I organized a two-day workshop on digital welfare and the international rule and role of law, which was part of a series curated by Edinburgh Law School. While the workshop zoomed in on Universal Credit (UC) in the United Kingdom, arguably one of the most developed digital welfare systems in the world, our objective was broader: to imagine how and why law, especially international human rights law, does and should play a role when the state goes digital. Below are some initial and brief reflections on the rich discussions we had with close to 50 civil servants, legal scholars, computer scientists, digital designers, philosophers, welfare rights practitioners, and human rights lawyers.

What is “digital welfare?” There is no agreed-upon definition. At the end of a United Nations country visit to the UK in 2018, on which I accompanied the UN Special Rapporteur on extreme poverty and human rights, we coined the term by writing that “a digital welfare state is emerging.” Since then, I have spent years researching and advocating around these developments in the UK and elsewhere. For me, digital welfare can be (imperfectly) defined as a welfare system in which interactions with beneficiaries and internal government operations rely on various digital technologies.

In UC, that means you apply for and maintain your benefits online, your identity is verified online, your monthly benefits calculation is automated in real-time, fraud detection happens with the help of algorithmic models, etc. Obviously, this does not mean there is no human interaction or decision-making in UC. And the digitalization of the welfare state did not start yesterday either; it is a process many decades in the making. For example, a 1967 book titled The Automated State mentions the Social Security Administration in the United States as having “among the most extensive second-generation computer systems.” Today, digitalization is no longer just about data centers or government websites, and systems like UC exemplify how digital technologies affect each part of the welfare state.

So, what are some implications of digital welfare for the role of law, especially for international human rights law?

First, as was pointed out repeatedly in the workshop, law has not disappeared from the digital welfare state altogether. Laws and regulations, government lawyers, welfare rights advisors, and courts are still relevant. As for international human rights law, it is no secret that its institutionalization by governments, especially when it comes to economic and social rights, has never been perfect. Nor should we romanticize the past by imagining the earlier law- and rules-based welfare state as a rule-of-law utopia. I was reminded of this recently when I watched a 1975 documentary by Frederick Wiseman about a welfare office in downtown Manhattan, which was far from utopian. Applying law and rights to the welfare state has been a long and continuous battle.

Second, while there is much to fear about digitalization, we shouldn’t lose sight of its promises for the reimagination of a future welfare state. Several workshop participants emphasized the potential user-friendliness and rationality that digital systems can bring. For example, the UC system quickly responded to a rise in unemployment caused by the pandemic, while online application systems for unemployment benefits in the United States crashed. Welfare systems also have a long history of bureaucratic errors. Automation offers, at least in theory, a more rational approach to government. Such digital promises, however, are only as good as the political impetus that drives digital reform, which is often more focused on cost-savings, efficiency, and detecting supposedly ubiquitous benefit fraud than truly making welfare more user-friendly and less error-prone.

What role does law play in the future digital welfare state? Several speakers described the previous approach to the delivery of welfare benefits as top-down (“waterfall”): legislation would be passed, regulations would be written, and implementation by the welfare bureaucracy would follow as a final step. Not only is delivery now taking place digitally, but digital delivery follows a different logic. It has become “agile,” “iterative,” and “user-centric,” creating a feedback loop between legislation, ministerial rules and lower-level policy-making, and implementation. Implementation changes fast and often (we are now at UC 167.0).

It is also an open question what role lawyers will play. Government lawyers are changing primary social security legislation to make it fit the needs of digital systems. The idea of ‘Rules as Code’ is gaining steam and aims to produce legislation while also making sure it is machine-readable to support digital delivery. But how influential are lawyers in the overall digital transformation? While digital designers are crucial actors in designing digital welfare, lawyers may increasingly be seen as “dinosaurs,” slightly out of place when wandering into technologist-dominated meetings with post-it notes, flowcharts, and bouncy balls. Another “dinosaur” may be the “street-level bureaucrat.” Such bureaucrats have played an important role in interpreting and individualizing general laws. Yet, they are also at risk of being side-lined by coders and digital designers who increasingly shape and form welfare delivery and thereby engage in their own form of legal interpretation.
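
To see what ‘Rules as Code’ might look like in practice, here is a minimal sketch: a single, invented benefit rule written as executable logic. The allowance, taper rate, and the rule itself are hypothetical, not actual UC legislation.

    def monthly_entitlement(standard_allowance: float,
                            earned_income: float,
                            taper_rate: float = 0.55) -> float:
        # A hypothetical earnings taper: reduce the allowance by a fixed
        # share of earned income, never below zero. Drafting the rule this
        # way lets one text drive both the statute book and the digital
        # system that calculates awards in real time.
        reduction = earned_income * taper_rate
        return max(standard_allowance - reduction, 0.0)

    print(monthly_entitlement(400.0, 300.0))  # 235.0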

Most importantly, from the perspective of human rights: what happens to humans who have to interact with the digital welfare state? In discussions about digital systems, they are all too easily forgotten. Yet, there is substantial evidence of the human harm that may be inflicted by digital welfare, including deaths. While many digital transformations in the welfare state are premised on the methodology of “user-centered design,” its promise is not matched by its practice. Maybe the problem starts with conceptualizing human beings as “users,” but the shortcomings go deeper and include a limited mandate for change and interacting only with “users” who are already digitally visible.

While there is every reason to fear the future of digital welfare states, especially if developments turn toward lawlessness, such fear does not have to lead to outright rejection. Like law, digital systems are human constructs, and humans can influence their shape and form. The challenge for human rights lawyers and others is to imagine not only how law can be injected into digital welfare systems, but how such systems can be built on and can embed the values of (human rights) law. Whether it is through expanding the concept and practice of “user-centered design” or being involved in designing rights-respecting digital welfare platforms, (human rights) lawyers need to be at the coalface of the digital welfare state.

March 23, 2021. Christiaan van Veen, Director of the Digital Welfare State and Human Rights Project (2019-2022) at the Center for Human Rights and Global Justice at NYU School of Law.

TECHNOLOGY & HUMAN RIGHTS

Experimental automation in the UK immigration system

The UK government is experimenting with automated immigration systems. The promised benefits of automation are inevitably attractive, but these experiments routinely expose people—including some of the most vulnerable—to unacceptable risks of harm.

In April 2019, The Guardian reported that couples accused of sham marriages were increasingly being subjected to invasive investigations by the Home Office, the UK government body responsible for immigration policy. Couples reported having their wedding ceremonies interrupted to be quizzed about their sex life, being told they were not in a genuine relationship because they were wearing pajamas in bed, and being present while their intimate photos were shared between officials.

The official tactics reported are worrying enough, but it has since come to light through the efforts of a legal charity (the Public Law Project) and investigative journalists that an automated system is largely determining who gets investigated in the first place. An algorithm, hidden from public view, is sorting couples into “pass” and “fail” categories, based on eight unknown criteria.

Couples who “fail” this covert algorithmic test are subjected to intrusive investigations. They must attend an interview and hand over extensive evidence about their relationship, a process which has been described as “insulting” and “grueling.” These investigations can also prevent couples from getting married altogether. If the Home Office decides that a couple has failed to “comply” with an investigation—even if they are in a genuine relationship—the couple is denied a marriage certificate and forced to start the process all over again. One couple was reportedly ruled non-compliant for failing to provide six months of bank statements for an account that had only been open for four months. This makes it difficult for people to plan their weddings and their lives. And the investigation can lead to other immigration enforcement actions, such as visa cancellation, detention, and deportation. In one case, a sham marriage dawn raid led to a man being detained for four months, until the Home Office finally accepted that his relationship was genuine.
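
Little is publicly known about how the algorithm works, but a triage of this kind can be sketched schematically. In the Python sketch below, the flags and threshold are entirely invented, since the eight real criteria remain secret; the point is only to show how a covert rule-based screen sorts applications into “pass” and “fail.”

    # Hypothetical stand-ins for the eight unknown criteria. If such flags are
    # derived from historical enforcement data, they can encode nationality bias.
    HYPOTHETICAL_CRITERIA = [
        "short_relationship", "large_age_gap", "no_shared_address",
        "prior_visa_refusal", "visa_expiring_soon", "no_joint_finances",
        "interpreter_needed", "previous_enforcement_contact",
    ]

    def triage(application: dict[str, bool], threshold: int = 3) -> str:
        # Count how many risk flags an application trips; a "fail" triggers
        # an intrusive sham-marriage investigation.
        score = sum(application.get(flag, False) for flag in HYPOTHETICAL_CRITERIA)
        return "fail" if score >= threshold else "pass"

    print(triage({"short_relationship": True, "visa_expiring_soon": True,
                  "no_joint_finances": True}))  # fail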

We know little about how this automated system operates in practice or its effectiveness in detecting sham marriages. The Home Office refuses to disclose or otherwise explain the eight criteria at the center of the system. There is a real risk that the system is racially discriminatory, however. The criteria were derived from historical data, which may well be skewed against certain nationalities. The Home Office’s own analysis shows that some nationalities, including Bulgarian, Greek, Romanian and Albanian people, receive “fail” ratings more frequently than others.

The sham marriages algorithm is, in many respects, a typical case of the deployment of automation in the UK immigration system. It is not difficult to understand why officials are seeking to automate immigration decision-making. Administering immigration policy is a tough job. Officials are often inexperienced and under pressure to process large volumes of decisions. Each decision will have profound effects for those subjected to it. This is not helped by the dense complexity of, and frequent changes in, immigration law and policy, which can bamboozle even the most hardened administrative lawyer. All of this, of course, takes place in an environment where migration remains one of the most vexed issues on the political agenda. Automation’s promised benefits of greater efficiency, lower costs, and increased consistency are, from the government’s perspective, inevitably attractive.

But in reality, a familiar pattern of risky experimentation and failure is already emerging. It begins with the Home Office deploying a novel automated system with the goal of cheaper, quicker, and more accurate decision-making. There is often little evidence to support the system’s effectiveness in delivering those goals and scant consideration of the risks of harm. Such systems are generally intended to benefit the government or the general, non-migrant population, rather than the people subject to them. When the system goes wrong and harms individuals, the Home Office fails to take adequate steps to address those harms. The justice system—with its principles and procedures developed in response to more traditional forms of public administration—is left to muddle through in trying to provide some form of redress. That redress, even where best efforts are made, is often unsatisfactory.

This is the story we seek to tell in our new book, Experiments in Automating Immigration Systems, through an exploration of three automated immigration systems in the UK: a voice recognition system used to detect fraud in English language testing; an algorithm for identifying “risky” visa applications; and automated decision-making in the process for EU citizens to apply to remain in the UK after Brexit. It is, at its core, a story of risky bureaucratic experimentation that routinely exposes people, including some of the most vulnerable, to unacceptable risks of harm. For example, some of the students caught up in the English language testing scandal were detained and deported, while others had to abandon their studies and fight for years through the courts to prove their innocence. While we focus on the UK experience, this story will no doubt be increasingly familiar in many countries around the world.

It is important to remember, however, that this story is just beginning. While it would be naïve to think that the tensions in public administration can ever be wholly overcome, the government must strive to reap the benefits of automation for all of society, in a way that is sensitive to and mitigates the attendant risks of injustice. That work is, of course, best led by the government itself.

But the collective work of journalists, charities, NGOs, lawyers, researchers, and others will continue to play a crucial role in ensuring, as far as possible, that automated administration is just and fair.

March 14, 2022. Joe Tomlinson and Jack Maxwell.
Dr. Joe Tomlinson is a Senior Lecturer in Public Law at the University of York.
Jack Maxwell is a barrister at the Victorian Bar.

Digital Paternalism: A Recap of our Conversation about Australia’s Cashless Debit Card with Eve Vincent

TECHNOLOGY & HUMAN RIGHTS

Digital Paternalism: A Recap of our Conversation about Australia’s Cashless Debit Card with Eve Vincent

On November 23, 2020, the Center for Human Rights and Global Justice’s Digital Welfare State and Human Rights Project hosted the third virtual conversation in “Transformer States: A Conversation Series on Digital Government and Human Rights.” Christiaan van Veen and Victoria Adelmant interviewed Eve Vincent, senior lecturer in the Department of Anthropology at Macquarie University and author of a crucial report on the lived experiences of one of the first Cashless Debit Card trials in Ceduna, South Australia.

The Cashless Debit Card is a debit card which is currently used in parts of Australia to deliver benefit income to welfare recipients. Crucially, it is a tool of compulsory income management: the card “quarantines” 80% of a recipient’s payment, preventing this 80% from being withdrawn as cash and blocking attempted purchases of alcohol or gambling products. It is similar to, and intensifies, a previous scheme of debit card-based income management, known as the “Basics Card.” This earlier card was introduced after a 2007 report into child sexual abuse in indigenous communities in Australia’s Northern Territory, which identified alcoholism, substance abuse, and gambling as major causes of such abuse. One of the measures taken was the requirement that indigenous communities’ benefit income be received on a Basics Card which quarantined 50% of benefit payments. The Basics Card was later extended to non-indigenous welfare recipients, but it remained disproportionately targeted at indigenous communities.

Following a 2014 report by mining magnate Andrew Forrest on inequality between indigenous and non-indigenous groups in Australia, the government launched the Cashless Debit Card to gradually replace the Basics Card. The Cashless Debit Card would quarantine 80% of benefit income on the card, and the card would block spending wherever alcohol is sold or gambling takes place. Initial trials were, once again, targeted at remote indigenous areas. The communities in the first trials were presented as parasitic on the welfare state and in crisis with regard to alcohol abuse, assault, and gambling. It was argued that drastic intervention was warranted: the government should step in to take care of these communities as they were unable to look after themselves. Income management would assist in this paternalistic intervention, fostering responsibility and curbing alcoholism and gambling by blocking such purchases. Many of Eve’s research participants found these justifications offensive and infantilizing. The Cashless Debit Card is now being trialed in more populous areas with more non-indigenous people, and the narrative has shifted: justifications for imposing the card on non-indigenous people have focused more on the need to teach financial literacy and budgeting skills.

Beyond the humiliating underlying stereotypes, the Cashless Debit Card itself leaves cardholders feeling stigmatized. While the non-acceptance of Basics Cards at certain shops had led to prominent “Basics Card not accepted here” signs, the Cashless Debit Card was intended to be more subtle. It is integrated with EFTPOS technology, meaning it can theoretically be used in any shop with one of these ubiquitous card-reading devices. EFTPOS terminals in casinos or pubs are blocked, but these establishments can arrange with the government to have some discretion. A pub can arrange to allow Cashless Debit Card-holders to pay for food but not alcohol, for example, thereby not excluding them entirely. Despite this purported subtlety, individuals reported feeling anxious about using the card as the technology proved unreliable and inconsistent, accepted one day but not the next. When the card was declined, sometimes seemingly at random, this was deeply humiliating. Card-holders would have to gather their shopping and return it to the shelves under the judging gaze of others, potentially people they know.

Separately, some card-holders had to use public computers to log into their accounts to check their card balance, highlighting the reliance of such schemes on strong digital infrastructure and on individuals’ access to connected devices. But some Cashless Debit Card-holders were quite positive about the card: there is, of course, a diversity of opinions and experiences. Some found that the card’s fortnightly cycle had helped them with budgeting and thought the app on which they could check their balance was a user-friendly and effective budgeting tool.

The Cashless Debit Card scheme is run by a company named Indue, continuing decades-long trends of outsourcing welfare delivery. Many participants in Eve’s research spoke positively of their experience with Indue, finding staff on helplines to be helpful and efficient. But many objected in principle to the card being privatized and to profits being made on the basis of their poverty. The Cashless Debit Card costs AUD 10,000 per participant per year to administer: many card-holders were outraged that such an expense is outlaid to try to control how they spend their very meager income. Recently, the four biggest banks in Australia and the government-owned Australia Post have been in talks about taking over the management of the scheme. This raises an interesting parallel with South Africa, where social grants were originally paid through a private provider but, following a scandal regarding the tender process and the financial exploitation of poor grant recipients, public providers stepped in again.

As an anthropologist, Eve takes as a starting point the importance of listening to the people affected and foregrounding their lived experience, an approach that resonates with common human rights research methods. Interestingly, many Cashless Debit Card-holders used the language of human rights to express indignation about the scheme and what it represents. Reminiscent of Sally Engle Merry’s work on the ‘vernacularization’ of human rights, card-holders invoked human rights in a manner quite specific to the Aboriginal Australian context and history. Eve’s research participants often compared the Cashless Debit Card trials to the past, when the wages of indigenous peoples had been stolen and their access to money was tightly controlled. They referred to that era as the “time before rights,” before equal citizenship rights had been won in legislation. Today, they argued, now that indigenous communities have rights, this kind of government intervention in and control of communities is unacceptable. As one of Eve’s research participants put it, through the Cashless Debit Card the government has “taken away our rights.”

December 4, 2020. Victoria Adelmant, Director of the Digital Welfare State & Human Rights Project at the Center for Human Rights and Global Justice at NYU School of Law. 

A GPS Tracker on Every “Boda Boda”: A Tale of Mass Surveillance in Uganda

TECHNOLOGY & HUMAN RIGHTS

A GPS Tracker on Every “Boda Boda”: A Tale of Mass Surveillance in Uganda

The Ugandan government recently announced that GPS trackers would be placed on every vehicle in the country. This is just the latest example of the proliferation of technology-driven mass surveillance, spurred by a national security agenda and the desire to suppress political opposition.

Following the June 2021 assassination attempt on Uganda’s Transport Minister and former army commander, General Katumba Wamala, President Yoweri Museveni suggested mandatory Global Positioning System (GPS) tracking of all private and public vehicles. This includes motorcycle taxis (commonly known as boda bodas) and water vessels. Museveni also suggested collecting and storing the palm prints and DNA of every Ugandan.

Barely a month later, reports emerged that the government, through the Ministry of Security, had entered into a secretive 10-year contract with a Russian security firm to install GPS trackers in vehicles. The selection of the firm was never subjected to the procurement procedures required by Ugandan law, and a few days after this news broke, it emerged that the Russian firm was facing bankruptcy litigation. The line minister who endorsed the contract subsequently distanced himself from the deal, saying that he was merely enforcing a presidential directive. The government has confirmed that Ugandans will have to pay 20,000 UGX (approximately $6 USD) annually to the Russian firm for the installation of trackers on their vehicles. This controversial arrangement means Ugandans are paying for their own surveillance.

According to 2020 statistics from the Uganda Bureau of Statistics, a total of 38,182 motor vehicles and 102,273 motorcycles are registered in Uganda. Most of these motorcycles function as boda bodas and are a de facto mode of public transport commonly used by people of all social classes. In the capital, Kampala, boda bodas are essential because of their ability to navigate heavy traffic jams. In remote locations where public transport is inaccessible, boda bodas are the only means of transportation for most people, except the elites. While a boda boda motorcycle was allegedly used in the assassination attempt on General Katumba Wamala, those same boda bodas also function as ambulances (including bringing the General to a hospital after the attack) and serve many other essential purposes.

It should be emphasized that this latest attempt at boda boda mass surveillance is part of a broader effort by the government of Uganda to exert power and control through digital surveillance, thereby limiting the full enjoyment of human rights both offline and online. One example is the widespread use of indiscriminate drone surveillance. Another is the Cyber Crimes Unit of the Ugandan police, which, since 2014, has had overly broad powers to monitor the social media activity of Ugandans. Unwanted Witness has raised concerns about the intrusive powers of this unit, which violate Article 27 of Uganda’s 1995 Constitution, which guarantees the right to privacy.

And that is not all. In 2018, the Ugandan government contracted the Chinese firm Huawei to install CCTV cameras in all major cities and on all highways, spending over $126 million USD on these cameras and related facial recognition technology. In the absence of any judicial oversight, there are also concerns about backdoor access to this system for illegal facial recognition surveillance of potential targets, and about the system’s use to stifle opposition to the regime.

Fears that this CCTV system would be used to violate human rights and stifle dissent came true in November 2020. Following the arrest of two opposition presidential candidates, political protests erupted in Uganda, and the CCTV system was used to crack down on dissent in their aftermath. Long before these protests, the Wall Street Journal had already reported on how Huawei technicians helped the Ugandan government spy on political opponents.

This is taking place in a wider context of attacks on human rights defenders and NGOs. Under the guise of pre-empting terror threats, the state has instituted cumbersome regulations on nonprofits and granted authorities the power to monitor and interfere in their work. Last year, a number of well-known human rights groups were falsely accused of funding terrorism and had their bank accounts frozen. The latest government clampdown on NGOs resulted in the suspension of the operations of 54 organizations over allegations of non-compliance with registration laws. Uganda’s pervasive surveillance apparatus will be instrumental in these efforts to censor and silence human rights organizations, activists, and other dissenting voices.

The intrusive application of digital surveillance harms the right to privacy of Ugandans. Privacy is a fundamental right enshrined in the 1995 Constitution and in numerous international human rights treaties and other legal instruments. The right to privacy is also a central pillar of a well-functioning democracy. But in its quest to surveil its population, the Ugandan government has either downplayed or ignored these human rights violations.

What is especially problematic here is the partial privatization of government surveillance to individual corporations. There is a long and unfortunate track record in Uganda of private corporations evading all human rights accountability for their involvement in surveillance. In 2019, for example, Unwanted Witness wrote a report that faulted a transport hailing app—SafeBoda—for sharing customers’ data with third parties without their consent. With the planned GPS tracking, Ugandan boda boda users will have their privacy eroded further, with the help of the Russian security firm. Driven by a national security agenda and the desire to control and suppress any opposition to the long-running Museveni presidency, digital surveillance is proliferating as Ugandans’ rights to privacy, to freedom of expression, and to freedom of assembly are harmed.

October 13, 2021. Dorothy Mukasa is the Chief Executive Officer of Unwanted Witness, a leading digital rights organization in Uganda.