LUCIR Lecture: Technological Change and Human Rights
- Tuesday 19 April 2022
- 2511 DP The Hague, Room 2.02 (or via livestream)
Understanding the uses and abuses of machine learning in democratic systems
Algorithms are remaking our perceptions of reality, for better and worse. Nowhere is this more consequential than in the realm of human rights and political violence. Internet access, camera phones, social media platforms, GPS location data, and high-resolution satellite photography have brought into view new, more specific conceptualizations of human rights across the democratic world. Digital evidence is itself produced through the computational processing of signals, and in turn feeds downstream machine learning algorithms used to recognize patterns of violations and protections across the globe. In particular, hybrid human-in-the-loop machine learning decision systems promise an enhanced capability to understand, forecast, and potentially mitigate violence and abuses. The widespread availability of digital video evidence of police brutality targeted at minorities in the United States is one high-profile example.
However, there is a dark side to this cascade of human-computer interaction. The same computational tools that researchers and human rights non-governmental organizations use to study protections and violations are also crucial instruments of surveillance, repression, and control in autocratic states: not only ubiquitous cameras and GPS tracking, but also image, text, and network analysis tools. Moreover, computationally accelerated rights abuses are not limited to autocracies. Dataveillance, the use of machine learning on digital trace data to predict on- and offline behavior, is a particular accelerant of polarization, distrust of experts, and extremism. Machine learning algorithms, both intentionally (through ad networks and A/B testing) and unintentionally (through optimizing for attention), have weaponized diversity and values in important contexts.
Our research team has used machine learning to track the growing role of algorithms and computation in producing human rights abuses in autocracies and democracies around the world. Even so, our understanding of dataveillance remains limited: common misconceptions about computing and society leave technical blind spots for social scientists and policy-makers, and social and policy blind spots for technologists. While the GDPR and other emerging privacy regulations are one step toward limiting the danger of dataveillance in democracies, new human rights threats, from deepfakes to the metaverse, are on the horizon and will require novel frameworks to simultaneously measure and protect rights.
The lecture and discussion will be moderated by Professor Daniel Thomas, Institute of Political Science.
This event is co-sponsored by a KNAW Early Career Partnership.