Universiteit Leiden


Lecture

False consensus biases AI against vulnerable stakeholders

Date
Thursday 19 March 2026
Time
Location
Kamerlingh Onnes Building
Steenschuur 25
2311 ES Leiden
Room
A051 - Grotius room

About the speaker

Mengchen Dong is a behavioral scientist and Research Scientist at the Center for Humans and Machines of the Max Planck Society. She completed her PhD in psychology and studies AI ethics and governance across interpersonal, organizational, and societal contexts. Her research promotes a nuanced understanding of human-AI interaction, emphasizing the influence of personal circumstances and sociocultural backgrounds.

Abstract of the lecture

The deployment of AI systems for welfare benefit allocation allows for accelerated decision-making and faster provision of critical help, but has already led to an increase in unfair benefit denials and false fraud accusations. Collecting data from the US and UK (N = 3,249), we explore the public acceptability of such speed-accuracy trade-offs in populations of claimants and non-claimants. We observe a general willingness to trade off speed gains for modest accuracy losses, but this aggregate view masks notable divergences among subgroups. Welfare claimants are less willing to compromise on accuracy, raising concerns that calibrating policy on aggregate data alone could produce policies misaligned with stakeholder preferences. Our study further uncovers asymmetric insights between claimants and non-claimants. Non-claimants consistently overestimate claimants’ willingness to accept speed-accuracy trade-offs, even when financially incentivized for accuracy. This suggests that policy decisions influenced by the dominant voice of non-claimants, however well-intentioned, may neglect the actual preferences of those directly affected by welfare AI systems. Our findings underline the need for stakeholder engagement and transparent communication in the design and deployment of these systems, particularly in contexts marked by power imbalance.

Suggested readings

  • Dong, M., Bonnefon, J. F., & Rahwan, I. (2025). Heterogeneous preferences and asymmetric insights for AI use among welfare claimants and non-claimants. Nature Communications, 16(1), 6973.
  • Misuraca, G., & Van Noordt, C. (2020). AI Watch - Artificial Intelligence in public services (EUR 30255 EN). Publications Office of the European Union, Luxembourg. ISBN 978-92-76-19540-5, doi:10.2760/039619, JRC120399.
  • Bryan, C. J., Tipton, E., & Yeager, D. S. (2021). Behavioural science is unlikely to change the world without a heterogeneity revolution. Nature Human Behaviour, 5, 980–989.

More information

Lecture series: Humanity in the Automated State

Registration will open at a later date.
