BIAS: Mitigating Diversity Biases in the Labor Market
The project investigates the use of Artificial Intelligence in the labor market and how AI-based systems can reproduce biases in hiring and promotion processes based on personal characteristics.
- 2022 - 2025
- Eduard Fosch-Villaronga
- European Union's Horizon Europe programme, grant agreement No. 101070468
The BIAS project investigates the use of Artificial Intelligence in the labor market. In particular, the project explores how AI-based systems can reproduce biases in hiring and promotion processes based on personal characteristics. In an employment context, such systems may, for example, analyze text written by an employee or a recruitment candidate to help management decide whether to invite a candidate for an interview, to support training and employee engagement, or to monitor for infractions that could lead to disciplinary proceedings.
The Horizon Europe BIAS project identifies and mitigates biases in AI applications used in a Human Resources Management (HRM) context. In particular, the project:
- Develops the Debiaser, a proof-of-concept for innovative technology based on Natural Language Processing and Case-Based Reasoning for an HR recruitment use case. The system will contain two modules: one for bias detection and another for bias mitigation.
- Conducts extensive ethnographic fieldwork on the lived experiences of employees, Human Resource managers, and technology developers, and channels the findings into improving these algorithms.
- Provides substantial training for HR managers and technology developers regarding AI's responsible development and implementation.
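To illustrate what the bias-detection module of a system like the Debiaser might do, here is a minimal, hypothetical sketch that flags gender-coded wording in a job posting. The word lists and scoring below are illustrative assumptions for this page only; they are not the BIAS project's actual Debiaser, which is based on Natural Language Processing and Case-Based Reasoning.

```python
# Hypothetical sketch of a bias-detection step for job-ad text.
# The term lists are illustrative examples inspired by research on
# gender-coded language in job advertisements; a real system would
# rely on far richer linguistic resources and context-aware models.

import re

MASCULINE_CODED = {"competitive", "dominant", "assertive", "ninja", "rockstar"}
FEMININE_CODED = {"supportive", "collaborative", "nurturing", "empathetic"}

def detect_gender_coded_terms(text: str) -> dict:
    """Flag gender-coded words in a job posting and summarise the balance."""
    tokens = re.findall(r"[a-z]+", text.lower())
    masculine = [t for t in tokens if t in MASCULINE_CODED]
    feminine = [t for t in tokens if t in FEMININE_CODED]
    return {
        "masculine_terms": masculine,
        "feminine_terms": feminine,
        "flagged": bool(masculine or feminine),
    }

if __name__ == "__main__":
    ad = "We seek a competitive, assertive rockstar to join our team."
    print(detect_gender_coded_terms(ad))
```

A mitigation module would then go a step further, for instance by suggesting neutral alternatives for each flagged term.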
The project is coordinated by Dr. Roger A. Søraa from the Department of Interdisciplinary Studies of Culture at the Humanities Faculty of the Norwegian University of Science and Technology, supported by a consortium from across Europe that includes Leiden University, Bern University of Applied Sciences, University of Iceland, Smart Venice, LOBA, CrowdHelix, Digiotouch, and FARPLAS.
Dr. Eduard Fosch-Villaronga is the project leader at Leiden University. eLaw - Center for Law and Digital Technologies will assess the trustworthiness of the AI system developed by the consortium and survey workers' attitudes towards diversity biases in labor automation across Europe.
You can find more information about our project on our website https://biasproject.eu/.
This project contributes to advancing knowledge in the field of Diversity & AI, a research line that Dr. Eduard Fosch-Villaronga established at eLaw - Center for Law and Digital Technologies. Within this topic, he also chairs the Gendering Algorithms initiative at Leiden University, a project exploring the functioning, effects, and governance of AI-based gender classification systems.
This project has received funding from the European Union's Horizon Europe programme under the open call HORIZON-CL4-2021-HUMAN-01-24 - Tackling gender, race and other biases in AI (RIA) (grant agreement No. 101070468).