
Towards affective computing that works for everyone

Tessa Verhoef from the Leiden Institute of Advanced Computer Science and Eduard Fosch-Villaronga from eLaw - Center for Law and Digital Technologies have written an article on how affective computing should be inclusive, diverse, and work for everyone.

Diversity and inclusion are critical to the responsible development of artificial intelligence (AI) technologies, including affective computing. Affective computing, which focuses on recognising, interpreting, and responding to human emotions, could transform domains such as healthcare, education, and human-machine interaction. Capturing subjective states through technical means is challenging, however, and errors occur, as seen with unreliable lie detectors or gender classifiers that misgender users. When such inferences feed into downstream decision-making, the consequences for people can be disastrous, and their impact varies with the context of the application: flagging innocent people as potential criminals in border control, for example, or harming vulnerable groups in mental health care.

Following this line of thought, Tessa Verhoef from the Creative Intelligence Lab at Leiden University and Eduard Fosch-Villaronga from eLaw - Center for Law and Digital Technologies have written an article highlighting that systems trained on the most widely used datasets may not work equally well for everyone. Because these datasets derive from limited samples that do not fully represent societal diversity, the resulting systems are likely to exhibit racial biases, biases against users with (mental) health problems, and age biases.

Eduard Fosch-Villaronga and Tessa Verhoef

Tessa and Eduard presented the paper entitled 'Towards affective computing that works for everyone' at the Affective Computing + Intelligent Interaction conference (ACII '23), held at the Massachusetts Institute of Technology (MIT) Media Lab. This annual conference of the Association for the Advancement of Affective Computing (AAAC) is the premier international forum for research on affective and multimodal human-machine interaction and systems.

In their paper, they argue that missing diversity, equity, and inclusion elements in affective computing datasets directly affect the accuracy and fairness of emotion recognition algorithms across different groups. A literature review conducted by the researchers shows how affective computing systems may work differently for different groups, for instance because mental health conditions affect facial expressions and speech, or because facial appearance and health change with age. They then analysed existing affective computing datasets and found a disconcerting lack of diversity in race, sex/gender, age, and (mental) health representation. Emphasising the need for more inclusive sampling strategies and standardised documentation of demographic factors in datasets, the paper offers recommendations and calls for greater attention to inclusivity and to the societal consequences of affective computing research, in order to promote ethical and accurate outcomes in this emerging field.
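To illustrate the kind of dataset audit the paper calls for, the sketch below tallies how often demographic attributes are actually documented in a dataset's metadata. It is a minimal illustration only, not the authors' actual analysis pipeline: the file name dataset_metadata.csv and the column names race, gender, and age_group are hypothetical assumptions.

    # Hypothetical sketch: audit how well demographic attributes are
    # documented in a dataset's metadata file. File and column names
    # below are illustrative assumptions.
    from collections import Counter
    import csv

    def demographic_coverage(rows, attribute):
        """Return the share of each reported value for one attribute,
        counting missing or empty entries explicitly as 'undocumented'."""
        counts = Counter(row.get(attribute) or "undocumented" for row in rows)
        total = sum(counts.values()) or 1  # guard against an empty file
        return {value: count / total for value, count in counts.items()}

    with open("dataset_metadata.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    for attribute in ("race", "gender", "age_group"):
        print(attribute, demographic_coverage(rows, attribute))

Reporting such coverage statistics alongside a published dataset would make the documentation gaps the paper identifies visible at a glance.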

Access to the paper

You can access the paper by following this link.

Acknowledgement

The authors thank Joost Batenburg for his support through the SAILS Programme, a Leiden University-wide AI initiative. They also thank the Gendering Algorithms project, which received funding from the Global Transformations and Governance Challenges Initiative at Leiden University. This paper was also partly funded by the Safe and Sound project, which received funding from the European Union's Horizon-ERC programme (Grant Agreement No. 101076929).
