Universiteit Leiden


Why a drag queen is given less exposure than a white supremacist

Technology is developing at a mind-blowing rate, not least in the field of artificial intelligence. For minority groups such as the LGBTQ+ community, this could be dangerous, writes researcher Eduard Fosch Villaronga in a letter to the editor of Nature Machine Intelligence.

Why did you write the letter?

‘Artificial intelligence is usually developed and implemented with the best of intentions. Take facial recognition software, for instance, which makes it easier to identify, recognize, and verify someone from a photo. Although such software may improve border control, these programs can unintentionally discriminate against minority groups. Facial recognition is more accurate for white men than for darker-skinned women, which can be really problematic if decisions are based on its output.’

And does something similar happen with the LGBTQ+ community?

‘Platforms such as Facebook and Twitter are investing in artificial intelligence that automatically blocks or removes “toxic” content, such as insults or strong language, from their channels. This can inadvertently affect the LGBTQ+ community. Among drag queens, for instance, it is relatively normal to call one another a “bitch” as a term of endearment. An algorithm, however, may not fully understand the context and may flag the word as profanity. The content moderation tool may therefore rate the post as toxic, even more toxic than the comments of well-known white supremacists.’
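As a rough, hypothetical illustration of the problem described above (not any platform’s actual system), the sketch below shows how a context-blind moderation filter treats reclaimed in-group language. The word list, weights, threshold, and function names are all invented for this example.

```python
# Toy, hypothetical sketch of a context-blind moderation filter.
# The word list, weights, threshold and function names are invented
# for illustration; real platforms use far more complex models.

TOXIC_WEIGHTS = {
    "bitch": 0.8,      # reclaimed as a term of endearment among drag queens
    "inferior": 0.7,
    "hate": 0.6,
}

def toxicity_score(text: str) -> float:
    """Return the highest weight of any flagged word, ignoring context or intent."""
    words = text.lower().replace(",", " ").replace("!", " ").split()
    return max((TOXIC_WEIGHTS.get(word, 0.0) for word in words), default=0.0)

def moderate(text: str, threshold: float = 0.5) -> str:
    """Block any text whose score reaches the threshold."""
    return "BLOCKED" if toxicity_score(text) >= threshold else "allowed"

# Affectionate in-group banter is blocked...
print(moderate("Yes bitch, you look amazing tonight!"))        # BLOCKED
# ...while hostile speech that avoids the listed words slips through.
print(moderate("Those people do not belong in this country"))  # allowed
```

Because the filter only looks at surface words, the affectionate banter scores higher than the openly hostile comment, which is the pattern the letter warns about.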

Why is this a problem?

‘The tool makes it possible to block toxic content, which means drag queens no longer have a voice on the internet. This significantly curtails this minority group’s freedom of speech, and you could even say their freedom to be who they are, because they cannot express themselves online as freely as they would offline. Although it is not yet apparent whether and how other technological developments will affect the LGBTQ+ community, it is clear that how AI affects this community has been largely underexplored.’

Why do some algorithms discriminate? You would think that, unlike human beings, they aren’t fallible.

‘The problem is that algorithms are programmed by people, and people have certain beliefs and opinions about how the world should be. Research has regularly shown that programmers are not sufficiently aware of the impact of their work. It is difficult enough to make a robot walk, so understandably hardly any programmer stops to wonder how these developments affect us. One reason may be the lack of interdisciplinary education.’

How could programmers prevent this kind of discrimination in the future?

‘Ideally, you would consider the implications of an innovation before rolling it out to the general population, thinking through the consequences your actions may have for society at different levels. That’s why I sent this letter to Nature Machine Intelligence together with Adam Poulsen and Roger Søraa: to stress that good intentions are not enough to develop technology responsibly. An important step towards bridging this gap and ensuring that technology serves everyone equally would be to make development teams more diverse and inclusive in terms of background and perspective, so that the voices of people of different genders, ethnic backgrounds, religions, and sexual orientations are heard, valued, and considered.’
