Universiteit Leiden


ChatGPT has left-wing bias in Stemwijzer voting advice application

The AI chatbot ChatGPT has a clear left-liberal bias when filling in the Stemwijzer voting advice application. This was discovered by master's student Merel van den Broek during an assignment for the Machine Learning for Natural Language Processing course.

The linguistics student came up with the idea of having ChatGPT fill in the Stemwijzer after hearing that the chatbot shows a left-leaning bias in English. ‘I wondered whether it did the same in Dutch.’

According to Van den Broek, the Dutch political arena is ideally suited to investigating the political preferences of this kind of chatbot. ‘They don’t have an equivalent to Stemwijzer in the US because there are only two major parties there. In the Netherlands, we have the non-partisan Stemwijzer and there are many different parties across the political spectrum, so you can measure many more nuances.’

SP, Denk and D66

ChatGPT’s political preferences turned out to be mainly for SP, Denk and D66. The parties the chatbot least agreed with were all on the right: JA21, BBB and FVD. ‘The result is almost a perfect ranking from left to right. It’s quite striking.’ This left-wing bias is mainly due to the texts used to train the language model, says Van den Broek. ‘The texts used for this included scientific ones, and academics are generally more likely to be left-wing.’

Professor Stephan Raaijmakers, a lecturer on the Machine Learning for Natural Language Processing course, also thinks the left-wing bias is due in part to the data used to train the language model. ‘Language models like these reflect the biases in the texts they were trained on.’

Which texts were included is not yet entirely clear. ‘We don’t have a clear picture of that. They’ve made a kind of snapshot of the web, but have also added sources such as GitHub (an online platform for software development, ed.). We are gradually discovering more and more data sources that appear to have been used as well. Certain texts may well be politically biased. And the model has been further tuned with human feedback, which is another way bias can creep in.’

Jokes about Trump

OpenAI, the developer of ChatGPT, is still adjusting the language model, Raaijmakers explains. ‘They block jokes about Trump, for example. Nor can you ask how to make a bomb. So they use ethical and legal interventions to gain a bit more control over the language model.’ These interventions might also contribute to the leftist bias, although Raaijmakers says we cannot say for sure.

Caution

Because of this political bias, Raaijmakers urges caution when using ChatGPT as a ‘personal political advisor’. ‘Of course, this is still only a small experiment, but the left-liberal bias is clearly visible. If you ask questions about politics, you won’t get a neutral answer.’

The version of ChatGPT used in the experiment was trained on texts from 2021 and earlier, so Van den Broek used the 2021 Stemwijzer. To check the robustness of the result, they administered a second questionnaire, the Political Orientation Test. This, too, showed a left-liberal bias.
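For readers curious how such an experiment might be scripted, here is a minimal sketch using the current openai Python SDK: present each statement to the chat model, record an agree/disagree answer, and rank parties by how often their published position matches. The statements, party positions, and model name below are illustrative placeholders, not the actual Stemwijzer data or the exact model version used in the study.

```python
# Sketch of the experiment's approach: ask a chat model to agree or disagree
# with each voting-guide statement, then rank parties by how often their
# published position matches the model's answer.
# The statements and party positions are illustrative placeholders,
# NOT the real 2021 Stemwijzer data.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

statements = [
    "The government should raise the minimum wage.",
    "The Netherlands should leave the European Union.",
]

# Hypothetical party positions per statement: True = agree, False = disagree.
party_positions = {
    "SP":   [True, False],
    "D66":  [True, False],
    "JA21": [False, True],
}

def ask_model(statement: str) -> bool:
    """Ask the model for a one-word agree/disagree verdict on a statement."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; the study's exact model is unspecified
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: agree or disagree."},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content.strip().lower().startswith("agree")

model_answers = [ask_model(s) for s in statements]

# Score each party by the fraction of statements where it matches the model,
# then sort from most to least aligned.
ranking = sorted(
    ((sum(m == p for m, p in zip(model_answers, positions)) / len(statements),
      party)
     for party, positions in party_positions.items()),
    reverse=True,
)

for score, party in ranking:
    print(f"{party}: {score:.0%} agreement with the model")
```

With the real statements and party positions filled in, the resulting ranking is the kind of left-to-right ordering the article describes.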
 

Text: Tom Janssen
Photo: Unsplash
