

Bram Klievink: 'The government’s biggest AI challenge is that no system is ever neutral'

Using artificial intelligence is more complicated for the government than for companies. Bram Klievink, Professor of Public Administration, aims to identify the problems and find solutions.

‘If half of the books that Amazon recommends to you aren’t interesting, it’s not really an issue. But if the government makes mistakes in just one-tenth of a percent of cases, this can be very serious; for example, if it’s trying to identify fraudsters,’ explains Bram Klievink. A company, moreover, only needs to test which algorithm yields the most profit, within the limits of the law. ‘A government has a much more complex societal agenda.’

Bram Klievink, Professor of Public Administration with a special focus on Digitisation and Public Policy.

A striking example: in early 2020 a court declared that the Dutch government’s System Risk Indicator (SyRI) was unlawful. This instrument had been in use since 2014 and its purpose was to prevent fraudulent benefit claims by means of data linking and pattern recognition. The system created risk profiles on the basis of data about fines, compliance and education, among other factors. Although your data remained encrypted and anonymous until you emerged as a potential fraudster, the court ruled that the violation of the right to a private life was too great.

Policy with social media data

The SyRI debacle shows that although the government has considerable scope, it is more restricted than, say, Facebook. Simon Vydra, a PhD candidate supervised by Klievink, is researching whether social media data are useful for analysing the effects of policies targeting young parents. There are many technical possibilities: ‘You can do sentiment analysis, for example, and try to assess the level of support for policies.’
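As a rough illustration of what such a sentiment analysis can look like (a minimal sketch, not the project’s actual pipeline), the snippet below scores two invented tweets with NLTK’s off-the-shelf VADER lexicon. The example texts and the 0.05 neutrality band are assumptions, not details from the research.

```python
# Minimal sentiment sketch: gauge support for a policy from tweet texts.
# Illustrative only; the tweets and the cut-offs below are invented.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

tweets = [
    "The new parental leave scheme is a huge relief for our family.",
    "Applying for the childcare benefit is still a bureaucratic nightmare.",
]

for text in tweets:
    score = sia.polarity_scores(text)["compound"]  # ranges from -1 to +1
    label = "support" if score > 0.05 else "opposition" if score < -0.05 else "neutral"
    print(f"{label:>10}  {score:+.2f}  {text}")
```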

'Minor choices and trade-offs can have unexpected consequences'

Klievink: ‘When you use a technique like that, you always make choices. You have to set a lot of parameters. If your analysis system is based on Twitter data, for example, you have to set the point at which your system classifies an account as a human or a bot. Is it ten tweets a day or a hundred? And how many conversation topics does your topic model distinguish? Will it be five broad, general topics, or a finer-grained model with twenty narrower topics? Even minor choices and trade-offs can have unexpected or unintended consequences for how the outcomes will be used. These choices are never neutral, but we can’t avoid making them.’
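To make this concrete, here is a minimal sketch that turns both of the ‘knobs’ Klievink mentions: a tweets-per-day bot threshold and the number of topics a topic model distinguishes. The account names, tweet texts and threshold values are invented for illustration; they are not data or settings from Vydra’s research.

```python
# Two parameter choices that shape the outcome of a social media analysis.
# Illustrative only; all data and thresholds below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Knob 1: when does an account count as a bot? A cut-off of 10 tweets a day
# keeps a very different dataset than a cut-off of 100.
tweets_per_day = {"parent_aa": 4, "newsfeed_bot": 240, "daycare_pro": 35}
BOT_THRESHOLD = 10  # try 100 and see which accounts survive
humans = [a for a, rate in tweets_per_day.items() if rate < BOT_THRESHOLD]
print("kept as human:", humans)

# Knob 2: how many topics should the model distinguish? A coarse model lumps
# complaints together; a finer-grained one splits them into narrower themes.
docs = [
    "childcare benefit application rejected again",
    "parental leave gives young families breathing room",
    "benefit office delays hurt working parents",
    "leave scheme paperwork is confusing",
]
vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
terms = vec.get_feature_names_out()

for n_topics in (2, 4):  # the article's 5-vs-20 choice, scaled to a toy corpus
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(counts)
    for i, weights in enumerate(lda.components_):
        top = [terms[j] for j in weights.argsort()[-3:][::-1]]
        print(f"{n_topics}-topic model, topic {i}: {top}")
```

Neither run is ‘the right one’: each threshold and each topic count produces a different picture of the same conversation, which is exactly the non-neutrality Klievink describes.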

Decision-makers and technicians

Dilemmas relating to these choices will often stay hidden, because policy-makers and the technicians who create the systems don’t speak each other’s language. ‘The AI specialist often has technical and methodological expertise, but lacks the domain expertise needed to foresee the consequences of the choices that are made. Conversely, the policy-maker often doesn’t know which knobs the technician can turn, exactly what their settings are, and what this means for the outcomes.’ Klievink therefore concludes that the collaboration between people from diverse disciplines working on public AI projects can never be close enough.

Other Leiden University research on AI in the public sector

This interview is part of a series in which researchers from all disciplines within Leiden University talk about their work in the field of AI. The mathematician and computer scientist Joost Batenburg says that in the public sector the emphasis often lies too much on what is technically possible and too little on the interaction with people. ‘Systems are becoming more impersonal. I want AI not only to make the government’s work more efficient, but also to help citizens.’

Professor of Law and Data Science Bart Custers shares this motivation. He tries to bring together experts in technology, ethics and law. ‘That way we can create “privacy by design”: build in all kinds of guarantees in advance, to avoid problems developing.’

Text: Rianne Lindhout
Photo: Patricia Nauta
