
Fall of Misinformation Series: Mark Leiser

Misinformation spreads easily and fast. It gets presented as news, while actual news gets dismissed as fake. Conflicting streams of information allow all sides to cherry-pick whatever is most comfortable, boosting degrees of confidence and confusing the deliberation of politicians and voters alike. Conspiracy theories are attracting growing numbers of adamant followers who see misinformation where others see truth. From COVID-19 to QAnon, misinformation is on all our minds. What exactly is happening and why? Have we entered a post-truth era? What can we do as a university, and are we doing enough? The Young Academy Leiden approached some of the researchers currently working on the topic.

Dr. Mark Leiser is an Assistant Professor at eLaw, the Center for Law and Digital Technologies at Leiden University. He has been working on the regulation of deceptive content, focusing on fake news, disinformation, and online manipulation in the European Union.

There is widespread worry about misinformation and fake news. How do you understand the phenomenon, and what factors are mainly responsible for it?

We now operate in what is referred to as Web 2.0, in which we generate the content that appears on the web rather than ‘just’ read what is served up by publishers. Powerful platforms like Google, Facebook, and YouTube allow everyone to upload their own content. These actors are incredibly powerful at shaping the world’s news and information landscape, as more and more people move away from traditional media like the BBC or CNN that used to deliver news content into our living rooms. That is no longer the case: most people now get their news from social media.

Other factors are societal. For example, if you are economically harmed by a lockdown, it is easy to go online and find others who feel the same way, reinforcing your belief that the government has failed in its response to a public health emergency, or that its response was disproportionate. On a platform like YouTube or Facebook, you will likely find others who feel the same. The Internet’s connectivity is actually starting to look like a burden.

Finally, throughout the 20th century we built our economic models of human behavior on concepts like rationality. We believed in concepts like “the wisdom of the crowd” or “the marketplace of ideas”. Now we realize that crowds don't necessarily come up with the best ideas, and that the marketplace of ideas is more about the number of people who like something (populism) than about the quality of those ideas.

Putting these factors together: users upload content that they believe to be true, and information platforms have economic incentives to facilitate user-generated content, regardless of whether the content creator is malicious and deceptive. Their algorithms promote content through likes and shares (quantity) rather than by its contribution to discussion, debate, and democracy (quality), as the sketch below illustrates. The result is that when people go online to find information, which they do more and more, they find this wealth of inaccurate information, amplified by platforms like YouTube and Facebook.
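To make the quantity-over-quality point concrete, here is a minimal toy sketch in Python. It is not any platform's actual ranking algorithm; the post data, weights, and scoring function are all invented for illustration. It simply shows how a feed ranked purely on engagement signals will place a widely shared but inaccurate post above a careful, accurate one, because accuracy never enters the score.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int       # engagement signals the ranker actually sees
    shares: int
    accuracy: float  # 0.0-1.0; deliberately ignored by the ranker below

def engagement_score(post: Post) -> float:
    """Toy engagement-only ranking: shares weighted above likes.
    The weights are invented; the point is that accuracy plays no role."""
    return post.likes + 3.0 * post.shares

feed = [
    Post("Careful fact-check of a viral claim", likes=40, shares=5, accuracy=0.95),
    Post("Outrage-bait conspiracy post", likes=900, shares=400, accuracy=0.05),
]

# The inaccurate but heavily shared post floats to the top of the feed.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):7.1f}  {post.title}")
```

A quality-aware ranker would have to weigh something like the accuracy field as well; the point of the sketch is that nothing in the engagement-only score rewards it.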


What is the current state of legal responses to the problem? Do you think that regulatory pushback can rely on a mix of existing legal frameworks, or is something else needed?

Much of the early work in response to fake news and disinformation was actually done by the likes of Google, Facebook, and YouTube themselves. Whether they did it to avoid regulatory intervention, or the imposition of fines, is of course a different question. Generally speaking, users have a right to post deceptive, but not illegal, content under Article 10 of the ECHR and Article 11 of the Charter; Facebook will only really remove content that goes against its terms and conditions or when it has received notification of its illegality. Even now, with the new Facebook Oversight Board, users have the right to challenge any removals implemented by the social media giant.

We have three major new proposals in the European Union: the Digital Services Act, the Digital Markets Act, and the proposed Artificial Intelligence Act. These will give a little more pushback, regulating the power that large information platforms like Facebook and YouTube have accumulated over time. There are also provisions that specifically target the dissemination of disinformation.

Regardless of these provisions, we do not yet have an all-encompassing law for false information. Instead, we have a law regulating specific aspects of data-driven advertising; laws that protect consumers from deceptive marketing, misleading advertising, and snake-oil products and services (e.g. fake COVID remedies); a law that gives users rights when an algorithm makes a decision that affects them, such as microtargeting; a growing body of work calling for better algorithmic transparency; and a proposed law on artificial intelligence that may, in certain contexts, require that people be told when they are seeing deep fakes. We have a myriad of regulatory responses, but the real question is whether this is a cohesive enough body of law, obligations, and rights to solve the problem. There, I think, it leaves a lot to be desired. We need a holistic legal framework for the regulation of content.

There is also a glaring hole in our legal frameworks when it comes to the proper regulation of political advertising. On the one hand, political parties are usually well-regulated entities, but Facebook offers issue-specific advertising, tailored advertising, and custom-audience features that make it very easy to get political ads from special interest groups into a user’s timeline without the corrective effect of the ‘marketplace of ideas’. That is a minefield, because it conflicts directly with the protections afforded to political speech. The human rights framework makes it very challenging to regulate political speech, which also receives the highest level of protection from our European courts.

Which aspects of the spread of misinformation do you think are particularly ill-understood and call for more research? 

We need a better empirical understanding of which of the measures that could be put in place within platforms’ user interfaces or system architecture are most effective.

Let me give an illustration. Say the government passes a law telling everybody they have to wear a seatbelt. It also runs an advertising and education campaign about the dangers of not wearing your seatbelt, with advertisements that graphically display what happens when you don't. None of those measures is nearly as effective as playing a high-pitched noise inside your car until you put on your seatbelt. It is neither social norms nor the law that makes us put our seatbelts on, but that really annoying noise. In this light, what’s missing right now is a better understanding of which technological measure is most effective at stopping people from posting, sharing, and believing fake news.

Any other challenges that you face in working on the topic? 

Yes. One thing I find hard about working on this topic is not letting my research area affect my friendships with people who post, share, and believe obviously fake news. When I see obvious misinformation online, on places like Facebook or Instagram, I have to bite my tongue and not point out to people that this is fake news or misinformation. That's a challenge. You can't get into an argument with everyone who posts fake news on places like Facebook; you just have to let them fall on their own sword. Letting it slide is actually hard, and it's a limitation I find really troubling sometimes. On occasion I’ve typed a comment out, but only when I felt that the comment’s audience was at risk of harm. An educated response to dangerous disinformation usually reduces its virality.

Another challenge is keeping up with all the changes going on in my field at any given time. The speed with which things develop is one of the things I love about advanced digital technologies, but it means it's very difficult to keep up. Sometimes I feel that if you take a week, or even a couple of days, off, there can be a whole wealth of new cases and regulatory measures that have to be taken into consideration.

When working on misinformation and fake news, do you ever run into the boundaries of your own field? What sort of interdisciplinary lines of research do you think are especially needed?

We need researchers who can access Facebook's data, study how deceptive content spreads, and report on it in a very balanced way. The question isn't whether somebody encounters deceptive content; it is whether they access it, believe it, and share it. We need ways to make noise, introduce friction, and get people to slow down when they go online for their news. We need to work on improving users’ competencies when interacting with journalistic content online: helping people determine a news story’s accuracy, its partisanship, its epistemic quality, and the credibility of its sources. That way, people will be better equipped to deal with fake news and disinformation when they come across it. And we need to work on ways to take these insights and turn them into effective regulation.

Lawyers and policymakers also need to spend more time talking to experts in cognitive and social psychology, to understand how information spreads across networks, to map out where deceptive content goes, and to identify the precise harms it causes within society. Slowing down the spread of deceptive content can protect the information ecosystem while preserving rights to free expression. We desperately need more work on effective platform regulation that preserves fundamental rights and empowers users with the competencies they need to judge the epistemic quality of the information in their timelines.

I’d love to have more dialogue with people about these things. If you are nearby, pop me an email and let's have a coffee and chat about fake news!
