How do Facebook and Google prevent terrorism?
Facebook and Google are under increasing pressure to prevent terrorists from accessing and using their platforms. In response, they are developing software that uses artificial intelligence to remove unwanted content. Researchers from Leiden University explain for the NOS how this works, and whether it is actually desirable.
According to Roy de Kleijn, this software will mostly be used for relatively simple tasks, such as using photo and facial recognition to delete undesirable pictures. Jelle van Buuren and Quirine Eijkman add that it is a good thing that many cases are still handled manually by humans, rather than everything being left to the software.
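For illustration, much automated image removal rests on matching uploads against a shared database of hashes of previously flagged images. The sketch below shows the idea in simplified form; it is not the platforms' actual implementation (real systems use perceptual hashes such as PhotoDNA that survive re-encoding, whereas a plain cryptographic hash only catches byte-identical copies), and the blocklist contents are invented:

```python
import hashlib

def image_hash(data: bytes) -> str:
    """Hash the raw image bytes (exact-match only, for illustration)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical blocklist: hashes of images already flagged as extremist.
blocklist = {image_hash(b"known-extremist-image-bytes")}

def should_remove(upload: bytes) -> bool:
    """Flag an upload if its hash appears in the shared blocklist."""
    return image_hash(upload) in blocklist

print(should_remove(b"known-extremist-image-bytes"))  # True: known image
print(should_remove(b"holiday-photo-bytes"))          # False: unknown image
```

The appeal of this design is that platforms can share blocklists without sharing the images themselves; its limitation, as the researchers note, is that someone still has to decide what belongs on the list.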
However, major questions remain. For both Eijkman and Van Buuren it is a very significant step to leave these kinds of decisions to commercial, private enterprises, rather than having them tested by an independent judge: 'Content by IS and Al-Qaida is very clearly related to terrorism, but what to do in cases where this link is not as clear-cut, such as the Syrian rebels?'
Would you like to read more? The full article (in Dutch) can be found on the NOS website.