Conference | Mini symposium
SAILS event: Showcasing AI Research @ Humanities
- Friday 17 February 2023
- 2311 GJ Leiden
- Telders Auditorium
This exciting and timely SAILS event, in collaboration with Digital Humanities, will bring you up to speed on current AI-related research at the Faculty of Humanities. Our scholars from the fields of history, journalism, philosophy, arts and linguistics will present their research, which either relies on AI as a powerful research instrument or studies AI as a phenomenon in its own right. The programme will feature a number of full talks as well as several shorter talks, with plenty of opportunities for your participation and networking. Join us for drinks afterwards in the Faculty Club! (Disclaimer: This text was written by a human.)
SAILS (Society, Artificial Intelligence and Life Sciences) is a university-wide network of AI researchers from the seven faculties at Leiden University. It is aimed at facilitating collaboration across disciplines on the use of Artificial Intelligence (AI). A key characteristic of AI research is its interdisciplinarity. By organising this event, we aim to:
- showcase the wide variety of AI research @ the Faculty of Humanities
- facilitate connections between academics who use AI in their research, both within and beyond the faculty
- enable cross-pollination of AI research within and across faculties of the university
For more information on this event, please contact Matthijs Westera.
Defining Responsible AI in Journalism: Reporters’ Perceptions of Automated Decision-Making and Algorithmic Bias
Historical Research in the Age of AI, or: Should Historians Become Data Scientists?
[short talk] Looking Behind Closed Doors: Understanding 17th-Century Domestic Life with AI
[short talk] Bridging the sign language technology gap: pitfalls, prerequisites and opportunities
[short talk] AI and Digital Humanities
[short talk] Tracing the History of Technocracy in Historical Parliamentary Debates
[short talk] Setting up Forensic Text Analysis
Willemijn Heeren (University Lecturer), Meike de Boer (PhD candidate @ Leiden University Centre for Linguistics)
16:30 | Pragmatics for Explainable AI
Daniel Kostić (Postdoctoral researcher Human AI @ Institute for Philosophy)
16:55 | Replacing Irreplaceability: Algorithmic Profiling, AI, and the Fear of Fragmentation
Ilios Willemars (University Lecturer @ Leiden University Centre for the Arts in Society)
This mini symposium is part of a series; each month an individual faculty participating in SAILS will organise a similar event. Register here to attend the mini symposium AI@Humanities!
Defining responsible AI in journalism: Reporters’ perceptions of automated decision-making and algorithmic bias. Tomás Dodds (Assistant Professor in Journalism and New Media, Leiden University Centre for Linguistics)
The development of artificial intelligence (AI) technologies inside newsrooms presents new ethical challenges for journalists. AI-based technologies are impacting organizational and professional values and forcing a re-examination of ethical and legal guidelines for robotics and AI in journalism. Recent initiatives, like the European Media Freedom Act, have called for news organizations to draft standards for the responsible use of AI in the newsroom. Responsible AI in journalism refers to designing and implementing algorithmic systems without infringing human rights. However, journalists’ perceptions of AI's impact on their professional values (e.g., editorial independence), decision processes, and decision-making power remain largely uncharted. This talk presents the results of an ongoing study that examines how new AI-based technologies – such as recommender systems or automated insights – are affecting the professionalization of journalistic values and professional identities and how, in turn, this phenomenon impacts reporters’ perceptions about automated decision-making, fairness, transparency, and explainability.
Historical Research in the Age of AI, or: Should Historians Become Data Scientists? Gerhard de Kok (Postdoctoral researcher Human AI @ Leiden University Institute for History)
The digitization of physical archival sources has greatly facilitated historical research, but it turned out to be just the first step in revolutionizing research methods. Advances in Handwritten Text Recognition (HTR) now make it possible to turn archives into Big Data. This presents historians with new challenges, such as how to effectively analyze the vast amounts of data now available. It also raises the question: should historians become data scientists? It is crucial that scholars approach the integration of AI into their research in a balanced and thoughtful way. In this talk, I will discuss the impact of AI on the practice of historical research. I will explore the potential of new technologies for historians, but also look at some pitfalls to be avoided.
Looking Behind Closed Doors: Understanding 17th-century domestic life with AI. Xuan Li (Postdoctoral researcher @ Leiden University Centre for the Arts in Society)
How can we look inside the domestic spaces of 17th-century houses to understand everyday life four centuries ago? Historians rely on estate inventories; architectural historians use surviving buildings and drawings, while art historians examine the images of 17th-century interiors to understand the material culture of domestic life. My previous research combined the first two, developing the ‘spatial reading of inventories’ to locate household objects from inventories through 3D schematic reconstructions. In my postdoctoral research, I aim to bring art historical sources – paintings and other contemporary visual materials of domestic interiors – into the equation using Artificial Intelligence (AI) technologies. I plan to apply computer vision and object recognition techniques to 17th-century Netherlandish paintings. By automatically annotating objects in paintings, I will compare the painted reality with the written one as revealed in estate inventories, investigating the presence of and the relationships between household objects on a scale unattainable by traditional means.
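As an illustration of the kind of comparison this enables, here is a minimal sketch, with invented example counts rather than real annotation or inventory data, of how object frequencies detected in paintings could be set against those recorded in inventories:

```python
from collections import Counter

# Hypothetical outputs: object labels detected in a set of painting
# images versus objects listed in transcribed estate inventories.
painted = Counter({"chair": 120, "map": 45, "virginal": 12, "bed": 8})
written = Counter({"chair": 300, "bed": 150, "virginal": 20, "chest": 90})

def relative_frequencies(counts):
    """Normalise raw counts to shares, so the two corpora are comparable."""
    total = sum(counts.values())
    return {obj: n / total for obj, n in counts.items()}

p, w = relative_frequencies(painted), relative_frequencies(written)

# Objects over-represented in paintings relative to inventories may point
# to pictorial conventions rather than everyday presence.
for obj in sorted(set(p) | set(w)):
    ratio = p.get(obj, 0.0) / w.get(obj, 1e-9)
    print(f"{obj:10s} painted={p.get(obj, 0):.3f} written={w.get(obj, 0):.3f} ratio={ratio:.2f}")
```

The interesting signal in such a comparison is the ratio column: objects far more common in paint than on paper (or vice versa) flag a mismatch between painted and written reality.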
Bridging the sign language technology gap: pitfalls, prerequisites and opportunities. Victoria Nyst (University lecturer @ Leiden University Centre for Linguistics)
The digital revolution has led to an explosion of language technologies, and digital language applications are now widely available for the largest languages. However, there is a big gap in the availability of tools for minority languages and non-western languages, including visual and tactile sign languages of deaf communities and deafblind communities.
The development of sign language technologies is lagging behind due to various causes, including our limited understanding of the linguistics of sign languages, the lack of large data sets, and the small amount of research done. An additional challenge is the lack of involvement of deaf and deafblind communities in many sign language technology projects and the mismatch in perspectives between them and tech developers.
In this presentation, I will present three ways in which our HANDS! Lab for Sign Languages and Deaf Studies is involved in developing technologies. Firstly, by compiling large data sets needed for training, for African sign languages as well as Dutch Sign Language. Secondly, by reflecting on how language technologies can be developed in a meaningful and ethical way (PhD van der Mark). Thirdly, by developing machine learning tools for automated sign recognition and comparison for research purposes (PhD Fragkiadakis).
Tracing the History of Technocracy in Historical Parliamentary Debates. Ruben Ros (PhD candidate @ Leiden University Institute for History)
Democracy is often said to be under the sway of "technocracy": expert rule. Scientific institutions, experts and models have a tremendous influence on democratic decision-making, often at the cost of transparency and sovereignty. This research studies the rise of technocratic ways of thinking and the impact they have on democratic debate. It does so by mining and modelling millions of parliamentary debates from the twentieth-century Dutch Lower House using NLP (Natural Language Processing) methods. Using language modelling, network analysis and argument mining, the research aims to uncover how expertise has become so important in politics.
Setting up forensic text analysis. Willemijn Heeren (University Lecturer), Meike de Boer (PhD candidate @ Leiden University Centre for Linguistics)
This presentation will outline a new project to develop a research pipeline supporting studies into forensic text analysis at the Leiden University Centre for Linguistics (LUCL). It is supported by an LUCDH small grant for research development. The project will include the tailoring, testing and validating of authorship analysis algorithms that were developed for languages other than Dutch and on data without forensic relevance, adapting them to forensically relevant text types (e.g. chats, e-mails and speech transcripts). This may advance forensic linguistic methods in general, and methods for the Dutch language in particular.
As development materials, data from OpenSonar and the Spoken Dutch Corpus will be used. The intended pipeline will consist of four main components: (1) pre-processing of textual data, (2) linguistic feature extraction, (3) feature (vector) comparison, and (4) Bayesian statistical evaluation. Methods will be drawn from the existing literature, using available software packages where possible. Once such a pipeline is available, it will allow for the discovery of further text features that contain information characterizing the author or speaker in this specific context. Finally, we focus on the Dutch language; there is little earlier work on Dutch, but as languages differ in their linguistic features, languages are also likely to vary, to some extent, in speaker- or author-dependent features.
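The four components can be sketched roughly as follows. This is an illustrative toy version, assuming character n-gram features, cosine comparison, and normal score densities; the example texts and reference score distributions are invented, not the project's implementation or data:

```python
import math
import re
from collections import Counter

def preprocess(text: str) -> str:
    """(1) Pre-processing: lowercase and collapse whitespace."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def features(text: str, n: int = 3) -> Counter:
    """(2) Feature extraction: character n-gram counts, a common authorship feature."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a: Counter, b: Counter) -> float:
    """(3) Feature (vector) comparison: cosine similarity of sparse count vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def likelihood_ratio(score, same_scores, diff_scores):
    """(4) Bayesian evaluation: a likelihood ratio comparing the observed score
    against reference score distributions for same-author and different-author
    pairs, here crudely modelled as normal densities."""
    def density(x, xs):
        mu = sum(xs) / len(xs)
        var = sum((v - mu) ** 2 for v in xs) / len(xs) or 1e-9
        return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    return density(score, same_scores) / density(score, diff_scores)

# Invented Dutch example texts and reference scores, purely for illustration.
a = features(preprocess("Ik kom morgen iets later, tot zo!"))
b = features(preprocess("ik kom morgen iets later aan, tot zo"))
score = cosine(a, b)
lr = likelihood_ratio(score, same_scores=[0.80, 0.85, 0.90], diff_scores=[0.20, 0.30, 0.40])
```

An LR well above 1 supports the same-author hypothesis, below 1 the different-author hypothesis; in practice each stage would be replaced by validated methods from the literature rather than these toy choices.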
Pragmatics for Explainable AI. Daniel Kostić (Postdoctoral researcher Human AI @ Institute for Philosophy)
AI systems are often used in data-driven decision-making without an explanation of how the AI makes those decisions. This explanatory opacity of AI is only partly a result of its profound complexity. The other part of the opacity problem stems from using explanatory norms that are too permissive, too restrictive, or incommensurable. I develop a heuristic for explainable AI which can deal with both the complexity of AI and the plurality of explanatory norms. In dealing with the complexity of AI, I will apply my theory of topological explanations, which provides necessary and sufficient conditions under which highly abstract and complex mathematical network models are explanatory. In regard to the plurality of explanatory norms in AI, I will build upon the idea that certain perspectival inferential patterns determine both the explanation-seeking questions (or why-questions) and the space of possible answers to them.
Replacing irreplaceability: algorithmic profiling, AI, and the fear of fragmentation. Ilios Willemars (University Lecturer @ Leiden University Centre for the Arts in Society)
This presentation deals with work in progress on the concept of ‘replacement’ and on the anxiety of finding oneself replaced, whether by a partner in a relationship, by a younger sibling, or through technological developments that make it possible to produce data doubles. Recognizing that the modern subject, the subject of Humanism, sees itself as essentially irreplaceable, I investigate how the discourse on replacement continues to impact us today. Claims of irreplaceability often come, paradoxically, with increasing anxieties about being replaced. One instance where this fear is visible is in discussions around AI. ‘What if AI replaces me at work?’ or ‘What if students replace their work with work produced by ChatGPT?’ This presentation portrays some of the cultural context in which fears of replacement have become central, and proposes different concepts that may be more generative for addressing the anxieties of our time.