Lunch Time Seminars

In this section you can find information about and recordings of past SAILS Lunch Time Seminars.

The biweekly Lunch Time Seminar is an online-only event, but it is not publicly accessible in real time. If you would like to attend one of the upcoming sessions, please send an email to sails@liacs.leidenuniv.nl.

Past Lunch Time Seminars 2022

Extracting Relevant Information From Text: Challenges and Solutions

Suzan Verberne, Associate Professor at LIACS

We typically associate machine learning with classification, regression, and clustering. But some machine learning tasks are extraction tasks: we have a sequence of data and we need to extract the relevant information from it. In text data, the relevant information we are looking for is typically entities and the relations between them: names of people and events in news texts, proteins and genes in biomedical data, or artefacts and locations in archaeological reports (as we saw in the seminar talk by Alex Brandsen). In this talk I will introduce information extraction and the common machine learning approach to information extraction from text. I will briefly discuss three different projects addressing information extraction in different domains, among which the PhD project of Anne Dirkson, in which we have developed text mining techniques to process and extract information from the large volume of messages on a patient forum. Specifically, we have mined side effects of medications and the coping strategies of patients who suffer from these side effects. I will show the challenges and results of this project.
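To make the extraction setting concrete, here is a minimal sketch of named entity recognition with an off-the-shelf model. It uses spaCy's small English model purely for illustration; it is not one of the domain-specific models discussed in the talk, and the example sentence is invented.

```python
# Minimal NER sketch with an off-the-shelf spaCy model (illustration only;
# not the domain-specific models from the talk).
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded

text = ("The excavation near Leiden uncovered Roman pottery, "
        "as reported by the Rijksmuseum van Oudheden in 2019.")

for ent in nlp(text).ents:
    # each entity is a text span with a coarse type (PERSON, ORG, GPE, DATE, ...)
    print(ent.text, ent.label_)
```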

Suzan Verberne is an associate professor at the Leiden Institute of Advanced Computer Science (LIACS) of Leiden University. She leads the Text Mining and Retrieval group, in which she supervises 7 PhD students. She obtained her PhD in 2010 on the topic of Question Answering systems and has since been working at the intersection of Natural Language Processing and Information Retrieval. She has been involved in projects spanning a large number of application domains and collaborations: from art history to law, and from archaeologists to medical specialists. Her recent work centers around interactive information access for specific domains. She is also involved in a number of projects on social media mining.

AI and Humanistic Thinking

Peter Verhaar, Digital Scholarship Librarian and University Lecturer at the Leiden University Centre for the Arts in Society.

As is the case in virtually all academic disciplines, humanities scholars are increasingly trying to harness the manifold possibilities associated with AI. The emergence of tools and algorithms based on machine learning and deep learning has pushed researchers to experiment with data-rich approaches which can help to expose properties of cultural and historical objects they could never observe before, moving beyond the ‘human bandwidth’. The transition from mere data creation to actual analysis continues to pose challenges, however. In this presentation I want to discuss two central caveats that need to be taken into account by humanities scholars who aim to work with methods based on AI, and who aim to integrate the outcomes of these methods into their research.

A first important challenge is the limited explainability of such results. Existing AI algorithms tend to focus first and foremost on the development of models for the classification of specific objects, and the logic underlying such models often receives much less attention. The type of learning that is implemented within deep learning algorithms also differs quite fundamentally from the ways in which humanities scholars have traditionally produced knowledge. Fortunately, a number of techniques have been developed in recent years to clarify the steps that algorithms follow when generating predictions and classifications. Such techniques to enhance the explainability of AI algorithms can ultimately help to reconcile methodologies based on AI with the more conventional forms of humanistic thinking.

A second challenge results from the fact that the data sets that are used as training data are often biased. Whereas humanities scholars typically aim to contextualise and to explain events, objects and phenomena by considering these from many different perspectives, the ‘ground truth’ that is used to train models usually reflects one perspective only. It is clear that such biased datasets can have important ramifications for marginalised communities and that they may reinscribe existing social and political inequalities.

AI for a Liveable Planet

Jan Willem Erisman, Professor of Environmental Sustainability at Leiden University

The Liveable Planet programme is one of the eight interdisciplinary programmes that were launched at Leiden University in 2020, SAILS being one of the others. Leiden’s Liveable Planet programme aims to combine scientific, policy, socio-cultural and historical/archaeological research at Leiden University into coherent research with which we can tackle the major challenges of a transition to a habitable planet with ecological sustainability. The programme will serve as a hub for the wide range of relevant research carried out within Leiden University and welcomes interaction with colleagues interested in contributing to the initiative, both within and outside of Leiden University.

The Netherlands ranks among the top five happiest countries. At the same time, we are experiencing several crises, such as the nitrogen, climate, biodiversity, housing, and sustainable energy crises. Current policy mainly addresses short-term problems in isolation and does not look far ahead, while new problems loom on the horizon. With the Liveable Planet programme we will stimulate community-based approaches in a global context to help address these crises. We use the Sustainable Development Goals for 2030 as a starting point and will contribute to achieving these goals at all scales. This requires multidisciplinary approaches, new methods, instruments and big data. In this lunch presentation I will give an overview of the Liveable Planet programme and of the challenges and opportunities where AI might play a significant role.

Can BERT Dig It? - Named Entity Recognition for Information Retrieval in the Archaeology Domain

Alex Brandsen, Postdoctoral researcher in Digital Archaeology at the Faculty of Archaeology.

The amount of archaeological literature is growing rapidly. Until recently, these data were only accessible through metadata search. We implemented a text retrieval engine for a large archaeological text collection (~658 Million words). In archaeological IR, domain-specific entities such as locations, time periods, and artefacts, play a central role. This motivated the development of a named entity recognition (NER) model to annotate the full collection with archaeological named entities.
In this talk, we present ArcheoBERTje, a BERT model pre-trained on Dutch archaeological texts. We compare the model's quality and output on a Named Entity Recognition task to a generic multilingual model and a generic Dutch model. We also investigate ensemble methods for combining multiple BERT models, and combining the best BERT model with a domain thesaurus using Conditional Random Fields (CRF).
We find that ArcheoBERTje outperforms both the multilingual and Dutch model significantly with a smaller standard deviation between runs, reaching an average F1 score of 0.735. 
Our results indicate that for a highly specific text domain such as archaeology, further pre-training on domain-specific data increases the model’s quality on NER by a much larger margin than shown for other domains in the literature, and that domain-specific pre-training makes the addition of domain knowledge from a thesaurus unnecessary. At the end of the presentation, a short demonstration of the entity search system is given. 
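As a rough illustration of how such a fine-tuned model is applied at inference time, here is a hedged sketch using the Hugging Face transformers pipeline. The checkpoint path is a placeholder, not the actual ArcheoBERTje release, and the Dutch example sentence is invented.

```python
# Sketch of applying a fine-tuned BERT NER model via the transformers library.
# The model path below is a placeholder for a fine-tuned ArcheoBERTje checkpoint.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="path/to/archeobertje-ner",   # placeholder checkpoint
    aggregation_strategy="simple",       # merge word pieces into entity spans
)

zin = "Bij de opgraving in Oss werden scherven uit de Romeinse tijd gevonden."
for entity in ner(zin):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```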

Artificial X

Peter van der Putten, Assistant Professor AI at LIACS

Abstract

What is it that makes us uniquely human? Is it intelligence, or something else? In this talk I will give a broad overview of my research theme and practice Artificial X: investigating human qualities such as intelligence, but also creativity, emotions, curiosity, bonding, obedience or even topics such as morality and religion, through an artificial-creature lens. I will illustrate this with a kaleidoscopic sampling of projects from previous years, ranging from research to creative student works, as well as a personal project currently on display at Museum De Lakenhal and ZKM Karlsruhe. These projects help us reflect on what we can and cannot learn from such bots about ourselves, encourage public debate, and speculate on what our joint future with Artificial X may look like.

AI & Ethics

André Krom - LUMC

Abstract: 
In this talk I will present an overview of common and pressing ethical issues and dilemmas faced by researchers working on potential AI applications for health care purposes. This will be the main part of my contribution to the webinar. Knowledge of such ethical issues and dilemmas, however, is one thing. Having options for action to actually deal with them is another, and equally important. In the interest of providing options for action, I will therefore briefly introduce the key features of a constructive approach in applied ethics called "guidance ethics". While this approach is typically used to structure conversations about ethical questions concerning the application of AI systems, I will argue and show that it is helpful in the context of facing ethical issues and dilemmas in AI research as well.

Applying Automated CNN-based Object Detection for Archaeological Prospection in Remotely-sensed Data

Wouter Verschoof-van der Vaart, Postdoctoral researcher at the Faculty of Archaeology

Abstract:

The manual analysis of remotely-sensed data, i.e., information about the earth obtained by terrestrial, aerial, and spaceborne sensors, is a widespread practice in local and regional scale archaeological research, as well as in heritage management. However, the ever-growing set of largely digital and freely available remotely-sensed data creates new challenges to effectively and efficiently analyze these data and to find and document the seemingly overwhelming number of potential archaeological objects.

Therefore, computer-aided methods for the automated detection of archaeological objects are needed. Recent applications in archaeology mainly involve the use of Deep Learning Convolutional Neural Networks (CNNs). These algorithms have proven successful in the detection of a wide range of archaeological objects, including prehistoric burial mounds and medieval roads. However, the use of these methods is not without challenges. Furthermore, in archaeology these approaches are generally tested in an (ideal) experimental setting, but have not been applied in different contexts or 'in the wild', i.e., incorporated into archaeological practice, even though the latter is important for investigating the true potential of these automated approaches.

In this talk we will explore some of the opportunities and limitations of using CNN-based object detection in archaeological prospection and the potential—on both a quantitative and qualitative level—of these methods for landscape or spatial archaeology. 
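For readers unfamiliar with the workflow, here is a generic sketch of CNN-based object detection on a single image tile, using a pretrained torchvision detector as a stand-in for the archaeology-specific models discussed in the talk; the random tile is a placeholder for real remotely-sensed data.

```python
# Generic object-detection sketch with a pretrained Faster R-CNN (torchvision).
# The random tile stands in for a remotely-sensed image; the models from the
# talk are trained on archaeological objects in LiDAR and aerial data.
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

tile = torch.rand(3, 512, 512)            # stand-in image tile, values in [0, 1]
with torch.no_grad():
    detections = model([tile])[0]         # dict with "boxes", "labels", "scores"

keep = detections["scores"] > 0.5         # simple confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])
```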

Towards reliable and trustworthy AI systems

Jan van Rijn, Assistant Professor in Artificial Intelligence at LIACS, Leiden University

Abstract: 

The enormous potential of artificial intelligence is a double-edged sword for society. When applied correctly, these systems can make a positive difference in our daily lives. On the other hand, sloppy deployment can lead to severe damage and even dangerous situations. This consideration becomes more important as artificial intelligence systems get further integrated into our society. As such, the research community has an obligation to develop methods and verification techniques that ensure safe deployment and beneficial applications.

Machine learning models (in particular deep neural networks) are known to be vulnerable to adversarial attacks. By injecting small, carefully crafted perturbations into the input, an attacker can influence the model to make a pre-determined decision. The research field of neural network verification develops techniques that determine, for a given model, how vulnerable it is to such adversarial input perturbations. However, these techniques require a lot of domain expertise and are computationally expensive.
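As a concrete, simplified picture of such a perturbation, here is a minimal FGSM-style sketch in PyTorch; the classifier is an untrained stand-in, and this only illustrates the attack side, not the verification methods discussed in the talk.

```python
# FGSM-style adversarial perturbation on a toy, untrained classifier
# (illustration of input perturbations only, not a verification method).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # input image
y = torch.tensor([3])                              # its true label

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.03                                     # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1)  # perturbed input
# a verifier asks: can ANY perturbation within the budget flip the prediction?
print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```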

In this talk, I will present our recent work on neural network verification, which can determine whether such a network is vulnerable or safe. I will give an overview of state-of-the-art neural network verifiers and explain their strengths and weaknesses. These verifiers can be sped up by applying advances in hyperparameter configuration: selecting the right hyperparameter configuration for a verifier can drastically speed up the verification process and allows verification resources to be used in a more focused way, leading to more reliable AI systems.

How AI Changed My Life

What to Include in a Future Children's Book about AI

Bas Haring, Professor of Public Understanding of Science at LIACS.

Abstract

Although some might know me as a philosopher, I actually have a background in artificial intelligence. After my Ph.D. I didn't do much with that field, or so it seems. I wrote popular scientific books about e.g. evolution, biodiversity and even about economics. However, all these topics are in fact linked to artificial intelligence — I realised later in my career. In this meeting I will tell you how these topics are linked.

Computational agents can help people improve their theory of mind

Rineke Verbrugge, Professor of Logic and Cognition at the University of Groningen

Abstract:

When engaging in social interaction, people rely on their ability to reason about other people’s mental states, including goals, intentions, and beliefs. This theory of mind ability allows them to more easily understand, predict, and even manipulate the behavior of others. People can also use their theory of mind to reason about the theory of mind of others, which allows them to understand sentences like “Alice believes that Bob does not know that she wrote a novel under a pseudonym”. But while the usefulness of higher orders of theory of mind is apparent in many social interactions, empirical evidence has so far suggested that people often do not use this ability spontaneously when playing games, even when doing so would be highly beneficial.

In this lecture, we discuss some experiments in which we have attempted to encourage participants to engage in higher-order theory of mind reasoning by letting them play games against computational agents: the one-shot competitive Mod game; the turn-taking game Marble Drop; and the negotiation game Colored Trails. It turns out that we can entice people to use second-order theory of mind in Marble Drop and Colored Trails, and in the Mod game even third-order theory of mind.

We discuss different methods of estimating participants’ reasoning strategies in these games, some of them based only on their moves in a series of games, others based on reaction times or eye movements. In the coming era of hybrid intelligence, in which teams consist of humans, robots and software agents, it will be beneficial if the computational members of the team can diagnose the theory of mind levels of their human colleagues and even help them improve.

Information Content of Empirical Data: Methodology and Metaphysics

James W. McAllister, Professor in History and Philosophy of Science, Leiden University

Abstract
Empirical data -- the results of observations and measurements -- contain information about the world. But how much information do they contain, and what does this tell us about the structure of the world? A traditional reply to these questions is that an empirical data set contains a single pattern, and that this shows that the world has a unique structure. This reply suggests that scientific laws and theories constitute, in the terms of algorithmic information theory, a lossless compression of empirical data. I argue to the contrary that scientific practice depends on decomposing empirical data sets into two additive components: a simple pattern, which corresponds to a structure in the world, and a residual noise term. This view leads to two intriguing implications. First, if the noise term is algorithmically incompressible, then empirical data sets as a whole are also incompressible. Second, since it is possible to decompose a data set into a pattern and noise in infinitely many different ways, and since each of these decompositions has an equal claim to validity, empirical data are evidence that the world contains an infinite amount of structure.
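One way to make the first implication precise is in Kolmogorov-complexity terms; the following is a sketch of the argument as I read it, not necessarily the formalisation used in the talk.

```latex
% Decompose the data into a simple pattern and residual noise:
%   D = P + N, with K(P) small and N incompressible, i.e. K(N) \approx |N|.
% Since N can be reconstructed from D and P, complexity is almost subadditive:
\[
  K(N) \;\le\; K(D) + K(P) + O(\log |D|)
  \quad\Longrightarrow\quad
  K(D) \;\ge\; K(N) - K(P) - O(\log |D|) \;\approx\; |N|,
\]
% so the data set D as a whole admits no substantial lossless compression.
```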

Past Lunch Time Seminars 2021

The Excitement of Tappigraphy

Arko Ghosh, Assistant Professor at the Faculty of Social and Behavioural Sciences.

Abstract:

The time-series of smartphone touchscreen interactions (tappigraphy) may help resolve the systematic links between brain functions and behavior in the real world. In this talk, I will provide an overview of tappigraphy, and how we are applying it to unravel human behavior in health and neurological disease. My talk will involve life span measurements, cognitive tests, brain implant recordings, and long-term behavioral monitoring. I will present straightforward statistical models linking tappigraphy to brain functions in healthy people, and machine learning approaches that help infer brain status based on tappigraphy inputs in Epilepsy and Stroke. These studies vividly demonstrate the potential of tappigraphy to investigate fundamental neuro-behavioral processes relevant to the real world.

The Politics of AI & Ethics 

Linnet Taylor, Professor of International Data Governance at the Tilburg Institute for Law, Technology, and Society (TILT)

Abstract:


The process of setting rules and norms for computing processes and applications has been dominated by requirements engineering and formalisable, rather than ‘thick’, interpretations of central concepts including fairness, responsibility, trust and participation. Yet computing science experts and other disciplines such as law and philosophy often understand these terms very differently. These differences in understanding can create productive friction and discussion amongst experts with very different backgrounds and orientations, but can also constitute gaps that lead to governance-by-default, where instead of creating architectures for the control and shaping of digital power and intervention, disagreement on fundamental concepts delays action. This talk will explore whether these diverging understandings represent fundamental incompatibilities between disciplinary worldviews, what the effects of the resulting faultlines are in terms of the targets and aims of governing data and AI, and how we can recognise productive disjunctures. I will look particularly at the current politics of AI, and ask whether there are ways to govern technology when different groups are locked in opposition around core concepts and assumptions which each consider non-negotiable.

AI and Historical Research

Gerhard de Kok, Teacher at the Institute of History, Leiden University

Abstract:


Handwritten Text Recognition (HTR) is revolutionizing historical research. Models trained using neural networks can now read seventeenth- and eighteenth-century handwriting better than most humans alive today. Such models have already automatically transcribed vast archives, including those of the Dutch East and West India Companies (VOC and WIC). The resulting transcriptions are not perfect, but they open up new avenues for research. For the first time in history, it is possible to search through these archives with full-text search. The millions of pages of transcriptions also invite further exploration through the use of NLP techniques. A first experiment with word embeddings has already led to some promising results.
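To give a flavour of the word-embedding experiment mentioned above, here is a small sketch using gensim's Word2Vec; the two-sentence corpus is invented, whereas the actual experiment runs on millions of transcribed pages.

```python
# Word2Vec sketch on (invented) tokenised transcriptions; the real experiment
# uses the HTR output of the VOC/WIC archives.
from gensim.models import Word2Vec

corpus = [
    ["het", "schip", "vertrok", "naar", "batavia"],
    ["de", "lading", "bestond", "uit", "peper", "en", "nootmuskaat"],
]

model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                 min_count=1, epochs=50)

# words used in similar historical contexts end up with similar vectors
print(model.wv.most_similar("schip", topn=3))
```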

Ethics in AI

Karën Fort, Associate Professor at Sorbonne Université, France

Abstract


In the last decade, AI, and especially Natural Language Processing (NLP) applications, have become part of our daily lives. Firms like Google, Facebook or Amazon have devoted huge efforts to research and are present in all our conferences. The distance between researchers and users has shrunk and a number of ethical issues have started to show: stereotypes are repeated and amplified by machine learning systems, AI is used for ethically questionable purposes like automatic sentencing, and more or less experimental tools are forced on users without taking their limitations into account. In this presentation, I'll detail some of the issues we are faced with, and I'll propose a systemic view on the subject that allows us to uncover some blind spots that have to be discussed.

Computational Creativity

Rob Saunders, Associate Professor at LIACS, Leiden University

Abstract:


Creativity is one of the most highly prized faculties of human intelligence. The history of speculation about intelligent machines is mirrored by a fascination with the possibility of mechanical creativity. From the myths and legends of antiquity to the Golden Age of Automata in the 18th Century, the achievements of mechanical wonders were often paired with amazement at the performance of apparently creative acts. During the 20th Century the fascination with creative machines continued, and at the dawn of the Computer Age the prospect of computationally modelling creative thinking was proposed as one of the “grand challenges” in the prospectus for the field of Artificial Intelligence. In the past 60 years, the field of Artificial Intelligence has seen significant progress in realising the goal of building computational creativity, from early Discovery Systems to the latest advances in Deep Learning. Like intelligence, however, the notion of creativity is an essentially social construct. Much work remains if creative machines are ever to become a reality, both in terms of technical advances and the integration of such machines into society. In addition, the development of machines capable of acting in ways that would be considered creative if performed by a human will challenge our understanding of what it means to be creative. This talk will explore the history of creative machines and the prospects for the future of computational creativity research.

Natural Language Processing for Translational Data Science in Mental Healthcare

Marco Spruit, Professor of Advanced Data Science in Population Health, Leiden University


Abstract:


In this overview talk, I will first position the research domain of Translational Data Science in the context of the COVIDA research programme on Dutch NLP for healthcare. Then, I will present our prognostic study on inpatient violence risk assessment, in which we applied natural language processing techniques to clinical notes in patients’ electronic health records (Menger et al., 2019). Finally, I will discuss follow-up work in which we try to better understand the performance of the best-performing RNN model, using, among other methods, LDA as a text representation, which reminds us once more of the lingering issue of data quality in EHRs.
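To illustrate what an LDA text representation of clinical notes looks like in practice, here is a hedged sketch with scikit-learn; the notes are invented, since real EHR data cannot be shared, and this is not the pipeline from the study itself.

```python
# LDA topic features for (invented) clinical-style notes; the resulting
# low-dimensional vectors could feed a downstream risk-prediction model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

notes = [
    "patient agitated during the night, verbal aggression towards staff",
    "calm day, attended group therapy, no incidents reported",
    "threatened a nurse, seclusion considered, medication adjusted",
]

counts = CountVectorizer(stop_words="english").fit_transform(notes)
lda = LatentDirichletAllocation(n_components=2, random_state=0)

topic_features = lda.fit_transform(counts)   # one topic-probability row per note
print(topic_features.shape)                  # (3, 2)
```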

A Few Simple Rules for Prediction

Marjolein Fokkema, Assistant Professor of Psychology, FSW, Leiden University


Abstract:


Prediction Rule Ensembling (PRE) is a statistical learning method that aims to balance predictive accuracy and interpretability. It inherits high predictive accuracy from decision tree ensembles (e.g., random forests, boosted tree ensembles) and high interpretability from sparse regression methods and single decision trees. In this presentation, I will introduce PRE methodology, starting from the algorithm originally proposed by Friedman and Popescu (2008). I will show several real-data applications, for example on the prediction of academic achievement and chronic depression. I will discuss several useful extensions of the original algorithm which are already implemented in the R package ‘pre’, like the inclusion of a priori knowledge, unbiased rule derivation, and (non-)negativity constraints. Finally, I will discuss current work in which we leverage the predictive power of black-box models (e.g., Bayesian additive regression trees, deep learning) to further improve the accuracy and interpretability of PRE.
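The talk itself builds on the R package ‘pre’; purely as an illustration of the underlying idea, here is a rough Python sketch of a RuleFit-style pipeline: derive candidate rules from shallow trees, encode each rule as a binary feature, and fit a sparse linear model over those features. The data are synthetic and the encoding is a simplification of the actual algorithm.

```python
# Sketch of the core PRE/RuleFit idea: rules from shallow trees, one-hot rule
# membership features, then an L1-penalised linear model keeps only a few rules.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# 1. grow an ensemble of shallow trees; every leaf corresponds to a rule
ensemble = GradientBoostingClassifier(n_estimators=50, max_depth=3,
                                      random_state=0).fit(X, y)

# 2. encode each sample by the leaf (rule) it falls into, one-hot per tree
leaves = ensemble.apply(X)[:, :, 0]                 # (n_samples, n_trees)
rule_features = np.concatenate(
    [np.eye(int(leaves[:, t].max()) + 1)[leaves[:, t].astype(int)]
     for t in range(leaves.shape[1])], axis=1)

# 3. a lasso-style fit leaves only a small number of rules with nonzero weight
sparse_model = LogisticRegression(penalty="l1", solver="liblinear",
                                  C=0.1).fit(rule_features, y)
print("rules kept:", int((sparse_model.coef_ != 0).sum()))
```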

Towards a Mathematical Foundation of Machine Learning

Johannes Schmidt-Hieber, Mathematical Institute, Leiden University


Abstract:


Recently a lot of progress has been made regarding the theoretical understanding of machine learning methods. One of the very promising directions is the statistical approach, which interprets machine learning as a collection of statistical methods and builds on existing techniques in mathematical statistics to derive theoretical error bounds and to understand phenomena such as overparametrization. The talk surveys this field and describes future challenges.

Machine Learning for Spatio-Temporal Datasets + SAILS Data Observatory

Mitra Baratchi, Assistant Professor at LIACS, Leiden University


Abstract:


Spatio-temporal datasets (e.g., GPS trajectories, Earth observations) are ubiquitous. Algorithms for effective and automated processing of such data are relevant for various applications, from crowd movement analysis to environmental modelling. These algorithms need to be designed considering the fundamental aspects of the underlying spatio-temporal processes (e.g., the existence of spatial and temporal correlations) and be robust against various ubiquitous data imperfection issues. In this talk, I will introduce the field of spatio-temporal data mining and talk about crucial open research challenges for making use of such data.

I would also like to discuss the vision of creating a “data observatory” to address various important research challenges in multi-disciplinary research. The data observatory aims to bring together datasets (the observations), AI algorithms (the tools), and expertise (the humans) in a well-equipped setting that facilitates a collaborative investigation.

Applications of Artificial Intelligence in Early Drug Discovery

Gerard van Westen, Professor of Artificial Intelligence & Medicinal Chemistry, LACDR, Leiden University

Abstract:


Drug discovery is changing: the influence and catalytic effect of Artificial Intelligence (AI) cannot be denied. History dictates that this new development will likely be a synergistic addition to drug discovery rather than a revolutionary replacement of existing methods (much like HTS or combichem before it, a new tool in the toolbox). As more and more scientific data become public and more and more computing power becomes available, the application of AI in drug discovery offers exciting new opportunities.

Central to drug discovery in the public domain is the ChEMBL database, which provides bioactivity data obtained from the literature for a large group of (protein) targets and chemical structures.[1, 2] Machine learning can leverage these data to obtain predictive models able to predict the activity probability of untested chemical structures contained within the large collections of chemical vendors, on the basis of the chemical similarity principle.[3, 4]

In this talk I will give an overview of research going on at the computational drug discovery group in Leiden. Central in our research is the usage of machine learning. I will highlight some examples we have published previously and finish with an outlook on cool new possibilities just around the corner.[5, 6]
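As a hedged sketch of the similarity-based modelling idea (molecular fingerprints plus a standard classifier), the snippet below uses RDKit and scikit-learn; the SMILES strings and activity labels are placeholders, not ChEMBL data, and this is not the group's actual modelling pipeline.

```python
# Morgan fingerprints (RDKit) + random forest on placeholder activity labels,
# illustrating the chemical-similarity modelling idea in the abstract.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC"]
active = [0, 0, 1, 0]                       # placeholder bioactivity labels

def fingerprint(smi, n_bits=2048):
    mol = Chem.MolFromSmiles(smi)
    bits = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(bits, arr)  # bit vector -> numpy array
    return arr

X = np.array([fingerprint(s) for s in smiles])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, active)

# predicted probability that an untested structure is active
print(clf.predict_proba(X[:1]))
```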

References
1. Sun, J., et al., J. Cheminf., 2017. 9, 10.1186/s13321-017-0203-5
2. Gaulton, A., et al., Nucleic Acids Res., 2012. 40, 10.1093/nar/gkr777
3. Bender, A. and R.C. Glen, Org. Biomol. Chem., 2004. 2, 10.1039/b409813g
4. Van Westen, G.J.P., et al., Med. Chem. Commun., 2011. 2, 10.1039/C0MD00165A
5. Liu, X., et al., J. Cheminf., 2019. 11, 10.1186/s13321-019-0355-6
6. Lenselink, E.B., et al., J. Cheminf., 2017. 9, 10.1186/s13321-017-0232-0

Opportunities and Challenges of AI in Security Research

Nele Mentens, Professor of Computer Science, LIACS, Leiden University

Abstract: 


Artificial Intelligence plays an important role in the protection of electronic devices and networks. Examples of domains in which AI has been shown to lead to better products and protection mechanisms are the security evaluation of embedded and mobile devices, and the detection of attacks in IoT and IT networks. Besides the added value that AI brings, there are also a number of pitfalls with respect to the privacy of users whose personal data are processed, and the confidentiality of the models that are employed. This talk will give an overview of these opportunities and challenges.

AI in Criminal Law

Bart Custers, Professor of Law and Data Science, Leiden University

Abstract:


AI is developed and used for many good causes, but increasingly criminals also make use of developments in AI. In this presentation, examples of crime are examined that involve AI and related technologies, including big data analytics, A/B optimization and deepfake technology. Typically, such technologies can enhance the effectiveness of crimes like ransomware, phishing and fraud. Next, it is discussed how AI-related technologies can be used by law enforcement for discovering previously unknown patterns in crime and for empirical research on what works in sanctioning. Examples of novel patterns are presented, as well as existing sophisticated risk assessment systems. From a procedural criminal law perspective, i.e., when investigating crime, AI technologies can also be used both in providing cues during criminal investigations and in finding evidence. Approaches in predictive policing are investigated, as well as the potential role of existing cyber agent technology. With regard to finding evidence, advanced data analytics can prove helpful for finding the proverbial needle in the haystack, providing Bayesian probabilities and building narratives for alternative scenarios. For all these developments, legal issues are identified that may require further debate and academic research.

AI-Based Quantification of Electron Microscopy Data

Dr Oleh Dzyubachyk, Post-Doctoral Researcher at LUMC, Leiden University

Abstract: 


Electron microscopy (EM) is an imaging modality that has vast potential for becoming one of the primary beneficiaries of the advance of machine learning. In my talk I will first introduce to you this imaging modality and provide a few examples of data quantification needs. Next, I will describe our recent developments that enabled applying machine learning methodology to our in-house data and preliminary results of the mitochondria quantification project. Finally, I will share with you my ideas about potential directions for future research.

Machine Learning for Scientific Images

Daan Pelt, Assistant Professor at LIACS, Leiden University

Abstract: 


In recent years, convolutional neural networks (CNNs) have proved successful in many imaging problems in a wide variety of application fields. However, it can be difficult to train and apply existing CNNs to scientific images, due to computational, mathematical, and practical issues. In this talk, I will discuss newly developed methods that are specifically designed for scientific images. These methods can accurately train with large image sizes and a limited amount of training data (or no training data at all), and can automatically adapt to various tasks without extensive hyperparameter tuning. The talk will include comparisons between the new methods and existing CNNs, some recent results for real-world scientific problems, and ideas for future work.

AI and Lawmaking: worlds apart? 

Anne Meuwese, Professor of Public Law, Leiden University


Abstract: 


The intersection of 'Public Law', 'Governance' and 'Artificial Intelligence' is not limited to the question of how AI can be regulated. AI also has the potential to change certain fundamental processes of the state. This presentation looks at one such process: lawmaking. In what ways could AI change both the process and the outcome of lawmaking by legislators? Among the possible applications of AI in lawmaking discussed are 1) the use of AI in monitoring the effects of legislation ‘ex post’, in particular the potential of AI in identifying regulatory failures, 2) the possible changes in the types of norms used in legislation in sectors in which AI is used to support administrative decision-making (rules vs standards, level of abstraction, ‘granularity’ of norms), 3) the implications of AI for the idea of ‘technology neutrality’ in legislative drafting, and 4) the expected increased frequency of legislative projects for which an (AI) system will need to be designed in parallel. To what extent do we see these applications emerging and what are the implications for the fields of public law and governance?

Reutilizing Historical Satellite Imagery in Archaeology: An AI Approach 

Tuna Kalayci, Assistant Professor at the Department of Archaeological Sciences, Leiden University

Abstract:


Historical imagery is invaluable in archaeological research. At the very least, an old photograph offers the first record of an object (ranging from a small pottery piece to a large ancient settlement). In some cases, these images might be the only records left due to the destruction of that object. Therefore, it is beneficial for archaeologists to explore these data sources to the full extent. This talk examines one of these sources, the CORONA spy-satellite programme, and discusses the results of a CNN model for the automated documentation of ancient settlements in the Middle East. This talk will also include brief evaluations of two potential future projects: Sounds of Leiden (SouL) and Robotics in Archaeology.

Modeling (implicit) questions in discourse

Matthijs Westera, Assistant Professor at LUCL, Leiden University

Abstract: 


When we talk, we try to be clear, coherent, relevant, informative and truthful, or at least to appear that way. An audience will expect this, and this expectation constrains their possible interpretations of our utterances. How exactly this works is the topic of the linguistic field of Pragmatics, where a helpful notion has proven to be that of a Question Under Discussion (QUD): a (typically implicit) question to which an utterance must provide some kind of answer. In a coherent discourse, every utterance should address a pertinent QUD, ideally one that was evoked by the preceding discourse. Despite their centrality in the field of Pragmatics, QUDs have received only little attention in Natural Language Processing (NLP), where the vast majority of work on discourse coherence is not QUD-based but relation-based (discourse relations such as 'explanation' and 'elaboration'), and virtually all work on questions concerns, instead, either question answering (given a question, find a suitable answer to it) or 'answer questioning' (given an answer contained in a text, generate a suitable comprehension question for it). I will present our ongoing attempts (crowdsourcing and computational modeling) to add QUDs to the NLP toolbox, hoping to receive valuable suggestions for, e.g., possible applications in the various fields represented at SAILS.
