SAILS Workshop Computational Models of Language Learning and Change
- Date: Friday 12 December 2025
- Time:
- Location: Lipsius Building, Cleveringaplaats 1, 2311 BD Leiden
- Room: Lips-1.30
Computational Models of Language Learning and Change
On the occasion of Yuchen Lian’s PhD defense (December 12), SAILS is organizing a workshop on ‘Computational Models of Language Learning and Change’.
At 11:30, Yuchen will defend her thesis, titled ‘Emergence of Linguistic Universals in Neural Agents via Artificial Language Learning and Communication’, in the Academy Building. After the ceremony, lunch will be served for registered workshop participants, and we are excited to host three guest speakers:
13:45 - 14:20 Lunch
14:20 - 14:30 Opening
14:30 - 15:10 Lisa Beinborn (University of Göttingen)
‘Cognitively-inspired representation learning’
Abstract
Current language modeling architectures only work well when trained on sufficiently large training datasets. In contrast, humans acquire language with remarkably efficient learning curves. We develop cognitively inspired modeling approaches that learn to better generalize from smaller datasets and test them within the BabyLM framework. In this talk, I present our ideas and results for a range of learning configurations: the representation of the input, the vocabulary, the learning process, and the target objective. Based on our experiments, we conclude that current modeling approaches are optimized for the characteristics of English rather than for modeling language, and that smarter combinations of learning variants are required for developing more sample-efficient models.
15:10 - 15:50 Raquel Alhama (University of Amsterdam)
‘Emergent Communication with Noisy Channels’
Abstract
We investigate communication emerging in noisy environments with the goal of capturing the impact of message disruption on the emerged protocols. We implement two different noise mechanisms, inspired by the erasure and deletion channels studied in information theory, and simulate a referential game in a neural agent-based model with a variable message length channel. We leverage a stochastic evaluation setting to apply noise only after a message is sampled, which adds ecological validity and allows us to estimate information-theoretic measures of the emerged protocol directly from symbol probabilities. Contrary to our expectations, the emerged protocols do not become more redundant with the presence of noise; instead, we observe that certain levels of noise encourage the sender to produce more compositional messages, although the impact varies depending on the type of noise and input representation.
15:50 - 16:30 Afra Alishahi (Tilburg University)
‘Getting closer to reality: Grounding and interaction in models of human language acquisition’
Abstract
Humans learn to understand speech from weak and noisy supervision: they manage to extract structure and meaning from speech by simply being exposed to utterances situated and grounded in their daily sensory experience. Emulating this remarkable skill has been the goal of numerous studies; however, researchers have often used severely simplified settings where either the language input or the extralinguistic sensory input, or both, are small-scale and symbolically represented. In this talk, I present a series of studies on modelling visually grounded language understanding.
16:30 Closing
To register, please go to REGISTER.
Join us!
Please click the link below to register for the SAILS mailing list and receive participation links for our events, including the Lunch Time Seminars.
Sign up