How do we process language and speech? How do we integrate information at different levels – intonation, sentence structure, meaning, tone, words, sounds – and how does information at one level interact with information at another?
At LUCL, researchers study these questions using experimental methods such as EEG, fMRI and eye-tracking, while the acoustic properties of speech itself are studied in detail in the phonetics lab.
In addition, they study how infants and young children acquire language and speech, and the learning mechanisms they employ for this task. In the babylab, researchers use the head-turn paradigm, preferential looking and fNIRS to find out what babies know, and elicitation methods to study the development of their speech production. Last but not least, researchers explore how deep learning systems can further our understanding of language and communication.