How does the brain of Japanese speakers choose pronunciation?
The way in which written language is processed in the brain is a hot topic in cognitive research. Cognitive psychologist Rinus Verdonschot studied a Japanese script in which a single character can have up to three possible pronunciations. He discovered that these candidate pronunciations are activated simultaneously in the brain. In the end, the right pronunciation is determined by the surrounding characters.
Japanese and Chinese are so-called logographic languages, meaning that instead of using an alphabet to express words, they use characters. Japanese even has three scripts, used alongside one another, including the kanji script, taken from Chinese. The latter is characterised by the fact that for most words, in addition to the Japanese pronunciation, there is also a pronunciation borrowed from Chinese. As a result, most Japanese characters can be pronounced in a number of different ways. So, how does the brain choose the appropriate pronunciation?
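To make the ambiguity concrete, the sketch below lists the standard dictionary readings of one common kanji, 生, grouping them into Sino-Japanese (on'yomi) and native Japanese (kun'yomi) readings. The character and its readings are real; the data structure and helper function are only an illustration.

```python
# One kanji, several attested pronunciations. The readings of 生 below
# are standard dictionary entries; the structure is illustrative only.
readings = {
    "生": {
        "on": ["sei", "shō"],      # Sino-Japanese readings, borrowed from Chinese
        "kun": ["ikiru", "nama"],  # native Japanese readings (a selection)
    },
}

def possible_readings(char):
    """Return all listed pronunciations for a character, on'yomi first."""
    entry = readings.get(char, {})
    return entry.get("on", []) + entry.get("kun", [])

print(possible_readings("生"))  # ['sei', 'shō', 'ikiru', 'nama']
```

Which of these the reader should produce cannot be decided from the character alone, which is exactly the selection problem the thesis investigates.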
The pronunciation depends on the context of the surrounding characters, says cognitive psychologist Rinus Verdonschot, who is defending his thesis on this topic on 12 May: ‘In Dutch, words are often pronounced exactly as they are spelled. Because of this lack of ambiguity, we are often insensitive, when reading, to the context in which a word appears. Japanese, on the other hand, carries a relatively high processing cost, i.e. the time between reading a word and pronouncing it. This is due to the different pronunciation possibilities. This cost means that the context has more time to influence the reader.’
When a kanji, a Japanese character, has more than one possible pronunciation, are the various options simultaneously activated in the brain? That was Verdonschot’s research question. ‘Although some researchers assume that this is the case, it had never yet been proven. We have now for the first time found direct scientific evidence for it.’
Both pronunciations active
Verdonschot used a masked priming experiment. Subjects were briefly shown a kanji (called the ‘prime’ in the experiment) with more than one possible pronunciation. The researchers then showed the subjects another character which they were required to read aloud (the ‘target’). The conclusion? If the prime shared a pronunciation with the target, whether the Japanese or the Chinese reading, the subjects were consistently quicker to read the target than when an unrelated prime was shown. This means that both pronunciations become activated in the brain.
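The logic of the analysis in such a priming study can be sketched as follows: group naming latencies by prime condition and compare the means. The trial values below are invented placeholders for illustration, not Verdonschot’s data; the condition names are likewise assumptions.

```python
# Hypothetical sketch of a masked-priming analysis: compare mean
# naming latencies (reaction times) between prime conditions.
# All numbers here are invented placeholders, not experimental results.
from statistics import mean

trials = [
    {"condition": "related", "rt_ms": 520},    # prime shares a reading with target
    {"condition": "related", "rt_ms": 540},
    {"condition": "unrelated", "rt_ms": 580},  # prime shares no reading with target
    {"condition": "unrelated", "rt_ms": 600},
]

def mean_rt(trials, condition):
    """Mean reaction time (ms) for one prime condition."""
    return mean(t["rt_ms"] for t in trials if t["condition"] == condition)

# A positive difference means subjects named the target faster
# after a related prime, the signature of a priming effect.
priming_effect = mean_rt(trials, "unrelated") - mean_rt(trials, "related")
print(priming_effect)
```

In the actual study, a reliable speed-up in the related condition is what licensed the conclusion that both candidate pronunciations had been activated.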
Moras rather than phonemes
The thesis also discusses the so-called production unit in Japanese. Verdonschot: ‘In English, we use phonemes. The word “cat”, for instance, consists of three phonemes: k, a and t. Japanese, on the other hand, divides words into a different kind of unit, known as a mora, which usually consists of a consonant followed by a vowel. We have shown that in production too, across the different pronunciations, moras rather than phonemes are used as the unit.’
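The contrast between moras and phonemes can be made concrete with kana spelling, where each symbol corresponds to one mora except the small glide symbols ゃ/ゅ/ょ, which merge with the preceding symbol. The segmenter below is a deliberate simplification for illustration, not a linguistically complete one.

```python
# Sketch: splitting a kana string into moras. Each kana symbol is one
# mora, except small ゃ/ゅ/ょ glides, which attach to the preceding
# symbol (き + ょ = one mora "kyo"). Simplified for illustration.
SMALL_GLIDES = set("ゃゅょャュョ")

def moras(kana):
    """Return the mora units of a kana string."""
    units = []
    for ch in kana:
        if ch in SMALL_GLIDES and units:
            units[-1] += ch  # glide joins the preceding mora
        else:
            units.append(ch)
    return units

print(moras("にほん"))      # ['に', 'ほ', 'ん'] -> "nihon": 3 moras
print(moras("とうきょう"))  # ['と', 'う', 'きょ', 'う'] -> "tōkyō": 4 moras
```

Note the mismatch with phoneme counting: "nihon" has five phonemes but only three moras, which is the kind of unit difference the production experiments tap into.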
Word processing in languages using non-alphabetic scripts: The cases of Japanese and Chinese
Thursday 12 May 2011, 13.45 hrs
Supervisor: Professor N.O. Schiller
Academy Building, Rapenburg 73