So far we’ve been focusing primarily on English, but it’s important to remember that the phonology of each language is specific to that language: the patterns of which features and segments contrast with each other and which are simply allophones are different in each language of the world. So, for example, we know that in English, aspirated [pʰ] and unaspirated [p] are both allophones of a single phoneme. But in Thai, these two segments contrast with each other and are two different phonemes. The phonetic difference is the same, but how that difference is organized in the mental grammar differs between the two languages. This has effects when adults are trying to learn a second language.
Now, it’s a stereotype that people who are native speakers of Japanese often have difficulty when they’re learning some sounds of English, particularly in learning the difference between English /ɹ/ and /l/. These two sounds are contrastive in English, and we have lots of minimal pairs that provide evidence for that contrast, like rake and lake, fall and far, cram and clam. But neither of these segments is part of the Japanese phoneme inventory. Japanese has one phoneme, the retroflex flap /ɽ/, that is phonetically a little bit similar to English /l/ and a little bit similar to English /ɹ/. So given that English /ɹ/ and /l/ are both phonetically different from Japanese /ɽ/, and are phonetically different from each other, why is this phonemic contrast hard for Japanese learners to master?
To answer this question, we have to look at babies. Babies learn the phonology of their native language very early. We know that newborn babies can recognize all kinds of phonetic differences. You might be wondering how we can tell what sounds a baby can recognize — we can’t just ask them, “Are these two sounds the same or different?” But we can use a habituation technique to observe whether they notice a difference or not. Babies can’t do much, but one thing they’re very good at is sucking. Using an instrument called a pressure transducer, which is connected to a pacifier, we can measure how powerfully they suck. When a baby is interested in something, like a sound that she’s hearing, she starts to suck harder. If you keep playing that same sound, eventually she’ll get bored and her sucking strength will decrease. When her sucking strength drops off, we say that the baby has habituated to the sound. But if you play a new sound, she gets interested and starts sucking powerfully again. So we can observe whether a baby notices the difference between two sounds by observing whether her sucking strength increases when we switch from one sound to the other. For newborn infants, we observe habituation with sucking strength; for babies who are a little older, we can observe habituation just by where they look: they’ll look toward a source of sound when they’re interested in it, then look away once they’ve habituated. If they notice a change in the sound, they’ll look back toward it.
Using this technique, linguists and psychologists have learned that babies are very good at noticing phonetic differences, and they can tell the difference between all kinds of different sounds from many different languages. But this ability changes within the first year of life. A researcher named Janet Werker at the University of British Columbia looked at children’s and adults’ ability to notice the phonetic difference between three different pairs of syllables: the English contrast between /ba/ and /da/, the Hindi contrast between a retroflex stop /ʈa/ and a dental stop /t̪a/, and a Salish contrast between a glottalized velar stop /kʼi/ and a glottalized uvular stop /qʼi/. Each of these pairs differs in place of articulation, and within each language, each pair is contrastive. Werker played a series of syllables and asked English-speaking adults to press a button when the syllables switched from one segment to the other. As you might expect, the English-speaking adults were perfect at the English contrast but did extremely poorly on the Hindi and Salish contrasts.
Then Werker tested babies’ ability to notice these three phonetic differences, using the head-turn paradigm. These babies were growing up in monolingual English-speaking homes. At age six months, the English-learning babies were about 80–90% successful at noticing the differences in English, in Hindi, and in Salish. But by age ten months, their success rate on the Hindi and Salish contrasts had dropped to about 50–60%, and by the time they were one year old, they were only about 10–20% successful at hearing those phonetic differences. So these kids are only one year old, they’ve been hearing English spoken for only one year, and they’re not even really speaking it themselves yet, but their performance on this task already matches that of English-speaking adults. The difference between retroflex [ʈa] and dental [t̪a] is not contrastive in English, so the mental grammar of the English-learning baby has already categorized both of those sounds as just unusual-sounding allophones of the English alveolar /t/. Likewise, the difference between a velar and a uvular stop, which is contrastive in Salish, is not meaningful in English, so the baby’s mind has already learned to treat a uvular stop as an allophone of the velar stop, not as a separate phoneme.
So if we go back to our question of why it’s so hard for adults to learn a phonemic contrast in a new language, like the Japanese learners who have difficulty with English /l/ and /ɹ/, the answer is that, by the time they’re one year old, the mental grammar of Japanese-learning babies has already formed a single phoneme category that treats sounds like English [l] and [ɹ] as allophones of that one phoneme. To recognize the contrast in English, a Japanese learner has to develop two separate phoneme categories.