-
Using the visual world paradigm, we compared, first, L1 and L2 speakers’ anticipation of upcoming information in a discourse and, second, L1 and L2 speakers’ ability to infer the meaning of unknown words in a discourse based on the semantic cues provided in spoken language context. It was found that native speakers were able to use the given contextual cues, throughout the discourse, to anticipate upcoming linguistic input and fixate targets consistent with the input thus far, while L2 speakers showed weaker effects of discourse context on target fixations. However, both native speakers and L2 learners alike were able to use contextual information to infer the meaning of unknown words embedded in the discourse and fixate images associated with the inferred meanings of these words, especially given adequate contextual information. We suggest that these results reflect similarly successful integration of the preceding semantic information and the construction of integrated mental representations of the described scenarios in L1 and L2.
-
While the specificity of infants’ early lexical representations has been studied extensively, researchers have only recently begun to investigate how words are organized in the developing lexicon and what mental representations are activated during processing of a word. Integrating these two lines of research, the current study asks how specific the phonological match between a perceived word and its stored form has to be in order to lead to (cascaded) lexical activation of related words during infant lexical processing. We presented German 24-month-olds with a cross-modal semantic priming task where the prime word was either correctly or incorrectly pronounced. Results indicate that correct pronunciations and mispronunciations both elicit similar semantic priming effects, suggesting that the infant word recognition system is flexible enough to handle deviations from the correct form. This might be an important prerequisite to children’s ability to cope with imperfect input and to recognize words under more challenging circumstances.
-
We examined how L2 exposure early in life modulates toddler word recognition by comparing German–English bilingual and German monolingual toddlers’ recognition of words that overlapped to differing degrees, measured by number of phonological features changed, between English and German (e.g., identical, 1-feature change, 2-feature change, 3-feature change, no overlap). Recognition in English was modulated by language background (bilinguals vs. monolinguals) and by the amount of phonological overlap that English words shared with their L1 German translations. L1 word recognition remained unchanged across conditions between monolingual and bilingual toddlers, showing no effect of learning an L2 on L1 word recognition in bilingual toddlers. Furthermore, bilingual toddlers who had a later age of L2 acquisition had better recognition of words in English than those toddlers who acquired English at an earlier age. The results suggest an important role for L1 phonological experience in early bilingual L2 word recognition.
-
This editorial provides an overview of the Special Issue of the Journal of Experimental Child Psychology devoted to interrelations between non-linguistic and linguistic representations of cognition and action in development, presenting current empirical evidence from different areas of developmental research. The key question is whether and how non-linguistic and linguistic modes of representation and perception are linked and interrelated during the course of development. The contributions address this question with respect to different areas of development and using a variety of approaches, focusing primarily on infancy and early childhood, spanning from 9 months to 4 years. This period of life is of particular relevance to language development because children start to produce their first words around their first birthday and their language acquisition undergoes rapid development during the next few years.
-
Visual cues from the speaker's face, such as the discriminable mouth movements used to produce speech sounds, improve discrimination of these sounds by adults. The speaker's face, however, provides more information than just the mouth movements used to produce speech--it also provides a visual indexical cue of the identity of the speaker. The current article examines the extent to which there is separable encoding of speaker identity in speech processing and asks whether speech discrimination is influenced by speaker identity. Does consistent pairing of different speakers' faces with different sounds--that is, hearing one speaker saying one sound and a second speaker saying the second sound--influence the brain's discrimination of the sounds? ERP data from participants previously exposed to consistent speaker-sound pairing indicated improved detection of the phoneme change relative to participants previously exposed to inconsistent speaker-sound pairing--that is, hearing both speakers say both sounds. The results strongly suggest an influence of visual speaker identity in speech processing.
-
One of the first challenges facing the young language learner is the task of segmenting words from a natural language speech stream, without prior knowledge of how these words sound. Studies with younger infants find that they segment words from fluent speech more easily when the words are presented in infant-directed speech, i.e., the kind of speech typically directed toward infants, than in adult-directed speech. The current study examines whether infants continue to display similar differences in their segmentation of infant- and adult-directed speech later in development. We show that 16-month-old infants successfully segment words from a natural language speech stream presented in the adult-directed register and recognize these words later when presented in isolation. Furthermore, there were no differences in infants’ ability to segment words from infant- and adult-directed speech at this age, although infants’ success at segmenting words from adult-directed speech correlated with their vocabulary size.
-
Examined whether the processing of individual words in silent reading is impacted by rhythmic properties of the surrounding context. Listeners are sensitive to the metric structure of words, i.e., an alternating pattern of stressed and unstressed syllables, in auditory speech processing; if this sensitivity extends to silent reading, readers should register words that are metrically incongruent with a preceding sequence of words sharing a consistent metrical pattern, e.g., a series of trochaic words. Event-related potentials (ERPs) were recorded as 19 participants (mean age 24 years) read lists of either three trochaic or three iambic disyllabic words followed by a target word that was either congruent or incongruent with the preceding metric pattern. Results showed that ERPs to targets were modulated by an interaction between metrical structure (iambic vs trochaic) and congruence: for iambs, more positive ERPs were observed in the incongruent than in the congruent condition 250-400 ms and 400-600 ms poststimulus, whereas no reliable impact of congruence was found for trochees. It is suggested that when iambs appear in an incongruent context, i.e., preceded by trochees, the context contains the metrical structure that is more typical in participants' native language, which facilitates processing relative to when iambs are presented in a congruent context containing the less typical, i.e., iambic, metrical structure. The results provide evidence that comprehenders are sensitive to the prosodic properties of the context even in silent reading, such that this sensitivity impacts lexicosemantic processing of individual words.
-
Are there individual differences in children's prediction of upcoming linguistic input and what do these differences reflect? Using a variant of the preferential looking paradigm (Golinkoff, Hirsh-Pasek, Cauley, & Gordon, 1987), we found that, upon hearing a sentence like, ``The boy eats a big cake,'' 2-year-olds fixate edible objects in a visual scene (a cake) soon after they hear the semantically constraining verb eats and prior to hearing the word cake. Importantly, children's prediction skills were significantly correlated with their productive vocabulary size--skilled producers (i.e., children with large production vocabularies) showed evidence of predicting upcoming linguistic input, while low producers did not. Furthermore, we found that children's prediction ability is tied specifically to their production skills and not to their comprehension skills. Prediction is really a piece of cake, but only for skilled producers.
-
We examined the contents of language-mediated prediction in toddlers by investigating the extent to which toddlers are sensitive to visual shape representations of upcoming words. Previous studies with adults suggest limits to the degree to which information about the visual form of a referent is predicted during language comprehension in low constraint sentences. Toddlers (30-month-olds) heard either contextually constraining sentences or contextually neutral sentences as they viewed images that were either identical or shape-related to the heard target label. We observed that toddlers activate shape information of upcoming linguistic input in contextually constraining semantic contexts; hearing a sentence context that was predictive of the target word activated perceptual information that subsequently influenced visual attention toward shape-related targets. Our findings suggest that visual shape is central to predictive language processing in toddlers.
-
Infants become selectively sensitive to phonological distinctions relevant to their native language at an early age. One might expect that infants bring some of this phonological knowledge to bear in encoding the words they subsequently acquire. In line with this expectation, studies have found that 14-month-olds are sensitive to mispronunciations of initial consonants of familiar words when asked to identify a referent. However, there is very little research investigating infants' sensitivity to vowels in lexical representations. Experiment 1 examines whether infants at 15, 18, and 24 months are sensitive to mispronunciations of vowels in familiar words. The results provide evidence for vowels constraining lexical recognition of familiar words. Experiment 2 compares 15-, 18-, and 24-month-olds' sensitivity to consonant and vowel mispronunciations of familiar words in order to assess the relative contribution of vowels and consonants in constraining lexical recognition. Our results suggest a symmetry in infants' sensitivity to vowel and consonant mispronunciations early in the second year of life.
-
Previous research has shown that English infants are sensitive to mispronunciations of vowels in familiar words by as early as 15 months of age. These results suggest that infants are sensitive not only to large mispronunciations of the vowels in words, but also to smaller mispronunciations involving changes to only one dimension of the vowel. The current study broadens this research by comparing infants' sensitivity to the different types of changes involved in the mispronunciations: changes to the backness, height, and roundedness of the vowel. Our results confirm that 18-month-olds are sensitive to small changes to the vowels in familiar words. Our results also indicate a differential sensitivity of vocalic specification, with infants being more sensitive to changes in vowel height and vowel backness than in vowel roundedness. Taken together, the results provide clear evidence for the specificity of vowels and vocalic features such as vowel height and backness in infants' lexical representations.
-
Investigated the cognitive processes involved in 24-month-old toddlers' word recognition in 2 experiments, examining how words are represented in the toddler's mind and, in particular, whether the phonological properties of words are important for their organization in the toddler lexicon. In Experiment 1, a digital video scoring system was used to assess visual fixations while 32 toddlers (aged 23-24 months) were presented with the paradigm used by N. Mani and K. Plunkett (2010), in which phonologically related and unrelated primes were used to test whether the phonological relations between prime-target pairs would influence children's target recognition. Despite showing target recognition in both conditions, the toddlers looked longer at the target following unrelated primes than following related primes. The authors suggest that this pattern of responding is indicative of lexical-level interference effects influencing target responding in 24-month-olds. Experiment 2 differed only in the inclusion of unprimed baseline trials, in which 28 toddlers (aged 22-25 months) were presented with a cross in the middle of the screen in place of a prime image, followed by the simultaneous presentation of target-distractor images and subsequent naming of the target image. The results supported the findings of Experiment 1; in addition, large-cohort trials resulted in reduced target looking compared to small-cohort trials, indicating that phonological priming is not a necessary condition for the observed lexical-level cohort effects. It is concluded that, by 24 months of age, children's responding in word recognition tasks approximates adult-like performance in that words begin to cluster together in the toddler lexicon based on their phonological properties, so that word recognition involves the activation and processing of phonologically related words.
-
While there are numerous studies that investigate the amount of phonological detail associated with toddlers’ lexical representations of words and their sensitivity to mispronunciations of these words, research has only recently begun to address the mechanisms guiding the use of this detail during word recognition. The current chapter reviews the literature on experiments using the visual world paradigm to assess infant word recognition, in particular, the amount of attention infants pay to phonological detail in word recognition. We further present data from a novel study using a visual priming paradigm to assess the extent to which toddlers retrieve sub-phonemic detail during lexical access. The results suggest that both the retrieval of an object’s label and toddlers’ recognition of a word involve activation of not only phonemic but also sub-segmental information associated with the lexical representation of this word. We therefore conclude that lexical access in toddlers is mediated by sub-phonemic information.
-
-
This paper investigated how foreign-accented stress cues affect on-line speech comprehension in British speakers of English. While unstressed English vowels are usually reduced to /ə/, Dutch speakers of English only slightly centralize them. Speakers of both languages differentiate stress by suprasegmentals (duration and intensity). In a cross-modal priming experiment, English listeners heard sentences ending in monosyllabic prime fragments—produced by either an English or a Dutch speaker of English—and performed lexical decisions on visual targets. Primes were either stress-matching (“ab” excised from absurd), stress-mismatching (“ab” from absence), or unrelated (“pro” from profound) with respect to the target (e.g., ABSURD). Results showed a priming effect for stress-matching primes only when produced by the English speaker, suggesting that vowel quality is a more important cue to word stress than suprasegmental information. Furthermore, for visual targets with word-initial secondary stress that do not require vowel reduction (e.g., CAMPAIGN), resembling the Dutch way of realizing stress, there was a priming effect for both speakers. Hence, our data suggest that Dutch-accented English is not harder to understand in general, but it is in instances where the language-specific implementation of lexical stress differs across languages.
-
Bilingual children, like bilingual adults, co-activate both languages during word recognition and production. But what is the extent of this co-activation? In the present study, we asked whether or not bilingual preschool children activate a shared phonological cohort across languages when hearing words only in their L1. We tested German-English children on a cross-modal priming paradigm. To ensure co-activation of languages, children first heard a short code-switch story. Compared to a monolingual control group, bilingual children in Experiment 1 showed only partial sensitivity to the L1 cohort. Bilingual children who did not hear the code-switch story (Experiment 2) showed priming effects identical to the monolinguals in Experiment 1. Results indicate that under single-language contexts, German-English bilingual preschoolers do not activate the non-target language cohort during word recognition but instead restrict cohort activation to the language of input. In contrast, presentation of the non-target language in the code-switch story appears to shift cohort activation and increase L2 activation, suggesting a highly flexible language system that is in tune to the broader linguistic context. We consider mechanisms of bilingual language control that may enable bilingual toddlers to limit cross-language phonological activation.
-
While American English infants typically segment words from fluent speech by 7.5 months of age, studies of infants from other language backgrounds have often failed to replicate this finding. One possible explanation for this cross-linguistic difference is that the input infants from other language backgrounds receive is not as infant-directed as American English infant-directed speech (IDS; Floccia et al., 2016). Against this background, the current study investigates whether German 7.5- and 9-month-old infants segment words from fluent speech when the input is prosodically similar to American English IDS. While 9-month-olds showed successful segmentation of words from exaggerated IDS, 7.5-month-olds did not. These findings highlight (a) the beneficial impact of exaggerated IDS on infant speech segmentation and (b) cross-linguistic differences in word segmentation that are not based solely on the kind of input available to children, and suggest (c) developmental differences in the role of IDS as an attentional spotlight in speech segmentation.
-
We examined how words from bilingual toddlers’ second language (L2) primed recognition of related target words in their first language (L1). On critical trials, prime–target word pairs were either (a) phonologically related, in which L2 primes overlapped phonologically with L1 target words [e.g., slide (L2 prime)–Kleid (L1 target, ‘‘dress’’)], or (b) phonologically related through translation, in which L1 translations of L2 primes rhymed with the L1 target words [e.g., leg (L2 prime, L1 translation ‘‘Bein’’)–Stein (L1 target, ‘‘stone’’)]. Evidence of facilitated target recognition in the phonological priming condition suggests language-nonselective access but not necessarily lexical access. However, a late interference effect on target recognition in the phonological priming through translation condition provides evidence for language-nonselective lexical access: the L2 prime (leg) could influence L1 target recognition (Stein) in this condition only if both the L2 prime (leg) and its L1 translation (‘‘Bein’’) were concurrently activated. In addition, age- and gender-matched monolingual toddler controls showed no difference between conditions, providing further evidence that the results with bilingual toddlers were driven by cross-language activation. The current study, therefore, presents the first-ever evidence of cross-talk between the two languages of bilinguals even as they begin to acquire fluency in their second language.
-
Presents a critical review of arguments in favor of and against the view that prediction is necessary for understanding language. First, potential arguments in favor of the view that prediction provides a unified theoretical principle of the human mind and that it pervades cortical function are reviewed. It is discussed whether evidence of human abilities to detect statistical regularities is necessarily evidence for predictive processing, and suggestions that prediction is necessary for language learning are evaluated. Next, arguments in support of the contrasting viewpoint are reviewed: that prediction lends a ``helping hand'', but is not strictly needed for language processing. It is pointed out that not all language users appear to predict language and that suboptimal input often makes prediction very challenging. Prediction, moreover, is argued to be strongly context-dependent and impeded by resource limitations. Furthermore, it is argued that it may be problematic that most experimental evidence for predictive language processing comes from prediction-encouraging experimental set-ups. It is concluded that languages can be learned and understood in the absence of prediction. Claims that all language processing is predictive in nature are considered to be premature.
-
Do infants implicitly name visually fixated objects whose names are known, and does this information influence their preference for looking at other objects? We presented 18-month-old infants with a picture-based phonological priming task and examined their recognition of named targets in primed (e.g., dog-door) and unrelated (e.g., dog-boat) trials. Infants showed better recognition of the target object in primed than in unrelated trials across three measures. As the prime image was never explicitly named during the experiment, the only explanation for the systematic influence of the prime image on target recognition is that infants, like adults, can implicitly name visually fixated images and that these implicitly generated names can prime infants' subsequent responses in a paired visual-object spoken-word-recognition task.