-
Examined in 4 experiments whether participants with different objective performance nonetheless subjectively experience the stimulation in the same way. In Experiments 1 and 2 the same 30 subjects (mean age 23 years) participated. In Experiment 3 only 29 of the original 30 participated (1 underachiever refused to participate again), and in Experiment 4 a new group of 31 subjects (mean age 22 years) took part. In one group of observers objective performance increased with increasing target–mask stimulus onset asynchrony (SOA), whereas in another group performance decreased with increasing SOA. In addition, a group of overachievers showed ceiling effects, whereas a group of underachievers hardly exceeded chance levels of performance irrespective of SOA. The differences between observers' objective measures of performance corresponded to differences in participants' phenomenological reports of subjective experience. This indicates that participants differ in their access to specific perceptual cues that they spontaneously use to solve the task. When participants were instructed to use only a single specific cue, the instructed cue substantially determined participants' objective performance in 2 experiments. Nevertheless, masking functions remained similar with and without the cue instruction, and the effect of cues depended on individuals' initial masking functions. Findings suggest that individuals with different masking functions also differ in phenomenology, the cues they use, and response strategy. The authors conclude that the relation between subjective experience, reported usage of perceptual cues, and objective performance in the metacontrast masking task deserves further investigation.
-
Replies to a comment by T. Bachmann (same issue) on a study by T. Albrecht, S. Klapötke, and U. Mattler (same issue) on individual differences in metacontrast masking. In that study, it was found that perceptual learning enhanced qualitative individual differences in metacontrast masking between 2 groups of observers. The issues raised included initial similarities and differences between Type A and Type B observers, whether the results can be attributed to a difference in direct phenomenal experience or in criteria, and an emphasis on the importance of individual data. It is indicated that the observers were similar at the beginning of each experiment, and that further research is needed to locate the source of the observed individual differences and to determine what the groups have in common and where they differ. Furthermore, it is argued that the observers were not able to choose which feature they used, and that perceptual learning improves either conscious perception or the use of criteria. Finally, the importance of the data of individual participants in experimental psychology is highlighted.
-
In metacontrast masking, target visibility is modulated by the time until a masking stimulus appears. The effect of this temporal delay differs across participants in such a way that individual human observers’ performance shows distinguishable types of masking functions, which remain largely unchanged for months. Here we examined whether individual differences in masking functions depend on different response criteria in addition to differences in discrimination sensitivity. To this end, we reanalyzed previously published data and conducted a new experiment for further data analyses. Our analyses demonstrate that a distinction of masking functions based on the type of masking stimulus is superior to a distinction based on target–mask congruency. Individually different masking functions are based on individual differences in discrimination sensitivities and in response criteria. Results suggest that individual differences in metacontrast masking result from individually different criterion contents.
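As a point of reference for the sensitivity/criterion distinction drawn above, the minimal sketch below computes the two standard signal-detection indices from one observer's response counts. The equal-variance Gaussian model, the log-linear correction, and the example counts are illustrative assumptions, not analysis details taken from the abstract.

```python
import numpy as np
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: returns sensitivity d' and criterion c.

    A log-linear correction (add 0.5 to each cell) avoids infinite z-scores
    when hit or false-alarm rates are 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Hypothetical response counts for one observer at a single target-mask SOA
d_prime, criterion = sdt_indices(hits=38, misses=12, false_alarms=10, correct_rejections=40)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```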
-
In vision research, metacontrast masking is a widely used technique to reduce the visibility of a stimulus. Typically, studies attempt to reveal general principles that apply to a large majority of participants and tend to omit possible individual differences. The neural plasticity of the visual system, however, entails the potential capability for individual differences in the way observers perform perceptual tasks. We report a case of perceptual learning in a metacontrast masking task that leads to the enhancement of differences between two types of adult human observers despite identical learning conditions. In a priming task, both types of observers exhibited the same priming effects, which were insensitive to learning. Findings suggest that visual processing of target stimuli in the metacontrast masking task is based on neural levels that have sufficient plasticity to enable the development of two types of observers and that do not contribute to the processing of target stimuli in the priming task.
-
Professionally edited videos entail frequent editorial cuts, that is, abrupt image changes from one frame to another. The impact of these cuts on human eye movements is currently not well understood. In the present eye-tracking study, we experimentally gauged the degree to which color and visual continuity contributed to viewers' eye movements following cinematic cuts. In our experiment, viewers were presented with two edited action sports movies on the same screen, but they were instructed to watch and keep their gaze on only one of these movies. Crucially, the movies were frequently interrupted and continued after a short break either at the same or at switched locations. Hence, viewers needed to rapidly recognize the continuation of the relevant movie and re-orient their gaze toward it. Properties of saccadic eye movements following each interruption probed the recognition of the relevant movie after a cut. Two key findings were that (i) memory co-determines attention after cuts in edited videos, resulting in faster re-orientation toward scene continuations when visual continuity across the interruption is high than when it is low, and (ii) color contributes to the guidance of attention after cuts, but its benefit largely rests upon enhanced discrimination of relevant from irrelevant visual information rather than on memory. Results are discussed with regard to previous research on eye movements in movies and recognition processes. Possible future directions of research are outlined.
-
Eye fixations allow the human viewer to perceive scene content with high acuity. If fixations drive visual memory for scenes, a viewer might repeat his/her previous fixation pattern during recognition of a familiar scene. However, visual salience alone could account for similarities between two successive fixation patterns by attracting the eyes in a stimulus-driven, task-independent manner. In the present study, we tested whether the viewer’s aim to recognize a scene fosters fixations on scene content that repeats from learning to recognition as compared to the influence of visual salience alone. In Experiment 1 we compared the gaze behavior in a recognition task to that in a free-viewing task. By showing the same stimuli in both tasks, the task-independent influence of salience was held constant. We found that during a recognition task, but not during (repeated) free viewing, viewers showed a pronounced preference for previously fixated scene content. In Experiment 2 we tested whether participants remembered visual input that they fixated during learning better than salient but nonfixated visual input. To that end we presented participants with smaller cutouts from learned and new scenes. We found that cutouts featuring scene content fixated during encoding were recognized better and faster than cutouts featuring nonfixated but highly salient scene content from learned scenes. Both experiments supported the hypothesis that fixations during encoding and maybe during recognition serve visual memory over and above a stimulus-driven influence of visual salience.
-
Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested the degree to which memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In both experiments, participants were able to deploy attention more rapidly and accurately to the target movie’s continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer’s active matching of scene content across cuts.
-
We tested whether viewers have cognitive control over their eye movements after cuts in videos of real-world scenes. In the critical conditions, scene cuts constituted panoramic view shifts: Half of the view following a cut matched the view on the same scene before the cut. We manipulated the viewing task between two groups of participants. The main experimental group judged whether the scene following a cut was a continuation of the scene before the cut. Results showed that following view shifts, fixations were determined by the task from 250 ms until 1.5 s: Participants made more and earlier fixations on scene regions that matched across cuts, compared to nonmatching scene regions. This was evident in comparison to a control group of participants who performed a task that did not require judging scene continuity across cuts and who did not show the preference for matching scene regions. Our results illustrate that viewing intentions can have robust and consistent effects on gaze behavior in dynamic scenes, immediately after cuts.
-
[Correction Notice: An Erratum for this article was reported in Vol 95 of Animal Behaviour (see record 2014-36193-024). The authors specify that with regard to the faecal cortisol analyses an 11-oxoetiocholanolone immunoassay (Möstl, Maggs, Schrötter, Besenfelder, & Palme, 2002; Wallner, Möstl, Dittami, & Prossinger, 1999) was applied to measure cortisol equivalent metabolites in faeces.] Colour signals play a major role in social and sexual communication in a broad range of animal species. Previous studies on nonhuman primates showed that intense female skin coloration attracts male attention. We investigated (1) whether sexually active male Japanese macaques are attracted by intensely coloured female skin, (2) whether a preference for intense skin coloration results from the increased colour contrast between the skin area and its surroundings irrespective of the red chromaticity, and (3) whether the endocrine status of sexually active males affects their attentional selectivity (or preference) for salient female sexual skin coloration. We conducted two behavioural experiments in two consecutive mating seasons. First, we presented two female face images coloured in a natural range of red skin coloration on monitors. Second, we presented the same faces dissociated from the red chromaticity while maintaining their initial colour contrast properties. In both experiments we analysed male selective visual attention and approaches as a function of stimulus type. Faecal samples were collected after each experiment to analyse focal males' cortisol and testosterone excretion rates. We found that female facial skin coloration triggered selective behaviour in social-living male Japanese macaques. Variances in colour contrast also triggered males' selective orienting towards an intensely coloured face image but the red chromaticity remained essential to induce prolonged male interest. Furthermore, elevated cortisol facilitated male preferences for the intensely coloured female faces, sociosexual stimuli that are presumably highly relevant during the mating season. Future studies may pursue the principle of colour contrast in male attentional behaviour with respect to subtle colour changes expressed by females throughout the reproductive cycle. Cortisol-related physiological processes should be considered in studies on mating-relevant selective attention.
-
In visual search for pop-out targets, search times are shorter when the target and nontarget colors from the previous trial are repeated than when they change. This priming effect was originally attributed to a feature weighting mechanism that biases attention toward the target features and away from the nontarget features. However, more recent studies have shown that visual selection is strongly context-dependent: according to a relational account of feature priming, the target color is always encoded relative to the nontarget color (e.g., as redder or greener). The present study provides a critical test of this hypothesis, by varying the colors of the search items such that either the relative color or the absolute color of the target always remained constant (or both). The results clearly show that color priming depends on the relative color of a target with respect to the nontargets but not on its absolute color value. Moreover, the observed priming effects did not change over the course of the experiment, suggesting that the visual system encodes colors in a relative manner from the start of the experiment. Taken together, these results strongly support a relational account of feature priming in visual search, and are inconsistent with the dominant feature-based views.
-
How quickly do children and adults interpret scalar lexical items in speech processing? The current study examined interpretation of the scalar terms some vs. all in contexts where either the stronger (some = not all) or the weaker (some allows all) interpretation was permissible. Children and adults showed increased negative deflections in brain activity following the word some in some-infelicitous versus some-felicitous contexts. This effect was found as early as 100 ms across central electrode sites (in children) and 300–500 ms across left frontal, fronto-central, and centro-parietal electrode sites (in children and adults). These results strongly suggest that young children (aged between 3 and 4 years) as well as adults rapidly access the contextually appropriate interpretation of scalar terms.
-
One of the most important social cognitive skills in humans is the ability to “put oneself in someone else’s shoes,” that is, to take another person’s perspective. In socially situated communication, perspective taking enables the listener to arrive at a meaningful interpretation of what is said (sentence meaning) and what is meant (speaker’s meaning) by the speaker. To successfully decode the speaker’s meaning, the listener has to take into account which information he/she and the speaker share in their common ground (CG). Here we further investigated competing accounts of when and how CG information affects language comprehension by means of reaction time (RT) measures, accuracy data, event-related potentials (ERPs), and eye-tracking. Early integration accounts predict that CG information is considered immediately and hence that there should be no costs of CG integration. Late integration accounts predict a rather late and effortful integration of CG information during the parsing process that might be reflected in integration or updating costs. Other accounts predict the simultaneous integration of privileged ground (PG) and CG perspectives. We used a computerized version of the referential communication game with object triplets of different sizes presented visually in CG or PG. In critical trials (i.e., conflict trials), CG information had to be integrated while privileged information had to be suppressed. Listeners mastered the integration of CG (response accuracy 99.8%). Yet slower RTs and enhanced late positivities in the ERPs showed that CG integration had its costs. Moreover, eye-tracking data indicated an early anticipation of referents in CG but an inability to suppress looks to the privileged competitor, resulting in later and longer looks to targets in those trials in which CG information had to be considered. Our data therefore support accounts that assume an early anticipation of referents in CG but a rather late and effortful integration when conflicting information has to be processed. We show that both perspectives, PG and CG, contribute to socially situated language processing and discuss the data with reference to theoretical accounts and recent findings on the use of CG information for reference resolution.
-
Developmental research, like many fields, is plagued by low sample sizes and inconclusive findings. The problem is amplified by the difficulties associated with recruiting infant participants for research as well as the increased variability in infant responses. With sequential testing designs providing a viable alternative to paradigms facing such issues, the current study implemented a Sequential Bayes Factor design on three findings in the developmental literature. In particular, using the framework described by Schönbrödt and colleagues (2017), we examined infants’ sensitivity to mispronunciations of familiar words, their learning of novel word-object associations from cross-situational learning paradigms, and their assumption of mutual exclusivity in assigning novel labels to novel objects. We tested an initial sample of 20 participants in each study, incrementally increasing the sample size by one and computing a Bayes Factor with each additional participant. In one study, we were able to obtain moderate evidence for the alternative hypothesis despite testing fewer than half as many participants as in the original study. We did not replicate the findings of the cross-situational learning study. Indeed, the data were five times more likely under the null hypothesis, allowing us to conclude that infants did not recognize the trained word-object associations presented in the task. We discuss these findings in light of the advantages and disadvantages of using a Sequential Bayes Factor design in developmental research while also providing researchers with an account of how we implemented this design across multiple studies.
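To make the sequential procedure concrete, here is a minimal sketch of a Sequential Bayes Factor loop of the kind described above. The minimum sample of 20 and the one-by-one increments follow the abstract; the evidence thresholds of 10 and 1/10, the one-sample test against chance, the synthetic data, and the BIC-approximation Bayes factor (a stand-in for the default JZS Bayes factor, used only to keep the example self-contained) are assumptions for illustration.

```python
import numpy as np

def bf10_one_sample(x, null_value=0.0):
    """BIC-approximation Bayes factor (Wagenmakers, 2007) for a one-sample test.

    Compares H1 (free mean) against H0 (mean == null_value); larger values
    favour H1. Illustrative stand-in for a default JZS Bayes factor.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    ss0 = np.sum((x - null_value) ** 2)          # residual SS under H0
    ss1 = np.sum((x - x.mean()) ** 2)            # residual SS under H1
    bic0 = n * np.log(ss0 / n) + 1 * np.log(n)   # 1 free parameter (variance)
    bic1 = n * np.log(ss1 / n) + 2 * np.log(n)   # 2 free parameters (mean, variance)
    return np.exp((bic0 - bic1) / 2.0)

def sequential_bayes_factor(scores, n_min=20, bf_upper=10.0, bf_lower=1 / 10.0):
    """Add one participant at a time and stop once the Bayes factor crosses
    either evidence threshold (or the available sample is exhausted)."""
    bf = np.nan
    for n in range(n_min, len(scores) + 1):
        bf = bf10_one_sample(scores[:n], null_value=0.5)  # e.g., target looking vs. chance = .5
        if bf >= bf_upper or bf <= bf_lower:
            return n, bf
    return len(scores), bf

# Illustrative data: proportion of target looking per infant (chance = .5)
rng = np.random.default_rng(1)
scores = rng.normal(loc=0.55, scale=0.10, size=60)
n_final, bf_final = sequential_bayes_factor(scores)
print(f"Stopped at N = {n_final} with BF10 = {bf_final:.2f}")
```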
-
Studies on lexical development in young children often suggest that the organisation of the early lexicon may vary with age and increasing vocabulary size. In the current study, we explicitly examine this suggestion in further detail using a longitudinal study of the development of phonological and semantic priming effects in the same group of toddlers at three different ages. In particular, our longitudinal design allows us to disentangle effects of increasing age and vocabulary size on priming and the extent to which vocabulary size may predict later priming effects. We tested phonological and semantic priming effects in monolingual German infants at 18, 21, and 24 months of age. We used the intermodal preferential looking paradigm combined with eye tracking to measure the influence of phonologically and semantically related/unrelated primes on target recognition. We found that phonological priming effects were predicted by participants’ current vocabulary size, even after controlling for participants’ age and participants’ early vocabulary size. Semantic priming effects were, in contrast, not predicted by vocabulary size. Finally, we also found a relationship between early phonological priming effects and later semantic priming effects, as well as between early semantic priming effects and later phonological priming effects, potentially suggesting (limited) consistency in lexical structure across development. Taken together, these results highlight the important role of vocabulary size in the development of priming effects in early childhood.
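The control analysis implied by "predicted by current vocabulary size, even after controlling for age and early vocabulary size" can be sketched as an ordinary least-squares regression. The variable names and the synthetic data below are hypothetical, and the original study may well have used a different model (e.g., mixed-effects regression); this is only a sketch of the logic of adding covariates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; one row per child, hypothetical variable names.
rng = np.random.default_rng(0)
n = 60
vocab_18m = rng.normal(150, 60, n)                  # earlier (18-month) vocabulary size
vocab_24m = vocab_18m + rng.normal(200, 80, n)      # concurrent (24-month) vocabulary size
age_days = rng.normal(730, 10, n)                   # exact age at test
priming = 0.02 * vocab_24m + rng.normal(0, 5, n)    # phonological priming effect

df = pd.DataFrame(dict(priming=priming, vocab_24m=vocab_24m,
                       vocab_18m=vocab_18m, age_days=age_days))

# Does concurrent vocabulary size predict the priming effect over and above
# age at test and earlier vocabulary size?
model = smf.ols("priming ~ vocab_24m + vocab_18m + age_days", data=df).fit()
print(model.summary())
```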
-
From the earliest months of life, infants prefer listening to and learn better from infant-directed speech (IDS) than adult-directed speech (ADS). Yet, IDS differs within communities, across languages, and across cultures, both in form and in prevalence. This large-scale, multi-site study used the diversity of bilingual infant experiences to explore the impact of different types of linguistic experience on infants’ IDS preference. As part of the multi-lab ManyBabies 1 project, we compared lab-matched samples of 333 bilingual and 385 monolingual infants’ preference for North-American English IDS (cf. ManyBabies Consortium, 2020: ManyBabies 1), tested in 17 labs in 7 countries. Those infants were tested in two age groups: 6–9 months (the younger sample) and 12–15 months (the older sample). We found that bilingual and monolingual infants both preferred IDS to ADS, and did not differ in terms of the overall magnitude of this preference. However, amongst bilingual infants who were acquiring North-American English (NAE) as a native language, greater exposure to NAE was associated with a stronger IDS preference, extending the previous finding from ManyBabies 1 that monolinguals learning NAE as a native language showed a stronger preference than infants unexposed to NAE. Together, our findings indicate that IDS preference likely makes a similar contribution to monolingual and bilingual development, and that infants are exquisitely sensitive to the nature and frequency of different types of language input in their early environments.
-
In recent years, the popularity of tablets has skyrocketed and there has been an explosive growth in apps designed for children. However, many of these apps are released without tests for their effectiveness. This is worrying given that the factors influencing children’s learning from touchscreen devices need to be examined in detail. In particular, it has been suggested that children learn less from passive video viewing relative to equivalent live interaction, which would have implications for learning from such digital tools. However, this so-called video deficit may be reduced by allowing children greater influence over their learning environment. Across two touchscreen-based experiments, we examined whether 2- to 4-year-olds benefit from actively choosing what to learn more about in a digital word learning task. We designed a tablet study in which “active” participants were allowed to choose which objects they were taught the label of, while yoked “passive” participants were presented with the objects chosen by their active peers. We then examined recognition of the learned associations across different tasks. In Experiment 1, children in the passive condition outperformed those in the active condition (n = 130). While Experiment 2 replicated these findings in a new group of Malay-speaking children (n = 32), there were no differences in children’s learning or recognition of the novel word-object associations using a more implicit looking time measure. These results suggest that there may be performance costs associated with active tasks designed as in the current study, and at the very least, there may not always be systematic benefits associated with active learning in touchscreen-based word learning tasks. The current studies add to the evidence that educational apps need to be evaluated before release: While children might benefit from interactive apps under certain conditions, task design and requirements need to consider factors that may detract from successful performance.
-
A number of studies provide evidence that the recognition of single signs is subject to phonological priming based on phonological parameters, and that the specific phonological parameters that are manipulated can influence the robustness of this priming effect. This eye-tracking study on German Sign Language examined phonological priming effects at the sentence level while varying the phonological relationship between prime-target sign pairs. We recorded participants’ eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, together with pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in the location parameter. Taken together, the data suggest a difference in the contribution of sign parameters to sign recognition and indicate that sub-lexical features influence sign language processing.
-
Since signs and words are perceived and produced in distinct sensory-motor systems, they do not share a phonological basis. Nevertheless, many deaf bilinguals master a spoken language with input based solely on visual cues such as mouth representations of spoken words and orthographic representations of written words. Recent findings further suggest that processing of words involves cross-language, cross-modal co-activation of signs in deaf and hearing bilinguals. Extending these findings in the present ERP study, we recorded the electroencephalogram (EEG) of fifteen congenitally deaf bilinguals of German Sign Language (DGS; native L1) and German (early L2) as they saw videos of semantically and grammatically acceptable sentences in DGS. Within these DGS sentences, two signs functioned as prime and target. Prime and target signs either had an overt phonological overlap as signs (phonological priming in DGS), or were phonologically unrelated as signs but had a covert orthographic overlap in their written German translations (orthographic priming in German). Results showed a significant priming effect for both conditions. Target signs that were either phonologically related as signs or had an underlying orthographic overlap in their written German translations engendered a less negative-going polarity in the electrophysiological signal compared to overall unrelated control targets. We thus provide first evidence that deaf bilinguals co-activate their later acquired ‘spoken/written’ language German during whole-sentence processing of their native sign language DGS.
-
Caregivers typically use an exaggerated speech register known as infant-directed speech (IDS) in communication with infants. Infants prefer IDS over adult-directed speech (ADS), and IDS is functionally relevant in infant-directed communication. We examined interactions between maternal IDS quality, infants’ preference for IDS over ADS, and the functional relevance of IDS at 6 and 13 months. While 6-month-olds showed a preference for IDS over ADS, 13-month-olds did not. Differences in gaze following behavior triggered by speech register (IDS vs. ADS) were found in both age groups. The degree of infants’ preference for IDS (relative to ADS) was linked to the quality of maternal IDS infants were exposed to. No such relationship was found between gaze following behavior and either maternal IDS quality or infants’ IDS preference. The results speak to a dynamic interaction between infants’ preference for different kinds of social signals and the social cues available to them.
-
Preschoolers learn selectively from others based on the speakers' prior accuracy. This indicates that they recognize the models' (in)competence and use it to predict who will provide the most accurate and useful information in the future. Here, we investigated whether 5-year-old children are also able to use speaker reliability retrospectively, once they have more information regarding the speakers' competence. Children first encountered two previously unknown speakers who provided conflicting information about the referent of a novel label, with each speaker using the same novel label to refer exclusively to a different novel object. Following this, children learned about the speakers' differing labeling accuracy. Subsequently, children endorsed the object-label link initially provided by the speaker who turned out to be reliable at rates significantly above chance. Crucially, more than half of these children justified their object selection with reference to speaker reliability, indicating the ability to explicitly reason about their selective trust in others based on the informants' individual competences. Findings further corroborate the notion that preschoolers are able to use advanced, metacognitive strategies (trait reasoning) to learn selectively. Moreover, since learning preceded the reliability exposure and gaze data showed no preferential looking toward the more reliable speaker, the findings cannot be accounted for by attentional-bias accounts of selective social learning.