LCICD 2016: LANCASTER CONFERENCE ON INFANT AND CHILD DEVELOPMENT 2016
PROGRAM FOR FRIDAY, AUGUST 26TH

09:00-10:00 Session 7: Keynote
09:00
Learning where to look: infants and robots

ABSTRACT. Vision is an active process that relies on various kinds of eye movements to selectively process information from the environment moment by moment. Since infants acquire accurate control over their eyes comparatively early, many paradigms use eye movements as a window into infants' cognitive processes. But how do such processes drive our eye movements? And how do infants learn to control their eyes appropriately or even optimally? In this talk I give an overview of several lines of research studying these questions at different levels, from basic aspects of binocular vision to the discovery of agency and attention sharing. A central theme of our research is the close coupling between theoretical and empirical work: computational models and robot simulations allow us to better understand and interpret infant data while also generating testable predictions for future experiments.

10:00-11:00 Session 8: Language Acquisition II
10:00
When prosody matters! Developing word segmentation abilities in European Portuguese learning infants
SPEAKER: Joseph Butler

ABSTRACT. Early word segmentation plays a crucial role in language acquisition, with previous studies showing variability in developing segmentation abilities across languages and utterance contexts. Word position becomes crucial due to prosody and its properties: words appearing at utterance boundaries are easier to segment, owing to particularly salient cues (e.g., duration and pitch), than those in the middle of utterances. This paper is the first attempt to study emerging segmentation abilities in European Portuguese (EP)-learning infants and whether prosody constrains early word segmentation abilities. Unlike other languages, EP prosody shows mixed rhythm and provides strong cues to intonational phrase (IP) boundaries, but not to lower boundaries. Monosyllabic word form segmentation was tested at 6, 9 and 12 months. At 6 and 9 months, segmentation abilities at the final IP boundary (sentence edge) and internally within the IP were compared. Segmentation at 12 months was tested at IP boundaries internal to the sentence, without the pause cue, and at word boundaries. At 6 months, infants segment only at the final IP boundary, not IP-internally, while 9-month-olds show evidence of developing segmentation abilities IP-internally (Fig.1). Twelve-month-olds segment at IP boundaries within the sentence, demonstrating behaviour similar, segmentation-wise, to that of 6-month-olds at final IP boundaries, but are still unable to segment at word boundaries (Fig.2). These results show segmentation abilities in EP infants emerging around 6 months of age (as for English – Johnson et al., 2014), with further development evident throughout the first year.
Additionally, these results show that prosody constrains early word segmentation, building upon and going beyond the edge vs internal sentence hypothesis. They add to existing knowledge of emerging segmentation abilities, and of the cues that constrain, or are utilised during, the development of this ability, in a prosodically ‘atypical’ language, EP, not previously studied for word segmentation.

10:30
Influence of learning schedules on infant category learning

ABSTRACT. The ability to categorise objects based on their similarities and differences constitutes a primary foundation for semantic organization. In an experimental setting, characteristics of stimulus exposure strongly influence categorisation outcomes. For example, 4- to 6-month-old infants are able to categorise objects presented in pairs, while having difficulty in extracting category information from objects presented one by one (Oakes & Ribar, 2005). The aim of the current study is to explore further how these learning conditions influence infant category learning. The effects of sequential and paired familiarisation on category formation were examined using a standard familiarisation-novelty preference task in an eye-tracking experiment with 10-month-old infants. Participants were presented with a set of novel objects consisting of two categories, with the overall amount of familiarisation time equated across the two conditions. Preliminary analyses revealed no novelty preference in the sequential condition, indicating that no category was formed. Infants in the paired condition, by contrast, expressed a preference for the novel object, indicating that they formed two categories. These results replicate those of Oakes and Ribar (2005). However, as attentional preferences are dynamic in their nature (Houston-Price & Nakai, 2004), we considered whether the standard measure of novelty preference, expressed through an overall proportion of looking time, might be insensitive to potential effects of interest. In order to explore the time course of looking preferences, we conducted a growth curve analysis on the same dataset. This revealed that novelty preference is present in both groups of infants, i.e. infants in both experimental conditions extracted categorical information, though the latency and strength of the effects differed.
In summary, this study shows that category formation is sensitive to learning conditions and demonstrates the importance of adopting more sensitive statistical approaches, such as multilevel modelling, to improve our understanding of infants’ categorisation abilities.

11:00-11:30 Coffee Break
11:30-13:00 Session 9: Communication & Emotion
11:30
Mapping developmental changes in the integration of emotion perception from bodily expressions and affective sounds
SPEAKER: Peiwen Yeh

ABSTRACT. Previous studies on emotion perception have demonstrated that increased sensitivity occurs during multisensory information processing when compared with unisensory information (van Wassenhove, Grant, & Poeppel, 2005). Even though body expressions have been shown to be an effective cue for conveying emotion (de Gelder, 2006), our understanding is relatively limited in terms of how emotions are processed via body expressions combined with vocal affective information. We therefore used event-related potentials (ERPs) to measure responses to the presentation of angry sounds with emotionally congruent or incongruent body expressions, and angry sounds presented in isolation, in adults and 6.5-month-old infants. In adults, the N1, a negative ERP component for sensory processing (~100 ms after sound onset), was significantly reduced in latency and amplitude for audiovisual pairs compared to a sound-only condition. This result suggests that emotion interaction across the two modalities occurs at an early stage of processing. Congruency effects modulated responses, in the form of either facilitation or suppression, within 200 ms. With the same paradigm, the infant data indicated differences between audio-only and audiovisual conditions around 300 ms after auditory onset in left-frontal regions (P350). A large negative component (N450) related to attention mechanisms in infants was elicited by the congruency manipulation. In sum, the current findings indicate that the capacity to integrate angry body and sound information has already developed by 6.5 months of age. It is also likely that different latencies separate the integration of modal information from the processing of emotional content. To further understand the neural maturation of the integration of emotion perception between these two age groups, we will conduct the same study with children (4- to 6-year-olds).

12:00
Do Infants Recognize Engagement in Social Interactions? The Case of Face-to-Face Conversation

ABSTRACT. This study explores 12-month-olds’ understanding of face-to-face conversation, a key contextual structure associated with engagement in a social interaction. Using a violation-of-expectations paradigm, we habituated infants to a “face-to-face” conversation, and in a test phase compared their looking times between “back-to-back” (conceptually novel) and “face-to-face” (conceptually familiar) conversations, while simultaneously manipulating perceptual familiarity in a 2x2 factorial design. We also analyzed dynamic changes in pupil dilation, which are considered a reliable measure of cognitive load that may index processing of social interactions. Infants looked relatively longer at perceptual changes (new speaker positions) but not at conceptual change (back-to-back conversation), suggesting that face-to-face conversation may not elicit particular expectations, and so may not carry any particular conceptual significance. Moreover, on the first test trial, larger pupil dilation was observed for familiar conditions, suggesting that familiarity with perceptual features could enhance processing of conversations. Thus, this study undermines assertions regarding infants’ conceptual understanding of the social signals underlying engagement. Infants may rather recognize such signals through their perceptual familiarity and associated positive feelings. This may then increase their engagement when observing and participating in others’ collaborative activities, in turn allowing for the development of knowledge regarding others’ intentions.

12:30
Follow me! Infants’ ability to learn about the referential nature of a cue

ABSTRACT. The ability to follow another person’s gaze is a crucial prerequisite of joint attention and language (Brooks & Meltzoff, 2005). However, little is known about how infants acquire this behavior. Corkum and Moore (1998) hypothesized that infants learn to follow gazes through reinforcement: if infants follow the gaze of others, their attention is guided to an interesting sight in the environment. Based on this idea, we applied a gaze-contingent eye-tracking paradigm measuring 4-month-olds’ ability (10 female, mean age: 4 months 14 days) to learn about the referential nature of a cue through reinforcement. The study comprised baseline, training, and test phases. In each trial, infants saw the face of a woman with a cartoon mouse on each side. The face turned to the side, looking towards one mouse and away from the other. If infants are sensitive to gaze cues, we expected longer looking times to the cued than to the uncued mouse at baseline. During training, we rewarded gaze-following behavior: whenever infants looked at the cued mouse, it began to move in a lively manner. The test phase was identical to the baseline. If infants learn through reward, they should enhance their behavior from baseline to test. A rmANOVA revealed longer looking times to the cued than to the uncued mouse at baseline and test, F(1,20)=18.52, p<0.001, ηp²=0.49. The more infants elicited gaze-contingent animations in training, the more they enhanced their preference for the cued mouse at test, r=0.54, p=0.011. Four-month-olds showed spontaneous gaze-following behavior and enhanced it when rewarded. However, we cannot conclude that this behavior was originally acquired through reinforcement learning. In a follow-up study we therefore test infants’ ability to learn about the referential nature of a nonsocial cue whose direction they do not already follow at baseline. Data acquisition is under way.

13:00-14:00 Lunch Break
14:00-16:00 Session 10: Poster 2
14:00
Cross-domain influences of early word and action learning
SPEAKER: unknown

ABSTRACT. For infants, a good strategy to learn about their surrounding world is to communicate with experienced speakers. Successful communication includes the comprehension of spoken language as well as of observed actions. When a caregiver shows a new action to an infant, he or she will not only demonstrate the action for the infant to imitate, but will also use language to describe the action to the child. The infant, in turn, can use this information to learn about the action presented. There is evidence that verbal information presented during action demonstration indeed has an impact on infants' action processing and reproduction of that action. Therefore, information from the different domains might interact in social learning situations. The present research seeks to further enrich our knowledge about this interaction and investigates how different verbal information during action presentation influences subsequent action reproduction. In a video phase we will present 18- and 24-month-old infants with videos of two novel actions, each performed with an unfamiliar object, accompanied by verbal information. This information, depending on the experimental condition, will emphasize either the movement, the object acted upon, both, or neither. Here, we will track the infants’ eye movements to examine how the verbal information influences infants’ distribution of attention to the demonstrated actions. In a subsequent imitation phase, infants will act on real-life versions of the objects. We expect that the infants will integrate the verbal information into their cognitive action representation, and therefore we expect differences in action reproduction between our experimental conditions. Furthermore, we will be able to investigate possible relations between eye movements/attention distribution and imitation behavior. The planned research will therefore shed light on mutual influences in the early development of language and action.

14:00
What predicts infants’ pointing frequency at 12 months?
SPEAKER: unknown

ABSTRACT. Pointing is a robust predictor of infants’ later language capacities (Colonnesi et al., 2010). Yet the predictors of the development of pointing frequency in the first year of infants’ lives are not as well known. Pointing development might depend on infant-driven competencies (Liszkowski & Tomasello, 2011) as well as on the shaping influence of adults (Matthews et al., 2012). In the current study, we examine infants' fine motor and point-following abilities, along with caregiver responsiveness, in relation to later pointing frequency. Twenty-three mother-infant dyads (12 girls) were examined when infants were 10 and 12 months of age. Infants’ points and mothers’ responses to these points were assessed via the decorated room paradigm (Liszkowski et al., 2012). The verbal and/or non-verbal responses mothers provided to their infants’ points within 2 seconds were used to construct a maternal responsiveness measure. Responses were categorized as “relevant” if they were semantically relevant to the item the infant pointed at, “non-relevant” if they were irrelevant, and “none” if the mother did not provide any behavior. Percentages of response categories were calculated by dividing the total number of responses in each category by the total number of infant points. Also, infants’ fine motor development and ability to follow points were assessed via the Mullen Scales of Early Learning (Mullen, 1995) and a point-following procedure adapted from Mundy (2003), respectively. Results showed that the frequency of infants’ points at 12 months was significantly predicted by the percentage of relevant maternal responsiveness at 10 months, even when controlling for the frequency of infants’ points, Mullen scores, and point-following scores at 10 months (F(5,17) = 3.908, p < .05, R² = .54). This study demonstrates the prevailing effect of caregiver responsiveness, over infants’ own fine motor and social-cognitive abilities, on the development of pointing within the first year.
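The maternal responsiveness measure described above is a simple proportion per category. As a minimal sketch (with hypothetical coded data; not the authors' analysis code), it could be computed as:

```python
from collections import Counter

def response_percentages(responses, n_points):
    """Percentage of infant points receiving each maternal response category.

    `responses` holds one category label per infant point; points with no
    maternal behaviour within 2 s are coded as "none" by the coder.
    """
    counts = Counter(responses)
    return {cat: 100.0 * n / n_points for cat, n in counts.items()}

# Hypothetical coding of 8 infant points in one dyad
labels = ["relevant"] * 5 + ["non-relevant"] * 2 + ["none"]
print(response_percentages(labels, len(labels)))
# → {'relevant': 62.5, 'non-relevant': 25.0, 'none': 12.5}
```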

14:00
The influence of speech-action relatedness on 24-month-olds’ selective imitation
SPEAKER: unknown

ABSTRACT. Meltzoff (1995) showed, using his reenactment procedure, that by 18 months infants can look past an adult’s surface behavior, detecting and reproducing in their own behavior the adult’s underlying intention, even if the corresponding action was never observed before. Furthermore, we know that infants selectively imitate intentional over accidental actions (e.g. Carpenter et al., 1998). Based on this, we investigated whether infants can process and, in their imitative behavior, rely on an adult's speech, even if that verbally announced action intention does not match the adult's subsequent observable action. Forty-six 24-month-olds observed an adult performing one of two possible actions (e.g., up and down) on an object. Prior to each action demonstration, the adult verbally announced that she wanted to perform an action, using a telic preposition (e.g., ‘up’). In a between-subjects design, she then performed either the matching action (e.g., up; congruent condition) or the non-matching action (e.g., down; incongruent condition). In a 30-second response period, infants could then act on the object themselves. This procedure was repeated across 2 more trials with different objects, actions, and prepositions. Our results confirmed our expectations: infants in the congruent condition performed the demonstrated action (DA) more often than infants in the incongruent condition (p<.01), while infants in the incongruent condition performed the alternative action (AA; matching the spoken intention) more often than infants in the congruent condition (p<.01). Since the two conditions differed only in the relatedness between speech and action, we conclude that infants were able to perceive the discrepancy between the actor's verbal announcement and the performed action. This provides evidence of the impact of verbal communication on infants' cognitive action representation.

14:00
The emergence of face- and body-selective brain signatures in infants
SPEAKER: unknown

ABSTRACT. Face processing is a rapidly emerging ability already present at 3 months of age (Halit et al., 2003). This, together with other behavioural evidence (e.g., Johnson et al., 1991), has led researchers to suggest that our ability to recognise faces is the result of an innate ‘social brain’ with pathways genetically pre-specified for processing social information (e.g., Baron-Cohen et al., 1999). Since bodies are social and communicative tools, they should also be ‘special’ in their function of informing us of others’ intentions and emotions. We presented 3-, 9-, and 14-month-old infants with images of upright and inverted faces and bodies while recording their brain activity. While faces elicited the well-known face-related ERP components (N290 and P400) in each age group, these were not present for bodies at 3 months of age. However, at this age, the latency of the N290 showed some degree of sensitivity to the face stimuli’s orientation, peaking earlier for upright than inverted faces. At 9 and 14 months, the N290 was significantly affected by stimulus orientation, showing larger amplitude for upright than inverted faces and bodies. Stimulus orientation also affected the amplitude of the P400 component; however, this effect was modulated by stimulus and hemisphere at 9 months, showing a right-hemisphere specialisation for faces and a left-hemisphere specialisation for bodies. Finally, at 14 months the P400 showed specialisation for face stimuli, peaking earlier for upright than inverted stimuli over the right hemisphere. These findings show that infants’ occipital-temporal cortex is sensitive to faces earlier in life than to the rest of the body. These sensitivities appear to emerge in concert with exposure to other people, which is weighted towards faces earlier in infancy, while the rest of the human body becomes more frequently visible once the infant begins to sit upright more.

14:00
Individual differences in the neural correlates of infants' responses towards another baby's cry and laughter
SPEAKER: unknown

ABSTRACT. A growing body of evidence has shown that by 10-12 weeks infants already show emotional resonance to adult facial and vocal displays of happiness, sadness and anger (e.g. Haviland et al., 1987). Less is known about infants’ responses to their peers’ emotions and whether such emotional responses are modulated by temperamental traits. To date, a few studies have shown that during their first hours of life, newborns express reactions of self-distress in response to another newborn's cry (e.g. Simner, 1971). Our study aims to address this gap by investigating whether the neural correlates of infants’ responses to their peers’ cry and laughter are related to their temperamental characteristics. Thirty 8-month-old infants were presented with audio recordings of other infants’ laughter, crying and coughing. ERPs time-locked to the onset of the sounds were analysed with respect to the effects of emotion (positive/negative/neutral) and hemisphere (right/left). These were further analysed in relation to IBQ-R scores. At frontal locations, we found a significant main effect of emotion for the mean amplitude of the N1 (50-150 ms; F(2, 58) = 5.847; p = 0.005), P2 (150-250 ms; F(2, 58) = 4.481; p = 0.016) and LPC (550-750 ms; F(2, 58) = 5.179; p = 0.009), such that crying evoked a more negative N1 amplitude than laughter (p = 0.020) and a larger LPC amplitude than coughing (p = 0.045). On the other hand, laughter evoked larger P2 amplitudes than crying (p = 0.036). Infants who scored higher on IBQ-R Emotion Regulation showed lower LPC amplitudes when listening to a peer’s laughter (p = 0.004). Furthermore, infants who scored higher on IBQ-R Fear showed larger P2 amplitudes for crying (p = 0.003). Taken together, our results provide evidence for an early negativity bias in social-emotional development, such that 8-month-old infants allocate more attention towards a peer's cry than towards a peer's laughter.

14:00
Put on a happy face! Infant’s ability to discriminate happy, angry, and sad from fearful facial expressions using a Fast Periodic Visual Stimulation (FPVS) paradigm
SPEAKER: unknown

ABSTRACT. The ability to “read” other people’s facial expressions plays an important role in social interactions. This ability begins to develop early in life; however, it is unclear whether infants are better at discriminating facial expressions that cross a category boundary (positive/negative – e.g., happy/fearful) than expressions from the same category (negative/negative – e.g., angry/fearful).

The current study uses a novel approach called Fast Periodic Visual Stimulation (FPVS) to evaluate this question in 7-month-old infants. FPVS is an electrophysiological technique that relies on rapid presentation of stimuli at a specific frequency. This creates a periodic response in the brain at that particular frequency that can be measured at the scalp surface.

In the current study, infants saw three emotion comparisons: a) Happy vs. Fearful, b) Angry vs. Fearful, and c) Sad vs. Fearful. In each pair, the first emotion was presented frequently and the second emotion was presented infrequently. We presented faces at a frequency rate of 6Hz (6 faces/second), while the infrequent fearful expression was presented every fifth face, creating a second “discrimination” frequency of 1.2Hz (6Hz/5 = 1.2 fearful faces/second). If infants are able to discriminate the fearful expression from the frequent expression, they will show a large response at the discrimination frequency. We predict that infants would show a larger discrimination response in the Happy vs. Fearful condition than in the other conditions.
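The frequency-tagging arithmetic behind this design can be sketched as follows (a minimal illustration with function names of my own, not the authors' stimulus code):

```python
def oddball_frequency(base_hz, every_nth):
    """Oddball (discrimination) frequency of an FPVS stream in which
    every nth stimulus belongs to the infrequent category."""
    return base_hz / every_nth

def stimulus_sequence(frequent, infrequent, every_nth, n):
    """First n stimuli of the periodic stream: the infrequent category
    appears at every nth position, the frequent one everywhere else."""
    return [infrequent if (i + 1) % every_nth == 0 else frequent
            for i in range(n)]

# 6 faces/second with a fearful face at every 5th position
print(oddball_frequency(6.0, 5))                      # → 1.2 (Hz)
print(stimulus_sequence("happy", "fearful", 5, 10))
```

A brain that discriminates the two categories should then show spectral peaks at both 6 Hz (base response) and 1.2 Hz (discrimination response), plus harmonics.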

Based on preliminary analyses (n=8), it is clear that infants show a large response at the base frequency rate (6Hz), regardless of condition (Figure 1). It appears that infants in the Happy vs. Fearful condition show a discrimination response at 1.2Hz, whereas infants in the other conditions do not; however, further analyses are required to assess its statistical and practical significance. Infants may thus be better at emotion discrimination that crosses a category boundary.

14:00
Developmental differences in phonemic perception between monolingual and bilingual infants

ABSTRACT. Infants’ ability to perceive native speech sounds improves with age and language exposure. Of growing interest are the differences in native perceptual abilities between monolingual and bilingual infants. Bilingual infants exhibit the remarkable ability to categorically perceive and organize two different sets of phonemes into their corresponding languages, performing successfully in phoneme discrimination tasks. There is robust research on phonemic discrimination in monolinguals and bilinguals, using contrasts from the two languages native to bilinguals, only one of which is native to monolinguals. Findings show that at younger ages (i.e., 4-6 months), monolinguals and bilinguals are able to distinguish all speech contrasts presented. However, at an older age (i.e., 10-12 months), only bilinguals succeed in the task. The research suggests that bilinguals are less committed to their languages during infancy (e.g., broader perceptual abilities) and that their development of responses to speech sounds differs from that of monolinguals.

My PhD seeks to introduce a non-invasive neuroimaging technique to the field by using functional near-infrared spectroscopy (fNIRS) during phoneme discrimination tasks. Much work has been done using this technology to understand hemispheric lateralization of speech. I plan to test English monolingual and English-Mandarin bilingual infants on three speech contrasts: English, Mandarin, and Hindi. Testing a third contrast non-native to both groups would advance our understanding of the development of phonemic perception in monolinguals and bilinguals, and potentially reveal prolonged neural commitment or neuroplasticity in bilingual infants. In addition, using fNIRS would allow me to examine how each language contrast is processed in the brain. For example, the left hemisphere processes linguistic stimuli whereas the right hemisphere processes slower, spectral changes such as pitch and prosody. My hypothesis is that English monolinguals will process Mandarin tone contrasts in the right hemisphere, whereas English-Mandarin bilinguals will process the identical Mandarin contrasts in the left hemisphere.

14:00
The Prenatal Concept of Number: Further Evidence of Visual Processing of Information before Birth
SPEAKER: unknown

ABSTRACT. Like many other cognitive capacities, the study of the development of numerical understanding stops at neonatal research, due to the practicalities of delivering visual stimuli and measuring responses in a prenatal population. For the first time, Reid et al. (May, 2016) presented shapes of light to the late-term fetus, finding a prenatal preference for top-heavy over bottom-heavy configurations. This demonstrated the validity of delivering visual stimuli via light to the late-term human fetus. The present study aimed to investigate the processing of visual representations of number earlier in development than previously reported, before birth. Behavioural responses to stimuli were assessed in 63 participants, utilising 4D ultrasound. Participants were excluded due to technical or experimenter error (2), poor image resolution (12), or if fetuses appeared to be in behavioural state 1F (15), characterised by a lack of eye or body movements throughout the scanning period (Nijhuis et al., 1982). This gave a final sample of 34. Light presenting 2 or 3 dots (order counterbalanced) was positioned on the maternal abdomen, to the side of the fetus, for 45 seconds. Fetuses spent more time looking towards than away from the 2-dot set only, z = 2.293, p = 0.022. No significant difference was found in looking towards versus away from 3 dots. This indicates a possible difference in the processing of the two number sets. To our knowledge, this is the first study to investigate visual discrimination of number in a prenatal sample, and it provides the basis for a more thorough examination. It offers further evidence for prenatal behavioural responses to visually presented information, and of the potential to address questions of prenatal visual discrimination by utilising techniques more closely resembling postnatal methodology than is currently the case in the prenatal field.

14:00
Does group membership affect overimitation in preschoolers?
SPEAKER: unknown

ABSTRACT. The current studies investigated whether children’s affiliation with a social group enhances overimitation of actions modeled by in-group members. Experimental conditions differed in how social group membership was emphasized. The child and two experimenters drew t-shirts out of a box before the overimitation task started. In the shirts condition the experimenters explained that there were blue and red shirts that could not be exchanged once drawn; the experimenters accepted the result neutrally and made no further comments regarding group formation. In the teams condition both experimenters expressed joy about the drawn color, and the experimenter forming a group with the child engaged the child in celebration. Subsequently, in both experimental conditions (each n=28), children observed both experimenters retrieve a reward from a transparent puzzle box: first the in-group experimenter, using non-functional actions, then the out-group experimenter, using only functional actions. After each demonstration, children retrieved a reward themselves. The number of non-functional actions provided a measure of overimitation. In a baseline condition (n=16) children operated the puzzle box without prior demonstration. In the shirts condition we found the same pattern as in Hoehl et al. (2014) without manipulation of group membership: after the inefficient demonstration (in-group experimenter) children performed significantly more non-functional actions than children in the baseline condition (t(42)=-5.01, p<.001) and reduced their overimitation after the efficient demonstration (out-group experimenter) to baseline level (t(41.86)=-1.6, p=.118).

In the teams condition children overimitated after the inefficient demonstration (t(42)=-6.14, p<.001) and continued doing so even after having observed the efficient strategy (t(41.99)=-3.79, p<.001).

Results show that the perseverance of overimitation (despite clear evidence of the irrelevance of certain actions) depends on whether child and model belong to the same group. Importantly, t-shirt color alone was not sufficient to elicit this effect. Only when the in-group model affectively emphasized team membership did children persist in performing irrelevant actions above baseline level, even after having seen the effective strategy demonstrated by an out-group model.

14:00
The evocative power of words for 9-month-old infants
SPEAKER: unknown

ABSTRACT. There is increasing evidence that infants exhibit a preference for words over other linguistic stimuli, suggesting that words have a privileged status from the early stages of language acquisition. Yet it remains unclear whether verbal cues (words) and non-verbal cues (associated sounds) activate conceptual representations in a similar manner. The present study aimed to investigate whether the activation of conceptual representations is more efficient for word-object than for sound-object associations in pre-verbal infants. Nine-month-old infants participated in a primed intermodal preferential looking (IPL) task in which they listened to either a word (e.g., cow) or a sound (e.g., mooing) followed by an image containing two objects (e.g., cow – telephone), a target and a distracter, while their looking times were recorded. Preliminary results show that upon hearing the auditory stimulus (word versus sound), infants were faster in shifting their gaze to the target than to the distracter, demonstrating a congruency priming effect. In addition, compared to the associated-sound condition, infants looked longer at the target object when it was preceded by a word. These findings suggest that conceptual representations are activated more quickly and efficiently by verbal labels than by non-verbal cues, emphasizing the special status of words as referential cues during the early stages of language acquisition.

14:00
Comparison between infant and adult colour discrimination using an automated eye-tracking method.
SPEAKER: unknown

ABSTRACT. From birth, infants are able to discriminate between chromatic and achromatic stimuli (Adams & Courage, 1998). Colour vision develops greatly over the initial post-natal months (Brown & Lindsey, 2013), resulting in the presence of adult-like chromatic mechanisms by 4 months of age. Initially, infants require larger colour differences than adults for discrimination, and the ability to discriminate colours continues to develop through childhood and adolescence (Knoblauch, Vital-Durand, & Barbur, 2001).

There is evidence that adults' colour discrimination is not equal for different hues: some hues (blue and yellow) are discriminated poorly compared to others (e.g., red, green) (e.g., Pearce, Crichton, Mackiewicz, Finlayson, & Hurlbert, 2014). It has been suggested that this variable discrimination performance may be the result of calibration to our chromatic environment (Bosten, Beer, & MacLeod, 2015). To verify this, we need to better understand how young infants discriminate colour.

Traditionally, adult colour discrimination tasks use methods such as 4-alternative forced-choice, which cannot be used with infants. In the current study, we used an automated eye-tracking measurement of colour discrimination, a method which is appropriate for both adults and infants (adapted from Jones, Kalwarowsky, Atkinson, Braddick, & Nardini, 2014). This allows us to compare the discrimination abilities of adults and infants across hues, in order to determine the extent to which experience of our chromatic environment impacts our perception of colour.

14:00
Did you expect that? Neural correlates underlying selective imitation in infants
SPEAKER: unknown

ABSTRACT. Imitation is an important social learning mechanism for infants exploring the world. Interestingly, infants do not imitate every action. Fourteen-month-olds predominantly imitated an unusual and inefficient action (turning on a lamp with one’s forehead) when the model’s hands were free compared to when the model’s hands were occupied (Gergely et al., 2002). Rational imitation accounts suggest that infants evaluate actions by the rationality principle, which states that people achieve goals with the most efficient means. Thus, infants form expectations about others’ actions that presumably influence their imitative behavior. We conducted an event-related potential (ERP) study to investigate whether infants experience violation of expectation (VOE) while observing the head touch in a hands-free condition and whether this VOE changes depending on situational constraints in a hands-restrained condition. In a between-subjects design, 12- to 14-month-olds watched videos of models either demonstrating that their hands were free (N=22, 11 girls) or restrained (N=15, data collection is ongoing). Subsequent test frames showed hand or head touch outcomes. We assume that infants hold expectations about how a person normally touches an object, leading to VOE in response to the unusual head touch in the hands-free condition. The opposite result pattern is expected in the hands-restrained condition. Preliminary analyses revealed an increased Negative central (Nc) amplitude (400-600ms) in the hands-free condition on central channels (C3, Cz, C4) in response to the head touch (M=-20.57μV, SD=10.86) compared to the hand touch (M=-16.23μV, SD=10.68), t(21)=-2.470, p=.022, d=0.40. Results indicate that infants discriminated head and hand outcomes with differences in the allocation of attention. The increased Nc for the unexpected action may illustrate an orienting response reflecting mismatch detection. 
Therefore, our study was a first attempt to examine whether infants’ selective imitation in previous studies might have been indeed rational. So far, results are in line with the rational accounts.

14:00
The development of monolingual children’s abilities to consider conventionality when understanding foreign words
SPEAKER: unknown

ABSTRACT. Recent research suggests that 2-year-old monolingual children initially expect labels to be shared across speakers of different languages, whereas bilingual 2-year-olds do not show such an expectation (Byers-Heinlein, Chen, & Xu, 2014). Building on this research, the present study investigated when monolingual children come to understand that different languages use different words to name the same referent. Two- and three-year-old Korean monolingual children were tested on tasks involving live interactions with native Korean- and Spanish-speaking experimenters. In Korean trials, a native Korean speaker presented children with one familiar object (e.g., a shoe) and one novel object. When the Korean speaker asked children to find the referent of a novel Korean label, ‘muppi’, both 2- and 3-year-old children selected the novel object, demonstrating the mutual exclusivity (ME) assumption that a novel word refers to a novel object. In Spanish trials, a native Spanish speaker presented children with one familiar object (a dog) and one novel object and asked children to find the referent of a novel Spanish label, ‘pefo’. Two-year-old children again more often selected the novel object, using the ME assumption, which was consistent with previous findings from Byers-Heinlein et al. (2014). Three-year-olds, however, selected the familiar (a dog) and novel objects equally often, suggesting that they suspended the ME assumption when interpreting foreign words. Taken together, these findings suggest that children begin to understand that labels are not shared between native and foreign language speakers between two and three years of age. The findings are discussed in terms of what experiences influence the development of children’s abilities to understand linguistic conventionality in native and foreign languages.

14:00
Facial mimicry in three-year-old children and its modulation by attachment security
SPEAKER: unknown

ABSTRACT. Mimicry is defined as a nonconscious tendency to mirror another’s behaviors, postures, facial expressions or speech (Duffy & Chartrand, 2015). While adult research widely acknowledges the social function of mimicry (i.e., as a “social glue”; Lakin, Jefferis, Cheng, & Chartrand, 2003), very little is known about how and when mimicry becomes socially sensitive. Pioneering work by van Schaik and colleagues (2013; in press), investigating a behavioral form of mimicry, shows that 3-year-olds display behavioral mimicry but are not yet sensitive to social manipulations prompting affiliation. In contrast, 4- to 6-year-olds display socially sensitive behavioral mimicry (van Schaik & Hunnius, in press). Whether mimicry becomes increasingly social during development or whether the social manipulation used was ineffective for 3-year-olds thus remains unclear. To address this issue, the current study aims to investigate the relation between inter-individual differences in the intrinsic drive to seek affiliation and facial mimicry in 3-year-olds. In adults, attachment security moderates both affiliation motivation (Schwartz, Lindley, & Buboltz, 2007) and the expression of mimicry (both behavioral mimicry: Hall, Millings, & Bouças, 2012; and facial mimicry: Sonnby-Borgström & Jönsson, 2004). Accordingly, we will examine whether facial mimicry in 3-year-old children is modulated by attachment security. Subtle facial muscle activation in response to observed happy and sad facial expressions will be measured using electromyography (EMG). Attachment security will be assessed during 3-hour home observations, using the Attachment Q-Sort (Waters & Deane, 1985). This study will test whether facial mimicry changes as a function of attachment security and will examine whether attachment security modulates automatic or controlled mimicry responses. 
Our findings will help unravel the development of social mimicry in early childhood as well as shed light on pivotal motivational processes underlying the intrinsic drive for affiliation in young children.

14:00
L1-acquisition of finiteness in German ABER-clauses
SPEAKER: unknown

ABSTRACT. Recent studies on German L1- and L2-acquisition have shown that the emergence of finite clause structure is affected by certain particles: Penner et al. (2000), Winkler (2006, 2009) and Dimroth (2009) describe that the particle NICHT ‘not’ accelerates, whereas the particle AUCH ‘too’ hampers, the realization of finiteness in clauses containing them, compared to clauses not containing any particle. We will show that the adversative connective and particle ABER ‘but’ also affects the emergence of finite clause structure, providing further evidence for a stepwise acquisition of finiteness that interacts with other acquisition processes.

Our data come from longitudinal corpora of four children at the age of 2;0-3;5. We analyzed the main-clauses produced in the 12 months after the first documented ABER-production. The criteria for identifying the emergence of finiteness as a functional projection in the children's grammar were taken from Jordens (2012).

We observed that, on the one hand, the acquisition of functional finiteness in main-clauses not containing a particle or connective leads to progress in the syntax of ABER-clauses; on the other hand, clear evidence for functional finiteness in ABER-clauses emerges about 2-6 months later than in clauses not containing any particle or connective. We even observed a delay of 1-2 months in the emergence of finiteness in ABER-clauses compared to AUCH-clauses.

We hypothesize that the delayed application of functional finiteness in ABER-clauses results from the semantic properties of adversative ABER and the information-structural requirements of relating the alternative expressed in the ABER-clause to the (linguistic) context. Because the complexity of this acquisition task is similar to, but even greater than, that for AUCH-clauses, the 2-year-olds avoid the structural linguistic requirements for target production of ABER-clauses, of which they do not yet have full command, and rely on syntactic structures acquired earlier.

14:00
Measuring Exposure to English in Bilingual Children between 12 and 24 months: a comparison of existing questionnaires
SPEAKER: unknown

ABSTRACT. Bilingual children usually know and produce fewer words in each language than monolinguals (Gathercole, 2007; Bialystok, 2009). It is important to estimate the amount of exposure to each language to evaluate the development in these languages (Cattani et al., 2014; Gathercole & Thomas, 2009; Hoff et al., 2012; Pearson et al., 1997; Thordardottir, 2011). Our aim is to compare the various tools developed to quantify the amount of exposure to language and to estimate their relative reliability and user-friendliness. Thirty bilingual families living in England with children aged 12 to 24 months are randomly assigned to three groups. All are first sent the Oxford CDI (Hamilton et al., 2000) to complete. Then, within each group, parents complete the Plymouth Language Exposure Questionnaire ‘LEQ’ (Cattani et al., 2014) and one of the following three exposure questionnaires: the Alberta Language Exposure Questionnaire (Paradis, 2011), the Child Multilingualism Questionnaire (Yang et al., 2006) or the Bosch & Sebastian Galles Language Exposure Questionnaire (Bosch & Sebastian Galles, 1997). The order of presentation of the questionnaires is randomised. Preliminary analyses with 23 children show that the amount of exposure to English as measured by the LEQ (M = 53.5%) correlates significantly with that measured by the other questionnaires (M = 54.5%, r = .50, p = .016). A regression analysis with age and the amount of exposure (measured by the LEQ or other questionnaires) as predictors of CDI scores shows that overall all models are significant, with age always contributing significantly, but with the contribution of the amount of exposure building up to significance. In sum, the questionnaires seem to measure similar information overall. Data at this point suggest that they all contribute equally to the prediction of vocabulary scores. User-friendliness is consistently rated as higher for the LEQ than for the others, which may drive the decision to use one questionnaire or the other.

14:00
Understanding Sensory Processing in Early Development.
SPEAKER: unknown

ABSTRACT. Schizotypy is a construct used to describe clusters of personality dimensions within the general population that display a predisposition to schizophrenia spectrum disorders (SSD) (Claridge, 1997). Atypical performance in specific event-related components has been shown by individuals with SSD, and also in their first-degree relatives or those at high risk of development (offspring of diagnosed individuals) (Carlson and Fish, 2005). Performance in an auditory paired-tone paradigm and facial emotion expression tasks will be examined in 6-month-old infants and their caregivers, where the caregiver displays a significant schizotypy score. The auditory paired-tone paradigm will measure the participant’s sensory gating abilities, which are reflected in atypical P50 components in those with SSD and their first-degree relatives. It is thought that those with higher schizotypy scores will display similar abnormalities, but to a lesser degree. Individuals with SSD also show atypical abilities in facial emotion expression perception (Li et al., 2010). It is expected that increased negative-central components will be observed in the infants of caregivers with high schizotypy scores. Additionally, we predict that the looking behaviour in the latter half of this visual paradigm for infants of schizotypal caregivers will feature increased looking towards happy faces, due to these being more novel, whereas low-risk infants will look longer towards fearful faces, as they have little interaction with this expression in everyday life. A 5-minute period of free-play between the caregiver and infant will be observed, and baseline frequencies will be examined. During free-play, interactions take place that focus on the caregiver’s ability to read the child’s behaviour with reference to the likely internal states governing that specific behaviour. This is known as mind-mindedness (Meins, 1997). 
The baseline frequencies will be analysed as a function of mind-related comments made by the caregiver, and attachment type of the infant and caregiver, which will be examined using a series of questionnaires.

14:00
Looking but not learning: differences in gaze cue reading, visual attention and learning in 14-month-old infants at risk for autism
SPEAKER: unknown

ABSTRACT. Research has suggested that 13- and 36-month-old infants who develop autistic symptoms follow gaze but then look less at cued objects (Bedford et al., 2012) and do not learn word-object associations (Gliga et al., 2012).

The current longitudinal study aimed to replicate and extend these findings by investigating gaze cue use, visual attention and word learning in 14-month-old infants at risk for autism (n=96), due to having an older sibling with autism, and in low-risk controls (n=21).

Each infant viewed videos of a demonstrator turning, looking and labelling one of two different objects. Gaze behaviour was measured with an eye tracker. We measured the proportion of correct first looks, and proportional looking time to the correct and incorrect objects and to the face. Word learning was measured as preferential looking to the correct referent in “looking while listening” trials. Trial difficulty varied with either one or both objects being labelled then tested.

Findings supported the conclusion that gaze reading, not simply gaze following, is necessary for learning (Gliga et al., 2012). No group difference was found in the proportion of correct first looks, but at-risk infants who developed autistic symptoms looked less at objects in general and more at faces. The at-risk group as a whole showed no evidence of learning. This demonstrates that 14-month-old at-risk siblings may not use gaze cues as effectively as infants with no family history of autism. Furthermore, the greater attention to the demonstrator’s moving face suggests that competition from (facial) movement may interfere with processing and using gaze to learn information.

14:00
The difference between the felt and displayed emotions: When do young children understand the necessity of regulating emotions?
SPEAKER: unknown

ABSTRACT. The problem with emotion regulation (ER) as a construct is that it measures very broad functions. Studies tend to pick one or another of the following to define and assess ER: comprehension of emotions (emotional understanding, EU), regulation of negative emotionality (e.g., responses to not receiving a prize) and, rarely, regulation of positive emotionality (e.g., keeping good news secret). The cognitive underpinnings of these skills have been overlooked due to the unclear assessments of ER. In Study 1 (N=117), the relationships between all the emotionality measures mentioned above were observed within the context of other rapidly changing cognitive skills in early childhood, notably social understanding and inhibitory control. Regression analyses showed that children’s comprehension of emotions was strongly related to their social understanding (assessed through the false belief paradigm), their ability to regulate positive emotionality, and age. Given the centrality of EU in Study 1, the next study (N=62) explored its nature in more detail. It examined whether there is consistency in how the child reacts to scenarios which contrast one’s inner feelings with the need to display other emotions, in a new Scale for Understanding the Regulation of Emotion (SURE). Study 2 found that between 3 and 5 years children come to understand the importance of displaying regulated emotion over expressions which match their feelings. EU (i.e., a grasp of ER) may be a novel way to investigate the link between ER and the development of social understanding and self-regulation.

14:00
Predictors of reading comprehension in first grade: results of a longitudinal study.

ABSTRACT. Reading comprehension is fundamental for professional and personal achievement. It involves: naming speed, phonological awareness, letter knowledge, pseudoword and word reading, syntactic knowledge, vocabulary and oral text comprehension (Bianco et al., 2012; Ouelette & Beers, 2010; Silva & Cain, 2015). However, their respective importance for reading comprehension is not well understood because: a) many studies are correlational, limiting causal inferences, b) longitudinal or training studies focus on a subset of skills, c) some authors group variables into two pre-established factors, representing “decoding” and “oral language” skills (Hoover & Gough, 1990). This structure has been questioned in exploratory analyses (Kendeou, Savage et al., 2009).

Our longitudinal study, on 556 French pupils, aims at:  

1. Identifying, using regression analysis, the main linguistic abilities, measured at the beginning of first grade, causally predicting reading comprehension eight months later

2. Determining, with exploratory factorial analysis, if they represent two, or more types of competencies 

We showed that phonological awareness, letter knowledge, alphanumeric naming speed, syntax and oral text comprehension were the main predictors of reading comprehension. Nonalphanumeric naming speed, pseudoword and word reading, and vocabulary breadth and depth did not have an independent effect. Four factors were extracted: 1) naming speed (alphanumeric and nonalphanumeric), 2) written code skills (pseudoword and word reading), 3) oral code skills (phonological awareness, vocabulary breadth, syntax, but also letter knowledge), 4) oral comprehension skills (vocabulary breadth and depth, syntax, oral text comprehension).

Our results show the importance, at the beginning of first grade, of word reading precursors (phonological awareness, letter knowledge and alphanumeric naming speed) in the development of reading comprehension. Syntax, but not vocabulary, had an effect independent from oral text comprehension (confirming Silva & Cain, 2015). The ambiguous status of syntax and vocabulary breadth, related to oral code and comprehension skills, is made explicit (Tunmer & Chapman, 2011).

14:00
The effect of labelling on infants’ object exploration.
SPEAKER: unknown

ABSTRACT. Young children learn their first object names by associating the words they hear with items they see. Understanding the processes that help children link words with objects will offer important insight into cognitive development. One significant component of learning word-object associations is the way in which children interact with objects; however, this has yet to be studied in detail. For a full understanding of infants’ object exploration it is important to investigate where exactly they look during labelling tasks; that is, whether and for how long they look at specific object parts during physical interaction or passive observation, and whether their language level affects the learning of names during labelling events.

The current study will use head-mounted eyetrackers that record children’s looking to explore how children at 16 and 24 months interact with novel objects. Participants will be assigned either to a physical interaction group, in which they will handle objects, or to a no-physical-interaction group, with no handling. Within these conditions half the children will be assigned to a labelling group, in which objects will be given novel labels (e.g., Look, a blicket!), and half to a no-labelling group, in which objects will be unlabelled. Following this session and a five-minute break, the experimenter will test children’s retention of label-object mappings by presenting three objects and asking children for each in turn (e.g., Which one’s the blicket?). Parents will complete a vocabulary inventory (UK-CDI; Alcock et al., in prep) to examine whether vocabulary interacts with object exploration in label learning.

We hypothesise that different ages will show different visual exploration styles. Vocabulary level and physical interaction with the objects are also anticipated to affect label learning in the task. This research will enhance our understanding of early cognition by demonstrating how children’s interaction with their environment affects their word learning.

14:00
Where's my label?! Studying how a missing label and other missing features are perceived

ABSTRACT. Previous studies have tried to determine whether labels are treated as features or as markers when learning categories. There is some evidence that the label might be treated as a feature among other features at an early stage, with behaviour shifting to be more marker-like in adults. However, the role of the label is still debated, and its interaction with other features remains unclear. We plan to run an experiment addressing these questions with pre-lexical infants, pre-schoolers and adults. We will use the set of stimuli introduced in Kovic et al. (2010), namely a 5-4 categorization task with simple drawings of animal-like creatures, the categories being formed with either the more salient (head and tail) or less salient (wings and legs) features being diagnostic (highly predictive of the category). Unlike the many studies using this kind of task, we will treat the label as any other feature, making it vary amongst exemplars of a category instead of being fully diagnostic. We will then study how a missing feature influences subjects’ behaviour; we are particularly interested in the effect of a missing label compared with other missing features. We will measure this effect in looking times during a test session. Under the label-as-feature theory, we expect the label to have the same role as other features, probably of high saliency. Thus, a missing label should be as surprising for the subject as a missing head or tail, depending on the exact level of saliency of the label. If, however, the label is treated as a marker, a missing label should have little or no effect on the subject’s looking times during the test session. Running this experiment at different ages will allow us to detect a possible shift from one treatment of the label to the other during development.

14:00
How do high frequency words assist infants' language acquisition?
SPEAKER: unknown

ABSTRACT. A key challenge facing language learners is identifying words and grammatical rules from continuous speech. Past research has suggested that these tasks are helped by infants’ ability to extract transitional information from speech and use it to infer word boundaries and linguistic regularities. Critically, studies suggest that infants’ statistical language learning may benefit from the presence of high-frequency marker words (Bortfeld et al., 2005) that may act as anchors around which speech segmentation can occur, while also assisting with grammatical categorisation (Monaghan & Christiansen, 2010).

To address these claims, we familiarised 10- and 20-month-old infants (each N = 24) with a continuous stream of speech comprising repetitions of 4 bisyllabic target words, and compared learning to the same language but with high-frequency monosyllabic marker words preceding the target words and distinguishing them into two distributionally-defined categories. We assessed infants’ ability to segment the speech by using a head-turn preference task to monitor looking times to words versus part-words. We also examined whether infants used the high frequency words to help them form grammatical categories. For this, we measured looking times to short streams of words containing items from the same versus different grammatical categories. For both tests, gaze direction and duration were measured using video-recording and eye-tracking. The results enable a test of whether infants can use high frequency words to segment speech and to learn about the grammatical structure of the language at an early stage of language development.

References

Bortfeld, H., Morgan, J.L., Golinkoff, R.M., & Rathbun, K. (2005). Mommy and me: familiar names help launch babies into speech-stream segmentation. Psychological Science, 16, 298-304.

Monaghan, P. & Christiansen, M. H. (2010). Words in puddles of sound: modelling psycholinguistic effects in speech segmentation. Journal of Child Language, 37, 545-564.

14:00
The end of the line: children learn more words from storybooks that do not rhyme
SPEAKER: unknown

ABSTRACT. Rhymes are highly prevalent in children’s language input. For example, over 40% of children’s books for under-5s rhyme (The Book Trust, 2014). Recently, Read (2014) demonstrated that preschool children learn monster names better if they are placed at the end of a rhyming stanza than in the beginning or middle. However, because all children heard rhyming stories, it remains unknown whether rhyming storybooks facilitate word learning to a greater degree than non-rhyming storybooks.

In the current study we read preschool children purpose-written illustrated storybooks in either a rhyme or non-rhyme format. Critically, the only difference was the arrangement of the words (in the non-rhyme condition the same words were re-arranged so the lines no longer rhymed). Children were read the same storybook three times and received a total of 12 exposures to two novel objects and their names. Children were tested on their immediate recall of the name-object associations using a 4-alternative forced-choice picture-pointing task.

Only children in the non-rhyme condition recalled more words than expected by chance, t(11) = 5.65, p < .001, d = 1.65. They also recalled more words than children in the rhyme condition, t(22) = 2.53, p < .01, d = 1.03. In a follow-up study, another group of children were presented with the same storybooks and tested on both immediate recall and retention 7 days later. Children in the non-rhyme condition retained more words than children in the rhyme condition, t(34) = 2.54, p < .02, d = .85 (see Figure 1).

Taken together, these data demonstrate that preschool children learn more words from storybooks that do not rhyme. These findings have both theoretical implications for understanding children’s word learning as a function of ease of encoding as well as practical implications for improving the vocabularies of young children via shared storybook reading.

14:00
Infants expect subordinates to comply with an authority’s but not a bully’s instructions
SPEAKER: unknown

ABSTRACT. In the adult morality and evolutionary literatures, a simple form of dominance (a social asymmetry in which a dominant individual prevails over subordinates in competitive situations) is functionally distinguished from a more complex form of dominance often referred to as authority (a social asymmetry in which the power of an authority over subordinates is deemed rightful or legitimate by the parties involved). Here we investigated whether 21-month-olds already distinguish between these two forms of dominance: We asked whether infants would expect subordinates to comply with an instruction given by a leader (authority condition), but not an instruction given by a bully (bully condition). We presented infants with computer-animated events involving geometric characters. In the authority condition, we familiarized infants with three subordinates who bowed to the leader as soon as she arrived and gave her the ball with which they were playing. During test events, the leader instructed the subordinates to go to bed. Subordinates either complied while the leader watched but disobeyed after she left (disobedience event) or continued to comply after she left (obedience event). In the bully condition, the leader was replaced by a bully who in the familiarization event hit the subordinates and stole their ball. Infants looked reliably longer at the disobedience than at the obedience event in the authority condition, but looked equally at the two events in the bully condition. These results suggest that by 21 months, infants expect subordinates to comply with instructions given by an authority, but not those given by a bully, and as such are already sensitive to the complex dynamics of power and authority.

14:00
Look and Learn: A Model of Gaze Contingent Learning
SPEAKER: unknown

ABSTRACT. How do infants learn to manipulate the world? How do they learn causality? We aim to shed light on these questions by combining infant experiments and computational modeling. We apply a gaze contingency paradigm that enables infants to control their visual environment (Wang et al., 2012). In this experiment, 8-month-old infants look at a screen with two peripheral red discs. One of the discs triggers the appearance of a centrally presented animal picture when fixated, while the other is non-functioning. Consecutive fixations on the functioning disc trigger the appearance of new pictures. Results indicate that the infants develop a gaze preference for the functioning over the non-functioning disc (functioning bias). In order to study the learning processes during the experiment, we adapt and extend a computational model of the basal ganglia, a brain region implicated in action selection and discovery. The original model was developed as an embodied model of action discovery and was able to reproduce contingency learning effects in an ethological experiment (Bolado-Gomez and Gurney, 2013). It captures learning as adaptations in cortico-striatal projections, which give rise to behavioral action preferences. This learning is modulated by a sensory prediction system comprising novelty salience and prediction errors as embodied by phasic dopamine. Our model reproduces the functioning bias effect that we find in the experimental data, which allows us to estimate the learning progress of different individuals based on their gaze behavior. We conclude that our model captures the essence of learning during gaze-contingent experiments and may thereby help to bridge the gap between neuronal processes and human behavior.

14:00
The Brightness-Weight Correspondence in Infants
SPEAKER: unknown

ABSTRACT. Adults have been shown to appreciate the correspondence between brightness and weight, wherein brighter stimuli are associated with lighter weight and darker stimuli are associated with heavier weight (Walker, Francis & Walker, 2010). To date, no research has examined the presence of this correspondence in infants. Research has shown that pitch-sharpness and pitch-vertical placement correspondences are appreciated by infants as young as 3-4 months (Walker et al., 2010). Therefore we expect the brightness-weight correspondence may be observable in young infants. To test correspondences between sound and visual stimuli, looking-time measures have been successfully employed. Infants are shown displays which are cross-modally congruent and incongruent. It is assumed that if infants appreciate the correspondence they will look longer towards the incongruent display, as it is surprising. To examine the brightness-weight correspondence, however, this method would assume that infants understand complex weight principles. As a novel alternative, we have chosen to use motion-capture to examine infants’ appreciation of the brightness-weight correspondence. Through a series of studies, infants will be presented with real objects which vary in brightness. We will examine whether infants approach these objects differently because of their anticipated weight. It has been shown that infants as young as 9 months use differentiated, manipulative force for objects they expect to differ in weight (Mash, 2007). Motion-capture equipment will be used to take various measures as the infants reach for, grasp, and transport objects. We anticipate that speed, acceleration, trajectory and distance travelled by the wrist may differ for darker and brighter objects. We also expect that the overall grip-force will vary depending on anticipated weight.
Examining appreciation of the brightness-weight correspondence at various points throughout infancy helps us to reveal more about whether the correspondence is innate or learnt through experience with language and the environment.

15:30-16:00 Coffee Break
16:00-17:00 Session 11: Colour Learning
16:00
The Development of Colour Word Knowledge in Infants and Toddlers
SPEAKER: Samuel Forbes

ABSTRACT. Colour words are considered difficult for infants to learn, due in part to the categorical nature of colour terms versus the continuous nature of the colour spectrum. In the present study, we compare the colour word comprehension abilities of children from around 12 months to over four years of age using an eye-tracking paradigm. Following the onset of the target word, the proportion of looks to the target increases slightly in all age groups, and significantly so after 18 months. This demonstrates that the presence of the auditory label increases attention to objects with the target colour. Total looking times between target and distractor overall indicate that the colour label draws attention to the target visual stimulus in some age groups. The data indicate that there are signs that infants may know some colour words as early as 19 months. The data were compared with data collected from the Oxford CDI (Hamilton, Plunkett, & Schafer, 2000), which instead showed comprehension increasing only with production, suggesting that parents may underestimate the colour word knowledge of their children. The results of this study suggest that a very basic knowledge of colour categories is learned earlier than expected from past research and from parental accounts, and that those categories continue to be refined over the following years. Previous accounts have suggested that colour word acquisition happens only after the second year, but that once the process of learning begins, colour names are learned quickly. In contrast, the results presented here suggest that colour words may be learned at a basic level earlier than 24 months, and that this knowledge is built upon over a period of several years.

16:30
Toddlers and Robots Learn More Object Names When Everything They See Together is the Same Colour
SPEAKER: Jessica Horst

ABSTRACT. Toddlers are exceptionally skilled at learning new words. Recent studies demonstrate that reducing the number of objects present when novel names are first introduced significantly improves toddlers’ word retention (Horst, Scott & Pollard, 2010). One explanation is that reducing the number of possible word meanings under consideration narrows the problem space, helping children focus on and encode target information, which then facilitates word learning. The current studies examine whether narrowing the problem space with object features facilitates word retention and generalisation in both toddlers (Experiment 1, N = 36) and the iCub humanoid robot (Metta et al., 2010; Experiment 2, N = 36).

On every learning trial participants saw three objects (two known, one novel) and were asked to choose the referent of a known or novel name. In the same colour condition, the objects presented together were always the same colour (e.g., red kazoo, red camera, red ladybird). In the different colours condition the objects presented together were always different colours (e.g., red kazoo, blue boat, pink pig). Critically, all participants saw the same objects the same number of times and received the same test trials: retention trials with the original novel objects followed by generalisation trials with new exemplars (e.g., blue kazoo).

Overall, both toddlers and robots did well on referent selection and retention (all ps < .05). However, only participants in the same colour condition generalised the novel names to new exemplars of the target categories more than expected by chance (children: t(17) = 5.59, p < .001; robots: t(17) = 6.14, p < .001) and more than participants in the different colours condition (children: t(34) = 2.12, p < .05; robots: t(17) = 2.87, p < .01). These data demonstrate that presenting less information during the learning phase allows toddlers and robots to demonstrate multiple types of word learning.