LCICD 2016: LANCASTER CONFERENCE ON INFANT AND CHILD DEVELOPMENT 2016
PROGRAM FOR THURSDAY, AUGUST 25TH

09:15-10:15 Session 1: Keynote
09:15
Biased vocabulary + visual attention and memory processes = a shape bias: A Dynamic Neural Field model

ABSTRACT. From roughly 24 months of age, children generalize novel names for novel solid objects to new instances based on similarity in shape (Landau, Smith & Jones, 1988). In this talk, I examine the mechanisms behind both the development of the “shape bias” and its manifestation in real-time via simulations in a Dynamic Neural Field model. In particular, I ask whether the same model that Samuelson et al. (2011) used to demonstrate the role of spatial memory in early word learning also produces a shape bias when taught a vocabulary with the same statistics as the early noun vocabulary. Five sets of simulations capture the emergence of the shape bias from the growing noun vocabulary; differences in the bias depending on specifics of the novel noun generalization task; acceleration of vocabulary development following the training of a precocious bias; and differences in the bias depending on the specifics of the trained words (Perry et al., 2009), or individual differences in vocabulary (Perry & Samuelson, 2011). Findings support established links between the bias and the developing noun vocabulary and provide insight into connections between visual cognition and word learning biases. I argue the model compares favourably to existing models of word learning and the shape bias on several model comparison metrics (Sims & Colunga, 2013; Christiansen & Chater, 2001).

10:15-11:15 Session 2: Action Understanding
10:15
Do verbal cues given by the model influence young infants' imitation of goal-directed actions?
SPEAKER: Birgit Elsner

ABSTRACT. Previous research has shown that social-communicative cues displayed by the model can help infants to identify the important aspects of observed actions (e.g., Gergely & Csibra, 2005; Southgate et al., 2009). Moreover, infants selectively reproduce either the observed movement or goal, depending on whether the demonstrated action had a salient action goal (resulting in infants’ focusing on the goal) or not (focusing on the movement; e.g., Carpenter et al., 2005). In several studies, we presented 14- to 24-month-old infants or 3- to 5-year-old preschoolers with imitation tasks in which we varied the salience of action goals and the verbal information given by the model. For instance, the model verbalized either the action style or the end position during action demonstration, or both action components, or neither (cf. Elsner & Pfeifer, 2012). Results showed that preschoolers preferred to reproduce the verbalized action component (movement or goal, respectively), especially when the demonstrated action did not have a salient action goal (see also Williamson & Markman, 2006; Wohlschläger et al., 2003). In infants, however, results were less clear: the salience of the action goal seemed to have a strong impact on infants’ imitation, with infants reproducing the goal when a salient goal was present, but reproducing the movement when the action did not have a salient goal. In contrast, the verbal cues did not seem to influence infants’ imitation. We will discuss these findings with regard to differences in action knowledge, verbal capacities, and social skills in infants as compared to preschoolers. In sum, the presented data will give further insight into the role of context conditions for infants’ interpretation and reproduction of others’ behavior.

10:45
The neural processing of syntactic violations of action sequences in 5- to 7-month-old infants: an ERP investigation

ABSTRACT. Human behavior is a complex and continuous flow of information and relies on a hierarchical organization. This organization helps extract critical information that serves both the anticipation and the understanding of action and language. Interestingly, the organization of the motor system shows striking parallels with that of language. Goal-directed actions, like sentences in language, can be sequenced into simpler units, which are organized according to a hierarchical plan (e.g., Fadiga et al., 2009). Both systems share the need to establish a correct temporal order determining the way in which single elements are grouped. Research in adults has shown that a manipulation of the action structure, that is, the action syntax, leads to a similar signature in the EEG as a corresponding violation in a processed sentence (Maffongelli et al., 2015). In adults, language and action knowledge are based on life-time experience in both domains. In infants, in contrast, both domains are just emerging, and it is thus possible to examine the processing of nascent knowledge structures, both separately and in relation to each other. Whereas recent infant research has focused primarily on the processing of action semantics (meaning), we investigated the neural processing of action syntax (structure) in 5- to 7-month-old infants during the observation of action sequences. We presented action sequences in some of which two elements were swapped, resulting in an incorrect sequence order. This violation occurred in the middle of an action; the action goal could only be achieved when the correct temporal order was maintained. Preliminary results show that a violation of this order elicits a frontal bilateral EEG negativity peaking at around 400 ms post-stimulus onset, suggesting that infants rely on structural regularities and that they are able to segment the action flow into discrete units.

11:15-11:45 Coffee Break
11:45-12:45 Session 3: Computational Modelling
11:45
Curiosity-Driven Development of Tool Use Strategies: a Robotic Model

ABSTRACT. The understanding of tool use development in young children is a key question for the understanding of the ontogeny of human cognition. For instance, the advanced control of multiple interacting objects requires mental transformations and planning operations which are fundamental to human cognition. Child development has been described as a staircase of successive stages [Piaget, 1952], but more recently, Siegler's overlapping waves theory described variability in a child's set of current methods to solve a problem. In a task where young children had to retrieve an out-of-reach object with a tool, Chen and Siegler [2000] showed that children use different strategies concurrently, and that the probability of using each strategy evolves over time.

In our work, we focus on the study of the evolution of these overlapping waves of behaviours in a robotic model and in particular on the use of concurrent strategies in a similar tool use problem.

An interesting approach to model autonomous life-long learning is the implementation of curiosity-driven learning mechanisms in robotic setups. Such mechanisms have been argued to allow the self-organization of developmental trajectories similar to those found in infants [Oudeyer and Smith, 2014].

However, existing models have considered the learning of a single mapping between a motor space and a task space; from the perspective of an open-ended development of reusable skills, and specifically in tool use, multiple interdependent and hierarchically organized task spaces should be available to the agent. We define such hierarchies of sensorimotor models that structure the sensory space, and we call the choice of which model in this hierarchy to explore Model Babbling.

We study how the intrinsic motivation in Active Model Babbling influences the evolution of overlapping waves of behaviours, and how the concurrent use of non-optimal strategies may ultimately lead to improved behaviours in similar problems.

12:15
Representational Re-description in hSOMs
SPEAKER: Olivia Guest

ABSTRACT. We present a type of network model, based on the self-organising map (SOM), that derives clusters in the input set at different levels of description (e.g., subordinate, basic, superordinate). The hierarchical structure of the self-organising map (hSOM) model allows for the emergence of both finer-grained as well as more global-level representations of the environment to which it is exposed. Thus, as a function of level within the hierarchy, more compressed or more detailed representations of the input can be accessed once trained. The hSOM model displays a developmental trajectory: a rudimentary structure is initially present throughout the levels. As a result of activation passing from lower to higher levels and training, the representations at each level become more refined, reflecting both novel and pre-existing distinctions in the input. Such a delay in learning imposed by constraints in the architecture provides an account of why certain categorical distinctions take longer to learn than others. The discovery and creation of, say, basic, subordinate, and superordinate representations to explicitly describe the environment can be seen as a form of representational re-description (Karmiloff-Smith, 1992). Direct access to the representations at each layer allows an easy way for both model and modeller to evaluate what the hSOM knows — for the former to carry out executive operations over perceptual and semantic knowledge, for the latter to gain a better understanding of the hSOM and the predictions/explanations it provides. This model can be seen as capturing the behaviour of a single perceptual pathway, in which low-level sensory input is incrementally organised into higher-level convergence zones using the similarity-in-topography principle implemented by SOMs (Damasio, 1989; Simmons & Barsalou, 2003).
An augmentation of this modelling work is a dual-pathway version that further captures the developmental interactions of different modalities in conceptual processing.

12:45-14:00 Lunch Break
14:00-15:30 Session 4: Language Acquisition I
14:00
Labels shape infants' object representations

ABSTRACT. In adults, language affects cognition, a phenomenon known as linguistic relativity. However, when this relationship emerges is not known. Here, we trace its roots to infancy: infants incorporate labels into their object representations even before they begin to speak. Ten-month-olds were trained with two novel toys; critically, only one toy was named. In a subsequent eyetracking task in which the objects were seen in silence, infants showed evidence of having formed different representations for the named and unnamed objects. These data demonstrate that language shapes cognition from the outset.

14:30
The detection of grammatical gender dependencies in German-learning 24-month-old children
SPEAKER: Tom Fritzsche

ABSTRACT. Children become sensitive to grammatical gender mismatches around the age of 18-20 months (Cyr & Shi, 2013; van Heugten & Christophe, 2015). At 24 months they are able to use gender information to identify visual referents (Johnson, 2005; Lew-Williams & Fernald, 2007; van Heugten & Shi, 2009) and to represent abstract gender information (Melançon & Shi, 2015). These findings have been obtained from children acquiring French, Dutch, or Spanish – languages that mark two genders and no case. Does this generalise to German with its three genders and four case categories, all of which affect the form of the determiner? We tested thirty 24-month-old monolingual German-learning infants in the head-turn preference paradigm, measuring how long they listened to grammatical and ungrammatical (i.e. gender-mismatch) combinations of determiners with highly familiar nouns. In addition, we obtained their vocabulary scores. Gender assignment in German is not entirely arbitrary, but its phonological and morphological cues are probabilistic in nature and can only be discovered in larger word corpora, which led Mills (1986) to postulate a connection between gender and vocabulary acquisition. Given the complexity of the German determiner system, we asked whether German-learning infants are sensitive to gender mismatches at 24 months – an age at which they start to produce determiners (Mills, 1986). Furthermore, we explored the relationship between looking behaviour and vocabulary size. Results indicate that looking times are affected by an interaction of the factors grammaticality, vocabulary size, and trial position. Pairwise comparisons showed longer looking times for grammatical over ungrammatical combinations only for infants with low vocabulary scores in the second part of the experiment. This suggests that German 24-month-olds do not show a robust sensitivity to violations of gender dependencies.
More data is currently being collected and will be presented by the time of the conference.

15:00
The role of showing and pointing in the vocabulary growth of children aged 8-15 months
SPEAKER: Amy Bidgood

ABSTRACT. The idea that language acquisition develops from children’s early pre-linguistic abilities is central to many constructivist approaches (see Clark, 1993; Tomasello, 2003). Studies have investigated the increasing complexity of infants’ gesture use (e.g. Cameron-Faulkner et al., 2015) or the relationship between gesture and vocabulary growth (e.g. Bates & Dick, 2002; Iverson & Goldin-Meadow, 2005). Others have investigated the effect of babble on language production (McGillion et al., in press). However, none have investigated all these predictors together in the same children, so we cannot draw robust conclusions about the relative importance of each, or about relationships between predictors.

The current study investigated whether infants’ showing and giving gestures predicted declarative pointing, whether pointing predicted later receptive vocabulary, and whether early vocal production predicted later expressive vocabulary. Eighty infants participating in the longitudinal Language 0-5 Project took part in structured 25-minute play sessions at 11 and 12 months, designed to elicit a range of gestures. Vocabulary was measured using the UK-CDI at both sessions and at 15 months. Vocal production measures were taken from LENA recordings and parental report of infants’ babble.

Preliminary regression analyses (N=10) suggest infants’ showing and giving gestures at 11 months predicted their declarative pointing at 12 months (R2=0.30, F(1,9)=3.37, p=0.10) and that declarative pointing at 11 months predicted receptive vocabulary at both 12 (R2=0.66, F(1,9)=15.70, p=0.004) and 15 months (R2=0.51, F(1,9)=8.45, p=0.020). We found no relationship between infants’ early vocal production, measured either by parental report or by LENA child vocalisation counts, and expressive vocabulary.

In summary, our results show a line of predictive relationships: from hold-out and give gestures, through declarative points, to vocabulary growth. We discuss the relationship between early communicative competence and the developing complexity of early gestures, and potential reasons why some gestures, in particular, may be predictive of later language growth.

15:30-16:00 Coffee Break
16:00-17:00 Session 5: Culture & Socialisation
16:00
Good night, good morning: How sleep quality affects infants’ morning mood.

ABSTRACT. Infant sleep problems are among new parents’ greatest concerns, and the importance of sleep quantity and quality for infant development is an under-researched topic. This project reports the results of studies conducted in São Paulo, Brazil, and London, UK. In the Brazil study, mothers of 117 infants (53 female, mean age = 13.9 months, range = 2-27m) provided background demographic data and general information on their child’s sleep, and completed the appropriate version of the short infant behaviour questionnaire (IBQ-R, Rothbart & Gartstein, 2000; EBQ, Putnam & Rothbart, 2006). They also completed a 10-day sleep diary indicating the time babies went to sleep and woke up, night-time wakes, feeds and diaper changes, and the morning happiness and energy of their baby on a 10-point scale. Preliminary analysis indicated that overall infants were in bed for an average of 9h46 ± 1h12 and woke up happy (mean score 8.2 ± 1.55) and energetic (mean score 7.2 ± 2.50). A regression analysis showed that babies’ morning energy level was positively affected by the number of night-time wakings (β=0.32, p<.001) and total sleep (β=0.42, p<.001). By contrast, happiness was negatively affected by night-time wakings (β=-0.31, p<.001) but showed an interaction between total sleep and diaper quality (total sleep: β=0.13, p<.003, interaction β=-0.14, p<.02). These patterns are shown in Figure 1. Sleep and morning mood were also affected by sleeping arrangements and infant temperament (not shown). Overall, the data showed a complex relationship between infant sleep quality and morning mood, but also that parents can potentially improve morning mood by minimising night-time disturbances and using more absorbent diapers.

A comparison study is currently underway in the UK and will be reported at the conference.

16:30
The socialisation of self: Investigating the link between autonomous parenting and early mirror self-recognition

ABSTRACT. Although the ability to self-recognise emerges universally by around the age of two years, there is growing evidence of cross-cultural variation in the early emergence of self-awareness. Specifically, infants from autonomous cultures typically pass the mirror mark test of self-recognition earlier than infants from relational settings. This advantage could mark the early socialisation of individualistic perspectives on self. Although the distinction between individualism and collectivism is well established in adults, there is little data to elucidate the development of these distinct perspectives. This is because self-recognition has traditionally been considered a fixed cognitive development rather than a social process.

Ross, Yilmaz, Dale, Cassidy, Yildirim, and Zeedyk (in press, Developmental Science) demonstrate that the autonomous advantage in self-awareness development is test-specific. When self-awareness was measured using an alternative method to the mirror mark test (the body-as-obstacle task), Zambian 15- to 18-month-olds outperformed their Scottish counterparts. There were associations between distal parenting practices (favoured by Scots) and mirror self-recognition, and between proximal parenting practices (favoured by Zambians) and body-as-obstacle performance. These data imply that self-awareness tests vary in their cultural sensitivity, and that different socialisation practices can be associated with different performance profiles in self-awareness tests. However, it is an open question whether infants performed differently on the self-awareness tasks because of qualitative differences in early self-awareness.

If there are socialised differences in the quality of early selves, we should also see intra-cultural variation in self-awareness performance associated with distinct parenting practices. Since the environment is shared, these data would also help to rule out the contribution of non-self-related differences (e.g. mirror familiarity). Measuring self-awareness and social interaction from 6 through to 24 months, we report an intra-cultural comparison of Scottish infants’ performance on the self-awareness tasks, relating performance to infants’ experiences of distal versus proximal parenting styles.

17:00-19:00 Session 6: Poster 1
17:00
Perceiving one’s own body in the first months of life

ABSTRACT. I report recent research investigating how infants in the first year perceive their own bodies. In a recent set of studies (Exp. 1) we investigated developmental changes in the ways touch processing is influenced by other modalities of information. We presented 10-, 6-, and 4-month-olds with vibrotactile stimuli in combination with either visual flashes or auditory beeps which moved backwards and forwards between their hands. Measuring visual preferences for spatially congruent vs. incongruent bimodal events, our analyses demonstrate that the ability to co-locate visual and tactile stimuli develops early, and is potentially in place by 4 months. The ability to co-locate tactile stimuli with respect to stimuli arriving from vision and audition is fundamental to perceiving the interface between the body and the world. Visual-tactile interactions are also crucial to social perception, and I will report the findings of a second study (Exp. 2) investigating whether cortical responses to tactile stimulation in four-month-old infants are modulated by visual information specifying another person being touched. We presented vibrotactile stimuli to a group of four-month-old infants’ hands. In synchrony with the tactile stimuli, we showed video stimuli presenting another person’s hand resting on a surface, either being touched by a soft paintbrush (“touch” trials) or being approached but not touched by that same paintbrush (“no touch” trials; the paintbrush just touched a nearby surface). We observed somatosensory evoked potentials (SEPs) over centroparietal scalp sites, with amplitudes greater for the “no touch” than for the “touch” condition contralateral to the touch. Thus the somatosensory cortex can be vicariously recruited by seeing other people being touched, even at four months of age.

17:00
Understanding the causes of (dis)agreement of the current infant ERP editing methods
SPEAKER: unknown

ABSTRACT. One of the challenges when conducting infant event-related potential (ERP) studies is to identify artefact-free trials that can be included in the final ERP. Current methods for selecting valid ERP trials usually include a manual editing step, an automatic editing step, or a combination of the two. In a previous study, we investigated the agreement between current infant editing methods and found low agreement between editors in the trials accepted due to noise in the EEG, as well as low agreement in the number of channels interpolated. The differences between the editing methods also influenced the final ERP morphology, as well as the amplitude and peak latency of the Nc component between conditions. The aim of the present study is to investigate further the reasons that led to the agreement – and disagreement – between editors. Variables that characterize the signal and the level of noise in the EEG are calculated on a trial-by-trial basis, and advanced statistical methods such as generalised linear mixed-effects models are applied to find which variables have the greatest influence on the agreement of the current infant editing methods. The results are intended to help shed light on how the current infant ERP editing methods can be improved and standardized within the infant ERP research field.

17:00
The Development of the Neural Correlates of Body Schema Processing During Childhood
SPEAKER: unknown

ABSTRACT. It has been shown that observing body expressions evokes neural responses in the human brain similar to those elicited by faces (Stekelenburg & de Gelder, 2004; Gliga & Dehaene-Lambertz, 2005). As with faces, processing body postures integrates, to a certain extent, information about the configurational relation between their different elements (Thierry et al., 2006; Righart & de Gelder, 2007). When this information is altered by changing the orientation of the bodies (i.e., upright vs inverted), bodies tend to be recognized less accurately. In particular, the ERP component N170 seems to be delayed and to have a higher amplitude for inverted bodies compared to upright ones (Stekelenburg & de Gelder, 2004). However, the body inversion effect has so far only been investigated in adults. Although previous studies show that there are developmental changes in body schema processing in infants (Missana, Atkinson and Grossmann, 2014), it is yet unknown whether the associated neural mechanisms continue to develop beyond infancy. The current study aims to reduce this gap by showing how the ERP components associated with body schema processing mature throughout childhood. Two- to 11-year-old children were presented with images of human bodies with a neutral posture and objects with a similar structure (i.e., hat stands). Both the bodies and the objects were presented in an upright and an inverted orientation, while continuous EEG was recorded with a 128-electrode Geodesic Sensor Net (EGI). Preliminary results show that across all age groups inverted body images evoke a more positive amplitude in the N170 component, and that the amplitude of the N170 decreases with age. This study provides a developmental account of the maturation of the body schema over childhood.

17:00
Infants’ understanding of teleological actions after ostensive communication
SPEAKER: unknown

ABSTRACT. Infants interpret actions as goal-directed (Hunnius & Bekkering, 2010) and are also sensitive to ostensive communication (Csibra, 2010). When ostensively addressed, infants perceive the informative content of the communication as relevant, meaningful and generalisable (Csibra & Gergely, 2009). In the following study we ask whether ostensive communication can change the interpretation of an arbitrary action in 9-month-old infants. We used the N400 ERP component, which is sensitive to semantic processing, to answer this question.

Based on Reid et al. (2009), we measured the N400 ERP component in an expectancy violation paradigm. We investigated 9-month-old infants in a communicative condition and in a non-communicative control condition. In the communicative condition infants were presented with an actor addressing them ostensively (direct eye contact, infant directed speech) and subsequently performing an action that can have an anticipated or unanticipated outcome.

For the N400 we investigated a group of electrodes in the parietal area, similar to Reid et al. As the data showed no clear peaks, we ran an ANOVA on the mean amplitude in the 600-800 ms time window, based on previous research (Reid et al., 2009). We found a significant positive deflection for unexpected outcomes (F(1,15) = 8.6, p = .01). We did not find a significant communication-outcome interaction (F(1,15) = 0.5, p = .40). Beyond our hypotheses, we also found a main effect of communication between 150-200 ms (F(1,15) = 7.4, p = .02).

We also analysed the frontal Nc-component, to investigate whether communication facilitates action understanding through arousal. However, no significant interaction or main effects were observed (all p > .50).

The current paradigm did not confirm the hypothesis that ostensive communication changes the interpretation of actions in 9-month olds.

17:00
“Getting into synch”: the development of mutuality
SPEAKER: unknown

ABSTRACT. Development is guided by processes constructed within interaction (e.g., Fogel, 1993; Wootton, 2005). We propose that early forms of mutuality arise within mother-infant interactions as part of coordinating with each other. This coordination entails online mutual responding to each other’s behaviour, which acts as a coupling mechanism, enabling mother and infant to enter an interaction and maintain or stabilise it (Rączaszek-Leonardi et al., 2013). In these interactions, gaze is one of the first and most important means of communication and coordination in parent-infant dyads, as it is the first dyadic system in which both mother and infant have almost equal control over the same behaviour (Stern, 1974). It allows the infant to take part in conversation and maintain his or her participation (Filipi, 2009). In the present paper, we argue that emerging patterns of mutual gaze constitute a form of coordination through which mutuality enters the interaction system. We used a longitudinal corpus of 17 German mother-infant dyads filmed during an everyday routine activity when the infants were 3, 6 and 8 months old. Adopting a mixed-methods approach, we conducted qualitative analysis of the sequential organisation of gaze. We then coded the gaze behaviour of mother and infant and subjected the data to recurrence analysis (e.g. Warlaumont et al., 2010). Our findings indicate systematic differences across development: while the general similarity of gazing behaviour decreases with age, the behaviour of mother and infant becomes more tightly coupled and more structured around a predictable temporal pattern. We propose that mutual understanding may emerge from a gradual “getting into synch”, a process of participation in co-actions which generate mutual experiences. As the skillfulness of the infants increases, resources may become available that enable infants to establish mutuality outside of the dyad.

17:00
Replicability of Findings in Infant Attention Research
SPEAKER: unknown

ABSTRACT. In recent years, the replicability of research studies in general, and psychological studies in particular, has been repeatedly examined. It has even been argued that study results should be reproducible rather than merely replicable (see Drummond, 2009). Vast discrepancies have been found between results from adult studies reported in original articles and their respective replications undertaken for a collaborative project (see Open Science Collaboration, 2015). The question arises how reliably replicable the results reported in developmental psychology studies are. In an attempt to replicate results reported for a spatial cuing paradigm (e.g., Richards, 2000, 2005), we collected pilot data from a group of 2- to 9-month-olds. We presented trials involving a central animation and different combinations of cues, gaps, and targets. We reproduced the commonly seen gap-/overlap-effect, with delayed saccade onsets in the overlap condition relative to the gap condition. Moreover, for trials involving a cue occurring on the same side as the target and with a gap duration of 150 ms, we found the previously reported facilitation of the target side (i.e., earlier saccade onsets) compared to trials with an incongruent cue and a gap duration of 150 ms. However, as reported in earlier studies attempting to replicate the same findings, we did not reproduce the inhibition of return for congruently cued trials with a gap duration of 1000 ms. In fact, keeping infants engaged in and attending to an empty screen for a full second, without their shifting gaze to the cued side or anywhere else, proved to be rather challenging. Therefore, we urge researchers to exercise caution when reading about similar results that others have repeatedly failed to replicate.

17:00
Shared sensory experiences modulate the understanding of others in childhood
SPEAKER: unknown

ABSTRACT. Recent studies have found evidence that multisensory stimulation affecting body awareness extends to the face and engages social-cognitive processes in adults. A shared sensory experience (e.g. seeing someone else's face being touched while simultaneously feeling touch on one's own face) elicits changes in the mental representation of the self-other boundary. In our studies we examined the development of the multisensory processes elicited by the enfacement illusion. In Exp 1, children (aged 3, 4, 5, and 6 years) saw an unfamiliar face touched synchronously or asynchronously while feeling touch on their own face. We tested whether synchronous multisensory stimulation facilitates reading the emotions of others (happy, fear, disgust). The findings indicate that synchronous multisensory stimulation elicits changes in self-other boundaries and facilitates the recognition of emotions, with a developmental shift emerging between 3 and 4 years of age. In Exp 2, we studied whether shared sensory experiences between two people could alter the way peripersonal space is represented, and whether this alteration could influence the ability to take another person’s viewpoint in a perspective-taking task. We measured the shared-sensory-experience effect in a perspective-taking task varying first-person and third-person perspective. The findings suggest that the multisensory integration of peripersonal space can be dynamically modulated by social interactions with partners, contributes to mechanisms of social cognition such as understanding others’ actions, and predicts better understanding of others’ perspectives. Shared multisensory experiences between self and other, even in childhood, can change the perceived similarity of others relative to one's self, resulting in better emotion recognition and perspective taking.

17:00
Association between prenatal measures and postnatal life
SPEAKER: unknown

ABSTRACT. The study of foetal development has grown over the last decades (e.g., DiPietro et al., 1996, 2010), yet a great amount of research still needs to be done to uncover the associations between prenatal and postnatal life. Previous evidence (DiPietro et al., 1996) showed stability in foetuses’ heart rate (FHR) and heart-rate variability (FHRV) during pregnancy starting at mid-gestation, and associations between these measures and infant temperament. In a more recent study, DiPietro et al. (2010) found an association between a measure of ‘coupling’ between FHR patterns and foetal movements (FM), and the neonatal development of the central nervous system. Another study (DiPietro et al., 2004) reported that coupling and FHR patterns are affected by the family’s socio-economic status. In the ongoing longitudinal study (N=60), we are recording FHR, FHRV, and FM during the last 6 weeks of pregnancy and collecting information on the family’s social background. Neonatal measures include heart-rate recording, a gap/overlap task, and an infant temperament questionnaire (IBQ-R-VS). Based on previous studies, coupling between accelerations in FHR and the amount of FM is expected prenatally, and infants with lower socio-economic status are expected to show less coupling than those with high socio-economic status. Such prenatal measures are expected to affect the newborn’s state of alertness as measured in the gap/overlap task, and a positive correlation between foetal and infant HR measures is also predicted.

17:00
Neural mechanisms of speech versus non-speech detection and discrimination in children with autism spectrum disorder
SPEAKER: unknown

ABSTRACT. In the current study, we utilized a Rapid Auditory Mismatch (RAMM) paradigm in order to investigate event-related potential (ERP) responses associated with the detection and discrimination of speech and non-speech sounds in children with ASD. Specifically, we compared a group of 4- to 6-year-old high-functioning children with ASD with typically developing (TD) children matched on gender, chronological age and verbal abilities. ERPs were recorded while children passively listened to pairs of stimuli that were either both speech sounds, both non-speech sounds, speech followed by non-speech, or non-speech followed by speech. Control participants exhibited temporal cortex N330 match/mismatch responses reflecting speech versus non-speech detection bilaterally, whereas children with ASD exhibited this effect only in the left hemisphere. Furthermore, while the control group exhibited match/mismatch effects at approximately 600 ms (temporal P600, central N600) when a non-speech stimulus was followed by a speech stimulus, these effects were absent in the ASD group. These findings suggest that children with ASD fail to activate right-hemisphere mechanisms, likely associated with social or emotional aspects of speech detection, when distinguishing non-speech from speech stimuli. Furthermore, the ASD participants failed to detect the change from non-speech to speech at a late cognitive stage of evaluation, when speech stimuli followed a non-speech sound. Together, these findings are consistent with the hypothesis that children with ASD rely more on physical stimulus properties than on social or emotional cues when distinguishing speech from non-speech sounds. We are currently collecting RAMM ERP data from typically developing infants and toddlers at high versus low risk of developing ASD.

17:00
Visuo-spatial orienting triggered by biological motion walking direction: ERP evidence from 6-month-old infants
SPEAKER: unknown

ABSTRACT. The ability to detect social signals represents a first step in entering our social world. Behavioral evidence has demonstrated that sensitivity to biological motion emerges early in life (Simion et al., 2008) and later in infancy enables more efficient orienting responses towards stimuli cued by the direction of motion than towards uncued stimuli (Bardi et al., 2015). Yet, from a developmental perspective, the functional meaning and neural underpinnings of this priming effect remain to be understood.

Our study aimed at addressing these issues by using a spatial cueing paradigm while recording EEG from 6-month-olds. Infants were presented with a point-light walker displayed at the center of a monitor, randomly facing to the left or to the right. This spatially-non-predictive cue was followed by a single peripheral target randomly appearing at a position congruent (valid trials) or incongruent (invalid trials) with the cue walking direction. We examined ERP responses to targets in valid and invalid trials and coded saccades by using an offline frame-by-frame coding procedure.

First, saccadic localization latency was affected by cue direction: infants shifted their gaze faster toward targets appearing at congruently cued locations than toward incongruent positions. This priming effect replicated previous findings with a similar paradigm in an eye-tracking study (Bardi et al., 2015). Secondly, the priming effect was coupled with an enhanced P100 ERP component to targets in valid trials. This P100 validity effect might be explained by hypothesizing that the point-light walker triggers covert orienting towards the walking direction, yielding a gain control or selective amplification of sensory information in the visual pathways and a facilitation of oculomotor responses to stimuli appearing at attended locations.

Overall, results suggest that biological motion walking direction acts as a cue to orient visual-spatial attention in infancy, enabling sensory facilitation in processing potentially relevant information.

17:00
Investigating executive function in toddlers at risk for ASD and/or ADHD
SPEAKER: unknown

ABSTRACT. ASD and ADHD are neurodevelopmental disorders that frequently co-occur. There is strong evidence to suggest that disruption to executive function (EF) – by which is meant those higher-order self-regulatory processes that allow for the flexible modification of thought and behaviour in response to changing cognitive or environmental contexts – is implicated in both ASD and ADHD, at least from the later preschool years onwards. However, the profile of impairment between and within the disorders is inconsistent and it is as yet unclear whether atypical performance on EF tasks in these populations might be linked to a distinct cognitive phenotype attributable to the deterioration or inadequate maturation of the prefrontal cortex, or to the accumulation of interference from elsewhere in the brain.

There is thus a need for longitudinal investigation of early EF ability in children who later go on to develop ASD and/or ADHD. This investigation has to date been hampered by the fact that performance on the limited available measures of early EF (emerging between the ages of 2 and 3) may be influenced by language ability and social engagement/compliance, both potentially impaired domains in this population. In this talk I introduce a novel task which aims to circumvent these limitations and present some initial data from a sample of 2- and 3-year-olds at risk for ASD and/or ADHD.

17:00
Individual Differences in Children's Iconic Gesture Use: The role of cognitive abilities and personality
SPEAKER: unknown

ABSTRACT. Considerable variation exists in how frequently people gesture as they speak. This variation has been mainly attributed to speakers’ verbal and spatial skills. However, the nature of the relation remains unclear. For instance, individuals with poor visual and spatial working memory use more gestures [1]. In contrast, high spatial skills and low verbal skills have also been found to lead to more gesture use [2]. Finally, verbal, but not visuospatial, skills have been found to negatively influence gesture frequency [3]. In addition to cognitive skills, personality traits, specifically extraversion and neuroticism, positively influence gesture production [4].

Speech and gestures develop in close relation throughout childhood [5-8]. Previous studies focused on the type of gestures that children use, and how these gestures relate to children’s language or cognitive development. To date, no study has investigated why some typically developing children gesture more than others.

This study examines individual differences in children’s frequency of iconic gesture use to determine whether they are related to differences in verbal skills, spatial abilities, personality traits, and memory abilities. To date, nineteen children aged 4 to 6 (mean age= 5;3) participated. Four tasks were used to elicit gestures. We measured children’s verbal skills, spatial skills, memory, and personality traits using standardised tests.

Preliminary results show that higher frequency of iconic gesture use was related to higher verbal working memory, and poorer visuospatial short-term memory. In addition, gender also influenced gesture frequency such that being a girl was associated with an increase in iconic gesture use.

This is the first study to show that cognitive abilities play an important role in iconic gesture use during development. Our results extend previous findings on individual differences in adults’ gesture production, and indicate that verbal and visuospatial memory abilities determine how frequently individuals use iconic gestures even at the age of 5.

17:00
Intention or attention before pointing: Do infants' early hold-out gestures reflect evidence of a declarative motive?
SPEAKER: unknown

ABSTRACT. Gestures are the first signs of conventional communication in infants and can reflect various motives. Gestures with a proto-declarative motive display an understanding that others can share attention to an external referent. This is a uniquely human trait and has been linked to later language development (Cochet & Vauclair, 2010). It is still unclear when this proto-declarative motive develops in infant gestures. Previous research on early pointing suggests that declarative pointing develops around 12 months; however, this may not reflect the earliest onset of this skill. Precursory gestures, such as showing a toy, may also reflect a declarative motive. The current study investigated the motives behind these earlier gestures, to establish whether from 10 months infants use ‘hold out’ gestures declaratively. Infants were placed in an experimental setting aimed at eliciting a ‘hold out’ gesture in a declarative context; the experimenter reacted to infant gestures in different ways, and the infants’ responses were recorded. Results of a pilot study with 12-month-olds suggest that when the experimenter engaged in joint attention (i.e. shared interest and alternated gaze between the infant and the toy), infants displayed more hold-outs across trials and higher levels of satisfaction. When the experimenter engaged with the toy or the infant alone, infants gestured less across trials and repeated their gestures more within trials. Furthermore, in the conditions where the experimenter did not display joint attention, there was an increased number of behaviours, including vocalisations, waving and throwing the toy, which appeared to be attempts to establish joint attention. Overall, these preliminary results suggest that gestures that emerge prior to pointing are socially motivated and used to share attention and interest with others. Here, we will compare these findings with findings from younger infants to cast light on the developmental trajectory of these behaviours.

17:00
Differential impacts of action language on action prediction in infants and toddlers
SPEAKER: unknown

ABSTRACT. Understanding other people’s actions is a crucial skill for interaction in the social world. To infer action goals, humans can make use of cues from various domains, such as first-hand action experience, observational experience with actions, or auditory cues like action-associated sounds or language. Research has demonstrated that these domains are interrelated, especially when perception and action are assessed via suppression of the mu-rhythm. This project aims to investigate the neural interrelation between action and language, since there is very little research on infants in this particular field. Two recent studies indicated an association between early action prediction and later language abilities on a neural level (Kaduk et al., in press), as well as an influence of action verbs on action prediction, measured through predictive gaze-shifts, that is related to infants’ language skills (Gampe & Daum, 2014). Since predictive gaze-shifts are causally linked to the motor system, one would expect to find an impact of language on sensorimotor activity during the processing of observed actions, reflected in the power of the mu-rhythm. This assumption will be tested in the proposed project. We will conduct an EEG study applying a combined time-frequency and event-related potential approach. Children between 12 and 24 months of age will be presented with spoken action verbs, after which a video depicting the labelled action will be shown. We will choose the action verbs with respect to familiarity and congruency with the action. Results are expected to show an impact of familiarity and congruency on the mu-rhythm and N400 response to subsequently observed actions. This impact is assumed to depend on the infants’ age and thus language status. The proposed study will provide us with a deeper understanding of the development of the interaction between language and action and its neurophysiological underpinnings.

17:00
Children’s verb learning from touchscreen apps
SPEAKER: unknown

ABSTRACT. Children live in a digital age, with around 80% of 2-4-year-olds having used a mobile device (Radesky, Schumacher & Zuckerman, 2015). However, while apps have considerable potential as a learning platform, such educational claims have not been tested experimentally (Hirsh-Pasek et al., 2015). Educational apps claim to teach children numbers, words and many other skills. In the present experiment, we investigate whether children can learn verbs from touchscreen apps. Typically, children produce verbs much later than nouns (Childers & Tomasello, 2002; Imai et al., 2008), potentially due to the transience of motion information (Monaghan et al., 2015). 3- to 4-year-old children are randomly assigned to an app or live condition. In both conditions, three novel actions are demonstrated. Each action is demonstrated in turn with four different objects and labelled with a novel label four times. In the app, four different pictures of the actions and one video are used for the action demonstration. Children’s verb learning is tested using a three-choice pointing task, and children are asked to reproduce each action with a novel object in an imitation test. Children’s performance on the pointing task will be compared to chance (.33), and children’s performance on the imitation test will be compared to the spontaneous performance of a baseline group of children who did not see the action demonstrations either live or on the app. Approximately 12 children will be tested in each condition; at present we have a sample size of 11 participants, so preliminary analyses are not possible at this stage. However, the results of this research will contribute to our understanding of the potential for touchscreen apps to aid children’s vocabulary acquisition, and of whether children can learn from apps more generally.

17:00
Children’s processing and comprehension of complex sentences containing temporal connectives: The influence of memory on the time course of accurate responses.
SPEAKER: unknown

ABSTRACT. In a touch screen comprehension paradigm, we recorded 3- to 7-year-olds’ (N = 108) accuracy and response times to two-clause sentences containing 'before' and 'after'.

We manipulated whether the presentation order matched the chronological order of events: ‘He finished his homework, before he played in the garden’ (chronological order) vs ‘Before he played in the garden, he finished his homework’ (reverse order). The sentences were narrated to the children whilst they viewed animations of the actions in each clause. After each sentence, they were asked to select the event that happened last to assess their understanding of the temporal order.

Children were influenced by order: performance was most accurate when the presentation order of the two clauses matched the chronological order of events. Differences in response times for correct responses varied by sentence type: accurate responses were made more speedily for sentences that are associated with lower memory demands. An independent measure of memory predicted this pattern of performance.

These findings will be discussed in relation to knowledge of connective meaning and the processing requirements of sentences containing temporal connectives.

17:00
The role of experience in the early development of prosocial responses.

ABSTRACT. The existing data on empathy development during infancy and toddlerhood do not explain the motivational value of empathy for prosocial behavior at this age [Hoffman, 2000; Eisenberg & Strayer, 1987; Knafo et al., 2008; Dunfield, 2014]. The focus of this research is to understand the nature of the triggers within the linkage between empathy and prosocial development in infancy and toddlerhood. I propose that the structure of the sympathetic reaction accommodates factors related to emotional experience and social competence, which together serve as triggering elements of prosocial responses. In order to test these hypotheses, I aim to investigate the role of behavioral skills to act for the benefit of another, and of one's own experience of feeling distress in certain situations, as factors underlying sympathetic behavior in infants and toddlers. I suggest cross-sectional investigations into the role of emotional and behavioral experience in empathy and prosocial development. This study will also investigate the developmental changes in prosocial responses from infancy to toddlerhood. The design has recently been worked out, and the study has been piloted. I am now in the process of collecting data in the first section, with 30- to 36-month-old subjects. This is a two-factor between-subjects experiment with four groups of subjects, each representing a different combination of the factors. The design includes an interaction involving the independent factors, and the dependent variable is the child's prosocial acts of comforting. I have no final statistical evidence yet, since there are only 24 participants. Despite this, initial data show a strong trend that endorses my model: most children display comforting when both factors are present. I plan to have full empirical confirmation by August in order to present it in full and to have a discussion.

17:00
Agents, patients, and actions: What is encoded in 12-month-olds’ perceptions of dynamic events?
SPEAKER: unknown

ABSTRACT. Perception of causal events requires infants to identify individual objects and participants, to encode relations between them, and to use these relations to form inferences about causality. By the end of their first year infants are able to process simple events using these steps, and have also been found to process more sophisticated event components, such as goals and intentions, path and manner of movement, and the animacy of the event participants (Gergely, Nadasdy, Csibra, & Biro, 1995; Pulverman, Song, Hirsh-Pasek, Pruden, & Golinkoff, 2013; Rakison, 2005). For many of these more advanced event components, however, infants’ processing seems initially restricted to specific elements of particular events, so that, for example, a particular act is only associated with a specific agent (Cohen & Oakes, 1993).

For infants to establish mappings between non-linguistic event components and linguistic categories they must learn to generalise from specific features of particular events to meaningful, abstract, conceptual categories independent of those events. Two such categories, semantic roles (e.g. agent and patient) and types of action, are to be investigated in the current series of eye tracking experiments. Twelve-month-old infants will be habituated to animations of two-participant causal events. At test, novel animations presented to infants will be systematically varied to assess infants’ sensitivity to 1) changes of action in the events, 2) changes to the semantic roles of event participants, and 3) changes in the animacy of the participants. We will present results of looking behaviour at test which will reveal if and how previously learned event information is generalised to novel events, and thus provide insight as to whether abstract categories of agents, patients, and/or action type are part of infants’ event processing at the end of the first year. Implications for event perception and language learning will be discussed.

17:00
The message is in the medium: Electronic vs. paper picture-books influence joint attention in mother-infant interaction
SPEAKER: unknown

ABSTRACT. Background: Some studies indicate that joint attention episodes occur more often during shared book reading than in other play situations (Sato & Uchiyama, 2012; Sugai et al., 2010; Yont et al., 2003). Recently tablet devices, like iPads, have come into widespread use, and many children use electronic picture-books from early childhood. However, it is simply assumed that shared e-book reading promotes the same kinds of interaction and joint attention as printed picture-book reading. The goal of this study is to clarify these effects by examining the frequency of joint attention episodes in printed and electronic picture-book reading contexts. Methods: Participants were 10 pairs of 12-month-old infants and their mothers. We conducted printed picture-book reading, electronic picture-book reading with narration sounds, and electronic picture-book reading with no sound (3 minutes each), using the same story in each medium, a book entitled ‘Mari’. The videotaped mother-infant interactions were coded according to a coding system concerning the infants’ responding to joint attention requests from the mother (RJA) and initiating joint attention (IJA), developed by Osorio et al. (2011). Results: A series of ANOVAs suggest that [1] the duration of the mother’s looking at the child was lower in the electronic picture-book with narration/sound than in the printed book context; [2] the proportion of maternal looking at the child and alternate gazing preceded RJA more than did parallel attention; [3] a clear trend in our analyses to date shows that infant-mother RJA (reciprocal gazing) occurred less with the electronic book with sound than when mother and infant were sharing a paper book. Conclusions: These results show that electronic picture-books with narration/sound may reduce the frequency of the child’s RJA. We discuss the implications of the increase in technology for infants and the possible effects of e-books with sound on joint attention development in infancy.

17:00
Maternal prenatal DHA intake and infant performance on a free-play attention task at 22 months
SPEAKER: unknown

ABSTRACT. We present the results of a study examining maternal prenatal docosahexaenoic acid (DHA) intake, during the second and third trimesters, and an infant free-play attention task at 22 months. DHA is a long-chain polyunsaturated fatty acid of the Omega-3 family, and higher prenatal maternal intake has previously been linked to improved infant cognitive development. No study as yet has determined whether this is specifically related to intake in a particular trimester. Fifty-seven infants completed the free-play attention task (a portion of a larger sample of 125 women and children who participated in this ongoing longitudinal study). Maternal DHA intake estimates were gained using a comprehensive Food Frequency Questionnaire (FFQ) and were divided into high and low groups for analysis. One-way ANOVAs revealed a number of significant differences in task performance between the high and low groups in relation to third-trimester DHA intake. There were no significant findings for second-trimester intake. The results initially appear to indicate better performance for the low-DHA group, with this group showing longer task duration (p = 0.01), a greater number of attention events (p = 0.01), and greater attention duration (p = 0.04) throughout the task. As we would have expected better performance in the high-DHA group, we discuss the appropriateness of the outcome measures in this task and the interpretation of which attentional patterns are taken to indicate ‘better performance’. It may be that children with lower interest are, in fact, better information processors; or perhaps the task quickly becomes boring, and children with a heightened sense of exploring the environment score lower than more placid children. While an open question at this stage, we nevertheless observe a significant link between DHA intake in the third trimester of pregnancy and cognitive outcome late in the second year.

17:00
Neurophysiological measures of object representations following occlusion and communication
SPEAKER: unknown

ABSTRACT. Previous research has established the role of gamma synchronization as a measure of object representation after occlusion, with specificity to occluded objects rather than faces and to an agent’s beliefs about occluded objects. However, less is known about infants’ direct online processing of object representations following referential communication. The aims of the present study are twofold: to replicate the established signature of object representation in our sample of 12-month-old infants, and to test whether similar results would also be evident when infants process a communicative pointing gesture towards an occluded object. If this were the case, it would strongly support the notion that babies expect an object when they follow a point.

A live-action video sequence depicting object occlusion and disappearance is presented to 12-month-old infants. After the event, the actor points at the occluder. At present, we have completed EEG recordings from about 50 infants and, after attrition, expect at least half of these data sets to comprise our final sample. Due to the procedural difficulties with the analysis of infant EEG data, it has been challenging at this point to state the precise location of gamma activation for the experimental conditions of interest. However, our expectation is that the referential communication towards the occluder also elicits gamma synchronization in the right temporal areas, as reported in the earlier studies. We also expect these event-related activations to be significantly greater than baseline.

Data reduction and analyses of the pointing phase of the study are ongoing. If we are able to replicate the Kaufman (2005) effect in this novel paradigm, it would provide converging evidence for neurophysiological markers of object representations. In addition, if the current data yield comparable results for the referential communication, it would add a new dimension to infant object representation.

17:00
Perception and expectations in young children at-risk for Autism Spectrum Disorder (ASD)
SPEAKER: unknown

ABSTRACT. This PhD project is part of a longitudinal study to identify early markers of ASD, which so far cannot be reliably diagnosed before 3 years of age. Participants are infants who have an older sibling with ASD. Around 20% of these infants will develop ASD themselves, compared to around 1% of the general population.

The Predictive Processing account of ASD posits that people with ASD rely less on expectations and previous experience than people without ASD when it comes to interpreting new sensory information. The current project will test how children adjust these expectations.

Study 1: EEG and eye-tracking task Two-year-olds view faces looking in different directions, and then faces looking in one extreme direction. This should help them build an expectation of where people tend to look. They then see people looking in different directions again. At-risk infants are expected to habituate to the faces more slowly (more gradual attenuation of N290 and P400) and to differentiate less between the direction they are used to seeing and the direction they have had less exposure to (smaller ERP-amplitude differences between conditions) than typical infants.

Study 2: Touch screen task Three-year-olds will play a categorisation game. Stimuli belong to different categories which are manipulated regarding their similarity to each other and their standard deviations, as well as their frequency of presentation. These are parameters for optimal categorisation as defined in Signal Detection Theory. At-risk children's responses are expected to be less affected by the frequency of presentation of the different categories than typical children's. That is, they should be less influenced by previous experience.

If fundamental expectation-building and -adjusting are impaired, as Predictive Processing claims, we should see this difference at an early age, which would lead to better understanding of the causes of ASD, and possibly earlier diagnosis.

17:00
Two Sides to Every Story: Children Learn Words Better From Single Page Displays
SPEAKER: unknown

ABSTRACT. Picture books provide richer sources of vocabulary than conversations alone (Montag et al., 2015), thus facilitating young children’s word learning (Senechal, 1997). Although young children learn better from realistic images than line drawings (Ganea et al., 2008), research on word learning from storybooks has neglected to examine the influence of the number of illustrations. In the current study we tested whether displaying two storybook pages simultaneously, rather than a single storybook page, affects word learning. We read the same stories to three groups of 3-year-old children (N=36). In the one-illustration condition children saw a single illustration (i.e., the other page was blank) and in the two-illustrations condition children saw two pages simultaneously (i.e., one left, one right). Children in the control group were read stories printed in A3 format, so they saw a single illustration but had the same surface area to scan as in the two-illustrations condition. All children heard three different stories, providing a total of 12 exposures to two different novel objects and their names. Word learning was assessed using a 4-alternative forced-choice test (cf. PPVT). All children learned words from the stories. The ANOVA yielded a main effect of condition, F(2, 33) = 4.10, p = .03, η^2 = 0.20 (see Figure 1): children in the one-illustration and control groups learned words to the same degree (ns) and better than children in the two-illustrations condition (p < .01). We argue that pre-reading children likely struggle to determine when the text has moved on to the next illustration. Previous research demonstrates that learning words from three different stories is difficult for young children (e.g., Williams & Horst, 2014); however, the current data indicate that simply reducing the number of illustrations enables learning in this already challenging situation. An additional intervention is suggested.

17:00
Influence of Foreign Language Experience on Early Language Development
SPEAKER: unknown

ABSTRACT. A recent study shows that experience in bilingual environments influences infants’ expectations about the nature of word meanings: monolingual infants expect word meanings to be shared across different speakers who use the same language, but bilingual infants do not hold such an expectation (Henderson & Scott, 2015; Scott & Henderson, 2013). The current study examined whether regular exposure to a foreign language has the same effect as bilingual experience on infants’ understanding of the conventional properties of language. Korean-speaking infants (mean age = 13.26 months) were tested in the violation-of-expectation paradigm. Some of the infants had very little or no exposure to English (monolingual group), whereas the others were regularly exposed to English, mainly via audio or video (exposed group). To start, the infants watched two experimenters alternately singing nursery rhymes in Korean, observing that the experimenters spoke the same language. The infants were then familiarized with a scene in which the first experimenter provided a novel label for one of two novel objects. During test trials, the second experimenter used the same label to refer to the same object (same-object event) or a different object (different-object event). Monolingual Korean infants looked significantly longer at the different-object event than at the same-object event, suggesting that they expected speakers of the same language to share object labels. On the other hand, the infants with regular exposure to English looked about equally long at the two test events, showing that the exposed group did not assume object labels to be shared across different individuals who used the same language. Note that the results from the exposed group resemble the previous findings from bilingual infants (Henderson & Scott, 2015). The results suggest that experience with foreign languages can influence early language development, and that such experience does not necessarily have to be immersive or bilingual.

17:00
Females come first in the development of the other-race effect in infants.

ABSTRACT. Recent studies have shown that the other-race effect (ORE) in infancy emerges first for female own-race faces (e.g., Hayden et al., 2007; Sangrigoli & de Schonen, 2004) and only later for both female and male own-race faces (Tham et al., 2015). This may be related to infants’ predominant experience with the gender and race of their primary caregiver (Quinn et al., 2002; 2008).

To understand the effects of exposure and how faces may be represented in infants’ memory, it is important to include studies of individuals with multi-racial experience. The current research extends Tham et al.’s (2015) study by testing 4-month-old (4m) and 9-month-old (9m) infants from a multi-racial population: the Malaysian population. The key group selected is the Chinese ethnic group; specifically, we targeted individuals who were born and raised in the capital of Malaysia, Kuala Lumpur (population breakdown: 45.2% Malays, 42.3% Chinese, and 11% Indians). Using a visual paired comparison (VPC) paradigm, we assessed Malaysian-Chinese infants’ (n = 50) ability to discriminate faces from three racial groups (Chinese, White, and Malay) of both face genders (female and male). According to the primary caregiver hypothesis, we expected 4m infants to show an ORE, discriminating only Chinese female faces, and 9m infants to discriminate Chinese and Malay faces regardless of face gender.

As predicted, 4m infants demonstrated recognition only for female own-race faces, t(8) = 2.93, p = .019. In contrast, 9m infants demonstrated a recognition advantage for female Chinese and female Malay faces, t(8) ≥ 2.78, p ≤ .024. The pattern of performance in this study suggests that the perceptual system is modified by increased familiarity, and that it is vital to take the role of face gender into account when investigating face perception within the first year of life.

17:00
Learning different kinds of tool use actions in early childhood
SPEAKER: unknown

ABSTRACT. Already at the end of their first year of life, children start to use tools. Tools are objects that transform a user’s operating movement into a desired outcome (effect) at the tool’s effective end. The development of tool use seems to arise gradually from existing manual behaviour, for instance hammering from hand banging. Consequently, tool use in young children has mainly been investigated with tools like a hammer or a crook, for which the transformation is rather simple: the effect at the tool’s tip obviously mirrors the operating movement at the tool’s handle. In everyday tool use, however, humans are confronted with more varied and more or less complex kinds of transformations. These can, for instance, be compatible (effect same as operating movement) or incompatible (effect inverse to operating movement), transparent, causally opaque, or even virtual. Research has revealed that, at least in adults, the kind of transformation substantially influences the speed and accuracy of movement selection and, furthermore, activates different strategies. So far, little is known about how young children deal with different kinds of transformations. The present project aims to fill this gap by investigating how young children learn to use tools that entail compatible or incompatible, transparent, opaque, or virtual transformations between operating movements and resulting effects. We developed a lever paradigm that allows us to implement the different transformations while keeping all other aspects (e.g., complexity of the operating movement) equal. Results illustrate that 28-month-olds can deal with all these kinds of transformations, but that there are remarkable differences in the process of learning and in the ease of transfer. Results are discussed with respect to the ascending complexity of transformations, from simple manual behaviour to technically sophisticated tool use.

17:00
The effect of input frequency on children’s production of morphologically complex verb forms in Japanese
SPEAKER: unknown

ABSTRACT. Differences in children’s proficiency with different inflectional forms are often explained in terms of differences in their relative input frequency (see Ellis, 2002, and Ambridge, Rowland, Theakston & Kidd, 2015, for reviews). However, input frequency is often confounded with other factors, such as morphological complexity (high-frequency forms tend also to be simple forms). The aim of this study was to disentangle the effects of input frequency and morphological complexity by focusing on Japanese, a language with both simple and complex past-tense verb forms. Thirty children aged 3;5-5;3 participated in a production experiment designed to elicit simple (e.g. tabe-ta ‘ate’) and complex completive past-tense forms (e.g. tabe-chat-ta ‘ended up eating’) using a combined priming/sentence-completion paradigm. Half of the verbs were more frequent in simple than in completive form in a representative corpus of child-directed speech (MiiPro corpus, Nishisawa & Miyata, 2009; 2010), and half displayed the opposite pattern. A mixed-effects model revealed a significant positive relationship between the relative frequency of completive versus simple forms in the input and children’s production of completive versus simple past-tense forms (β = 2.92, χ² = 5.21, p < .03). That is, children produced more complex than simple forms for verbs that were more frequent in complex (completive) than in simple form in the input. This finding constitutes evidence against accounts under which complex forms are always generated from simple or root forms by the application of a morphological rule or process, and in favour of accounts under which children learn and reproduce forms directly from the input, in a way that is highly sensitive to input frequency.