ECEM2022: 21ST EUROPEAN CONFERENCE ON EYE MOVEMENTS
PROGRAM FOR WEDNESDAY, AUGUST 24TH

08:30-09:30 Session 15: Keynote: Miriam Spering

Eye Movements as a Window into Human Decision-Making

Miriam Spering

University of British Columbia, Canada

Seeing and perceiving the visual world is an active and multimodal process during which the eyes continuously scan the visual environment to sample information. My research group uses human eye movements as sensitive indicators of performance in real-world interceptive tasks. Tasks such as catching prey or hitting a ball require prediction of an object’s trajectory from a brief glance at its motion, and an ultrafast decision about whether, when and where to intercept. I will present results from two research programs that use eye movements as a readout of these types of decision processes. The first series of studies investigates go/no-go decision making in healthy human adults and baseball athletes and reveals that eye movements are sensitive indicators of decision accuracy and timing. The second set of studies probes decision making in patients with motor deficits due to Parkinson’s disease and shows differential impairments in visual, motor and cognitive function in these patients. I will conclude that eye movements are both an excellent model system for prediction and decision making, and an important contributor to successful motor performance.

09:30-10:00 Coffee Break

Takes place in LT5

10:00-12:00 Session 16A: Symposium: Eye movements in memory processes

The role of eye movements in memory processes: between working memory and long-term memory

Organiser: Shlomit Yuval-Greenberg (Tel Aviv University)

From the day we are born until our death, we are constantly engaged in the exploration of our ever-changing environment. In this continuous process of visual exploration, our eye movements play a critical role as they repeatedly shift the center of gaze towards locations of interest. However, eye movements are not only driven by the physical presence of stimulation, but also by internal representations of stimuli that are not physically present. Such internally-driven eye movements are thought to play a key role in memory processes at both shorter and longer time scales. This symposium will bring together scholars studying different types of eye movements, including saccades, microsaccades and smooth pursuit, in both working memory and episodic long-term memory tasks. The first three talks will focus on working memory and discuss how different types of eye movements can be used as windows into working memory processes. The next three talks will focus on episodic memory and examine the dynamics of gaze during encoding and retrieval and their neural correlates. The goal of this symposium is to prompt a discussion comparing the various types of eye movements and their roles in short- and long-term memory.

 

Location: LT1
10:00
Utilising directional microsaccade biases as a 'tool' to track selective attention inside working memory in time and space

ABSTRACT. Selective attention can not only be directed to external sensations, but also to internal representations within working memory. We have recently uncovered how such internally directed selective attention is associated with directional biases in microsaccades – extending the role of the oculomotor system to internal orienting of attention. In my talk, I will show how we have started to utilise directional biases in microsaccades as a novel tool to track internal attention along 3 dimensions: to track (1) whether internal attention is deployed, (2) when it is deployed, and (3) where it is deployed. Doing so, I will illustrate how the study of microsaccades can be used to uncover new insights into the mechanisms of internally directed selective attention in dynamic and immersive settings.

10:20
What the variations in saccade metrics and visual memory across the visual field tell us about saccadic selection in visual working memory

ABSTRACT. Saccades select content that is currently maintained in visual working memory—resulting in better memory at locations that are congruent with the saccade target than at incongruent locations. Using a data set of nine experiments (including eight published experiments) with more than 100k trials, we further substantiate the claim of a fundamental saccadic selection mechanism in memory by assessing whether saccadic selection is effective in all participants and at all tested locations.

In the experiments, we briefly presented arrays of oriented stimuli at eight possible locations that were arranged on an imaginary circle around central fixation. Several hundred milliseconds later, a movement cue prompted saccades to one of these locations. Next, participants were prompted to report the orientation (clockwise vs. counterclockwise) of the memory item at a randomly highlighted location.

Using Bayesian hierarchical models, we observed saccadic selection in memory at all tested locations. Individual differences in saccadic selection were compatible with a population-wide model of effective saccadic selection. Moreover, saccade metrics and visual working memory varied strongly across the visual field. Trial-by-trial variations in saccade metrics were associated with memory performance, providing additional evidence for a bi-directional link between the oculomotor system and visual working memory.

10:40
Eye movements as a window into time-dependent memory processes

ABSTRACT. An important capacity of human memory is to help us perceive, recognize, and keep track of visual objects and events. This ability enables us to perceive a stable visual world despite dynamic changes caused by object motion or by body movement. To achieve perceptual stability, our memory system must create an episodic representation of successive states of objects and events in our environment. How visual information is integrated across space and time to form such representations is not well understood. Here I will present a series of experiments that utilizes smooth pursuit eye movements as a sensitive probe into time-dependent memory processes. Observers tracked a cluster of moving, coloured objects that temporarily disappeared behind an occluder. Upon reappearance, object color was either the same, new, or switched between two objects. Observers’ reaction time and performance were best when the display was the same, and worst when the color switched. Smooth pursuit velocity at object reappearance immediately reflected perceptual judgments, indicating that eye movements can indicate processes of memory formation at high temporal resolution.

11:00
Gaze behavior supports episodic memory: insights from electrophysiological data

ABSTRACT. Episodic memory allows us to re-experience past events in exquisite detail in the “mind’s eye”. Previous research suggests that eye movements may play a functional role during encoding by binding together event details into a coherent memory representation and during retrieval by facilitating the reconstruction of the past event. However, less is known about the interplay between gaze and memory mechanisms as encoding and retrieval unfold in time. The talk will describe two studies that combined eye-tracking and electrophysiological recordings of brain activity (EEG) to capture the temporal dynamics of gaze-memory interactions. The first part concerns encoding and the neural mechanisms that subserve episodic memory formation across saccades to event elements during free viewing. The second part examines retrieval and the neural correlates of the looking-at-nothing effect, i.e., superior performance when gaze location overlaps between encoding and retrieval. The results support claims of a functional role of eye movements in memory and reveal the time course of memory-related oscillatory neural activity: i) synchronization in the theta band during encoding predicts subsequent relational memory for event elements, and ii) desynchronization in the alpha/beta band during retrieval covaries with facilitated episodic remembering at congruent gaze locations.

11:20
The intersection of memory and active vision in aging

ABSTRACT. The oculomotor and hippocampal memory systems interact in a reciprocal manner. Eye movements accumulate information from the visual world, contributing to the formation or updating of coherent memory representations. Conversely, memory influences ongoing viewing by increasing the efficiency of active vision. Eye movements contribute to memory retrieval by reconstructing the rich, vivid, spatiotemporal details from memory (gaze reinstatement). However, the interactions between the oculomotor and memory systems are altered in aging. In older adults, neural activity in the hippocampus is not modulated by gaze fixations to the same extent as observed in younger adults, despite the fact that older adults typically enact more gaze fixations than younger adults. Older adults also exhibit gaze patterns that are less distinct across different stimuli and across repeated viewings of the same stimulus. Together, these findings suggest that the memory representations formed by older adults may be less complete and/or less distinct than those of younger adults. Consequently, even when older adults engage in gaze reinstatement at retrieval, such reinstatement does not necessarily support accurate memory performance and, instead, may explain memory errors that are observed with aging. This work highlights how age-related changes in the hippocampal memory system may have a broad impact on active vision.

11:40
What makes eye movements a memory retrieval cue?

ABSTRACT. We normally move our eyes when we wish to focus our gaze on objects or locations of interest. However, there are also times when we do so even in the absence of visual stimulation. Sometimes we shift our gaze towards places where we remember having seen something. This behavior has previously been suggested to reflect the role of eye movements as retrieval cues, but which factors contribute to this role remains unknown. In a series of studies, we examined this question by contrasting two hypotheses. The motor hypothesis states that a crucial factor contributing to the role of eye movements as retrieval cues is the match in the pattern of muscle contraction between encoding and retrieval. The visual hypothesis states that the crucial factor contributing to memory is the encoding-retrieval match of the visual image that falls on the retina following an eye movement. Our findings show that both the visual and the motor factors of eye movements may contribute to memory performance as retrieval cues, depending on the task. Furthermore, we find that people vary in their ability to gain from eye-movement-related cues, and that the gain from visual cues is tightly linked to the gain from motor cues.

10:00-12:00 Session 16B: Chinese reading
Location: LT2
10:00
Word Length and Frequency in Chinese Reading: Evidence from Eye Movements

ABSTRACT. Previous studies on alphabetic languages showed mixed results regarding how word length and frequency jointly affect reading. A probable reason is that the relationship between word length and frequency varies across the (wide) range of word lengths in alphabetic languages. Unlike alphabetic languages, almost all Chinese words are one or two characters long, resulting in much less such variability. How, then, does word length interact with frequency during Chinese reading? We orthogonally manipulated word frequency (high or low) and word length (a one-character word, a two-character word with equivalent total stroke number, or a two-character word with its first character stroke number equivalent to that of the one-character word). Uniquely, this represents a manipulation of length without visual complexity confounds. The results showed reliable effects of word length on skipping, landing positions, gaze and total fixations and robust word frequency effects on reading times. There were no reliable interactions on any measures. These effects appeared regardless of whether stroke number for the overall word, or its first character was matched. The results suggest character level processing in word identification and independent word length and frequency influences on eye guidance during Chinese reading.

10:20
The role of radicals during parafoveal processing of Chinese characters

ABSTRACT. Although character-level phonological and semantic preview benefits have been observed during Chinese reading, less clear is the role of phonological and semantic coding at the sub-lexical level. We conducted two eye movement experiments and manipulated the parafoveal preview of a two-radical Chinese character using the boundary paradigm to examine whether parafoveal processing of phonetic (P) and semantic (S) radicals is based on their function or position. In Experiment 1, the character had either a left-to-right (SP) or a right-to-left (PS) structure and was presented in the parafovea with one, both, or none of the radicals being masked by a meaningless radical. In Experiment 2, the character had a SP structure, and both identity and position of the radicals were manipulated, such that an identical or meaningless mask was presented in the parafovea, and one, both, or none of the radicals were presented in the correct position. The data from both experiments suggest that the phonetic radical is especially important during parafoveal processing, and its disruption is more costly to processing than disruption of the semantic radical. We will discuss our findings in relation to psychological models of Chinese word recognition and eye movement control during reading.

10:40
Reading Classical Chinese fables with an implicit moral point: Eye-movement evidence of lexical difficulty, paragraph focus and order effects

ABSTRACT. Fables consist of a story and a moral point, and aim to convey moral messages via the story. Classical Chinese fables are very difficult, not only because of their terse style but also because the ancient moral lesson may be less explicit for modern readers. Undergraduates were asked to read two Classical Chinese fables (lexically easy vs. difficult), each consisting of two paragraphs (story and moral point). Two issues were examined: (1) Which paragraph did readers focus on more when reading fables with implicit moral points? (2) Did paragraph order or lexical difficulty affect comprehension outcomes and eye-movement measures? The results showed that readers comprehended better when reading lexically easy fables or fables presented in story-first order. For the eye-movement measures, the main effect showed that the implicit moral point received more attention than the story. However, this effect was moderated by order and lexical difficulty. When reading moral-point-first fables, readers spent more first-pass reading time and rereading time on the moral point than on the story, especially for the lexically difficult fable. When reading story-first fables with easy or difficult lexicon, total reading time did not differ significantly between the moral point and the story, implying that the story helped readers to successfully achieve decoding and integration.

11:00
Foveal and Parafoveal Processing of Chinese Four-character Idioms and Phrases in Reading

ABSTRACT. Research has demonstrated that Chinese three-character idioms are represented and processed foveally and parafoveally as Multi-Constituent Units (MCUs, see Zang et al., 2021). Chinese four-character idioms and frequently used four-character phrases extend further into the parafoveal region during natural reading. Are they also processed as MCUs? Using the boundary paradigm (Rayner, 1975), we manipulated the preview of the first, and the second, two character constituents of four-character idioms (Experiment 1) and frequently used phrases (Experiment 2). Previews were identities or pseudocharacters. Both experiments produced greater preview benefit for the second constituent when the first constituent was an identity preview compared with when it was a pseudocharacter preview suggesting that the presence of the first constituent licensed parafoveal processing of the second. In a third experiment, we compared preview effects in frequently used four-character phrases (judged as single four-character words), ambiguous four-character strings (judged equally often as single four-character words and as two two-character words), and strings that were unambiguously two two-character words. Preview effects for the second constituent were more pronounced for the former than the latter two strings. Together these results indicate that four-character idioms and frequently used phrases are processed foveally and parafoveally as single, unified lexical representations.

11:20
Flexible parafoveal encoding of character order supports word predictability effects in Chinese for both young and older adult readers

ABSTRACT. Eye-movement studies in Chinese show both that (a) character order is encoded flexibly during parafoveal processing, and (b) target word predictability can influence this early stage of a word’s processing. However, it is unclear whether these effects change in older age. Accordingly, we investigated this issue in an eye movement experiment using the boundary paradigm (Rayner, 1975) with 36 young (18-30 years) and 36 older (65-75 years) adults. These participants read sentences containing two-character target words with high or low contextual predictability. Prior to the reader's gaze crossing an invisible boundary, each target word was shown normally (i.e., a valid preview) or with its two characters either transposed or replaced by unrelated characters to create invalid nonword previews, which reverted to the target word as soon as the reader's gaze crossed the invisible boundary. The results replicated previous findings of a transposed-character effect (larger preview benefits for transposed-character than unrelated previews), and a word predictability effect (shorter reading times for words with high than low predictability) following valid and transposed-character previews, but not unrelated previews. We take these findings to show that both flexible character order processing and an early influence of contextual predictability are preserved in older Chinese readers.

11:40
Word Length Effect in Developing Chinese Readers during Sentence Reading
PRESENTER: Nina Liu

ABSTRACT. Word length has a fundamental role in determining where and when the eyes move during reading in both Chinese and alphabetic languages. However, surprisingly little is known about how the influence of word length develops in Chinese reading, since it is more difficult for child readers to segment words from unspaced text and so obtain word length information. Accordingly, to gain insight into its use during reading development, we examined the effect of the length of a specific target word (one-, two-, or three-character word) in sentences on the eye movements of developing Chinese readers (children in the 3rd and 5th grade of primary school, and adults). The findings show that, for both children and adults, longer words were fixated for longer, were less likely to be skipped, and attracted saccades that landed closer to the right side of the word than those for short words. More importantly, the effects of word length on fixation times decreased with age, whereas saccade targeting showed the reverse developmental pattern; moreover, developing readers with stronger word knowledge were more skillful at using word length to modulate both fixation times and saccade targeting. We discuss implications for models of eye movement control and developing reading ability in Chinese children.

10:00-12:00 Session 16C: Visuo-motor
Location: LT8
10:00
Neglect-like visual exploration by gaze-contingent manipulation of scenes

ABSTRACT. Selective spatial attention is a crucial cognitive process that guides us to behaviorally relevant objects in a complex visual world by means of exploratory eye movements. The spatial location of objects, their (bottom-up) saliency and (top-down) relevance are assumed to be encoded in one “attentional priority map” in the brain, using different egocentric (eye-, head- and trunk-centered) spatial reference frames. In patients with hemispatial neglect, this map is supposed to be imbalanced, leading to a spatially biased exploration of the visual environment. As a proof of concept, we altered the visual saliency (and thereby attentional priority) of objects in a naturalistic scene along a left-right spatial gradient and investigated whether this can induce a bias in the exploratory eye movements of healthy humans (N = 28; all right-handed; mean age: 23 years, range 19–48). We developed a computerized mask, using high-end gaze-contingent display (GCD) technology, that immediately and continuously reduced the saliency of objects on the left, where “left” was defined with respect to the head (body-centered) and the current position on the retina (eye-centered). In both experimental conditions, task-free viewing and goal-driven visual search, this modification induced a mild but significant bias in visual exploration similar to hemispatial neglect. Accordingly, global eye movement parameters changed (reduced number and increased duration of fixations) and the spatial distribution of fixations indicated an attentional bias towards the right (rightward shift of first orienting, fixations favoring the scene’s outmost right over left). Our results support the concept of an attentional priority map in the brain as an interface between perception and behavior, and as one pathophysiological ground of hemispatial neglect. Consequently, gaze-contingent manipulation of scenes might be used for diagnostic and therapeutic purposes.

10:20
Familiar objects benefit more from transsaccadic feature predictions

ABSTRACT. Transsaccadic predictions of how object appearance changes at different eccentricities can be made on the basis of object-specific peripheral-foveal associations. However, it is unclear if transsaccadic predictions are limited to familiar objects, for which these associations can be acquired through experience. In two experiments, we tested whether there is an advantage for familiar compared to novel objects in peripheral-foveal matching and transsaccadic change detection tasks. In both experiments, observers were unknowingly familiarized with a subset of objects. Subsequently, observers in the first experiment completed a peripheral-foveal matching task, in which the foveal object matching the peripheral probe had to be selected. Observers in the second experiment performed a transsaccadic change detection task, in which they had to detect whether a peripheral target was or was not exchanged immediately after the saccade or after a 300 ms blank period. In both experiments, familiar objects had an advantage over novel objects. In the first experiment, we found that the familiarity effect depended on foveal-peripheral predictions. In the second experiment, we showed that peripheral-foveal associations explained the advantage of familiar objects. A postsaccadic blank improved change detection overall, but more so for familiar objects. In conclusion, we found that transsaccadic predictions are facilitated for familiar objects.

10:40
This vortex cannot be pursued

ABSTRACT. Non-rigid motion of water, clouds, smoke, fire etc. is omnipresent in our world and its visual perception is intermingled with eye movements. Here, we investigated the pursuit of non-rigid motion by presenting a vortex motion pattern in a random dot distribution. The vortex moved across the screen, independent of the first-order motion within it. We asked 15 participants to pursue this vortex. We found pursuit gain was almost zero and the frequent catch-up saccades were too short, landing where the vortex had been during saccade planning. Furthermore, participants reported that the vortex appeared to jump during pursuit. In contrast, the vortex was perceived as moving smoothly when, at each saccade landing, it jumped backwards to the position where it had been during saccade planning. In a control fixation task, participants perceived the motion as smooth, without jumps. We conclude that the pursuit system cannot incorporate the movement of motion patterns, despite our earlier reported findings that participants can accurately perceive such motion. Additionally, a subsequent direction discrimination task with varying stimulus duration showed that non-rigid motion processing takes longer than rigid motion processing. We propose a separate non-rigid motion processing pathway that does not feed into the pursuit system.

11:00
Nasal-temporal differences in the Remote Distractor Effect: how the presence of placeholders affects saccade latencies

ABSTRACT. The remote distractor effect (RDE) is a robust phenomenon whereby an increase in saccade latency is observed when a remote distractor appears simultaneously with a target. However, studies with hemianopes have reported conflicting findings: whilst Rafal et al. (1990) described an inflated RDE when distractors were presented in the temporal hemifield, Walker et al. (2000) reported no RDE when distractors were presented to either the blind nasal or the blind temporal hemifield. To understand these opposing results, in a first study we investigated the inhibitory effect of a distractor on saccade latency in normal human subjects. Participants were tested monocularly and we compared the effect of a nasal/temporal distractor in the presence or absence of a placeholder. Interestingly, when placeholders were used, we observed an RDE solely when the distractor was nasal. One explanation for this finding is that the sudden onset of the placeholders triggered a transient reflexive shift of attention, followed by a sustained Inhibition of Return. In a second study, we tested this assumption using the same paradigm but manipulating the timing of the placeholder onset. Overall, our results suggest that placeholders may sustain the engagement of additional inhibitory/attentional processes that bias selection towards stimuli in the nasal hemifield.

11:20
Neural correlates of handedness related modulation of the Vestibular-Ocular Reflex

ABSTRACT. Stable visual perception during head movements is facilitated by the vestibular-ocular reflex. This reflex performs this function by generating compensatory eye movements that are of the same velocity but in the opposite direction to the head acceleration. Our previous research has demonstrated that when we combine viewing of binocular rivalry (i.e. different visual stimuli are presented to each eye so the brain cannot fuse the images into a single coherent percept) with vestibular stimulation that preferentially recruits the left hemisphere, it results in an asymmetrical modulation of the vestibular-ocular reflex (Arshad et al., 2013 JoNS). The observed modulation is handedness dependent, with right-handers suppressing left-beating nystagmus and left handers suppressing right-beating nystagmus. Here we sought to investigate the neural correlates of this modulation using 32-channel electroencephalography whilst subjects received combined visuo-vestibular stimulation. To assess modulation of eye movements we used video-oculography to measure the suppression in peak slow phase eye velocity. We observed that the stimulation technique modulated alpha activity, specifically focused over parietal areas. Critically there was a correlation between alpha power modulation and the degree of nystagmus suppression. Our results demonstrate that top-down modulation of the vestibular-ocular reflex is associated with alpha rhythm activity and this may be mediated by cortico-thalamic interactions.

11:40
Sound influences visually-guided eye and hand movements during manual interception

ABSTRACT. Accurate processing of motion information is necessary in order to intercept moving objects. Most research emphasizes the role of visual input for motion prediction, neglecting the contribution of other sensory cues. For example, humans naturally associate loud batting sounds with harder hits and higher ball velocities. We hypothesized that observers integrate auditory cues with visual motion information to inform eye and interceptive hand movements. We presented the initial 100 or 300 milliseconds of a simulated baseball flight curve, accompanied by a batting sound of varying intensities. After tracking the ball with their eyes, participants intercepted it at a predicted location using their right index finger. Eye and hand positions at the moment of interception were affected by sound intensity: louder sounds resulted in overshooting of the target trajectory, implying that target speeds were overestimated. Interestingly, this finding was only observed for the short presentation time (100 ms), indicating that auditory cues are mostly used when visual information is sparse. Eye position data revealed that the influence of the batting sound emerged approximately 250 ms prior to interception. Our findings suggest that observers integrate visual and auditory cues for motion prediction, especially under visual uncertainty.

12:00-13:00 Lunch Break

Bennett Lower Ground Lobby

13:00-14:00 Session 17A: Special populations
Location: LT2
13:00
Activation of ASL signs during sentence reading for deaf readers: evidence from eye-tracking

ABSTRACT. Prior research has established that bilinguals activate both of their languages as they process written words, regardless of orthographic system or modality (i.e., spoken or signed). These effects have been documented in single-word reading paradigms; however, less is known about co-activation when reading in context. The present study used eye-tracking to determine whether deaf bilingual readers activate American Sign Language (ASL) translations as they read English sentences. Stimulus sentences contained a target and one of two possible primes: a related prime, which shares phonological parameters (location, handshape, or movement) with the target when translated into ASL, or an unrelated prime, which has no form overlap in English or ASL. Eye-tracking measures from 23 deaf native signers revealed that first fixation durations and total gaze durations on target words were shorter when the target was preceded by primes with shared parameters in their ASL translations. These data suggest that ASL phonology is activated when deaf signers read English, facilitating access to related words even without phonological relationships in the written language. These effects were not moderated by reading skill, suggesting that the degree of ASL activation does not decrease with increased proficiency in English, in contrast to previous studies.

13:20
Skilled, efficient reading in deaf child signers: A small-scale eye-tracking study

ABSTRACT. Deaf adult signers who are skilled readers read more efficiently than hearing non-signers, with shorter fixations, longer saccades, and fewer fixations and regressions1,2,3,4. The only study investigating reading behavior in native-signing deaf children used an unnatural moving window paradigm, but suggests that reading-age-matched deaf signers make fewer, longer fixations compared to hearing children1. We report reading behavior data from 10 deaf native signers and 14 hearing non-signers ages 10-13 who completed a passive reading paradigm on an EyeLink 1000. Unlike in the previous study, hearing readers read above age-expected levels, while deaf readers performed at age-expected levels. Regression analyses predicting the number and duration of fixations based on reading fluency5 and participant group demonstrated that, on average, deaf children made fewer (p = 0.019*) and shorter fixations (p = 0.029*) than hearing children. In addition, for deaf readers fixation durations decreased with increased reading fluency (p = 0.03*; Figure 1), but the fixation durations of hearing readers did not differ based on reading fluency. These results confirm that deaf children make fewer fixations, but also suggest that their eye movements reflect a transition to skilled, efficient reading during early middle school.

1. Bélanger, N. N., Lee, M., & Schotter, E. R. (2018). Young skilled deaf readers have an enhanced perceptual span in reading. Quarterly Journal of Experimental Psychology, 71(1), 291-301.
2. Bélanger, N. & Rayner, K. (2015). What eye movements reveal about deaf readers. Association for Psychological Science, 24(3), 220-226.
3. Costello, B., Caffarra, S., Fariña, N., Andoni Duñabeitia, J., & Carreiras, M. (2021). Reading without phonology: ERP evidence from skilled deaf readers of Spanish. Scientific Reports, 11(1), 1-11.
4. Traxler, J. M., Banh, T., Craft, M. M., Winsler, K., Brothers, T. A., Hoversten, L. J., Piñar, P., & Corina, D. P. (2021). Word skipping in deaf and hearing bilinguals: Cognitive control over eye movements remains with increased perceptual span. Applied Psycholinguistics, 2021, 1-30. doi:10.1017/S0142716420000740
5. Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson III Tests of Cognitive Abilities. Rolling Meadows, IL: Riverside.

13:40
Gender and the formation of co-reference links during reading in autism

ABSTRACT. Autistic people often experience reading comprehension challenges. This experiment assesses competing theoretical predictions of how co-reference processing, which is often essential for comprehension, might differ for autistic and non-autistic people. This study examines 1) the efficiency of co-reference link formation, 2) whether referential processing is modulated by the type of information that informs a link (lexical vs. world knowledge), and 3) whether co-reference links are maintained in the mental representation of text. Data from (at least) 24 autistic and 24 non-autistic participants will be presented. Both participant groups completed a battery of cognitive assessments and read passages of text as their eye movements were recorded. Texts were manipulated to include occupation nouns with a definitional gender (e.g., fireman) or a stereotypical gender (e.g., firefighter), that were followed by a target reflexive pronoun that either matched (e.g., himself), or mismatched (e.g., herself) the definitional/stereotypical gender. Preliminary gaze duration and total time data for the target pronoun (currently 19 autistic, 11 non-autistic participants) indicate that all participants experienced disruption to reading when a gender mismatch occurs, but that the time course of these effects may differ between groups. Full findings will be presented and discussed in relation to cognitive theories of autism.

13:00-14:00 Session 17B: Bilingual reading
Location: LT8
13:00
Bilingual Parafoveal Processing During Reading: Orthographic Preview Benefits in L1 and L2

ABSTRACT. We examined the amount of orthographic information extracted from the parafovea during sentence reading in German-English unbalanced bilinguals and monolingual English-speakers, using the boundary paradigm (Rayner, 1975). Participants read cognate target words embedded in sentences while their eye movements were recorded. Ninety sentences were created with identical word order in English and German. Three previews were generated from each target word: identity (Hamster/hamster), transposed-letter (Hasmter/hasmter), and substituted-letter (Harvter/harvter). Bilinguals read half of the sentences in English and half in German, while monolinguals read all sentences in English. Results showed an orthographic preview benefit in single, first fixation, and gaze durations: Fixation durations were shorter for the identity and transposed-letter previews than the substituted-letter previews. Bilinguals exhibited this benefit in both L1 (German) and L2 (English), although the orthographic preview benefit was greater in their L1, suggesting that language proficiency influences parafoveal processing. Although bilinguals showed slower reading times overall for English sentences than English-monolinguals, their orthographic preview benefit was similar. These findings suggest that proficient unbalanced bilinguals can extract orthographic parafoveal information in L2 as efficiently as English-monolinguals. This study extends previous evidence by comparing parafoveal processing between readers’ first and second language and between L1 and L2 language-users.

13:20
Semantic and orthographic parafoveal processing in bilingual readers

ABSTRACT. Recently, a GCB1 study investigating parafoveal processing in L1-English and L1-German/L2-English readers found that L2ers experienced interference from a non-cognate translation parafoveal mask (arrow vs. pfeil (translation: arrow)), but derived a benefit from an orthographic parafoveal mask (arrow vs. pfexk) in English2. This suggests that bilingual readers incurred a switching cost from the complete German parafoveal word, but derived a benefit from keeping both lexicons active with the partial German parafoveal word. In this registered report (IPA)3, we replicate and expand on this with three additional masks/comparisons. (1) An English pseudo-word (Clain), to test whether facilitation from the orthographic mask was due to orthographic overlap, or because the mask is “word-like”3. (2) An English-word condition (Array), to test whether the shared semantic information between the German word and its English translation (Pfeil and Arrow) leads to the inhibition (language switch), or whether it is the presence of a non-identical word in the parafovea that leads to inhibition. (3) A comparison between the English translation and the two English non-words (Arrow vs. Arrzm+Clain), to test whether the non-word masks cause less inhibition than a real L2 word. This would provide further evidence of semantic facilitation and, interestingly, evidence that, unlike L1ers, L2ers show inhibition for L2 words and facilitation from non-words.

13:40
Your eyes tell your story: how eye-movement patterns during natural reading develop with L2 proficiency

ABSTRACT. The effects of the length and frequency of the currently fixated word (n) (Reichle et al., 1998) and of the neighbouring words n-1 and n+1 (Heister et al., 2012) on fixations on word n in skilled L1 readers are well-studied. However, little is known about how these effects differ in L2 readers, particularly how fixations change as L2 proficiency changes. This study compares the eye movements of 40 native speakers and two groups of Chinese L2 English readers with different English proficiency levels (27 intermediate, 12 advanced) while reading news excerpts. Nested linear mixed-effects models reveal that the length and frequency of word n, as well as the frequency of words n-1 and n+1, exhibited similar effects on First Fixation Duration, Gaze Duration, and Total Reading Time on the fixated word n across the three groups. However, the length of words n-1 and n+1 had distinct effects on these three measures for both Chinese groups, with the more advanced group resembling the native group more closely. Our results reveal different parafoveal processing and attentional distributions across the three groups, suggesting a gradual shaping of native-like patterns in L2 readers. Practice effectively helps eye-movement control even in the case of a rather disparate L1/L2 combination.

15:00-15:30 Coffee Break
16:30-18:30 Social Activities

Social Events

Various locations TBD

19:00-22:30 Conference dinner

TBD: directions, dress code, dietary requirements, staffing ...