Minuscule eye movements play a major role in binocular vision disorders
Fatema Ghasia (Cole Eye Clinic)
My lab's primary focus is to understand the role of abnormal neural circuits in strabismus and amblyopia and to apply novel strategies for their treatment. As a pediatric ophthalmologist, I witness first-hand the problems and nuances associated with diagnosing and treating patients with binocular vision disorders. As an oculomotor scientist, I have discovered and come to realize the value of obtaining eye movement recordings in these patients. To address a pressing need that I experienced as a clinician, I leveraged my role as an eye movement scientist to understand fixational eye movement (FEM) abnormalities as they relate to amblyopia diagnosis and treatment outcomes. We have built a cutting-edge infrastructure for tracking eye and head movements simultaneously, with high accuracy and precision, in children under different viewing conditions. Over the last several years, we have investigated the utility of eye movement measurements in children with binocular vision disorders. The systematic analysis of eye movement traces obtained in the lab has revealed for the first time several features that can be utilized to detect the presence of amblyopia, its clinical types, and its severity. We have also found that FEM abnormalities correlate with the reduced contrast sensitivity, impaired depth perception, and inter-ocular suppression experienced by these patients. Finally, we have found that assessing FEM characteristics can be a valuable tool for predicting functional improvement after patching therapy; I will also present recent data relating to newer dichoptic amblyopia treatments.
Takes place in LT5
Unstable fixation and nystagmus with a focus on the next generation of researchers
Symposium Organisers: Frank A. Proudlock (University of Leicester), Mervyn G. Thomas (University of Leicester), Jonathan T. Erichsen (Cardiff University)
This symposium aims to better understand the continuum of abnormal fixational eye movements, from unstable gaze in paediatric eye diseases up to more overt involuntary oscillations of the eyes in the form of nystagmus.
The session summarises the effects of unstable fixation and nystagmus on spatial and temporal aspects of functional vision. The symposium will also review the structural anomalies associated with disrupted foveal development, especially in relation to genetic causes of nystagmus and outline the impact of eye oscillations on clinical electrophysiological testing of underlying retinal abnormalities.
The session has been designed to provide an opportunity for an up-and-coming generation of researchers in the field of unstable fixation and nystagmus to present their work.
10:00 | Fixation eye movements in pediatric eye diseases ABSTRACT. Dr. Fatema Ghasia is a clinician-scientist with expertise in pediatric ophthalmology and binocular vision disorders and research interests in systems neuroscience, with an emphasis on human and primate ocular motor control. She is an Associate Professor and directs the Vision Neurosciences and Ocular Motility Laboratory at the Cole Eye Institute, Cleveland Clinic. One of the main emphases of the laboratory is studying the visual sensory and oculomotor effects of abnormal visual experience in early life that results in amblyopia and strabismus, and investigating treatment effectiveness. We have shown that fixation instability in amblyopia arises from nystagmus or from alterations in physiologic fixational eye movements (FEMs). The systematic analysis of FEM traces has revealed several features that can be utilized to detect the presence and severity of amblyopia and the angle and control of strabismus. We have found that FEM abnormalities correlate with the reduced contrast sensitivity, impaired depth perception, and inter-ocular suppression experienced by these patients. We have also found that assessing FEM characteristics can be a valuable tool to predict functional improvement after amblyopia treatment and strabismus repair. She will share her clinical and scientific interests and her journey to date, and highlight the importance of studying eye movements in a variety of childhood eye diseases. |
10:20 | Accuracy and precision of fixation is correlated with gaze angle ABSTRACT. Purpose – In infantile nystagmus, the null zone tends to favour better oculomotor control even when it is eccentric. We investigated whether gaze position affects fixation in typical participants. Methods – Nine emmetropes fixated vanishing optotype Landolt C targets at 4 m, for 7 gaze angles (±45°; 15° apart), while performing a resolution threshold task. Eye movements were recorded at 1000 Hz. Eye position accuracy and precision were derived from a bivariate probability density function. The isocontour surrounding the gaze positions with the highest 68% probability density was selected for further analysis. The length of the vector from the target position to the isocontour centre measured accuracy (perfect fixation is zero), and a larger contour area reflected lower precision. Results – Mean eye position accuracy had a significant positive correlation with gaze angle [r(2) = 0.973, p = 0.027], as did precision [r(2) = 0.990, p = 0.010]. Mean contour shape (min/max diam.) had a significant negative correlation (less circular) with gaze angle [r(2) = -0.998, p = 0.002]. Conclusion – Fixation performance is progressively less accurate and precise with increasing eccentric gaze. Just as nystagmus worsens outside the null zone in people with nystagmus, fixation appears to become more unstable as gaze shifts away from primary position. |
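The isocontour metrics described in the abstract above can be approximated with the error ellipse of a fitted bivariate normal (a BCEA-style estimate). The sketch below is a minimal illustration with a hypothetical helper name, not the authors' actual analysis pipeline: it derives accuracy (target-to-centre distance), precision (68% ellipse area), and contour shape (min/max diameter ratio) from (x, y) gaze samples in degrees.

```python
import math

CHI2_68_2DF = 2.28  # chi-square quantile (2 df) enclosing ~68% probability

def fixation_metrics(samples, target):
    """Accuracy, precision, and shape of fixation from (x, y) gaze samples.

    Approximates the 68% probability-density isocontour by the error
    ellipse of a fitted bivariate normal distribution.
    """
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    # Sample covariance of the gaze positions.
    sxx = sum((x - mx) ** 2 for x, _ in samples) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in samples) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in samples) / (n - 1)
    # Eigenvalues of the 2x2 covariance matrix give the ellipse axes.
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam_max, lam_min = tr / 2 + disc, tr / 2 - disc
    accuracy = math.hypot(mx - target[0], my - target[1])   # deg; 0 = perfect
    area = math.pi * CHI2_68_2DF * math.sqrt(max(det, 0.0)) # larger = less precise
    shape = math.sqrt(lam_min / lam_max) if lam_max > 0 else 1.0  # min/max diameter
    return accuracy, area, shape
```

With gaze scattered symmetrically around the target, accuracy is zero and the shape ratio approaches 1 (circular); eccentric, elongated scatter lowers the ratio, mirroring the "less circular" result reported above.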
10:40 | Investigating "Time to See" in infantile nystagmus ABSTRACT. Infantile nystagmus (IN) is characterised by a continuous, involuntary oscillation of the eyes. Individuals with the condition may anecdotally report feeling slow to make visual discriminations, although the exact nature of this ‘time to see’ phenomenon has not yet been established. We hypothesise that the continuous oscillation (c. 3-4Hz) of the eyes in those with IN, which introduces an additional temporal component to their vision whereby their foveas are often moved off-target, is the cause for this increased ‘time to see’. We are working to characterise ‘time to see’ in IN by using the novel approach of determining and comparing the presentation duration required by typical and IN participants to accurately resolve optotype targets. We will present preliminary results from our ongoing study of duration thresholds in IN, investigating the additional hypothesis that, within as well as across participants, longer duration thresholds will be associated with increasing nystagmus intensity (i.e. amplitude x frequency). Such a relationship could have potential applications in outcome measures for the efficacy of treatments which aim to reduce the intensity of nystagmus eye movements and their impact on visual perception. |
11:00 | Phenotyping in Infantile Nystagmus ABSTRACT. Infantile nystagmus is characterised by the involuntary rhythmic oscillation of the eyes. It often arises from mutations in genes expressed within the developing retina and brain. Genes implicated include those involved in melanin biosynthesis, solute carriers, transcription factors, G-proteins and ion channels. Afferent system defects are common and visualised using high resolution imaging techniques such as optical coherence tomography (OCT). Foveal hypoplasia represents arrested retinal development with varying grades of severity. The relationship between genotype and phenotype remains unclear. In this talk, we will share the spectrum of eye movement disorders seen in infantile nystagmus, the techniques used to characterise foveal developmental defects and the relationship to genotype. We will utilise nystagmus twins to explore variability and penetrance of the nystagmus phenotype in shared genotypes. To understand the relationship between structure and function, we will correlate vision to the severity of the arrested foveal development and genotype. |
11:20 | The fovea is horizontally elongated in infantile nystagmus ABSTRACT. Infantile nystagmus (IN) develops in the first few months of life, prior to the appearance of the fovea. Despite the constant eye movements, visual perception in IN is usually stable. We hypothesised that the foveal pit would be horizontally elongated in adults with IN, corresponding to the streak of the retina over which visual attention constantly oscillates. Horizontal and vertically orientated foveal images were acquired with a long wavelength (λc 1040 nm) optical coherence tomography (OCT) system from 15 adults with idiopathic IN or IN associated with conditions not known to affect the fovea, and from 15 controls (age, sex, and ethnicity matched). Horizontal and vertical foveal pit diameter were calculated. Foveal shape factor (vertical:horizontal pit diameter ratio) was significantly lower (more horizontal) in participants with IN, as compared to controls (0.88 vs. 0.96, p = 0.02, BF10 = 2.05). These results suggest that early-onset nystagmus may have a direct impact on foveal development, since IN typically develops before the appearance of the fovea. The findings have important implications for understanding the relationship between eye movements and visual development. |
11:40 | Abnormal electroretinography in albinism and idiopathic infantile nystagmus PRESENTER: Zhanhan Tu ABSTRACT. Albinism and idiopathic infantile nystagmus (IIN) are two common forms of infantile nystagmus with involuntary oscillation of the eyes which are commonly associated with retinal diseases. Severe morphological abnormalities of the fovea, optic nerve head (ONH) and peripapillary retinal nerve fibre layer (ppRNFL) have been confirmed in albinism and IIN using optical coherence tomography (OCT). Full-field electroretinography (ffERG) can be used to diagnose abnormal retinal responses to photopic and/or scotopic light stimulation. In this talk, our primary aim was to determine whether ffERG responses are normal in albinism and IIN when measured in a large sample of adults using a robust methodology. A secondary aim was to investigate the effect of nystagmus on ffERG responses. Sixty-eight participants with albinism, 43 with IIN and 24 controls were recruited for comparing ffERG responses. Within-subject comparisons of ffERG responses when nystagmus was more or less intense were performed on 18 participants. Overall, our study found that individuals with IIN and albinism have abnormal ERG responses under photopic conditions. Nystagmus can negatively affect ERG recording and lower the ERG amplitude under the scotopic condition. |
Bennett Lower Ground Lobby
13:00 | Fixation classification: how to merge and select fixation candidates ABSTRACT. Eye trackers are applied in many research fields. To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of the fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data of high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5 deg), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by two rules: 1) select saccades with amplitudes > 1.0 deg, and 2) select fixations with durations > 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters. |
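The two selection rules recommended above (saccade amplitude > 1.0 deg, fixation duration > 60 ms) can be sketched as a post-processing pass over an already-classified event stream. This is a minimal illustration with a hypothetical event representation, not the authors' implementation: small saccades are absorbed into the surrounding fixation, and short fixation candidates are then discarded.

```python
MIN_SACC_AMP = 1.0   # deg: smaller saccades do not split a fixation
MIN_FIX_DUR = 0.060  # s: shorter fixation candidates are discarded

def apply_selection_rules(events):
    """Merge and select fixation candidates after classification.

    `events` is a time-ordered list of dicts such as
    {"type": "fixation", "on": t0, "off": t1} or
    {"type": "saccade", "on": t0, "off": t1, "amp": deg}.
    """
    merged = []
    for ev in events:
        if (ev["type"] == "saccade" and ev["amp"] < MIN_SACC_AMP
                and merged and merged[-1]["type"] == "fixation"):
            # Rule 1: a sub-threshold saccade is absorbed into the fixation.
            merged[-1] = {"type": "fixation", "on": merged[-1]["on"], "off": ev["off"]}
        elif ev["type"] == "fixation" and merged and merged[-1]["type"] == "fixation":
            # Fixation following an absorbed saccade: extend the fixation.
            merged[-1] = {"type": "fixation", "on": merged[-1]["on"], "off": ev["off"]}
        else:
            merged.append(dict(ev))
    # Rule 2: drop fixation candidates shorter than the minimum duration.
    return [ev for ev in merged
            if ev["type"] != "fixation" or ev["off"] - ev["on"] >= MIN_FIX_DUR]
```

Note how both parameters directly shape the resulting fixation-duration distribution, which is exactly why the abstract argues they must be reported.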
13:20 | Web-based attention-tracking with an eye-tracking analogue is reliable and valid ABSTRACT. Psychological research is increasingly moving to the internet, where larger and more diverse samples of participants can be reached. While much work has been done on webcam eye-tracking, the technique still suffers from high levels of attrition (~60%!) and imprecision. MouseView.js was developed to circumvent this issue. This software uses a mouse-locked aperture of high resolution (analogous to the fovea), while blurring the rest of a stimulus display. It can thus act as an analogue to eye tracking, for example to measure overt attention in online experiments. Here, we present findings of a validation study in which MouseView was compared directly with eye tracking (EyeLink 1000) in preferential looking tasks. We found that mouse-guided dwell time (collected via the internet) was at least as reliable as gaze dwell time (collected in the lab). In a second study, we show that there was a strong correlation between dwell time measured with MouseView and with eye-tracking (collected in the lab, within-participants). The only clear deviation between mouse-guided and gaze behaviour was in the first second of stimulus presentation, suggesting eye-tracking more accurately captured involuntary attention. We conclude that reliable and valid dwell data can be collected in web-based experiments using MouseView.js. |
13:40 | Characterising Eye Movement Events with an Unsupervised Hidden Markov Model ABSTRACT. Eye-tracking allows researchers to infer cognitive processes from eye movements that are classified into distinct events. Parsing the events is typically done by algorithms. Here we aim to develop an unsupervised, generative model that can be fitted to eye-movement data using maximum likelihood estimation. This approach allows hypothesis testing about fitted models, in addition to serving as a classification method. We developed gazeHMM, an algorithm that uses a hidden Markov model as a generative model, has few critical parameters to be set by users, and does not require human-coded data as input. The algorithm classifies gaze data into fixations, saccades, and optionally postsaccadic oscillations and smooth pursuits. We evaluated gazeHMM's performance in a simulation study, showing that it successfully recovered hidden Markov model parameters and hidden states. Parameters were less well recovered when we included a smooth pursuit state and/or added even small noise to simulated data. We applied generative models with different numbers of events to benchmark data. Comparing them indicated that hidden Markov models with more events than expected had most likely generated the data. We also applied the full algorithm to benchmark data and assessed its similarity to human coding and other algorithms. For static stimuli, gazeHMM showed high similarity and outperformed other algorithms in this regard. For dynamic stimuli, gazeHMM tended to rapidly switch between fixations and smooth pursuits but still displayed higher similarity than most other algorithms. Concluding that gazeHMM can be used in practice, we recommend parsing smooth pursuits only for exploratory purposes. Future hidden Markov model algorithms could use covariates to better capture eye movement processes and explicitly model event durations to classify smooth pursuits more accurately. |
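gazeHMM fits its generative model by maximum likelihood; as a toy illustration of the decoding side of such an approach, a two-state (fixation/saccade) Viterbi pass over 1-D gaze velocities might look like the sketch below. All names and the hand-set Gaussian emission parameters are hypothetical; the actual algorithm estimates its parameters from the data and supports more states.

```python
import math

def gauss_logpdf(x, mu, sigma):
    """Log density of a Gaussian emission distribution."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def viterbi_classify(velocities, states, trans, start):
    """Most likely hidden-state sequence for 1-D gaze velocities (deg/s).

    `states` maps state names to (mu, sigma) of Gaussian emissions;
    `trans` and `start` hold log transition / initial probabilities.
    """
    names = list(states)
    V = [{s: start[s] + gauss_logpdf(velocities[0], *states[s]) for s in names}]
    back = []
    for v in velocities[1:]:
        row, ptr = {}, {}
        for s in names:
            prev = max(names, key=lambda p: V[-1][p] + trans[p][s])
            row[s] = V[-1][prev] + trans[prev][s] + gauss_logpdf(v, *states[s])
            ptr[s] = prev
        V.append(row)
        back.append(ptr)
    path = [max(names, key=lambda s: V[-1][s])]
    for ptr in reversed(back):       # backtrack through the pointers
        path.append(ptr[path[-1]])
    return path[::-1]
```

The sticky transition probabilities (staying in a state is much more likely than switching) are what make the decoded events contiguous rather than flickering sample by sample.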
14:00 | The amplitude of small eye movements can be accurately estimated with video-based eye trackers ABSTRACT. Estimating the gaze direction with a digital video-based pupil and corneal reflection (P-CR) eye tracker is challenging since 1) a video camera is limited in terms of spatial and temporal resolution, and 2) the captured eye images contain noise. Through computer simulation, we evaluated the localization accuracy of pupil and CR centers in the eye image for small eye rotations (<<1 deg). We show how inaccuracies in center localization are related to 1) how many pixels the pupil and CR span in the eye camera image, 2) the method used to compute the center of the pupil and CRs, and 3) the level of image noise. Our results provide a possible explanation for why the amplitude of small saccades may not be accurately estimated by many currently used video-based eye trackers. We conclude that saccades with arbitrarily small amplitudes can be accurately estimated using the P-CR eye-tracking principle, given that the level of image noise is low and the pupil and CR span enough pixels in the eye camera, or if localization of the CR is based on the intensity values in the eye image instead of a binary representation. |
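The contrast drawn above between binary and intensity-based CR localization can be illustrated with two centroid estimators over a toy grayscale patch. These are hypothetical helpers for illustration only (assuming a bright blob on a near-zero background), not the simulation code used in the study: thresholding quantizes the estimate to the pixel grid, whereas intensity weighting yields a sub-pixel centre.

```python
def binary_center(img, thresh):
    """Centroid of pixels above threshold (binary blob representation)."""
    pts = [(x, y) for y, row in enumerate(img)
           for x, v in enumerate(row) if v > thresh]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def intensity_center(img):
    """Intensity-weighted centroid: a sub-pixel estimate of a bright blob.

    Assumes background intensity is (near) zero, so only the blob
    contributes weight.
    """
    tot = sum(sum(row) for row in img)
    cx = sum(x * v for row in img for x, v in enumerate(row)) / tot
    cy = sum(y * v for y, row in enumerate(img) for v in row) / tot
    return (cx, cy)
```

For an asymmetric blob the two estimators disagree: the binary centroid lands between the supra-threshold pixels, while the intensity-weighted centroid shifts toward the brighter side, which is the behaviour the abstract exploits to recover very small eye rotations.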
14:20 | Event level evaluation of eye movement event detectors ABSTRACT. Dozens of eye movement event detectors exist to date; however, the reported results of performance evaluation are usually neither directly comparable between papers nor easily interpretable, even by field experts. To a large degree, this is a direct consequence of the multitude of available evaluation methods and approaches. The number of reported metrics alone is impressive (sensitivity/specificity/F1 scores, accuracy or disagreement rates, Cohen's kappa, etc.), while the details of their application and implementation lead to fundamental dissimilarities even when the same metric is used in the evaluation. This is especially prominent when considering event level evaluation. In this talk we review existing practices of evaluating eye movement event detection algorithms and present an empirical analysis of different combinations of eye movement event matching methods and metrics computed on the results of the matching step. We also give recommendations on improving the event detection evaluation pipeline that aim to ensure the high quality of future publications, as well as to encourage inter-comparability and reproducibility in this field of research. |
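As an example of how much the matching step matters, one possible event-level evaluation (greedy maximum-overlap matching with an intersection-over-union threshold, which is just one of many defensible choices) can be sketched as follows; the function name and the default threshold are assumptions for illustration.

```python
def event_f1(truth, detected, min_iou=0.5):
    """Event-level F1 score after greedy overlap matching.

    Events are (onset, offset) tuples of one class (e.g. fixations).
    Each ground-truth event is matched to at most one detected event
    whose temporal intersection-over-union reaches `min_iou`.
    """
    def iou(a, b):
        inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union > 0 else 0.0

    unmatched = list(detected)
    hits = 0
    for t in truth:
        best = max(unmatched, key=lambda d: iou(t, d), default=None)
        if best is not None and iou(t, best) >= min_iou:
            hits += 1
            unmatched.remove(best)   # one-to-one matching
    precision = hits / len(detected) if detected else 0.0
    recall = hits / len(truth) if truth else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

Changing the matching rule or the threshold changes the reported F1 for the very same detector output, which is precisely the comparability problem the talk addresses.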
14:40 | Eye tracking: empirical foundations for a minimal reporting guideline PRESENTER: Kenneth Holmqvist ABSTRACT. In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that existing reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research. This is an international collaboration involving 46 authors. |
The When and Where of the Looking at Nothing Effect: Examining Eye Movements During Memory Retrieval ABSTRACT. Looking at Nothing (LAN) describes the behavior whereby people look at empty spatial locations when trying to retrieve information from memory that was previously associated with these locations. This study investigated LAN for retrieval from working memory. We tested whether LAN is directed to all or only some of the associated spatial locations, when LAN occurs, and its relation to retrieval performance. During encoding, participants saw four word-pairs in four different spatial locations on a screen. During retrieval, they heard two words and had to indicate whether the words came from one previously seen word-pair (positives) or from two different pairs (lures). We found that participants showed LAN only to the first probe's location, and only when hearing the second probe, irrespective of the correctness of the response. The results speak in favor of memory processes leading to LAN during the recollection of information in working memory. |
The context effect on implicit sequence learning using an ocular version of the Serial Reaction Time (O-SRT) task ABSTRACT. Objectives: We aimed to evaluate the effect of contextual information on implicit sequence learning (ISL) using an ocular version of the Serial Reaction Time task (O-SRT). Participants and Methods: A total of 76 young adults were tested on the O-SRT using two alternating sequences simultaneously. Participants were randomly assigned to one of two versions of the task: with or without context. In the former, each of the two sequences was presented with a different context (shape and color); in the latter, both sequences were presented with the same context. Eye movements were recorded with an SMI-RED eye-tracker (250 Hz). Results: Correct anticipations of the next spatial location were analyzed with a mixed-design ANOVA, with Group (with and without context) and Learning trials (1-6) as between- and within-subjects factors, respectively. The group with context showed significant learning in the later trials of training, whereas the group without context showed significant improvement in the earlier trials. Conclusions: Contextual information might impede ISL in the early learning phase, possibly because it fosters explicit exploration of the task or because it requires the processing of additional information compared to the no-context condition; however, it proves beneficial in the later learning stages. |
Eye-tracking in innovative neuropsychological assessment of visual working memory ABSTRACT. In both the laboratory and clinical neuropsychological assessment, visual working memory (VWM) is typically estimated by means of the maximum storage load. However, these assessment settings ignore that in daily life, information is generally available in the external world. We can easily sample information from the environment by making eye-movements, reducing the need to use the maximum VWM storage load. Vice versa, reliance on VWM capacity increases when accessing external information is difficult or costly. We investigated whether people reduce VWM load when sampling is possible, and whether they memorize more information when sampling is costly. Patients with severe memory problems (Korsakoff’s syndrome) and controls were instructed to perform a copy task while their eyes were tracked. The availability of the example puzzle was manipulated by introducing a gaze-contingent waiting time to provoke different strategies (sampling vs. storing). Preliminary data confirms that controls successfully shifted from sampling to storing when information became less readily available. Although patients also showed less sampling indicating an attempt to adjust strategy, they could not memorize more items at once and made more errors. This suggests that successfully switching strategy from sampling to storing is dependent on VWM functionality. |
Gaze and visual short-term memory for localizing part of an image ABSTRACT. Visual short-term memory (VSTM) is impaired in conditions such as Alzheimer’s disease, impacting daily life in many ways. We ask whether VSTM can be tested during free viewing of natural scenes, with the aim of understanding the importance of gaze allocation on VSTM under ecologically valid conditions. Recognition memory for scenes is close to ceiling for long presentation durations and short time intervals between encoding and retrieval, which makes it an impractical measure to test the impact of gaze allocation on memory. Instead, we used an image-part localization task, in which observers freely explore visual scenes (from a variety of categories) for 8 seconds and 2 seconds later localize a randomly selected image-part within a scene placeholder. Short-term memory for image-part localization is quite poor, unlike recognition memory for whole scenes, but correlates positively with an individual’s fixation density (weighted by fixation duration) on the image-part prior to localization. This task could be used to investigate VSTM during free viewing in normal aging and dementia. |
Testing memory strength with pupil dilation as a function of strategic and automatic memory retrieval. ABSTRACT. Previous research indicates that pupil responses during automatic recognition of previously seen information reflect the aggregate strength of a memory trace and not cognitive effort per se. In contrast, during recall or effortful recollection, the retrieval process is predominantly reliant on strategic processes and cognitive effort, as an active searching strategy is needed for successful retrieval. In such cases, consequently, we expected that as the strength of the memory trace increases, less mental effort is needed for retrieval, resulting in a negative link between memory strength and pupil dilation. Thus, memory strength might be differently related to pupil dilation in different forms of memory tests. To test this hypothesis, we implemented two testing paradigms with verbal stimuli: a paired-associate learning paradigm and the source-monitoring framework. We manipulated memory strength by presenting words a different number of times (one vs. four) in different spatial locations in source monitoring, and by presenting word-pairs a different number of times (one vs. two) in paired-associate learning. In the subsequent memory test, we tested participants on recognition and recall/recollection and measured pupil responses. Our preliminary results suggest that the link between memory strength and pupil dilation is modulated by the form of retrieval. |
Pupil responses: indices of individual memory performance ABSTRACT. Studies have suggested that pupil size changes reflect activity of the Locus Coeruleus (LC). Thus, by measuring fluctuations in pupil diameter over time, we can indirectly monitor ongoing attentive processes. Many pupillometry studies have already investigated within-subject effects. In contrast, less research has focused on exploring how individual differences in pupil responses correlate with criterion variables. To this aim, in the present research, we inspected between-subjects variability in phasic pupil responses as a possible predictor of individual memory performance. In one experiment we used an incidental memory task targeting mnemonic discrimination (the Mnemonic Similarity Task), whereas in the other a 2-back design was used. Participants' pupils were recorded during both tasks. For our correlational analyses, we measured baseline-corrected event-related pupil dilation (ERPD). We conclude that individual differences in task-evoked pupil behavior can be used to predict cognitive performance. This might be caused by the modulating role of the LC on attentional processes. |
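Baseline-corrected event-related pupil dilation, as used in the abstract above, is typically computed by subtracting the mean of a pre-event baseline window from the post-event trace. A minimal sketch (hypothetical function name; assumes the trace is aligned so the event follows the baseline window):

```python
def erpd(trace, fs, baseline_s=0.5):
    """Baseline-corrected event-related pupil dilation.

    `trace` is pupil diameter sampled at `fs` Hz, aligned so the event
    occurs after `baseline_s` seconds of pre-event baseline.
    Returns the post-event samples minus the mean baseline diameter.
    """
    n0 = int(round(baseline_s * fs))
    baseline = sum(trace[:n0]) / n0
    return [d - baseline for d in trace[n0:]]
```

A per-trial summary (e.g. the mean or peak of the returned values) can then be averaged within participants and correlated with memory performance across participants.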
A field test of appearance-based gaze estimation ABSTRACT. Appearance-based gaze estimation (ABGE) refers to techniques that estimate gaze direction from video recordings of the eyes or face. Although many ABGE methods have been developed, most of their validations can only be found in the technical literature (e.g., computer science conference papers). We aimed to 1) identify which ABGE methods are usable by the average experimental psychologist, and 2) validate those methods. We searched the existing literature for methods that don’t require calibration and have clear documentation. Only OpenFace and OpenGaze were found to fill these criteria. We evaluated the methods by having adult participants fixate points displayed on a screen for three conditions with different degrees of head movement. We demonstrate that (1) gaze estimation sufficed to distinguish between all fixated points for some but not all participants, (2) there was large variability in the accuracy and precision of gaze estimation, (3) gaze estimates were not independent from head orientation, and (4) OpenGaze outperformed OpenFace. We conclude that both methods can potentially be used in sparse environments with horizontally separated areas of interest. |
eyetRack - Shiny application for recurrence quantification analysis ABSTRACT. eyetRack is a new R package and Shiny application that facilitates accessible analysis of eye-tracking data from SMI or Tobii eye-trackers. It offers a basic analysis giving an initial overview of the number and duration of fixations. A barplot visualization shows the number of fixations in each Area of Interest as well as the Dwell Time. The tool also offers visualization of the scanpath over the stimulus. The essential functionality of the application is analysis through recurrence and recurrence quantification analysis (RQA). The recurrence plot can be displayed; however, visual inspection of recurrent fixations can predispose to subjective bias when evaluating a set of results. For that reason, we use RQA measures, which quantify the data displayed in the recurrence plot. Using RQA, we can compare different tasks or multiple participants. The last functionality of the application is the calculation of coefficient K, which helps distinguish focal and ambient attention. The tool will be freely available at www.eyetracking.upol.cz/tools. |
An open-source device for vestibular stimulation and eye-movement tracking in head-fixed mice ABSTRACT. Visual virtual reality (VR) is widely used to study cortical processing in awake, behaving mice. It allows for tight control of animal-driven visual stimuli and provides the ability to change the coupling between behaviour and visual stimulus. However, most visual VR approaches render animals motionless in space (i.e., head-fixed), resulting in the vestibular system being taken out of play. Consequently, the head-direction (HD) system, which is primarily driven by vestibular input and plays a pivotal role in navigation, is severely compromised. Here we present a novel experimental apparatus to overcome this limitation. Using an open-source approach, we have built a modular and affordable device allowing the rotation of head-fixed, behaving mice. It can be used in open-loop mode to study vestibular sensory representation and processing. In closed-loop mode, the apparatus allows animals to navigate in rotational space and self-generate vestibular input, providing a better substrate for 2D navigation in virtual environments. We show that our approach is compatible with the electrical recording of brain activity at the cellular level and results in the robust recruitment of HD cells. We further demonstrate its utility by combining the tracking of vestibular and visually evoked eye-movements with optogenetic interference of specific neuronal populations. |
Metacognitive Modeling Effect of Reading Illustration First for EFL Readers: A Study of Eye Movement Evidence ABSTRACT. Eye-trackers have been adopted to investigate the instructional effects of modeling in science reading and to reveal underlying cognitive processes. However, few studies have investigated how to help less capable EFL readers read illustrated narratives by utilizing eye-trackers. Modeling illustration reading first is expected to help readers form a macrostructure before reading the text. This study explored the metacognitive modeling effect of reading illustrations first, with metacognitive questions as prereading guidance, for EFL readers with beginning language capacity. Participants were randomly assigned to four groups (intervention: modeled/non-modeled; lexical difficulty of article: low/high), in which illustrated narratives were provided with a story structure (prologue/climax/resolution). Modeled groups were instructed to read illustrations with 5Ws questions before reading the text, as the model did, while non-modeled groups read in their own manner. Two-way ANOVAs were conducted to analyze posttest performance and eye measures: fixation counts (FC), dwell time (DT), and run counts (RC). Lexical level demonstrated substantial impacts on FC, DT, and RC in both text and illustration areas of the prologue and climax structures. An intervention effect appeared for FC and RC in both text and illustration, but only in the prologue structure. The results indicate that the metacognitive modeling strategy is influential only in reading the beginning of a story. |
Reading search page results: Evidence from an eye tracking study on 11-12-year-olds ABSTRACT. This study examined whether 11-12-year-old Finnish readers can differentiate task-relevant search page results from irrelevant ones. The participants (N=34) read simulated search engine results pages (SERPs) while their eye movements were recorded. Each page included 8 search hits, each described with a title and a short description that could be either relevant or irrelevant to the search task given to the readers. The position of the relevant search results on the search page was manipulated, and average reading speed in a separate reading task was used as a measure of reading skill. Fixation times on titles showed that skillful readers spent less time reading the titles towards the end of the search page, regardless of relevance. Less skilled readers did not show such a speed-up. As for fixation times on descriptions, skilled readers spent less time on irrelevant segments towards the end of the pages, whereas fixation times for relevant segments did not change as much. Less skilled readers’ fixation times on irrelevant segments did not decrease across the pages. In sum, reading skill modulates how relevant and irrelevant search results are attended to on a search page. |
Beginning to Characterise Children’s Eye Movement Control during Reading in English: A Corpus Study ABSTRACT. Past research examining beginner child readers’ eye movement behaviour during silent sentence reading has primarily compared when such readers move their eyes relative to skilled adult readers. The other key question regarding eye movement behaviour during reading, namely where readers fixate, has received much less attention. We have created a corpus in English, based on the results of three experiments (adults n = 132; children n = 132), which we are using to characterise where typically developing 8- to 9-year-old child readers move their eyes during silent sentence reading. Our systematic analyses include assessments of differences in launch site, initial landing position, refixation probability, and skipping rates (in relation to foveal and parafoveal processing of words). The results will provide insight into how child readers typically encode information about words during reading, how visual and linguistic characteristics of words determine where the eyes move, and how such behaviour differs in comparison to adult readers. We believe that such understanding will be critical to the development of models that capture and represent on-line lexical processing and eye movement control in an ecologically valid way. |
Interactive effects of semantic diversity and word frequency in natural reading ABSTRACT. Word frequency exhibits one of the strongest influences on reading behavior, increasing skipping rates and reducing fixation durations (Rayner, 1998). However, some have argued that semantic/contextual diversity better represents word difficulty (Adelman et al., 2006) and has a stronger effect on fixation behavior when frequency is controlled (Plummer et al., 2013). We investigated whether these factors influence the reading process differently across the time course of reading behavior. We performed a secondary data analysis on a sentence reading study with target words that ranged in frequency and semantic diversity. We found that only word frequency affected skipping rates. However, there was an interaction between word frequency and contextual diversity wherein high-frequency words were read faster when they were low in contextual diversity. Conversely, low-frequency words were read faster when they were high in contextual diversity. This suggests that, for familiar words, having a specific meaning facilitates word recognition, whereas for unfamiliar words, having more diverse semantic features makes recognizing at least one of those meanings more accessible. Our findings are consistent with prior literature suggesting that word frequency facilitates early stages of word recognition prior to meaning retrieval, while semantic diversity influences more fine-grained semantic processing downstream. |
Do Chinese deaf readers develop a unique cognitive mechanism during visual word recognition? The effect of oral language experience and reading ability PRESENTER: Nina Liu ABSTRACT. For most deaf readers, learning to read is a challenging task. Visual word recognition is crucial during reading. However, little is known about the cognitive mechanisms of Chinese deaf readers during visual word recognition. In the present study, two experiments explored the activation of orthographic, phonological, and sign language representations during Chinese word recognition. Eye movements were recorded as participants read sentences containing orthographically similar words, homophones, sign-language-related words or unrelated words. All deaf readers showed shorter reading times for orthographically similar words compared to unrelated words. However, when reading ability was controlled, the homophone advantage was observed only for deaf readers with more oral language experience, whereas the sign language advantage was observed only for deaf readers with less oral language experience. When oral language experience was controlled, deaf readers with higher reading fluency had more stable orthographic and sign language representations than those with lower reading fluency. Deaf college readers with more oral language experience activate word meanings through orthographic and phonological representations, whereas those with less oral language experience activate word meanings through orthographic and sign language representations, reflecting a unique cognitive mechanism; reading ability moderates this process. |
Individual differences in word learning associated with reading skill and vocabulary: An eye-movement investigation ABSTRACT. A large proportion of an individual’s vocabulary is learned incidentally, during reading. We examined individual differences in lexical acquisition during reading, and whether low-frequency words are processed differently to pseudowords during lexical acquisition. Rigorous pre-screening ensured that the low-frequency words were not known by our target population. Participants’ eye movements were measured as they read sentences containing unknown words (either low-frequency words or pseudowords) in a learning phase and a subsequent test phase. First, each new word was presented in four meaningful sentences during the learning phase, providing a diverse semantic context. We then took individual assessments of both reading ability and vocabulary. In the test phase, each new word was presented in a further four meaningful sentences, and reading time measures provided an index of the ease with which participants were able to read the new words. Finally, participants completed a semantic categorisation task to examine whether semantic representations for the new words had been successfully formed. We predict that greater reading skill and larger vocabulary size will be associated with more efficient lexical acquisition. We also predict that there will be no differences between low-frequency words and pseudowords, validating the use of pseudowords within word learning experiments. |
The role of the left perceptual span in L2 reading: An eye-tracking study ABSTRACT. Substantial cognitive resources are required for processing the foveal area, leaving fewer cognitive resources available for parafoveal processing. Proficient first-language (L1) readers have a perceptual span of 3-4 characters to the left and 14-15 characters to the right of the foveal fixation [1]. Given that second-language (L2) processing requires more cognitive resources [2], it stands to reason that L2ers will have a smaller perceptual span than L1ers. We hypothesize that L2ers have a smaller, more symmetrical perceptual span relative to L1ers, allowing them to use the left span to reconfirm what they previously read. We test the symmetry of the perceptual span using the GCMWP [3] and manipulate the information available (3, 6, or 9 characters to the left / 3, 9, or 15 characters to the right). Additionally, we account for the influence of English skills with German L1ers/English L2ers reading in English (n=53). L2ers benefit from an increase in window size from 3 to 6 characters to the left of fixation and from 3 to 9 characters to the right, with only higher-skilled L2ers further benefiting from an increase in window size up to 15 characters to the right of fixation. We plan to compare our data to L1ers of different ages. Overall, our data suggest that only highly skilled L2ers exhibit an L1-like asymmetric perceptual span. |
Lexical access in L2 reading: evidence from self-paced reading and eye tracking data ABSTRACT. We present a study on lexical access while reading in L2. We analyze word-reading data obtained using two paradigms: self-paced reading (SPR) and eye tracking. According to Frank et al. (2013), eye-tracking measures in L1 correlate highly with reading times in SPR for the current word and for the following word, reflecting spillover in SPR and parafoveal preview in eye tracking. Chinese-speaking learners of Russian (A2-B1) read the Sentence Corpus for L2 learners of Russian (90 sentences) either in SPR mode (n=65) or in an eye-tracking experiment (n=30). They read for comprehension, with comprehension questions asked after 30% of the sentences. In the self-paced reading data, we found significant effects of word frequency, word length and predictability on reading times. In the eye-tracking data, we analyzed four measures (Frank et al., 2013): first-fixation time, first-pass time, right-bounded time and go-past time, and found strong positive correlations between all these measures and the reading time of the previous (not the following) word in SPR, averaged over subjects. This can be explained by the lack of parafoveal preview and a strong spillover effect in eye-tracking mode compared to SPR mode in L2 reading. Funded by the research grant no. ID: 92566385 from St Petersburg University. |
GECO-CN: Ghent Eye-Tracking COrpus of Sentence Reading for Chinese-English Bilinguals ABSTRACT. GECO-CN is the first eye-tracking corpus of Chinese-English bilinguals reading a novel in their two languages. Participants read half of the novel in Chinese, their first language, and the rest in English, their second language, in a counterbalanced order. They also completed a series of language proficiency tests and a language background questionnaire (LEAP-Q). This work presents some important descriptive statistics and compares reading performance in the two languages on eye movement measures such as average reading times and skipping rate. In addition, this study used the same reading material as GECO (Cop et al., 2017), which studied the performance of Dutch-English bilinguals. By comparing two bilingual eye-movement corpora that differ in the similarity of the second language to the native language, this corpus makes it possible to investigate the influence of different Eastern and Western first languages on reading in the second language. This unique eye-tracking corpus will be freely available online, enabling future research to examine theories of bilingual reading by investigating similarities, differences, and mutual influences between two different writing systems. |
The processing strategies for illustrated science reading and Chinese academic words with different semantic transparency among middle-school students: An eye-tracking study ABSTRACT. This study uses eye tracking to explore the cognitive processes and strategies of seventh-grade students with different reading abilities when reading illustrated scientific texts, and how readers deal with academic words of high (paraphrase) and low (transliteration) semantic transparency. Seventh-grade students (N=65) were divided into reading-ability groups based on a pre-test. After reading four science texts, they answered free-recall and reading comprehension questions, and finally participated in cued retrospective think-aloud (CRTA). The results show that reading ability is significantly positively correlated with reading comprehension and free-recall performance. When reading transliterated words, students of all ability levels showed longer gaze durations than when reading paraphrased words, indicating the difficulty of inferring the meaning of academic words from their morphemes. Furthermore, regardless of reading ability, students used the text as the main source for reading comprehension, although they recognized that the illustrations carry a large amount of integrated information. The cued retrospective think-aloud data show that high-ability students often use inference and integration strategies; middle-ability students often use information-extraction strategies; and low-ability students often fall back on less effective reading strategies. It is recommended that differences in students’ reading ability be taken into account. |
Eye Movements and Reading in Children Who Survived Cerebellar Tumors PRESENTER: Marina Shurupova ABSTRACT. Previous investigations have demonstrated that cerebellar tumor survivors tend to have a variety of oculomotor impairments, such as hypermetria and poor gaze stability. In the current study, we aimed to evaluate oculomotor deficits and reading parameters in children who survived cerebellar tumors. Two groups, 65 patients and 47 healthy controls, all aged 8–17, participated in the study. We analyzed performance in several oculomotor and reading tasks. Eye movements were recorded monocularly at 60 Hz using an Arrington eye tracker. We revealed pronounced reading impairments in the patients as compared to healthy children, including longer fixation durations, greater numbers of fixations and regressive saccades, and longer reading times. The patients showed gaze fixation instability and long scanpaths reflecting returns of gaze to already-counted objects. We also observed significant correlations between basic oculomotor functions and reading parameters in both groups. All these tendencies indicate that a cerebellar tumor and its treatment cause oculomotor changes which can lead to disturbances in higher cognitive functions, such as reading. Our results highlight the necessity of considering these deficits in current rehabilitation protocols for pediatric cerebellar tumor survivors. |
The role of phonological and orthographic parafoveal processing during silent reading in Russian children and adults ABSTRACT. Parafoveal processing allows readers to recognize a word before fixating on it. However, there is still a debate about the type of information that people might get from the parafovea. Studies have shown that adults and children use phonological and orthographic parafoveal processing, but their role depends on age and language. In the present study, we investigated the development of phonological and orthographic parafoveal processing during silent reading in 56 Russian-speaking second graders, 48 fourth graders, and 65 adults. The participants read sentences with embedded target nouns, while their eye movements were recorded in a gaze-contingent boundary paradigm. The target nouns were presented in the parafovea in original, pseudohomophone, control for pseudohomophone, transposed-letter and control for transposed-letter conditions. The comparison of fixation durations between the conditions allowed us to assess the reliance on phonological and orthographic information in each age group. We found that adults used both phonological and orthographic information from the parafovea, whereas second graders and fourth graders relied on orthographic parafoveal information. These results might indicate that Russian-speaking children do not have fully developed phonological recoding skills by grade 4, but can recognize a word in the parafovea as a whole orthographic unit already in grade 2. |
A two-tier taxonomy of gaze behaviours for free-moving participants ABSTRACT. A Gaze Event Detector (GED) is an algorithmic component that parses a time series of eye positions and directions into meaningful gaze events (aka oculomotor behaviours). The best-known gaze events are fixations and saccades. There are also smooth pursuits (SPs), events caused by the vestibulo-ocular and optokinetic reflexes (VOR and OKR), vergence shifts, and more. Many GED algorithms have been developed in the last 50 years; the most popular are versions of either IVT or IDT. Most of these are suited to experimental paradigms in which the participant sits in front of a flat stimulus display. Thus they often equate gaze movement with eye movement, disregard head movement, cannot account for vergence changes, etc. Nowadays, eye-tracking technology is rapidly becoming a ubiquitous component of most XR devices. One barrier to the adoption of gaze analysis in XR is the lack of GED algorithms that process coordinated eye-head-body gaze movements in complex 3D stimulus scenes. We propose a novel two-tier taxonomy of gaze behaviours that combines atomic eye-head-body movements into meaningful gaze behaviours, such as focusing on a static target, following a moving target, shifting attention, or internal thinking, and we present a preliminary algorithmic implementation of it. |
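The IVT family of detectors mentioned in this abstract is simple enough to sketch. The toy velocity-threshold classifier below (the threshold and sampling rate are illustrative values, not taken from the talk) labels each gaze sample as fixation or saccade, and it also illustrates the very limitation the authors point out: it operates on eye-in-head angles alone and knows nothing about head or body movement.

```python
import numpy as np

def ivt_classify(gaze_deg, fs=250.0, vel_thresh=30.0):
    """Minimal I-VT sketch: label each sample 'fix' or 'sac' by angular speed.

    gaze_deg: (N, 2) array of gaze angles in degrees (x, y).
    fs: sampling rate in Hz; vel_thresh: deg/s (both illustrative choices).
    """
    gaze = np.asarray(gaze_deg, dtype=float)
    # Sample-to-sample angular speed in deg/s (planar Euclidean approximation).
    vel = np.linalg.norm(np.diff(gaze, axis=0), axis=1) * fs
    labels = np.where(vel > vel_thresh, "sac", "fix")
    # The first sample has no preceding velocity; reuse the first label.
    return np.concatenate(([labels[0]], labels))
```

A real detector would at least merge adjacent same-label samples into events and filter out implausibly short ones; this sketch stops at per-sample labels.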
GlassesValidator: Data quality tool for eye tracking glasses ABSTRACT. According to the proposal for a minimum reporting guideline for an eye tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye tracking recordings. To enable determining the accuracy quickly and easily, we have produced a validation poster and written accompanying Python software. Here we present this work. We tested the poster and procedure with 61 subjects. In addition, the software has been tested with six different wearable eye trackers. The validation procedure can be administered within a minute per subject and provides accuracy, precision and data loss. Calculating the eye tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills. |
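For readers unfamiliar with the three measures this tool reports, they can be computed from a validation recording in a few lines. The sketch below uses common definitions (mean angular offset from a known target for accuracy, RMS of sample-to-sample distances for precision, fraction of missing samples for data loss); it is an illustration of those definitions, not the GlassesValidator implementation.

```python
import numpy as np

def data_quality(gaze_deg, target_deg):
    """Toy accuracy / precision / data-loss computation on angular gaze data.

    gaze_deg: (N, 2) gaze directions in degrees; NaN rows mark lost samples.
    target_deg: (2,) known fixation-target direction in degrees.
    Uses a small-angle planar approximation; illustrative only.
    """
    gaze = np.asarray(gaze_deg, dtype=float)
    valid = ~np.isnan(gaze).any(axis=1)
    data_loss = 1.0 - valid.mean()          # fraction of lost samples
    g = gaze[valid]
    offsets = np.linalg.norm(g - np.asarray(target_deg), axis=1)
    accuracy = offsets.mean()               # mean angular offset (deg)
    d = np.diff(g, axis=0)
    precision_rms = np.sqrt(np.mean(np.sum(d**2, axis=1)))  # RMS-S2S (deg)
    return accuracy, precision_rms, data_loss
```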
Fixation sequences when walking up and down stairs in daily life PRESENTER: Andrea Ghiani ABSTRACT. In our daily life there are many situations in which it is not directly evident where gaze should be directed: not all tasks require constant visual guidance and there may be reasons to look elsewhere. Walking up or down a staircase is a good example of such a situation. We investigated participants’ gaze behaviour when walking on stairs as part of a navigation task in their own house. Participants did not know that stairs were the focus of investigation. We analysed the order in which participants fixated the steps, confirming earlier reports that people often looked at each step sequentially. However, we found that participants also often made fixations back to steps that had already been fixated and that they regularly skipped looking at several steps to fixate further ahead. The main difference between ascending and descending the staircase was found when approaching the first step: when descending participants looked extensively at the beginning of the staircase, whereas when ascending they did not. This study shows that focussing on sequences of fixations is useful when investigating stair climbing with a variety of staircases in different environments. |
Investigating the effects of task and body movement on the generalizability of scene viewing experiments. ABSTRACT. Scene-viewing experiments conducted in the laboratory attempt to understand human gaze behavior and to generalize findings. This generalizability has often been questioned. Recent advances in eye tracking technology allow for experiments outside the laboratory with a high degree of mobility and enable participants to have more freedom of movement. In the current study we record eye movements using mobile eye tracking devices, but remain in the laboratory in order to investigate how distinct experimental conditions affect eye movements. Here we present the effects of (A) the given scene-viewing task and (B) the possibility of head movements on scan path statistics. The given task clearly affects both temporal and spatial gaze parameters. We find differences in behavior even for apparently minor changes in task instructions, such as free-viewing and guessing tasks with only minimal task constraints. In our experiments, the subjects' freedom of movement hardly affects temporal gaze parameters but noticeably affects spatial parameters. Our results are consistent with the view that laboratory factors such as a chin rest do not cause artefacts that limit the generalizability of laboratory findings. However, the absence of a task, or a free-viewing task, significantly affects gaze behavior. |
Automated Discrimination of Stable and Non-stable Gaze Events in Dynamic Natural Conditions ABSTRACT. Introduction: It is challenging for people with visual field defects to perform daily tasks that rely on having a good visual overview. To help people with such a condition, an essential step is to quantify their scanning behavior. However, there are no accurate gaze event detectors suitable for use in dynamic natural conditions, which limits research in such settings. We aim to design a gaze-event detector for conditions with free head and body movement. Methods and Results: Our event detector interprets environmental movement using optic flow estimation. Additionally, it employs point tracking on patches surrounding the gaze location to analyze the gaze path. By combining this information, the detector can discriminate between, describe, and visualize stabilizing and non-stabilizing gaze events. We tested our method on samples recorded with a Pupil Invisible eye tracker from 15 participants who performed a series of predefined activities, including simultaneously moving and following an object of interest. The method successfully discriminated between our two classes of events and is being improved further. Conclusions: We conclude that our gaze event detection method is suitable for examining visual scanning behavior in dynamic natural settings. Researchers can benefit from this method when investigating scanning behavior in people with visual field defects. |
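The core decision described here, comparing gaze motion against the motion of the scene content around the gaze point, can be reduced to a toy rule. In the sketch below, the scene flow at the gaze location is assumed to be already estimated (e.g., by an optic-flow algorithm), and the tolerance threshold is a made-up value, not one from the presented method:

```python
import numpy as np

def classify_stability(gaze_vel, scene_flow, thresh=5.0):
    """Label each frame 'stable' if gaze moves with the scene content
    around it (gaze velocity ~ local optic flow), else 'non-stable'.

    gaze_vel, scene_flow: (N, 2) velocities in deg/s.
    thresh: illustrative tolerance in deg/s, an assumption of this sketch.
    """
    residual = np.linalg.norm(
        np.asarray(gaze_vel, dtype=float) - np.asarray(scene_flow, dtype=float),
        axis=1,
    )
    return np.where(residual < thresh, "stable", "non-stable")
```

Under this rule, a fixation on a static scene (both velocities near zero) and smooth pursuit or VOR (gaze moving with the target) both count as stabilizing, which matches the stable/non-stable dichotomy the abstract describes.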
Investigating face perception during free-viewing in a naturalistic virtual environment. ABSTRACT. Face perception is commonly investigated in standardized lab settings with high experimental control, during which eye movements are generally restricted and the fixated stimuli are predetermined. While faces are considered prevalent and important stimuli (e.g., Wheatley et al., 2011), little research has explored the perception of faces in naturalistic settings. The current study combines high experimental control with natural viewing and movement behavior by investigating face perception in a virtual environment. Our virtual city consists of houses, various background stimuli, and, notably, static and moving pedestrians. Participants freely explore the virtual scene while eye-tracking and EEG data are recorded. We investigate the distribution, duration, and distance of participants' gaze events on faces, as well as their movement paths. Preliminary results indicate large between-subject differences in the number of gazes on the bodies and faces of pedestrians. Additionally, large differences in the subjects’ movement patterns can be observed. The findings of this study will provide insights into face perception in a naturalistic virtual environment. Wheatley, T., Weinberg, A., Looser, C., Moran, T., & Hajcak, G. (2011). Mind perception: Real but not artificial faces sustain neural activity beyond the N170/VPP. PloS one, 6(3), e17960. |
Gaze Aversion in Human-Robot Interaction: Case Studies in Physical and Virtual Settings ABSTRACT. Human Robot Interaction (HRI) is an interdisciplinary research domain that focuses on verbal and nonverbal communication between physical or virtual robotic agents and human interlocutors. As in human-human dyads, gestures and gaze have been primary nonverbal communication modalities in HRI. This study presents our findings from an HRI framework designed and implemented to support the development of applications that employ gaze and other nonverbal modalities for interacting with people. In a series of experiments, we investigated gaze contact and aversion in communication between humans, between humans and physical robots, and between humans and virtual agents. We have been developing the framework to model gaze-mediated communication between two agents, where an agent can be a robot or an avatar, allowing the agent to interact naturally and intuitively with a human user. This study aims to present a snapshot of the state of the art in HRI, the experience gained from our experimental investigations, and items for future work. |
Gaze aversions serve as social signals conveying the performer’s cognitive state ABSTRACT. When engaged in effortful cognitive processing, we often avert our gaze to the periphery. Studies have explained this phenomenon as an attentional mechanism of distraction avoidance. Here we propose that, in addition to its contribution to attentional processes, gaze aversion also serves as a signal in social interaction, conveying information regarding the performer’s cognitive state. As a first step in investigating this hypothesis, we examined how well perceivers infer other people’s cognitive states in social interaction, and how this ability depends on eye movements. In two experiments, participants (N=40 each) watched short (5 s) muted videos depicting individuals during social interactions. Results of the first experiment showed that participants succeeded in identifying when other individuals were engaged in cognitive processing, relative to listening or tapping their feet. Furthermore, participants were more likely to correctly identify an individual as engaged in cognitive processing when this individual was shifting their gaze. In a second experiment, we found that when individuals performed gaze aversions while engaged in an effortful cognitive task, they were rated by others as more concentrated on the task and more likely to provide a correct response. Together, our findings suggest that effortful cognitive processing is communicated via gaze aversions. |
Semantics of gaze: Deciphering the meaning of a listener’s gaze direction, gaze position changes, and blink frequency ABSTRACT. In the present study, we developed a novel methodology to understand the semantics of perceived gaze patterns. Using a qualitative approach, we presented participants with videos showing a person listening to brief (neutral vs. emotional) stories. The eye movements of the listener were systematically manipulated with regard to gaze direction, changes of gaze position, and blink frequency. After each presentation of a subset of these videos, participants were asked how they would verbally characterize the different gaze patterns. By applying semantic categorization procedures, we were able to link particular gaze patterns to distinct semantic categories (e.g., attentiveness, nervousness, empathy, etc.). The resulting exploratory findings are subsequently submitted to rigorous experimental testing. Limitations in the generalizability of the present findings to other situational and social contexts will be critically discussed. |
Looking for speaking: What determines language-specific expressions in motion event descriptions ABSTRACT. This cross-linguistic experimental study examines the relationship between event construal and linguistic expressions using the eye-tracking method. We focus on descriptions of motion events elicited with a picture book from English, Italian and Japanese speakers, as these languages fall into two groups based on the typology of motion expression. Slobin (1991, 1996) presented a modified form of linguistic relativity: “thinking for speaking”. Based on linguistic experiments in various languages, he claimed that the language we learn shapes the way we perceive reality. Our study replicated Slobin’s experimental method and investigated speakers’ eye movements while they described pictures. We examined how a speaker’s construal of an event tends to be reflected in the language used to describe it. We obtained data from nine participants per language and analysed a scene in which “a boy fell off the cliff into the pond”. We found tendencies within each language regarding which path components (DOWN, FROM, INTO) are expressed frequently and how eye movements, such as fixation durations, correspond to them. A comparison of the patterns of eye movements and language expressions revealed which aspects of the relation between perception and speech production are universal and which are individual. |
Silent or Oral Reading in L2: An Eye-Tracking Study ABSTRACT. This study aims to answer two questions: is there any effect of reading modality (oral vs. silent) on L2 text processing, and which type of reading contributes to more successful text processing and translation into L1? According to Hale et al. (2007), reading aloud facilitates understanding of the text, despite imposing a larger processing load. Fuchs et al. (2001) suggested that oral reading fluency reflects overall reading competence. It was also shown that silent reading was stronger for retelling narratives (Schimmel & Ness, 2017). A translation task can be used as a method for checking reading comprehension skills (Karimnia, 2014). In a two-group experimental design, native speakers of Russian (N=20, B2-C1 level of English) read either orally or silently two English texts, estimated the subjective difficulty of each text, and then translated them into Russian. Both texts were of the same length, topic and level of readability (checked via http://readable.com). The eye movements of the participants were recorded (EyeLink 1000 Plus by SR Research). We measured total reading time (TRT), total fixation count (TFC), average fixation duration (AFD), and regression count (RC). The quality of the translations was assessed using Gilmullina’s (2016) test of quantitative analysis. A Mann-Whitney U test showed that reading was significantly slower (TRT: p=0.008; TFC: p=0.027; AFD: p=0.036) when subjects read the text aloud, as opposed to silently. No interaction was found between the quality of the translations and the subjective difficulty of the stimuli. Thus, oral reading slows down text processing without contributing to comprehension. Supported by the research grant no. ID: 92566385 from St Petersburg University |
Scanpath analysis of eye movements during reading in children with high risk of dyslexia ABSTRACT. The study presents a comparison of the global reading processes estimated via scanpath analysis (von der Malsburg & Vasishth, 2011) in Russian schoolchildren with and without reading difficulties. We tested 144 children from grades 1-5 (age range 7-11; 54 girls): 72 children were at high risk of dyslexia according to the Standardized Assessment of Reading Skills (Kornev, 1997), and 72 had no reading difficulties. The children read 30 sentences while their eyes were tracked. We identified five global reading processes that all children engaged in while reading. The processes differed in fixation durations, the probability of rereading single words, and the probability of rereading entire sentences. The comparison between grades and groups revealed that children without reading difficulties progressed quickly and by grade 4 engaged in a fluent, adult-like reading process. Children at high risk of dyslexia started with the beginner reading process, then engaged in the intermediate and upper-intermediate reading processes in grades 2 to 4. They reached the advanced process in grade 5 but rarely adopted the fluent reading process. In sum, the scanpath analysis revealed that children in the high-risk group and typically developing peers adopt similar reading processes, but the former group progresses much more slowly, with a 2-3-year delay. |
Modeling Task-Dependency of Eye Movement during Scene Viewing ABSTRACT. Eye movements during scene viewing reflect how the visual system processes and prioritizes information. The selection of fixation locations is driven by image features as well as by top-down factors and task constraints. Additionally, dynamical aspects such as scan path history and fixation duration influence selection. The SceneWalk model is a biologically inspired dynamical model for predicting fixation sequences on the basis of static saliency maps. In the current work, we added a spatiotemporal likelihood to the existing framework to jointly model fixation durations and location selection. To explore the influence of task and saliency, we investigated two model versions, using either general or task-specific saliency maps as a basis. We fitted model parameters separately for each individual in two guessing and two searching tasks. Parameters were inferred using a fully Bayesian likelihood-based approach. We find that the optimal parameters differ significantly between the tasks. The parameters of our model represent interpretable quantities such as attention span; differences in parameters therefore offer insight into how the visual system adapts to task demands. Posterior predictive checks show that the model can reproduce individual differences in scan path statistics. We also show that dynamic components of eye movements improve model fit more than task-specific saliency does.
Using eye-tracking techniques to identify oculomotor signs of neglect PRESENTER: Marina Shurupova ABSTRACT. A common outcome after a right-hemisphere stroke is unilateral spatial neglect. These patients demonstrate visual attentional deficits toward the contralesional side. The aim of our pilot study was to investigate oculomotor signs of neglect using a new experimental paradigm. Six patients (mean age 54.5±9.8) and twelve age-matched controls participated in the study. The patients were in the subacute phase of stroke (1 hemorrhagic, 5 ischemic) and were diagnosed with neglect by a neuropsychologist. Patients performed an oculomotor task in which they had to select and fixate a target stimulus (a blue star) appearing simultaneously with a distractor (a black dot) to its left or right. Eye movements were recorded at a 250 Hz sampling rate. Patients showed a higher error rate when the target appeared on the left, although most patients self-corrected within two seconds after choosing the distractor. In addition, patients showed longer latencies to targets appearing on the left than on the right. All patients needed more than one fixation to fixate the target, whereas controls needed only one. Our experimental results demonstrate oculomotor signs of neglect. Future studies will add more quantitative results and may be applicable in clinical practice.
Oculomotor Control and Dual-Task Interference ABSTRACT. For a long time, oculomotor control was regarded as largely unaffected by additional actions in other effector modalities. However, recent research has suggested that saccade control - although prioritized over other simultaneous actions - can still exhibit substantial impairments. In the present study, we examined the temporal dynamics of oculomotor performance decrements in dual tasks by applying the psychological refractory period (PRP) paradigm, in which we varied the stimulus onset asynchrony (SOA) between a saccade task and a manual RT task. Across four experiments, we examined the differential effects of task order, task-order instructions, and spatial task compatibility on dual-task performance. Results revealed that performance of both saccade and manual responses suffered at close temporal proximity, indicating structural and content-based interference mechanisms. Structural interference was observed in the form of longer RTs for Task 2 at short SOAs, suggesting a serial response-selection bottleneck; content-based interference emerged as longer RTs for incompatible than for compatible tasks, suggesting mutual crosstalk during serial processing. Based on these results, we reject the notion that oculomotor control is generally able to bypass central processing limitations and instead conclude that saccades are subject to the same sources of dual-task interference as other actions.
Eye Movements in Three-dimensional Multiple Object Tracking ABSTRACT. Eye movements in multiple object tracking (MOT) tasks reflect an observer's attentional processing. Previous studies have revealed two gaze strategies during two-dimensional MOT tasks: a centroid-looking strategy and a target-switching strategy. When tracking several moving targets amidst distractors in an MOT task, observers are more likely to gaze at the central area between targets and frequently switch their gaze back and forth between the center and the targets. However, little research has focused on where observers look and what influences eye movements during three-dimensional multiple object tracking (3D-MOT). The present study records eye movements during a 3D-MOT task based on virtual reality technology, which can closely reflect the interaction between humans and the real world through stereo vision. The aim of the present study is to examine observers' looking strategy in 3D-MOT, how it is affected by the depth of the 3D space, and how it differs from eye behavior in 2D tracking. We postulate that the target-switching strategy will be preferred in 3D-MOT and that observers will switch fixations more frequently to targets at greater depth in the 3D space.
Attentional biases in the size of fixational saccades ABSTRACT. It is well established that spatial attention can bias the direction of fixational saccades. It remains unknown whether and when attention may also bias the size (amplitude) of fixational saccades. To investigate this, we cued attention to one of multiple items in visual working memory while manipulating the spatial demands on attention. In one condition, trials contained two colored tilted bars (presented to the left and right) that were both either near or far from fixation. A color cue (presented during the delay) instructed participants to select either memorandum. Additionally, we included a load-four condition in which items occupied the near and far locations on both sides. Critically, in these trials, direction alone is not sufficient to select the cued memorandum, as there are always two memoranda in the same direction (one near, one far). Consistent with prior work, we confirm that the direction of fixational saccades was robustly biased by the memorized location of the attended memorandum. More importantly, our data reveal that the size of the directionally biased fixational saccades was also modulated by the spatial demands on attention. Specifically, fixational saccades became larger when selecting the far item, but only when direction alone was insufficient for selection.
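The two quantities compared in this study, the direction and amplitude of a fixational saccade, can be computed from its start and end gaze positions. A minimal sketch (the coordinate conventions are assumptions, not taken from the study):

```python
import math

def saccade_vector(start, end):
    """Amplitude (deg) and direction (deg, 0 = rightward, counterclockwise
    positive) of a saccade from start to end gaze position, both given as
    (x, y) pairs in degrees of visual angle."""
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    amplitude = math.hypot(dx, dy)
    direction = math.degrees(math.atan2(dy, dx))
    return amplitude, direction

# A fixational saccade toward an item up and to the right of fixation
amp, ang = saccade_vector((0.0, 0.0), (0.3, 0.4))
print(round(amp, 2), round(ang, 1))  # prints: 0.5 53.1
```

Averaging such direction vectors across trials, and comparing amplitudes between near- and far-cue conditions, yields the directional-bias and size-modulation measures the abstract describes.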
Yarbus in the Age of Webcam Eye-tracking ABSTRACT. iMotions webcam-based eye tracking (WebET), combined with hidden Markov model (HMM) fixation classification, is a potential tool for applied human-behaviour research and for teaching eye tracking to larger cohorts. The present study explores the feasibility of carrying out large-scale, replicable research projects. To verify how relevant the iMotions WebET can be to applied research and teaching, part of the Yarbus (1967) study was replicated with N=10 participants. In an online study, participants were shown ‘The Unexpected Visitor’ in four conditions in a repeated-measures design, each condition asking a different question of the participants. Individual scanpaths and aggregated heatmaps were used for exploratory analysis. As in the seminal work by Yarbus, the present study asked whether WebET can be used to distinguish where people look based on the question asked. Areas of interest (AOIs) were marked and calculated in iMotions to establish best practices for quantifying WebET data. Results show that even with a small sample size, the iMotions WebET combined with HMM fixation classification can accurately distinguish between scanpaths from different conditions. Larger, well-placed AOIs can yield eye-tracking insights helpful in understanding participants' top-down cognitive processing.
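The HMM classifier used by iMotions is proprietary; as an illustration of the underlying problem it solves — labelling gaze samples as fixation or saccade — here is a much simpler velocity-threshold (I-VT) sketch. The threshold value and units are assumptions for illustration, and unlike an HMM this rule-based approach ignores temporal dependencies between states.

```python
import math

def classify_ivt(x, y, t, vel_threshold=30.0):
    """Label each inter-sample interval as fixation (0) or saccade (1).

    x, y: gaze position in degrees of visual angle; t: timestamps in
    seconds; vel_threshold: point-to-point velocity cutoff in deg/s.
    """
    labels = []
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        velocity = math.hypot(x[i] - x[i - 1], y[i] - y[i - 1]) / dt
        labels.append(1 if velocity > vel_threshold else 0)
    return labels

# 100 Hz samples: gaze holds at x=0, jumps 5 deg, then holds again
x = [0.0, 0.0, 0.0, 5.0, 5.0]
y = [0.0, 0.0, 0.0, 0.0, 0.0]
t = [0.00, 0.01, 0.02, 0.03, 0.04]
print(classify_ivt(x, y, t))  # [0, 0, 1, 0]
```

For noisy webcam data, a probabilistic classifier such as an HMM is typically more robust than a fixed threshold, which is one motivation for the approach the abstract mentions.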
Seeing your own webcam image feels distracting, but does not hurt learning: A webcam-based eye-tracking study ABSTRACT. In video-call platforms like Zoom or Microsoft Teams, a thumbnail of one’s own webcam video feed is often visible. In this experiment, we use webcam-based eye tracking to investigate whether this self-view distracts learners and reduces learning from a video lecture. In a within-subjects experiment, 78 participants (Mage = 22.3) watched two video lectures, one with and one without the self-view visible (in counterbalanced order). Webcam-based eye tracking was implemented using the Gorilla Experiment Builder (www.gorilla.sc), and participants took part on their own computers. Participants first completed a 9-point calibration and 5-point validation procedure. They then watched the instructional video (either with the self-view visible or not), rated their experience of learning, and completed a post-test. This procedure was repeated for the other condition. Bayesian statistics were used to analyse the data. Participants felt more distracted and self-conscious in the self-view condition than in the no-self-view condition. However, the conditions were comparable in self-reported mental effort and post-test performance. Analysis of the webcam-based eye-tracking data shows limited effects of the self-view on viewing behaviour. In this presentation, we additionally discuss the quality of webcam-based eye-tracking data for stimuli with very large AOIs.
Eye movements as a measure of higher-level text processing
Symposium Organisers: Jana Lüdtke (Freie Universität Berlin) & Mesian Tilmatine (Freie Universität Berlin)
Discourse, a core aspect of human cognition, remains understudied in the cognitive sciences (Mar, 2018). The use of eye-tracking technologies holds considerable potential for research on higher-level comprehension of written language (Cook & Wei, 2019). Consequently, there has been a trend in recent years to conduct more studies on the higher-level processing of texts with a natural narrative flow, be it in the form of poetry, prose, or even newspaper articles. In this symposium, we will discuss our most recent contributions to this trend, mostly in the form of experimental data and improved or new models.
There are reasons, mostly related to stimulus complexity, why naturalistic texts have traditionally been less prominent in eye-tracking research. The symposium will therefore review possible approaches to the methodological challenges associated with naturalistic text stimuli. In that context, the talks will focus in particular on the role of individual differences in narrative and poetic perception (cf. Mak & Willems, 2019; Graf & Landwehr, 2015; Harash, 2021), as well as on possible ways to measure reading-related cognitive processes such as mental simulation, mind-wandering, immersion, and foregrounding.