ECEM2022: 21ST EUROPEAN CONFERENCE ON EYE MOVEMENTS
PROGRAM FOR TUESDAY, AUGUST 23RD

08:30-09:30 Session 10: Keynote: Fatema Ghasia

Minuscule eye movements play a major role in binocular vision disorders

Fatema Ghasia (Cole Eye Clinic)

My lab's primary focus is to understand the role of abnormal neural circuits in strabismus and amblyopia and to apply novel strategies for their treatment. As a pediatric ophthalmologist, I witness first-hand the problems and nuances associated with diagnosing and treating patients with binocular vision disorders. As an oculomotor scientist, I have discovered and realized the value of obtaining eye movement recordings in these patients. To resolve a desperate need that I experienced as a clinician, I leveraged my role as an eye movement scientist to understand fixation eye movement (FEM) abnormalities as they relate to amblyopia diagnosis and treatment outcomes. We have built a cutting-edge infrastructure for tracking eye and head movements simultaneously with high accuracy and precision in children under different viewing conditions. Over the last several years, we have investigated the utility of eye movement measurements in children with binocular vision disorders. The systematic analysis of eye movement traces obtained in the lab has revealed for the first time several features that can be utilized to detect the presence of amblyopia, its clinical types, and its severity. We have also found that FEM abnormalities correlate with the reduced contrast sensitivity, depth perception, and inter-ocular suppression experienced by these patients. We have also found that assessing FEM characteristics can be a valuable tool to predict functional improvement after patching therapy, and I will present recent data relating to newer dichoptic amblyopia treatments.

09:30-10:00 Coffee Break

Takes place in LT5

10:00-12:00 Session 11A: Symposium: Unstable fixation and nystagmus with a focus on the next generation of researchers

Unstable fixation and nystagmus with a focus on the next generation of researchers

Symposium Organisers: Frank A. Proudlock (University of Leicester), Mervyn G. Thomas (University of Leicester), Jonathan T. Erichsen (Cardiff University)

This symposium aims to better understand the continuum of abnormal fixational eye movements, from unstable gaze in paediatric eye diseases up to more overt involuntary oscillations of the eyes in the form of nystagmus.

The session summarises the effects of unstable fixation and nystagmus on spatial and temporal aspects of functional vision. The symposium will also review the structural anomalies associated with disrupted foveal development, especially in relation to genetic causes of nystagmus and outline the impact of eye oscillations on clinical electrophysiological testing of underlying retinal abnormalities.

The session has been designed to provide an opportunity for an up-and-coming generation of researchers in the field of unstable fixation and nystagmus to present their work.

Location: LT1
10:00
Fixation eye movements in pediatric eye diseases

ABSTRACT. Dr. Fatema Ghasia is a clinician-scientist with expertise in pediatric ophthalmology and binocular vision disorders and research interests in systems neuroscience, with emphasis on human and primate ocular motor control. She is Associate Professor and directs the Vision Neurosciences and Ocular Motility Laboratory at Cole Eye Institute, Cleveland Clinic. One of the main emphases in the laboratory is studying the visual sensory and oculomotor effects of abnormal visual experience in early life that results in amblyopia and strabismus, and investigating treatment effectiveness. We have shown that the fixation instability in amblyopia arises from nystagmus or alterations in physiologic fixation eye movements (FEMs). The systematic analysis of FEM traces has revealed several features that can be utilized to detect the presence and severity of amblyopia and the angle and control of strabismus. We have found that FEM abnormalities correlate with the reduced contrast sensitivity, depth perception, and inter-ocular suppression experienced by these patients. We have also found that assessing FEM characteristics can be a valuable tool to predict functional improvement after amblyopia treatment and strabismus repair. She will share her clinical and scientific interests and her journey to date and highlight the importance of studying eye movements in a variety of childhood eye diseases.

10:20
Accuracy and precision of fixation is correlated with gaze angle

ABSTRACT. Purpose – In infantile nystagmus, the null zone tends to favour better oculomotor control even when it is eccentric. We investigated whether gaze position affects fixation in typical participants.

Methods – Nine emmetropes fixated vanishing optotype Landolt C targets at 4m, for 7 gaze angles (±45°; 15° apart) while performing a resolution threshold task. Eye movements were recorded at 1000Hz. Eye position accuracy and precision were derived from a bivariate probability density function. The isocontour surrounding the gaze positions with the highest 68% probability density was selected for further analysis. The length of the vector from the target position to the isocontour centre measured accuracy (perfect fixation is zero), and a larger contour area reflected lower precision.

Results – Mean eye position accuracy had a significant positive correlation with gaze angle [r(2) = 0.973, p = 0.027], as did precision [r(2) = 0.990, p = 0.010]. Mean contour shape (min/max diam.) had a significant negative correlation (less circular) with gaze angle [r(2) = -0.998, p = 0.002].

Conclusion – Fixation performance is progressively less accurate and precise with increasing eccentric gaze. Just as in people with nystagmus where nystagmus worsens outside the null zone, fixation appears to become more unstable as gaze shifts away from primary position.
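As a sketch of the accuracy and precision measures described in the Methods above, the following Python function derives them from raw gaze samples. It is only an illustration under a Gaussian approximation (the 68% bivariate contour ellipse), not the authors' exact density-isocontour pipeline; the function and variable names are hypothetical.

```python
import numpy as np

def fixation_metrics(gaze_xy, target_xy, p=0.68):
    """Accuracy and precision of fixation from gaze samples (degrees).

    Accuracy: length of the vector from the target to the centre of
    the gaze distribution (perfect fixation is zero). Precision: area
    of the contour enclosing the highest-density fraction p, here the
    bivariate contour ellipse area under a Gaussian assumption.
    Shape: min/max axis ratio of that ellipse (1 = circular).
    """
    gaze = np.asarray(gaze_xy, dtype=float)
    centre = gaze.mean(axis=0)
    accuracy = np.linalg.norm(centre - np.asarray(target_xy, dtype=float))
    cov = np.cov(gaze, rowvar=False)
    k = -2.0 * np.log(1.0 - p)          # chi-square quantile, 2 d.f.
    area = np.pi * k * np.sqrt(np.linalg.det(cov))
    evals = np.linalg.eigvalsh(cov)     # ascending eigenvalues
    shape = np.sqrt(evals[0] / evals[1])
    return accuracy, area, shape
```

A perfectly circular gaze distribution gives a shape ratio of 1; smaller values indicate a less circular (more elongated) contour.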

10:40
Investigating "Time to See" in infantile nystagmus

ABSTRACT. Infantile nystagmus (IN) is characterised by a continuous, involuntary oscillation of the eyes. Individuals with the condition may anecdotally report feeling slow to make visual discriminations, although the exact nature of this ‘time to see’ phenomenon has not yet been established. We hypothesise that the continuous oscillation (c. 3-4Hz) of the eyes in those with IN, which introduces an additional temporal component to their vision whereby their foveas are often moved off-target, is the cause for this increased ‘time to see’.

We are working to characterise ‘time to see’ in IN by using the novel approach of determining and comparing the presentation duration required by typical and IN participants to accurately resolve optotype targets. We will present preliminary results from our ongoing study of duration thresholds in IN, investigating the additional hypothesis that, within as well as across participants, longer duration thresholds will be associated with increasing nystagmus intensity (i.e. amplitude x frequency). Such a relationship could have potential applications in outcome measures for the efficacy of treatments which aim to reduce the intensity of nystagmus eye movements and their impact on visual perception.
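The nystagmus intensity measure mentioned above (amplitude x frequency) can be illustrated with a minimal estimator. This is a hypothetical sketch, not the study's analysis pipeline: it assumes the dominant oscillation frequency can be read off the power spectrum and the amplitude taken as the trace's peak-to-peak excursion.

```python
import numpy as np

def nystagmus_intensity(position, fs):
    """Estimate nystagmus intensity (amplitude x frequency) from a
    1-D eye-position trace (degrees) sampled at fs Hz.

    Hypothetical estimator: the dominant oscillation frequency is the
    peak of the power spectrum (excluding DC), and the amplitude is
    the peak-to-peak excursion of the trace.
    """
    x = np.asarray(position, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    f_dominant = freqs[np.argmax(power[1:]) + 1]   # skip the DC bin
    amplitude = x.max() - x.min()                  # peak-to-peak, degrees
    return amplitude * f_dominant
```

For a 3 Hz oscillation with a 4-degree peak-to-peak amplitude, this returns an intensity of roughly 12 deg/s, in line with the c. 3-4 Hz oscillation rates cited above.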

11:00
Phenotyping in Infantile Nystagmus

ABSTRACT. Infantile nystagmus is characterised by the involuntary rhythmic oscillation of the eyes. It often arises from mutations in genes expressed within the developing retina and brain. Genes implicated include those involved in melanin biosynthesis, solute carriers, transcription factors, G-proteins and ion channels. Afferent system defects are common and are visualised using high resolution imaging techniques such as optical coherence tomography (OCT). Foveal hypoplasia represents arrested retinal development with varying grades of severity. The relationship between genotype and phenotype remains unclear. In this talk, we will share the spectrum of eye movement disorders seen in infantile nystagmus, the techniques used to characterise foveal developmental defects and the relationship to genotype. We will use data from twins with nystagmus to explore the variability and penetrance of the nystagmus phenotype in shared genotypes. To understand the relationship between structure and function, we will correlate vision with the severity of the arrested foveal development and with genotype.

11:20
The fovea is horizontally elongated in infantile nystagmus

ABSTRACT. Infantile nystagmus (IN) develops in the first few months of life, prior to the appearance of the fovea. Despite the constant eye movements, visual perception in IN is usually stable. We hypothesised that the foveal pit would be horizontally elongated in adults with IN, corresponding to the streak of the retina over which visual attention constantly oscillates. Horizontally and vertically oriented foveal images were acquired with a long-wavelength (λc 1040 nm) optical coherence tomography (OCT) system from 15 adults with idiopathic IN or IN associated with conditions not known to affect the fovea, and from 15 controls (age, sex, and ethnicity matched). Horizontal and vertical foveal pit diameters were calculated.

Foveal shape factor (vertical:horizontal pit diameter ratio) was significantly lower (more horizontal) in participants with IN, as compared to controls (0.88 vs. 0.96, p = 0.02, BF10 = 2.05). These results suggest that early-onset nystagmus may have a direct impact on foveal development, since IN typically develops before the appearance of the fovea. The findings have important implications for understanding the relationship between eye movements and visual development.

11:40
Abnormal electroretinography in albinism and idiopathic infantile nystagmus
PRESENTER: Zhanhan Tu

ABSTRACT. Albinism and idiopathic infantile nystagmus (IIN) are two common forms of infantile nystagmus, an involuntary oscillation of the eyes, and are commonly associated with retinal diseases. Severe morphological abnormalities of the fovea, optic nerve head (ONH) and peripapillary retinal nerve fibre layer (ppRNFL) have been confirmed in albinism and IIN using optical coherence tomography (OCT). Full-field electroretinography (ffERG) can be used to diagnose abnormal retinal responses to photopic and/or scotopic light stimulation. In this talk, our primary aim is to determine whether ffERG responses are normal in albinism and IIN when measured in a large sample of adults using a robust methodology. A secondary aim is to investigate the effect of nystagmus on ffERG responses. Sixty-eight participants with albinism, 43 with IIN and 24 controls were recruited for comparing ffERG responses. Within-subject comparisons of ffERG responses when nystagmus was more or less intense were performed on 18 participants. Overall, our study found that individuals with IIN and albinism have abnormal ERG responses under photopic conditions. Nystagmus can negatively affect ERG recording and lower the ERG amplitude under scotopic conditions.

10:00-12:00 Session 11B: Eye movement control in reading I
Location: LT2
10:00
Understanding the visual constraints on lexical processing: New empirical and simulation results

ABSTRACT. Word identification is slower and less accurate outside central vision, but the precise relationship between retinal eccentricity and lexical processing is not well specified by models of either word identification or reading. In a seminal eye-movement study, Rayner and Morrison (1981) found that participants made remarkably accurate naming and lexical-decision responses to words displayed more than three degrees from the center of vision—even under conditions requiring fixed gaze. However, the validity of these findings is challenged by a range of methodological limitations. We report a series of gaze-contingent lexical-decision and naming experiments that replicate and extend Rayner and Morrison’s study to provide a more accurate estimate of how visual constraints delimit lexical processing. Simulations were conducted using the E-Z Reader model (Reichle et al., 2012) to assess the implications for understanding eye-movement control during reading. Augmenting the model’s assumptions about the impact of both eccentricity and visual crowding on the rate of lexical processing produced good fits to the observed data without impairing the model’s ability to simulate benchmark eye-movement effects. The findings are discussed with a view towards the development of a complete model of reading.

10:20
Theorizing dynamic adjustment of saccade lengths in reading and dual-stage progression of visual word recognition

ABSTRACT. Prevalent theories of eye movement control in reading assume discrete selection of a saccade target word. Similarly, prevalent dual-route models of visual word recognition assume a dichotomy: familiar words are recognized as wholes, whereas unfamiliar words are decoded by a grapheme-phoneme conversion mechanism. However, these fundamental assumptions have seldom been the subject of empirical testing.

Concerning the discrete saccade target selection process, analysis of landing position distributions in high skipping probability conditions found no evidence for a bimodal landing position distribution whose peaks would correspond to words n+1 or n+2. Instead, a dynamic adjustment model computing saccade length on the basis of the lengths of words n and n+1 was sufficient to explain the word length effect on landing positions and on refixation and skipping probabilities.

Concerning word recognition, analyses of eye movement measures during reading revealed that the word frequency effect generally precedes the word length effect, which was stronger for words of lower frequency. This pattern of results suggests that activation of orthographic word representations precedes decoding and that decoding is faster for more activated words. Thus, the processing may be better characterized as dual stages than dual routes.

The implications of the findings for the architecture of computational models of visual word recognition and eye movement control in reading will be discussed.

10:40
Print size as an explanation for inter-language differences in eye-movement behavior during reading: Empirical and neurocomputational evidence

ABSTRACT. Readers' eye-movement behavior is highly stereotyped. However, major differences have been reported between Western alphabetic languages and Eastern ideographic languages. In the framework of top-down models of eye-movement control during reading, these differences are commonly attributed to inter-word spacing. The assumption is that Chinese/Japanese scripts, which lack inter-word spaces, are read using less efficient word segmentation and/or different saccade-targeting strategies. Here we provide a much simpler explanation for inter-language differences, namely that they result from print size being two-to-four times greater in unspaced-language studies. Character size matters, but this tends to be ignored. Using a meta-analysis of dozens of studies and languages, we first found that benchmark eye-movement patterns, notably the Preferred Viewing Location (PVL) effect, only very mildly differ between languages when words are matched in angular extent and eye-movement behavior is measured in degrees of visual angle, rather than in letters as classically done. We then showed that MASC, our illiterate Model of Attention in the Superior Colliculus, predicts readers' eye-movement behavior in spaced and unspaced languages, simply using print size as a predictor. Our findings evidence universal visuo-motor principles of eye-movement guidance that generalize across languages and writing systems, while raising crucial theoretical and methodological issues.

11:00
Eye movement control during reading and skimming: Effects of word length

ABSTRACT. The effects of word length on word skipping and saccade targeting during reading for comprehension are well established. The present (OSF preregistered) study provides an experimental test of how reading goals (reading for comprehension vs. skimming for gist) modulate effects of word length. Critical short (3-4 letter) and long (8-9 letter) words were embedded into 96 sentence frames. The study had a within-participants and within-items 2 (reading goal: read, skim) × 2 (word length: short, long) design (64 participants). For both reading and skimming, short words were significantly more likely to be skipped, less likely to be refixated, and had shorter gaze durations compared to long words. Crucially, there were significant interactive effects of reading goal and length for word skipping and initial landing position. Short words were especially likely to be skipped during first-pass reading when skimming compared to reading for comprehension. In addition, initial first-pass fixations landed further into long words during skimming compared to reading. These results indicate that the visuo-oculomotor mechanisms underlying which words are fixated, and where words are first fixated, are modulated by readers' goals. The theoretical implications for the flexibility of the mechanisms underlying eye movement control during reading will be discussed.

11:20
A cross-linguistic study of spatial parameters of eye-movement control during reading

ABSTRACT. Current theories of oculomotor control in reading differ in their accounts of saccadic targeting. Some argue that targets for saccades are solely selected on the basis of the rapidly changing sensory input, while others additionally allow for the reader's experiential biases to modulate saccade lengths. We investigated this debate using cross-linguistic data on text reading in 12 alphabetic languages from the MECO database. These languages vary widely in their word length distributions, suggesting that expected word lengths and corresponding biases towards optimal saccade lengths may also vary across readers of these languages. Regression analyses confirmed that readers of languages with longer words (e.g., Finnish) rather than shorter words (e.g., Hebrew) landed further into the word, even when sensory aspects relevant for saccade planning (e.g., word lengths) were controlled for. In the prevalent saccade type, a one-letter difference in mean word length between languages came with a quarter-letter difference in initial landing position and saccade length, and a 1.5% decrease in refixation probability. Interpreted in the Bayesian framework, the findings highlight the relevance of global language-wide settings for accounts of spatial oculomotor control and lead to testable predictions for further cross-linguistic research.

11:40
Individual Differences and the Impact of Word Frequency on Eye Movements during Reading

ABSTRACT. We examined the reading skill of 100 participants across a battery of individual differences tests and recorded their eye movements whilst they read neutral sentence frames containing either high- or low-frequency words. Shorter fixation times and higher skipping rates were observed for high-frequency compared to low-frequency words. High scores in some tests (reading ability, print exposure and spelling) were associated with shorter gaze durations than low scores, and a subset of these tests (lexical knowledge and reading ability) were shown to influence the relationship between word frequency and gaze durations, such that low-frequency words impacted fixation times and skipping rates less for participants scoring high on these tests. Next, common latent variables within the tests were identified using a PCA. The factor lexical proficiency was associated with faster reading times and a reduced impact of low-frequency words, as was a second latent variable, indicative of overall processing speed, in total sentence reading times. Our PCA also indicated that commonly used comprehension measures did not load on the same latent factor, questioning whether they are indeed measuring the same reading skill.
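The latent-variable step described above can be sketched as a minimal PCA on z-scored test scores. This illustration uses a plain SVD and is not the authors' analysis pipeline; the function name and data are hypothetical.

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Minimal PCA via SVD on z-scored test scores.

    Returns component scores (participants x components) and loadings
    (tests x components). A sketch of a latent-variable analysis, not
    the authors' exact pipeline.
    """
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # z-score each test
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)   # principal axes
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components].T
    return scores, loadings
```

Tests that measure the same underlying skill load heavily on the same component, which is how one can check whether different comprehension measures share a latent factor.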

10:00-12:00 Session 11C: Decision-making
Location: LT8
10:00
The contribution of visual conduction delay to saccadic reaction time

ABSTRACT. In fast sensory-driven action decisions, reaction time is conceived as the sum of three components: sensory conduction, decision and motor execution. Over the last 10 years, we have described a method to estimate the visual delay and decision time from reaction time distributions that does not rely on modelling assumptions, but only on behavioural data and logic (Bompas, Campbell & Sumner, 2020, Psychological Review, 127(4), 542-561). Here we gather 12 datasets from multiple labs to outline the defining properties of conduction delay in saccadic responses. We provide a first systematic description of the effect (or lack thereof) of well-documented factors, such as visual properties (contrast and colour) and speed-accuracy trade-off. Our work also documents robust individual differences in sensory conduction delays over 70 participants. Our conclusions are contrasted with those reached from simple decision models, such as the drift diffusion model and the linear ballistic accumulator, and highlight the need to better account for the influence of bottom-up signals and their dynamics in saccade generation.

10:20
Uncertainty driven gaze selection

ABSTRACT. The amount of visual information available in the environment is exorbitant and redundant. For real-time interaction with the world, embodied agents must select where and when to sample the information. This process is crucial for the well-being of animals, as ultimately, it can condition their survival. Generally, this sampling process considers various aspects: function, external state, internal state, and action motor costs. Here we focus on the agent's internal state, particularly on the uncertainty of its representations. We postulate that those intrinsically motivated to be curious and decrease their uncertainty learn more efficiently than those who follow other motivations. To test this hypothesis, we consider an agent freely moving in a domestic virtual environment while learning efficient representations of its visual input. We compare the agent's model representation quality, coverage area, and learning efficiency when the agent is curious and selects gazing points to reduce the uncertainty of model parameters against baselines including random and bottom-up gaze selection strategies. We show that curiously sampling the environment improves the overall performance, but not equally for all uncertainty criteria. This work directly relates to the development of artificial systems, e.g. robots or avatars, and gives new insights into possible visual attention and eye movement mechanisms.

10:40
Motivation by reward increases performance beyond the speed-accuracy tradeoff by improving distractor suppression

ABSTRACT. Saccadic selection is characterized by a tradeoff between speed and accuracy. Speed or accuracy of the selection process can be affected by higher-level factors, for example expecting a reward, obtaining task-relevant information, or seeing an intrinsically relevant target. Recently, it has been shown that motivation by reward can simultaneously increase speed and accuracy, thus going beyond the speed-accuracy tradeoff. Here, we compared the motivating abilities of monetary reward, task-relevance and image content to simultaneously increase speed and accuracy of saccadic eye movements. We used a saccadic distraction task that required suppressing a distractor and selecting a target. Across different blocks, successful target selection was followed either by (i) a monetary reward, (ii) obtaining task-relevant information, or (iii) seeing the face of a famous person. Each block additionally contained the same number of irrelevant trials lacking these consequences, and participants were informed about the upcoming trial type. We found that only motivation by reward simultaneously increased speed and accuracy of the eye movement response. This was achieved by faster distractor suppression. Task-relevance increased speed but decreased selection accuracy, whereas post-saccadic vision of a face affected neither speed nor accuracy, suggesting that image content does not affect saccadic selection via motivational mechanisms.

11:00
Decision making, reward and eye movements

ABSTRACT. It has been reported that evidence used to support the decision to saccade to a target has a direct impact on that movement (McSorley & McCloy, 2009; McSorley et al., 2014). The path of a saccade has been found to deviate away from a non-selected target more as supporting evidence for the selected target improves. However, it is unclear what role reward plays in this process. To examine this, participants were asked to choose between two potential saccade targets as indicated by a briefly presented central motion coherence patch whose motion largely moved toward one or the other. They were rewarded with points for a correct choice and punished with points taken away for an incorrect one. On occasion they could opt for a safe saccade target choice which gave a reward of a lower point value. It was found that performance improved as motion coherence levels became higher, saccade latency decreased correspondingly, and the number of opt-out selections decreased. This shows that as the evidence supporting choice increases, the task becomes easier and participants' confidence increases (i.e., opt-out selections dropped). Overall, saccade trajectories were found to deviate away from the non-selected target, showing that it had been inhibited in the target selection process. However, as motion coherence (the supporting evidence) increased, the saccade path was not found to deviate further away from the non-selected target. Thus, while the change in evidence supporting choice clearly influenced performance and eye movement control in terms of speed of response (percentage correct increased while opt-out choices and saccade latencies decreased), it did not impact the spatial control of the saccade.
We suggest that the reward associated with the choice, which does not change, rather than the choice evidence itself, is the key factor impacting the inhibition of the non-selected target, producing equivalent deviations in the path of the target-directed saccade.

11:20
What drives pupil dilation during decision making: surprise or uncertainty?

ABSTRACT. Pupil dilation has previously been linked to both environmental uncertainty and surprise. Importantly, in most cases these variables are correlated: more uncertain environments cause more surprising events, and in turn, surprise per se leads to uncertainty. We conducted two pupillometric studies to disentangle the effects of these factors on pupil dilation. In a probabilistic reversal task, participants engaged in a guessing game on a computer, in which a fictional actor repeatedly hid a stone in one of his hands, and participants had to guess the location of the stone to attain reward. One of the locations was more often rewarded (e.g., the stone was in the left hand in 85% of cases) and the identity of this advantageous option was switched regularly (reversal). In Study 1, we varied uncertainty by changing the reward probability linked to the advantageous option (e.g., 65% vs. 85%) and used a Bayesian model to estimate when participants detected a reversal (i.e., surprise). In Study 2, we investigated pupil dilation after giving information about the preferred location in a specific manner, which enabled us to disentangle the effects of uncertainty and surprise. Our results show that both uncertainty and surprise affect pupil size, but their effects can be dissociated.
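A Bayesian model of the kind mentioned for Study 1 can be sketched as an observer that tracks the probability that one option is currently advantageous, allows for reversals through a hazard rate, and quantifies surprise as the negative log probability of each observed outcome. This is a hypothetical illustration, not the authors' model; the parameter values and names are assumptions.

```python
import math

def bayes_reversal(outcomes, p_good=0.85, hazard=0.05):
    """Hypothetical Bayesian observer for the reversal task.

    Tracks the posterior probability b that option A is currently the
    advantageous one (rewarded with probability p_good), allowing a
    reversal with probability `hazard` on every trial. outcomes[t] is
    1 if choosing A was rewarded on trial t, else 0. Surprise is the
    negative log of the predicted probability of the observed outcome.
    """
    b = 0.5
    beliefs, surprises = [], []
    for o in outcomes:
        # Predicted probability that choosing A is rewarded
        p_pred = b * p_good + (1 - b) * (1 - p_good)
        p_obs = p_pred if o == 1 else 1 - p_pred
        surprises.append(-math.log(p_obs))
        # Bayes update on the observed outcome
        like_a = p_good if o == 1 else 1 - p_good
        like_b = (1 - p_good) if o == 1 else p_good
        b = b * like_a / (b * like_a + (1 - b) * like_b)
        # Account for a possible reversal before the next trial
        b = b * (1 - hazard) + (1 - b) * hazard
        beliefs.append(b)
    return beliefs, surprises
```

In such a model, a reversal is detected when the belief crosses 0.5, and the trial-by-trial surprise values can then be regressed against pupil size.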

11:40
Error inconsistency does not generally inhibit saccadic adaptation

ABSTRACT. Previous studies on adaptation of reaching movements observed that increased error inconsistency reduces the error sensitivity: the ratio of adaptive change to error size. Similar results were obtained in saccade adaptation, raising doubts about the suitability of classical linear models of saccade adaptation.

We measured the gain of visually guided, horizontal saccades in 300 training trials with intrasaccadic target shifts (ITS) followed by 300 washout trials. The adaptation dynamics were compared between two conditions (consistent/inconsistent) in which the amplitude of the ITS formed a fixed/variable percentage of the primary target step. The mean ITS was identical in both conditions. The inter-trial standard deviation of the postsaccadic visual error was twice as large in the inconsistent as in the consistent condition.

The total adaptive changes during training or washout did not depend on error inconsistency. Initial adaptation speed was lower with inconsistent ITS. However, the effect on adaptation speed occurred only during amplitude reduction and not during enlargement or washout. It was also not sufficient to induce a significant effect on the adaptation time constant.

These results corroborate the linearity of saccade adaptation in that the mean error is the main factor determining the total adaptive change, independent of error consistency.
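The classical linear model referred to above can be sketched as a gain-update rule in which the adaptive change on each trial is proportional to the post-saccadic error, the proportionality constant playing the role of error sensitivity. This is a minimal illustration under assumed parameter values, not the authors' fitted model.

```python
def simulate_adaptation(n_trials, its_fraction, target_step=10.0,
                        error_sensitivity=0.02, gain0=1.0):
    """Classical linear model of saccadic gain adaptation (a sketch
    under assumed parameters, not the authors' fitted model).

    On each trial the eye lands at gain * target_step; the intrasaccadic
    target shift (its_fraction of the primary step) creates a normalised
    post-saccadic error, and the gain changes in proportion to it.
    """
    g = gain0
    gains = [g]
    for _ in range(n_trials):
        landing = g * target_step
        final_target = target_step * (1.0 + its_fraction)
        error = (final_target - landing) / target_step  # normalised error
        g = g + error_sensitivity * error               # linear update
    gains.append(g) if False else gains.append(g)
    return gains
```

With a -30% intrasaccadic shift the gain converges geometrically towards 0.7: in this linear scheme the mean error, not its consistency, determines the asymptotic adaptive change.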

12:00-13:00 Lunch Break

Bennett Lower Ground Lobby

13:00-15:00 Session 12A: Visual search
Location: LT1
13:00
Age-related changes in oculomotor indices of top-down selection during visual search

ABSTRACT. Ageing has been associated with declines in the speed and accuracy of visual search. These have been attributed to reductions in processing speed, sensory acuity and top-down selection. The effect of these factors on oculomotor sampling, however, remains poorly understood. The current study used one or two cues to manipulate the relevance of subsets of coloured stimuli during search in young (YA) and older adults (OA). Displays contained an equal number of red and blue Landolt stimuli. Targets were distinguished from distractors by a unique orientation and observers reported the direction of the gap on each trial. Single-target cues signalled the colour of the target with 100% validity. Dual-target cues indicated the target could be present in either coloured subgroup. The results revealed reliable group differences in the benefits associated with single- compared to dual-target cues. On single-target searches, OA made significantly more saccades than YA to stimuli in the uncued colour subset. Comparisons of z-transformed latencies also revealed a smaller reduction in the time between initial saccades and target fixations on single- compared to dual-target searches. These results support an age-related decline in the ability to restrict oculomotor sampling to a subset of relevant objects during visual search.

13:20
Efficient eye movements during search for an object, inefficient eye movements during search for a feature

ABSTRACT. Some models describe human search as optimal, others as a stochastic process. Across seven experiments, healthy observers searched through line segments, computer icons, mosaic patterns, polygons, and pens. In all experiments, the search array was split into easy and hard sides. The target was visible using peripheral vision on the easy side, but not the hard, such that only eye movements towards the hard side provided new information. An efficient strategy (that is, fixating the hard side) is no more or less difficult to implement across the stimulus sets. Nonetheless, strikingly different patterns of results emerged across different types of stimuli, which hinged largely on how the target of search was defined. Searching for an object oriented in a particular direction produced highly variable efficiency that, on average, did not differ from what would be expected from a stochastic strategy. Searching for a specified object produced far more uniform and efficient search behaviour. The results demonstrate that changes to seemingly irrelevant surface properties of the task can drastically alter measured strategy and performance. Moreover, searching for simple features is a useful and common laboratory task, but it may not always be representative of search for objects.

13:40
Categories of eye movement errors and their relationship to strategy and performance

ABSTRACT. Visual search is a complex task involving perception, attention, decision, working memory, and strategy. Eye movements are a rich source of data that can reveal the complex interplay of these different components in determining the speed and accuracy of search performance. We analysed pre-existing data, focusing on how inefficient eye movements (defined as those directed to locations that could have been evaluated using peripheral vision) correlated with other identifiable eye movement errors such as revisiting previously fixated areas and “look-but-fail-to-see” (LBFTS) errors, in which participants fixated the target but then reported it absent. All three types of eye movement errors contributed to reaction time, and inefficient eye movements were independent of LBFTS errors. We also categorised scan-paths according to whether participants implemented obvious eye-movement “routines” (e.g. “reading” left to right, top to bottom), or appeared haphazard. There were large individual differences in the extent to which participants used routines. Routines were associated with slower search overall, and a higher prevalence of LBFTS errors. The relationships between strategies, identifiable eye movement errors, and search performance not only shed light on the interplay of different components of search, but also have implications for applied contexts involving information gathering and situational awareness.

14:00
Does pre-crastination explain why some observers make sub-optimal eye movements in a visual search task?

ABSTRACT. A large proportion of variation in eye movements comes from individual differences. Many participants consistently move their eyes to locations that can be easily ascertained to neither contain the target, nor to provide any new information about the target’s location. Others engage in near-optimal search, executing eye movements to locations where central vision is most needed. In a pre-registered report, we test the hypothesis that inefficient search may represent a specific example of a larger tendency towards pre-crastination: starting sub-goals of a task before they are needed, and in so doing, spending longer doing the task than necessary. Participants perform two tasks: picking up two buckets and bringing them back, and searching for a line segment. Pre-crastination is defined as consistently picking up the closer bucket first, versus the more efficient strategy of picking up the farther bucket first. Search efficiency is the proportion of fixations directed to more cluttered regions of the search array. 146 participants have completed the experiment to date. We will reach the planned sample size of 200 by summer and will definitively address the hypothesis. Additional personality inventories will provide exploratory insights into individual differences in eye movement strategies.

14:20
Developing a collaborative framework for naturalistic visual search

ABSTRACT. Interest in naturalistic visual search can be seen in a range of disciplines, tasks and settings, whether it is to determine where individuals look when driving, to find lost keys, or to maximise detection accuracy of tumours in x-rays. Yet much of this research has been deployed via screen-based experiments, which may not directly assess the true behaviour carried out in these tasks. Our preliminary work aims to combine a series of visual search tasks to develop a ‘naturalistic search task battery’ which will allow us to understand search behaviour in more real-world contexts. Participants’ behaviour and eye movements are recorded during the tasks. The tasks include a bookcase search where we explore top-down and bottom-up mechanisms by comparing target-absent and target-present trials, and different levels of heterogeneity in the arrangement of distractors. We also investigate feature and conjunction searches and the effect of set size using Lego search tasks. Finally, look-ahead fixations and strategy selection are investigated in a Lego building task and by making puzzles with or without a template. The aim is to form an open-source, replicable and standardised set of tasks which can be used in a wide range of settings.

13:00-15:00 Session 12B: Reading development
Location: LT2
13:00
Seven Years Later – Executive Functioning Predicts the Development of the Perceptual Span during Reading

ABSTRACT. The perceptual span indicates how much visual information can be processed within a single fixation. Here, we present data from two new waves of our longitudinal developmental study of the perceptual span during reading with more than 100 primary-school students, extending the data-collection period from first grade up to middle school. Overall, the size of the perceptual span did not significantly increase but rather stabilized during middle school, remaining at the level of sixth-graders. The Matthew effects we reported earlier for early reading development resulted in stable and pronounced inter-individual differences at later grades for the high-level measures reading rate and perceptual span, whereas low-level oculomotor measures such as fixation duration and saccade length revealed compensatory patterns. Measures of executive functioning predicted reading performance seven years later: children who initially performed above average in both early reading and executive functioning ultimately developed a much larger perceptual span and a higher reading rate compared to their initially below-average performing peers. This suggests that (in combination with linguistic skills) the efficient operation of executive functions such as shifting of attention, updating of working memory, and inhibition of pre-potent responses qualifies as a determinant of the development of the perceptual span.

13:20
The Importance of the First Letter in Children’s Parafoveal Pre-processing in English: Is it Phonologically or Orthographically Driven?

ABSTRACT. For both adult and child readers of English, the first letter of a word plays an important role in lexical identification. Using the boundary paradigm during silent sentence reading, we examined whether the first-letter bias in parafoveal pre-processing is phonologically or orthographically driven, and whether this differs between skilled adult and beginner child readers. Participants read sentences which contained either: a correctly spelled word in preview (identity; e.g., circus); a preview letter string which maintained the phonology, but manipulated the orthography, of the first letter (P+ O- preview; e.g., sircus); or a preview letter string which manipulated both the phonology and the orthography of the first letter (P- O- preview; e.g., wircus). There was a cost associated with manipulating the first letter of the target words in preview, for both adults and children. Critically, during first-pass reading, both adult and child readers displayed similar reading times for P+ O- and P- O- previews. This shows that the first-letter bias is driven by orthographic encoding, and that the first letter’s orthographic code in preview is crucial for efficient, early processing of phonology.

13:40
The effect of relevance in children’s reading of science texts

ABSTRACT. In the present study, 11-12-year-old Finnish children (N=34) read science texts designed for their reading level while their eye movements were registered, as part of a larger project aimed at strengthening the scientific literacy of children. Prior to reading, the children were asked a question related to the topic of the texts, and instructed that they would need to write a short essay to answer the question. The texts contained both relevant and irrelevant parts with regard to the question. During first-pass reading, the probability of making a regression within a sentence was higher for the relevant sentences. This was mirrored in the first-pass rereading duration. Furthermore, the probability of making a regression out of the sentence was higher for relevant than irrelevant sentences. Faster reading manifested in shorter first-pass reading and rereading durations and a higher probability of intra-sentence regression, but not in differences in later rereading. The results indicate that 11-12-year-old readers are sensitive to relevant information. Furthermore, there were hints of strategic reading, as the participants went back in the text when they encountered relevant sentences. This is likely due to the need to integrate the relevant information into the memory representation of the text.

14:00
Children's processing of written irony: An eye-tracking study

ABSTRACT. Ironic language is used frequently in communication. However, it is challenging for many, such as children, to understand. Comprehending irony is considered a major milestone in the development of children’s social cognition, as it requires inferring the intentions of the person who is being ironic. However, theories of irony comprehension turn a blind eye to developmental changes. The present study examined how children process and comprehend written irony in comparison to adults. Seventy participants took part in the study (35 10-year-old children and 35 young adults). In the experiment, participants read ironic and literal sentences embedded in story contexts while their eye movements were recorded. They also responded to a text memory and an inference question after each story, and their levels of reading and empathy skills were measured. Results showed that comprehending ironic stories was harder for both age groups in comparison to literal stories, but the effect was larger for children. Readers took longer reading ironic than literal passages, and children already showed adult-like reading time patterns while resolving ironic meaning. Children’s higher empathy skill was related to better irony comprehension and less rereading of ironic phrases. The results have implications for current theories of figurative language comprehension.

14:20
Concurrent and predictive validity of reading assessment by eye tracking and machine learning

ABSTRACT. We present a study on the concurrent and predictive validity of Lexplore - a fast, objective, and accurate method for reading assessment based on eye tracking and machine learning.

Lexplore was incorporated into the standard assessment battery in grades 1-6 at elementary schools in California (n = 1,484 students). The results from Lexplore in fall and spring were compared to i-Ready (a comprehensive benchmark widely used in the US), oral reading fluency (ORF) scores, and the California end-of-year state tests (CAASPP) for grades 3-6. Concurrent and predictive validity were examined through correlational analyses, receiver operating characteristic (ROC) analyses, and classification accuracy statistics. The results show that Lexplore has high concurrent validity, as the results are very similar to the analysis of ORF (r = .92) and correlated well with i-Ready (r = .75). In both spring and fall, the correlation between Lexplore, ORF, and CAASPP (r > .65) was lower than that between i-Ready and CAASPP (r > .80), although both tests performed similarly when predicting readers below standard (ROC area under the curve > .79).

Given that Lexplore demonstrates comparable concurrent and predictive validity, we discuss what additional benefits eye tracking and machine learning can have for reading assessment in schools.

14:40
The Eye-Voice Span in Children: Exploring Individual Differences

ABSTRACT. Oral reading is a complex task, especially for developing readers, involving the concurrent recruitment of visual, oculomotor, lexical, semantic, memory, and articulatory processes (Godde et al., 2021; Kim et al., 2019). We measured these coordinative processes during oral reading using the eye-voice span (EVS), the distance between the eye and the voice while reading aloud (Adedeji et al., 2021; Buswell, 1921; Laubrock & Kliegl, 2015; Rayner, 1998). We obtained EVS data from 52 seven-to-ten-year-olds reading short passages, alongside offline ability measures of reading, spelling, vocabulary, and RAN speed. We present reliability estimates of individual differences in the EVS. Reading, vocabulary, and RAN all predicted mean EVS, while spelling ability predicted variability in the EVS. Spelling ability was also found to influence saccade length, and reading ability influenced gaze duration. Neither vocabulary nor RAN speed predicted gaze duration or saccade length. Our pattern of results fits prior work (Parker & Slattery, 2021; Slattery & Yates, 2018; Veldre & Andrews, 2015, 2016) where spelling ability influences early letter encoding during reading, and fixation times are modulated by reading ability. We conclude that the spatial EVS may be more reflective of off-line measures related to reading ability than measures of gaze duration.

13:00-15:00 Session 12C: Eye-tracking methods
Location: LT8
13:00
Fixation classification: how to merge and select fixation candidates

ABSTRACT. Eye trackers are applied in many research fields. To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data of high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5 deg), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by two rules: 1) select saccades with amplitudes > 1.0 deg, and 2) select fixations with durations > 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
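The two selection rules recommended in this abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the candidate format (start ms, end ms, mean position in degrees) and the merge-by-averaging step are assumptions for the sketch:

```python
import math

MIN_SACCADE_AMP_DEG = 1.0   # discard intervening saccades smaller than this
MIN_FIXATION_DUR_MS = 60    # discard fixations shorter than this

def merge_and_select(candidates):
    """Merge consecutive fixation candidates separated by sub-threshold
    saccades, then drop any fixation that remains too short.

    Each candidate is (start_ms, end_ms, x_deg, y_deg).
    """
    merged = []
    for cand in candidates:
        if merged:
            prev = merged[-1]
            amp = math.hypot(cand[2] - prev[2], cand[3] - prev[3])
            if amp < MIN_SACCADE_AMP_DEG:
                # Sub-threshold saccade: fold this candidate into the
                # previous fixation (positions simply averaged here).
                merged[-1] = (prev[0], cand[1],
                              (prev[2] + cand[2]) / 2,
                              (prev[3] + cand[3]) / 2)
                continue
        merged.append(cand)
    return [f for f in merged if f[1] - f[0] >= MIN_FIXATION_DUR_MS]
```

Applied to, say, two pairs of nearby candidates, the sketch merges each pair into one fixation and then applies the 60 ms duration rule to the merged events.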

13:20
Web-based attention-tracking with an eye-tracking analogue is reliable and valid

ABSTRACT. Psychological research is increasingly moving to the internet, where larger and more diverse samples of participants can be reached. While much work has been done on webcam eye-tracking, the technique still suffers from high levels of attrition (~60%) and imprecision. MouseView.js was developed to circumvent this issue. This software uses a mouse-locked aperture of high resolution (analogous to the fovea), while blurring the rest of a stimulus display. It can thus act as an analogue to eye tracking, for example to measure overt attention in online experiments. Here, we present findings of a validation study in which MouseView was compared directly with eye tracking (EyeLink 1000) in preferential looking tasks. We found that mouse-guided dwell time (collected via the internet) was at least as reliable as gaze dwell time (collected in the lab). In a second study, we show that there was a strong correlation between dwell time measured with MouseView and with eye-tracking (collected in the lab, within-participants). The only clear deviation between mouse-guided and gaze behaviour was in the first second of stimulus presentation, suggesting eye-tracking more accurately captured involuntary attention. We conclude that reliable and valid dwell data can be collected in web-based experiments using MouseView.js.

13:40
Characterising Eye Movement Events with an Unsupervised Hidden Markov Model

ABSTRACT. Eye-tracking allows researchers to infer cognitive processes from eye movements that are classified into distinct events. Parsing the events is typically done by algorithms. Here we aim to develop an unsupervised, generative model that can be fitted to eye-movement data using maximum likelihood estimation. This approach allows hypothesis testing about fitted models, in addition to being a method for classification. We developed gazeHMM, an algorithm that uses a hidden Markov model as a generative model, has few critical parameters to be set by users, and does not require human-coded data as input. The algorithm classifies gaze data into fixations, saccades, and optionally postsaccadic oscillations and smooth pursuits. We evaluated gazeHMM’s performance in a simulation study, showing that it successfully recovered hidden Markov model parameters and hidden states. Parameters were less well recovered when we included a smooth pursuit state and/or added even small noise to simulated data. We applied generative models with different numbers of events to benchmark data. Comparing them indicated that hidden Markov models with more events than expected had most likely generated the data. We also applied the full algorithm to benchmark data and assessed its similarity to human coding and other algorithms. For static stimuli, gazeHMM showed high similarity and outperformed other algorithms in this regard. For dynamic stimuli, gazeHMM tended to rapidly switch between fixations and smooth pursuits but still displayed higher similarity than most other algorithms. Concluding that gazeHMM can be used in practice, we recommend parsing smooth pursuits only for exploratory purposes. Future hidden Markov model algorithms could use covariates to better capture eye movement processes and explicitly model event durations to classify smooth pursuits more accurately.
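gazeHMM itself is implemented elsewhere; purely as an illustration of the underlying idea, a minimal two-state (fixation/saccade) hidden Markov decoder over a gaze-velocity trace can be sketched as below. The state names, Gaussian emission parameters, and transition "stickiness" are illustrative assumptions, not gazeHMM's actual settings:

```python
import math

def log_gauss(x, mu, sigma):
    """Log density of a Gaussian emission distribution."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def viterbi_fix_sac(velocities, params, stay=0.95):
    """Most likely state sequence for a velocity trace (deg/s) under a
    two-state HMM with sticky transitions (Viterbi decoding).

    params maps state name -> (mu, sigma), e.g.
    {"fix": (5, 5), "sac": (150, 80)}.
    """
    states = list(params)
    log_stay, log_switch = math.log(stay), math.log(1 - stay)
    score = {s: log_gauss(velocities[0], *params[s]) for s in states}
    back = []
    for v in velocities[1:]:
        new, ptr = {}, {}
        for s in states:
            cands = {p: score[p] + (log_stay if p == s else log_switch)
                     for p in states}
            best = max(cands, key=cands.get)
            new[s] = cands[best] + log_gauss(v, *params[s])
            ptr[s] = best
        back.append(ptr)
        score = new
    # Backtrack from the best final state.
    path = [max(score, key=score.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

A full generative treatment as in the talk would additionally fit the emission and transition parameters by maximum likelihood (e.g., Baum-Welch) rather than fixing them by hand.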

14:00
The amplitude of small eye movements can be accurately estimated with video-based eye trackers

ABSTRACT. Estimating the gaze direction with a digital video-based pupil and corneal reflection (P-CR) eye tracker is challenging since 1) a video camera is limited in terms of spatial and temporal resolution, and 2) the captured eye images contain noise. Through computer simulation, we evaluated the localization accuracy of pupil and CR centers in the eye image for small eye rotations (<<1 deg). We show how inaccuracies in center localization are related to 1) how many pixels the pupil and CR span in the eye camera image, 2) the method to compute the center of the pupil and CRs, and 3) the level of image noise. Our results provide a possible explanation of why the amplitude of small saccades may not be accurately estimated by many currently used video-based eye trackers. We conclude that saccades with arbitrarily small amplitudes can be accurately estimated using the P-CR eye-tracking principle, given that the level of image noise is low and the pupil and CR span enough pixels in the eye camera, or if localization of the CR is based on the intensity values in the eye image instead of a binary representation.
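The distinction drawn here between a binary and an intensity-based center estimate can be illustrated with a toy example. This is a sketch of the general principle, not the authors' simulation code; the grey-level image is represented as a simple list of rows:

```python
def binary_centroid(image, threshold):
    """Centre of mass of pixels above threshold (binary representation):
    every supra-threshold pixel counts equally, so the estimate is
    quantised to the pixel grid."""
    pts = [(x, y) for y, row in enumerate(image)
                  for x, v in enumerate(row) if v > threshold]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def intensity_centroid(image):
    """Intensity-weighted centre of mass: uses the full grey-level
    information and therefore shifts with sub-pixel changes in the
    intensity profile."""
    total = sum(v for row in image for v in row)
    cx = sum(x * v for row in image for x, v in enumerate(row)) / total
    cy = sum(y * v for y, row in enumerate(image) for v in row) / total
    return (cx, cy)
```

On a symmetric blob with one slightly brighter flank, the binary centroid sits exactly on the grid while the intensity-weighted centroid moves fractionally toward the brighter side, which is why sub-pixel (and hence small-saccade) sensitivity differs between the two methods.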

14:20
Event level evaluation of eye movement event detectors

ABSTRACT. Dozens of eye movement event detectors exist to date; however, the reported results of performance evaluation are usually neither directly comparable between the papers nor easily interpretable even by the field experts. To a large degree, this is a direct consequence of the multitude of available evaluation methods and approaches. The number of reported metrics alone is impressive (sensitivity/specificity/F1 scores, accuracy or disagreement rates, Cohen's kappa, etc.), while the details of their application and implementation lead to fundamental dissimilarities even when the same metric is used in the evaluation. This is especially prominent when considering event-level evaluation. In this talk we review existing practices of evaluating eye movement event detection algorithms and present an empirical analysis of different combinations of eye movement event matching methods and metrics computed on the results of the matching step. We also give recommendations on improving the event detection evaluation pipeline that aim to ensure the high quality of future publications, as well as to encourage inter-comparability and reproducibility in this field of research.
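As one concrete instance of the event-level evaluation choices this talk surveys, detected events can be matched to ground-truth events before computing precision, recall, and F1. The matching criterion used below (greedy one-to-one matching by temporal intersection-over-union with a 0.5 threshold) is only one of many possible choices, which is precisely why reported scores diverge between papers:

```python
def match_events(truth, detected, min_iou=0.5):
    """Greedy one-to-one matching of (start, end) events by temporal IoU;
    returns (precision, recall, F1) at the event level."""
    def iou(a, b):
        inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
        union = max(a[1], b[1]) - min(a[0], b[0])
        return inter / union if union else 0.0

    used, tp = set(), 0
    for t in truth:
        best, best_j = 0.0, None
        for j, d in enumerate(detected):
            if j not in used and iou(t, d) > best:
                best, best_j = iou(t, d), j
        if best >= min_iou:     # count as a hit only above the threshold
            used.add(best_j)
            tp += 1
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Changing `min_iou`, the matching order, or allowing many-to-one matches all yield different scores from the same detector output, illustrating the comparability problem the abstract describes.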

14:40
Eye tracking: empirical foundations for a minimal reporting guideline

ABSTRACT. In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that existing reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research. This is an international collaboration involving 46 authors.

15:00-16:30 Session 13: POSTER SESSION II
The When and Where of the Looking at Nothing Effect: Examining Eye Movements During Memory Retrieval

ABSTRACT. Looking at Nothing (LAN) describes the behavior whereby people look at empty spatial locations when trying to retrieve information from memory which was previously associated with these locations. This study investigated LAN for retrieval from working memory. We tested whether LAN is directed to all or only some of the associated spatial locations, when LAN occurs, and its relation to retrieval performance. During encoding, participants saw four word-pairs in four different spatial locations on a screen. During retrieval, they heard two words and had to indicate whether the words came from one previously seen word-pair (positives) or from two different pairs (lures). We found that participants only showed LAN to the first probe’s location, but this occurred only when hearing the second probe, irrespective of the correctness of the response. The results speak in favor of memory processes leading to LAN during the recollection of information in working memory.

The context effect on implicit sequence learning using an ocular version of the Serial Reaction Time (O-SRT) task

ABSTRACT. Objectives: We aimed to evaluate the effect of contextual information on implicit sequence learning (ISL) using an ocular version of the Serial Reaction Time task (O-SRT). Participants and Methods: A total of 76 young adults were tested on the O-SRT using two alternating sequences simultaneously. Participants were randomly assigned to one of two versions of the task: with or without context. In the former, each of the two sequences was presented with a different context (shape and color), and in the latter, both sequences were presented with the same context. Eye movements were recorded by an SMI-RED eye-tracker (250Hz). Results: Correct anticipations of the next spatial location were analyzed with a mixed-design ANOVA, with Group (with and without context) and Learning trials (1-6) as between- and within-subjects factors, respectively. The group with context showed significant learning in the later trials of training, whereas the group without context showed significant improvement in the earlier trials. Conclusions: Contextual information might impede ISL in the early learning phase, possibly because it fosters explicit exploration of the task or because it requires the processing of additional information compared to the no-context condition, although it proved to be beneficial for the later learning stages.

Eye-tracking in innovative neuropsychological assessment of visual working memory

ABSTRACT. In both the laboratory and clinical neuropsychological assessment, visual working memory (VWM) is typically estimated by means of the maximum storage load. However, these assessment settings ignore that in daily life, information is generally available in the external world. We can easily sample information from the environment by making eye-movements, reducing the need to use the maximum VWM storage load. Vice versa, reliance on VWM capacity increases when accessing external information is difficult or costly. We investigated whether people reduce VWM load when sampling is possible, and whether they memorize more information when sampling is costly. Patients with severe memory problems (Korsakoff’s syndrome) and controls were instructed to perform a copy task while their eyes were tracked. The availability of the example puzzle was manipulated by introducing a gaze-contingent waiting time to provoke different strategies (sampling vs. storing). Preliminary data confirms that controls successfully shifted from sampling to storing when information became less readily available. Although patients also showed less sampling indicating an attempt to adjust strategy, they could not memorize more items at once and made more errors. This suggests that successfully switching strategy from sampling to storing is dependent on VWM functionality.

Gaze and visual short-term memory for localizing part of an image

ABSTRACT. Visual short-term memory (VSTM) is impaired in conditions such as Alzheimer’s disease, impacting daily life in many ways. We ask whether VSTM can be tested during free viewing of natural scenes, with the aim of understanding the importance of gaze allocation on VSTM under ecologically valid conditions. Recognition memory for scenes is close to ceiling for long presentation durations and short time intervals between encoding and retrieval, which makes it an impractical measure to test the impact of gaze allocation on memory. Instead, we used an image-part localization task, in which observers freely explore visual scenes (from a variety of categories) for 8 seconds and 2 seconds later localize a randomly selected image-part within a scene placeholder. Short-term memory for image-part localization is quite poor, unlike recognition memory for whole scenes, but correlates positively with an individual’s fixation density (weighted by fixation duration) on the image-part prior to localization. This task could be used to investigate VSTM during free viewing in normal aging and dementia.

Testing memory strength with pupil dilation as a function of strategic and automatic memory retrieval.

ABSTRACT. Previous research indicates that pupil responses during automatic recognition of previously seen information reflect the aggregate strength of a memory trace and not cognitive effort per se. In contrast, during recall or effortful recollection, the retrieval process is predominantly reliant on strategic processes and cognitive effort, as an active searching strategy is needed for successful retrieval. In such cases, consequently, we expected that as the strength of the memory trace increases, less mental effort is needed for retrieval, resulting in a negative link between memory strength and pupil dilation. Thus, memory strength might be differently related to pupil dilation in different forms of memory tests. To test this hypothesis, we implemented two testing paradigms with verbal stimuli: a paired-associate learning paradigm and the source-monitoring framework. We manipulated memory strength by presenting words a different number of times (one vs. four) in different spatial locations in source monitoring, and by presenting word-pairs a different number of times (one vs. two) in paired-associate learning. In the subsequent memory test, we tested participants on recognition and recall/recollection and measured pupil responses. Our preliminary results suggest that the link between memory strength and pupil dilation is modulated by the form of retrieval.

Pupil responses: indices of individual memory performance

ABSTRACT. Studies have suggested that pupil size changes reflect activity of the Locus Coeruleus (LC). Thus, by measuring fluctuations in pupil diameter over time, we can indirectly monitor ongoing attentive processes. An ample number of pupillometry studies have already investigated within-subject effects. In contrast, less research has focused on exploring how individual differences in pupil responses correlate with criterion variables. To this aim, in our present research, we inspected between-subjects variabilities in phasic pupil responses as possible predictors of individual memory performance. In one experiment we used an incidental memory task targeting mnemonic discrimination (Mnemonic Similarity Task), whereas in the other task, a 2-back design was used. We had the participants' pupils recorded during both tasks. For our correlational analyses, we measured baseline corrected event-related pupil dilation (ERPD). We conclude that individual differences in task-evoked pupil behavior can be used to predict cognitive performance. This might be caused by the modulating role of LC on attentional processes.

A field test of appearance-based gaze estimation

ABSTRACT. Appearance-based gaze estimation (ABGE) refers to techniques that estimate gaze direction from video recordings of the eyes or face. Although many ABGE methods have been developed, most of their validations can only be found in the technical literature (e.g., computer science conference papers).

We aimed to 1) identify which ABGE methods are usable by the average experimental psychologist, and 2) validate those methods. We searched the existing literature for methods that don’t require calibration and have clear documentation. Only OpenFace and OpenGaze were found to meet these criteria. We evaluated the methods by having adult participants fixate points displayed on a screen for three conditions with different degrees of head movement.

We demonstrate that (1) gaze estimation sufficed to distinguish between all fixated points for some but not all participants, (2) there was large variability in the accuracy and precision of gaze estimation, (3) gaze estimates were not independent from head orientation, and (4) OpenGaze outperformed OpenFace. We conclude that both methods can potentially be used in sparse environments with horizontally separated areas of interest.

eyetRack - Shiny application for recurrence quantification analysis

ABSTRACT. eyetRack is a new R package and Shiny application which facilitates accessible analysis of eye-tracking data from SMI or Tobii eye-trackers. It offers a basic analysis for an initial overview of the number and duration of fixations. Barplot visualizations show the number of fixations in each Area of Interest as well as Dwell Time. The tool also offers visualization of the scanpath above the stimulus. The essential functionality of the application is analysis through recurrence and recurrence quantification analysis (RQA). The recurrence plot can be displayed. However, visual inspection of recurrent fixations can often invite subjective bias when evaluating a set of results. For that reason, we used recurrence quantification analysis measures, which allow us to quantify the data displayed in the recurrence plot. Using RQA, we can compare different tasks or compare multiple participants. The last functionality of the application is the calculation of coefficient K, which helps distinguish focal and ambient attention. The tool will be freely available at www.eyetracking.upol.cz/tools.
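eyetRack itself is an R package; purely as an illustration of the quantities it computes, the recurrence matrix and the recurrence rate (REC) for a fixation sequence can be sketched as follows. The distance threshold and the REC formula follow the common definitions used in fixation RQA and are not taken from the package's source:

```python
import math

def recurrence_matrix(fixations, radius):
    """Binary recurrence matrix: R[i][j] = 1 when fixations i and j
    (given as (x, y) positions) fall within `radius` of each other."""
    n = len(fixations)
    return [[1 if math.dist(fixations[i], fixations[j]) <= radius else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(fixations, radius):
    """REC: percentage of recurrent fixation pairs, counted over the
    upper triangle of the recurrence matrix (diagonal excluded)."""
    R = recurrence_matrix(fixations, radius)
    n = len(fixations)
    rec = sum(R[i][j] for i in range(n) for j in range(i + 1, n))
    return 100 * 2 * rec / (n * (n - 1))
```

Other RQA measures mentioned above (e.g., determinism, laminarity) are derived from diagonal and vertical line structures in the same matrix, which is what makes quantitative comparison across tasks or participants possible without visually inspecting each recurrence plot.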

An open-source device for vestibular stimulation and eye-movement tracking in head-fixed mice

ABSTRACT. Visual virtual reality (VR) is widely used to study cortical processing in awake, behaving mice. It allows for tight control of animal-driven visual stimuli and provides the ability to change the coupling between behaviour and visual stimulus. However, most visual VR approaches render animals motionless in space (i.e., head-fixed), resulting in the vestibular system being taken out of play. Consequently, the head-direction (HD) system, which is primarily driven by vestibular input and plays a pivotal role in navigation, is severely compromised. Here we present a novel experimental apparatus to overcome this limitation. Using an open-source approach, we have built a modular and affordable device allowing the rotation of head-fixed, behaving mice. It can be used in open-loop mode to study vestibular sensory representation and processing. In closed-loop mode, the apparatus allows animals to navigate in rotational space and self-generate vestibular input, providing a better substrate for 2D navigation in virtual environments. We show that our approach is compatible with the electrical recording of brain activity at the cellular level and results in the robust recruitment of HD cells. We further demonstrate its utility by combining the tracking of vestibular and visually evoked eye-movements with optogenetic interference of specific neuronal populations.

Metacognitive Modeling Effect of Reading Illustration First for EFL Readers: A Study of Eye Movement Evidence

ABSTRACT. Eye-trackers have been adopted to investigate the instructional effects of modeling in science reading and to reveal the underlying cognitive processes. However, fewer studies have investigated how eye-trackers can help less capable EFL readers read illustrated narratives. Modeling illustration reading first is expected to help readers form a macrostructure before reading the text. This study explored the metacognitive modeling effect of reading illustrations first, with metacognitive questions as prereading guidance, for EFL readers with beginning language capacity. Participants were randomly assigned to four groups (intervention: modeled/non-modeled; lexical difficulty of article: low/high), in which illustrated narratives were provided with a story structure (prologue/climax/resolution). Modeled groups were instructed to read illustrations with 5Ws questions before reading the text, as the model did, while non-modeled groups read in their own manner. Two-way ANOVAs were conducted on posttest performance and on the eye measures fixation count (FC), dwell time (DT), and run count (RC). Lexical level had substantial effects on FC, DT, and RC in both text and illustration areas of the prologue and climax structures. An intervention effect appeared for FC and RC in both text and illustration, but only in the prologue structure. The results indicate that the metacognitive modeling strategy is influential only when reading the beginning of a story.

Reading search page results: Evidence from an eye tracking study on 11-12-year-olds

ABSTRACT. This study examined whether 11-12-year-old Finnish readers can differentiate task-relevant search page results from irrelevant ones. The participants (N=34) read simulated search engine results pages (SERPs) while their eye movements were recorded. Each page included 8 search hits, each described by a title and a short description that could be either relevant or irrelevant to the search task given to the readers. The position of the relevant search results on the search page was manipulated, and average reading speed in a separate reading task was used as a measure of reading skill. Fixation times on titles showed that skillful readers spent less time reading the titles towards the end of the search page, regardless of relevance. Less skilled readers showed no such speed-up. As for fixation times on descriptions, skilled readers spent less time on irrelevant segments towards the end of the pages, whereas fixation times on relevant segments did not change as much. Less skilled readers’ fixation times on irrelevant segments did not decrease across the pages. In sum, reading skill modulates how relevant and irrelevant search results are attended to on a search page.

Beginning to Characterise Children’s Eye Movement Control during Reading in English: A Corpus Study

ABSTRACT. Past research examining beginner child readers’ eye movement behaviour during silent sentence reading has primarily compared when such readers move their eyes relative to skilled adult readers. The other key question regarding eye movement behaviour during reading, namely where readers fixate, has received much less attention. We have created a corpus in English, based on the results of three experiments (adults n = 132; children n = 132), which we are using to characterise where typically developing 8- to 9-year-old child readers move their eyes during silent sentence reading. Our systematic analyses include assessments of differences in launch site, initial landing position, refixation probability, and skipping rates (in relation to foveal and parafoveal processing of words). The results will provide insight into how child readers typically encode information about words during reading, how visual and linguistic characteristics of words determine where the eyes move, and how such behaviour differs in comparison to adult readers. We believe that such understanding will be critical to the development of models that capture and represent on-line lexical processing and eye movement control in an ecologically valid way.

Interactive effects of semantic diversity and word frequency in natural reading

ABSTRACT. Word frequency exerts one of the strongest influences on reading behavior, increasing skipping rates and reducing fixation durations (Rayner, 1998). However, some have argued that semantic/contextual diversity better represents word difficulty (Adelman et al., 2006) and has a stronger effect on fixation behavior when frequency is controlled (Plummer et al., 2013). We investigated whether these factors influence the reading process differently across the time course of reading behavior. We performed a secondary analysis of data from a sentence reading study with target words that ranged in frequency and semantic diversity. We found that only word frequency affected skipping rates. However, there was an interaction between word frequency and contextual diversity wherein high-frequency words were read faster when they were low in contextual diversity, whereas low-frequency words were read faster when they were high in contextual diversity. This suggests that, for familiar words, having a specific meaning facilitates word recognition, whereas for unfamiliar words, having more diverse semantic features makes at least one of those meanings more accessible. Our findings support prior literature in suggesting that word frequency facilitates early stages of word recognition prior to meaning retrieval, while semantic diversity influences more fine-grained semantic processing downstream.

Do Chinese deaf readers develop a unique cognitive mechanism during visual word recognition? The effect of oral language experience and reading ability
PRESENTER: Nina Liu

ABSTRACT. For most deaf readers, learning to read is a challenging task, and visual word recognition is crucial during reading. However, little is known about the cognitive mechanisms of Chinese deaf readers during visual word recognition. In the present study, two experiments explored the activation of orthographic, phonological, and sign language representations during Chinese word recognition. Eye movements were recorded as participants read sentences containing orthographically similar words, homophones, sign-language-related words, or unrelated words. All deaf readers showed shorter reading times for orthographically similar words compared to unrelated words. However, when reading ability was controlled, the homophone advantage was observed only for deaf readers with more oral language experience, whereas the sign language advantage was observed only for deaf readers with less oral language experience. When oral language experience was controlled, deaf readers with higher reading fluency had more stable orthographic and sign language representations than those with lower reading fluency. Deaf college readers with more oral language experience activate word meanings through orthographic and phonological representations, whereas those with less oral language experience activate word meanings through orthographic and sign language representations, reflecting a unique cognitive mechanism; reading ability moderates this process.

Individual differences in word learning associated with reading skill and vocabulary: An eye-movement investigation

ABSTRACT. A large proportion of an individual’s vocabulary is learned incidentally, during reading. We examined individual differences in lexical acquisition during reading, and whether low-frequency words are processed differently from pseudowords during lexical acquisition. Rigorous pre-screening ensured that the low-frequency words were not known by our target population. Participants’ eye movements were measured as they read sentences containing unknown words (either low-frequency words or pseudowords) in a learning phase and a subsequent test phase. First, each new word was presented in four meaningful sentences during the learning phase, providing a diverse semantic context. We then took individual assessments of both reading ability and vocabulary. In the test phase, each new word was presented in a further four meaningful sentences, and reading time measures provided an index of the ease with which participants were able to read the new words. Finally, participants completed a semantic categorisation task to examine whether semantic representations for the new words had been successfully formed. We predict that greater reading skill and larger vocabulary size will be associated with more efficient lexical acquisition. We also predict that there will be no differences between low-frequency words and pseudowords, validating the use of pseudowords in word learning experiments.

The role of the left perceptual span in L2 reading: An eye-tracking study

ABSTRACT. Substantial cognitive resources are required for processing the foveal area, leaving fewer cognitive resources available for parafoveal processing. Proficient first-language (L1) readers have a perceptual span of 3-4 characters to the left and 14-15 characters to the right of the foveal fixation [1]. Given that second-language (L2) processing requires more cognitive resources [2], it stands to reason that L2ers will have a smaller perceptual span than L1ers. We hypothesize that L2ers will have a smaller, more symmetrical perceptual span relative to L1ers, allowing them to use the left span to reconfirm what they previously read. We test the symmetry of the perceptual span using the GCMWP [3] and manipulate the information available (3, 6, or 9 characters to the left / 3, 9, or 15 characters to the right). Additionally, we account for the influence of English skills with German L1ers/English L2ers reading in English (n=53). L2ers benefit from an increase in window size from 3 to 6 characters to the left of fixation and from 3 to 9 to the right of fixation; only higher-skilled L2ers further benefit from an increase in window size up to 15 characters to the right of fixation. We plan to compare our data to L1ers of different ages. Overall, our data suggest that only highly skilled L2ers exhibit an L1-like asymmetric perceptual span.
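The gaze-contingent moving-window manipulation referred to above can be illustrated with a toy sketch (a hypothetical function, for illustration only; in a real experiment the display is re-rendered within a few milliseconds of every gaze sample):

```python
def moving_window(text, fix_index, chars_left, chars_right, mask="x"):
    """Render one gaze-contingent frame: characters inside the window
    around the fixated character index stay visible; everything outside
    is replaced by a mask letter, with spaces preserved so that word
    boundaries remain visible."""
    lo, hi = fix_index - chars_left, fix_index + chars_right
    return "".join(
        ch if lo <= i <= hi or ch == " " else mask
        for i, ch in enumerate(text)
    )
```

For example, with the fixation on the 'u' of "quick" and a 2-left/3-right window, `moving_window("the quick brown", 5, 2, 3)` yields `"xxx quick xxxxx"`: the asymmetry of the window is what the paradigm manipulates.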

Lexical access in L2 reading: evidence from self-paced reading and eye tracking data

ABSTRACT. We present a study on lexical access while reading in L2. We analyze word-reading data obtained using two paradigms: self-paced reading (SPR) and eye-tracking. According to Frank et al. (2013), eye-tracking measures in L1 correlate highly with SPR reading times for the current word and for the following word, reflecting spillover in SPR and parafoveal preview in eye-tracking. Chinese-speaking learners of Russian (A2-B1) read the Sentence Corpus for L2 learners of Russian (90 sentences) either in SPR mode (n=65) or in an eye-tracking experiment (n=30). They read for comprehension, with comprehension questions asked after 30% of the sentences. In the self-paced reading data, we found significant effects of word frequency, word length, and predictability on reading times. In the eye-tracking data, we analyzed four measures (Frank et al. 2013): first-fixation time, first-pass time, right-bounded time, and go-past time, and found strong positive correlations between all of these measures and the SPR reading time of the previous (not the following) word, averaged over subjects. This can be explained by the lack of parafoveal preview and a strong spillover effect in the eye-tracking mode compared to the SPR mode in L2 reading.

Funded by the research grant no. ID: 92566385 from St Petersburg University.

GECO-CN: Ghent Eye-Tracking COrpus of Sentence Reading for Chinese-English Bilinguals

ABSTRACT. GECO-CN presents the very first eye-tracking corpus of Chinese-English bilinguals reading a novel in their two languages. Participants read half of the novel in Chinese as their first language and the rest in English as their second language, in counterbalanced order. They also completed a series of language proficiency tests and a language background questionnaire (LEAP-Q). This work presents important descriptive statistics and compares reading performance in the two languages on eye movement measures such as average reading times and skipping rate. In addition, this study used the same reading material as GECO (Cop et al., 2017), which studied the performance of Dutch-English bilinguals. By comparing two bilingual eye-movement corpora that differ in how similar the second language is to the native language, this corpus makes it possible to investigate the influence of different Eastern and Western first languages on reading in the second language. This unique eye-tracking corpus will be freely available online, enabling future research to examine theories of bilingual reading by investigating similarities, differences, and mutual influences between two different writing systems.

The processing strategies for illustrated science reading and Chinese academic words with different semantic transparency among middle-school students: An eye-tracking study

ABSTRACT. This study uses eye tracking to explore the cognitive processes and strategies of seventh-grade students with different reading abilities when reading illustrated scientific texts, and how readers deal with academic words of high (paraphrase) and low (transliteration) semantic transparency. Seventh-grade students (N=65) were divided into reading-ability groups through a pre-test. After reading four science texts, they answered free-recall and reading comprehension questions, and finally participated in cued retrospective think-aloud (CRTA). The results show that reading ability is significantly positively correlated with reading comprehension and free-recall performance. When reading transliterated words, students of all abilities showed longer gaze durations than when reading paraphrased words, indicating the difficulty of inferring the meaning of academic words from their morphemes. Furthermore, regardless of reading ability, students used the text as the main source for reading comprehension, even though they realized that the illustrations carry a high amount of integrated information. The retrospective think-aloud data show that high-ability students often use inference and integration strategies; middle-ability students often use information-extraction strategies; and low-ability students often use unproductive reading strategies. We recommend that differences in students’ reading ability be considered.

Eye Movements and Reading in Children Who Survived Cerebellar Tumors
PRESENTER: Marina Shurupova

ABSTRACT. Previous investigations have demonstrated that cerebellar tumor survivors tend to have a variety of oculomotor impairments, such as hypermetria and poor gaze stability. In the current study, we aimed to evaluate oculomotor deficits and reading parameters in children who survived cerebellar tumors. Two groups, 65 patients and 47 healthy controls, all aged 8–17, participated in the study. We analyzed performance in several oculomotor and reading tasks. Eye movements were recorded monocularly at 60 Hz using an Arrington eye tracker. We revealed pronounced reading impairments in the patients as compared to healthy children, including longer fixation durations, greater numbers of fixations and regressive saccades, and longer reading times. The patients showed gaze-fixation instability and long scanpaths reflecting returns of the gaze to already counted objects. We also observed significant correlations between basic oculomotor functions and reading parameters in both groups. These findings indicate that a cerebellar tumor and its treatment cause oculomotor changes that can lead to disturbances in higher cognitive functions, such as reading. Our results highlight the need to consider these deficits in current rehabilitation protocols for pediatric cerebellar tumor survivors.

The role of phonological and orthographic parafoveal processing during silent reading in Russian children and adults

ABSTRACT. Parafoveal processing allows readers to recognize a word before fixating on it. However, there is still a debate about the type of information that people might get from the parafovea. Studies have shown that adults and children use phonological and orthographic parafoveal processing, but their role depends on age and language. In the present study, we investigated the development of phonological and orthographic parafoveal processing during silent reading in 56 Russian-speaking second graders, 48 fourth graders, and 65 adults. The participants read sentences with embedded target nouns, while their eye movements were recorded in a gaze-contingent boundary paradigm. The target nouns were presented in the parafovea in original, pseudohomophone, control for pseudohomophone, transposed-letter and control for transposed-letter conditions. The comparison of fixation durations between the conditions allowed us to assess the reliance on phonological and orthographic information in each age group. We found that adults used both phonological and orthographic information from the parafovea, whereas second graders and fourth graders relied on orthographic parafoveal information. These results might indicate that Russian-speaking children do not have fully developed phonological recoding skills by grade 4, but can recognize a word in the parafovea as a whole orthographic unit already in grade 2.

A two-tier taxonomy of gaze behaviours for free-moving participants

ABSTRACT. A Gaze Event Detector (GED) is an algorithmic component that parses a time series of eye positions and directions into meaningful gaze events (a.k.a. oculomotor behaviours). The best-known gaze events are fixations and saccades; there are also smooth pursuits (SPs), events caused by the vestibulo-ocular and opto-kinetic reflexes (VOR and OKR), vergence shifts, and more. Many GED algorithms have been developed over the last 50 years; the most popular are versions of either IVT or IDT. Most of these are suited to experimental paradigms in which the participant sits in front of a flat stimulus display. Thus they often equate gaze movement with eye movement, disregard head movement, cannot account for vergence changes, etc. Nowadays, eye-tracking technology is rapidly becoming a ubiquitous component of most XR devices. One barrier to the adoption of gaze analysis in XR is the lack of GED algorithms that process coordinated eye-head-body gaze movements in complex 3D stimulus scenes. We propose a novel two-tier taxonomy of gaze behaviours that combines atomic eye-head-body movements into meaningful gaze behaviours, such as focusing on a static target, following a moving target, shifting attention, and internal thinking, and we present a preliminary algorithmic implementation of it.
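As a point of reference for the taxonomy, one of the classic detectors mentioned above, the velocity-threshold algorithm (IVT), can be sketched in a few lines. This is a deliberate simplification assuming gaze positions already expressed in degrees of visual angle; it illustrates exactly the limitation the abstract raises, since it classifies eye movement alone and ignores head and body motion:

```python
import math

def ivt_classify(xs, ys, ts, velocity_threshold=30.0):
    """Label each inter-sample interval as 'fixation' or 'saccade' by
    comparing angular velocity (deg/s) with a threshold.  Gaze positions
    are assumed to be already expressed in degrees of visual angle."""
    labels = []
    for i in range(1, len(ts)):
        dt = ts[i] - ts[i - 1]
        velocity = math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1]) / dt
        labels.append("fixation" if velocity < velocity_threshold else "saccade")
    return labels
```

During head rotation with VOR, the eye-in-head velocity can exceed the threshold even though gaze is stable in the world, which is why such detectors break down for free-moving participants.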

GlassesValidator: Data quality tool for eye tracking glasses

ABSTRACT. According to the proposal for a minimum reporting guideline for an eye tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye tracking recordings. To enable determining the accuracy quickly and easily, we have produced a validation poster and written accompanying Python software. Here we present this work. We tested the poster and procedure with 61 subjects. In addition, the software has been tested with six different wearable eye trackers. The validation procedure can be administered within a minute per subject and provides accuracy, precision and data loss. Calculating the eye tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills.
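The three data-quality measures reported by the tool have standard definitions. A simplified one-dimensional sketch follows (hypothetical helper functions, not GlassesValidator's actual implementation, which works with the validation poster's markers and 3D gaze directions):

```python
import math

def accuracy_deg(gaze_angles, target_angle):
    """Accuracy: mean absolute angular offset (deg) between gaze samples
    and the fixated target."""
    return sum(abs(g - target_angle) for g in gaze_angles) / len(gaze_angles)

def precision_rms_s2s(gaze_angles):
    """Precision: RMS of sample-to-sample angular differences (deg)."""
    diffs = [(gaze_angles[i] - gaze_angles[i - 1]) ** 2
             for i in range(1, len(gaze_angles))]
    return math.sqrt(sum(diffs) / len(diffs))

def data_loss(samples):
    """Proportion of invalid samples (None or NaN)."""
    lost = sum(1 for s in samples if s is None or s != s)
    return lost / len(samples)
```

Accuracy captures systematic offset, precision captures sample-to-sample noise, and data loss captures tracking dropouts; all three are needed to judge a recording, which is why the guideline asks for them to be reported.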

Fixation sequences when walking up and down stairs in daily life
PRESENTER: Andrea Ghiani

ABSTRACT. In our daily life there are many situations in which it is not directly evident where gaze should be directed: not all tasks require constant visual guidance and there may be reasons to look elsewhere. Walking up or down a staircase is a good example of such a situation. We investigated participants’ gaze behaviour when walking on stairs as part of a navigation task in their own house. Participants did not know that stairs were the focus of investigation. We analysed the order in which participants fixated the steps, confirming earlier reports that people often looked at each step sequentially. However, we found that participants also often made fixations back to steps that had already been fixated and that they regularly skipped looking at several steps to fixate further ahead. The main difference between ascending and descending the staircase was found when approaching the first step: when descending participants looked extensively at the beginning of the staircase, whereas when ascending they did not. This study shows that focussing on sequences of fixations is useful when investigating stair climbing with a variety of staircases in different environments.

Investigating the effects of task and body movement on the generalizability of scene viewing experiments.

ABSTRACT. Scene-viewing experiments conducted in the laboratory attempt to understand human gaze behavior and to generalize the findings. This generalizability has often been questioned. Recent advances in eye tracking technology allow for experiments outside the laboratory with a high degree of mobility and give participants more freedom of movement. In the current study we use mobile eye tracking devices, but remain in the laboratory in order to examine how distinct experimental conditions affect eye movements. Here we present the effects on scan path statistics of (A) the given scene-viewing task and (B) the possibility of head movements. The given task clearly affects both temporal and spatial gaze parameters; we find differences in behavior even for apparently minor changes in task instructions, such as free-viewing and guessing tasks with few task constraints. In our experiments, the subjects' freedom of movement hardly affects temporal gaze parameters but noticeably affects spatial parameters. Our results are consistent with the view that laboratory factors such as a chin rest do not cause artefacts that limit the generalizability of laboratory findings. However, the absence of a task, or a free-viewing task, significantly affects gaze behavior.

Automated Discrimination of Stable and Non-stable Gaze Events in Dynamic Natural Conditions

ABSTRACT. Introduction: It is challenging for people with visual field defects to perform daily tasks that rely on having a good visual overview. To help people with such a condition, an essential step is to quantify their scanning behavior. However, there are no accurate gaze event detectors suitable for use in dynamic natural conditions, which limits research in such settings. We aim to design a gaze-event detector for conditions with free head and body movements.

Methods and Results: Our event detector interprets environmental movements using optic-flow estimation. Additionally, it employs point tracking on patches surrounding the gaze location to analyze the gaze path. By combining this information, the detector can discriminate between, describe, and visualize stabilizing and non-stabilizing gaze events. We tested our method on samples recorded with a Pupil Invisible from 15 participants who performed a series of predefined activities, including simultaneously moving and following an object of interest. The method successfully discriminated between our two classes of events and is being improved.

Conclusions: We conclude that our gaze event detection method is suitable for examining visual scanning behavior in dynamic natural settings. Researchers can benefit from this method when investigating scanning behavior in people with visual field defects.
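The combination step in the Methods can be caricatured as a toy decision rule (a sketch with hypothetical inputs: the per-interval displacement of the gaze point and of the scene content tracked beneath it, both in degrees; the actual detector estimates these from optic flow and patch tracking):

```python
import math

def classify_gaze_interval(gaze_disp, patch_disp, tol=1.0):
    """If the gaze point and the scene content tracked beneath it move
    together (2D displacements agree within `tol` degrees), the eye is
    stabilizing on that content (fixation, pursuit, VOR/OKR); otherwise
    the interval is non-stabilizing (e.g. part of a saccade)."""
    dx = gaze_disp[0] - patch_disp[0]
    dy = gaze_disp[1] - patch_disp[1]
    return "stabilizing" if math.hypot(dx, dy) <= tol else "non-stabilizing"
```

The point of defining events this way is that a pursuit of a moving object and a VOR-stabilized fixation during head movement both count as stabilizing, even though the eye-in-head signal alone would look very different in the two cases.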

Investigating face perception during free-viewing in a naturalistic virtual environment.

ABSTRACT. Face perception is commonly investigated in standardized lab settings with high experimental control, during which eye movements are generally restricted and the fixated stimuli are predetermined. While faces are considered prevalent and important stimuli (e.g., Wheatley et al., 2011), little research has explored the perception of faces in naturalistic settings. The current study combines high experimental control with natural viewing and movement behavior by investigating face perception in a virtual environment. Our virtual city consists of houses, various background stimuli, and, notably, static and moving pedestrians. Participants freely explore the virtual scene while eye-tracking and EEG data are recorded. We investigate the distribution, duration, and distance of gaze events on faces, as well as each participant's movement path. Preliminary results indicate large between-subject differences in the number of gazes on the bodies and faces of pedestrians, and equally large differences in the subjects’ movement patterns. The findings of this study will provide insights into face perception in a naturalistic virtual environment.

Wheatley, T., Weinberg, A., Looser, C., Moran, T., & Hajcak, G. (2011). Mind perception: Real but not artificial faces sustain neural activity beyond the N170/VPP. PloS one, 6(3), e17960.

Gaze Aversion in Human-Robot Interaction: Case Studies in Physical and Virtual Settings

ABSTRACT. Human Robot Interaction (HRI) is an interdisciplinary research domain that focuses on verbal and nonverbal communication between physical or virtual robotic agents and human interlocutors. As in human-human dyads, gestures and gaze are primary nonverbal communication modalities in HRI. This study presents findings from an HRI framework designed and implemented to support the development of applications that employ gaze and other nonverbal modalities for interacting with people. In our experiments, we investigated gaze contact and aversion in communication between humans, between humans and physical robots, and between humans and virtual agents. We have been developing the framework to model gaze-mediated communication between two agents, where an agent can be a robot or an avatar, allowing the agent to interact naturally and intuitively with a human user. This study aims to present a snapshot of the state of the art in HRI, the experience gained from our experimental investigations, and items for future work.

Gaze aversions serve as social signals conveying the performer’s cognitive state

ABSTRACT. When engaged in effortful cognitive processing, we often avert our gaze to the periphery. Studies have explained this phenomenon as an attentional mechanism of distraction avoidance. Here we propose that, in addition to its contribution to attentional processes, gaze aversion also serves as a signal in social interaction, conveying information about the performer’s cognitive state. As a first step in investigating this hypothesis, we examined how well perceivers infer other people’s cognitive states in social interaction, and how this ability depends on eye movements. In two experiments, participants (N=40 each) watched short (5 s) muted videos depicting individuals during social interactions. The first experiment showed that participants succeeded in identifying when other individuals were engaged in cognitive processing, relative to listening or tapping their feet. Furthermore, participants were more likely to correctly identify an individual as engaged in cognitive processing when that individual was shifting their gaze. In a second experiment, we found that when individuals performed gaze aversions while engaged in an effortful cognitive task, they were rated by others as more concentrated on the task and more likely to provide a correct response. Together, our findings suggest that effortful cognitive processing is communicated via gaze aversions.

Semantics of gaze: Deciphering the meaning of a listener’s gaze direction, gaze position changes, and blink frequency

ABSTRACT. In the present study, we developed a novel methodology to understand the semantics of perceived gaze patterns. Using a qualitative approach, we presented participants with videos showing a person that is involved in listening to brief (neutral vs. emotional) stories. The eye movements of the listening person were systematically manipulated regarding gaze direction, changes of gaze position, and blink frequency. After each presentation of a subset of these videos, participants were asked how they would verbally characterize the different gaze patterns. By applying semantic categorization procedures, we were able to link particular gaze patterns to distinct semantic categories (e.g., attentiveness, nervousness, empathy etc.). The resulting exploratory findings are subsequently submitted to rigorous experimental testing. Limitations in the generalizability of the present findings to other situational and social contexts will be critically discussed.

Looking for speaking: What determines language-specific expressions in motion event descriptions

ABSTRACT. This cross-linguistic experimental study examines the relationship between event construal and linguistic expression using eye tracking. We focus on descriptions of motion events elicited from a picture book by English, Italian, and Japanese speakers, because these languages fall into two groups under the typology of motion expression. Slobin (1991, 1996) presented a modified form of linguistic relativity, "thinking for speaking", claiming on the basis of linguistic experiments in various languages that the language we learn shapes the way we perceive reality. Our study replicated Slobin’s experimental method and investigated speakers’ eye movements while they described pictures. We examined how a speaker’s construal of an event tends to be reflected in the language used to describe it. We obtained data from 9 participants per language and analysed a scene in which "a boy fell off the cliff into the pond". We found tendencies within each language concerning which path components (DOWN, FROM, INTO) are expressed frequently and how eye movements, such as fixation durations, correspond to them. A comparison of eye movement patterns and language expressions revealed which aspects of the relation between perception and speech production are universal and which are individual.

Silent or Oral Reading in L2: An Eye-Tracking Study

ABSTRACT. This study aims to answer two questions: is there any effect of reading modality (oral vs. silent) on L2 text processing, and which type of reading contributes to more successful text processing and translation into L1? According to Hale et al. (2007), reading aloud facilitates understanding of the text despite imposing a larger processing load. Fuchs et al. (2001) suggested that oral reading fluency reflects overall reading competence. It has also been shown that silent reading is stronger for retelling narratives (Schimmel & Ness, 2017). A translation task can be used as a method for checking reading comprehension skills (Karimnia, 2014). In a two-group experimental design, native speakers of Russian (N=20, B2-C1 level of English) read two English texts either orally or silently, estimated the subjective difficulty of each text, and then translated them into Russian. Both texts were of the same length, topic, and level of readability (checked via http://readable.com). Participants’ eye movements were recorded (EyeLink 1000 Plus, SR Research). We measured total reading time (TRT), total fixation count (TFC), average fixation duration (AFD), and regression count (RC). The quality of the translations was assessed with Gilmullina's (2016) test of quantitative analysis. A Mann-Whitney U test showed that reading was significantly slower (TRT: p=0.008; TFC: p=0.027; AFD: p=0.036) when subjects read the text aloud rather than silently. No interaction was found between the quality of the translations and the subjective difficulty of the stimuli. Thus, oral reading slows down text processing without contributing to comprehension. Supported by research grant no. ID: 92566385 from St Petersburg University.
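The Mann-Whitney U statistic used for these comparisons is simple to compute by hand. A minimal sketch of the statistic only (the reported p-values additionally require the null distribution or a normal approximation, which standard statistics packages provide):

```python
def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a: its rank sum in the pooled, average-ranked
    data minus n_a*(n_a+1)/2.  Tied values receive average ranks."""
    pooled = sorted((v, src) for src, sample in ((0, sample_a), (1, sample_b))
                    for v in sample)
    rank_sum_a = 0.0
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + 1 + j) / 2  # average of the 1-based ranks i+1 .. j
        rank_sum_a += avg_rank * sum(1 for k in range(i, j) if pooled[k][1] == 0)
        i = j
    n_a = len(sample_a)
    return rank_sum_a - n_a * (n_a + 1) / 2
```

U ranges from 0 (every value in sample_a is below every value in sample_b) to n_a*n_b (the reverse), which is why it suits ordinal reading-time comparisons between the small oral and silent groups here.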

Scanpath analysis of eye movements during reading in children with high risk of dyslexia

ABSTRACT. The study presents a comparison of the global reading processes estimated via scanpath analysis (von der Malsburg & Vasishth, 2011) in Russian schoolchildren with and without reading difficulties. We tested 144 children from grades 1-5 (age range 7-11; 54 girls): 72 children were at high risk of dyslexia according to the Standardized Assessment of Reading Skills (Kornev, 1997), and 72 had no reading difficulties. The children read 30 sentences while their eyes were tracked. We identified five global reading processes that all children engaged in while reading. The processes differed in fixation durations, the probability of rereading single words, and the probability of rereading entire sentences. The comparison between grades and groups revealed that children without reading difficulties progressed quickly and by grade 4 engaged in a fluent, adult-like reading process. Children at high risk of dyslexia started with the beginner reading process, then engaged in the intermediate and upper-intermediate reading processes in grades 2 to 4. They reached the advanced process in grade 5 but rarely adopted the fluent reading process. In sum, the scanpath analysis revealed that children in the high-risk group and typically developing peers adopt similar reading processes, but the former group progressed much more slowly, with a 2-3-year delay.

Modeling Task-Dependency of Eye Movement during Scene Viewing

ABSTRACT. Eye movements during scene viewing reflect how the visual system processes and prioritizes information. Selection of fixation locations is driven by image features as well as by top-down factors and task constraints. Additionally, dynamical aspects such as scan path history and fixation duration influence selection. The SceneWalk model is a biologically inspired dynamical model for predicting fixation sequences on the basis of static saliency maps. In the current work, we added a spatio-temporal likelihood to the existing framework to jointly model fixation durations and location selection. To explore the influence of task and saliency, we investigated two model versions, using either general or task-specific saliency maps as a basis. We fitted model parameters separately for each individual in two guessing and two searching tasks. Parameters were inferred using a fully Bayesian, likelihood-based approach. We find that the optimal parameters differ significantly between the tasks. The parameters of our model represent interpretable quantities such as attention span; differences in parameters therefore offer insight into how the visual system adapts to task demands. Posterior predictive checks show that the model can reproduce individual differences in scan path statistics. We also show that dynamic components of eye movement improve model fit more than task-specific saliency does.

Using eye-tracking techniques for oculomotor signs of neglect
PRESENTER: Marina Shurupova

ABSTRACT. A common outcome of right-hemisphere stroke is unilateral spatial neglect. These patients demonstrate visual attentional deficits towards the contralesional side. The aim of our pilot study was to investigate oculomotor signs of neglect using a new experimental paradigm. Six patients (mean age 54.5±9.8) and twelve age-matched controls participated in the study. Patients were in the subacute phase of stroke (1 hemorrhagic, 5 ischemic) and had been diagnosed with neglect by a neuropsychologist. Patients performed an oculomotor task in which they had to select and fixate a target stimulus (a blue star) appearing simultaneously with a distractor (a black dot) to its left or right. Eye movements were recorded at a 250 Hz sampling rate. Patients showed a higher error rate when the target appeared on the left, although most patients self-corrected within two seconds after choosing the distractor. In addition, patients showed longer latencies to targets appearing on the left than on the right. All patients needed more than one fixation to settle on the target, whereas controls needed only one. Our experimental results demonstrate oculomotor signs of neglect. Future studies will add further quantitative results and may be applicable in clinical practice.

Oculomotor Control and Dual-Task Interference

ABSTRACT. For a long time, oculomotor control was regarded as largely unaffected by additional actions in other effector modalities. However, recent research suggested that saccade control - although prioritized over other simultaneous actions - can still exhibit substantial impairments. In the present study, we examined the temporal dynamics of oculomotor performance decrements in dual tasks by applying the psychological refractory period (PRP) paradigm, in which we varied the stimulus onset asynchrony (SOA) between a saccade and a manual RT task. Across 4 experiments, we examined differential effects of task order, task order instructions and spatial task compatibility on dual-task performance. Results revealed that performance of both the saccade and the manual response suffered at close temporal proximity, indicating structural and content-based interference mechanisms. Structural interference was observed in the form of longer RTs for Task 2 at short SOAs, suggesting a serial response-selection bottleneck; content-based interference emerged as longer RTs for incompatible than for compatible tasks, suggesting mutual crosstalk during serial processing. Based on these results, we reject the notion that oculomotor control can generally bypass central processing limitations and instead conclude that saccades are subject to the same sources of dual-task interference as other actions.

Eye Movements in Three-dimensional Multiple Object Tracking

ABSTRACT. Eye movements in multiple object tracking (MOT) tasks reflect the attention processing of an observer. Previous studies have revealed two gaze strategies during two-dimensional MOT tasks: a centroid-looking strategy and a target-switching strategy. When tracking several moving targets amidst distractors in an MOT task, observers are more likely to gaze at the central areas between targets, and frequently switch their gaze back and forth between the centre and the targets. However, little research has focused on where observers look and what influences eye movements during three-dimensional multiple object tracking (3D-MOT). The present study records eye movements using a 3D-MOT task based on virtual-reality technology, which can closely reflect the interaction between humans and the real world with stereo vision. The aim of the present study is to examine observers' looking strategy in 3D-MOT, how it is affected by the depth of the 3D space, and how it differs from eye behaviour in 2D tracking. We postulate that the target-switching strategy will be preferred in 3D-MOT and that observers will switch fixations more frequently to targets at greater depth in the 3D space.

Attentional biases in the size of fixational saccades

ABSTRACT. It is well established that spatial attention can bias the direction of fixational saccades. It remains unknown whether and when attention may also bias the size (amplitude) of fixational saccades. To investigate this, we cued attention to one of multiple items in visual working memory while manipulating the spatial demands on attention. In one condition, trials contained two colored tilted bars (presented to the left and right) that were both either near or far from fixation. A color cue (presented during the delay) instructed participants to select either memorandum. Additionally, we included a load-four condition in which items occupied the near and far locations on both sides. Critically, in these trials, the direction is not sufficient to select the cued memorandum as there are always two memoranda in the same direction (one near, one far). Consistent with prior work, we confirm that the direction of fixational saccades was robustly biased by the memorized location of the attended memorandum. More importantly, our data reveal that the size of the directionally biased fixational saccades was also modulated by the spatial demands on attention. Specifically, fixational saccades became larger when selecting the far item, but only when direction was insufficient.

Yarbus in the age of Webcam Eye-tracking

ABSTRACT. iMotions webcam-based eye tracking (WebET), combined with hidden Markov models (HMM) for fixation classification, is a potential tool for applied human-behaviour research and for teaching eye tracking to larger cohorts. The present study explores the feasibility of carrying out large-scale, replicable research projects.

To verify how relevant the iMotions WebET can be to applied research and teaching, part of the Yarbus (1967) study was replicated with N=10 participants. In an online study, participants were shown ‘The Unexpected Visitor’ in four conditions in a repeated-measures design, each condition asking a different question of the participants. Individual scanpaths and aggregated heatmaps were used for exploratory analysis. As in the seminal work by Yarbus, the present study asked whether WebET can be used to distinguish where people look based on the question asked. Areas of interest (AOIs) were marked and calculated in iMotions to establish best practices for quantifying WebET data.

Results show that even with a small sample size, the iMotions WebET combined with HMM fixation classification can accurately distinguish between scanpaths from different conditions. Larger, well-placed AOIs can provide eye-tracking insights helpful for understanding participants' top-down cognitive processing in the present study.
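As a sketch of the general idea only (this is not iMotions' actual classifier; states, transition and emission parameters are all invented), a two-state hidden Markov model can label each gaze sample as fixation or saccade from its speed, decoded with the Viterbi algorithm:

```python
import math

STATES = ("fixation", "saccade")
# Invented transition probabilities: fixations tend to persist
TRANS = {"fixation": {"fixation": 0.95, "saccade": 0.05},
         "saccade": {"fixation": 0.30, "saccade": 0.70}}
# Invented Gaussian emissions over gaze speed in deg/s: (mean, sd) per state
EMIT = {"fixation": (10.0, 15.0), "saccade": (300.0, 120.0)}

def log_gauss(x, mean, sd):
    """Log-density of x under a normal distribution."""
    return -0.5 * ((x - mean) / sd) ** 2 - math.log(sd * math.sqrt(2 * math.pi))

def viterbi(speeds):
    """Most likely fixation/saccade label for each gaze-speed sample."""
    prev = {s: math.log(0.5) + log_gauss(speeds[0], *EMIT[s]) for s in STATES}
    back = []
    for v in speeds[1:]:
        cur, ptr = {}, {}
        for s in STATES:
            best = max(STATES, key=lambda r: prev[r] + math.log(TRANS[r][s]))
            cur[s] = prev[best] + math.log(TRANS[best][s]) + log_gauss(v, *EMIT[s])
            ptr[s] = best
        back.append(ptr)
        prev = cur
    path = [max(STATES, key=prev.get)]          # best final state
    for ptr in reversed(back):                   # follow backpointers
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

The smoothing from the transition matrix is what distinguishes this from a plain velocity threshold: a single noisy sample is less likely to flip the label.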

Seeing your own webcam image feels distracting, but does not hurt learning: A webcam-based eye-tracking study

ABSTRACT. In video-call platforms like Zoom or Microsoft Teams, a thumbnail of one’s own webcam video feed is often visible. In this experiment, we use webcam-based eye tracking to investigate whether this self-view distracts learners and reduces learning from a video lecture. In a within-subjects experiment, 78 participants (Mage = 22.3) watched two video lectures, one with and one without the self-view visible (in counterbalanced order). Webcam-based eye tracking was implemented using the Gorilla Experiment Builder (www.gorilla.sc), and participants took part on their own computers. Participants first completed a 9-point calibration and 5-point validation procedure. They then watched the instructional video (either with the self-view visible or not), rated their experience of learning, and completed a post-test. This procedure was repeated for the other condition. Bayesian statistics were used to analyse the data. Participants felt more distracted and self-conscious in the self-view versus the no-self-view condition. However, the conditions were comparable in self-reported mental effort and post-test performance. Analysis of the webcam-based eye-tracking data shows limited effects of the self-view on viewing behaviour. In this presentation, we additionally discuss the quality of webcam-based eye-tracking data for stimuli with very large AOIs.

16:30-18:30 Session 14A: Symposium: Eye movements and higher-order text processing

Eye movements as a measure of higher-level text processing

Symposium Organisers: Jana Lüdtke (Freie Universität Berlin) & Mesian Tilmatine (Freie Universität Berlin)

Discourse as a core aspect of human cognition remains understudied in cognitive sciences (Mar, 2018). The use of eye-tracking technologies bears considerable potential for research on higher-level comprehension of written language (Cook & Wei, 2019). Consequently, there has been a trend in the past years to conduct more studies on higher-level processing of texts with a natural narrative flow, be that in the form of poetry, prose, or even newspaper articles. In this symposium, we will discuss our most recent contributions to this trend, mostly in the form of experimental data and improved or new models.

There are reasons why naturalistic texts are traditionally less prominent in eye-tracking research, mostly related to stimulus complexity. The symposium will thus review possible approaches to the methodological challenges associated with the use of naturalistic text stimuli. In that context, the talks will particularly focus on the role of individual differences in narrative and poetic perception (cf. Mak & Willems, 2019; Graf & Landwehr, 2015; Harash, 2021), as well as on possible ways to measure reading-related cognitive processes like mental simulation, mind-wandering, immersion, and foregrounding.

Location: LT1
16:30
Mind-wandering during reading of Siri Hustvedt’s Memories from the Future: Evidence from eye tracking

ABSTRACT. Participants (N=52) read selected parts (135 paragraphs in total) of Siri Hustvedt's novel "Memories from the Future" while their eye movements were recorded. After 30 pre-selected paragraphs, participants responded to the 13-item mind-wandering scale (MWS, Turnbull et al., 2019) probing the focus and contents of current thoughts. A principal components analysis (PCA) of the MWS responses produced 4 components: 1) immersion, characterised by on-task focus, vivid and detailed imagery, and positive emotion; 2) off-task thoughts related to worrying about the future; 3) fluctuating thoughts about the past and others; and 4) voluntary verbalisations. In order to examine the associations between different types of mind-wandering episodes and eye movements, the PCA scores were used as predictors in linear mixed models for the eye movement data. The results showed that readers' thoughts are reflected in their eye movements: high immersion and verbalisation increase the effects of word frequency and word length on eye fixation times. The results support the view that immersion in narrative text is an elementary part of the reading experience, and that it is reflected in eye movement control during reading.

16:50
Reading Russian poetry: An expert–novice study

ABSTRACT. Studying the role of expertise in poetry reading, we hypothesized that poets’ expert knowledge comprises genre-appropriate reading and comprehension strategies that are reflected in distinct patterns of reading behavior. We recorded eye movements while two groups of native speakers (n=10 each) read selected Russian poetry: an expert group of professional poets who read poetry daily, and a control group of novices who read poetry less than once a month. We conducted mixed-effects regression analyses to test for effects of group on first-fixation durations, first-pass gaze durations, and total reading times per word while controlling for lexical and text-level variables. First-fixation durations exclusively reflected lexical features, and total reading times reflected both lexical and text-level variables; only first-pass gaze durations were additionally modulated by readers’ level of expertise. Whereas gaze durations of novice readers became faster as they progressed through the poems, and differed between line-final and non-final words, poets retained a steady pace of first-pass reading throughout the poems and within verse lines. Additionally, poets’ gaze durations were less sensitive to word length. We conclude that readers’ level of expertise modulates the way they read poetry. Our findings support theories of literary comprehension that assume distinct processing modes which emerge from prior experience with literary texts.

17:10
Unraveling the social-cognitive potential of narratives using eye-tracking

ABSTRACT. Narratives have a unique ability to disclose the inner worlds of others, leading various scholars to hypothesize about an intricate link between reading narratives and social-cognitive abilities such as perspective taking. One of the ways in which narratives can represent the inner worlds of characters is through the use of viewpoint markers, i.e., linguistic elements that provide readers access to the perceptions, thoughts, and emotions of characters. In this study (N = 90), we used eye tracking to study individual differences in the linguistic processing of these viewpoint markers. Having collected eye-tracking data from 90 participants who read a 5000-word non-fictional narrative, we found diverging reading patterns for perceptual viewpoint markers, which were processed relatively quickly, and cognitive and emotional viewpoint markers, which were processed relatively slowly. Moreover, perspective-taking abilities and self-reported perspective-taking traits facilitated processing both in general and for perceptual and cognitive viewpoint markers in particular. As such, our study extends findings that social cognition is of importance for narrative reading, showing that social-cognitive abilities are engaged specifically by the linguistic processing of narrative viewpoint. Moreover, these findings show that higher-level abilities such as social-cognitive abilities affect low-level reading processes.

17:30
Different Kinds of Simulation During Literary Reading: Insights from a Combined fMRI and Eye Tracking Study

ABSTRACT. Mental simulation is an important aspect of narrative reading. In a previous study, we found that gaze durations are differentially impacted by different kinds of mental simulation. Motor simulation, perceptual simulation, and mentalizing as elicited by literary short stories influenced eye movements in distinguishable ways (Mak & Willems, 2019). In the current combined eye tracking and fMRI study, we investigated the existence of a common neural locus for these different kinds of simulation, using a fixation-related analysis for our fMRI data. We additionally investigated whether individual differences during reading, as indexed by eye movements, are reflected in domain-specific activations in the brain. We found a variety of brain areas activated by simulation-eliciting content, both modality-specific brain areas and a general simulation area. Individual variation in percent signal change in activated areas was related to measures of story appreciation as well as personal characteristics (i.e., transportability, perspective taking). Taken together, these findings suggest that mental simulation is supported by both domain-specific processes grounded in previous experiences, and by the neural mechanisms that underlie higher-order language processing (e.g., situation model building, event indexing, integration).

17:50
Effects of centrality on eye movements: Predictions by computational language models

ABSTRACT. Sentences differ in how much they contribute to the overall meaning of a text: central ideas are read more carefully and remembered better than more peripheral ideas. Centrality is usually operationalized using rating procedures in which participants explicitly evaluate the importance of a sentence in a text. In the present study, I used a computational measure of centrality based on a semantic vector space model. Centrality was defined as the cosine between the vector representing the overall meaning of a text and the vectors representing individual words or sentences. Eye movement data from the MECO corpus showed that centrality predicted participants’ eye movements while controlling for word length and frequency. Centrality affected early reading variables in a complex manner: On the one hand, central words were more likely to be skipped during first-pass reading. On the other hand, however, they were also processed more deeply as indicated by longer first-pass reading times if fixated. I will discuss these findings in relation to other studies that have investigated centrality effects. In addition, I will compare the new centrality measure with other computational measures that have recently been used to predict eye movement behaviour.
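A minimal sketch of the centrality measure described above, using toy hand-made 3-dimensional vectors rather than a real semantic vector space (the words and values are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centrality(word_vecs):
    """Cosine of each word's vector with the overall text vector,
    where the text vector is the sum of all word vectors."""
    text_vec = [sum(dims) for dims in zip(*word_vecs.values())]
    return {w: cosine(v, text_vec) for w, v in word_vecs.items()}

# Toy text about weather: "storm" and "rain" are central, "sofa" peripheral
vecs = {"storm": [0.9, 0.1, 0.0],
        "rain":  [0.8, 0.3, 0.1],
        "sofa":  [0.0, 0.2, 0.9]}
scores = centrality(vecs)
```

Words whose vectors align with the aggregate text vector score high; a real implementation would of course build the vectors from a trained semantic space rather than by hand.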

18:10
Eye movements as a measure of immersion and foregrounding in narrative poetry reading

ABSTRACT. The cognitive process of foregrounding is considered to be crucial to the appreciation of literary language. However, it remains unclear how it relates to processes of narrative immersion, as these two processing routes are theorized to be separate mechanisms with different effects on the reader. For a direct comparison between textual elements evoking both processing routes, we conducted an eye-tracking study on narrative poetry (a story told in rhymes), namely on two excerpts of Goethe’s "Faust". By comparing eye-gaze data and individual reading backgrounds with quantitative text data, subjective ratings, open statements, and tests of comprehension, we gained more detailed insights into the exact workings of emotional activation for both routes. In the talk, we will present our methodology for integrating these rich data, the most important results we found, and the implications for reader-response theories and for eye tracking as a method in empirical aesthetics.

This talk is part of the symposium "Eye movements as a measure of higher-level text processing"

16:30-18:30 Session 14B: Eye movement control in reading II
Location: LT2
16:30
Word difficulty determines regression accuracy in sentence reading

ABSTRACT. The occurrence of long-range regressive saccades during reading is related to linguistic processing difficulties. Two hypotheses regarding the accuracy of these regressions appear viable. When reading difficult words, high processing demands may hinder the formation of a reliable memory trace for their location, leading to a lack of precision when regressions are executed. Alternatively, the location of such words may be specifically tagged in spatial memory as future saccade targets, enabling more effective regressions. Participants were asked to read single-line sentences, identify a probe word that appeared to the right of each sentence, and move their eyes back to the corresponding location in the sentence. Target words were either difficult or easy to read (low vs. high word frequency and orthographic regularity), and the target word location was either close to or far from the probe. Difficult words substantially increased the precision of primary long-range regressions, which more often reached the target with accurate (single-shot) regressions. If the target was missed, fewer additional saccades and less time were needed until the eyes fixated the target word. These results support the notion that orthographically and/or lexically difficult words assume a privileged status in visual-spatial memory as targets of future regressions.

16:50
Does visual similarity cause more regressions in reading? An eye-tracking-based study.

ABSTRACT. Regressions account for 5-20% of eye movements in reading. Regressions are often found to accompany processing difficulty, which may be caused by syntactic ambiguity, semantic implausibility, or oculomotor targeting errors. In the present eye-tracking study, we examine regressions in sentences where processing difficulty arises due to implausibility, and we test the role of a context word’s visual similarity relative to a plausible alternative. Sentences in the plausible condition (e.g., There was an old horse that John had ridden when he was a boy) provide a baseline to contrast with two implausible conditions. In the visually similar implausible condition, a lexical neighbour (“house”) appeared in the place of the plausible context word (“horse”), while in the visually distinct implausible condition a distinct word (“place”) appeared, and each of these rendered the sentence implausible. A Bayesian analysis showed that, following the word ridden, participants made more regressions in the two implausible conditions than in the plausible condition, confirming the plausibility effect. However, during these regressive episodes, participants were no more likely to fixate the context word in the visually similar condition than in the visually distinct condition, with Bayes Factor evidence supporting a null effect of visual similarity, against our predictions.

17:10
When function words carry content

ABSTRACT. Eye movement studies on reading function words (FW) are rare, as are studies in Brazilian Portuguese (BP). While in most languages FW usually carry less semantic information and are generally shorter, in BP they can carry gender and plurality marks and are often as long as content words (CW) (e.g. aquele/es/a/as, meaning “that one” or “those”). Though studies often report that FW are skipped more often than CW, a study in English by Schmauder, Morris and Poynor (2000) showed that, with matched length and frequency, skipping rates and first-fixation durations for CW and FW were similar. Here we report results from analyses of FW and CW using data from the RASTROS corpus of natural reading in BP (Vieira, 2020). We found that, in general, fixation duration decreased and skipping rates increased on shorter, more frequent and more predictable words. For CW, predictability and length seem to affect processing independently. For FW, we found indications that only the longer FW were influenced by predictability. We argue that longer FW may be processed similarly to CW.

17:30
Does omitting mandatory commas affect the reading process?

ABSTRACT. Many orthographies mandate the use of commas to separate clauses and list items. However, casual writers routinely omit these mandatory commas; furthermore, commas are often misused (Trask, 2019). Even though the usage of commas (and their omission) is ubiquitous when reading texts, little research has been done on its effect on eye movements. One exception is the studies by Hirotani and colleagues (Hirotani, 2004; Hirotani et al., 2006), who found that using non-mandatory commas seemed to facilitate overall reading compared to omitting commas, although there were higher dwell times ahead of the commas. However, there was no evidence for longer global sentence reading times when mandatory commas were omitted. We present an eye-tracking experiment investigating the effect of omitting mandatory commas in five types of grammatical constructions in Spanish: concessive, adversative, listing, connective, and parenthetical. Sentences were presented with or without mandatory commas while readers' eye movements were recorded. We found no evidence for shorter global reading times due to comma presence. There was evidence for some differences in reading times (first-fixation duration, gaze duration, go-past time, total viewing time) for the pre-comma, post-comma, and subsequent regions, but there was no clear pattern suggesting a major advantage of comma presence.

17:50
The role of spaces in reading Finnish text

ABSTRACT. In alphabetic languages, spaces are functional segmentation cues allowing for faster word recognition. Spaces delineate word boundaries and help position the eyes optimally in the word. It has been found that reading English text without spacing is 30-50% slower than reading with spaces (Rayner & Pollatsek, 1996; Rayner et al., 1998). The current eye movement experiment compares reading Finnish sentences in spaced and unspaced format. The main question was how specific text properties affect the detection of words and word boundaries in unspaced text and, with that, reading speed in general. The results showed that unspacing text in Finnish increases sentence reading times by about 35%, in line with what was reported earlier for English. More importantly, the results also indicated that unspaced sentence reading is affected by average word length and sentence length, as well as by the average bigram frequency at word boundaries. The results thus indicate that even inexperienced readers of unspaced text are equipped to make use of specific word segmentation cues. Moreover, they indicate that there are some general text properties that facilitate unspaced text reading.
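One of the segmentation cues mentioned above can be sketched as follows (illustrative only; the function, toy corpus and sentence are invented, not taken from the study): the mean corpus frequency of the letter bigrams that straddle word boundaries once spaces are removed.

```python
from collections import Counter

def boundary_bigram_freq(sentence, corpus):
    """Mean corpus frequency of the letter bigrams spanning each
    word boundary of `sentence` after spaces are removed."""
    counts = Counter()
    for text in corpus:
        joined = text.replace(" ", "").lower()
        counts.update(joined[i:i + 2] for i in range(len(joined) - 1))
    words = sentence.lower().split()
    # The bigram at each boundary: last letter of word n + first of word n+1
    boundary_bigrams = [w1[-1] + w2[0] for w1, w2 in zip(words, words[1:])]
    return sum(counts[b] for b in boundary_bigrams) / len(boundary_bigrams)
```

The intuition is that rare boundary-straddling bigrams make the hidden word boundary easier to spot, so a sentence-level average of this kind could serve as a predictor of unspaced reading times.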

18:10
The Role of Visual Crowding in Eye Movements during Reading: Effects of Text Spacing

ABSTRACT. Visual crowding, generally defined as the deleterious influence of clutter on visual discrimination, is a form of inhibitory interaction between nearby objects. While the role of crowding in reading has been established in psychophysics research using RSVP paradigms, how crowding affects additional processes involved in natural reading, including parafoveal processing and saccade targeting, remains unclear. The current study investigates crowding effects on reading via two eye-tracking experiments. Experiment 1 is a sentence-reading experiment incorporating an eye-contingent boundary change in which letter spacing and preview validity are jointly manipulated. Experiment 2 is a passage-reading experiment with a line spacing manipulation. In addition to replicating previously observed letter spacing effects on global reading parameters, Experiment 1 found an interaction between preview validity and letter spacing indicating that benefits of reduced crowding on fixation duration were present only when parafoveal preview was intact. Experiment 2 found reliable but subtle influences of line spacing. Participants had shorter fixation durations, higher skipping probabilities, and less accurate return sweeps when line spacing was increased. These results extend the literature on how crowding affects reading and inform the question of whether the observed benefits of reduced crowding reflect facilitated linguistic processing in the parafovea or improved oculomotor control.

16:30-18:30 Session 14C: Real world and virtual reality
Location: LT8
16:30
Characterization of naturalistic free viewing behavior across the lifespan

ABSTRACT. Visual attention changes as we age, reflecting underlying maturational and degenerative processes in brain structure and function. Previous work investigating visual attention in aging is limited by the characterization of developmental and aging processes separately, and the reliance on task paradigms which fail to capture the visual complexity of the real world. The use of dynamic videos during unstructured free viewing provides a more ecologically valid means of investigating visual attention in individuals of different ages. The goal of the present study is to characterize naturalistic free viewing behaviour across the lifespan. We recorded saccade behavior from a large, cross-sectional cohort of normative individuals (n=497, aged 5-89) while they freely viewed naturalistic video clips that changed in content every 2-4 s. Averaged across clips, saccade amplitude and peak velocity decreased with age, while saccade frequency increased with age. Aligned to clip onset, the timing and magnitude of saccade rate and gaze clustering exhibited complex curvilinear trajectories with age. We propose that the trajectories of these saccade behaviors are mediated by structural and functional changes in underlying cortical and subcortical circuits. These findings have considerable implications for improved detection of neurological disorders that emerge during vulnerable windows of development and aging.

16:50
Visual stability in naturalistic scenes

ABSTRACT. The current study examines how visual stability is established in naturalistic scenes. Previous studies have shown that detection of position shifts is better when the saccade target object shifts than when the background or the whole image shifts (Currie et al., 2000). Additionally, briefly removing the target object from the screen (blanking) improves shift detection (Deubel et al., 1996). We tested whether blanking would improve shift detection for contextual information in naturalistic scenes. Participants were presented with images and instructed to execute a saccade to a highlighted target object. During the saccade, the saccade target, the whole image, or the background shifted. In control trials, no shift occurred. Half of the trials had a 250ms target blank, context blank, or all blank that occurred when the saccade was detected. Participants reported whether they detected a move. Target shifts resulted in the highest detection rate, and background shifts had the lowest detection rate. More importantly, we found that blanking improved shift detection only in the target shift condition but not for background or whole-image shifts, which suggests that the visual system uses a localized solution for establishing object correspondence across saccades that relies mainly on the saccade target for stability.

17:10
Finding landmarks – An investigation of viewing behavior during spatial navigation in VR using a graph-theoretical analysis approach

ABSTRACT. Vision provides the most important sensory information for spatial navigation. Recent technical advances open up new options for conducting more naturalistic eye-tracking experiments in virtual reality (VR), but also require new analysis approaches. Here, we propose a method to quantify characteristics of visual behavior by applying graph-theoretical measures to eye-tracking data.

The analysis is based on eye-tracking data of 20 participants, who freely explored a virtual city for 90 minutes with an immersive VR headset with a built-in eye tracker. We pre-processed the data and defined “gaze” events, from which we created gaze graphs. To these, we applied graph-theoretical measures to reveal the underlying structure of visual attention.

To investigate the importance of houses in the city, we applied the node degree centrality measure. Our results reveal 10 houses that consistently stand out in their graph-theoretical properties. As these houses fulfil several characteristics of landmarks, we named them “gaze-graph-defined landmarks”. Furthermore, we found that these gaze-graph-defined landmarks were preferentially connected to each other.

Our findings not only provide new experimental evidence on the development of spatial knowledge, but also establish a new methodology to identify and assess the function of landmarks based on eye-tracking data.
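Node degree centrality, the measure named above, is simply a node's number of distinct neighbours normalised by the maximum possible (n − 1). A minimal sketch on a toy gaze graph; the house IDs and the 0.75 landmark cut-off are made up for illustration and are not the study's actual data or criterion:

```python
from collections import defaultdict

def degree_centrality(edges):
    """Given gaze-graph edges (pairs of house IDs linked by gaze
    transitions), return each node's degree centrality: the number of
    distinct neighbours divided by (n - 1)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {node: len(nbrs) / (n - 1) for node, nbrs in adj.items()}

# toy gaze graph: house "H3" is viewed in connection with every other house
edges = [("H1", "H2"), ("H1", "H3"), ("H2", "H3"),
         ("H3", "H4"), ("H4", "H5"), ("H3", "H5")]
centrality = degree_centrality(edges)
landmarks = [h for h, c in centrality.items() if c >= 0.75]
```

Houses whose centrality consistently exceeds that of their neighbours across participants are the candidates the study terms gaze-graph-defined landmarks.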

17:30
An online experiment with deep learning models for tracking eye movements via webcam

ABSTRACT. Eye-tracking during online experiments allows the collection of larger and more diverse datasets in a less restrictive and more cost-effective manner. Computer vision methods that can track eye movements from webcam recordings have improved remarkably with the application of deep learning. However, their application is still restricted to offline setups due to the computation requirements and the difficulty of estimating physical calibration parameters (e.g., camera intrinsics and user-camera distance). Here, we embrace these challenges by designing an online experiment with robust solutions for such constraints. The experiment consisted of 65 participants completing a battery of five eye-tracking tasks that we used to compare three state-of-the-art appearance-based gaze tracking methods and two blink-detection algorithms. We report on multiple measures, including fixation accuracy and precision, smooth pursuit onset and angle, attended-zone classification, and saliency mapping, and also evaluate different calibration strategies. Our results demonstrate a mean fixation error in the range of 2-3 visual degrees for the best model. While these errors are not as low as those of the EyeLink 1000 (0.57°) and Pupil Core (0.82°) on these same tasks, they nonetheless encourage the use of unrestricted setups for accurate eye tracking in online studies (compare to e.g. 4.17° from WebGazer.js).

The pre-registration for our study can be found at https://osf.io/qh8kx/.
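Fixation errors like the 2-3° above are obtained by converting the on-screen distance between estimated and true gaze points into visual angle, which requires the display size and viewing distance. A minimal sketch; all parameter values in the example are illustrative assumptions, not the study's setup:

```python
from math import atan2, degrees

def pixels_to_degrees(err_px, screen_width_px, screen_width_cm, distance_cm):
    """Convert a gaze error in pixels to visual degrees, assuming the
    error is centred on the line of sight."""
    err_cm = err_px * screen_width_cm / screen_width_px
    return degrees(2 * atan2(err_cm / 2, distance_cm))

# e.g. a 100 px error on a 1920 px wide, 53 cm wide display viewed from 60 cm
err_deg = pixels_to_degrees(100, 1920, 53.0, 60.0)
```

In an online setting the physical screen width and viewing distance are exactly the calibration parameters the abstract flags as hard to estimate, which is why appearance-based methods must recover or approximate them.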

17:50
Georeferencing of eye movement data using ET2Spatial software

ABSTRACT. Eye-tracking research in cartography has been active in the last few decades, particularly as a core method for studying cognition in geographic visualization. Stimuli used in this field of research are mostly static maps. With the evolution of geovisualization techniques and of the technology itself, however, these stimuli have also evolved from analogue to digital and from static to dynamic. Interactive stimuli hence pose challenges when it comes to evaluating their usability through eye-tracking. This submission introduces ET2Spatial, an open-source tool intended for the analysis of eye-tracking data recorded on interactive web maps. The tool simplifies the labour-intensive analysis of screen recordings with overlaid eye-tracking data available in current eye-tracking systems. The tool's main function is to convert the screen coordinates of the participant's gaze to real-world coordinates and to allow exports in commonly used spatial data formats (shapefile, GeoJSON). These data can be loaded into Geographic Information System (GIS) software, where different visualization and analytical methods commonly used for spatial data can be applied. The tool and associated pilot studies aim to enhance the research capabilities of eye-tracking in cartography.