ECVP2015: EUROPEAN CONFERENCE ON VISUAL PERCEPTION
PROGRAM FOR WEDNESDAY, AUGUST 26TH

09:00-11:00 Session 16A: Interactive social perception and action


Location: A
09:00
Against the Unobservability Principle

ABSTRACT. A generally unexpressed assumption behind much current social cognition research is the so-called “unobservability principle” (UP). According to the UP, minds are composed of exclusively intracranial phenomena, perceptually inaccessible and thus unobservable to everyone but their owner (Krueger, 2012). Mental states, such as beliefs and intentions, are private, internal, and not observable in others. Contrary to the UP, I will argue that intentions are indeed visible in others’ movements. First, I will present evidence that intentions influence response properties and shape movement kinematics during movement execution (Becchio, Manera, Sartori, Cavallo, & Castiello, 2012). Next, I will show that observers are especially attuned to kinematic information and can use early differences in visual kinematics to anticipate another person’s goal (Ansuini, Cavallo, Bertone, & Becchio, 2015). This ability is crucial not only for interpreting the actions of individual agents, but also for predicting how, in the context of a social interaction between two agents, the actions of one agent relate to those of a second agent.

09:24
Sensorimotor learning influences understanding of others’ actions

ABSTRACT. The discovery of ‘mirror’ neurons (motor-related neurons which fire during both the performance of an action and the observation of another conspecific performing the same action; di Pellegrino, Fadiga, Fogassi, Gallese, & Rizzolatti, 1992), has led to renewed interest in the role of motor processes in social perception. In particular, it has been suggested that mirror neurons subserve the ‘understanding’ of others’ actions via the observer’s motor system (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996). However, even if mirror neurons do contribute to action understanding, their properties may be the result of learning to associate the perceptual representation of an action with the motor program for that action (Cook, Bird, Catmur, Press, & Heyes, 2014). Participants completed an action understanding task (Pobric & Hamilton, 2006) in which they judged the weight of boxes lifted by another person, before and after a period of ‘counter-mirror’ sensorimotor training, wherein they lifted heavy boxes while observing light boxes being lifted, and vice-versa. Compared to a control group, this training significantly reduced participants’ performance on the action understanding task. Performance on a duration judgement task was unaffected by training. These data suggest that the ability to understand others’ actions is the result of sensorimotor learning.

09:48
Interpersonal integration of perceptual judgments in joint object location

ABSTRACT. Prior research has shown that the ability to communicate the confidence associated with individual perceptual judgments predicts the accuracy of perceptual group judgments (Bahrami et al., 2010; Fusaroli et al., 2012). The present study asked whether additional processes of interpersonal integration operate when location information must be combined across different viewpoints on the same scene. Pairs of participants located an object in a virtual 3D environment. The two participants in a pair had different viewpoints, so that the relative uncertainty of location judgments on different spatial dimensions differed between them. Importantly, the uncertainty of locating objects was much higher on the depth dimension than on the orthogonal dimension. Three experiments investigated whether participants' use of location information provided by their partners would be weighted according to the partner’s uncertainty on a particular spatial dimension. The results confirmed that participants incorporated location information provided by their partners not simply by averaging but by flexibly weighting location information on particular spatial dimensions according to their partners’ uncertainties. Thus, groups of people are able to improve their joint perceptual accuracy by selectively weighting information by another’s perceived or estimated uncertainty.
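
The weighting scheme the results point to can be illustrated with a minimal reliability-weighted averaging sketch (a standard cue-combination formulation, not the authors' own analysis code); the estimates and variances below are hypothetical placeholders for each partner's per-dimension uncertainty.

    # Sketch: reliability-weighted integration of two partners' location estimates.
    # Each estimate is weighted by the inverse of its variance on that spatial dimension.

    def combine(est_a, var_a, est_b, var_b):
        """Return the variance-weighted average of two estimates (per dimension)."""
        w_a = 1.0 / var_a
        w_b = 1.0 / var_b
        return (w_a * est_a + w_b * est_b) / (w_a + w_b)

    # Hypothetical example: partner A is uncertain in depth, partner B on the orthogonal axis.
    depth = combine(est_a=2.0, var_a=4.0, est_b=2.6, var_b=0.5)    # dominated by B
    lateral = combine(est_a=1.1, var_a=0.5, est_b=0.3, var_b=4.0)  # dominated by A
    print(depth, lateral)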

10:12
The influence of dyadic gaze dynamics on joint and individual decisions

ABSTRACT. How do interpersonal behavioural dynamics predict individual and joint decisions? Recent interactionist views on social cognition suggest that the most under-studied and important aspect of social cognition may be interaction dynamics. However, it has hitherto proven extremely difficult to devise a controlled setup in which social cues, such as eye gaze, are subject to unconstrained interaction.

To address these issues, we use a dual interactive eye-tracking paradigm. Participants are presented with the face of an anthropomorphic avatar, the eye movements of which are linked in real-time to another participant’s eye-gaze. This allows for control of interaction aspects that are not related to the experience of gaze contingency.

Participants have to choose which of the two spheres on either side of the avatar face is the larger. The size difference between the spheres can be medium, small, or zero. In the latter condition in particular, gaze dynamics guide choices. Using cross-recurrence quantification, we analyse the time course of the gaze interactions and examine how this predicts individual and joint decisions about sphere size and which participant will follow the other, and we assess collaboration in a subsequent “stag hunt” game, a variation on the prisoner’s dilemma game.

10:36
From action observation to social interaction: Top-down influences on motor interactions

ABSTRACT. When shaking hands or catching a ball, humans not only need to perceive, understand and predict the interacting partner’s actions, but also to coordinate their own actions in time to interact successfully. Thus far, such social interactions have typically been investigated using paradigms that neglected the active part of the so-called observer; that is, participants passively observed the confederate’s action without actively reacting to or interacting with the observed action. This raises a first question: to what degree are findings from passive observation experiments transferable to situations in which participants actively engage in motor interactions? Second, predictions of an observed action are not only informed by online visual information, but are often influenced by prior assumptions and expectations. The second question we address is therefore to what degree motor control in social interactions is mediated by top-down influences. We present a series of studies examining the processes of social perception and action in truly interactive experimental setups. The results of these studies support the ecological validity of earlier findings, and they show that prior expectations powerfully influence and bias motor control in social interactions. We advocate the use of virtual reality to create interactive setups that are under high experimental control and allow natural interactions.

09:00-11:00 Session 16B: Brain responses to visual symmetry


Location: B
09:00
Brain Activity in Response to Visual Symmetry

ABSTRACT. There has been much progress in the study of the neural basis of symmetry perception. ERP studies reliably show a sustained posterior negativity (SPN), with lower amplitude for symmetrical than random patterns at occipital electrodes, from 250 ms after onset. The SPN is an automatic and sustained response and is broadly unaffected by task. The extended symmetry-sensitive network involves extrastriate visual areas and LOC, with consistent evidence from fMRI and TMS. Reflection is the optimal stimulus for a general regularity-sensitive network that responds also to rotation and translation. We tested whether the response to symmetry is dependent on viewing angle. When people classify patterns as symmetrical or random, the response to symmetry is view-invariant. When people attend to other dimensions, the network responds to residual regularity. Neural response to symmetry also scales with noise: the proportion of symmetrically positioned elements predicts the size of SPN and fMRI responses. Connections between the hemispheres are not critical because SPN amplitude increases with the number of axes, and is comparable for horizontal and vertical symmetry. The same ERP response to symmetry can come from either hemisphere, but it is stronger in the right hemisphere. Overall, there is a consistent link between brain activity and perceptual sensitivity to symmetry.

09:17
Symmetry Detection in typically and atypically lateralized individuals: A visual half-field study.
SPEAKER: Ark Verma

ABSTRACT. Visuospatial functions are typically lateralized to the right cerebral hemisphere, giving rise to a left visual field advantage in visual half-field tasks. In a first study we investigated whether this is also true for symmetry detection off fixation. Twenty right-handed participants with left hemisphere speech dominance took part in a visual half-field experiment requiring them to judge the symmetry of 2-dimensional figures made by joining rectangles in symmetrical or asymmetrical ways. As expected, a significant left visual field advantage was observed for the symmetrical figures. In a second study, we replicated the study with 37 left-handed participants with left hemisphere speech dominance. We again found a left visual field advantage. Finally, in a third study, we included 17 participants with known right hemisphere dominance for speech (speech dominance had been identified with fMRI in an earlier study; Van der Haegen et al., 2011). Around half of these individuals showed a reversed pattern, i.e. a right visual half-field advantage for symmetric figures, while the other half replicated the left visual-field advantage. These findings suggest that symmetry detection is indeed a cognitive function lateralized to the right hemisphere for the majority of the population.

09:34
The causal role of right lateral occipital (LO) cortex and right occipital face area (OFA) in symmetry detection: evidence from fMRI-guided TMS data
SPEAKER: Silvia Bona

ABSTRACT. Despite the salience of bilateral mirror symmetry in the visual world, its neural correlates are not established. We investigated the brain areas underlying symmetry detection in low-level stimuli (dot configurations) and high-level stimuli (faces) with fMRI-guided transcranial magnetic stimulation (TMS). We focused on lateral occipital (LO) cortex and the occipital face area (OFA) because of their respective roles in object/shape and face processing, for which symmetry represents a critical cue. In Study 1, we applied TMS over rightLO, leftLO, or two control sites while participants discriminated between symmetric and asymmetric dot patterns. TMS over both rightLO and leftLO impaired performance, with a greater effect following rightLO TMS, revealing that symmetry detection is right-lateralized. In Study 2, TMS was applied over rightLO, rightOFA, leftOFA (control site) and Vertex (baseline) while participants discriminated between symmetric and asymmetric dot configurations (as in Study 1) and judged whether a face was perfectly symmetric or not. Symmetry detection in dot patterns recruited both rightLO and rightOFA, whereas symmetry detection in faces selectively involved rightOFA. Overall, we suggest the co-existence of low-level/general and high-level/face-specific symmetry encoding mechanisms, with symmetry as a low-level feature recruiting both rightLO and rightOFA, whereas facial symmetry involves solely rightOFA.

09:51
Symmetry interactions in perceptual organization

ABSTRACT. I discuss several psychophysical findings, starting from the holographic approach to symmetry perception (van der Helm & Leeuwenberg, 1996). This approach explains that mirror symmetries and Glass patterns are about equally detectable, and that both are more detectable than repetitions, and that detection of imperfect mirror symmetries and Glass patterns follows a psychophysical law which improves on Weber's law. Against this background, I consider interactions between symmetry and other factors in perceptual organization, such as perceived depth, temporal aspects, and relative orientation. The latter, in particular, seems relevant to the perception of multiple symmetries, and, presumably therefore, also to their skewed distribution in flowers and decorative art (van der Helm, 2011). The findings suggest specific neural mechanisms and will hopefully inspire further research into these mechanisms.

10:08
The emergence of symmetry in the distributed response patterns in the ventral visual stream

ABSTRACT. Symmetry is a very salient feature of visual patterns. With this study, we wanted to investigate where and how the percept of symmetry emerges in the ventral visual stream. Participants were scanned while observing small dot patterns or larger stimuli that consisted of two of the smaller dot patterns. These composed stimuli were either symmetric or non-symmetric. Using multi-voxel pattern analyses, we investigated the relationship between the response patterns of the larger dot patterns and their constituent smaller parts. We found that the lateral occipital complex (LOC) could discriminate well between symmetric and non-symmetric patterns, better than between patterns of the same category. Classification was also better between two symmetric patterns than between two non-symmetric patterns. Decoding accuracy in LOC to discriminate between symmetric and non-symmetric patterns was not influenced by whether or not a part of the two dot patterns was shared, while two non-symmetric dot patterns were better classified when the two patterns did not share a smaller part. In early visual cortex, a different pattern was found: decoding accuracy was generally very good, and depended on whether or not the patterns had one part in common, regardless of the type of classification.

10:25
The Holographic model predicts amplitude of the brain’s symmetry response
SPEAKER: Alexis Makin

ABSTRACT. Van der Helm and Leeuwenberg (1996) developed a ‘holographic weight of evidence model’ that quantifies ‘perceptual goodness’ (i.e. salience, detectability) of different visual patterns. They state that W = E/N, where W is goodness, E is evidence of regularity, and N is the total information. We tested whether the W-load of different visual symmetries predicts the amplitude of the neural response. We recorded a symmetry-related ERP called the Sustained Posterior Negativity (SPN, see Supplementary Material Panels A-D) in six experiments. 1) SPN amplitude was greater for reflection than translation or rotation patterns. 2) The number of dots differentially modulated the SPN generated by reflection and translation. 3) The SPN scaled with the proportion of paired/unpaired dots. 4) The SPN scaled with the number of reflection axes. 5) The SPN was similar for symmetry and anti-symmetry, but again scaled with number of axes. Across these experiments, the correlation between W and SPN amplitude was very strong (Supplementary Material Panel E). Finally, we show that the brain can switch between coding goodness of objects and goodness in the image. We conclude that the holographic model captures most aspects of the neural response to symmetry.
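
For readers unfamiliar with the model, the quantity correlated with SPN amplitude is simply the ratio W = E/N. The sketch below computes W for a few hypothetical (E, N) pairs and its correlation with equally hypothetical SPN amplitudes; the numbers are illustrative placeholders, not the authors' data.

    import numpy as np

    # Holographic goodness: W = E / N (evidence for regularity over total information).
    def goodness(E, N):
        return E / N

    # Hypothetical (E, N) values for a few pattern types, plus hypothetical SPN amplitudes (microvolts).
    E = np.array([50.0, 25.0, 12.5, 60.0])
    N = np.array([100.0, 100.0, 100.0, 100.0])
    spn = np.array([-2.1, -1.0, -0.4, -2.6])

    W = goodness(E, N)
    r = np.corrcoef(W, spn)[0, 1]   # correlation between W-load and SPN amplitude
    print(W, r)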

10:42
Measuring symmetry responses across time and cortical area in the human brain

ABSTRACT. Most of what we know about brain mechanisms of symmetry perception has come from studies of point symmetries, especially mirror symmetry. Here we used a broader class of stimuli -- wallpaper groups -- to study the temporal evolution (using EEG) and spatial localization (fMRI and EEG) of brain responses to symmetric stimuli. Wallpaper groups contain the point symmetries, but repeat themselves in two directions, tiling the plane. The magnitude of EEG responses depends on the number of subgroups present, with more complex groups leading to larger responses. In a subset of wallpaper groups that differed only in the number of rotation axes, both EEG and fMRI responses scaled with the number of rotation axes. This parametric relationship was seen as early as V3 and V4 in fMRI, and was also present in the lateral occipital complex (LOC) and VO1, a ventral surface area anterior to V4. Responses were weak in dorsal areas MT, V3A and IPS0. The latency of the symmetry-evoked response was earlier in V3/V4 than in LOC, indicating that sensitivity in these areas developed in a feed-forward fashion, rather than being due to feedback from LOC.

09:00-11:00 Session 16C: Clinical visual psychophysics: from bench to bedside and beyond


Location: C
09:00
Contrast Sensitivity
SPEAKER: John Robson

ABSTRACT. Since the introduction of standardised printed letter charts by Herman Snellen in 1862, the ubiquitous measure of visual competence in optometric and ophthalmic practice has been visual acuity. While there are good reasons for using this simple measure to assess the deleterious effects of refractive error, it was realised from the beginning that it was less well suited to characterising the visual defects associated with retinal or more central nervous dysfunction. Thus we find in 1881 a Norwegian ophthalmologist, Ole Bull, proposing a chart with large letters of decreasing contrast that could be used in clinical practice to measure a patient's "light sense" as easily as a Snellen chart could be used to measure their "form sense". Unfortunately Bull was unable to make such a chart and it was not until the late 1980s that technological developments made it possible to print and calibrate such charts. In 2015 we may ask whether the measurement of contrast sensitivity should not have a greater place in the routine visual assessment of retinal disease and whether the use of printed letter charts may still have advantages in a clinical context compared with methods of measuring contrast sensitivity based on gratings and electronic displays.

09:24
Visual Function Self-Testing for Remote Monitoring of Maculopathy
SPEAKER: Yi-Zhong Wang

ABSTRACT. Maculopathy, including age-related macular degeneration (AMD) and diabetic retinopathy (DR), is the leading cause of severe visual impairment. Changes in lifestyle can slow the progression of maculopathy, and new anti-VEGF treatments can preserve vision in patients with neovascular AMD and diabetic macular edema (DME). For these interventions to be optimally effective, frequent monitoring of maculopathy is required, which, in turn, depends on timely detection of disease condition changes. Because annual or semiannual eye examinations may not be sufficient to ensure an early diagnosis, the preferred practice for maculopathy management must include self-testing by patients for remote monitoring of disease onset or progression. This presentation discusses desirable characteristics of visual-function tests that can be used by patients with maculopathy for self-testing (Liu, Wang & Bedell, 2014), and reports the results of clinical studies that employed a mobile shape discrimination hyperacuity test for remote monitoring of maculopathy (Wang et al., 2013; Kaiser et al., 2013). It also discusses the potential and challenges of self-testing and remote-monitoring tools for the detection of visual function changes associated with clinically significant changes of disease condition.

09:48
Visual field testing for the detection and management of glaucoma
SPEAKER: David Henson

ABSTRACT. The developer of visual field tests for glaucoma has to balance a number of factors: speed, accuracy and discriminatory power. The test needs to acknowledge that most patients are unreliable and that test-retest variability is dependent upon threshold sensitivity. The test needs to be fast, as patients start to lose attention after only a few minutes. This limits the number of locations that can be tested and has led to developments in threshold algorithms, the use of prior data, and the focus upon test locations with high informational value. This presentation will summarise some of the research looking at measures of attention during clinical perimetry, the selection of test locations, the development of new Bayesian methods, and the use of prior data from previous tests together with knowledge of the relationship between test-retest variability and sensitivity.

10:12
Automated static threshold perimetry using a remote eye-tracker
SPEAKER: Pete R. Jones

ABSTRACT. Static Threshold Perimetry [STP] is a technique for mapping luminance-detection sensitivity across the visual field. Current STP methods require (i) an explicit, button-press response (precluding testing of infants) and (ii) expensive, specialised equipment. Here we present a new measure that addresses these problems by combining a cheap, commercially available eye-tracker (Tobii EyeX: $135), with an ordinary desktop computer. Luminance detection thresholds were measured monocularly in 18 healthy adults (additional data collection ongoing), using both a Humphrey Field Analyzer [HFA] and a new procedure based on remote eyetracking. The eye-tracker was used to present (Goldmann III) stimuli relative to the current point of fixation, and to assess whether the participant made an eye-movement towards the stimulus. Participants completed each test twice to assess test-retest reliability. The eyetracker was able to produce maps of luminance-sensitivity, which: (i) were correlated with those produced by the HFA; (ii) exhibited similar (though slightly lower) reliability; (iii) could distinguish between luminance sensitivity in central (<10°) versus peripheral (>10°) retinal locations; (iv) and could differentiate between the blind spot and surrounding locations. This work demonstrates that STP can be performed using remote eye-tracking and low-cost components. Such a test could be particularly effective for screening preverbal infants.

10:36
The Glasgow Caledonian University Face Test: A New Clinical Test of Face Discrimination

ABSTRACT. Introduction: Accurate interpretation of face information is critical for social functioning. Impairments of face perception are associated with a range of ocular, developmental and neurological conditions. The aim of this study was to develop a face test which is both clinically applicable and able to capture normal variability. Methods: The new face test presents four synthetic faces in an “odd-one-out” task. The difference between the faces is controlled by an adaptive procedure which allows face sensitivity (i.e. the minimum difference between faces required for discrimination) to be measured. Results: A broad range of face discrimination sensitivity was established for a large group of healthy adults (N=52; 29.7±15.1 years). The test is rapid (3 min) and repeatable (Bland-Altman analysis; test-retest R² = 0.795). Older adults (72±4.1 years) showed preserved face discrimination ability. A case report of a patient who reported a lifelong difficulty with face perception indicated that the test is highly sensitive to impairments of face perception (Z-score of -7; cf. Z-score of -2 for existing face tests). Conclusions: The new face test offers a novel, sensitive and repeatable assessment of face discrimination ability. It overcomes limitations of existing tests such as restricted testing ranges and confounding factors (e.g. memory, familiarity).

11:00-12:00 Session 17

Motion, Time, Space & Magnitude / Vision & Motor Control / Wholes and Parts (Illusions, Objects & Grouping)

Location: Mountford Hall
11:00
Predicting curved motion during smooth pursuit and fixation
SPEAKER: unknown

ABSTRACT. Previous work has shown that motion prediction is enhanced during smooth pursuit of linear motions (Spering, Schütz, Braun, & Gegenfurtner, 2011). The current work extends those findings to certain curved motions. Subjects sat in front of a dark screen in a dark room. They were instructed that if the trial started with a green fixation spot, they should smoothly pursue the target and if the trial started with a red fixation spot, they should continue fixating that location during target motion. After a button press, there was a 350ms delay and then the ball (1 deg blurred red circle) and a goal (red vertical line segment, 3 deg tall, 0.1 deg wide) appeared. Motion trajectories were constant curvature arcs. Five curvature levels were tested and curvature was blocked. Gap sizes (5 and 8 deg) and motion durations (500, 800, or 1500ms) were randomly interleaved. Velocity was constant at 11 deg/s. Subjects had to predict if the ball would hit or miss the goal. Prediction performance was on average higher for the smaller gap size and longer motion durations. The benefits of smooth pursuit for motion prediction, as determined by d’, were not as clear as for linear motion.
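
Prediction performance here is summarised with d′. Assuming the standard equal-variance signal-detection definition, it can be computed from hit and false-alarm rates as below (an illustration with hypothetical rates, not the authors' analysis script).

    from scipy.stats import norm

    def d_prime(hit_rate, fa_rate):
        """Equal-variance SDT sensitivity: z(hit rate) - z(false-alarm rate)."""
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Hypothetical example: 80% of "hit" trials judged as hits, 30% false alarms on "miss" trials.
    print(d_prime(0.80, 0.30))  # ~1.37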

11:00
Investigating the veridicality of shape from shading for real objects
SPEAKER: unknown

ABSTRACT. We investigated the accuracy with which observers can infer the shape of real 3D objects from shading cues. Observers viewed sinusoidal, triangular and trapezoidal corrugations, illuminated from either the top-left or the left by a point-light source. Depending on light-direction, the shape and shading profiles of objects could be quite different. Terminating contours and the light source were not visible to the observer. The objects were first viewed monocularly, then monocularly in the presence of a white matte sphere placed to help identify light direction, and finally binocularly. In each condition, observers were asked to draw the depth profile of the objects as if they were seen from above, and to indicate the light direction. Dynamic Time Warping was used to quantify the similarity between the drawn profiles and the shape and shading profiles of the objects. Perceived shapes were more similar to the actual shapes than to the shading profiles, in all three conditions. Simple rules, such as “dark is perceived as deeper”, could not explain perceived shape as a function of shading profile. Instead, we present a heuristics-based model to link the perceived shape of an object to its shading variations.
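
Dynamic Time Warping aligns two profiles by locally stretching them before summing point-wise distances. A minimal textbook implementation is sketched below; the authors' exact distance metric and normalisation are not specified in the abstract, so the absolute-difference cost here is an assumption.

    import numpy as np

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic-programming DTW between two 1-D profiles."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Hypothetical example: a drawn depth profile vs. a shifted version of the true shape profile.
    drawn = np.sin(np.linspace(0, 2 * np.pi, 50))
    shape = np.sin(np.linspace(0, 2 * np.pi, 60) + 0.3)
    print(dtw_distance(drawn, shape))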

11:00
Effects upon magnitude estimation of the choice of modulus values
SPEAKER: unknown

ABSTRACT. We used magnitude estimation to investigate how the range of stimuli influences visual perception by changing the modulus value. Nineteen subjects with normal or corrected-to-normal visual acuity (mean age = 25.7 yrs; SD = 3.9) were tested. The display consisted of two gray circles (luminance 165 cd/m2), 18.3 degrees apart from each other. On the left side was the reference circle (visual angle of 4.5 deg), to which one of four arbitrary modulus values was assigned: (1) 20, (2) 50, (3) 100 and (4) 500. The subjects' task was to judge the size of the circles presented on the right side of the screen by assigning a number proportional to their size relative to the reference circle on the left side of the screen (the modulus). In each trial, ten circle sizes (1.0, 1.9, 2.7, 3.6, 4.5, 5.4, 6.2, 7.2, 8.1, 9.0 degrees of visual angle at 50 cm) were presented randomly. Our results show a high correlation between the circle size judgments and the different modulus values (R=0.9718, R=0.9858, R=0.9965 and R=0.9904). The power law exponents were (1) 1.28, (2) 1.34, (3) 1.29 and (4) 1.40. As the modulus value increased, the exponent increased, owing to the wider range of numbers available to judge the size.
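
The exponents reported follow Stevens' power law, response = k * size^n. Assuming one magnitude estimate per circle size, the exponent can be recovered by a straight-line fit in log-log coordinates, as sketched below with hypothetical responses (not the study's data).

    import numpy as np

    # Stevens' power law: response = k * size**n, fitted as log(response) = log(k) + n*log(size).
    sizes = np.array([1.0, 1.9, 2.7, 3.6, 4.5, 5.4, 6.2, 7.2, 8.1, 9.0])  # deg of visual angle
    responses = np.array([14, 31, 48, 69, 100, 126, 150, 184, 215, 250])   # hypothetical estimates

    slope, intercept = np.polyfit(np.log(sizes), np.log(responses), 1)
    print("exponent n =", slope, " k =", np.exp(intercept))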

11:00
An aperture synthesis variant of the Müller-Lyer illusion is sensitive to visual reference frame manipulation
SPEAKER: unknown

ABSTRACT. We created a variant of the Müller-Lyer (ML) illusion combining the two classical ML figures into a bisected double arrow surrounded by a rectangular reference frame. The double arrow was revealed only within a small Gaussian aperture around gaze position, while the frame was always drawn in its entirety. As subjects could see only one of the three arrowhead elements at a time, judgments of figure symmetry required synthesis of visual information across sequential fixations. Either a visual reference frame or oculomotor information is required to achieve such synthesis. We measured the size of the ML illusion with a two alternative forced choice method with a roving pedestal (Morgan, Melmoth and Solomon, 2013), which minimizes the effect of any cognitive biases. Measured illusion size in our aperture stimuli is comparable to that for completely visible controls. To differentiate between visual and oculomotor synthesis, we introduced eye position based changes in the position of the reference frame which would reduce the size of the illusion for visual, but not for oculomotor synthesis. This manipulation produces a consistent reduction in the size of the ML illusion, suggesting that synthesis is at least in part based on purely visual references.

11:00
Attentional allocation to feedback locations in motor movements
SPEAKER: unknown

ABSTRACT. Prior to executing a movement, attention is allocated to the movement target (Baldauf, Wolf & Deubel, 2006). However, when executing skilled actions, such as driving, performance is often improved when attention is directed to the external locations of feedback (Prinz, 1997). In light of these findings we examined the role of feedback and attentional allocation in the planning and execution of both pointing movements and saccades. Participants were presented with a circular array of eight digital 8s. They were asked to point or saccade towards a movement target, as indicated by a central arrow, while simultaneously identifying a briefly-presented discrimination target. Visual feedback on the accuracy of movement was provided, in the form of a brief colour change in one of the 8s immediately following the end of the movement. The results show elevated discrimination accuracy at both movement targets and feedback locations for pointing, but only at the movement target for eye movements. We discuss these findings in light of the role of visual feedback from one’s own hand during movement, as well as the changes in the expected retinotopic location of feedback across saccades.

11:00
Decoding perceived and imperceptible feature conjunctions in human early visual cortex
SPEAKER: unknown

ABSTRACT. Studies in humans using functional magnetic resonance imaging (fMRI) indicate that feature conjunctions are represented as early as primary visual, or striate, cortex (V1). However, a remaining challenge is to disentangle the perception of these conjunctions from their simple presence in the stimulus, an important distinction when identifying brain regions that correlate with feature binding per se. We investigated the neural correlates of both perceived and imperceptible conjunctions in human visual cortex. We used temporally-alternating stimulus displays consisting of differently-coloured perpendicular gratings, or a novel checked stimulus where the colour-orientation conjunction information was distributed over time. The colour-orientation conjunction could be reliably discriminated at all but the highest frequency tested (30 Hz) in the gratings. However, in the checked stimulus it was discriminable only within an intermediate range of temporal frequencies (7.5-15 Hz). We adapted these stimuli for fMRI and probed the response in striate and extrastriate cortex using multivariate pattern analysis. Feature conjunctions in all stimulus displays could be reliably decoded from patterns of activity in striate and extrastriate cortex, even when those conjunctions were imperceptible. Together, our results indicate that the binding of colour and orientation is not fully resolved by early visual processes.

11:00
Seeing actions in the fovea influences subsequent action recognition in the periphery
SPEAKER: unknown

ABSTRACT. Although actions often appear in the visual periphery, little is known about action recognition away from fixation. We showed in previous studies that action recognition of moving stick-figures is surprisingly good in peripheral vision even at 75° eccentricity. Furthermore, there was no decline of performance up to 45° eccentricity. This finding could be explained by action-sensitive units in the fovea also sampling action information from the periphery. To investigate this possibility, we assessed the horizontal extent of the spatial sampling area (SSA) of action-sensitive units in the fovea by using an action adaptation paradigm. Fifteen participants adapted to an action (handshake or punch) at the fovea and were then tested with an ambiguous action stimulus at 0°, 20°, 40° and 60° eccentricity left and right of fixation. We used a large screen display to cover the whole horizontal visual field of view. An adaptation effect was present in the periphery up to 20° eccentricity (p<0.001), suggesting a large SSA of action-sensitive units representing foveal space. Hence, action recognition in the visual periphery might benefit from a large SSA of foveal units.

11:00
On the shape properties affecting the detection of tilt
SPEAKER: unknown

ABSTRACT. If an object has vertical or horizontal edges in its shape, we can easily detect the tilt of the object. Conversely, if an object's shape lacks such edges, it is difficult to detect the tilt. Shapes without edges sometimes cause tilt blindness. Nevertheless, it is not impossible to detect the tilt of an object that has no clear edges. The purpose of this study was to examine the characteristics of object shape that affect tilt judgments. In the experiment, participants observed several figures that differed in the length of their vertical or horizontal edges, and they were required to judge whether each figure was tilted. The results showed that the clarity of edges affected the tilt judgment. At the same time, for figures without edges, changes in the aspect ratio of the shape also affected tilt detectability. These results suggest that characteristics other than edges also act as cues for detecting the tilt of an object.

11:00
Characterising shape aftereffects using composite radial frequency patterns
SPEAKER: unknown

ABSTRACT. Adaptation to radial frequency (RF) patterns has been used extensively to interrogate the properties of early level shape encoding mechanisms. However, little research has explored how multiple RF patterns can be combined to create a single composite stimulus that can reflect more realistic shapes. Such stimuli can be used to investigate what RF information is important for the detection and analysis of real world objects. For example we analysed the RF content of the outline head shape of a 3D head model and found that the phase of the third RF component was strongly correlated with the viewpoint of the head. We then replicated the face viewpoint aftereffect using a composite RF pattern to model outline head shape where viewpoint was cued by the phase of the RF3 component. This aftereffect was fairly tolerant to changes in size, where a 50% change in size resulted in a ~50% reduction in aftereffect magnitude. Further stimulus manipulations revealed this aftereffect was replicable with inverted face and non-face stimuli. Overall our experiments suggest a generic shape encoding mechanism that is highly sensitive to manipulations in the RF domain, which is also tolerant to size changes meaning it likely resides in extrastriate visual cortex.
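
For context, a radial frequency pattern is conventionally defined as a circle whose radius is sinusoidally modulated with polar angle. The sketch below builds a composite contour from several RF components, with the phase of the RF3 component standing in for the viewpoint cue described above; the component amplitudes and phases are illustrative assumptions, not the authors' stimulus parameters.

    import numpy as np

    def rf_contour(r0, components, n_points=360):
        """Composite RF contour: r(theta) = r0 * (1 + sum_i A_i * sin(f_i*theta + phi_i))."""
        theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
        r = np.full_like(theta, 1.0)
        for freq, amp, phase in components:
            r += amp * np.sin(freq * theta + phase)
        r *= r0
        return r * np.cos(theta), r * np.sin(theta)  # x, y coordinates of the contour

    # Hypothetical head-outline-like composite: RF2 + RF3, with the RF3 phase cueing viewpoint.
    x, y = rf_contour(r0=1.0, components=[(2, 0.05, 0.0), (3, 0.08, np.pi / 4)])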

11:00
Brain asymmetry influences biological motion perception in newborn chicks (Gallus gallus)
SPEAKER: unknown

ABSTRACT. A small number of light-points on the joints of a moving animal give the impression of biological motion (BM). Visually-naive chicks prefer BM to non-BM, suggesting a conserved predisposition to attend to moving animals. In humans and other mammals a network of regions, primarily in the right hemisphere, provides the neural substrate for BM perception. This has not been investigated in avian species. In birds, the information from each eye is mainly fed to the contralateral hemisphere. To study brain asymmetry, we recorded the eye spontaneously used by chicks to inspect point light displays (PLD). We also investigated the effect of lateralization, following light exposure of the embryos. In Experiment 1, highly-lateralized chicks aligned with the apparent direction of motion only when they were exposed to the PLD moving rightward first. Because alignment with a rightward-moving stimulus implies monitoring it with the left-eye system, our results suggest a right hemisphere dominance in BM processing. In Experiment 2, weakly-lateralized chicks did not show any behavioral asymmetry. Moreover, they counter-aligned with the apparent direction of motion, suggesting a modulatory effect of brain lateralization on social interactions. Environmental factors (light stimulation) seem to affect the development of lateralization, and consequently social behavior.

11:00
Color cast hypothesis of color-dependent Fraser-Wilcox optical illusion

ABSTRACT. A reddish variant of the Fraser-Wilcox illusion (1979) was created by Kitaoka (2010). Kitaoka also proposed an empirical rule for the illusory motion direction based on the color layout. I found a more general and fundamental rule explaining the phenomenon. The retina has three types of cones, which correspond to the three primary colors (red, green, and blue). The retinal image is frequently renewed by eye movements, such as saccades. If the response time differs among the three kinds of cones, then apparent motion would occur. However, this hypothesis alone does not fully explain the phenomenon. Therefore, I introduced an additional hypothesis: the primary color corresponding to the image's color cast is perceived more slowly than the other two colors. For example, in Kitaoka’s reddish pattern, in which only two primary colors (red and blue) are used, red is perceived more slowly than blue because red is the color of the color cast. If the image is bluish, the direction of motion is reversed even if the two colors used are the same. This rule also applies when the two primary colors combined are red and green, or green and blue.

11:00
Reaching and grasping with pliers-like tools: a kinematic analysis
SPEAKER: unknown

ABSTRACT. Evidence that tools are ‘incorporated’ into the body schema—and are controlled as if part of the body—is compelling, but relates mostly to tools that only extend the arm’s reach. Yet, tools commonly alter the relationship between hand posture and the tool tips in more complex ways. We examined how the brain compensates for such ‘tool geometry’ by studying grasps made with pliers-like tools. We manipulated tool ‘gain’, using tools that opened more (1.4:1) or less (0.7:1) than the hand opening. A 1:1 tool controlled for effects of tool use vs. the hand per se. Kinematic parameters reflected variations in object properties in the normal way (maximum tool opening increased with object size, for example). We compared tool grasps to a simple model, assuming the brain controls the hand so as to produce the same end-effector opening for a given object, independent of tool geometry. Varying tool gain caused substantial changes in hand opening in the predicted direction. These were insufficient, however, to fully compensate for tool geometry. Haptic-only estimates of perceived size, acquired with the same tools, were similarly biased. Our results are consistent with the brain compensating for tool geometry, but using a biased internal model.
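
The "simple model" mentioned can be written down in one line: if the tool multiplies hand opening by a gain g, producing the same tool-tip opening for a given object requires a hand opening scaled by 1/g. The sketch below is that reading of the model; the safety margin is a hypothetical free parameter added for illustration.

    def predicted_hand_opening(object_size, gain, margin=1.5):
        """Hand opening (cm) needed so the tool tips open to object_size + margin, given tip:hand gain."""
        return (object_size + margin) / gain

    for gain in (0.7, 1.0, 1.4):  # tools opening less than, the same as, or more than the hand
        print(gain, predicted_hand_opening(object_size=6.0, gain=gain))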

11:00
Vibration to increase or decrease strength of illusory motions
SPEAKER: unknown

ABSTRACT. The Fraser–Wilcox optical illusion (1979) is a figure that is physically static but is perceived to be moving. Kitaoka (2010) created color-dependent variants of this optical illusion. Meanwhile, Yanaka et al. (2011) pointed out that the color-dependent Fraser-Wilcox illusion is perceived as a strong illusory motion when it is vibrated, by a PC program or by hand, at several Hz. Under vibration via a computer, the illusion can be affected by limited frame rates and the afterimage of the computer display. With regard to vibration by hand, the challenge lies in determining the stroke conditions and vibration frequency at which the strength of an optical illusion increases or decreases. In this work, we developed vibration equipment using a linear motor whose stroke and vibration frequency are set and controlled through a PC program. This equipment facilitates the observation of the effects of stroke conditions and vibration frequency. Illusory motions, such as those of the color-independent “waterfall” and “UFO” optical illusions of Kitaoka, which are classified as CDI, as well as the drifting triangles illusion, are reinforced by vibration that is perpendicular to the illusory motion. During vibration of the scintillating Hermann grid, the illusion is extinguished.

11:00
Number-space association in synaesthesia: An fMRI investigation
SPEAKER: unknown

ABSTRACT. Background: The laterality effect (LE) reflects the automatic classification of numbers in terms of left versus right in relation to the midpoint 5. Response times are faster for bilateral (e.g., 2-8) as opposed to unilateral (e.g., 1-4) pairs, illustrating the close link between numbers and space. Objectives: Here we look at the neural correlates of number-space association by examining the brain response in a spatial-form synaesthete (M.M.) and sixteen non-synaesthete controls. Method: Participants reported the physically larger number in a size congruity task. Congruity and laterality were manipulated orthogonally. Congruent (numerical value and physical size match) and incongruent (numerical value and physical size mismatch) trials were presented for bilateral (e.g., 2 - 8) and unilateral number pairs (e.g., 2 - 4). M.M. represents numbers in the following manner: 8 6 4 2 0 1 3 5 7 9, so that large and small quantities are split against the midpoint zero. Results: Only for M.M. did the LE elicit significant activity in the supramarginal gyrus (bilaterally) and in the left angular gyrus. Conclusions: These results strongly support the automatic activation of space by long-term numerical representation, and the role of the supramarginal gyrus in space-numerical coding.

11:00
White-matter pathway connecting sensory cortical regions involved in optic-flow processing
SPEAKER: unknown

ABSTRACT. Previous studies have reported concurrent activation in the visual, multisensory and vestibular areas during optic-flow stimulation (Cardin & Smith, 2010; 2011). This study aimed to investigate how those optic-flow selective areas communicate through white-matter pathways, by combining functional magnetic resonance imaging (fMRI) and diffusion-weighted imaging (DWI). Using fMRI, we localised the optic-flow selective sensory areas in six participants. We performed probabilistic fibre tractography (MRtrix toolbox; Tournier et al., 2012) on the DWI data obtained from the same participants, and identified a white-matter tract connecting the multisensory/vestibular areas in the parietal lobe (VIP, p2V, PcM) and the vestibular area in the temporal lobe (PIVC). The anatomical shape and location of this tract are consistent with those of the tract identified in post-mortem studies (Sachs, 1892; Vergani et al., 2014). Results of tractography were evaluated using Linear Fascicle Evaluation (LiFE; Pestilli et al., 2014), which yielded statistically significant evidence supporting the existence of this tract. These findings suggest that the multisensory/vestibular areas in the parietal lobe (VIP, p2V, PcM) and the vestibular area in the temporal lobe (PIVC) communicate through this pathway, and that this pathway may support sensory integration underlying optic-flow processing.

11:00
The role of task in the interaction between gestures’ and words’ meaning

ABSTRACT. This study aims to verify whether the priming effect of gestures on words with the same meaning is modulated by the activation of different types of information in words. The priming gestures’ meanings prompt visuo-spatial information, yielding a pictorial semantic context for the meaning of the target words. The meaning of the target words can be activated through different types of information depending on the task. Behavioral and electrophysiological evidence was collected in a lexical decision task and an image formation task. The behavioral data showed a priming effect of the meaning of gesture in both tasks. The electrophysiological data confirmed this result, showing a significantly larger N400 during the lexical decision task. In the image formation task we found an N300 effect modulated by gestures’ meaning. The early flexible integration of gestures’ and words’ meaning seems to depend on the type of information elicited in the target words by the task.

11:00
Eccentricity effects in optic flow parsing
SPEAKER: unknown

ABSTRACT. Rushton & Warren (2005) proposed the existence of a flow parsing mechanism that globally subtracts optic flow resulting from self-movement. Any remaining motion can then be attributed to the movement of objects in the scene. Accordingly, stationary participants fixating the centre of a radial expansion field perceive an eccentric probe to move towards the centre, consistent with global subtraction of the outwards radial flow (Warren & Rushton, 2009). Furthermore, the perceived illusory movement is larger at 4 deg than 2 deg eccentricity, which is expected given the increase in flow speed with eccentricity. Here we investigate in more detail the dependence of the magnitude of this effect on probe eccentricity. Stationary participants fixated the centre of an expanding radial field of limited lifetime dots simulating forward movement at 0.6 m/s. The perceived trajectory of a horizontally displaced (±1, ±2, ±3, ±4, ±5 deg), vertically moving probe was indicated by orientating an onscreen gauge. Potential contributions of local motion mechanisms were minimised by removing optic flow in an aperture (diameter = 6 deg) surrounding the probe. Perceived trajectory was biased towards the centre and this effect increased approximately linearly with eccentricity.

11:00
An Experimentally Constrained Theory For Levelt's Propositions and The Scalar Property Of Multistable Perception
SPEAKER: unknown

ABSTRACT. Reversal time distributions in multistable perception exhibit a characteristic gamma-like shape, which remains largely invariant across displays, observers, and stimulation levels, whereas distribution means span two orders of magnitude and feature somewhat paradoxical input-dependencies known as Levelt's propositions (Levelt, 1967; Pastukhov and Braun, 2007; Blake et al., 1971; Murata et al., 2003; Walker, 1975). This implies that deterministic and stochastic contributions to the dynamical process underlying the alternation statistics must satisfy a peculiar balance (Kim, Grabowecky, Suzuki, 2007; Van Ee, 2009; Brascamp, Van Ee, Noest, Jacobs, Van den Berg, 2008; Pastukhov et al., 2013). Our hierarchical model of stimulus integration by ensembles of stochastic bistable nodes, fully constrained from experimental observations, can account for the shape and scalar property of reversal time distributions at all orders (Cao et al. 2015, in preparation) as well as numerous other properties. We show that successive truncations of the higher-order dynamics can provide important insights; in particular, the reduction to a second-order diffusion process reveals that the scalar property relies on adequate input-dependence of the step distribution, while further reduction to a first-order leaky-integrate-and-fire model uncovers possible mechanisms for each of Levelt's propositions.
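
The "scalar property" referred to means that reversal-time distributions rescale with their mean while keeping the same shape; under a gamma description this amounts to a roughly constant shape parameter k = mean^2 / variance across conditions. A method-of-moments check of that invariance is sketched below on simulated, purely hypothetical dominance durations.

    import numpy as np

    def gamma_shape(durations):
        """Method-of-moments gamma shape parameter: k = mean**2 / variance."""
        d = np.asarray(durations, dtype=float)
        return d.mean() ** 2 / d.var(ddof=1)

    rng = np.random.default_rng(0)
    # Hypothetical reversal times from two stimulation levels: means differ, shape should not.
    slow = rng.gamma(shape=4.0, scale=1.0, size=500)   # mean ~4 s
    fast = rng.gamma(shape=4.0, scale=0.25, size=500)  # mean ~1 s
    print(gamma_shape(slow), gamma_shape(fast))        # both ~4 if the scalar property holds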

11:00
Removing binocular cues disrupts the lower visual field advantage for grasping but obeys Weber’s law

ABSTRACT. Humans achieve better performance when grasping stimuli positioned in the lower than in the upper visual field (VF). Moreover, visuomotor brain regions (such as SPOC) also show a lower VF preference for hand actions (Rossit et al., 2013). The current study investigated whether the lower VF advantage for grasping is related to the availability of binocular cues. Right-handed participants were asked to grasp objects in their lower and upper VF under conditions of either monocular or binocular vision. Under binocular viewing there was a stronger relationship between object size and maximum grip aperture when objects were presented in the lower VF as compared to the upper VF, whereas no lower VF advantage was observed in the monocular condition. In addition, a striking dissociation was observed between monocular and binocular grasping: in the monocular condition the ‘just noticeable difference’ (JND) increased with object size in accordance with Weber’s law, but not in the binocular condition. These results suggest the existence of a fundamental distinction between the way that object size is computed under binocular and monocular viewing conditions. Moreover, they indicate that the lower VF advantage for grasping is ‘boosted’ by the availability of binocular depth cues (stereopsis and vergence).

11:00
The influence of object history on correspondence in the Ternus display
SPEAKER: unknown

ABSTRACT. How is the visual system able to know which elements belong together despite the input being ambiguous and incomplete? This correspondence problem could be solved on the basis of low-level factors, as for example motion energy based on luminance contrast, or by taking into account higher-level object representations. To investigate this question, we used the Ternus display, in which three elements are presented from one frame to the next, shifted by one position. Depending on how correspondence between the elements is resolved, this ambiguous apparent motion display can be perceived as one element jumping across the other two (element motion) or as all three elements moving together as a group (group motion). We manipulated the object history of the Ternus elements by presenting the elements in the beginning of each trial before starting the actual Ternus display either as moving together as a group along the same random motion trajectory or as moving independently, each following different motion trajectories. Participants perceived more group motion when the elements had a common than an independent motion history, suggesting that object history had an effect on how correspondence was solved. These results imply that higher-level object representations can influence correspondence.

11:00
Elementary motion cues to animacy perception: speed changes elicit social preferences in naive domestic chicks (Gallus gallus domesticus)
SPEAKER: unknown

ABSTRACT. Motion cues, implying the presence of an internal energy source, elicit animacy perception in adults and preferential attention in infants. We investigated whether speed changes affecting adults’ animacy ratings elicit spontaneous social preferences in visually-naïve chicks. Observers evaluated the similarity between the movement of a red blob and that of an animate living creature. The red blob entered the screen and moved along the azimuth. Halfway through its trajectory the object could either continue to move at a constant speed and direction, or reverse its motion direction and/or linearly increase its speed. The average speed, the distance covered by the object and the overall motion duration were kept constant across stimuli. Subjects reported significantly higher animacy ratings for accelerating objects, regardless of whether they reversed their motion direction. Two-day-old chicks were tested for their spontaneous preference for approaching the red object moving at a constant speed and trajectory (inanimate stimulus) or an identical object, which suddenly accelerated and then decelerated again to the original speed (animate stimulus). Chicks showed a significant preference for the animate stimulus, indicating that motion cues causing animacy perception in humans elicit spontaneous preferences in naïve animals.

11:00
Perceived junction changes in crowding revealed with a drawing paradigm
SPEAKER: unknown

ABSTRACT. In crowding, objects that are discernible when presented alone become indiscernible when flanked by close-by objects. Here, we investigated appearance changes in crowding with a drawing paradigm. Participants drew stimuli presented in the visual periphery. Eye tracking assured that stimuli were only viewed when participants kept fixation. The drawings, made under free viewing conditions, were aimed at making the peripherally viewed stimuli and the freely viewed drawings appear as similar as possible. Targets consisted of line configurations and letters with various junctions between line elements. Targets were presented with or without flankers. To quantify junction changes in the resulting drawings compared to the stimuli, the drawings were evaluated with a recently developed scoring system. We found high rates of junction changes in crowding. Most changes were omissions: junctions present in the stimuli were missing in the drawings. This was due to the frequent 'error' of not depicting presented line elements. L-junctions were more often added than X- or T-junctions. Flanker junctions determined perceived target junctions: X-junctions were more often added to targets when present in the flankers. We propose that drawing is a useful tool to investigate crowding, providing a fine-grained characterization of appearance changes in crowded peripheral vision.

11:00
Is implied flow necessary for global shape coding in textured contours?
SPEAKER: unknown

ABSTRACT. Radial frequency (RF) patterns, shapes formed by the sinusoidal modulation of the radius of a circle, allow for the demonstration of global integration of local information around a shape. Textures with RF modulation of orientation are also globally processed, with the impression of a flow implying closure (flowsure) being observed to be critical for such integration to occur (Tan, Bowden, Dickinson, & Badcock, 2015). Psychophysical methods with four experienced observers were used to measure shape-deformation thresholds to determine whether this same requirement was necessary for global integration to occur in a textured (or second-order) RF contour. Gabor-sampled RF patterns were utilized where the orientation of the patches on the path was either coincident with the path, orthogonal to the path, or randomly oriented around the pattern. Even when patches had orientations that were not tangential to the radius of a circle (as in conventional sampled RF patterns), global integration was observed. Textured RF patterns did not conform to the same requirements as modulated textures and flowsure of elements was not observed to be required for global integration to occur in such textured RF patterns.

11:00
Use of online vision for reach-to-grasp movements in adolescents with autism spectrum disorders
SPEAKER: unknown

ABSTRACT. Movement disturbances in autism spectrum disorders (ASD) have been a focus of research in addition to their social communication problems (e.g., Leary & Hill, 1996). However, the kinematic properties of reach-to-grasp movements in adolescents with ASD have not yet been revealed. Here, we investigated how online vision affects the kinematic properties of reach-to-grasp movements in adolescents with ASD, compared to typically developing (TD) peers. Participants, wearing liquid crystal shutter goggles, reached for and grasped a cylinder with a diameter of 4 or 6 cm. Two visual conditions were tested: the full vision (FV) condition (the goggles remained transparent during the movement) and the no vision (NV) condition (the goggles closed 0 ms after movement initiation). The two visual conditions were alternated with each trial in one experimental session (Alternated condition), or each condition was blocked in the session (Blocked condition). TD showed a larger peak grip aperture (PGA) difference between NV and FV conditions in the Blocked condition than the Alternated one. The majority of ASD participants showed kinematic patterns similar to TD. The results suggest that movement disturbances in ASD cannot always be explained by a failure to use online vision for motor control.

11:00
Automatic imitation of hand and foot movements is independent of observed body posture

ABSTRACT. Automatic imitation describes a stimulus-response compatibility effect whereby we are faster to perform a movement that matches an observed movement than one that is incongruent with the observed movement. Here we test for automatic imitation of hand and foot movements across different (whole) body postures. Images of a seated person were projected onto a wall so that the chair in the image appeared to touch the floor. We tested two observed body postures: the (moving) hand was either above the (moving) foot, or below. The participants’ task was to perform a hand or foot response to a target letter presented superimposed on the body stimuli. The current results revealed significant automatic imitation effects for both types of observed body posture. The effects cannot be accounted for by a match of observed body to spatial frames (such as vertical Simon-like effects for hand and foot responses) as these were only aligned in the hand-above but not the foot-above condition. These types of “whole body” automatic imitation effects could provide a potential experimental tool for developing links between automatic imitation and perception-action associations found in social contexts such as the unconscious copying of postures and mannerisms (motor mimicry).

11:00
Equivalent noise (EN) analysis of motion direction discrimination in adults with Autism Spectrum Disorders (ASD)
SPEAKER: unknown

ABSTRACT. EN analysis differentiates the influence of local neural noise, which affects the precision of local directional estimates, from the efficiency with which these estimates are averaged (Dakin et al., 2005). Recent models of atypical perceptual processing in ASD propose differences in precision, possibly arising from altered endogenous neural noise (Pellicano & Burr, 2012; Simmons et al., 2009; Davis & Plaisted-Grant, 2014). This has recently been assessed in ASD using a rapid EN procedure (Manning et al., 2014). We adopted an extensive EN procedure allowing more detailed characterisation of the EN function: sensitivity to average motion direction was measured at multiple, increasing levels of stimulus noise. Adults with ASD were at least as sensitive to global motion direction as neurotypical adults when local directional variability was low. At greater levels of stimulus noise, when multiplicative noise is the dominant influence on performance, our results suggest a reduced influence of multiplicative noise in ASD, consistent with Manning et al. (2014). These findings are discussed in the context of theoretical models of precision and atypical perception in ASD.
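For context, the standard equivalent-noise model (in the form popularised by Dakin et al., 2005) relates the observed direction-discrimination threshold to internal noise and sampling efficiency; the basic additive-noise version is

\[ \sigma_{obs}^{2} = \frac{\sigma_{int}^{2} + \sigma_{ext}^{2}}{n} \]

where \(\sigma_{obs}\) is the measured threshold, \(\sigma_{int}\) the internal (local) noise, \(\sigma_{ext}\) the external directional variability of the stimulus, and \(n\) the number of effectively pooled local estimates. Extended versions add a multiplicative noise term that dominates at high \(\sigma_{ext}\), which is the regime discussed in this abstract.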

11:00
Visual processing of average size by chimpanzees
SPEAKER: unknown

ABSTRACT. Many studies have argued that the human visual system can compute the mean size of sets of circles (e.g., Chong and Treisman, 2003). Is such statistical processing of visual information unique to humans? Although comparative studies have implied that humans' perceptual grouping ability is superior to that of other species, it remains unknown whether other species can represent the overall statistical properties of multiple similar objects. We presented chimpanzees and humans with contrasting pairs of arrays consisting of either one circle or a set of 12 circles. The mean size of the circles was larger in one array than in the other. There were three experimental conditions: heterogeneous (circles within each array were of different sizes), homogeneous (circles within each array were the same size), and single (only one circle in each array). Chimpanzees and humans were required to touch the array containing the larger circle(s). The results show that there was little difference in accuracy between the heterogeneous and homogeneous conditions for either chimpanzees or humans. In addition, performance in these two conditions was superior to that in the single condition. This is consistent with the results of Chong and Treisman (2003). The study suggests that chimpanzees can represent overall statistical properties.
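As a point of clarification, "mean size" here presumably refers to the simple arithmetic average of the item sizes within an array (whether size is taken as diameter or as area varies between studies and is not specified in the abstract):

\[ \bar{s} = \frac{1}{N} \sum_{i=1}^{N} s_i \]

with \(s_i\) the size of the \(i\)-th circle and \(N = 12\) for the multi-item arrays.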

11:00
Enhancing the world with the mind: Shape adaptation exaggerates shape differences
SPEAKER: unknown

ABSTRACT. Adaptation to different visual properties can produce distinct patterns of perceptual aftereffects. Some, like those following adaptation to colour saturation, seem to arise from recalibrative processes, in which adaptation updates a perceptual norm. As all relevant inputs are encoded relative to a single norm, recalibration affects the appearance of all inputs similarly, including the adaptor. Other aftereffects seem to arise from contrastive processes that exaggerate differences between the adaptor and test stimuli without affecting the adaptor’s appearance. Recently it has been suggested that norm-based coding is a common strategy for complex spatial patterns, such as shapes and faces. We therefore decided to determine whether a recalibrative or contrastive process underlies the shape aspect ratio aftereffect. We mitigated retinal contributions by adapting to an oval that jittered over a range of retinal positions. We found that adapting to a moderately elongated shape made narrower shapes appear even narrower, while simultaneously making more elongated shapes appear even more elongated. These data suggest that aspect ratio aftereffects arise from a contrastive process that exaggerates differences between the adapted and other values. More generally, spatial adaptation may enhance the salience of novel stimuli, rather than recalibrate our sense of what constitutes a ‘normal’ shape.

11:00
Impaired discrimination of radial motion in early-onset cannabis users
SPEAKER: unknown

ABSTRACT. Rationale: Early-onset cannabis use is associated with impaired visual processing. Objectives: The present study investigated whether event-related potentials (ERPs) elicited in a radial motion discrimination task differ between early-onset cannabis users and non-users. Method: 18 early-onset cannabis users (mean age 22.44±5.01 years; age of onset 15.11±1.28 years) and 25 controls (mean age 22.04±3.46 years) were evaluated. Stimuli were 50 low-contrast (<16%) dots moving radially outward or inward in pseudo-random order. Five levels of motion coherence were tested: 6, 24, 30, 38 and 80%. Mixed-measures ANOVAs were run with Group (control, cannabis) as the between-subjects factor and Motion coherence as the within-subjects factor. Dependent variables were accuracy and the peak amplitudes and latencies of the N2, P2 and P3 components. Results: N2, P2 and P3 amplitudes were significantly reduced in the early-onset cannabis group compared to the control group, but we found no significant differences in latencies. The P2 peak amplitude reduction over the right parietal area correlated significantly and negatively with the total number of years of cannabis use in the cannabis group. Conclusion: Prolonged cannabis use with an early age of onset is associated with reduced neural correlates of motion processing.

11:00
Depth constancy in grasping is only apparent
SPEAKER: unknown

ABSTRACT. It is known that the distance at which objects are presented affects their perceived depth, which becomes smaller at larger object distances. A similar bias is found in grasping tasks when objects are located at eye height and only stereo information specifies their 3D structure. Since viewing the top part of an object provides contour information that helps reveal its structure, we asked whether the visuomotor system is immune to these biases when interacting with objects seen from above. Participants grasped an object presented at different distances and at two heights (at eye height and 130 mm below eye height) along its depth axis. We found that grip aperture was systematically biased by object distance along most of the trajectory. However, whereas the bias persisted up to the end of the movement in the eye-height condition, it vanished towards the end in the below-eye-height condition. These findings suggest that grasping actions are not immune to the biases typically found in perceptual tasks. Online visual control can counteract these biases only when direct vision of both digits and the final contact points is available, as when objects are seen from above and the hand is in close proximity to them.

11:00
Temporal predictions in tone sequences
SPEAKER: unknown

ABSTRACT. The timing of stimulus onset is a key component of temporal judgements in daily life; however, the human sense of time is subjective. The present study aimed to understand how the predictability of stimulus properties can improve perceived timing. We presented isochronous sequences of 3, 4, 5 or 6 auditory tones, interspersed at random within each block. The timing of the last stimulus could deviate, and participants reported whether it was 'early' or 'late' relative to the expected regular timing. In the "scale" condition, the tones formed a musical scale. In the "shuffle" condition, the order of the tones was randomized. In all sequences the last tone was 440 Hz, to rule out perceptual distortions due to the frequency being judged. Results indicate overall better discrimination performance in the scale than in the shuffle condition. Furthermore, the last stimulus in the sequence was perceptually accelerated for the longest sequence in the scale condition. These effects can be explained by hypothesizing that the melodic pattern acts as a predictive cue, thereby also providing better discriminability of stimulus onset timing. Expectations resulting from the combination of scale and sequence length also lead to a perceptual anticipation of stimulus timing.

11:00
Late, decision-related biases in reports of visual motion direction
SPEAKER: unknown

ABSTRACT. Following a fine-discrimination task on the direction of a field of moving dots relative to an oriented reference line, subjective reports of motion direction can be biased away from the reference (Jazayeri & Movshon, 2007). A decoding model that applies a weighting profile during stimulus decoding can quantitatively account for such repulsion. Alternatively, the repulsion may reflect a relatively late response bias. Here, we manipulated the reference line during the task: in the first experiment, subjects (n=5) performed the same fine-discrimination task in the presence of the reference, but subsequently estimated motion direction in its absence. The weighted decoding model predicts perceptual biases under these circumstances, but we found subjects' responses to be unbiased and veridical. In the second experiment, following the fine-discrimination task, a reference line was present during the estimation phase, but we manipulated its angular position (shifted by -6°, 0° or +6° with respect to the discrimination phase). In this case, the directions reported by the subjects were biased, but the bias was yoked to the location of the shifted reference line. Taken together, these results are better explained by a late, decision-related bias than by an early, sensory or decoding bias.

11:00
Perceived speed of mixed-contrast random-dot kinematograms
SPEAKER: unknown

ABSTRACT. Perceived speed and subsequent driving behaviour are thought to be altered in conditions of low contrast, e.g. when driving in fog (Thompson, 1982; Snowden, Stimpson & Ruddle, 1998). Here, we investigate perceived speed in scenes containing both high- and low-contrast components (e.g. street lights or fog lights visible through the fog). We varied the proportions of high- and low-contrast dots in random-dot kinematograms (RDKs) and investigated the effect on perceived stimulus speed. We manipulated the proportion of high-contrast dots (20%, 50%, 80%, 100%) and the speed (4 deg/s and 8 deg/s) of the mixed-contrast RDK. Perceived speed was measured using a 2AFC design in which participants (N=15) reported the faster of a mixed-contrast standard RDK and a low-contrast test RDK. RDKs were circular patches (diameter 7 cm, dot density 5.2 dots/cm²). Low-contrast dots (8% contrast) and high-contrast dots (64% contrast) were presented against a mid-grey background. Standard and test RDKs were presented simultaneously, with a separation of 10 cm, for 500 ms. Mixed-contrast RDKs were perceived as faster than low-contrast RDKs; however, no significant effect of speed or of the proportion of high-contrast dots was found. These results suggest that high-contrast information determines perceived speed regardless of the relative proportion of high- to low-contrast components.

11:00
The effect of simulated vision loss on walking paths
SPEAKER: unknown

ABSTRACT. Individuals with unilateral visual neglect have difficulties walking. They tend to pass through doorways off-centre and collide with objects on the neglected side. Neglect is an attentional disorder associated with reduced awareness of objects to one side of the body. Hemianopia, a sensory deficit in which vision is lost to one side of fixation, often co-occurs with neglect. We simulated the perceptual effects of neglect and hemianopia and examined their contribution to walking difficulties. Twelve healthy volunteers wore a head-mounted display and walked through free-roaming virtual environments. Trajectories were recorded as participants walked towards virtual targets located 7 m away in open, closed, empty and cluttered environments. Walking paths through empty space towards targets were unperturbed by simulated hemianopia or neglect. When walking through cluttered spaces, participants occasionally collided with obstacles. We conclude that the restriction of seen space associated with neglect or hemianopia is not responsible for the major difficulties experienced by patients with neglect.

11:00
The Role of the Magnocellular Visual Pathway in Object Recognition
SPEAKER: unknown

ABSTRACT. Visual categorization plays an important role in the fast and efficient processing of the information surrounding us, yet the neuronal basis of fast categorization has not been established. Two main hypotheses exist; both agree that the first impressions are based on information acquired through the magnocellular pathway. It is unclear whether this information arrives via magnocellular input running in parallel with the ventral pathway, or via top-down mechanisms executed through connections between the dorsal pathway and the frontal cortex. A categorization task was performed by 39 subjects, who decided about the size of objects based on their first impression. Stimuli targeting the magno- and parvocellular pathways were distinguished by their spatial frequency content. Anodal, cathodal and sham transcranial direct-current stimulation were used to assess the role of frontal areas. Stimulation did not bias the accuracy of decisions for stimuli optimized for the parvocellular pathway. For stimuli optimized for the magnocellular pathway, cathodal stimulation decreased the subjects' performance, whereas anodal stimulation increased it. Our results support the hypothesis that top-down mechanisms, which promote fast predictions through coarse information carried to the orbitofrontal cortex by the magnocellular pathway, are crucial in fast categorization.

11:00
The colorful stranger in the mirror – the strange-face-in-the-mirror illusion revisited
SPEAKER: unknown

ABSTRACT. The "strange-face-in-the-mirror" illusion is a broad perceptual phenomenon encompassing various illusory impressions, such as perceived deformations of one's own face or seeing unknown, animal or archetypal faces, which seem to occur when gazing at one's own reflection in a mirror for an extended period of time (Caputo, 2010). The present study investigated whether differently coloured ambient light affects the occurrence and intensity of the illusion. Participants gazed at their own face for five minutes under red, green, blue and neutral ambient light and described their impressions after each gazing interval. In addition, they rated the intensity of the illusory impressions perceived while gazing at their face. To test for a relation between the strength of the perceptual impression and top-down mechanisms, e.g. a general susceptibility to belief in paranormal phenomena, we employed the Revised Paranormal Belief Scale (Tobacyk, 2004). The perceived intensity of the illusion was stronger under red and blue than under green and neutral light (large effects for the greatest perceived intensity during the gazing intervals), with people more susceptible to paranormal beliefs reporting more illusory perceptions in general.

11:00
Comparing the effects of contrast on perceived speed for linear and radial gratings
SPEAKER: unknown

ABSTRACT. At low speeds, lower-contrast linear gratings appear to move more slowly than higher-contrast gratings. This effect is reduced, and even reversed, at higher speeds. Are similar effects observed for ring-like radial gratings? Although drifting linear and radial gratings can be matched for local spatio-temporal properties, radial gratings have a more complex global structure, approximating the optic flow associated with either self-movement or object movement in depth. Using a standard 2IFC method we assessed the perceived speed of a low-contrast (8%) reference grating moving at 1, 4 or 8 deg/s (Exp 1, N = 19) and 2, 6 or 12 deg/s (Exp 2, N = 18) relative to a higher-contrast (64%) comparison. Linear stimuli were Gabor patches (SF = 1 cpd, σ = 3.33 deg) and radial stimuli had matched spatial parameters. Consistent with previous studies, participants judged lower-contrast linear gratings as markedly slower than higher-contrast gratings, except at the highest speeds tested. This was also true for radial gratings; however, biases in perceived speed for these stimuli were even more pronounced at low reference speeds (1-6 deg/s). Contrast-dependent effects on speed perception thus appear to vary with the global structure of the stimulus.

11:00
Reading social intention in movement kinematics
SPEAKER: unknown

ABSTRACT. Spatio-temporal parameters of voluntary motor action may help optimize human social interactions, yet it is unknown whether individuals spontaneously perceive informative social cues borne by action. This study investigates for the first time whether social intention can be implicitly detected from motor actions from a second-person perspective. An actor and a partner participated in a task in which, depending on an auditory cue, one of them had to grasp and move a wooden dowel under time constraints. Before this main action, the actor performed a preparatory action, namely placing the dowel on a starting mark. The information about who would perform the main action was provided only through the actor's headphones. Analysis of motor performance revealed that actors initiated the preparatory and main actions differently depending on whether or not they knew they had to perform the main action. Strikingly, partners showed similar effects on the main action despite having received no relevant prior information. Our data thus indicate that social intentions can be spontaneously perceived in voluntary motor actions and suggest implicit cognitive processing of the social scope of others' actions during social interaction.

11:00
Unconscious priming effect in visual scene with multiscale objects.
SPEAKER: unknown

ABSTRACT. Biederman and Cooper (2001) showed that the priming effect in a naming task remains the same regardless of differences in the size of prime and target. However, in this and similar work the size difference was small, or the sizes used came from the same range. There are two differently perceived ranges of object sizes: perception of objects larger than 1-1.5 deg (depending on object class) is scale invariant, whereas objects smaller than 1.5 deg are perceived increasingly poorly as stimulus size decreases. In this work we investigated whether a large prime can prime a target smaller than 0.5 deg in a match-to-sample task. The sample object appeared for 200 ms, and the SOA between the sample object and the test event was 1200 ms. The task was four-alternative forced choice. The sample size was 0.1 or 0.2 deg, and the noise level was 0 or 40%; the SOA between prime and test event was 300 ms. The prime appeared for 150 ms and was masked, with the mask renewed every 150 ms. Presentation of a congruent prime reduced reaction times in the most uncertain condition (stimulus size 0.1 deg, noise level 40%) compared to conditions with no prime or an incongruent prime.

11:00
Inter-scale suppression and facilitation in motion-discrimination are unaffected by dichoptic presentation
SPEAKER: unknown

ABSTRACT. Discrimination of the motion direction of a fine-scale pattern is impaired when a static coarse-scale pattern is added to it. The strength of this impairment is unaffected by dichoptic presentation (Derrington et al., 1993), suggesting that the interaction between motion sensors tuned to high and low spatial frequencies occurs after binocular combination. Interestingly, discrimination of motion direction improves when a static fine-scale pattern is added to a moving coarse-scale pattern. In this work we tested whether this facilitation is also unaffected by dichoptic presentation. Using a mirror stereoscope, we measured duration thresholds for Gabor patches moving horizontally at 2 deg/s. We tested four conditions: two dichoptic presentations, a) a moving 1 c/deg pattern in one eye and a static 3 c/deg pattern in the other eye, and b) a moving 3 c/deg pattern in one eye and a static 1 c/deg pattern in the other; and two monocular presentations, with both the static and the moving pattern presented to the same eye. Results from 4 subjects showed that the impairment and facilitation effects in motion discrimination were equally strong in monocular and dichoptic presentations. We suggest that facilitation in motion discrimination arises, after binocular combination, from the interaction between two motion mechanisms tuned to coarse and fine scales.

11:00
GLM-based decoding of contour classification from EEG signals
SPEAKER: unknown

ABSTRACT. As our brain makes sense of continuously changing stimuli from the environment, parsing the precise time-course of a cognitive task remains a challenge (King & Dehaene, 2014). Here, we present a GLM-based decoding method that makes it possible to test at which moment a specific neural feature becomes a predictor of a cognitive state on a single-trial basis. To this end, we analyse electroencephalographic responses of human subjects performing a 2AFC contour classification task (involving 11 Gabor elements; Mathes et al., 2006), in which local stimulus features are integrated into a coherent visual percept and then categorized into one of two classes. By using both contour and non-contour trials, we compute the probability of a stimulus having been presented given a neural response, trial by trial. While contour integration can be decoded from occipital areas with 57.7% accuracy, oscillatory activity within frontal-parietal areas predicts contour classification with 65.7% accuracy. With these results, we argue that characterizing the neural correlates of a particular cognitive task in a time-resolved fashion sheds light on the temporal organization of cognitive processes and provides a novel method for understanding how neural representations are manipulated and transformed over time.
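One generic way to implement such GLM-based single-trial decoding is a logistic model of the stimulus class given the neural features; this is only an illustrative sketch, not necessarily the authors' exact formulation:

\[ P(\text{contour} \mid \mathbf{x}_t) = \frac{1}{1 + \exp\!\left(-\beta_0 - \boldsymbol{\beta}^{\top}\mathbf{x}_t\right)} \]

where \(\mathbf{x}_t\) is the vector of neural features (e.g., band-limited oscillatory power) on trial \(t\). Fitting the weights \(\beta\) separately at each time point yields decoding accuracy as a function of time, indicating when a feature becomes predictive of the cognitive state.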

11:00
Psychophysical approbation of an algorithm for coherent motion perception
SPEAKER: unknown

ABSTRACT. Moving dot stimuli are used to study mechanisms of motion perception. Unfortunately, independent studies yield diverse threshold values, e.g. 5.6±0.39% (Ridder, Borsting, Banton, 2001); 15.34±4.71% (Milne et al., 2002); 25% (Slaghuis, Ryan, 1998). This dissonance may arise from the lack of a common conception of stimulus design for motion perception, as well as from the individual experience of test participants. We studied how threshold values are influenced by differences in stimulus design (shape of the test field, moving dot density) and by the type of psychophysical testing protocol. The lowest threshold values were obtained with an elliptical test field (r = 6.2 deg at 50 cm), constant dot velocity vectors (2 deg/s) and limited fluctuations in dot density over time. With the method of constant stimuli the coherent-motion threshold was 5.0% (SD 0.4), and with an adaptive-staircase 4AFC protocol it was 6.5% (SD 1.7). Preliminary results suggest that both perceptual learning and participant fatigue affect the repeatability of test results (Lee, Lu, 2010).

11:00
Inversion effects are stronger for subordinate than for basic-level action recognition
SPEAKER: unknown

ABSTRACT. Previous results showed that actions can be recognized in multiple ways, suggesting that several recognition levels exist in action recognition (e.g. a waving action can be recognized as a greeting or as a wave). Categorization tasks suggest that the recognition of social interactions is more accurate at the basic level (e.g. greeting) than at the subordinate level (e.g. waving). What is the origin of this basic-level advantage? Here we examined whether basic-level recognition relies to a larger degree on configural processing than subordinate recognition of social interactions. To do so, we probed basic-level and subordinate recognition performance (RT and discrimination ability, d') of 20 participants for upright and inverted social interactions; larger inversion effects are typically associated with stronger configural processing. Participants saw one image at a time and reported whether it matched a predefined action. Our results showed that, contrary to our initial hypothesis, subordinate recognition of social interactions was significantly more affected by stimulus inversion than basic-level recognition. Moreover, recognition performance was better for subordinate than for basic-level recognition. We show that these results can be well explained by top-down activation of snapshot templates.

11:00
Distance and Time Estimation of Outdoor Routes Varying in Complexity and Encroachment
SPEAKER: unknown

ABSTRACT. A greater number of corners in a traversed route has been shown to increase distance and time estimates for the route (Sadalla & Magel, 1980). The availability of optic flow information can also influence distance perception, resulting in larger estimates (Sun et al., 2004). Ninety-six university undergraduates were shown 8 videos of outdoor walking routes from the first-person perspective. The 4 critical routes contained either 1 or 7 corners (low/high complexity) and were either in an open park or on a forested trail (low/high encroachment). These routes were 200 meters long and took 175 seconds to view. Half of the participants had a GPS map in the corner of the video. Participants provided quantitative estimates of route length and route duration, and eye-tracking data were collected while they watched the videos. Results showed that the high-encroachment conditions produced larger distance and time estimates. This is consistent with increased complexity of optic flow information resulting in larger distance estimates. For both time and distance estimation, the high-encroachment/high-complexity condition produced significantly larger estimates. High route complexity also resulted in larger distance estimates. Eye-tracking data revealed several differences in gaze patterns across the experimental conditions.

11:00
Sliding motion by different edge contrast
SPEAKER: unknown

ABSTRACT. Pinna and Spillmann (2005) showed that, in an array of grey square-shaped checks whose central area has different black and white edges from those in the surround, apparent sliding motion of the central area is perceived while keeping the gaze fixed on a moving dot. We systematically examined the role of edge contrast using all four adjacent-edge patterns: (a) black edges on the right and bottom, white edges on the left and top; (b) black edges on the left and bottom, white edges on the right and top; (c) the reversed edge polarity of (a); and (d) the reversed polarity of (b). Patterns (a) and (b) can be perceived as convex, whereas (c) and (d) can be perceived as concave. All combinations of the four patterns in the centre and the surround were presented together with a horizontally moving dot, and the direction and magnitude of the apparent sliding motion were measured. Results demonstrated that the direction of sliding motion changed when the centre and surround patterns were switched, and that sliding motion was perceived independently of concavity and convexity, but was not perceived when the surround area was level with the background. We discuss our results from the viewpoint of the integration of the local light direction.

11:00
Visual memory in reaching and grasping
SPEAKER: unknown

ABSTRACT. The accuracy with which humans execute goal-directed grasping movements depends on the availability of vision. To grasp a target successfully, its position in space as well as its size must be processed. If visual information about the target is unavailable, a stored target representation needs to be accessed. Previous research suggests that object position and object size are two distinct features which are processed and stored in different cortical areas and also show different decay characteristics. Here, we tested if typical alterations in grasping kinematics due to increased memory demands (i.e. larger grip opening) reflect a decay of position information or size information. We manipulated the availability of visual feedback during grasping and introduced two different pre-response delays. Additionally, the grasp position was varied, either requiring a long reach toward the target (far condition), a short reach (near condition) or no reach (fixed condition). If only information about target size was required (fixed condition), grasp kinematics were unaffected by the availability of vision. In contrast, grasp kinematics changed with increased memory demands in near and far conditions suggesting rapid decay of position information following visual occlusion and a more stable representation of object size.

11:00
Electrophysiological correlates of motion extrapolation
SPEAKER: unknown

ABSTRACT. Motion extrapolation (ME), the ability to predict the future states of moving objects that are hidden by an occluder, is critical for interacting with a dynamic environment. In a classical paradigm, participants estimate time to contact (TTC) by pressing a button when an occluded moving target reaches a certain cue. Research using this paradigm has shown that motion adaptation of the specific region in which the target will be occluded produces a shift in the TTC estimate: adaptation in the same direction as the target increases TTC, whereas adaptation in the opposite direction shortens it (Gilden et al., 1995). In this study, we asked whether the modulation of TTC by motion adaptation is reflected in the Contingent Negative Variation (CNV), a frontal electrophysiological component related to timing processing. Results showed a larger CNV amplitude after adaptation in the same direction as the target, possibly suggesting that visual and frontal areas interact during ME. Furthermore, we asked whether motion extrapolation could elicit an N2 component, which is normally elicited at posterior sites at the onset of visible motion. Results showed a negative component peaking at 190 ms post-occlusion at posterior sites ipsilateral to the direction of ME, potentially indexing the "start" of the actual ME.

11:00
The effects of contrast dissimilarity on crowding.
SPEAKER: unknown

ABSTRACT. Visual crowding is a phenomenon in which peripheral object recognition deteriorates in clutter. Target-flanker dissimilarity generally facilitates object recognition in crowded conditions and reduces the spatial extent of crowding. However, Rashal and Yeshurun (2014) reported an exception to this rule when they found that low contrast targets are strongly crowded by high contrast flankers, even though they are dissimilar to each other. This might be because their stimuli used at-threshold display durations and backward masking, which are known to further increase crowding (Vickery, Shim, Chakravarthi, Jiang & Luedeman, 2009). In order to examine whether the unconventional contrast dissimilarity effects reported by Rashal and Yeshurun (2014) are of a general nature or specific to situations with backward masking, we systematically manipulated flanker contrast (high or low) and the presence of a backward mask. We observed increased crowding effects in conditions with masking and when low contrast targets were surrounded by high contrast flankers. However, these effects were not as prominent as reported by Rashal and Yeshurun (2014), even though we used a larger contrast difference. Nevertheless, the contrast dissimilarity effect on crowding was also present without masking, albeit in a weaker form, confirming the generality of this effect.

11:00
Implied Motion Priming and Motor Expertise
SPEAKER: unknown

ABSTRACT. Visual perception of implied human actions was examined via a short-term priming paradigm. First, we checked whether a visual priming effect was observed with visible primes depicting movements; the stimuli were static images that either did or did not imply human motion. Second, we investigated whether the degree of motor prime-target congruency influenced the perception of human movements, and whether motor expertise affected priming effects. Twelve French elite female gymnasts and twelve matched controls performed a speeded two-choice response time task. They were presented with congruent and incongruent prime-target pairs and had to decide whether the target stimulus represented a movement or a static position. Moreover, we manipulated three levels of prime-target similarity to distinguish between (i) low-level physical repetition, (ii) same-movement, and (iii) different-movement priming effects. A main effect of prime-target congruence was revealed: regardless of expertise, subjects responded 45.5 ms faster on congruent than on incongruent trials. Detailed analyses confirmed the existence of both low-level and abstract priming effects. Surprisingly, compared to controls, experts did not display superior performance on the priming task. The lack of an influence of motor expertise on priming may be explained by the use of static stimuli.

11:00
Spatial integration in dynamic random-dot patterns depicting either first-order or second-order global motion
SPEAKER: unknown

ABSTRACT. Previous studies have investigated the spatial integration limits for first-order (luminance-defined), but not second-order (contrast-defined), global motion in human vision. In the present study, we compared coherence thresholds for random-dot kinematograms (RDKs) containing either luminance-defined dots (modulation depth 0.3) or contrast-defined dots (modulation depth 0.8), depicting translational, rotational or radial motion. The diameter of the circular aperture in which the dots were displayed was varied (in equal logarithmic steps) from 2 to 16 degrees. Regardless of the type of dots used and the trajectory depicted, participants' (N=7) thresholds decreased as image size increased. However, sensitivity was greatest for rotational motion and least for radial motion, especially with the smallest RDKs tested. The minimum image size at which the direction of global motion was still reliably discernible was larger for RDKs composed of second-order dots than for first-order dots. Nonetheless, when differences in absolute sensitivity were taken into account, thresholds for first-order and second-order global motion fell at the same rate as RDK diameter increased. These findings reinforce the notion that if first-order and second-order local motions are detected separately, they are subsequently combined across space by a cue-invariant global motion mechanism, consistent with the properties of some neurons in extra-striate areas MT and MSTd.

11:00
Colour induced enhancement of perception of global versus local movement
SPEAKER: unknown

ABSTRACT. Local and global perception of moving objects has been studied psychophysically and neurologically (Anstis & Kim, 2011, J Vision 11(3), 1–12; Zaretskaya et al., 2013, J Neuroscience 33, 523–531). These authors hypothesise that the prevalence of perceptual "local" versus "global" grouping depends on stimulus geometry, lightness polarity and complexity. In previous work the elements were grey-scaled and arranged in groups in various manners. We introduced: a) colour contrast between stimulus groups and between stimuli and background, and b) variation of the viewing eccentricity of the scene. We used spot doublets that can be perceived as rotating around their symmetry centre ("local" motion), arranged at the vertices of two squares that can be perceived as sliding over each other along circular paths ("global" motion). Doublets were shown as red and green spots on a yellowish background, and colour saturation was progressively reduced across trials. At scene onset, local motion prevailed and later gave way to global sliding of the two squares. Using a 2-AFC paradigm, we measured the time course of the first switch to global motion as a function of the spot colour distance K in CIE La*b* space, for both chromatic and achromatic scenes, and the contribution of chromaticity to facilitation of switching. Switching was facilitated with increasing viewing eccentricity, obtained by continuously moving the fixation point from the doublet centre to the centre of the scene.

11:00
The role of mirror neuron mechanisms in the anticipation of others’ actions: An EEG study
SPEAKER: unknown

ABSTRACT. The functional role of the mirror neuron mechanism (MNM), which becomes active during both action execution and action observation, is still hotly debated. The present study investigated whether the MNM becomes activated already prior to the onset of observed actions when, on the basis of contextual information, the occurrence of the action was anticipated. If so, this would suggest a specific role in action anticipation. Additionally, its relation with individual differences in autistic traits (AQ; Autistic-spectrum Quotient) was examined. EEG recordings of 23 typically-developed participants were made during the observation of video clips depicting hand actions. Reductions in the power of the sensory-motor alpha band (8-13 Hz; mu rhythm), which presumably reflect MNM activation, were determined. A significantly reduced power in the mu rhythm was found during action observation but not prior to the onset of actions. No significant correlation was found between mu rhythm suppression and the extent of autistic traits. The findings suggest that the MNM does not get activated during anticipation of upcoming actions. The relationship between MNM activity and AQ scores will be extended to include individuals with autism.

11:00
Illusory motion in an afterimage formed by gradation patches and the stimulus luminance as the determinant of the motion direction

ABSTRACT. It is known that some kinds of repetitive gradation patches induce illusory motion perception, and that the luminance of the background of the patches influences the motion direction. In this regard, Naor-Raz and Sekuler (2000) briefly mentioned that a similar illusory motion can be observed in an afterimage produced by a sequential contrast in which the gradation stimulus is abruptly replaced by a blank screen. However, the manner in which the background luminance and the blank-screen luminance determine the motion direction in the afterimage is unknown. In this study, we systematically manipulated both the background luminance and the blank-screen luminance and examined the perceived direction of the illusory motion in the afterimage. The results revealed that the motion direction in the afterimage was not determined by either the background or the blank-screen luminance alone, but by the ratio of the two. This finding is discussed in terms of how recovery from adaptation to the primary stimulus and the light input from the blank screen may produce the motion signals in the afterimage.

11:00
Effect of local salience on the collinear masking effect
SPEAKER: unknown

ABSTRACT. Searching for a target in a salient region is considered to be easier than searching in a non-salient region. However, our previous study (Jingling and Tseng, 2013) found that a local target on a salient collinear structure is actually harder to find, an effect termed "the collinear masking effect". In that case, the salient location was defined by collinear grouping, which creates a globally salient structure. This study tested whether increasing the perceptual salience of the local target could reverse the collinear masking effect. In three experiments, we increased target salience in three different dimensions: color, luminance, and temporal duration. Nevertheless, all of these manipulations still elicited the collinear masking effect. Our data suggest that a large, well-grouped structure can alter the perceptual salience of a local element, implying that global grouping precedes the computation of local salience. We argue that the collinear masking effect may not be due to the perceptual salience induced by the global structure, but rather may depend on collinear grouping of the structure.

11:00
Seeing the forest or seeing the trees: The role of urbanisation in the development of perceptual bias
SPEAKER: unknown

ABSTRACT. A global perceptual bias has been reported to emerge around 6 years of age in Western populations (e.g. Poirel, Mellet, Houdé, & Pineau, 2008) and is thought to be a universal characteristic of perception in adulthood. In contrast, a remote Namibian population, the Himba, demonstrates a strikingly local bias even in adulthood. This local bias diminishes with limited urban exposure in adulthood: Himba raised traditionally but relocated to town in adulthood are substantially more global than Himba remaining in the villages (Caparos, Ahmed, Bremner, de Fockert, Linnell, & Davidoff, 2012). Here we show that, from as early as 6 years of age, urbanised Himba children already show a greater global bias than Himba adults raised traditionally but relocated to town in early adulthood. Furthermore, we show that, within adults, both exposure to the urban environment earlier in life and exposure over a longer period of time are associated with a global perceptual bias comparable to that of Western adult populations. We conclude that global bias is not a universal characteristic of adult perception but requires urban exposure, or factors associated with such exposure, to be expressed.

11:00
Are spatial indexes used to identify thematic roles for language?
SPEAKER: unknown

ABSTRACT. The finding that participants can identify agents and patients in scenes of several identical moving objects (Gao, Newman & Scholl, 2009) suggests that role assignment involves the spatial indexes that support object tracking (Pylyshyn & Storm, 1988). A multiple object tracking paradigm was adapted to examine whether visual tracking limits (due to a finite number of spatial indexes) influence agent and patient recognition accuracy. Participants described pushing actions between two spheres (an agent and a patient) amongst a display of nine identical objects that moved randomly before and after the events. Agents and patients were identified at above-chance levels even for three separate push events, exceeding the five-index capacity proposed in fixed-limit theories (Kahneman & Treisman, 1984). However, accuracy was highest for one-push and lowest for three-push events. There was also a strong positive relationship between agent and patient assignment, with participants being more likely to swap the labels than to produce any other type of error. This suggests that participants overcame capacity limitations by grouping the objects of each push event together and switching attention between events. Therefore, the limitations of the visual attention system appear to influence thematic role assignment in language production.

11:00
Crowding and Shape Representations
SPEAKER: unknown

ABSTRACT. In crowding, target perception deteriorates when flanking elements are added. Crowding is traditionally characterized by target-flanker interactions which are (1) deleterious, (2) spatially confined within Bouma’s window, and (3) feature specific. Here, we show that none of these assumptions universally hold true. We determined vernier offset discrimination thresholds at 9° of eccentricity. When the vernier was embedded in a square, thresholds increased compared to the unflanked threshold. Surprisingly, when the vernier was flanked by three additional squares on either side, crowding strongly decreased. Similar results hold true for other shapes, including unfamiliar, irregular shapes. In addition, changing the flanking shapes’ orientations led to increases in crowding. These results show that (1) more flankers can decrease crowding, (2) crowding strength can be determined by elements outside Bouma’s window and (3) shape processing can determine vernier offset thresholds. We propose that visual acuity for each element in the visual scene depends on all elements in the entire visual field and, on top of that, on the overall spatial configuration. In addition, these results provoke the question of whether the human brain is, indeed, coding any type of shapes at all locations in the visual field.
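For readers unfamiliar with the term, Bouma's window refers to the classic rule of thumb that flankers interfere only when they fall within a critical spacing of roughly half the target's eccentricity:

\[ s_c \approx 0.5\,E \]

so at the 9° eccentricity used here, elements beyond roughly 4.5° from the target would conventionally be expected to lie outside the crowding zone; the observation that such elements still modulate crowding is what makes point (2) surprising.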

11:00
Estimates of eye velocity are tuned for speed
SPEAKER: unknown

ABSTRACT. Estimating eye velocity helps convert retinal motion into movement with respect to the head and other coordinate frames. Models of these coordinate transforms assume that eye-velocity estimates encode speed – yet direct evidence is scant. We therefore measured the orientation of discrimination contours in the distance-duration plane for pursued stimuli. If speed dominates, stimuli moving over different distances and durations should be more difficult to discriminate when their speed is the same. Discrimination contours (ellipses) will therefore be oriented obliquely along iso-speed lines. Because extra-retinal signals and retinal flow may both contribute to eye velocity estimation, we measured discrimination with and without visible backgrounds. In Experiment 1, a horizontally-moving pursuit target was shown in the dark (no flow), with horizontal lines (reduced flow) or vertical lines (high flow). Resulting ellipses were oriented along the iso-speed line, suggesting speed was dominant in all conditions. But ellipses were less elongated in the presence of flow, suggesting backgrounds enhanced distance cues not speed. In Experiment 2, distance cues were downgraded using short-lifetime dots. Discrimination ellipses were now more stretched along the iso-speed line. The results suggest: (1) eye-velocity estimates are tuned for speed; (2) both extra-retinal and retinal-flow cues contribute.
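To unpack the logic: speed is distance over duration, so an iso-speed line in the distance-duration plane is the set of stimuli satisfying

\[ d = v\,t \quad\Longleftrightarrow\quad v = \frac{d}{t} = \text{const}, \]

and discrimination ellipses elongated along such a line indicate that stimuli matched in speed, but traded off in distance and duration, are the hardest to tell apart, which is the signature of a speed-tuned eye-velocity estimate.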

11:00
Conscious perception of local elements enforces their global integration and vice versa
SPEAKER: unknown

ABSTRACT. A primary task for the visual system is to organize the disparate retinal input into integrated perceptual representations. The extent to which conscious perception is required for visual organization to persist, and vice versa, is still unclear. We addressed this question using a continuous flash suppression (CFS) paradigm. In experiment 1, we tested whether a visible versus invisible global context differentially modulates the perceived motion direction of a visible aperture stimulus (i.e., a stimulus that appears to move behind an aperture). We found that the global context influenced perceived motion direction only when the context was visible. In experiment 2, a variant of the bistable diamond was used, consisting of four drifting gratings that can be perceived either as drifting independently or as a global diamond shape moving behind occluders. The drifting gratings were presented to one eye in a square arrangement around fixation, while masks presented to the contralateral eye perceptually suppressed two of the gratings. We observed that a global perceptual interpretation of the visible gratings boosted the suppressed gratings into awareness faster than when the gratings were perceived to drift independently. These results emphasize the mutual reinforcement between conscious perception and global visual integration.

11:00
The Influence of Familiar Size on Simple Reaction Times.
SPEAKER: unknown

ABSTRACT. It has been shown that simple reaction times (SRTs) respond to the perceived rather than the retinal size of objects (Sperandio et al., 2009). It has also been shown, using a Stroop-like paradigm, that there is an RT advantage for objects presented at sizes congruent with their real-world size (Konkle & Oliva, 2012). It is well known that familiar size influences the perceived size of objects; however, it remains unclear if and how SRTs are affected by object familiarity. Three experiments were carried out in which participants were asked to react as fast as possible to pictures of familiar objects equated for luminance and angular size on the retina. A variety of objects with different real-world sizes were used. Stimuli were observed under natural (experiment 1) and reduced viewing conditions (experiments 2 and 3). We found that SRTs decreased in response to objects presented at a size closer to their real-world size (experiment 2) and became progressively slower with increasing incongruence with real-world size (experiment 3), but only under restricted viewing conditions. These findings indicate that when visual and oculomotor cues are reduced, SRT is affected by prior knowledge of object size in a manner that reflects congruence with real-world information.

11:00
Using the intermodulation term as a measure of selective responses to coherent plaids
SPEAKER: unknown

ABSTRACT. Mid-level neural mechanisms that combine signals encoding low-level visual features are still relatively poorly understood. Steady-state visual evoked potentials (SSVEPs) were recorded to measure the nonlinear combination of two sinusoidal gratings (spatial frequencies of 1 cpd and 3 cpd). Gratings were orthogonally overlapped either with themselves or with each other to form spatial-frequency-matched ('coherent') or non-matched ('incoherent') plaids. While fundamental SSVEP responses directly represent the components of a presented stimulus, intermodulation responses represent their nonlinear combination at or after the point of summation (Spekreijse & Oosting, 1970; Spekreijse & Reits, 1982; Zemon & Ratliff, 1984). The grating components were simultaneously flickered at different frequencies (4.6 Hz and 7.5 Hz), resulting in fundamental component-based responses at these frequencies, as well as intermodulation responses at their difference (7.5 - 4.6 = 2.9 Hz) and sum (7.5 + 4.6 = 12.1 Hz). When the grating components formed an incoherent plaid, the sum intermodulation responses were small (if present at all) compared to when they formed a coherent plaid. This may reflect differences in suppression from cross-orientation masking between the plaid conditions, or it may reflect selectivity for stimulus coherence. In support of the latter, the extent of fundamental response suppression was similar for coherent and incoherent plaids.
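As a reminder of why intermodulation terms index nonlinear combination: if two inputs flickering at \(f_1\) and \(f_2\) are combined by a nonlinearity, e.g. a simple squaring stage (used here purely for illustration), the output contains energy at the sum and difference frequencies:

\[ (\cos 2\pi f_1 t + \cos 2\pi f_2 t)^2 = 1 + \tfrac{1}{2}\cos 4\pi f_1 t + \tfrac{1}{2}\cos 4\pi f_2 t + \cos 2\pi (f_1 + f_2)t + \cos 2\pi (f_1 - f_2)t, \]

so with \(f_1 = 7.5\) Hz and \(f_2 = 4.6\) Hz, responses at 12.1 Hz and 2.9 Hz can only arise at a stage where the two components interact, whereas a purely linear response contains only the fundamentals.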

11:00
Perceptual momentum influences bistable perception of the Lissajous figure
SPEAKER: unknown

ABSTRACT. We have recently re-introduced the Lissajous figure as a tool to study bistable perception (Weilnhammer, Ludwig, Hesselmann, & Sterzer, 2013). The Lissajous figure is an ambiguous depth-from-motion stimulus, first introduced to experimental psychology more than 80 years ago (Weber, 1930). The figure’s complexity, line width, and rotational speed modulate its perceptual dynamics, and we found that longer self-occlusions resulted in shorter dominance durations, while higher rotational speed yielded increased dominance durations (Weilnhammer, Ludwig, Sterzer, & Hesselmann, 2014). We tentatively proposed that higher rotational speed resulted in larger ‘momentum’, thereby decreasing the probability of perceptual transitions (i.e., an inversion of rotation direction). Here, we sought to further investigate the ‘momentum’ account by manipulating the rotational speed and the size of the Lissajous figure, under the assumption that perceptual momentum is independent of stimulus size. We replicated a significant effect of speed, but found no effect of stimulus size. This pattern of results supports an influence of representational momentum on the perceptual dynamics of the Lissajous figure. Using a Bayesian modelling approach, we will also address the question of how increased rotational speed leads to higher estimates of stimulus stability and how this might act on the occurrence of perceptual transitions.

11:00
Manual grips selectively influence visual, auditory and audiovisual speech categorization
SPEAKER: unknown

ABSTRACT. Activating the speech motor system can affect speech perception. In addition, it has previously been shown that some articulations are systematically associated with specific grip representations, for example the syllable [ke] with a power grip and the syllable [te] with a precision grip (Vainio, Schulman, Tiippana & Vainio, 2013). Consequently, it is possible that activating grip motor representations can affect speech perception via vision and audition. We therefore studied whether performing manual grips could influence speech perception. Participants watched and listened to visual (talking face), auditory (voice) and audiovisual (face and voice together) syllables [ke] and [te] while performing either a power or a precision grip. Grip performance influenced speech categorization by increasing visual and auditory categorizations of the syllable congruent with the performed grip, i.e. the power grip increased [ke] responses and the precision grip increased [te] responses. Signal detection theory analysis revealed that the grips did not influence the detectability of the stimuli, but shifted the response criterion: the perceptual category boundary moved to favour [ke] when a power grip was performed, and [te] when a precision grip was performed. The current study is the first to show that manual actions can affect speech categorization.

11:00
Figure and ground from 2D surfaces with ambiguous border ownership

ABSTRACT. Image segregation into foreground and background requires information about which border in the image is likely to belong to which surface. Receptive field structures of cortical neurons likely to deliver the border-ownership code have been identified. To clarify how the human perceptual system resolves ambiguous border ownership, configurations with contours bridging gaps between edge inducers of varying contrast polarity were presented in random order to human observers. The contours could be interpreted as belonging either to the surface in the centre of the configuration or to the surface surrounding the centre. Control configurations consisted of surfaces (dark-on-light surround and light-on-dark surround) with unambiguous border ownership. Observers judged whether they perceived the central surface as in front of, behind, or in the same plane as the surrounding one. Results show that response probabilities are determined by the theoretically predicted direction of filling-in in the ambiguous configurations, irrespective of the contrast polarity of the inducing elements. In the control configurations, where border ownership is unambiguous, the polarity of contrast predicts the perceived relative depth of the two surfaces.

11:00
Feedback contribution to collinear facilitation is group dependent
SPEAKER: unknown

ABSTRACT. Collinear facilitation refers to an increase in sensitivity for a low-contrast Gabor target when it is placed between nearby, similarly aligned supra-threshold flankers. Many studies have explored the spatial and temporal characteristics of this phenomenon, and there is general consensus that the facilitation could arise from two sources: i) a slower, sustained mechanism based on lateral connections in V1, and ii) a more rapid, transient mechanism involving extra-striate feedback to V1. There is some debate, however, about whether facilitation can occur if the target precedes the flankers, a manipulation known as backward masking. Such effects, if present, are more likely to be driven by the rapid, transient feedback mechanism. Here, we shed light on this debate using forward, backward and simultaneous masking with a sample of 25 participants. We used shorter stimulus presentation times (35 ms) and shorter stimulus onset asynchronies (±35-70 ms) than previous studies, to help isolate transient feedback facilitation. We found collinear facilitation with forward masking for all participants, but with backward masking for only 60% of participants. We describe a simple model that predicts our data based on the relative contributions of lateral and feedback facilitation mechanisms.

11:00
Simulated travelled distance in an immersive virtual environment is better estimated when adding biological oscillations to the optical flow.
SPEAKER: unknown

ABSTRACT. Distance estimation from visually simulated self-motion is imprecise. Depending on the evaluation method, travelled distance can be under- or over-estimated. One particular method consists of asking a stationary observer, exposed to an immersive optical flow simulating forward self-motion, to indicate when s/he thinks s/he has reached the remembered position of a previously seen distant target. In this case, the subjective evaluation of travelled distance is generally overestimated (i.e., the subject undershoots the target). Recent studies suggest that a translational optical flow with added biological oscillations (simulating the optical effects of natural locomotion) increases the sensation of walking and improves spatial perception, compared with a purely translational optical flow. In the present study we tested this hypothesis by measuring travelled distance estimation under two conditions of visual simulation of forward self-motion at constant speed in a CAVE setup: (1) an optical flow simulating pure forward translation, and (2) an optical flow with added "biological" oscillations reproducing the optical effects of the natural motion of the head during walking. Our results show that an optical flow containing this additional biological information enhances the accuracy of travelled distance estimation. The perceptual advantage provided by the biological oscillations in the optical flow is discussed.

11:00
Ambiguous motion perception in vision and touch
SPEAKER: unknown

ABSTRACT. Introduction: Von Schiller’s Stroboscopic Alternative Motion (SAM) stimulus alternates two visual diagonal dot-pairs, inducing apparent motion. A linear increase of the SAM’s aspect ratio (“AR”: vertical divided by horizontal dot distance) causes a nonlinear change from horizontal to vertical motion perception, with a vertical bias at AR = 1. We compared apparent motion perception evoked by visual versus tactile stimuli, with a focus on reference frames. Methods: For the tactile SAM stimulus we attached vibrotactile stimulators to participants’ forearms and varied AR by changing either the distance between forearms or the distance between stimulators on each forearm. We further varied the relation between endogenous and exogenous reference frames by rotating the forearms (45° and 90°). Results: Visual SAM results reproduced previous findings. Tactile motion perception stayed ambiguous for small ARs, becoming biased towards vertical motion with increasing AR, but to a lesser extent than in vision. Surprisingly, a 90° forearm rotation had no effect, whereas 45° biased perception towards horizontal motion. Discussion: Similarly to vision, we found a tactile vertical bias that was largely independent of the relation between reference frames, with one surprising exception: a 45° forearm rotation biased perception towards horizontal motion. Our results confirm Bayesian probabilistic approaches to perception.

11:00
Understanding parity: Is the odd-effect odd or even?
SPEAKER: unknown

ABSTRACT. Parity (odd/even) and magnitude are well accepted as two fundamental features in the representation and processing of numbers. It has been shown that odd numbers are processed more slowly than even numbers (the odd effect). One explanation of this effect comes from Linguistic Markedness (LM), according to which marked adjectives are more difficult to process than unmarked ones. Hines (1990) suggested that odd is marked because of the specific linguistic associations of that word. This seems contradictory to the idea that parity is a fundamental feature of numerical cognition. In the present study we tested this using same-different classifications based on parity with odd-odd, even-even and even-odd pairs. If LM has a role in the odd effect, odd-odd pairs should be judged more slowly than not only even-even pairs but also odd-even pairs. The results showed that there was no difference in processing efficiency between odd-odd pairs and odd-even pairs, whereas even-even pairs were classified relatively more efficiently. Hence, the odd effect seems to be, in fact, an even effect wherein classification of even pairs is facilitated. This cannot be explained by the LM account. We propose that research on symmetry, specifically bilateral and translational symmetry, can explain these findings better.

11:00
Hand proximity effect: The role of Space, Object and Disengagement
SPEAKER: unknown

ABSTRACT. It has been argued that objects appearing near the hands enjoy enhanced visual processing. Enhanced spatial prioritization is thought to underlie this hand-proximity effect. It has also been suggested that slower attentional disengagement from objects near the hands is more critical. In two experiments, we pitted the two accounts against each other to better understand the hand-proximity effect. Participants completed a visual search task with their hands either on the monitor or on their lap. When on the monitor, the target could appear near the hand or farther away. Consistent with the disengagement account, search was more efficient in the lap condition than in the hand condition. However, consistent with the spatial prioritization account, search was more efficient in the near than in the far condition. In Experiment 2, where items were crowded only near to or far from the hand, leaving the corresponding far/near location empty, all three conditions showed the same search slopes, further discrediting slower disengagement as an explanation. This also shows that objects, not the space near the hands, are prioritized: when there are no objects near the hand, the far condition is as efficient as the near and no-hand conditions.

13:30-15:00 Session 18A: Colour vision: appearance and constancy

.

Location: B
13:30
Extraretinal factors modulate the color aftereffect.
SPEAKER: Takao Sato

ABSTRACT. The visual system uses several different coordinate systems, such as retinal, world, and body/head coordinates. In this study, we manipulated world and head coordinates while keeping the stimulus fixed in retinal coordinates during adaptation, and measured the duration of the color aftereffect (CAE) to examine the contribution of extraretinal factors. In the fixation condition, where observers fixated a stationary adaptor, the adaptor was stationary with respect to all coordinate systems; in the other three conditions (pursuit, head-movement, and head-eye-movement), the adaptor was stationary in retinal coordinates but moved in at least one of the other coordinate systems. The speed in all moving conditions was 10 deg/sec and the amplitude was 30 deg. The movements of the eyes and head were monitored by an eye-tracker and a laser pointer attached to the observer’s head. We found that the CAE lasted for about 80 sec in the fixation condition, and its duration was reduced by approximately 30% in the other conditions, although the stimulus was stationary on the retina in all conditions. These results demonstrate that the CAE, which is generally understood as a retinal phenomenon, is affected by extraretinal factors, i.e., stationarity and movement in world- and head-centered coordinates.

13:45
Colour constancy without colour experience.

ABSTRACT. We used metacontrast masking to render coloured stimuli invisible. Even though the stimuli are invisible, they nevertheless prime decisions about the colour of the masks that follow them, speeding the decision if the prime and mask match in colour. What constitutes ‘matching’? Under a change in illumination, a match could mean either that the light reaching the eyes from the prime and mask has the same spectral composition, or that the prime and mask have surfaces with similar reflectance properties. We show that decisions are speeded most when prime and mask match in reflectance properties. In a separate signal detection experiment we show that the primes are undetectable. This implies that a colour constancy process operates independently of colour experience. Control experiments rule out explanations in terms of expectation, local contrast or effects of retinal adaptation. This constancy-based priming does not depend upon visual attention, and, in a conscious feature-based attention experiment, we show that the constant surface representation is more effective than spectral composition in directing attention. We discuss the implications of these results for understanding the basis of colour experience.

14:00
Unmasking the dichoptic mask: Binocularly matched features reduce dichoptic masking for both chromatic and luminance stimuli

ABSTRACT. In dichoptic masking, a suprathreshold mask contrast in one eye elevates thresholds for detecting a target contrast in the other eye, and does so more than if the mask and test are presented to the same eye or to both eyes. Recent studies have shown that binocularly matched luminance features reduce, or ‘unmask’, chromatic dichoptic masking (Wang & Kingdom, JOV, 2014). Here we explore the effect using a dichoptic disk surrounded by a binocular ring (i.e., matched in the two eyes) of variable width. When the disk is chromatically defined, both chromatic and luminance rings increasingly unmask dichoptic masking as the ring width increases from zero, the effect asymptoting at a ring width of approximately a quarter of the disk diameter. The smallest unmasking effect is found for a chromatic ring surrounding a luminance disk. These results suggest that binocularly matched features have a general effect of reducing interocular suppression among unmatched features. We argue that our results are consistent with the “object-commonality hypothesis”, whereby matched features in the two eyes promote the interpretation that the unmatched features nevertheless arise from the same object, and as a result are relieved from the effects of interocular competition.

14:15
Does colour constancy exist? Yes and No

ABSTRACT. The question of whether colour constancy exists or not is a multilayered one. Researchers since Hering have agreed that the term „constancy“ is not to be taken literally, but refers to an approximately constant colour perception of „things“ („angenäherte Farbkonstanz der Sehdinge“, E. Hering). This is the semantic level. Secondly, there is a conceptual level, in that the question cannot be confined to one level of processing, in particular not to the level of receptor adaptation. Instead, we have to include the cognitive level, where humans (and possibly also animals) use inferences from prior knowledge in order to interpret sensory data (“unbewußter Einfluß des Urteils“, v. Helmholtz); in other words, colour constancy should be discussed at the level of object perception. This is highlighted by the recent observations concerning the blue&black dress, which is seen by different individuals as being either blue/black (correct in terms of object recognition) or white/gold (incorrect). This shows (1) strong influences of cognitive processes and (2) the impact of different priors, whose origin is still a mystery. In this context I will show and discuss the results of colour matches of the blue&black dress made by subjects of different ethnic backgrounds and different levels of „colour experience“.

14:30
Colour constancy predicted by metameric mismatch volumes

ABSTRACT. The present study investigated what determines the variation of colour constancy across colours. We examined the role of linguistic colour categories and of the stability of the LMS signal across illumination changes. In particular, we tested the impact of metameric mismatch volumes, which describe the volumes of all theoretically possible LMS signals that may result from a change in illumination when the reflectance is unknown. Observers were simultaneously presented with two photorealistic images of the same scene rendered under different daylight illuminations. One of 12 coloured objects in one of the images was set to a random hue, and observers were asked to adjust it so that it matched the colour in the other image. Colour constancy did not peak at the category prototypes and was not correlated with “sensory singularities” (the dimensionality of the mapping of LMS signals). Instead, metameric mismatch volumes explained more than 50% of the variance, even when controlling for performance in a control condition in which illuminations did not vary across images. These results show that observers know, probably from experience, how uncertain different colour signals are when illumination changes. More generally, these results demonstrate the importance of metameric mismatching for colour constancy.
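
As a rough illustration of the analysis described (a hedged sketch with synthetic placeholder data, not the authors' code), the unique contribution of metameric mismatch volumes can be estimated by hierarchical regression: fit per-colour constancy scores on control-condition performance alone, then on control performance plus mismatch volume, and compare the two R² values.

```python
# Minimal sketch (synthetic data): variance in constancy uniquely explained by
# metameric mismatch volume after controlling for baseline matching performance.
import numpy as np

def r_squared(predictors, y):
    """R^2 of an ordinary least-squares fit of y on the given 1-D predictors (plus intercept)."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n_colours = 12
baseline   = rng.normal(size=n_colours)        # control-condition matching error (placeholder)
mismatch_v = rng.normal(size=n_colours)        # metameric mismatch volume per colour (placeholder)
constancy  = 0.7 * mismatch_v + 0.3 * baseline + rng.normal(scale=0.3, size=n_colours)

r2_control = r_squared([baseline], constancy)
r2_full    = r_squared([baseline, mismatch_v], constancy)
print(f"variance uniquely explained by mismatch volume: {r2_full - r2_control:.2f}")
```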

14:45
Changes in the lighting or reflectance of isolated glossy surfaces reveal a bias to associate particular colour directions with changes in lighting.

ABSTRACT. For a single matte surface, changes in surface reflectance and illumination are indistinguishable. Introducing a specular component to the reflectance properties of the surface allows separation of surface and illuminant colours (Lee & Smithson, ECVP, 2015; Smithson & Lee, ICVS 2015). We presented hyperspectrally raytraced movies showing isolated objects undergoing gradual changes in reflectance or illumination, and asked observers to classify the transition as a change in the colour of either the paint or the lighting. We used spectral functions interchangeably as reflectances and illuminants, allowing us to test observers' willingness to assign particular colour-transitions to either surface or illumination changes. Even low levels of specularity were sufficient to support discrimination of surface and illumination colour changes, but observers' response biases – quantified with a log likelihood ratio – depended on the chromaticity direction of the change. For spectral changes aligned to the daylight locus, the log likelihood ratio favoured lighting changes by 0.1 to 0.2 units, compared to spectral changes orthogonal to the daylight locus. By testing performance at different levels of specularity we estimated bias in relation to the reliability of cues in the movies. The bias was particularly marked at low specularities but persisted as specularity increased.
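
The abstract does not spell out how the log likelihood ratio was computed; one standard way to express classification bias as ln(beta) in an equal-variance signal-detection framework is sketched below, with purely illustrative response counts.

```python
# Sketch: bias as ln(beta) = c * d' in equal-variance signal detection theory,
# treating "lighting change" as the signal category. Counts are illustrative only.
from scipy.stats import norm

def log_likelihood_ratio_bias(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate  = false_alarms / (false_alarms + correct_rejections)
    d_prime   = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return criterion * d_prime

# Example: daylight-locus transitions classified as "lighting" more readily.
print(log_likelihood_ratio_bias(hits=42, misses=8, false_alarms=20, correct_rejections=30))
```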

13:30-15:00 Session 18B: Eye movements

.

Location: A
13:30
What can saccadic inhibition reveal about foveal and peripheral information processing within fixations?

ABSTRACT. The abrupt onset of task-irrelevant distractors leads to saccadic inhibition. The magnitude and delay of the effect are related to the location and size of the distractor (Glaholt et al., 2012; Pannasch & Velichkovsky, 2009). It has been hypothesized that saccadic inhibition represents the time required for information processing of the distractor. To test this assumption, we presented distractors in different occlusion conditions. Distractors occluded either (1) the foveal region, (2) the periphery, (3) a small part of the periphery, (4) the foveal region and most of the periphery, or (5) the whole image. The onset of saccadic inhibition was the same for all distractor conditions, but for (4) it terminated earlier and showed a smaller magnitude. Combining the effects of conditions (1) and (2) produced an inhibition distribution similar to that in (5). The results indicate that, within a single fixation, foveal (i.e., information extraction) and peripheral (i.e., saccade target selection) information processing can occur in parallel and independently of each other.

13:45
Transformation priming promotes stable and consistent perception in spite of unstable retinal input

ABSTRACT. If there is one thing constant about retinal input, it is that it constantly changes due to eye movements, self-motion, illumination changes, object motion, etc. All these changes must be correctly interpreted on the fly in order to keep visual perception stable and consistent. This poses an enormous challenge, as many transients are highly ambiguous, in that they are consistent with many alternative physical transformations. We investigated how our visual system uses recent perceptual experience to overcome this problem. We used three dynamic displays (structure-from-motion (SFM), shape-from-shading (SFS), and streaming-bouncing object collisions (SB)) with a transient ambiguous change that can produce two qualitatively different perceptual experiences: stable or reversed rotation in SFM, stable or inverted depth in SFS, streaming or bouncing in SB. For all displays, we observed reliable transformation priming, as the perceptual interpretation of a physical change in earlier trials tended to repeat in subsequent trials. Additional experiments demonstrated that the observed priming was specific to the perception of transient events and did not originate in priming of perceptual states, selective attention, or low-level stimulus attributes. In summary, we demonstrate how experience-driven updates of prior knowledge about physical transformations build stable and consistent vision from unstable retinal input.

14:00
The role of visual stability in representations of pre- and post-saccadic objects
SPEAKER: Caglar Tas

ABSTRACT. In the present study, we investigated how object information is integrated and updated across saccades. Specifically, we asked how visual stability (perceiving the target object as continuous across the saccade) influences the pre-saccadic representation of the object. Participants were presented with a colored target and instructed to memorize its color before executing a saccade to it. On some trials, the color was changed to a new value during the saccade. Participants reported either the pre- or the post-saccadic color. Stability was manipulated with a target-blanking paradigm. The data were fit with probabilistic mixture models. We found that when reporting the pre-saccadic color, incorrect reports of the post-saccadic color were more likely under conditions of visual stability than instability, supporting an object-based model of transsaccadic updating and integration: under visual stability, pre- and post-saccadic features are mapped to the same object representation, leading to the overwriting of the pre-saccadic features by the post-saccadic features. If stability is disrupted, pre- and post-saccadic features are represented as different objects, leading to protection of the pre-saccadic features. In addition to overwriting errors, color reports were subtly shifted toward the non-reported color, regardless of the stability condition, suggesting some degree of color integration.
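
A minimal sketch of the kind of probabilistic mixture model referred to (in the spirit of swap models such as Bays et al., 2009), assuming reports on a circular colour space: a von Mises component centred on the reported (pre-saccadic) colour, a "swap" component centred on the non-reported (post-saccadic) colour, and uniform guessing. Starting values and the demo data are illustrative, not the authors' specification.

```python
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

def neg_log_likelihood(params, err_target, err_nontarget):
    p_swap, p_guess, kappa = params
    p_target = 1.0 - p_swap - p_guess
    if p_target < 0:                       # keep mixture weights valid
        return 1e10
    like = (p_target * vonmises.pdf(err_target, kappa)
            + p_swap  * vonmises.pdf(err_nontarget, kappa)
            + p_guess / (2 * np.pi))
    return -np.sum(np.log(like + 1e-12))

def fit_mixture(err_target, err_nontarget):
    """err_* are signed angular errors (radians) relative to the pre- and post-saccadic colours."""
    res = minimize(neg_log_likelihood, x0=[0.1, 0.1, 5.0],
                   args=(err_target, err_nontarget),
                   bounds=[(0, 1), (0, 1), (0.1, 100)])
    return res.x  # [p_swap (overwriting), p_guess, kappa]

# Demo with synthetic reports: mostly accurate, no real swaps.
rng = np.random.default_rng(1)
demo_target_err = rng.vonmises(0.0, 8.0, size=200)
demo_nontarget_err = rng.uniform(-np.pi, np.pi, size=200)
print(fit_mixture(demo_target_err, demo_nontarget_err))
```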

14:15
How aware are we of our own eye movements?

ABSTRACT. People can identify their own fixations compared to those of someone else, but only slightly above chance (Foulsham and Kingstone, 2013). This conclusion is based on fixations recorded during a scene memory task, so people may remember fixated objects rather than eye movements. In oculomotor capture (Theeuwes et al., 1998), in contrast, it has been claimed that people are unaware of their own erroneous saccades towards distractors. This claim is based on general statements of remembered accuracy made after the experiment. Here we asked whether people could accurately report on their own eye movements using three different approaches: first, we asked participants after a visual search experiment to discriminate their own eye movements from those of someone else searching the same image. Second, we asked participants in an oculomotor capture experiment to report after each trial whether they looked directly at the target. Third, we replayed an animation of saccades after each trial in a double-step saccade experiment and asked participants whether they were viewing their own or someone else’s behaviour. The results across all three studies suggest that observers are sensitive to what they looked at, but have little knowledge about their own eye movements per se.

14:30
Subthreshold post-saccadic errors decelerate oculomotor learning
SPEAKER: Martin Rolfs

ABSTRACT. Saccadic eye movements remain accurate through a process called saccadic adaptation, which compensates for errors experienced upon landing off intended targets. We have shown recently that saccades continuously track intrasaccadic steps (ISS) whose amplitudes rise and fall with a sinusoidal profile, varying forwards and backwards across trials at a slow, fixed frequency. Specifically, we found that saccadic landing errors were modulated exactly at the ISS frequency, but lagged the ISS modulation by 20 trials on average (Cassanello et al., 2014). Here, we used this method to examine the speed and completeness of adaptation as a function of the objective visibility of the ISS. Twenty observers completed adaptation trials with sinusoidal ISS modulation (3x100 trials/cycle) in three ISS-amplitude conditions: completely invisible (ISS amplitude at the observer’s threshold for detecting the ISS, determined in a pretest), barely visible, or clearly visible (one and two JNDs above threshold, respectively). Adaptation occurred in all three conditions and its completeness was independent of visibility (~10% of ISS amplitude). Conversely, visibility significantly decreased the oculomotor response lag (22, 16, and 13 trials in the three visibility conditions). These results suggest that the oculomotor system tracks subthreshold ISSs just as it does visible ones, by integrating errors over a larger number of saccades.
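
One simple way to recover the kind of trial lag described here is to correlate the landing-error series with trial-shifted copies of the sinusoidal ISS series and take the best-correlated shift. The sketch below uses synthetic data and an assumed 100-trial period; it is an illustration, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = np.arange(300)                                   # e.g. 3 cycles of 100 trials
iss = np.sin(2 * np.pi * trials / 100)                    # sinusoidal ISS amplitude
true_lag = 16
error = 0.1 * np.roll(iss, true_lag) + rng.normal(scale=0.05, size=trials.size)

def best_lag(signal, response, max_lag=50):
    """Return the trial shift of `signal` that correlates best with `response`."""
    lags = np.arange(max_lag + 1)
    corrs = [np.corrcoef(np.roll(signal, k), response)[0, 1] for k in lags]
    return lags[int(np.argmax(corrs))]

print("estimated response lag (trials):", best_lag(iss, error))
```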

14:45
Predicting oculomotor strategies in reading with normal and damaged visual fields

ABSTRACT. The distribution of fixation positions during reading shows that normally sighted subjects typically read by successively placing their fovea near the center of words, in order to reduce the deleterious effects of the visual periphery such as reduced acuity or crowding. In this study, we present a Bayesian ideal observer of reading based on the spatiotemporal characteristics of letter recognition across the visual field. Our approach to reading assumes that the optimal fixation is the one that maximizes the “Expected Information Gain” about the identities of the letters in the current word. At each fixation, Bayesian inference is used to combine prior knowledge with newly extracted information about letter identities. This is done using letter recognition rates and letter confusion matrices across the visual field. Lexical inference is also used to update letter identity information at each fixation. The model predicts the 2D spatiotemporal pattern of saccades during reading, in contrast with theories that use 1D letter-slot approaches to model reading. This is critical for predicting optimal reading strategies for readers who cannot extract visual information with their fovea, such as patients with Age-related Macular Degeneration. Model predictions for normal and damaged visual fields are discussed.
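
A toy sketch of an "Expected Information Gain" computation under strong simplifying assumptions (a single letter, a four-letter alphabet, hypothetical confusion matrices for a fovea-like and a periphery-like fixation); it illustrates the criterion itself rather than the authors' full model.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def expected_information_gain(prior, confusion):
    """prior: P(letter); confusion[i, j] = P(report j | letter i) at a candidate fixation."""
    gain = 0.0
    for j in range(confusion.shape[1]):               # possible observations
        p_obs = np.sum(prior * confusion[:, j])
        if p_obs == 0:
            continue
        posterior = prior * confusion[:, j] / p_obs    # Bayesian update given observation j
        gain += p_obs * (entropy(prior) - entropy(posterior))
    return gain

# Pick the candidate fixation whose (hypothetical) confusion matrix yields the larger gain.
prior  = np.ones(4) / 4
sharp  = np.eye(4) * 0.80 + 0.05                       # fovea-like: reliable identification
blurry = np.eye(4) * 0.20 + 0.20                       # periphery-like: confusable letters
print(expected_information_gain(prior, sharp), expected_information_gain(prior, blurry))
```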

13:30-15:00 Session 18C: Magnitude, time, and numerosity

.

Location: C
13:30
Central tendency effects in temporal interval reproduction in autism

ABSTRACT. Central tendency, the tendency of judgements of quantities to gravitate towards their mean value, has been attributed to the use of ‘prior knowledge’ representations of a mean stimulus, which are integrated with noisy sensory estimates to improve precision. Based on this model, and a recent theoretical account positing attenuated prior knowledge in autistic perception (Pellicano & Burr, 2012), we predicted that children with autism should show reduced central tendency compared to typical children in temporal interval reproduction. We tested this prediction using a child-friendly, dual-task temporal interval reproduction/temporal discrimination paradigm, which we administered to 23 children with autism, 23 age- and ability-matched typically developing children, and 14 typical adults. Central tendency effects (assessed with a Bayesian computational model) decreased with age in typical development, while temporal discrimination improved. Children with autism performed far worse in temporal discrimination than matched controls, which predicts that they should show more central tendency than the controls. However, their central tendency was far lower than predicted by their poorer temporal resolution. The results are consistent with the theoretical prediction that individuals with autism use prior knowledge to a lesser extent than controls to improve perceptual performance.
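
For readers unfamiliar with the model class, the standard Bayesian account of central tendency invoked here can be sketched in a few lines (arbitrary parameter values, not the authors' fitted model): reproductions are a reliability-weighted mix of the noisy sensory estimate and the prior mean, so noisier timing yields a shallower reproduction slope, i.e., stronger central tendency.

```python
import numpy as np

def reproduce(true_intervals, sensory_sd, prior_mean, prior_sd, rng):
    sensed = true_intervals + rng.normal(scale=sensory_sd, size=true_intervals.size)
    w_prior = sensory_sd**2 / (sensory_sd**2 + prior_sd**2)   # weight on the prior mean
    return w_prior * prior_mean + (1 - w_prior) * sensed

rng = np.random.default_rng(3)
intervals = np.linspace(0.4, 1.6, 7)                          # sample intervals (s)
for sd in (0.05, 0.30):                                       # good vs poor temporal resolution
    slope = np.polyfit(intervals, reproduce(intervals, sd, intervals.mean(), 0.4, rng), 1)[0]
    print(f"sensory sd = {sd:.2f}s -> reproduction slope = {slope:.2f} (1 = no central tendency)")
```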

13:45
An illusion of numerosity explained
SPEAKER: Quan Lei

ABSTRACT. Last year we reported that there seem to be more grey disks than white disks when 50 randomly located white disks are intermingled with the same number of grey disks on a dark grey field. On a light grey field, there seem to be more dark grey than black disks. Thus lower-contrast disks paradoxically trump higher-contrast ones. (This was shown by comparison or matching to isolated disks, where the largest effect size was 36%, and now by differential numerosity power laws over the range from 20 to 80 disks.) When the intermingled white and grey disks were segregated in depth, or by motion or shape, the illusion disappeared. Why? We assume salience improves grouping, and this increases clustering, which is known to reduce perceived number. Segregated stimuli all group similarly, salient or not, but when stimuli are intermingled, only the stronger ones are grouped, on the principle that no more than one ‘object’ can occupy the same space-time volume; weaker stimuli remain disaggregated and therefore appear more numerous.
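
The "numerosity power laws" mentioned can be fit by linear regression in log-log coordinates, perceived = k·N^β; a small sketch with synthetic data follows (the k and β values are illustrative, not the reported results).

```python
import numpy as np

def fit_power_law(n_physical, n_perceived):
    """Return (k, beta) for perceived = k * N**beta via log-log linear regression."""
    beta, log_k = np.polyfit(np.log(n_physical), np.log(n_perceived), 1)
    return np.exp(log_k), beta

rng = np.random.default_rng(4)
n = np.array([20, 30, 40, 50, 60, 70, 80])
perceived = 1.3 * n**0.9 * np.exp(rng.normal(scale=0.03, size=n.size))  # illustrative data
print(fit_power_law(n, perceived))   # -> (k, beta) close to (1.3, 0.9)
```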

14:00
Motion-induced compression of perceived numerosity

ABSTRACT. In 2003 Walsh proposed an innovative theory: that the perception of time, space and number shares a common magnitude-encoding system. Much evidence supports this idea, including the fact that adaptation to fast translational motion produces a robust reduction of perceived duration (Johnston et al., 2006; Burr et al., 2007), whereas adaptation to flow motion of comparable speed does not (Fornaciai et al., 2014). Here we tested whether adaptation to visual motion also affects numerical estimates. Subjects were asked to discriminate the numerosity of two patches of dots within a numerosity range of 8-30, after adapting to a grating translating or rotating at 20 Hz or 5 Hz (in different sessions), positioned at the location of one of the patches. Adaptation to fast translational motion yielded a significant reduction in the apparent numerosity of the adapted stimulus (up to 25%), while adaptation to slow motion had no effect on numerosity. Adaptation to complex rotational motion of either speed had no effect on numerosity. Control experiments show that none of these effects can be accounted for by trivial masking aftereffects. Taken together, these results clearly support Walsh’s idea of a common, shared mechanism for encoding space, time and number.

14:15
Perceived Duration, Task Difficulty and Performance: A General Metric
SPEAKER: Andrei Gorea

ABSTRACT. Time perception has been shown to depend on the difficulty of, and, indirectly, on performance on, a concurrent task. Here we offer the first design permitting the metrical quantification of this relationship and its generalization over any task and duration range. The time necessary to perform a variable-length visuo-motor task was first assessed for each participant. The task consisted of clicking off, as fast as possible, a variable number of discs displayed on a virtual circle around fixation. Participants were then required to perform this same task during either the necessary durations (520, 960 and 1760 ms) or durations 1.8 times shorter or longer than necessary, and to estimate the given durations via a comparison and reproduction technique. Task difficulty was quantified as the ratio between the given and necessary durations. Performance was quantified as the ratio between the number of clicked and displayed discs for each given duration. Perceived duration was also assessed in the absence of the visuo-motor task to serve as a reference baseline. The results show that difficulty per se (as defined) is not a factor in duration estimation, but that the latter increases linearly with performance with a slope of about 260 ms.

14:30
Tempus Fugit: Competitive social interactions impair time perception

ABSTRACT. Humans are capable of discriminating small intervals of time ranging from milliseconds to seconds. Despite our frequent engagement in social interactions, where time-keeping is sometimes essential, their effect on our perception of time’s passage has hardly been studied. We assessed the effect of social interactions on time perception by asking participants to solve puzzles either by themselves (isolated condition) or in competition with a partner (competitive condition). In a control condition, they passively watched the partner solving the puzzle. They then reported whether each trial was longer or shorter than a standard interval. We fitted psychometric curves to these reports and found that, relative to the control, the psychometric curve was shifted rightwards only in the isolated condition. In contrast, the slope of the curve was substantially shallower for competitive interactions than for both non-competitive conditions (isolated or passive viewing). These findings indicate that time is felt to be consistently longer when performing a task by oneself, but that temporal discrimination is strongly impaired during competition. Further, these results could not be explained by general arousal or by factors such as the number of visual or action-based events. We conclude that social interactions alter time perception in ways distinct from other factors.
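
A sketch of the psychometric-curve analysis described, assuming a cumulative-Gaussian model whose location gives the point of subjective equality (the rightward shift) and whose sigma sets the slope; the data below are synthetic placeholders, not the reported results.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(duration, pse, sigma):
    """Probability of responding 'longer than standard'."""
    return norm.cdf(duration, loc=pse, scale=sigma)

rng = np.random.default_rng(5)
durations = np.linspace(0.6, 1.4, 9)                            # comparison intervals (s)
p_true = psychometric(durations, pse=1.05, sigma=0.12)          # e.g. isolated condition
p_longer = rng.binomial(40, p_true) / 40                        # 40 trials per duration

(pse, sigma), _ = curve_fit(psychometric, durations, p_longer, p0=[1.0, 0.1])
print(f"PSE = {pse:.3f} s, slope ~ 1/sigma = {1/sigma:.1f}")
```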

14:45
Individuation of objects and object parts rely on the same neuronal mechanism

ABSTRACT. Humans can enumerate up to three to four objects very efficiently, but their performance decreases sharply above four items. This ability is called subitizing and is evident for separate objects. Recently, a study showed the same subitizing effect when participants enumerated parts of a single object. Here we searched for the neural mechanisms underlying this new type of subitizing. To this end, we measured a lateralized EEG response (N2pc) previously associated with the individuation of multiple objects. In Experiment 1, participants were asked to enumerate the number of outdents of one of two solid half-discs presented in each hemifield. In Experiment 2, a single circle with bilateral indents was presented and participants were asked to enumerate the number of indents on one side of the circle. In both experiments, participants’ error rate was low (less than 10%) when enumerating up to three parts but increased for larger numerosities. The N2pc amplitude increased as a function of the number of object parts, and reached an asymptote corresponding to the behavioral subitizing limit. These results replicate those previously reported for separate objects, and suggest that the same individuation mechanism operates when enumerating a small set of separate objects or parts of a single object.

15:00-16:00 Session 19

Motion, Time, Space & Magnitude / Vision & Motor Control / Wholes and Parts (Illusions, Objects & Grouping)

Location: Mountford Hall
15:00
Interaction mechanisms of global and local image analysis in visual systems of observers with field-dependent and field-independent cognitive style
SPEAKER: unknown

ABSTRACT. We investigated the effectiveness of identification of fragmented figures in people with field-dependent and field-independent cognitive styles. We used our computerised version of the Gollin test and showed 75 contour images of objects. The contour lines of an object formed on the screen by progressive accumulation, via the addition of random blocks of pixels. At the moment the object was identified, the accumulation of fragments was stopped. We recorded the minimum total area of the presented fragments, expressed as a percentage of the total area of the image outline, as well as the time taken for image formation. Participants with a field-independent cognitive style needed more contour fragments to identify the object than those with a field-dependent cognitive style. The hypothesis of the study is based on the notions of spatial-frequency filtering, selection of signal from noise, and a matched-filtering model. The data indicate that local analysis dominates in individuals with a field-independent cognitive style, whereas individuals with a field-dependent cognitive style rely on more global image analysis. Searching for masked figures by patients with schizophrenia demonstrated field dependence: for the perception of fragmented figures they needed more elements. This result demonstrates a dysfunction in their mechanism of global analysis.

15:00
Visuo-motor delay in fast and slow ball sports
SPEAKER: unknown

ABSTRACT. Visuo-motor delay (VMD) is the time taken to process and respond to a change in the visual environment. Given the demands of fast-ball sports, one might expect athletes to have shorter VMDs, as this could convey a performance advantage. Previous research has shown that elite tennis players have shorter VMDs than controls. However, at present, little is known about the magnitude or significance of VMDs across a wider range of sports. Here we determined the VMDs of elite-level athletes (high-level cricketers, professional rugby league players) and controls in a go/no-go coincidence task. In our task, participants were instructed to press a button when a horizontally moving target contacted a stationary vertical line, but to inhibit the button-press if the target disappeared before contact. Cricketers’ average VMDs were shorter (139 ms) than both rugby-league players’ (156 ms) and controls’ (159 ms). Overall, there was a main effect of group (F(2,53)=3.226, p<.05). Fisher’s LSD post-hoc analysis revealed a significant difference between cricketers and controls (p<.05), but other differences were not statistically significant. This suggests that competing in fast-ball sports may require, or enhance, the ability to inhibit responses to changes within the visual environment.

15:00
Motion-induced position shifts are smaller across the vertical and horizontal meridians
SPEAKER: unknown

ABSTRACT. When a Gabor patch drifts across the screen while its internal pattern drifts in the orthogonal direction, the perceived direction of motion is the combination of the two motion vectors (Infinite Regress Illusion; Tse and Hsieh, 2006; Lisi and Cavanagh, 2014). If the Gabor patch oscillates sinusoidally back and forth on a linear path while the speed of the internal pattern is modulated sinusoidally 90° out of phase with the path motion, the path is perceived as elliptical. In the present study we measured the strength of this motion-induced illusion by asking subjects to add a physical shift orthogonal to the path (adding to the illusion) until the perceived path appeared circular. The initial physical path was centered 10° in the periphery and oriented in one of four directions (-45°, 0°, 45°, 90°, where 0° is vertical) at one of eight possible locations around the fixation point. The results show that for vertically oriented stimuli the illusion is significantly weaker when presented at the vertical meridian, while for horizontally oriented stimuli the illusion weakens at the horizontal meridian. These results suggest that the integration of the motion vectors is disrupted in the vicinity of the meridians.

15:00
Characteristics of target appearance changes in crowding
SPEAKER: unknown

ABSTRACT. An important limiting factor of peripheral vision is crowding - the inability to identify targets in clutter that are easily identified in isolation. Recent results suggest that a common 'crowding error' is that target elements are not perceived ('omission errors'). Here, we show detailed characteristics of omission errors in crowding. Observers were presented with targets consisting of 3 horizontal and 3 vertical lines in different square-like arrangements. There were two types of flankers: 1) one vertical & two horizontal lines 2) one horizontal & two vertical lines. The target was presented for 150 ms at an eccentricity of 10 degrees to the left or right of fixation. Following stimulus offset, participants were shown six alternative items and asked to choose which one resembled the target the most. In each of the alternatives, one of the six lines of the target was missing. We found that vertical lines were perceived to be missing more often than horizontal lines, and outer vertical lines more often than inner vertical lines. Alternatives without an X-junction (as present in the target) were rarely chosen. The flanker type had no effect. We suggest that detailed characterizations of target appearance changes will help to understand crowding.

15:00
Effect of search strategy on tactile change detection
SPEAKER: unknown

ABSTRACT. Haptic perception of two-dimensional images can make heavy demands on our working memory. During active exploration, we need not only to store the latest local sensory information, but also to integrate this with kinaesthetic information about hand and finger locations to generate a coherent percept. We tested the active search process and working-memory storage of tactile exploration, as measured in a tactile search-for-change task. We previously reported an extremely small estimated tactile memory (1±1), suggesting little or no cross-position integration in tactile perception (Yoshida, Yamaguchi, Tsutsui, & Wake, 2015). Here, we tested possible contributions of hand movements and information-sampling strategies to this small estimated memory capacity. The index finger movements of the participants during haptic search for change were recorded. Analysis showed that participants repeatedly stopped and moved their finger, similarly to fixations and saccadic eye movements. Most stopping positions were near stimulus item positions, suggesting that participants’ strategy was to compare only one item at a time. When this strategy was prevented by using a one-shot change detection task, participants held a maximum of 3 items in memory. These results show that haptic working memory can hold multiple items, but shows reduced capacity due to sensory limitations.

15:00
Verbal working memory influences time perception in explicit time estimation

ABSTRACT. In this set of two experiments we studied how two different systems, a rhythmic one and a memory-based one, can work together to generate explicit time percepts. Using a time estimation task, participants were asked to report the duration of a visual stimulus appearing for a random interval ranging from 1 to 8 seconds. In one condition participants had to count the seconds before responding. In a different block participants were told not to count and simply to guess the duration. The two strategies produced very different performance functions: (1) the counting strategy produced similarly fast reaction times across intervals and better discrimination in general; (2) the non-counting condition produced an inverted U-shaped function in which extreme intervals were responded to faster than intermediate ones. The latter function was also linked to a pattern of poor discrimination at the extreme intervals, with clear overshooting for the shorter and undershooting for the longer ones. More importantly, manipulations of verbal distraction and alterations to a rhythm had an impact in the counting condition only, but not in the non-counting one. The results are interpreted in terms of a combination of clock-based and memory-based systems that coexist to produce explicit time estimations.

15:00
Point me in the Right Direction: Same and Cross Category Adaptation Aftereffects to Hand Pointing Direction
SPEAKER: unknown

ABSTRACT. Comprehension of the pointing gesture is integral to developing shared attention. Using the index finger, the pointing gesture permits us to indicate non-verbally to another person an object or event of interest. Very little consideration has been given to adult visual perception of hand pointing gestures. Across two studies we used an adaptation paradigm to explore the mechanisms underlying the perception of proto-declarative hand pointing. Fourteen participants judged whether 3D-modelled left and right hands, viewed from an allocentric visual perspective, pointed in depth at, or to the left or right of, a target (test angles of 0°, 0.75° and 1.5° left and right), before and after adapting to either hands (left and right) or arrows that pointed 10° to the right or left of the target. After adaptation, the perception of the pointing direction of the test hands shifted with respect to the adapted direction, revealing separate mechanisms for coding rightward and leftward pointing directions. The considerable cross-adaptation found when arrows were used as adapting stimuli, and the asymmetry in aftereffects to left and right hands, suggest that the adaptation aftereffects rely on finely tuned visual discrimination of the morphological structure of both the pointed index finger and the hand.

15:00
The role of shape complexity in the lateral occipital complex
SPEAKER: unknown

ABSTRACT. We investigated the relationship between the physical properties of three animal silhouettes and the BOLD signals from early visual cortex through to the object-selective lateral occipital complex (LOC). Complexity of the three silhouettes was manipulated by effectively low-pass filtering the outer contours, creating low, mid and high complexity images. These stimuli were then presented to participants in a rapid event-related design, allowing us to identify patterns of neural activity specific to each shape. We correlated this neural activity with various physical and more abstract measures of shape similarity to identify which shape properties may be influencing the activation in the various visual ROIs. We found the strictly physical measures were the best predictors of activation in earlier visual areas, likely due to their retinotopic organisation. For extrastriate areas (LOC/pFs), we found measures of complexity were the best predictors. These data appear to reflect the trend away from physical tuning towards more abstract shape representations in higher-level visual areas.

15:00
The Helmholtz size illusion is processed by extrastriate visual cortex
SPEAKER: unknown

ABSTRACT. Neuroimaging research has indicated that for some illusions of size there are commensurate distortions of retinotopy in V1. It remains unclear whether these distortions in retinotopy arise from processing within V1 or from feedback from higher visual areas. Here, we examined the extent of activity within V1 in response to the Helmholtz illusion, in which physically square, horizontally lined stimuli are perceived as taller than their physically square, vertically lined counterparts. This illusory percept can be neutralised by extending the lines to make the stimuli appear square. We found that the spatial extent of activity in V1 followed the physical rather than the perceptual dimensions of the stimulus, suggesting an extrastriate locus for the illusion. To explore the causal role of extrastriate cortex further, we performed a TMS experiment in which participants made judgements about the aspect ratio of rectangular stimuli that were perceptually square. We stimulated V1 and two extrastriate areas, LO1 and LO2. Only stimulation of LO1 resulted in a significant release from the illusion. We show that extrastriate, rather than primary, visual cortex plays a causal role in our perception of illusory size.

15:00
The interference effect of color in amodal completion
SPEAKER: unknown

ABSTRACT. We investigated possible influences of surface color in amodal completion using a sequential matching task. During this task, participants had to judge whether a test shape could be a previously shown partly occluded shape. Similar to De Wit et al. (2006), we used two different shape completions (global, local) but now combined these with different color completions (global, local, anomalous). Global completions extended the global shape and color regularities (e.g. repetition of protrusions and colored patches), whereas local completions extended the local shape and color properties at the occlusion boundaries. We compared the response time of correct judgments in match pairs. To account for shape complexity, and focus on the effect of color context, we used stimuli with the same overall shape but without colored patches as a baseline. We found a strong effect of color, with faster response times for global color completions relative to local and anomalous color completions. Additionally, when comparing global and local color completions, color marginally interacted with shape, revealing the highest facilitation for global color / global shape completions. We will discuss implications of the current results in a framework that accounts for both shape and color completions.

15:00
Visual adaptation distorts judgments of human behaviour during naturalistic viewing
SPEAKER: unknown

ABSTRACT. Observing the behaviour of other individuals allows us to infer the goals of their actions and to derive information about their internal thought processes. These processes are subserved by a network of brain areas containing neurons that respond selectively to specific visual actions. Visual adaptation produces a selective reduction in the sensitivity of neurons tuned to a visual stimulus and results in perceptual aftereffects. The extent to which adaptation influences the processing of actions and human behaviour at increasingly higher stages of processing is unknown. We show that the processing of action kinematics for action recognition is biased by visual adaptation, leading to incorrect judgments of human actions. Visual adaptation also biases more complex inferences about the mental states of individuals in the social scene, but this is due to downstream effects of visual processing biases, rather than adaptation operating within mentalizing or simulation systems. Our research overcomes previous limitations resulting from the use of unrealistic or simplistic stimuli by using Virtual Reality to present life-sized, photorealistic, 3D actors within naturally unfolding social scenes. Judgments of human behaviour depend on a combination of what an individual is doing and the adaptive effect of other individuals within the social environment.

15:00
The shape of optokinetic nystagmus as an indicator of the perception of the vection illusion
SPEAKER: unknown

ABSTRACT. Motion sickness symptoms can occur in the absence of real physical motion of the observer. Specifically, the vection illusion (VI) often ensues as a result of exposure to dynamic visual displays. We developed a method for the quantitative evaluation of vection illusion strength based on optokinetic nystagmus (OKN) characteristics. We studied VI strength as a function of the viewing angle of dynamic visual displays. The VI was induced using the CAVE virtual reality system, and its strength was analyzed using the SSQ questionnaire and OKN characteristics. The results revealed complex links between viewing angle, VI strength and OKN characteristics. When the dynamic visual displays occupied half of the visual field, the VI strength and OKN changes were not very pronounced. For displays that occupied the whole visual field, the VI strength was considerably higher and the OKN characteristics changed significantly: there were many low-amplitude saccades in the slow OKN phase and high-amplitude, high-frequency saccades in the fast OKN phase. Our results showed that the OKN characteristics were tightly linked with VI strength, so it may be possible to use them as real-time indicators of VI perception.

15:00
Action video game play increases the coupling between visual motion processing and visuomotor control
SPEAKER: unknown

ABSTRACT. We previously found that action video game play improves visuomotor control. Here we examined whether the improvement is related to visual motion processing. We tested 12 male action gamers and 11 male non-gamers on a manual control task in which they used a joystick to keep a target that moved randomly along the horizontal axis centered on the display, under two controller dynamics that required them to rely primarily on target position or target motion information to generate control responses. Action gamers had better control precision and higher response amplitude for both control dynamics than non-gamers. We then examined the visual motion processing of these participants using an oculomotor pursuit task in which participants tracked the randomized radial motion of a step-ramp target. Action gamers did not differ from non-gamers in their performance on this task. However, action gamers’ pursuit latency, initial acceleration, steady-state gain, and pursuit direction- and speed-tuning were significantly correlated with their superior control precision when the use of target motion information was required for the control task (Pearson’s r: 0.61-0.77, p<0.05). No such correlations were found in non-gamers. Action video game play increases the coupling between visual motion processing and the sensorimotor system that uses motion information for visuomotor control.

15:00
Perception of biological motion in central and peripheral visual field
SPEAKER: unknown

ABSTRACT. Studies analyzing motion perception in the peripheral visual field demonstrate that the central retina is more specialized for motion perception (Finlay, 1982). In the current research we used biological motion stimuli (consistent with, and extending, the paradigm of Johansson, 1973) with a two-fold aim: first, to explore the perception of biological motion when only limited information about the object’s movement is given; second, to analyze whether stimulus magnification can compensate for reduced motion perception in the peripheral visual field (Gurnsey et al., 2010; Ikeda et al., 2005). Participants were instructed to determine whether the presented stimulus was a biological object (walking in any of five different directions) or a scrambled version of it. The number of dots representing the motion varied from 1 to 13 according to a psychophysical staircase method. The results indicate that perception of biological motion in the central visual field is highly individual (average thresholds range from 3.8 to 7.1 dots). Stimulus magnification can compensate for task performance only at smaller eccentricities (up to 8 degrees), but not at larger eccentricities (16 degrees), demonstrating that the central retina is also specialized for biological motion perception, in addition to, e.g., detection of just-noticeable object displacement.

15:00
The impact of eye movements on perception of “spine drift” illusion
SPEAKER: unknown

ABSTRACT. It has been shown that drift illusions are strongly affected by eye movements (Kitaoka, 2010). The aim of our study was to reveal the role of eye movements in the perception of the “spine drift” illusion. The original display of the illusion was modified to construct four variants: each spine of the central square was rotated by 30°, 60° or 90° for variants 1-3, respectively, and all spines of the central square were oriented randomly for the 4th variant. Observers were asked to look at each display for 10 sec and then to rate the illusion strength on a scale of 1-5. Eye movements were recorded during the task. The results revealed that the illusion strength was highest for the original display and then decreased gradually from the 1st to the 3rd variant; the value for the 4th variant was intermediate. Differences in fixation durations and microsaccade counts were correlated with the subjective ratings. No significant differences in saccade counts were found. Our results indicate that the measured micro and macro eye movements may be considered reliable indicators of the perception of the illusion.

15:00
Relationship between vection and body sway
SPEAKER: unknown

ABSTRACT. When observers view a large visual stimulus that moves uniformly, they often perceive illusory self-motion (vection). A vection-inducing stimulus is also known to induce postural responses and many studies have used visually evoked postural responses (VEPRs) as objective measures of vection. In the present study, to investigate the relationship between vection and VEPRs, we measured vection and center of foot pressure (COP). In an experiment, participants were asked to stand still with their arms at their sides while viewing a vertically or horizontally moving random-dot pattern. They were also asked to rate vection magnitude (from 0 to 100) after each trial. The results showed stronger vection to vertically moving stimuli than horizontally moving stimuli. Vection was also stronger for upward motion than for downward motion. COP started to move in the inducing stimulus direction immediately after the onset of the inducing stimulus, and its magnitude (difference from the baseline, i.e., COP data before the stimulus presentation) gradually became larger during the stimulus presentation. The mean COP during 1-s intervals before and after vection onset showed larger COP after the onset than before it. This suggests that, at least to some degree, mechanisms underlying vection and VEPRs are related.

15:00
Orientation Decoding in V1 During Motion Induced Blindness
SPEAKER: unknown

ABSTRACT. During Motion-Induced-Blindness (MIB), a target surrounded by a coherent motion field becomes perceptually invisible even though the target remains physically present (Bonneh et al., 2001). Scholvinck and Rees (2010) found increased V1 and V5 BOLD activity during target invisibility, suggesting that target representation in V1 is overwritten by feedback from V5 which actively completes the motion field. However, perceptually invisible targets retain orientation adaptation properties (Montaser-Kouhsari et al, 2004), suggesting target representation in V1 is actually preserved, by feedforward processing.

Using Gabors (45°/135°) as targets during a 3T fMRI experiment, we investigated whether the target is overwritten by completion of the motion field in V1, or whether target properties are preserved. If target orientation is present in V1, feedback from V5 does not overwrite the feedforward processing of orientation. If orientation is not present, feedback from V5 completes the motion field, extinguishing target representation in V1. We confirmed that Gabors function as MIB targets at 7.1° and 5.8° eccentricity, but not 4.2°. We retinotopically mapped the target region in V1 and, using multivariate classifiers, decoded orientation when the target was either visible or perceptually invisible (MIB). We provide evidence for both feedforward processing of target orientation and feedback processing of motion.
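
The multivariate classification step might look roughly like the following cross-validated linear decoder applied to V1 voxel patterns, run separately for visible and MIB trials; the voxel data and effect size here are synthetic placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_trials, n_voxels = 80, 120
labels = rng.integers(0, 2, size=n_trials)                  # 0 = 45 deg, 1 = 135 deg
signal = rng.normal(size=n_voxels)                          # orientation-dependent voxel pattern
patterns = rng.normal(size=(n_trials, n_voxels)) + 0.4 * np.outer(labels, signal)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f} (chance = 0.50)")
```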

15:00
Is proprioceptive perception of distance affected by exercise?
SPEAKER: unknown

ABSTRACT. Distance perceived from both visual and proprioceptive information has been shown to be anisotropic: distances toward the zenith are perceived as greater than distances toward the ground. Furthermore, perception of effort depends on the direction of movement: effort invested in a movement is perceived as greater when it opposes the direction of gravity than when it is made in the opposite direction. The aim of this research was to examine whether these two phenomena are related; namely, will proprioceptively perceived distance change if the arm muscles are fatigued? Stimuli were presented to subjects in two directions, horizontal (arm orthogonal to the body) or vertical (arm raised up), at 20, 40, or 60 cm from the participants' shoulders. Participants' task was to match horizontal and vertical distances by moving their arm. Twenty blindfolded participants performed the task before and after exercise, during which they raised a weight of 2.5 kg upwards (vertical direction) or pushed it away from the body (horizontal direction). Results showed an effect of distance (F(1, 17)=52.6, p<0.01) and direction (F(1, 17)=1051, p<0.01), but no effect of fatigue. Participants perceived distances in the vertical direction as greater, regardless of fatigue. These results are not in line with the hypothesis that the anisotropy of perceived distance and that of effort are related.

15:00
The decay of perceptual grouping by collinearity

ABSTRACT. Through grouping processes, the visual system continuously builds coherent wholes out of its local input descriptors. However, it must at the same time deal with regular interruptions of this input, such as blinks, saccades or occlusions. Here we investigate the persistence profile of perceptual groupings across an interruption. Subjects were briefly presented with meaningless object outlines defined by the collinearity of Gabor elements, placed on this contour in a field of randomly oriented distractor elements. After a variable ISI without orientation information, only part of the contour was re-aligned. Subjects were instructed to respond to which side of the display, left or right, the re-aligned contour fragment could be seen. A two-stage persisting benefit to contour detection was observed, consisting of an early stage specific to the local element positions of the first stimulus (<200 ms) and a later stage independent of it. We conclude that the grouping by collinearity of local orientations can indeed survive brief input interruptions, to the benefit of subsequent grouping processes.

15:00
Perceiving and acting upon weight illusions in the absence of somatosensory information
SPEAKER: unknown

ABSTRACT. When lifting novel objects, the fingertip forces employed are influenced by a variety of visual cues such as object volume and apparent material. This means that heavy-looking objects tend to be lifted with more force than lighter-looking objects, even when they weigh the same amount as one another. Expectations about object weight based on visual appearance also influence how heavy an object feels when it is lifted. For example, in the ‘size-weight illusion’, small objects feel heavier than equally weighted large objects. Further, in the ‘material-weight illusion’, objects which appear to be made from light-looking materials feel heavier than objects of the same weight which appear to be made from heavy-looking materials. Here, we investigated the degree to which peripheral somatosensory information contributes to these perceptual and sensorimotor effects in IW, an individual with peripheral deafferentation (i.e., a complete loss of haptic and proprioceptive feedback). We examined his perception of heaviness and fingertip force application over repeated lifts of identically weighted objects which varied in size or material properties. Despite being able to judge real weight differences, IW appeared neither to experience the size- or material-weight illusions nor to show any evidence of sensorimotor prediction based on size and material cues.

15:00
The effect of proximity in numerosity judgements
SPEAKER: unknown

ABSTRACT. The human ability to estimate the numerosity of large sets of visual elements is well known. We present a study on combinatorial properties that affect human numerosity judgements. We used patterns of elements (between 22 and 40) placed at random within a circular region. We asked a sample of observers (N=24) to compare two patterns of equal cardinality (presented in two intervals) and choose the one that appeared more numerous. Observers also had to judge which pattern appeared more dispersed, and which appeared more clustered. We then compared the human responses to the choices predicted by spatial properties of the patterns: the area of the convex hull, the occupancy area, the total degree of connectivity, and the local clustering. Note that all indices except the convex hull depend on the notion of proximity between pairs of elements. Our experiments investigate the effect of such parameters on perception. The results suggest that estimates of numerosity, dispersion and clustering are based on diverse spatial information, that there are alternative approaches to quantifying clustering, and that in all cases clustering is linked to a decrease in perceived numerosity. The alternative measures have different properties and different practical and computational advantages.
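
As an illustration only (not the authors' code, and with arbitrary example parameters), two of the spatial indices mentioned above, the convex-hull area and a simple proximity-based clustering measure, could be computed for a random dot pattern along these lines:

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n = 30
r = np.sqrt(rng.uniform(0, 1, n))              # uniform density inside a unit circle
a = rng.uniform(0, 2 * np.pi, n)
pts = np.column_stack([r * np.cos(a), r * np.sin(a)])

hull_area = ConvexHull(pts).volume             # in 2D, .volume is the hull area

# A simple proximity-based clustering index: mean nearest-neighbour distance
# (smaller values indicate a more clustered pattern).
d = squareform(pdist(pts))
np.fill_diagonal(d, np.inf)
mean_nn = d.min(axis=1).mean()

print(f"convex hull area: {hull_area:.3f}, mean NN distance: {mean_nn:.3f}")
```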

15:00
Using Visual Search to assess cues for Object shape.
SPEAKER: unknown

ABSTRACT. The visual system determines the shape of closed contours and can find a target among a set of distractors very rapidly if it contains a unique cue. Search speed and asymmetries in performance that can arise when target- and distractor-element roles are reversed were used to determine elements of the visual system’s code for globally-integrated shape. Kristjansson and Tse (2001) argued curvature discontinuities (CDs) are critical local cues to shape, supporting rapid visual search with minimal distractor interference when present in the target but absent in the distractors. However, studies using Radial Frequency contours have suggested the internal polar angle between adjacent corners plays an elementary role. Two search experiments will be presented in which performance within-observers (n=5) is contrasted for patterns differing in curvature, CD, corner numerosity and internal polar angle. The results show that efficient search does not depend on the presence or absence of CDs, nor on differences in corner curvature or numerosity but that the angle separating corners was a primary feature driving both ‘pop-out’ and search asymmetry. The results support the conclusion that polar angles are labelled cues to shape in human vision and therefore a critical element in the code for object shape.

15:00
Louder voice for bigger physical movement: Compatibility between vocalization and action production
SPEAKER: unknown

ABSTRACT. Previous research has shown that attributes of a perceived stimulus influence the manual force produced in the stimulus-response compatibility paradigm (e.g., Romaiguère, Hasbroucq, Possamdi, & Seal, 1993). The present study investigated a response-response compatibility between vocalization and manual movements. The participants' task was to draw a circle on a touch-screen display while vocalizing one of the Japanese vowel sounds (close to “a” in the International Phonetic Alphabet). The level of vocalization was fixed either below 55 or above 70 dB in each block (the background sound level was 45 dB). Participants drew a circle so that it was inscribed in a virtual square defined by a set of briefly (500 ms) presented vertices of a small, medium, or large square. Results indicated that the area of the drawn circles was larger with the louder vocalization than with the quieter one, regardless of the size of the cue. We suggest that information related to the magnitude of physical movements is shared across different kinds of actions, leading to interference between them.

15:00
Limitations of the ODOG filter in special cases of brightness perception illusions
SPEAKER: unknown

ABSTRACT. The Oriented Difference of Gaussian (ODOG) filter of Blakeslee & McCourt (1999) has been successfully employed to explain several brightness perception illusions, including illusions of both the brightness-contrast type, e.g. Simultaneous Brightness Contrast (SBC) and Grating Induction (GI) (McCourt 1982), and the brightness-assimilation type, e.g. the White effect and the shifted White effect. We demonstrate some limitations of the ODOG filter in predicting perceived brightness through a study involving specific parameters such as test patch length and spatial frequency in the White and shifted White illusions. More specifically, we find that for very long grey patch lengths the ODOG filter fails to correctly predict the direction of brightness change, and this failure persists for a wide range of frequencies. [We are grateful to Alan Robinson of University of California, San Diego, for providing his MATLAB implementation of the ODOG filter]
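
For readers unfamiliar with the model, the sketch below shows one oriented difference-of-Gaussians kernel applied to an image, in the spirit of the ODOG approach; the full ODOG model uses a bank of several scales and orientations with response normalisation, and the kernel parameters here are illustrative assumptions, not those of Blakeslee & McCourt (1999) or of this study:

```python
import numpy as np
from scipy.signal import fftconvolve

def oriented_dog(size, sigma, aspect, theta):
    """Centre-surround Gaussian kernel elongated along orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(theta) + y * np.sin(theta)      # along the filter's long axis
    v = -x * np.sin(theta) + y * np.cos(theta)
    def g(s):
        e = np.exp(-(u**2 / (aspect * s)**2 + v**2 / s**2) / 2)
        return e / e.sum()
    return g(sigma) - g(2 * sigma)                 # centre minus broader surround

image = np.random.rand(128, 128)                   # stand-in for a White-effect display
response = fftconvolve(image, oriented_dog(31, 3.0, 2.0, 0.0), mode="same")
```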

15:00
An action to an object does not improve its episodic encoding, but removes distraction
SPEAKER: unknown

ABSTRACT. There is some debate as to whether responding to objects in our environment improves episodic memory or not. Some authors claim that actively encoding objects improves their representation in episodic memory. Conversely, episodic memory has also been shown to improve in passive conditions, suggesting that the action itself could interfere with the encoding process. This study looks at the impact of attention and action on episodic memory using a novel WWW task that includes information about object identity (What), spatial (Where) and temporal (When) properties. With this approach we studied the episodic memory of two types of object, targets, to which attention or an action was directed, and distractors, objects to be ignored, under two selection states: active vs. passive selection. When targets were actively selected, we found no evidence of episodic memory enhancement; instead, memory for irrelevant sources was suppressed. The pattern was replicated across a 2D static display and a more realistic 3D virtual environment. This selective attention effect on episodic memory was not observed on non-episodic measures, demonstrating a link between attention and the encoding of episodic experiences.

15:00
Up-down asymmetry in vertical vection
SPEAKER: unknown

ABSTRACT. Research has reported close relations between the mechanisms underlying vection and optokinetic nystagmus (OKN). In the present study, we investigated whether an up-down asymmetry similar to that found in vertical OKN, i.e., larger OKN responses for upward motion than for downward motion, would also appear in vertical vection. We conducted two experiments. In both, participants viewed a vertically moving random-dot pattern and reported vection by using a joystick whenever they experienced it. After each trial, they also rated the vection magnitude. In Experiment 1, vection was measured with or without a fixation stimulus. In Experiment 2, the time course of the vection magnitude (with a fixation stimulus) was examined. Experiment 1 showed larger vection for upward motion than for downward motion, irrespective of the presence or absence of the fixation stimulus. However, the vection onset latency did not change with the stimulus motion direction. Experiment 2 showed that the up-down asymmetry in vection manifested progressively during the later part of the stimulus presentation period. These results clearly indicate an up-down asymmetry in vertical vection, and suggest an overlap in the mechanisms underlying vection and OKN.

15:00
How to study geometrical perceptual illusions

ABSTRACT. Using Titchener's (1901) inverted T as my prime example, I discuss two approaches to the study of geometrical perceptual illusions, aimed at discovering their causes: In the context approach, the illusion-inducing figure is put into different contexts of other illusion or non-illusion figures, and in the variation approach, the critical figure itself is modified in various ways. In nine recent experiments, I let naïve observers verbally compare the lengths of the T's two lines and, by spreading thumb and index finger appropriately, haptically indicate the lengths of target lines of Ts. The visual T-illusion, which always consisted in an overestimation of the T's undivided line, survived in Ts, the lines of which had been replaced by dashed lines or dots; it was hardly affected by self-similar contexts of other Ts, and it even existed in plane figures, for which the T constituted a skeleton. The haptic illusion vanished in delimited branching patterns, in which Ts had been embedded, and in a periodic discrete pattern of symmetry group pmm when all Ts were in lateral orientation. The visual illusion seems to be caused by the T's dihedral symmetry; the haptic illusion depends on stimulus conditions.

15:00
Auditory rhythms influence perceived distance of an occluded moving object
SPEAKER: unknown

ABSTRACT. Using displays in which a moving disk disappeared behind an occluder, we examined whether an accompanying auditory rhythm influenced the perceived displacement of the disk while being occluded. Starting with an auditory rhythm (the baseline rhythm), comprising a relatively fast alternation of equal sound and pause lengths, we had two different manipulations to create auditory sequences with a slower rhythm; either the pause lengths (block-1) or the sound lengths (block-2) were increased. During a trial, a disk moved at a constant speed, and was accompanied by a sound sequence. Participants were instructed to judge the expected position of the disk the moment the auditory sequence ended (indicated by a higher tone) by touching the judged position on the touch screen. Additionally, we included a no-rhythm condition that ended with a single high tone. We found that the baseline rhythm led to much more accurate distance judgments as compared to the no-rhythm condition. Slower rhythms generally led to an underestimation of the distance for both pause length and sound length variations, with a larger differentiation between the pause lengths. We will discuss implications of the results in terms of crossmodal processing and timing of external events.

15:00
Determination of relevant component parts of an object in a discrimination task with the bubbles method
SPEAKER: unknown

ABSTRACT. To perform tasks such as detection, recognition, discrimination or identification of objects in a natural environment by day or by night, it is possible to use images acquired from different sensors: images recorded by a visible-light sensor, thermal images from an infrared sensor, or images acquired at night with a light intensifier. Depending on the environment and on sensor characteristics, image quality may differ considerably, and the appearance of an object also differs across sensors. The aim of this work is to evaluate observers' performance in discriminating objects imaged with different sensors and to determine which parts of the objects are useful in a discrimination task. To this end, we developed a psychovisual experiment on vehicle discrimination using the bubbles method [Gosselin, Schyns, 2001, Vision Research 41, 2261-2271]. The stimuli presented to the observers are constructed by filtering the original image at different scales [Lelandais, Plantier, BIOSIGNALS 2013, Spain] and multiplying it by Gaussian “bubbles” that partially obscure the signal [Lelandais, Roumes, Plantier, ECVP 2013, Bremen]. Previous results gave the number of bubbles necessary to perform the task. Here, we determine the parts of the vehicles that are useful for their discrimination, which highlights differences between the sensors.
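
The masking step of the bubbles technique (Gosselin & Schyns, 2001) can be sketched as follows; this is an illustrative example only (the image, number of bubbles, and aperture size are arbitrary assumptions, and the multi-scale filtering described in the abstract is omitted):

```python
import numpy as np

def bubble_mask(shape, n_bubbles, sigma, rng):
    """Sum of randomly placed Gaussian apertures, clipped to a maximum of 1."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for cy, cx in zip(rng.integers(0, h, n_bubbles), rng.integers(0, w, n_bubbles)):
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0, 1)

rng = np.random.default_rng(1)
image = np.random.rand(256, 256)            # stand-in for a vehicle image
stimulus = image * bubble_mask(image.shape, n_bubbles=20, sigma=12, rng=rng)
```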

15:00
Motion dazzle camouflage in groups; evidence for an interaction between high contrast patterns and the confusion effect.
SPEAKER: unknown

ABSTRACT. Research into animal camouflage has implications that are relevant to many fields. One hypothesis for a class of apparently conspicuous animal colourations, high contrast stripes, is that they represent ‘dazzle camouflage’. Thayer (1909) hypothesised that such patterns may disrupt an observer’s perception of the trajectory or speed of a moving animal. Psychological research with human participants in computer based tasks has found support for this hypothesis with single targets (Scott-Samuel, Baddeley, Palmer & Cuthill, 2011; Stevens et al., 2011). However, the ways in which camouflage affects the capture of moving individuals in groups are unknown. One advantage of grouping behaviour is the ‘confusion effect’, which describes reduced predator attack success with increasing prey group size, possibly due to the increased sensory challenge of tracking one target among many distractors (Landeau & Terborgh, 1986). We investigated the hypothesis that the confusion effect can be compounded by the effects of dazzle camouflage. Our results suggest that some high contrast colourations are superior to background matching colourations as a defence against predator tracking in groups, and that these patterns interact with the confusion effect to a greater degree than background matching colourations. These findings will impact future understanding of camouflage and movement.

15:00
Is adaptation to human motion necessary to change the apparent speed of locomotion?
SPEAKER: unknown

ABSTRACT. Adaptation to videos of human locomotion (recorded at the London Marathon) affects observers’ subsequent perception of human locomotion speed; normal speed test stimuli are perceived as being played in slow-motion after adaptation to fast-forward stimuli and conversely are perceived as being played in fast-forward after adaptation to slow-motion stimuli. In this study we investigated whether the presence of recognisable human motion in the adapting stimulus is necessary for this effect to occur. The adapting stimuli were spatially scrambled; horizontal pixel rows were randomly shuffled. The same shuffled order was used for all frames preserving horizontal motion information, but ensuring no human form could be recognised. Results showed that the after-effect persisted despite spatially scrambling the adapting stimuli; human motion is not a necessary requirement. The after-effect seems to be driven by adaptation in relatively low-level visual channels rather than the high-level processes that encode human motion.

15:00
Relationship between reaction time and ball catching in vision-restricted conditions in elite sportspeople
SPEAKER: unknown

ABSTRACT. What aspects of vision are important for the exceptional visuomotor skills needed in many sports? We compared performance on reaction times to visual stimuli and ball-catching under restricted vision, with elite cricketers, elite rugby league players, and controls. Balls were delivered by bowling machine; participants wore Plato liquid crystal goggles to occlude portions of ball flight. We have previously shown elite cricketers have faster reaction times to visual stimuli, and excel at tasks requiring rapid pickup of visual information (Cruickshank et al 2014). We again find cricketers have faster reaction times than controls to visual stimuli presented centrally (ΔRT=17ms, t(55.3)=2.16,p=0.035) and peripherally (ΔRT=15ms, t(54.1)=2.301,p=0.025). Rugby league players’ reaction times fall between the two groups, but are not significantly different from either. Peripheral and central reaction times correlate with ability to catch a ball when only the first quarter of the flight is visible (central r=-0.27,p=0.023; peripheral r=-0.354, p=0.002). This task distinguishes between cricketers and the other groups (cricketers catch around 10% more balls, t(31.8)=2.901, p=0.007). Faster reaction times correlate with more catches, despite approximately 600ms delay between last available visual information and catch. Further research will focus on tasks probing fast pickup of visual information and visual working memory.

15:00
A study of magnitude estimation with depth cues changing visual perception of circle's size judgment
SPEAKER: unknown

ABSTRACT. We used magnitude estimation to obtain the apparent size of circles under four experimental conditions: (1) a black background, and backgrounds with gradients designed to evoke depth perception, consisting of (2) vertical, (3) radial and (4) horizontal lines. Thirty subjects with normal or corrected-to-normal visual acuity (mean age=28.4 yrs; SD=5.5) were tested. The display consisted of two gray circles (luminance 151 cd/m2), 18.3° apart. On the left was the reference circle (visual angle of 4.5 deg), which was assigned an arbitrary value of 50. The subjects' task was to judge the size of the circle on the right side of the monitor screen by assigning a number proportional to its size relative to the reference circle. Ten sizes (1.0, 1.9, 2.7, 3.6, 4.5, 5.4, 6.2, 7.2, 8.1, 9.0 deg at 50 cm) were presented in random order in each condition. Our results show a high correlation between the logs of the stimuli and of the subjects' responses in all four conditions (R=0.994, R=0.992, R=0.995 and R=0.998). The Power Law exponents were (1) 1.28, (2) 1.40, (3) 1.27 and (4) 1.26. The circle size was judged as subjectively closer to the physical size in all conditions except the one with vertical lines as visual cues.
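
For reference, a Stevens power-law exponent of this kind can be estimated by linear regression in log-log coordinates. The sketch below uses the stimulus sizes from the abstract but made-up illustrative responses, not the study's data:

```python
import numpy as np

sizes = np.array([1.0, 1.9, 2.7, 3.6, 4.5, 5.4, 6.2, 7.2, 8.1, 9.0])   # deg
responses = np.array([8, 18, 27, 38, 50, 63, 74, 90, 103, 118])        # illustrative mean estimates

# Stevens' law: R = k * S**n  =>  log R = log k + n * log S
slope, intercept = np.polyfit(np.log10(sizes), np.log10(responses), 1)
print(f"power-law exponent ≈ {slope:.2f}")    # ~1.2 for these illustrative numbers
```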

15:00
Neural responses to symmetry presented in the visual hemifields
SPEAKER: unknown

ABSTRACT. Symmetry ERP work has identified a difference wave called the Sustained Posterior Negativity (SPN); amplitudes in posterior electrodes are more negative for symmetrical than for random patterns from around 250 ms after stimulus onset (Bertamini & Makin, 2014). Based on the psychophysical and electrophysiological evidence, it seems logical to suggest that a specialized network spanning both hemispheres generates the SPN. Through their interhemispheric connections, the fibres of the corpus callosum may play a role in mediating symmetry detection between the hemispheres (Herbet & Humphrey, 1996). We examined whether the SPN was produced only by reflectional symmetry presented at fixation, or whether each hemisphere could produce an SPN specific to the pattern presented in the respective hemifield. Symmetrical and random dot patterns were presented to each hemisphere by positioning them on either side of the fixation cross. Participants were then required to make a colour discrimination about the presented patterns (whether the patterns were light or dark red). An SPN was produced in each hemisphere for the pattern that was in the contralateral visual hemifield. Our results therefore demonstrate that each hemisphere has an independent symmetry detector and that interhemispheric connections do not play a role in producing the SPN.

15:00
Side view dynamic cue for gender recognition of Point-Light Walker based on information from spectral component analysis
SPEAKER: unknown

ABSTRACT. Previous studies have demonstrated gender-specific lateral body sway in the frontal view of a Point-Light Walker (PLW). However, this dynamic cue is obscured, especially in lateral view. This study aimed to find another dynamic cue for gender discrimination in the lateral view of a PLW. Twenty-one undergraduates (10 males, 11 females) served as walkers. Seven viewers were asked to judge the gender of the 21 PLWs 10 times in a random order and ultimately identified 7 males and 7 females above chance. In these 14 PLWs, the cross-correlation function showed that left hip motion correlated inversely with ipsilateral shoulder motion. Fast Fourier transform of hip and shoulder swing demonstrated two large spectral components; the first component corresponded to a step cycle and the second, more rapid component to a half step cycle. The first component amplitude was greater in female hips (3.89 vs. 1.75 arbitrary units, p<0.001) and shoulders (7.73 vs. 3.70, p<0.01) than in males. The second component was also greater in female hips (4.27 vs. 2.21, p<0.01) than in males, while there was no gender difference in the second component of shoulder motion. Thus, the feminine gait in lateral view could be characterized by a rapid hip swing with a half step-cycle length.
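
As an illustration of this spectral decomposition (using synthetic data, an assumed sampling rate and step frequency, and not the walkers' recordings), the step-cycle and half-step-cycle amplitudes can be read from an FFT of the hip-sway signal:

```python
import numpy as np

fs = 60.0                                   # assumed sampling rate (frames/s)
t = np.arange(0, 10, 1 / fs)                # 10 s of walking
step_freq = 0.9                             # assumed step-cycle frequency (Hz)
# Synthetic hip sway: step-cycle component (amp 3.0) + half-step-cycle component (amp 1.5)
hip = 3.0 * np.sin(2 * np.pi * step_freq * t) + 1.5 * np.sin(2 * np.pi * 2 * step_freq * t)

amp = np.abs(np.fft.rfft(hip)) * 2 / len(hip)      # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(hip), 1 / fs)

first = amp[np.argmin(np.abs(freqs - step_freq))]        # step-cycle component
second = amp[np.argmin(np.abs(freqs - 2 * step_freq))]   # half-step-cycle component
print(first, second)                                     # ≈ 3.0 and 1.5
```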

15:00
A new principle of figure-ground segregation and object formation: The accentuation
SPEAKER: unknown

ABSTRACT. In this work we explored phenomenologically a new principle of figure-ground segregation and object formation: accentuation. This principle, first suggested in previous work (Pinna 2010; Pinna & Sirigu, under revision), is now extended systematically to new visual domains, ranging from figure-ground segregation to part-whole organization. The effectiveness of the principle of accentuation has been studied in the spirit of the Gestalt psychologists and demonstrated through new phenomena. We also demonstrate that this principle is independent and autonomous and that it can be pitted against, or aligned with, other Gestalt principles of grouping and figure-ground segregation. Moreover, accentuation has been extended from simple drawings to biological conditions, where the appearance and evolutionary success of a living organism depend on the accentuation of single parts of the body aimed at hiding from, showing to, deceiving, attracting, or repelling other organisms. Our results suggest that accentuation can be considered one of the key biological elements that strengthen adaptive fitness.

15:00
The role of orientation information in motion perception.
SPEAKER: unknown

ABSTRACT. Increasing psychophysical evidence shows that the orientation of a moving stimulus directly affects its perceived direction of motion. Here, we analyzed these orientation-induced motion shifts (OIMS) for a variety of stimulus conditions. In our general procedure, a single Gabor pattern was horizontally displaced over several frames with a particular frame duration and ISI, and the observers indicated whether the Gabor pattern appeared to move upward or downward. The apparent bias in motion trajectory was measured as a function of the orientation of the Gabor patch. The results showed that the apparent motion trajectory was systematically attracted toward the orientation of the pattern. The bias was large when the frame duration was short and the number of frames was small, but no bias was found when the stimulus moved continuously (i.e., ISI=0). A subsequent experiment using the tilt aftereffect revealed that the motion shift depended upon the perceived, but not the physical, orientation of the pattern. We also found that second-order orientation, as defined by the contrast envelope of a pattern, can induce the direction shift. These results indicate that local orientation information is used for motion integration at a relatively high level of motion processing.

15:00
Floor pattern orientations impact on human-human spatial interaction
SPEAKER: unknown

ABSTRACT. Last year, we reported that floor pattern orientations such as those of paving stones can induce lateral veering (Leonards et al., 2014). Here, we asked whether veer-inducing floor patterns facilitate spatial interactions between two people when passing each other in a corridor in opposite directions. Seven pairs of participants passed each other repeatedly in a corridor, walking as straight as possible but without bumping into each other, while we varied the orientation of the floor patterns from trial to trial. 3D motion capture allowed estimation of the distance between the two participants at the time of cross-over. In conditions in which participants had no prior instructions on which side to pass the oncoming person, the orientation of the veer-inducing floor pattern not only predicted the side on which participants passed each other, but also produced larger distances between participants at the time of cross-over than non-veer-inducing patterns. In conditions in which one of the participants received instructions on which side to pass the oncoming person, instructions congruent with the pattern orientation led to significantly larger passing distances than incongruent instructions. Together, these data suggest that floor pattern orientations indeed facilitate spatial interactions between people passing each other in opposite directions.

15:00
The Piling Illusion
SPEAKER: unknown

ABSTRACT. Prepare four bank notes. Pile three of them on your left in such a way that no bill is occluded completely, and place the remaining one on your right. You will observe that the bill on top of the pile appears smaller than the one on the right. This is a typical example of what we call the piling illusion, which we found recently. The illusion magnitude is small, but the effect can be observed with various 2D and 3D objects: try coins, books, boxes, containers, and geometrical figures. We checked its basic phenomenal properties by making variations. Exploratory observation revealed that the illusion is robust across objects and their configurations, but disappeared when the object was complex or non-geometrical (e.g., faces, animals). The piling illusion looks similar to the occlusion illusion (Palmer, 2007). The difference is that in the piling illusion no amodal completion is needed and the occluder shrinks, whereas in the occlusion illusion the occluded object expands in comparison with an isolated control object. It seems plausible that the piling illusion occurs because the distance between the observer and the occluder is perceived as smaller relative to the standard, thereby rendering its perceived size smaller.

15:00
Dance expertise modulates the visuomotor perception of body motion
SPEAKER: unknown

ABSTRACT. The modulation of visuomotor processing of various body movements by motor expertise due to dance practice was investigated in 12 professional contemporary dancers and 12 right-handed controls. 212 video pairs of dance actions lasting 3 seconds were shown to participants while their event-related brain potentials (ERPs) were recorded. The second video of each pair could be either a repetition of the previous one or a slight variation of it along 3 main dimensions (time, space and body). The task consisted in responding to static images of a dance action by pressing a button. A repetition suppression (RS) effect elicited by repetition of the same video was visible in both groups, whereas a significant modulation of brain responses to deviant stimuli, indexing a strong effect of neural plasticity due to motor practice, was found only in dancers. SwLORETA source reconstruction, performed on the ERP difference waves “different” minus “same” videos (450-550 ms) recorded in dancers, showed a widespread network of activations related to visuomotor perception including the limbic (BA 38, 23) and fronto-parietal systems (BA 40, 3, 4, 9), plus areas devoted to biological motion (BA 20, 21, 41) and face and body processing (BA 20, 37).

15:00
Population code modelling of grating detectability along the apparent motion path
SPEAKER: unknown

ABSTRACT. Apparent motion masking refers to the decreased detectability of stimuli when presented along the path of apparent motion. This masking is typically attributed to activation in primary visual cortex: V1 neurons presumably represent apparent motion by showing an increased activation along the motion path. This activation masks the perception of target stimuli that are presented in the motion path and that match the apparent motion inducing stimuli. Previous studies have measured target detectability at a single stimulus intensity level. In the present study, we measured full psychometric functions, relating the detectability of grating targets to their contrast. We find that apparent motion has a strong masking effect. The masking is tuned: detectability is only impaired when target orientation matches the orientation of the apparent motion stimuli, suggesting that masking is indeed related to activation in primary visual cortex. However, we found that masking is particularly strong at large contrast levels. We use computational modelling to show that such an effect is not expected when assuming a mere increase in V1 activation along the apparent motion path. We propose a new population code model that relies on strong V1 inhibition instead of excitation to account for our results.

15:00
An effect of noise on numerosity comparison
SPEAKER: unknown

ABSTRACT. In a typical numerosity estimation task, where one has to compare two sets of objects, the visual system can have access to activation of relevant feature maps, e.g., color. The question is whether such activation can modulate the number of incorrect responses. We simultaneously presented two sets of red squares (2 sizes) in the left/right visual fields, so that the total perimeters on both sides were equal. The subjects (N=21) were asked to choose the larger side. We manipulated the color (red/grey) and the proportion of noise in the left and right visual fields (0/30/50/70%). Only relevant (red) noise led to an increase in incorrect responses in the condition with 70% of noise on the “smaller side” as compared with 50% (p<0.05) and 30% (p<0.01). Irrelevant (grey) noise had no effect on the responses. Also, no differences in RTs were found, suggesting that the change in the number of incorrect responses was not caused by increased task difficulty. The number of correct responses was significantly different from chance (p<0.01) for all conditions except the condition with 70% of red noise on the “smaller side”. The results suggest that activation of a color feature map can modulate approximate estimation of numerosity.

15:00
Parameters that modulate the interaction between target and background patterning in speed perception
SPEAKER: unknown

ABSTRACT. ‘Motion dazzle’ is a phenomenon where high contrast patterns on moving targets are hypothesised to cause errors in speed and direction perception, and has been suggested to provide an explanation for why striped patterning has evolved in animals such as zebras. In nature, predators may be trying to pick out one zebra from many in a herd, meaning that the pattern on the target is similar to the pattern of the background. We have previously shown that the perception of speed of striped targets on striped backgrounds is different from the perception of grey targets on the same background. Here, we extend this work using two alternative forced choice paradigms to show that these effects seem to be strongest with backgrounds with alternating black and white stripes (in comparison to average luminance matched random striped backgrounds) and also depend upon the spatial frequency of striped targets, with the largest effects being seen at intermediate spatial frequencies. We also show that the effects seen differ depending on whether subjects fixate or track the targets. We discuss what these findings may mean in terms of the underlying mechanisms of motion perception and the purpose of striped patterning in the natural world.

15:00
The influence of segmentation on rapid scene categorization
SPEAKER: unknown

ABSTRACT. Scene categorization is performed extremely rapidly, suggesting that an efficient, feedforward coding system is in place. Converging evidence from behavioral, neural, and computational investigations indicates that this system may rely on the extraction of simple features and does not require the segmentation of these features into coherent objects or surfaces. This is consistent with the idea that image segmentation is a computationally expensive process requiring feedback. However, are there truly no grouping and image segmentation processes occurring during fast feedforward processing? In a series of three experiments, we investigated the influence of segmentation cues on scene categorization. We presented participants with two scenes divided into four parts using different segmentation cues displayed for 300 ms prior to image onset. These cues established either a congruent (supporting the correct segmentation of the image into two scenes) or incongruent (prompting observers to incorrectly group scene segments) segmentation. Participants were less accurate in scene categorization when incongruent segmentation cues were presented, indicating that segmentation can influence categorization. Moreover, the effect remained robust even when the cues were presented concurrently with the images, suggesting that, whilst scene categorization might be rapid, it can also be influenced by segmentation mechanisms.

15:00
Within- and between-individual integration across visual perspectives in an object location task
SPEAKER: unknown

ABSTRACT. Humans combine multiple visual cues in order to form more reliable percepts (Ernst and Banks, 2002; Ernst and Bulthoff, 2004), including integration of information from varying perspectives (Avraamides et al., 2012). Recent studies in group psychophysics (Bahrami et al., 2010, 2012) have demonstrated that different individuals can achieve a similar integration. The present study asked how efficient different individuals are in integrating location information across different spatial dimensions with different reliability, by comparing group performance to individual performance. Participants were asked to locate objects in 2D projections of a 3D layout. We generated projections from different camera view angles to simulate different perspectives on the same layout. In a series of experiments we systematically manipulated: a) the angular difference between two individual views; b) the possibility to communicate verbally; c) the presence of feedback. The results showed that the opportunity to combine information with a partner resulted in increased accuracy and reduced variability of location judgments. Importantly, cross-individual integration of spatial information was as efficient as within-individual integration. Our results also suggest that complementarity of individual uncertainties is a sufficient condition for dyads to reliably outperform individuals even in the absence of feedback and verbal communication.

15:00
Subcortical influences in tool processing - the case for the magnocellular processing under high temporal frequencies
SPEAKER: unknown

ABSTRACT. Within the retina, parasol and midget ganglion cells differ in their temporal resolution. This distinction gives rise to the magnocellular (M) and parvocellular (P) pathways, respectively. Neurons within these pathways retain the physiological response profiles of their respective ganglion cells: the M-pathway prefers fast moving stimuli, whereas the P-pathway prefers static or slow moving stimuli. Here, we investigated the role of the M- and P-pathways in manipulable object recognition. We collected fMRI data using rapid serial visual presentation (RSVP) of tool and animal images at different presentation rates (5Hz, 10Hz, 15Hz, 30Hz) to bias processing towards the M- or P-pathway. We previously showed that tool preferences in the inferior parietal lobule (IPL) are driven by P-input, whereas in the superior parietal lobule (SPL) tool preferences are driven by M-input (Mahon, Kumar, & Almeida, 2013). In the current study, we found that tool-selective responses in these two areas are modulated by the presentation rate. Specifically, we demonstrated that the SPL, dominated by M-input, shows a preference for fast moving stimuli, whereas the IPL, dominated by parvocellular input, prefers slow moving or static stimuli. Our findings illustrate how these anatomical pathways influence the organization of tool-processing networks.

15:00
The effect of inter-stimulus interval on the partially occluded slalom illusion
SPEAKER: unknown

ABSTRACT. The slalom effect is a kinetic illusion of direction where the straight trajectory of a dot is perceived as sinusoidal due to its intersection with static tilted lines. The illusion has been explained as a global integration of local distortions occurring at each dot-line intersection (Cesaro & Agostini, 1998). When the dot trajectory is partially occluded by replacing the inducing lines with solid black triangles, the magnitude of the effect increases (Soranzo, Gheorghes & Reidy, 2014). A possible explanation is that the inferred motion path behind the occluder is longer than that perceived directly; Kim, Feldman & Singh (2012) showed that when two objects are alternately presented at the ends of an occluder, the reported path of the object varies with the inter-stimulus interval (ISI), in that a longer ISI induces a longer reported path. The present study investigates whether the magnitude of the slalom illusion depends on the time spent by the dot behind the occluding triangles. To test this, the dot speed is kept constant when the trajectory is visible, but manipulated when the dot is occluded. Results are discussed in relation to apparent motion and amodal completion as well as possible delayed global integration of local distortions.

15:00
Impacts of fatigue on mental rotation
SPEAKER: unknown

ABSTRACT. According to Shepard & Metzler (1971) and Cooper (1975, 1976), reaction time (RT) for recognizing the identity of an object increases with the rotational angle of these three- and two-dimensional objects. The purpose of our study was to explore possible impacts of fatigue on mental rotation. The study therefore had a two-fold aim: to estimate a possible correlation between the time spent awake and RT, and to explore the impact of fatigue on RT in a mental rotation task. To analyze the RT for recognizing rotated objects we constructed a digitized test consisting of 256 object pairs (128 two- and 128 three-dimensional). According to our results, RT is longer for mental rotation of mirrored objects for both 2D and 3D stimuli. The error rate is higher for 3D (18.3%) than for 2D objects (10.7%) but does not depend on the time at which the test is conducted. The average RT for rotation of the 2D and 3D objects is faster in the period 5-10 hours after awakening. Although we observe that fatigued subjects make fewer errors, there is no impact of fatigue on RT in mental rotation.

15:00
Visual search for objects with different direction variability and speed
SPEAKER: unknown

ABSTRACT. Previous research found that visual search for a static target among moving targets is slower than in the reversed condition (Dick, 1989). Moreover, people are able to find a moving target following a different trajectory than the distractors (Horowitz et al., 2007). Varying object speed and direction variability is also an important aspect of other cognitive tasks (e.g., Multiple Object Tracking). In two experiments, we studied how sensitive participants were in searching for a target differing in higher/lower direction variability (Exp. 1) or faster/slower speed (Exp. 2). In both experiments, 8 objects moved for 8 seconds and one of the objects differed in variability of direction (Exp. 1) or speed (Exp. 2). We tested performance over eight levels of variability/speed for targets and two levels for distractors. Participants were able to detect both faster and slower targets successfully, but performance for direction variability was asymmetrical: more variable directions (Von Mises kappa > 16, sampled at 100 fps) were difficult to distinguish from each other. Overall performance was lower in Experiment 1, showing that detecting variability of motion direction is harder than detecting differences in speed.
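
For illustration, trajectories with a given direction variability could be generated by drawing each frame's heading change from a Von Mises distribution, as in the sketch below; the parameters are assumptions for the example and this is not the authors' stimulus code:

```python
import numpy as np

def trajectory(n_frames, speed, kappa, rng):
    """2D path whose per-frame heading change ~ Von Mises(0, kappa)."""
    heading = rng.uniform(0, 2 * np.pi)
    pos = np.zeros((n_frames, 2))
    for i in range(1, n_frames):
        heading += rng.vonmises(0.0, kappa)      # small kappa -> more erratic path
        pos[i] = pos[i - 1] + speed * np.array([np.cos(heading), np.sin(heading)])
    return pos

rng = np.random.default_rng(2)
path = trajectory(n_frames=800, speed=0.5, kappa=16, rng=rng)   # e.g. 8 s at 100 fps
```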

15:00
Spatial and motion stimulus-response correspondence effects under cognitive load.
SPEAKER: unknown

ABSTRACT. Previous studies show that spatial stimulus-response correspondence (spatial SRC) and motion stimulus-response correspondence (motion SRC) are separate phenomena with regard to the interaction between perception and motor actions. In the first experiment, we tested this hypothesis by designing a visuo-motor task in which we pitted the two SRC effects against each other. Participants moved leftward or rightward with two joysticks, held in the left or right hand, in response to a stimulus with leftward or rightward motion that could be located on the left or right side. The results showed that spatial and motion SRCs are independent. Since it has been claimed that SRC effects are based on automatic processes, we expected that neither SRC would be affected by cognitive load. We tested this hypothesis in the second experiment by examining both SRC effects in a single task under working memory load. Participants had to maintain either additional spatial or alphabetic information in working memory while performing the task with the joysticks. The results showed that working memory load led to an interaction between the spatial and motion SRC effects. Our findings demonstrate the role of cognitive load in SRC phenomena as well as constraints on the idea of automaticity underlying SRCs.

15:00
Representational similarity analysis of contour shape processing in the visual cortex
SPEAKER: unknown

ABSTRACT. Psychophysical studies suggest that the human visual system analyses the shape of a closed contour on the basis of radial frequency (RF) components, consisting of sinusoidal modulations of the circle radius. We studied contour shape representations in the visual cortex using functional magnetic resonance imaging (fMRI). We used an event-related design and measured activity patterns for 65 different shapes, varying the RF (3-6 cycles/perimeter), orientation (polar phase 0-270 deg) and amplitude (0-0.5 in proportion to the radius) of the shapes. We used a searchlight-based representational similarity analysis together with a probabilistic atlas of the visual areas. First we calculated representational dissimilarity matrices (RDMs) for RF, orientation, local curvature, contrast energy and spatial frequency (SF) spectrum. These model RDMs were then compared to the measured RDMs. The resulting correlation maps revealed RF-specific activity patterns in areas V2d, V3d, V3AB, and IPS0, but not in areas hV4 and LOC. Orientation and local curvature did not show such specificity. Positive correlation maps were also found for the SF spectrum and contrast energy, but these showed no selectivity across areas. The results provide further support for RF analysis of contour shapes and suggest that RF is represented in a subset of the mid-level visual areas.
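
The core representational-similarity step, building an RDM from condition-wise response patterns and correlating its upper triangle with a model RDM, can be sketched as follows; the data and the toy model RDM here are placeholders, not the study's stimuli or analysis code:

```python
import numpy as np
from scipy.stats import spearmanr

n_conditions, n_voxels = 65, 200
patterns = np.random.rand(n_conditions, n_voxels)        # placeholder fMRI patterns

measured_rdm = 1 - np.corrcoef(patterns)                 # 1 - Pearson r dissimilarity
model_rdm = np.abs(np.subtract.outer(np.arange(n_conditions),
                                     np.arange(n_conditions)))  # toy model RDM

iu = np.triu_indices(n_conditions, k=1)                  # compare upper triangles only
rho, p = spearmanr(measured_rdm[iu], model_rdm[iu])
print(f"model vs. measured RDM correlation: rho = {rho:.2f}")
```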

15:00
When do we need attention for grouping?
SPEAKER: unknown

ABSTRACT. Grouping processes aid the construction of bits and pieces of visual information into a coherent percept of the environment. Previous studies have yielded contradicting findings regarding the role of attention in grouping, some suggesting that grouping requires attention while others indicating that it does not. The current study aimed to discover in which circumstances grouping requires attention, using an inattention paradigm. Participants engaged in an attentionally demanding change detection task on a small matrix presented on a background of task-irrelevant organized elements. The background organization stayed the same or changed, independently of any change in the target. If the background organization is accomplished without attention, changes in background organization should produce congruency effects on target-change judgments. The results showed that attention was required for grouping elements by shape similarity but not by proximity, and for grouping organizations that involved element segregation and configuring into a shape but not configuring alone. Interestingly, attention was not required when grouping organizations were in competition, demonstrating congruency effects for only one of the competing organizations. These results support the view that perceptual organization is a multiplicity of processes, and provide evidence that attentional requirements vary as a function of the processes involved in grouping.

15:00
Mechanisms of short interval timing: The influence of interval filling on perceived duration and discrimination performance
SPEAKER: unknown

ABSTRACT. The ability to estimate temporal properties like interval duration is crucial for our successful interaction with the environment. Quantifying how factors other than physical duration can distort duration estimates helps in understanding the mechanisms underlying temporal perception. Previous research has shown that intervals defined by two temporal markers (empty intervals) are perceived as shorter, and are discriminated less precisely, than intervals consisting of a continuous stimulus or of a sequence of stimuli (filled intervals, e.g., Rammsayer & Lima, 1991; Thomas & Brown, 1974). Here, we present a systematic investigation of perceived duration and discrimination performance using continuously filled, isochronously filled, anisochronously filled, and empty intervals (Horr & Di Luca, 2015). Participants compared intervals of different duration, indicating which of two was longer. We find duration discrimination to be most precise when two continuous or isochronous intervals are compared and worst for anisochronous intervals. The duration of filled intervals is overestimated compared to empty intervals, and this effect is larger for stimulus sequences (isochronous and anisochronous) than for continuous intervals. Quantitative analysis of the duration distortions with different intervals suggests that an explanation based solely on individual intervals is not sufficient, as the difference in the types of intervals compared also biases duration estimates.

15:00
Does It Really Exist? Creating the collinear masking effect by illusory contours
SPEAKER: unknown

ABSTRACT. Usually, objects on a salient line should be easier to find because of their conspicuous position. However, Jingling and Tseng (2013) discovered that when objects are arranged regularly to form a collinear line, a target on this line is harder to find. This phenomenon is called the collinear masking effect. One possible cause of the collinear masking effect is a filling-in process of the collinear grouping, creating an illusory contour and smearing the visibility of the target. To test this conjecture, we designed several search displays formed by different illusory contours (e.g., Kanizsa-type and abutting-line illusory contours) to examine whether an illusory contour can mask a local target. The results of three experiments showed that targets on an illusory contour were actually easier to find (i.e., faster RTs and higher accuracy) than those not on illusory contours. In other words, the collinear masking effect was not due to perceptual filling-in of illusory contours. This experiment may improve our understanding of the mechanism underlying the collinear masking effect.

15:00
Motor coding of visual objects in peripersonal space is task dependent : an EEG study
SPEAKER: unknown

ABSTRACT. Previous studies have shown that visual perception of manipulable objects spontaneously involves the sensorimotor system, but predominantly in peripersonal space. It has also been suggested that motor coding of manipulable objects in peripersonal space depends on the intention to act on them. The present study aims at unravelling this issue by recording EEG activity over the centro-parietal region while participants judged the reachability or shape of visual objects presented at different distances in a virtual environment. Visual objects were either real objects with a prototypical shape or distorted objects with an altered shape resulting from a Gaussian blur filter. Time-frequency decomposition of the EEG signals was performed and event-related desynchronization of the μ rhythm was computed using the 200 ms pre-stimulus period as baseline. In the reachability judgment task, EEG analysis showed a desynchronization of the μ rhythm starting 315 ms after object presentation when objects with a prototypical shape were presented in peripersonal space. The desynchronization of the μ rhythm decreased progressively from peripersonal to extrapersonal space. By contrast, no such gradation was observed in the shape judgment task. These data indicate that motor coding of visual objects, as expressed in the μ rhythm, depends on both their location in space and the intention to interact with them.

15:00
No correlations between the strength of visual illusions
SPEAKER: unknown

ABSTRACT. In cognition, audition, and somatosensation, performance correlates strongly between different paradigms suggesting the existence of common factors. Surprisingly, this does not hold true for vision. For example, performance in line bisection and visual acuity (FrACT) correlate very weakly (r2 = 0.001). Here, we show similar results for visual illusions. For 143 participants (69 females), aged from 8 to 81, we measured the strength of six illusions using the method of adjustment. Correlations were very low and mostly non-significant. For example, the correlation between the Ebbinghaus and Ponzo illusion was r2 = 0.08, i.e., only 8% of the variability in the Ebbinghaus illusion is explained by variability in the Ponzo illusion. Results for males and females did not differ significantly. Interestingly, illusion magnitude decreased with age for the Ebbinghaus, Ponzo, and Tilt illusions. Our null results are supported by good test-retest reliability and a Bayesian analysis. Factorial analysis revealed no common factor.

15:00
Effects of stimulus ambiguity on task-related ERP components
SPEAKER: unknown

ABSTRACT. During observation of an ambiguous figure (e.g. the Necker cube) our perception is unstable and alternates spontaneously between two interpretations. Tiny low-level changes can disambiguate the ambiguous stimulus and evoke two large ERP positivities (the “ambiguity effect”). These components show larger amplitudes in the go than in the nogo trials of a go-nogo paradigm, indicating an involvement of attentional processes. In the current study we compared the ambiguity effects between a go-nogo and a forced-choice paradigm variant. Methods: Ambiguous and disambiguated lattice variants were presented discontinuously in separate experimental blocks. In Experiment 1 (forced choice) participants reported both perceptual reversals and perceptual stability; in Experiment 2 (go-nogo) they reported only perceptual reversals between successively presented stimuli. EEG data were selectively averaged for stimulus and response type. Results: We found the ERP ambiguity effect in both experiments. In Experiment 1 we found an additional fronto-central positivity around 400 ms after onset (“P400fc”) of disambiguated but not of ambiguous stimuli. Discussion: The novel P400fc is strongly determined by both stimulus ambiguity and task. It may represent a time stamp of task-related decision processes and show their dependence on stimulus ambiguity. Interestingly, reaction times cannot explain the ERP effects.

15:00
Feature integration in plaid revealed by visual search
SPEAKER: unknown

ABSTRACT. A plaid is composed of two orthogonally oriented gratings but may be perceived as a checkerboard with an oblique orientation. It is not clear whether the detection of a plaid is mediated by a plaid-specific mechanism or by a combination of two oriented filters. We used a visual search paradigm, which is sensitive to feature integration, to investigate this issue. The stimuli were either grating or plaid patches. The salience of the plaid structure was manipulated by varying the contrast (low vs. high), spatial frequency (Low-Low SF, High-High SF and Low-High SF) and orientation (same vs. oblique) of the components. The participants (N=8) were asked to search for a grating among plaid distractors or a plaid among grating distractors. The results were consistent with a parallel search when searching for a grating among plaids, with the exception of plaid distractors having mixed SFs and the same component orientation. This indicates the existence of a grating-sensitive mechanism. When searching for a plaid, serial search characteristics were observed at Low-Low SFs and mixed SFs when the component orientation was the same as that of the distractors. This orientation- and spatial frequency-dependent performance indicates that plaid detection is based on the processing of its components.

15:00
Second-order chromatic plaid-motion perception mediated by s-cone channel signal
SPEAKER: unknown

ABSTRACT. Plaid motion perception has been investigated to clarify whether genuine chromatic information can produce motion perception in the same way that luminance information does. We therefore tested whether s-cone second-order motion is integrated with achromatic second-order motion. Our motion stimuli consisted of second-order chromatic and achromatic patterns, in which the spatial frequencies of the envelope and carrier components were 0.2 cpd and 1 cpd, respectively. The contrast of both motion patterns was five times the respective motion discrimination threshold. We measured the probability of coherent motion perception as a function of the temporal frequency (TF) of the envelope component. When the chromatic and achromatic motion patterns had identical TFs, the probability functions reached their maximum in all conditions, and these functions decreased as the TF difference between the chromatic and achromatic motion stimuli increased. This indicates that the s-cone second-order motion signal can be integrated with the achromatic second-order motion signal at a specific neural site and that its temporal tuning is determined by physical parameters rather than by perceived speed. This result corresponds with our previous study (Yoshizawa et al., 2005). We conclude that the second-order chromatic motion signal is mediated by a different process from that for first-order chromatic motion.

15:00
Tilt aftereffect from perception of global form from Glass Patterns
SPEAKER: unknown

ABSTRACT. Glass Patterns (GPs) contain randomly distributed dot pairs (dipoles) whose orientations are determined by certain geometric transforms. In this psychophysical study we measured the tilt aftereffect (TAE) following adaptation to oriented GPs. Adapting stimuli were parallel GPs whose global orientation was varied between 0° (vertical) and 90° (horizontal). The test pattern was a circular grating presented for 33 ms, and observers judged whether it was tilted clockwise or anticlockwise from vertical. The results showed that adaptation to GPs produces an angular function similar to that reported in previous TAE studies, peaking at 15° (TAE: 1.73°). Moreover, we measured the inter-ocular transfer (IOT) of the GP-induced TAE and found an almost complete transfer (88.1%). In additional experiments we assessed the role of attention in the TAE from GPs. The rationale was that if attention plays a role in extracting the global form from local oriented dipoles, then diverting attention away from the adapter should dramatically reduce the TAE. The results show an attention-related reduction of 83%. We conclude that the TAE from GPs depends on a lateral inhibitory mechanism implemented at a level at which neurons are binocular, selective for orientation, and strongly modulated by attention (e.g., V3A and V4).
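
For readers unfamiliar with these stimuli, a parallel (translational) Glass pattern can be generated by pairing each random dot with a partner offset along a common global orientation, as in the sketch below; the dipole count, separation, and orientation are illustrative values, not those of the experiment:

```python
import numpy as np

def parallel_glass_pattern(n_dipoles, separation, theta, rng):
    """Return dot positions for a parallel Glass pattern with global orientation theta (radians)."""
    first = rng.uniform(-1, 1, size=(n_dipoles, 2))                   # random seed dots
    offset = separation * np.array([np.cos(theta), np.sin(theta)])    # common dipole orientation
    second = first + offset
    return np.vstack([first, second])

rng = np.random.default_rng(3)
dots = parallel_glass_pattern(n_dipoles=200, separation=0.03, theta=np.deg2rad(15), rng=rng)
```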

15:00
Reach trajectories curve away from remembered, past, and present distractor locations
SPEAKER: unknown

ABSTRACT. Previous research indicates that reaching movements curve away from to-be-ignored and inhibited distractors, while they curve toward facilitated distractors. Here we investigated how the trajectory of a target-directed reaching movement is affected by different distractor conditions: (a) spatial memory of a distractor location, (b) automatic encoding of a distractor location, and (c) perceptual presence of a distractor. Participants performed vertical reaching movements with their right index finger on a computer monitor to a visual target after an auditory go-signal. They either had to remember the location of a previously presented distractor (remembered), ignore the location of a previously presented distractor (past), or ignore the location of a currently presented distractor (present). Distractors were always presented laterally, either left or right of the target. Our results showed that, irrespective of the distractor condition, reaches curved away from the distractor location. Additionally, latencies of the reaching movement differed between distractor conditions (a>b>c). The present results are in line with previous findings on saccadic eye movements and suggest that distractor-related movement plans are inhibited, so that the competition with target-related movement plans is successfully resolved, causing reaching movements to curve away from both task-relevant and task-irrelevant distractor locations.

15:00
Natural districts of pictorial relief
SPEAKER: unknown

ABSTRACT. Is the structure of pictorial relief globally coherent? From local measurements of surface attitude one constructs global pictorial reliefs that reveal a rich landscape of hills and dales. But do observers indeed have a “bird’s-eye view” of this landscape? We studied this in a two-point depth-comparison task in which the points could be separated by considerable distances. We find that variability depends strongly on the mutual locations of the points, though not necessarily on their mutual distance per se. Depths can be compared very well when the points are located on a single hillside, but much less so if there is a stream separating them. Phenomenologically, it is as if the pictorial relief were partitioned into “natural districts” that are individually well defined but globally only roughly stitched together like a quilt. This might be the reason why a depth-inverted relief appears so different: in that case the hills and dales exchange places, and thus the map of natural districts changes qualitatively.

15:00
The Leuven Embedded Figures Test (L-EFT): Measuring perception or cognition?
SPEAKER: unknown

ABSTRACT. Our visual system prioritizes global structures above local elements (Navon, 1977). A myriad of tasks claim to dissociate global from local perception, but the constructs underlying these tasks remain unclear. One paradigm commonly used in this field is the Embedded Figures Test (EFT; Witkin, Ottman, Raskin & Karp, 1971), but its results have been prone to a wide variety of interpretations. In the current study, testing over 130 participants, we aimed at a better understanding of what is measured by the EFT. First, a new EFT was designed in which local features at the target level (e.g., symmetry or closure) and global features at the pattern level (e.g., number of lines continuing from target into context) were independently manipulated in order to dissociate local from global processing. Second, the association between EFT performance, non-verbal intelligence and several executive functions was assessed to evaluate the impact of both perceptual and cognitive aspects on EFT performance. These data could clarify the construct validity of this paradigmatic task of global/local processing. In addition, our newly designed EFT may offer a more controlled measure that is better able to differentiate between genuinely perceptual and executive contributions to EFT performance.

15:00
The Component Level Feature Model of Motion Computes Direction for Random Dot Patterns
SPEAKER: Linda Bowns

ABSTRACT. Bowns (2011) describes a biologically inspired motion model, the Component Level Feature Model (CLFM). The model uses filters similar to those of motion energy models and computes the Intersection of Constraints (IOC) from the component information; however, it differs from energy models because it is a phase-based model. Output from CLFM reliably computes the motion of two-component (plaid) stimuli and provides new explanations for challenging plaid results (Bowns, 2011; Bowns, 2013). In this presentation, CLFM direction output is reported for random dot patterns moving in 8 different directions. Ten new random dot patterns were produced for each direction. The patterns had 1% dot density, and the dots were displaced by 2 pixels on each frame over 20 frames. To facilitate comparison with human data, percent accuracy was calculated as (90 − error)/90 × 100, where error is in degrees of angle (Pilly & Seitz, 2009). These preliminary results look promising, with CLFM performing at over 85% accuracy for all 8 directions and over 90% on the cardinals, consistent with human data on the motion oblique effect. All parameters of the model were the same as those used for previous simulations.
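
To illustrate the accuracy measure cited above (Pilly & Seitz, 2009), the following minimal Python sketch converts an angular direction error into the percent-accuracy score. It is not the authors' code, and the example direction and estimate are hypothetical.

def angular_error_deg(true_dir, estimated_dir):
    """Smallest absolute difference between two motion directions, in degrees (0-180)."""
    diff = abs(true_dir - estimated_dir) % 360.0
    return min(diff, 360.0 - diff)

def percent_accuracy(true_dir, estimated_dir):
    """Percent accuracy = (90 - error) / 90 * 100, as cited from Pilly & Seitz (2009)."""
    return (90.0 - angular_error_deg(true_dir, estimated_dir)) / 90.0 * 100.0

# Hypothetical example: a 5-degree error on a 45-degree (oblique) direction
print(percent_accuracy(45.0, 50.0))  # ~94.4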

15:00
Weight allocation in summary statistics
SPEAKER: unknown

ABSTRACT. The visual system can rapidly estimate statistics like the average size of circles and the regularity of dot patterns, but its estimates are not always very good. In particular, its estimates of average orientation are notoriously inefficient. Last year we reported that most observers effectively use just two or three randomly selected items in their estimates of average orientation. We now report that observers do not select items completely at random. This conclusion stems from an experiment in which observers were asked to reproduce the average orientation in an array of 8 Gabor patterns, regularly positioned in a circle around fixation. Reproductions were better correlated with the orientations of some Gabor patterns than others, because observers had idiosyncratic preferences for certain positions. Moreover, the reproductions of all our observers were better correlated with the orientations of those Gabor patterns positioned closest to an imaginary line through fixation whose orientation matched each sample's mean. In other words, visual estimates of average orientation are weighted averages, where the assignment of weights is determined in conjunction with the estimate, not prior to it.

15:00
Changes in the apparent speed of human locomotion: Norm-based coding of speed
SPEAKER: unknown

ABSTRACT. We report a new after-effect of visual motion in which the apparent speed of human locomotion is affected by prior exposure to speeded-up or slowed-down motion. In each trial participants were shown short video clips of running human figures (recorded from the London Marathon) and asked to report whether the speed of movement was ‘slower than natural’ or ‘faster than natural’ by pressing one of two response buttons. The clips were displayed at playback speeds ranging from slow motion (0.48x natural speed) to fast-forward (1.44x natural speed). Adaptation to stimuli played at normal speed resulted in the P50 of the psychometric function falling close to normal-speed playback. However, after adaptation to 1.44x playback, normal-speed playback appeared too slow, so the P50 shifted significantly towards a higher playback speed; after adaptation to 0.48x playback, normal-speed playback appeared too fast, so the P50 shifted significantly towards a lower playback speed. The shifts in apparent speed were obtained using both same- and opposite-direction adaptation–test stimulus pairs, indicating that the effect is a speed adaptation effect rather than a directional velocity after-effect. These findings are consistent with norm-based coding of the speed of movement.
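
For readers unfamiliar with the P50 measure, the sketch below (hypothetical data, not the authors' analysis) fits a logistic psychometric function to the proportion of 'faster than natural' responses across playback speeds and reads off the speed yielding 50% 'faster' responses.

import numpy as np
from scipy.optimize import curve_fit

def logistic(x, p50, slope):
    # Probability of responding "faster than natural" at playback speed x
    return 1.0 / (1.0 + np.exp(-slope * (x - p50)))

speeds = np.array([0.48, 0.72, 0.96, 1.20, 1.44])    # playback speed (x natural)
p_faster = np.array([0.05, 0.20, 0.55, 0.85, 0.97])  # hypothetical response proportions

(p50, slope), _ = curve_fit(logistic, speeds, p_faster, p0=[1.0, 5.0])
print(f"P50 = {p50:.2f} x natural speed")            # adaptation would shift this value up or down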

15:00
Intentional action expands time perception: An ERP study
SPEAKER: unknown

ABSTRACT. While studies on intentional binding have mainly focused on the effect of intentional action on the perception of time between an action and its outcome, few studies have focused on the temporal perception of the outcome itself. We conducted an ERP study using a temporal bisection task to understand the neural mechanisms involved in intention-driven changes in time perception. To manipulate intention, before each trial participants were asked to choose which color circle (red/green) they wanted to see. The probability of participants getting the intended color was kept at chance level. Participants were trained on two extreme anchor durations (300 ms / 700 ms). In each trial, a test duration was randomly presented from nine duration levels (300 ms, 350 ms … 650 ms, 700 ms). Participants reported whether they perceived the test duration to be closer to the short or the long anchor duration. Psychophysical results showed that participants perceived the duration of the intended outcome to be longer than that of the unintended outcome. A similar temporal expansion effect was observed in the CNV component, indicating that intention enhances the neural representation of time. Further studies are needed to understand the underlying processes that mediate the effect of intentional action on time perception.

15:00
Imagining Circles: A perceptual model for the Arc-Size Illusion
SPEAKER: unknown

ABSTRACT. An essential part of visual object recognition is the evaluation of the curvature of both an object’s outline and the contours on its surface. We show that a little-known and poorly understood illusion of visual curvature – the arc-size illusion – reveals fundamental properties of the visual coding of curvature. In the arc-size illusion, short arcs of a circle appear less curved than longer arcs, even though the arcs have the same physical curvature. Using new data and a model of the arc-size illusion, we first show that perceived curvature is scale-invariant, that is, a curve appears similarly curved irrespective of viewing distance, even though its curvature in the retinal image changes with viewing distance. Second, we show that curvature is computed only for arcs up to a sixth of a circle in length. These two properties of curvature perception are shown to predict a number of other illusions of curvature.

15:00
Suggested Independence between Perceived Size and Distance in the Optical Tunnel
SPEAKER: unknown

ABSTRACT. Introduction. Regarding the perception of size at a distance, two models explain the relationship between perceived size (S') and perceived distance (D'): the mediational model and the direct perception model. The former assumes that S' is inferred from D' and visual angle (θ), while the latter suggests that S' and D' are tied to different higher-order variables in the optic array. To compare them, an optical tunnel was constructed to manipulate the optical environment, and the independence of S' from D' was investigated. Method. One of three objects differing in size was hung in the middle of the tunnel at one of four locations. The tunnel either terminated at the object or continued behind it. Participants viewed the object monocularly and reproduced its size and distance by adjusting comparison objects. Results. Participants underestimated S' and D' more at farther locations. Termination did not change D', but it changed S'. Different power functions of θ described S'/D' for the termination and continuation conditions. Partial correlation analysis showed that S' and D' were not correlated when other variables were controlled. Discussion/Conclusion. S' differed between optical conditions when both θ and D' remained the same. The non-significant partial correlation implied that D' was not a mediator of S'. S' and D' were therefore independent, supporting the direct perception model.

16:00-17:45 Session 20A: Attention: visual search

.

Location: B
16:00
The preview benefit in single feature and conjunction search: Constraints of visual marking

ABSTRACT. Watson & Humphreys (1997) proposed that the preview benefit (PB) rests on visual marking, a mechanism which actively encodes distracter locations at preview and inhibits them at search. We likewise used a letter-color search task to study the constraints of visual marking in conjunction search and near-efficient single-feature search. Search performance was measured with target and distracter features fixed (block design) and with them changing randomly across trials (random design). In single-feature search there was a full PB in both designs. In conjunction search a full PB was obtained only for the block design. Randomly changing target and distracter features disrupted the PB, but it was restored when the distracters were organized in coherent blocks. Apparently, the temporal segregation of old and new items is sufficient for visual marking in near-efficient single-feature search, whereas in conjunction search it is not. When the new items add a new color, conjunction search is initiated and attentional resources are withdrawn from the marking mechanism. Visual marking can be restored by a second grouping principle that combines with temporal asynchrony. This principle can be either spatial or feature-based. For feature-based grouping, repetition priming is necessary to establish joint grouping with temporal asynchrony.

16:15
Simulated hemianopia: the effect of partial information loss on serial and parallel search

ABSTRACT. Patients with hemianopia tend to start searching a visual display from their intact visual field, causing a larger proportion of the search array to fall within the damaged field. This is generally considered to be a sub-optimal strategy. However, what constitutes an efficient search strategy depends on where and what kind of information is present in both the damaged and intact field. We investigated the degree to which healthy participants adapt their search strategy to conditions of total and partial information loss, target position, and search difficulty. Participants showed a bias towards the sighted field that diminished with increasing information in the sighted field. The sighted-field bias also persisted across search difficulty, which we manipulated by altering the heterogeneity of the distractors. This result was surprising because during search for a pop-out target, participants should execute a large saccade into the blind field on trials where the target is not immediately detected in the sighted field. We conclude that observers are driven largely by bottom-up information and do not switch their search strategy under circumstances when it would be beneficial to examine the area corresponding to the field deficit first.

16:30
Serial vs parallel processes in Visual Search: model comparison to RT-distribution
SPEAKER: Marius Usher

ABSTRACT. Visual search is central to the investigation of selective visual attention. The classical theory postulates two processing stages: i) a parallel, unlimited-capacity stage, during which a salience map is generated, and ii) a serial, capacity-limited identification stage, during which attention is deployed serially between items. While this theory accounts for set-size effects over a continuum of task difficulties, it has been suggested that parallel models can account for such effects equally well. Here we compared the serial Competitive Guided Search model with a parallel model in their ability to account for RT-distribution and error-rate data from a large visual search experiment (Wolfe et al., 2010; Vis. Res., 50, 1304–1311). In the parallel model each item is represented by a diffusion process with two (target/distractor) boundaries. The process is self-terminating with respect to 'target present' responses and exhaustive with respect to 'target absent' responses. Both limited- and unlimited-capacity variants of the parallel model were examined. The serial model turns out to be superior to the parallel model, even prior to penalizing the parallel model for its increased complexity (four extra parameters with strategic dependencies on set size). We discuss the implications of the results and the need for future studies to resolve the debate.

16:45
Attentional Guidance by Simultaneously Active Working Memory Representations: Evidence from Competition in Saccade Target Selection
SPEAKER: Valerie Beck

ABSTRACT. The content of working memory (WM) guides attention, but there is debate over whether this interaction is limited to a single WM representation or functional for multiple WM representations. To evaluate whether multiple WM representations guide attention simultaneously, we used a gaze-contingent search paradigm to directly manipulate selection history and examine the competition between multiple cue-matching saccade target objects. Participants first saw a cue composed of two colors (e.g., red and blue) followed by two pairs of colored objects presented sequentially. For each pair, participants selectively fixated an object that matched one of the cue colors. Critically, for the second pair, the cue color from the first pair was presented either with a new distractor color or with the second cued color. In the latter case, if two colors in memory interact with selection simultaneously, we expected substantial competition from the second cued color, even though the first cued color was used to guide attention in the previous pair. Indeed, saccades for the second pair were more frequently directed to the second cued color object than to a distractor color object. This competition between cue-matching objects provides compelling evidence that both WM representations were interacting with and influencing attentional guidance.

17:00
Choice Invaders: A new iPad task to explore fixed-interval target selection

ABSTRACT. During visual foraging, the ability to switch target categories varies considerably between individuals (Kristjánsson, Jóhannesson & Thornton, 2014). Do such individual differences occur for target selection in the absence of search? In the current task, rows of four objects moved down the screen in waves, reminiscent of classic Space Invaders. Each row contained two targets and two distractors, with their positions shuffled independently row by row. Participants (N=14) moved a player icon via tilt control to physically collide with either target in a row. If a distractor object was selected, or if a row passed untouched, the trial was aborted. A trial finished after 30 successful rows. In the “feature” condition, targets and distractors were identified by unique colours; in the “conjunction” condition, by colour and shape. Our dependent measure was the proportion of rows in which the same target category was repeated. Overall, the tendency to repeat categories increased under conjunction conditions (t = 3.1, p < 0.01). However, approximately 25% of participants showed very similar patterns of target category selection/switching under the two conditions. This replicates our previous finding with visual foraging, and further suggests that limits on top-down control of attention may be more flexible than fixed.

17:15
Very large memory sets in hybrid search: Can the log still save us?
SPEAKER: Todd Horowitz

ABSTRACT. Hybrid search refers to the combination of visual search and memory search: searching through arrays of visually presented items for any of a set of targets held in memory. The reaction time (RT) by memory set size function seemed linear for set sizes up to 4 (Shiffrin & Schneider, 1977), but appears logarithmic when memory set size is increased to 16 (Wolfe, 2012). In many expert search domains, such as medical image interpretation, the memory set is very large, and overlearned compared to typical laboratory protocols. I utilized the Airport Scanner (Kedlin Co., www.airportscannergame.com) dataset (Mitroff and Biggs 2014) to study these issues. Airport Scanner is a commercial x-ray baggage search game. New targets (threats) are added as the game progresses. I analyzed 836,738 single-target trials (bags) from 65,822 experienced players. Memory set size (potential threats) ranged from 7 to 218 items. Expertise decreased RT. RT by memory set size functions were more logarithmic than linear, but more quadratic than logarithmic; these trends were more pronounced for less experienced players. Encoding and retrieval strategies may change with both expertise and memory set size; models developed for small set sizes may not generalize to naturalistically large set sizes.
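
As a rough illustration of the linear-versus-logarithmic-versus-quadratic comparison described above, the following sketch fits the three candidate forms of the RT by memory set size function. The RT values are entirely hypothetical, and ordinary least squares with R² is used purely for illustration; it is not the author's analysis.

import numpy as np

set_size = np.array([7, 14, 28, 56, 112, 218], dtype=float)        # hypothetical memory set sizes
mean_rt = np.array([650, 720, 800, 880, 960, 1040], dtype=float)   # hypothetical mean RTs (ms)

def r_squared(y, yhat):
    return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

fits = {
    "linear":      np.polyval(np.polyfit(set_size, mean_rt, 1), set_size),
    "logarithmic": np.polyval(np.polyfit(np.log2(set_size), mean_rt, 1), np.log2(set_size)),
    "quadratic":   np.polyval(np.polyfit(set_size, mean_rt, 2), set_size),
}
for name, yhat in fits.items():
    print(f"{name}: R^2 = {r_squared(mean_rt, yhat):.3f}")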

17:30
Eye-of-origin guides attention away: Search disadvantage by ocular singletons

ABSTRACT. Collinearity and eye of origin have recently been found to guide attention: target search is impaired if the target overlaps with a collinear structure (Jingling and Tseng, 2013) but enhanced if the target is an ocular singleton (Zhaoping, 2008). Both effects are proposed to arise in V1, and here we studied their interaction. In our 9×9 search display, all columns consisted of horizontal bars (non-collinear columns, NCC), except for one randomly selected column containing orthogonal bars (collinear column, CC). One randomly selected column was projected to one eye (ocular singleton column, OS) while the rest of the columns were presented to the other eye (NOS). We expected the best search performance for targets at NCC+OS and the worst at CC+NOS, with the other combinations depending on the relative strength of collinearity and ocular information in guiding attention. As expected, we observed collinear impairment, but surprisingly we found no search advantage for ocular singletons; instead, we found an impairment. Our subsequent experiments confirmed that the OS search disadvantage also occurred when color-defined or luminance-defined columns were used instead of collinear columns. While our result agrees with earlier findings that eye-of-origin information guides attention, it highlights that our previous understanding of the search advantage for ocular singleton targets might have been over-simplified.

16:00-17:45 Session 20B: Face perception

.

Location: A
16:00
Face shape cues to health
SPEAKER: David Perrett

ABSTRACT. Observers perceive health from faces with some accuracy. Prior work shows that weight perceived from faces predicts illness frequency, suggesting that visual cues to weight contribute to perceptions of health. We investigated whether facial shape cues to body physique and composition account for weight perception and predict illness. 3D facial surfaces were scanned (3dMD) for 118 Caucasians (age 19-31, 68 female). Height, weight, body composition (Tanita SC-330) and self-reported antibiotic use were recorded. The face surfaces were subjected to: Procrustes alignment, delineation of 49 feature landmarks, resampling, cropping to discard hair and neck, and Principal Component Analysis (PCA). Vectors were derived from PCA coefficients to define how face shape relates to BMI (weight scaled for height, Holzleitner et al. 2014) and relative fat mass. Estimations of BMI and relative fat mass from face shape for each participant accounted for weight perception. Face shape estimations predicted antibiotic use and outperformed body measures (actual BMI and % body fat) in accounting for illness frequency. The results show that facial shape provides an index of health that is more accurate than body measures routinely used in medicine. The study also contributes to understanding of the cues used in weight and health perception.
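
As a loose illustration of how a trait-related shape vector can be derived from landmark data via PCA and regression (one common approach in this literature, not necessarily the authors' pipeline), here is a minimal Python sketch. The landmark and BMI arrays are random stand-ins, and the component count is an arbitrary choice.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_faces, n_points = 118, 49                           # matches the sample and landmark counts above
landmarks = rng.normal(size=(n_faces, n_points, 3))   # stand-in for aligned 3D landmarks
bmi = rng.normal(23, 3, size=n_faces)                 # stand-in for measured BMI

X = landmarks.reshape(n_faces, -1)                    # flatten to (n_faces, n_points * 3)
scores = PCA(n_components=20).fit_transform(X)        # shape space spanned by the first PCs

reg = LinearRegression().fit(scores, bmi)             # trait (BMI) direction in PC space
bmi_from_shape = reg.predict(scores)                  # facial estimate of BMI per participant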

16:15
The visual gamma response to faces reflects the presence of sensory evidence and not awareness of the stimulus
SPEAKER: Gavin Perry

ABSTRACT. It has been suggested that gamma (30-100Hz) oscillations mediate awareness of visual stimuli, but experimental tests of this hypothesis have produced conflicting results (Aru et al., 2012; Fahrenfort et al., 2012; Fisch et al., 2009). We used phase scrambling to vary the perceptibility of face stimuli presented to 25 participants. MEG was used to measure the gamma response while individuals viewed three conditions in which faces were presented either above, below or at the perceptual threshold. In each of 400 trials (100 each for the sub- and supra-threshold conditions, and 200 for the threshold condition) participants indicated whether or not they perceived a face in the stimulus. Gamma-band activity during the task was localised to bilateral ventral occipito-temporal cortex. We found that gamma amplitude was significantly increased both for threshold relative to subthreshold stimuli and for suprathreshold relative to threshold stimuli. However, for the threshold condition we did not find a significant difference in gamma amplitude between trials in which the face was perceived and those in which it was not. We conclude that the gamma response to faces is modulated by the amount of sensory evidence present in the stimulus and not by perceptual awareness of the face itself.

16:30
An objective measure of facial identity adaptation with fast periodic visual stimulation
SPEAKER: Talia Retter

ABSTRACT. The human brain is remarkably adept at extracting visual identity information from faces, although understanding this process remains challenging. Here, a novel measure of system-level discrimination between two individual facial identities is presented. This measure utilizes fast periodic visual stimulation (FPVS) and electroencephalogram (EEG) recording combined with an adaptation paradigm (as in Ales & Norcia, 2009). Adaptation to one facial identity is induced through repeated presentation of that identity over a 10s baseline, flickering at a base rate of 6 images per second (6 Hz). Subsequently, this identity is alternated with its anti-face (e.g., Leopold et al., 2001), over 20 s at the same rate. During this alternation, a response exactly at half the base presentation rate (3 Hz), localized over the right occipito-temporal cortex, indicates that adaptation produced an asymmetry in the perception of the two facial identities. Importantly, this 3 Hz response is not observed in a control condition without the single-identity baseline. These results indicate that neural adaptation to one identity can produce a measurable, electrophysiological discrimination response between that identity and another, which could be further investigated with different categories of face pairs in future studies to increase understanding of individual face representation.

16:45
Caloric Vestibular Stimulation Modulates High Level Face Processing

ABSTRACT. The link between the vestibular organs and the visual system is becoming established (e.g. Della-Justina et al., 2015), yet few studies have explored the use of vestibular stimulation to modulate visual processing. Preliminary evidence that this can be achieved comes from one clinical case study which reported improved face perception in an acquired prosopagnosic following vestibular stimulation (Wilkinson, Ko, Kilduff, McGlinchey, & Milberg, 2005), and from one study that demonstrated an enlarged N170 in healthy adults during vestibular stimulation (Wilkinson, Ferguson, & Worley, 2012). The present study tested the behavioural effects of caloric vestibular stimulation on a higher-level aspect of face recognition in sixty adults. Participants were required to identify the nationality of celebrities in four testing sessions following a counterbalanced ABAB design. Relative to no stimulation, caloric vestibular stimulation significantly increased accuracy scores, an improvement that could not be accounted for by practice effects. This study constitutes the first attempt to improve healthy face recognition skills through vestibular stimulation, and the findings have immediate real-world value in settings that require superior face recognition performance, such as passport control and identity parades. The study also provides further evidence for the efficacy of vestibular stimulation in modulating cognitive processes.

17:00
Self-representation of facial appearance
SPEAKER: Robert Ward

ABSTRACT. Here we explore people's understanding of their own facial appearance, and individual differences in these self-representations. There is increasing evidence that facial appearance is correlated with personality, and that observers are sensitive to this correlation: for example, trait neuroticism can be identified merely from controlled "passport" facial photos (Little & Perrett, 2006; Kramer & Ward, 2010; Jones et al, 2012). We used these statistical regularities in personality appearance to investigate self-representation. A controlled photograph of the participant was morphed using sex-appropriate averages of people high and low in neuroticism, to create a looping image sequence of the participant, in which the objective visual signal of neuroticism varied from high to low and back again. Participants were asked to select the image within this sequence which best matched their actual appearance. Participants did not choose accurately, but instead chose images which exaggerated the visual characteristics associated with their personality. These exaggerations were also selected as having more positive social traits. In this case, visual self-representations exaggerated differences from group norms. More generally, these results show how visual self-perception can be influenced by visual trait signals and non-visual social trait differences.

17:15
How does image background colour influence facial identification?

ABSTRACT. In the UK, identification lineups have a standard background: grey for VIPER lineups, or green for PROMAT lineups. However, as lineup fillers and suspects are filmed under a variety of lighting conditions, there can be a large variation in the colours of the background on which lineup members are presented, potentially causing some faces to appear more salient than others. Using the 1-in-10 face recognition paradigm (Bruce et al., 1999), we investigated whether manipulating the background colour of faces influenced identification for target-present (TP) and target-absent (TA) arrays. The first experiment used faces that were the same race (SR) as the participants, and found that the colour manipulation significantly increased accuracy for TP lineups. The second experiment investigated the relationship between this effect and the own-race effect (ORE). The ORE predicts that individuals are more likely to correctly identify SR as compared to OR faces from TP lineups, and to falsely identify OR faces from TA lineups at a higher rate than SR faces (Brigham, Bennett, Meissner & Mitchell, 2007). Results are discussed in terms of the implications for the creation and use of lineups and the relationship between background colour variation and the ORE.

17:30
The crucial role of facelike configuration in the development of visual expertise: objective electrophysiological evidence
SPEAKER: Aliette Lochy

ABSTRACT. Whether learning to individuate novel objects leads to visual expertise (i.e., automatic processing with a change in the level of visual representation), and which factors mediate expertise acquisition, remain unknown. Here we used a well-controlled set of novel objects that could appear facelike or non-facelike depending on the objects’ orientation (Vuong et al., 2014). Two groups of 11 adults were trained for 14 sessions (~20 hrs) at individuating 26 objects. The groups differed only in whether participants were trained and tested with the facelike or the non-facelike orientation. Before and after training, we used fast oddball periodic visual stimulation to measure robust and objective electrophysiological discrimination responses at predefined frequencies (Liu-Shuang et al., 2014). Sequences of identical objects (unseen during training) were presented at 5.88 Hz for 60 sec. At regular intervals (1.18 Hz), a different “oddball” object was inserted into the sequence. After training, only the facelike group showed a significant increase in the discrimination response at 1.18 Hz and its harmonics (2.36 Hz, etc.) over lateral occipital sites. These results indicate that a facelike configuration is essential to observe the effect of extensive training on the visual representations of novel objects in adulthood.

16:00-17:45 Session 20C: Colour vision

.

Location: C
16:00
Dichoptic color gratings reveal a perceptual bias for binocular summation over binocular difference, which is stronger in central than peripheral vision
SPEAKER: Li Zhaoping

ABSTRACT. When left and right eyes are presented with composite patterns A+B and A-B, respectively, ambiguity can ensue between percepts reflecting ocular summation (A) and opponency (B) channels in primary visual cortex (Li and Atick 1994; May, Zhaoping, and Hibbard, 2012). When A and B are foveal gratings having different drift directions (Shadlen and Carney 1985) or different orientations (Zhaoping 2013), subjects more frequently perceive the ocular sum, A. This perceptual bias is weaker or absent in the periphery (Zhaoping 2013, 2014). Here, I generalize these findings to color. A and B are static, colored, horizontal gratings, with random spatial phases. Each grating exhibits spatial alternations between its own pair of colors: e.g., one grating is red-green and the other is blue-yellow. Each monocular image, A+B or A-B, typically displays a collection of hues. Observers briefly saw the dichoptic stimulus (e.g., 0.2 second) and reported whether it appeared more like reference A or B in color. The bias for ocular summation may be associated with a perceptual prior acquired through visual experience; its enhanced strength in the fovea is likely general across different visual feature dimensions, with top-down feedback (to implement visual analysis by synthesis) favored in central vision (Zhaoping 2013).

16:15
Flicker antagonism and synergism caused by multiple cone responses

ABSTRACT. Psychophysical measurements reveal clear evidence for antagonistic and synergistic interactions between visual responses generated by uniform fields of flickering light. Such light generates fast and slow responses, with the slower responses delayed by tens of milliseconds and of either the same or the opposite sign as the faster response.

The interactions of fast and slow responses can be clearly seen in the delays between pairs of S-, M- or L-cone flicker stimuli measured using a flicker-photometric cancellation technique, which exposes ubiquitous and often sizeable delays between the various responses. The interactions can alter the shape of temporal contrast sensitivity functions depending on the frequencies at which the responses constructively or destructively interfere.

Overall, the results are consistent with interactions between fast “centre” responses and delayed, antagonistic “surround” responses through a network of recursive, inhibitory lateral interconnections in which one step through the network of discrete elements produces a delayed inhibitory signal, two steps, a more delayed but excitatory signal, and so on. The delays for a single step are typically greater than 25 milliseconds. We suppose that the interactions reflect the properties of a recursive network of lateral connections each acting across several cells, perhaps horizontal cells.

16:30
Putting the S (cones) into Symmetry

ABSTRACT. Previous studies have argued that symmetry perception makes use of neural mechanisms that are temporally sustained and pool information from relatively large receptive fields. The S-(L+M) cone-opponent mechanism fits this description. S-cones could thus contribute to symmetry by providing a large-scale integration window for co-localised luminance signals. We ran a series of psychophysical and event-related potential (ERP) experiments in order to assess the contribution of different cone-opponent mechanisms to symmetry perception, in isolation or in combination with luminance. Psychophysical findings indicate that at low, threshold contrasts, S-(L+M) only stimuli produced the largest bias towards perceiving images as symmetrical, whilst luminance stimuli introduced a bias towards perceiving them as asymmetrical, with no bias for images that combined the two signal types. The ERP experiment was run at high, multiple-of-threshold contrasts. Sustained Posterior Negativity (SPN), a symmetry-selective component of the ERP, was observed in all conditions and showed the expected enhancement for symmetry. The SPN symmetry effect was significantly larger when a relatively large S-(L+M) signal was combined with a luminance signal. This was not observed for other tested colour and/or luminance stimuli. In conclusion, S-(L+M) signals can facilitate symmetry processing, probably through providing a low-resolution window for large-scale spatial integration.

16:45
Testing measures of saturation

ABSTRACT. Several different measures of saturation have been suggested in the literature. Most of these measures are not ordinally equivalent. Nevertheless, it is not known which of the measures fits human perception of saturation best. We selected three standard colors and ten comparison color directions from the 30 cd/m2 equiluminant plane in CIE 1931 xyY color space. In each trial, we presented two color patches for 750 ms against a gray background whose luminance was 10 cd/m2 in one experimental session and 45 cd/m2 in another. One patch always had the color of one of the standards, while the other patch's color was sampled with an adaptive algorithm from one of the comparison directions. Observers had to decide which of the patches was more saturated. For each of the ten comparison directions and each of the standards we computed the point of subjective equality (PSE). These PSEs were compared to the predictions of different saturation measures defined in the CIECAM02, HSV, DKL, LAB, LUV, and CIE 1931 xyY color spaces. On average, the predictions of the measures defined in LAB and LUV space fit human perception of saturation best, while the measures defined in CIE xyY and HSV space performed worst.
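
As one concrete example of such a measure, the sketch below computes the standard CIE 1976 (u', v') saturation, s_uv = 13·sqrt((u' − u'_n)² + (v' − v'_n)²), directly from xyY chromaticity coordinates. The stimulus and white-point chromaticities used here are hypothetical, and this is not the authors' analysis code.

import math

def xy_to_uv_prime(x, y):
    """CIE 1931 (x, y) chromaticity to CIE 1976 (u', v')."""
    denom = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / denom, 9.0 * y / denom

def saturation_uv(x, y, xn, yn):
    """CIE 1976 u'v' saturation: 13 * distance of the stimulus from the white point (xn, yn)."""
    u, v = xy_to_uv_prime(x, y)
    un, vn = xy_to_uv_prime(xn, yn)
    return 13.0 * math.hypot(u - un, v - vn)

# Hypothetical stimulus chromaticity against a D65-like white point
print(saturation_uv(0.40, 0.35, 0.3127, 0.3290))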

17:00
Pedestal masking of S cone tests: Effects of gain control and cone combination
SPEAKER: Rhea Eskew

ABSTRACT. Masking and habituation experiments have demonstrated psychophysical asymmetries for detection of S cone increment (+S) and decrement (-S) stimuli (reviewed by Smithson, 2014). Wang, Richters, & Eskew (2014) found significantly more masking of +S tests than -S tests by the identical noise masks. In the present study, masking of +S and -S tests was measured using a 2afc pedestal procedure. The chromaticity of the pedestal mask was varied in a plane in cone contrast space in which the L and M cone contrasts were equal, and the ratio of S to L=M varied, keeping constant the resultant vector length of the pedestal. ‘Purplish’ pedestals (combinations of +S and –L=M) produced significantly more masking than ‘yellowish’ (combinations of -S and +L=M) ones. This was true for both +S (purplish) and -S (yellowish) tests. Consistent with the noise masking results of Wang et al. (2014), and with some single-cell physiological findings, this masking pattern suggests there is more contrast gain control in S-On than S-Off pathways; the difference in masking depends on the (high contrast) pedestal polarity rather than the (relatively weak) test polarity. Models of cone combination in the two pathways, based upon the masking pattern, will be discussed.

17:15
Classification Images of chromatic edge detectors in human vision.

ABSTRACT. Edge detection is an important early stage of visual processing. Spatial changes in luminance are associated with object boundaries, but they may also indicate shadows. Changes in colour, however, are associated with object boundaries but not shadows, and so may be more reliable indicators of object boundaries. For this reason, one might expect to find colour edge detectors in the human visual system. We mapped the shapes of luminance and colour edge detectors using classification image methods (Beard & Ahumada, 1998). The observer’s task was to detect a luminance edge embedded in luminance noise, or an isoluminant (L-M or S-cone) edge embedded in isoluminant chromatic noise. In both cases, brown noise (with a 1/f² power spectrum) was used. Brown noise constrains the width of optimal edge detectors (McIlhagga, 2011). The chromatic edge and noise were smoothed to lessen chromatic aberration artifacts; the luminance condition was also smoothed, for comparison purposes. We found that the classification images for the luminance and chromatic conditions were very similar to one another. The chromatic edge detectors were analogous to those found in primate V1 (Johnson et al., 2008). These results suggest that chromatic channels contain edge detectors like those found in luminance channels.
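
To illustrate the 1/f² ("brown") masking noise described above, here is a minimal sketch (not the authors' stimulus code) that generates a two-dimensional noise field with a 1/f² power spectrum by filtering Gaussian white noise in the Fourier domain; the field size is an arbitrary example.

import numpy as np

def brown_noise(size, rng=None):
    """2-D noise with 1/f amplitude (hence 1/f^2 power) spectrum, zero mean, unit variance."""
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal((size, size))
    fx = np.fft.fftfreq(size)
    fy = np.fft.fftfreq(size)
    f = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    f[0, 0] = np.inf                      # avoid division by zero at DC
    amplitude = 1.0 / f                   # 1/f amplitude filter -> 1/f^2 power spectrum
    noise = np.fft.ifft2(np.fft.fft2(white) * amplitude).real
    return (noise - noise.mean()) / noise.std()

field = brown_noise(256)                  # hypothetical 256 x 256 noise field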

17:30
fMRI adaptation in the human LGN

ABSTRACT. Adaptation effects in fMRI, in which prior exposure to contrast causes a reduction in the BOLD contrast response, are known to occur in human visual cortex. Here we investigate whether the human LGN shows fMRI adaptation and whether it is selective for red-green (RG) chromatic or achromatic (Ach) contrast. We localized the LGN in 12 subjects (Mullen et al, 2008). Test and adapting stimuli were RG or Ach high contrast sinewave counter-phasing rings (0.5cpd, 2Hz). Adaptation and no-adaptation conditions were compared within a block design, with adaptation or no-adapt stimuli presented for 12s, test stimuli for 18s, and fixation-only for 9s. Ach and RG adaptors were tested in separate runs. The LGN showed significant fMRI adaptation. The signal for the RG test stimulus was significantly reduced following both RG and Ach adaptation, whereas the signal for the Ach test showed little change following either adaptor. Assuming the RG test response is mediated by LGN P-cells, our results suggest that: 1. this pathway can show significant adaptation and 2. it is sensitive to both RG and Ach contrast. Results differ profoundly from the lack of adaptation reported neurophysiologically for primate P-cells, indicating the two types of adaptation likely have different origins.

18:30-19:30 Session 21: Rank Lecture (Sponsored by Rank Prize Funds)

Rank Prize Lecture Marisa Carrasco

Location: St George's Hall
18:30
How attention affects visual perception

ABSTRACT. How attention affects visual perception

19:30-22:00 Session : Social Dinner

Dinner in St George's Hall

Location: St George's Hall