ECVP2015: EUROPEAN CONFERENCE ON VISUAL PERCEPTION
PROGRAM FOR THURSDAY, AUGUST 27TH

09:00-11:00 Session 22A: Machine vision


Location: A
09:00
General-Purpose Models in Biological and Computer Vision
SPEAKER: James Elder

ABSTRACT. The early days of computer vision were fired with an ambition to build impressive general-purpose vision systems. At that time there was a keen interest in understanding how biological systems operate so gracefully over a wide range of tasks and diverse conditions. As the full complexity of visual inference became apparent, the field of computer vision matured and became channelled into relatively narrow sub-problems. While leading to algorithms that actually work for some important applications, this has been at the expense of broader thinking about general-purpose vision systems. In this talk, I will argue that due to the combinatorial complexity of visual scenes, images and tasks, successful general-purpose vision systems, whether biological or machine, will have in common a generative core in which internal representations are determined not only by the task but also by the physical invariants and statistical structure of the visual scene. To illustrate, I will briefly discuss problems in perceptual organization, shape perception, linear perspective and spatial attention where generative models can play a central role and where research in biological and computer vision has been synergistic. I will also suggest a number of areas where opportunities for such synergy appear to be on the horizon.

09:30
Modelling Scene Structure: Vision as Inverse Graphics

ABSTRACT. The basic task of the sensory system (including the visual system) is to learn about the structure of the observed world. This problem can be phrased as one of building statistical models to represent an image in terms of primitives and their associated latent variables. A long-standing view of computer vision is that it is the inverse of a computer graphics problem, i.e., to infer the objects present in a scene, their positions and poses, the illuminant etc. In the language of machine learning, these quantities are latent variables which must be inferred in order to understand the scene. Recent work (see e.g. Kulkarni et al., 2015) is reviving this old idea, which can be formalized in terms of the Helmholtz machine architecture (Dayan et al., 1995). This incorporates a recognition model transforming the image into latent variables, and a generative model going the other way. Training of the recognition model can be greatly facilitated by using computer graphics models of scenes, for which the latent variables are known. Here one can use recent developments in image analysis like deep convolutional neural networks (Krizhevsky et al., 2012) to obtain the best recognition network performance.

10:00
Human in the loop computer vision

ABSTRACT. Despite ubiquitous computing, most normal people are not benefiting from advancements in computer vision research. Equally, most vision systems do not improve with time or learn from their users' experience. This is a terrible waste, but is understandable: there are plenty of specific vision problems where progress a) can be made "offline" in labs trying to beat a recognized benchmark score, and b) the specific problem affects a big industry, like scene-flow for cars, or image-retrieval for search engines.

In this talk, I advocate that we should be aiming for responsive algorithms, and that these should be measured in terms of accuracy-improvement, and the user's ability to perform their specific tasks. This means we will need new benchmarks, and that we need to engage with real users for our models and experiments to be meaningful. While my group has started making software that adapts to specialist users, e.g. biologists/zoologists, the ageing population is just one mass-scale cohort that will require new computer vision models and interfaces.

10:30
The evolution of Computer Vision

ABSTRACT. The area of computer vision has evolved considerably over the years. From its early roots in AI, it has moved from engineering through to machine learning and big data. While datasets are abundant and performance on specific tasks increases each year, there remains one fundamental truth: that machines cannot understand and reason about what they see in the way that humans do. This talk will look at the state-of-the-art in computer vision from both my own research and the wider academic community, covering topics such as tracking, object and activity recognition and interpreting the actions of humans. The state-of-the-art in computer vision relies heavily upon supervised learning and while deep learning has demonstrated huge performance gains over the last few years, the semantic gap remains. It is evident that learning is the way forward, but the questions of how we achieve this in terms of lifelong learning and high level reasoning at a linguistic level remain open research questions. This talk will not propose solutions to unifying human/machine vision but attempt to stimulate discussion about the common research questions such as higher level reasoning, context and language which we take for granted and machines have yet to master.

09:00-11:00 Session 22B: Colour Constancy


Location: B
09:00
What we mean by colour constancy and how to study it

ABSTRACT. The colour of an object apparently changes with its illumination, implying that, strictly speaking, colour constancy is a myth. Specifically, illumination changes the whole colour palette. Q1: Why do we call it constancy? A1: Because the inner (hue component) structure remains the same. Q2: What underlies an asymmetric object-colour match? A2: As an across-illuminant match is impossible, it is not a match. It is a correspondence between the colour palettes corresponding to the different illuminants. It is based on the equality of component hue weights. Q3: Does colour constancy exist? Does the colour of an object retain the same hue component weights when the illumination changes? A3: No. Metamer mismatching causes an unavoidable (colour) shift in these. Q4: What to measure? A4: The colour shift caused by illuminant alteration; i.e. the transformation of the colour of an object induced by illumination. Q5: Can such measurement be done using pictorial displays? A5: No. Pictures are objects of dual nature. They are intrinsically ambiguous. A pictorial image of an object loses some important features, which makes it different from its real prototype. Thus, being special, pictorial perception deserves study on its own, but will tell us little about the perception of real objects.

09:15
Confessions of a Constancy Index Junkie

ABSTRACT. Each speaker was asked to identify key questions related to color constancy and provide opinions, with the hope of encouraging lively discussion. Thus: Q1) Why do we call it constancy, when the perception of object color is not in fact perfectly constant? A1) It is useful to situate our understanding of color within the framework of its functional value. The rubric of constancy helps with this. Q2) Should we be interested in color appearance or color performance? A2) Both, and how they are related. Q3) What should the field be trying to achieve? A3) Ultimately, we want a computable model that predicts appearance and performance for any scene, without handwaving. Q4) What do instructional effects tell us? A4) Instructional effects tell us either that there are multiple modes of color appearance or that subjects can reason about what they see, but not which. We need to find better methods and move beyond instructional effects. Q5) Do graphics simulations produce the same effects as natural scenes? A5) Not yet. Q6) Should we therefore abandon graphics simulations as stimuli? A6) No. Results obtained for simplified stimuli allow identification of principles that may help us tackle the richness of natural scenes.

09:30
Colour constancy and the challenge of environmental change for perceived surface colour
SPEAKER: David Foster

ABSTRACT. In a 2001 SPIE article Larry Arend posed a basic question about colour constancy in natural environments: given their inhomogeneity, what does perceived surface colour mean? An indirect answer is offered by operational constancy, which requires judgements of objective properties rather than subjective appearance. For example, can surfaces of constant reflectance undergoing spectral changes in illumination be discriminated from surfaces undergoing spectral changes in reflectance? In practice they can, with constant reflectance signalled by approximately invariant spatial ratios of cone excitations. In natural environments, however, both spectral and geometric illumination changes occur through the day, including changes in mutual illumination and shadows. Can surfaces undergoing these illumination changes be discriminated from those undergoing spectral reflectance changes? In principle they can, since time-lapse hyperspectral radiance data show that spatial cone-excitation ratios over short distances are still approximately invariant. But how should operational constancy be reconciled with varying surface colour perception? One possibility is that surface colours in natural environments are perceived not in isolation but in combination with perceived local illumination, as advocated by Tokunaga and Logvinenko in a 2010 Vision Research article. These combined perceptions might correspond to objective judgements and simultaneously afford a more direct answer to Arend's question.
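The approximate invariance of spatial cone-excitation ratios under an illuminant change can be illustrated with a toy numerical sketch. All spectra below are hypothetical Gaussians chosen for illustration only; they are not measured cone fundamentals, daylights, or surface reflectances:

```python
import numpy as np

# Wavelength samples, 400-700 nm in 10 nm steps (illustrative).
wl = np.arange(400, 701, 10)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Crude L, M, S "cone fundamentals" as Gaussians -- an assumption,
# not the real Stockman-Sharpe functions.
cones = np.stack([gauss(565, 50), gauss(535, 50), gauss(445, 40)])

# Two smooth surface reflectances and two broadband illuminants.
refl_a = 0.2 + 0.6 * gauss(600, 80)    # reddish surface
refl_b = 0.3 + 0.5 * gauss(480, 80)    # bluish surface
illum_1 = 1.0 + 0.5 * gauss(450, 120)  # "blue-ish" daylight-like
illum_2 = 1.0 + 0.5 * gauss(600, 120)  # warmer illuminant

def excitations(refl, illum):
    """Cone excitations: integrate illuminant x reflectance x sensitivity."""
    return cones @ (illum * refl)

# Spatial cone-excitation ratios between the two surfaces, per illuminant.
ratio_1 = excitations(refl_a, illum_1) / excitations(refl_b, illum_1)
ratio_2 = excitations(refl_a, illum_2) / excitations(refl_b, illum_2)
```

The raw excitations shift substantially between the two illuminants, but the between-surface ratios change far less, which is the kind of approximate invariance the abstract appeals to as a signal of constant reflectance.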

09:45
Real color constancy

ABSTRACT. We suggest that the main purpose of color constancy is to assign particular color sensations to objects and thus aid their identification. Most previous attempts to quantify constancy have deviated from the way we typically use color constancy in real life. Here we present a new and intuitive approach that allows us to measure constancy for almost arbitrary colors, using real objects in fully natural environments with a single full-field illuminant. Participants were asked to bring a personal object that had for them a well-defined color that they were confident they could identify. Without the object being present, participants selected the Munsell chip that best represented the color of “their” object. They performed the task first in a room under neutral daylight illumination and then in four other rooms that had non-daylight illuminations provided by windows covered with colored filters. In this task, our participants were highly color constant and frequently selected the same chip under all illuminants. We conclude that color constancy does exist, if task and conditions are representative of the uses of color constancy in everyday life.

10:00
All Illuminations are not Created Equal: The Limits of Colour Constancy
SPEAKER: Anya Hurlbert

ABSTRACT. Colour constancy is a textbook example of perceptual constancy, yet colour scientists frequently report that it is imperfect, approximate, or simply does not exist. Here we address that key dichotomy by adopting the premise that colour constancy is a multi-level phenomenon and suggest that its completeness depends on the level; this premise calls for a distinction between “knowing” and “seeing”. Its completeness also depends on the measurement technique and the specific surfaces and illuminations involved. As an example, we take the appearance level, where constancy means there is no change in the appearance of a surface under an illumination change. We measure colour constancy at this level by determining discrimination thresholds for illumination change (parametrised by unit distances in the perceptually uniform CIE L*u*v* space). In experiments using real surfaces and real illuminations, produced by tuneable multi-channel LED light sources, we find that thresholds for discriminating illumination changes are highest, i.e. colour constancy is best, for “blue-ish” daylight illuminations. Conversely, colour constancy is worst for atypical illuminations, and fails spectacularly for certain engineered illuminations, even when these are metameric. These findings argue against the notion of “equivalent illuminations”, and suggest that colour constancy is optimised for naturally encountered illuminations.

10:15
Functional color constancy
SPEAKER: Adam Reeves

ABSTRACT. Color constancy surely concerns not just appearance, but also function. Considering recognition as just one function, examples include the following. Re objects (the classic question): do natural variations in viewing point, lighting, and distance prevent us from recognizing objects (say, fruit) important for survival? Re surfaces: can one recognize, from color, whether land is soft or muddy, wet or frozen, well enough to guide walking? Re light: can we identify an illumination, independent of the surfaces which reflect it to the eye? Do chromatic variations that co-vary with luminance (shadows are bluer and darker) aid constancy? Re range: do natural variations in lighting move surfaces and objects across or merely within color categories? (In the latter case, color constancy is moot.) These issues, hard as some of them are, all need research. I will suggest that one way of understanding color constancy in functional terms (like other constancies) is to analyze sensitivity, criteria, and utility together, within a signal detection framework: what is the cost, for example, of a false alarm?

10:30
Questions in Surface Color Perception

ABSTRACT. The visual system allows us to make more-or-less accurate judgments about the locations, orientations, and material properties (including color and lightness) of surfaces under a range of lighting conditions. We have accurate models of the retinal image formation process based in physics but, in practice, we approximate these models (which describe photon-molecule interactions) by an intermediate mathematical language that resembles the input to a computer graphics rendering package. An initial set of questions concerns the accuracy of these representations and of our simulations of image formation. I will review work concerning the accuracy of models of light and surface. A second set of questions concerns the representations implicitly used in judging surface properties. I will review the considerable evidence concerning what is represented and used by the nervous system, highlighting what we know and do not know. In the last part of the talk I will discuss the impediments to research in this area introduced by our terminology and by our reliance on introspection to “explain” phenomena. Visual perception is an odd example of a field where the researchers are themselves the object of study. This apparent advantage, I will argue, has proven to be a liability.

10:45
Identifying surface colors across illumination conditions: Neural Adaptation, Similarity Judgments and Prior Beliefs
SPEAKER: Qasim Zaidi

ABSTRACT. Why is it useful to identify surface colors? For discerning illumination differences across scenes, separating shading cues from surface color variations, judging properties of materials, and identifying objects when shape and texture are not informative. What are the environmental cues that help this process? Cone absorptions from sets of natural surfaces are highly correlated across illumination spectra. Consequently, neuronal adaptation can reduce the impact of illumination differences on surface appearance, and algorithms can simultaneously estimate illumination and surface colors by template matching of chromaticity distributions. Correlations are generally not as strong for rough surfaces, so there is less similarity of appearance, unless the viewer-surface geometry stays constant. How do neural processes, cognitive judgments, and prior beliefs contribute? In 3-D scenes, neuronal adaptation to brightness and color reduces the impact of illumination differences, but generally not enough to achieve appearance invariance of 3-D objects. Hence many observers use similarity judgments about surface brightness and color to identify reflectance across illumination conditions. This strategy implies a naïve prior belief in appearance constancy, not just in constant surface reflectance. Other observers use illumination cues provided by appearance differences to identify surface reflectance across illuminations. The latter strategy is statistically optimal, the former opportunistic.

09:00-11:00 Session 22C: Pupillometry


Location: C
09:00
Pupillary responses to emotionally arousing words in bilinguals’ first versus second language

ABSTRACT. Bilinguals often report stronger emotionality in their first language (e.g., Pavlenko, 2006), which seems supported by findings based on skin-conductance (e.g., Harris et al., 2006). The present study measured pupil dilation in response to high- versus low-arousing words (e.g., riot vs. swamp) in German(L1)-English(L2) and Finnish(L1)-English(L2) bilinguals, both in their first and in their second language. A third sample of English monolingual speakers (tested only in English) served as a control group. The stimuli were selected on the basis of available emotionality norms, and were closely matched (both within and across languages) for length, frequency, emotional valence, and concreteness. During each trial, a mask was presented for 500ms, followed by a brief presentation of the word (for 50ms + 20ms × length-of-word) and then the mask again for 1700ms. Participants had to keep looking at the centre of the screen and to indicate (via a prompted button-response at the end of each trial) whether they recognised the word or not. Results indicated no appreciable differences in word-recognition performance, but larger and longer-lasting pupil responses to high-arousing words when stimuli were presented in participants’ first rather than second language, confirming less emotional involvement in the language acquired later in life.

09:17
The eyes' many stories about concurrent (cross-modal) action demands: Effects on saccadic latency, pupil response, and blink rate

ABSTRACT. Processing load is reflected in several eye-related parameters including saccadic latency, pupil response, and blink rate, indicating that “stressed” eyes hesitate, widen, and eventually shut down. Here, these facets are analyzed under single- vs. dual-action demands. Participants in Experiment 1 switched between single manual, single vocal, and dual (manual-vocal) response demands while fixating a central fixation cross. Results suggest dual-response costs for manual and vocal latencies. However, while blink rate and pupil dilation were also increased in the dual vs. single-manual condition, the data from the single vocal condition resembled those from the dual condition. Thus, vocal demands per se might increase blink rate and pupil dilation, potentially overriding any load-related effects. Experiment 2 compared saccade latency, pupil dilation and blink rate in blocks of trials involving only basic saccade demands vs. blocks with additional manual key press demands. Results suggest increased saccadic latencies and changes in pupil dynamics under dual- (vs. single-) response demands, but no effect on blink rates. Taken together, the results suggest that while all parameters may individually be associated with variants of processing load and appear to interact, the underlying mechanisms appear to be highly distinct.

09:34
Different measurements of pupil size as response to auditory affective stimuli and their application in a visual perception task
SPEAKER: Sarah Lukas

ABSTRACT. It has been shown in a variety of previous studies that pupil size increases in response to emotionally loaded stimuli (images, words, and sounds). It has been assumed that pupil dilation occurs as a response to highly arousing stimuli, independent of the valence of the stimuli. However, different studies use different measures of pupil size. Moreover, low-arousing affective stimuli have rarely been used so far. The goal of the present study was two-fold. First, we compare and discuss different measures of pupil size, such as overall pupil size change, change at a given point in time, maximum, and maximum latency, with respect to affective auditory stimuli and to valence and arousal. In a second step, we use the knowledge gained from these measurements to apply them in a visual perception task, to investigate effects of emotional impact on the visual useful field of view.

09:51
Evaluation of features derived from pupil dilation in a stress induction experiment

ABSTRACT. Using features derived from pupil dilation, can the affective state of a computer user be predicted? In this study, we evaluated the success of classification of neutral versus stressful states. Two experiments were designed with pictures chosen from IAPS: Experiment 1 contained neutral pictures, Experiment 2 negative, highly arousing ones. Both experiments had 20 trials with 6s picture viewing time and 12s rest. The participants were asked to respond to a cognitive criterion while viewing an array of these pictures simultaneously. In the second experiment, the cognitive criterion was harder. All participants reported higher stress during the second experiment. Pupil dilation data were collected with a Tobii TX300 eye tracker at 60Hz. Preprocessing steps were as follows: eye-blink extrapolation, moving median filtering, outlier removal. Extracted features were based on the absolute value and entropy of the signal. The WEKA 3.7 library was used for classification of neutral versus stressful trials. Out of 18 features, 5 were identified as informative by feature selection methods. Stress prediction achieved 72.8% sensitivity and 68.2% specificity using bagging with random forests for classification. In sum, pupil dilation is a subtle but promising signal for predicting the stress of a computer user.
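The preprocessing chain the abstract describes (blink extrapolation, moving median filtering, outlier removal) might be sketched as below. The window size, the linear interpolation over blinks, and the robust z-score cut-off are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def preprocess_pupil(trace, win=5, z_cut=3.0):
    """Toy pupil-trace cleaner: (1) interpolate over blinks (NaNs),
    (2) moving median filter, (3) replace robust-z outliers.
    Parameters are illustrative, not taken from the study."""
    x = np.asarray(trace, dtype=float)

    # 1) Blinks are recorded as NaN; bridge them by linear interpolation.
    nans = np.isnan(x)
    if nans.any():
        x[nans] = np.interp(np.flatnonzero(nans),
                            np.flatnonzero(~nans), x[~nans])

    # 2) Moving median filter to suppress impulse noise.
    pad = win // 2
    padded = np.pad(x, pad, mode="edge")
    x = np.array([np.median(padded[i:i + win]) for i in range(len(x))])

    # 3) Replace samples more than z_cut robust SDs from the median.
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1.0
    z = 0.6745 * (x - med) / mad
    x[np.abs(z) > z_cut] = med
    return x

# Example: one second of a 60 Hz trace with a blink and a tracker spike.
trace = np.full(60, 4.0)
trace[20:25] = np.nan   # blink
trace[40] = 9.0         # tracker artifact
clean = preprocess_pupil(trace)
```

Features such as absolute value and entropy would then be computed on the cleaned trace before classification.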

10:08
Voluntary Pupil Control
SPEAKER: Jan Ehlers

ABSTRACT. During the past years, increasing attention has been paid to operationalizing pupil dynamics for affective classification in human-computer interaction (HCI) (e.g. Jacobs, 1996). Thereby, pupil dynamics are regarded as a passive information channel that provides direct and genuine impressions of the user’s affective state but defies any voluntary control. However, considering the large number of achievements in the history of biofeedback based on vegetative parameters (e.g. skin conductance or heart rate variability), one may also assume that pupil dynamics can be brought under control by techniques of operant conditioning. We applied visual real-time feedback on pupil diameter changes to enable intentional influence on the related dynamics. Nine volunteers underwent a one-week training program and utilized affective autobiographical associations (imaginations of fear) to gradually exert control. Results indicate that every participant was capable of voluntarily expanding pupil diameter beyond baseline values, albeit with varying degrees of success and over differing durations. In a follow-up to the training procedure, subjects demonstrated voluntary pupil control even without the assistance of real-time feedback. As a consequence, we conclude that pupil size information exceeds affective monitoring in HCI and may constitute an active input channel, operated by means of simple cognitive strategies.

10:25
Effects of emotion and cognitive load on pupillometric and saccadic responses in anxiety

ABSTRACT. Emotion and cognitive load affect pupillometric and saccadic responses, increasing pupil diameters and the error rates and latencies of saccades as a result of increased emotional arousal and effort. These constructs map onto theoretical frameworks in anxiety, where cognitive indices of arousal and effort in high anxious (HA) individuals can be measured using pupillometric and saccadic responses. The purpose of the study was to examine the effects of emotion and cognitive load on the pupillary and oculomotor systems in anxiety, manipulating emotion and load with an emotional oculomotor delayed-response task with different delay durations. The results showed that threat-related stimuli and long delays elicited increased peak pupillary diameters (PPD). Moreover, HA participants showed increased PPDs compared to low anxious (LA) participants, and increased PPDs were observed in HA participants regardless of delay duration. Error rates and latencies were affected by delay duration in opposite directions, and latencies were further affected by high anxiety. The findings confirm that highly arousing emotions and tasks that require more effort increase PPD. Additionally, increased PPDs in HA participants at both long and short delays indicate increased compensatory effort for task demands regardless of task difficulty, and lower processing efficiency in this group of individuals. Saccadic results support speed-accuracy trade-off and attentional control theories.

11:00-12:00 Session 23

Individual Growth & Difference (Disorders, Development & Aging) / Vision & Other Senses / Basic Visual Mechanisms (Binocular Vision, Depth Perception & Fovea vs. Periphery)

Location: Mountford Hall
11:00
Do strabismics perceive monocular stereopsis?
SPEAKER: unknown

ABSTRACT. A prerequisite for binocular stereopsis is the correct alignment of the two eyes and normal binocular fusion. Individuals with misalignment of the eyes (strabismus) are, most often, unable to obtain binocular stereopsis. Recently, we established that an impression of stereopsis can be obtained under monocular-aperture viewing of pictures. Though monocular stereopsis is weaker than binocular stereopsis, it shares the same phenomenological characteristics. This suggests that stereopsis (“seeing in 3D”) is not simply a byproduct of binocular vision but a more basic visual phenomenon linked to the derivation of the visual scale (Vishwanath, 2014). We report on a study aimed to determine if individuals with infantile constant strabismus can obtain the impression of monocular stereopsis. We tested individuals with various manifestations of strabismus, along with a control stereonormal group. Subjects compared viewing a pictorial image with two eyes or one eye through an aperture, and answered questions directed at understanding their depth impressions in both pictures and real scenes. Stereonormal observers confirmed our previous findings on monocular stereopsis. Interestingly, some observers with constant infantile strabismus reported depth impressions consistent with monocular stereopsis, though there was significant variability in the overall reports depending on the history and current manifestation of strabismus.

11:00
The effect of perceived reality of the visual scene in cross-modal interaction

ABSTRACT. Using a head-mounted display with head orientation tracking, observers can experience a visual scene recorded by a panoramic camera as a real ongoing world in some controlled situation (e.g. Suzuki et al., 2012). In this study, the participants' subjective reality was manipulated using this procedure and the effect of the visual scene on cross-modal interaction was investigated. One group was told that the visual scene was live, displayed via the attached camera; the other group was told that it was a recorded video. Participants who noticed that the visual display was not real were assigned to a separate group. Participants put their hand at the position corresponding to the dummy hand displayed in an HMD and their hand was stroked with a paintbrush synchronously with the visual display. The participants judged the orientation of the haptic stimulation, which was incongruent with the visual display. The shift of the perceived haptic orientation induced by visual stimuli was much larger in the group who perceived the video as reality than in the group who did not. The result indicates that the amount of the cross-modal effect of visual perception on haptic perception depends on the subjective reality of the visual scene.

11:00
Evaluating Multimodal Warning Displays for Drivers with Autism
SPEAKER: unknown

ABSTRACT. Providing consistent sensory information across multiple modalities can frequently improve performance; however, evidence exists that individuals with autism exhibit limited performance in multisensory integration. We designed two driving simulation experiments to test the effectiveness of multimodal audio (A), tactile (T) and visual (V) warning signals designed with different levels of urgency. In both experiments, warning signals had 7 modality levels (A, T, V, AT, AV, TV, ATV) and 3 levels of urgency (High, Medium, Low) for a total of 21 possible stimulus combinations. Experiment 1 measured perceived urgency and perceived annoyance of the warnings while Experiment 2 measured recognition time and accuracy of identifying the level of urgency. A total of 20 adult males participated, 10 in the ASD group and 10 age-matched individuals in the typically developed (TD) group. Results from Experiment 1 showed no group difference in perceived urgency, though the ASD group gave lower ratings of perceived annoyance. Results of Experiment 2 showed that the ASD group responded more accurately than the TD group; they also demonstrated quicker recognition times than the TD group for warnings containing a visual component and were quickest in the Vision-only condition.

11:00
Discrimination of blur in peripherally-viewed natural scenes
SPEAKER: unknown

ABSTRACT. All optical systems display some degree of blur. Although this may ultimately limit spatial resolution, blur also provides important signals for ocular accommodation, depth perception and motion perception. Predicting blur detection and discrimination in natural scenes, however, has proven problematic and has led to rather complex and varied models. Here we ask whether a recent blur discrimination model (Watson & Ahumada, 2011), operating on visible contrast energy differences between simple stimuli, can also capture performance with natural scene stimuli. We measured human blur discrimination performance using natural scene stimuli presented at three different eccentricities (0, 11, 22.5°) and blurred by seven different Gaussian kernels of varied scale (reference blurs). Images blurred by reference and test amounts were presented in a two-interval forced-choice method of constant stimuli task requiring participants to indicate the sharper image. Threshold vs Reference (TvR) functions were similar in shape to those of previous studies using simple stimuli. Blur detection thresholds (no reference blur) increased with eccentricity. However, in contrast to previous work with edge stimuli, blur discrimination thresholds for different eccentricities converged at higher reference blurs. Modelling suggests that eccentricity-based differences in the contrast sensitivity function shape TvR functions for natural scenes.

11:00
Binocular summation, binocular fusion and the transition to diplopia
SPEAKER: unknown

ABSTRACT. The visual system combines spatial signals from the two eyes to achieve single vision. But if binocular disparity is too large, this perceptual fusion gives way to diplopia. We studied and modelled the processes underlying fusion and the transition to diplopia. The likely basis for fusion is linear summation of inputs onto binocular cortical cells. Previous studies of perceived position, contrast matching and contrast discrimination imply the computation of a dynamically weighted sum, where the weights vary with relative contrast. For gratings, perceived contrast was almost constant across all disparities, and this can be modelled by allowing the ocular weights to increase with disparity (Zhou, Georgeson & Hess, 2014). However, when a single Gaussian-blurred edge was shown to each eye, perceived blur was invariant with disparity (Georgeson & Wallis, ECVP 2012), which is not consistent with linear summation (which predicts that perceived blur increases with disparity). This blur constancy is consistent with a multiplicative form of combination (the contrast-weighted geometric mean) but that is hard to reconcile with the evidence favouring linear combination. We describe a 2-stage spatial filtering model with linear binocular combination and suggest that nonlinear output transduction (e.g. 'half-squaring') at each stage may account for the blur constancy.
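The two candidate combination rules contrasted in the abstract can be written out explicitly. The exact equations are not given in the abstract, so the following is a schematic sketch of a contrast-weighted linear sum and a contrast-weighted geometric mean, not the authors' published model:

```python
def linear_combination(s_left, s_right, w_left, w_right):
    """Dynamically weighted linear sum of the two eyes' signals,
    where the weights may vary with relative contrast."""
    return w_left * s_left + w_right * s_right

def blur_geometric_mean(b_left, b_right, c_left, c_right):
    """Contrast-weighted geometric mean of the two monocular blurs:
    B = (b_L**c_L * b_R**c_R) ** (1 / (c_L + c_R)).
    This exact form is an illustrative assumption based on the
    abstract's description."""
    return (b_left ** c_left * b_right ** c_right) ** (1.0 / (c_left + c_right))
```

With equal blurs in the two eyes, the geometric mean returns that same blur whatever the contrast weights, matching the reported blur constancy, whereas a linear sum of two laterally displaced edges produces a shallower, effectively blurrier, combined edge that grows with disparity.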

11:00
Visual attentional focusing in 8-month-old infants predicts their future language skills
SPEAKER: unknown

ABSTRACT. A multi-sensory dysfunction of attentional focusing might be responsible for language deficits typically observed in children with specific language impairment (SLI). Although previous evidence showed that children with SLI demonstrate a sluggish engagement of visual attention, which accounted for a significant percentage of unique variance in their grammatical performance, a longitudinal-prospective study is needed to demonstrate the causal link between visual attention deficits and language acquisition disorders. Here we investigated whether pre-language visual frontoparietal-attention functioning may help to explain future language emergence and development. Since 8-month-old infants can already rapidly adjust their attentional focus size, we longitudinally studied the relationship between the infants’ time course of attentional focusing and future language production skills measured at 31 months in 35 children. The present 2-year longitudinal study shows that pre-language rapid attentional focusing skills - assessed by attentional cue-size facilitation (i.e., shorter pre-saccadic latency in the small than in the large cue condition) - explain a significant portion of the variance in future language acquisition. Our findings provide the first evidence that visual spatial attention in pre-language infants specifically predicts future language acquisition, suggesting new approaches for early identification and efficient prevention of SLI.

11:00
Developmental progression in the audio-visual binding of novel environmental features in children
SPEAKER: unknown

ABSTRACT. The reliable crossmodal binding of environmental features supports a range of cognitive activities. During development, children learn the statistical and semantic associations between these features. The current study explored the role of binding during the critical period in a child’s life when they start to formally learn the associations between the sounds and symbols of the alphabet. This study assessed whether the ability to bind improves with age independently of longer-term exposure to the alphabet or other sound-symbol relationships. Reception (4 yrs+) and Year One (5 yrs+) children undertook a change detection task which involved mapping the relationship between novel shapes (random Garner-like matrices) and novel sounds (scrambled environmental sounds). Two sound-symbol combinations were sequentially observed and then one combination was tested, which could be a new or old combination of the original features. Signal detection analysis revealed no difference in bias between the age groups, whereas sensitivity to the correct binding significantly increased with age. We conclude that children’s ability to learn associations is based not only on experience, but also on individual differences in the ability to bind environmental features.

11:00
Form-motion suppressive interactions in normal and disabled readers
SPEAKER: unknown

ABSTRACT. Detection of a low-contrast static Gabor is strongly reduced by high-contrast flankers whose spatial frequency (SF) is either equal to the target’s SF or differs from it by ±1 or ±2 octaves (Petrov, Carandini, & McKee, 2005). With transient stimulation, suppression of a 0.5 c/deg drifting target occurs with flankers whose SF differs by < 2 octaves but not ≥ 2 octaves, suggesting that suppression occurs when target and flankers stimulate the same (magnocellular) system but not different systems (magnocellular and parvocellular). Based on the hypotheses of an earlier development of the parvocellular than the magnocellular system, and of a magnocellular deficit in dyslexia, we compared the suppressive effect of ±2 octave flankers on the drifting target in adults and in normally reading or dyslexic children. Children’s contrast threshold for the drifting isolated target did not differ from that of adults. However, both children’s groups had higher thresholds than adults when the flankers’ SF was lower than or equal to that of the target. Moreover, only in dyslexics were thresholds with +2 octave flankers higher than with no flankers and higher than in adults. These results indicate stronger suppressive magnocellular lateral interactions in children than in adults and, only in dyslexics, a motion-form stimulation imbalance.

11:00
Differences between deaf and hearing adults in visual projections from eye to brain
SPEAKER: unknown

ABSTRACT. Previous research has suggested that differences in retinal nerve fibre layer (RNFL) thickness can predict peripheral visual sensitivity in hearing and early deaf adults (Codina et al., 2011). As the size of early visual structures is correlated within individuals (Andrews et al., 1997), we hypothesised that RNFL differences should lead to downstream structural differences in the visual pathways. Participants included congenitally, profoundly deaf adults and age-matched hearing controls, all without visual deficits. Retinal layer thickness was measured using spectral-domain optical coherence tomography (SD-OCT). Optic nerve, chiasm and tract widths were measured using structural MRI. The visual field representation within primary visual cortex (V1) was measured using functional MRI retinotopic mapping. Retinal layers projecting from the macula were thicker in hearing participants while peripheral projections were thicker in deaf participants. The optic nerve, chiasm and tract were wider in hearing participants, reflecting the predominance of central fibres comprising these structures. Finally, the area and volume of the representation of the central visual field in V1 were relatively larger in hearing compared to deaf participants. Differences in the distribution of neural processing across the visual field between deaf and hearing individuals provide compelling evidence that congenital hearing loss influences early visual structures.

11:00
Measuring visual field distortions in amblyopia
SPEAKER: unknown

ABSTRACT. Abnormal visual experience early in life alters the functional architecture of visual cortex and results in marked deficits in monocular visual acuity and binocular function - collectively referred to as amblyopia. Recently, we have shown there are also distortions in the visual field representation of amblyopic individuals (Hussain et al., 2015). Here, we attempt to map the associated changes in early visual cortex of subjects with amblyopia, using high-resolution magnetic resonance imaging (MRI) at 7 T. To measure visual field representations functionally, we used anatomical and functional MRI (GE-EPI, 1.5 mm isotropic voxels, TR=2s, TE=25ms) and standard retinotopic mapping stimuli in healthy and amblyopic participants. Stimuli were presented to each eye, monocularly. Outside the scanner, we also assessed fixation stability in each participant. We used the population receptive field (pRF) method to estimate polar angle, eccentricity maps, and pRF sizes (Dumoulin et al., 2008). Our results reveal systematic differences in the maps of normal and amblyopic subjects. We relate these changes to behavioural maps measured using a dichoptic positional matching technique, and individual anatomy.

11:00
The relation between inter-object distance and contributions of eye- and image-based grouping during rivalry
SPEAKER: unknown

ABSTRACT. Binocular rivalry occurs when the information presented to the two eyes is inconsistent. Instead of fusing into a single stable image, perception alternates between multiple interpretations over time. Integration across space is also disrupted: although perception during rivalry can be affected by image content, visual information presented to the same eye tends to be integrated into a dominant percept most of the time. This suggests that integration across space during rivalry occurs mostly at an early, monocular level of processing. The question remains whether image-based integration across space during rivalry occurs at a later stage of processing than eye-based integration. We investigate the relation between eye- and image-based grouping and inter-object distance (IOD). Since later visual areas have increasingly larger receptive fields, image-based grouping should continue to facilitate dominance duration at larger IODs compared to eye-based grouping. However, if both types of grouping show the same relation with IOD, this would suggest that image-based and eye-based grouping occur at the same level of processing. Results will indicate whether multiple levels of processing are required to explain the spatio-temporal dynamics of binocular rivalry.

11:00
Texture amplitude provides only limited support for shape-from-shading in a visual search task and older adults are less able to utilize this cue
SPEAKER: unknown

ABSTRACT. Second-order texture amplitude cues can disambiguate the role of luminance cues, helping observers to discriminate illumination/shading from reflectance changes. Older adults are less sensitive to such cues than younger adults, and this insensitivity extends to shading-reflectance discriminations. We tested visual search performance in a task involving simulated shaded bumps on a textured surface. When luminance and texture amplitude varied in harmony, the stimuli appeared more rounded ('bumps'); when the cues were antagonistic, they looked flatter and less realistic ('patches'). We also varied light source direction. There was a significant effect of age on search efficiency but no clear effect of lighting direction. However, reaction times were always very slow (intercept = 1-2 s) and significantly slower when finding patches among bumps compared to bumps among patches. Accuracy followed a similar pattern. Control searches for horizontal vs vertical lines and for un-textured bumps among dips were efficient, with no effect of age, and had more typical reaction times. We think that it is necessary to scrutinize the whole display to extract the relatively weak second-order cue, but that attention is drawn more to bumps than to patches. Older adults are further impeded by their insensitivity to the second-order cue.

11:00
Contrast detection differences between dichromats and trichromats
SPEAKER: unknown

ABSTRACT. Dichromacy is a form of congenital retinal lesion: protanopes and deuteranopes lack a precortical (L-M) opponent channel but the consequences for cortical development are unclear. One possibility is that the number of neurons in primary visual cortex is unchanged relative to trichromats, so that more V1 neurons are available to process achromatic signals. If this is the case, we might expect that dichromats have improved achromatic visual processing compared to trichromatic controls. The nature of this improvement would depend on the details of the reallocation across spatial and temporal frequency channels and contrast sensitivities. Here, we used a spatial 4AFC task to measure achromatic contrast discrimination thresholds for trichromatic and dichromatic observers across a range of pedestal contrasts. We find evidence of lower thresholds in dichromats for high-contrast pedestals (p<.04). Thresholds for low pedestal contrasts are unchanged between groups. These findings are discussed in the context of signal detection theory and the population-level encoding of contrast. We hypothesise that reallocation of neuronal resources does occur in dichromats and that the effects of this plasticity are most pronounced in the relatively sparse populations of neurons sensitive to high contrasts. Plans for further neuroimaging and behavioural experiments are described.

11:00
Peripheral vision affects central task performance under visual fatigue
SPEAKER: unknown

ABSTRACT. Visual load and difficult visual conditions can reduce performance (Richter, 2014). Nevertheless, open-space offices are becoming more popular, as is GPS navigation while driving. Moreover, we use our peripheral vision all the time, yet most tests examine central vision and few involve the periphery. For this reason, we investigated the role of peripheral vision in central task performance under visual fatigue. The central task, performed at 60 cm, was a computer-based visual search task consisting of a matrix (19.7 deg horizontally and vertically) of black Landolt squares, each 1.1 deg in size. The task was presented on a white background, with 5% or 15% noise in the periphery. Each participant had to memorize the target and find all identical items by clicking on them with a computer mouse. In addition, we measured near point of convergence, positive and negative relative accommodation, and phorias at near. The results show that peripheral visual noise decreases central task performance under visual fatigue: performance on the visual search task with 15% peripheral visual noise differed significantly with and without visual fatigue (p<0.05, one-way ANOVA).

11:00
The distribution of visual marking in 3-D space: Evidence for a depth sensitive mechanism
SPEAKER: unknown

ABSTRACT. In visual search, old distractors presented one second ahead of a new set of items containing the target may be effectively ignored. This “preview benefit” has been argued to involve a “visual marking” mechanism that inhibits the locations of old distractors. The current study investigated the three-dimensional distribution of visual marking. Participants searched for and identified a target letter amongst distractors (always presented at 0 disparity). Subsequently, participants localised a square probe. Critically, the probe could appear at the same zero-disparity depth, in front of, or behind the letters. In the preview condition, half of the items appeared first for one second, and the probe could surround an old or new distractor, or appear in empty space. In the full-set condition, all the items appeared at the same time and the probe either surrounded a distractor or appeared in empty space. In the preview condition, probe localisation was significantly slower for probes appearing on old compared to new distractors or empty space, but only when the probes appeared at the same depth as the distractors. No effects were observed in the full-set condition. The results are consistent with a depth-sensitive visual marking mechanism that inhibits specific locations in three-dimensional space.

11:00
Distance Perception in Immersive Environments – The Role of Photorealism
SPEAKER: unknown

ABSTRACT. Immersive environments (IEs) are increasingly used to perform psychophysical experiments. Their greatest strengths are versatility in stimulus presentation and control, and less time-consuming procedures. However, to ensure that IE results can be generalized to real-world scenarios, we must first provide evidence that performance in IEs is quantitatively indistinguishable from real-world performance. Our goal was to perceptually validate distance perception for CAVE-like IEs. Participants performed a Frontal Matching Distance Task (Durgin & Li, 2011) in three different conditions: a real-world scenario (RWS), a photorealistic IE (IEPH) and a non-photorealistic IE (IENPH). Underestimation of distance was found across all conditions, with a significant difference between the three conditions (Wilks’ Lambda = .38, F(2,134) = 110.8, p<.01, significant pairwise differences with p<.01). We found a mean error of 2.3 meters for the RWS, 5 meters for the IEPH, and 6 meters for the IENPH in a pooled data set of 5 participants. Results indicate that while a photorealistic IE with perspective and stereoscopic depth cues might not be enough to elicit real-world performance in distance judgment tasks, this type of environment nevertheless minimizes the discrepancy between simulation and real world compared with non-photorealistic IEs.

11:00
Audiovisual synchrony improves temporal order judgment performance only in complex dynamic visual environments
SPEAKER: unknown

ABSTRACT. Visual temporal order judgments can be profoundly degraded by the mere presence of additional visual events at remote spatio-temporal locations. In this study we investigate whether this Remote Temporal Camouflage (RTC) effect can be modulated by the presence of auditory events paired with visual target events. In the first experiment visual temporal order judgment performance was compared under static or irregularly timed dynamic visual distractor conditions, without or in combination with a pair of broadband tones either synchronised with each target event (Synchronised condition) or preceding the first and succeeding the second by 75 ms (Ventriloquism condition). In the case of static distractor environments visual temporal acuity benefits were observed only in the Ventriloquism condition. Whilst thresholds were significantly elevated in the dynamic visual context, the presence of both tone conditions significantly improved visual performance. In our second experiment we examined the effect of distractor regularity under analogous sound-related conditions. We find that the visual performance benefits afforded by synchronous target tones do not occur when the distractor events occur at regular intervals. These results suggest that audio-visual correspondences improve visual temporal order judgments only to the extent that they facilitate visual temporal segmentation.

11:00
Central and peripheral vision loss differentially affects contextual cueing in visual search
SPEAKER: unknown

ABSTRACT. Visual search for targets in repeated displays is more efficient than search for the same targets in random distractor layouts. Previous work has shown that this contextual cueing is severely impaired under central vision loss. Here, we investigated whether vision loss, simulated with gaze-contingent displays, prevents the incidental learning of contextual cues or the expression of learning, that is, the guidance of search by learned target-distractor configurations. Visual search with a central scotoma reduced contextual cueing both with respect to search times and gaze parameters. However, when the scotoma was subsequently removed, contextual cueing was observed in a comparable magnitude as for non-impaired controls. This indicated that search with a central scotoma did not prevent incidental context learning, but interfered with search guidance by learned contexts. In contrast to central vision loss, peripheral vision loss was expected to prevent spatial configuration learning itself, because the restricted search window did not allow the integration of invariant local configurations with the global display layout. This expectation was confirmed in that visual search with a simulated peripheral scotoma eliminated contextual cueing not only in the initial learning phase with scotoma, but also in the subsequent test phase without scotoma.

11:00
Hearing through your eyes: the Visually-Evoked Auditory Response
SPEAKER: unknown

ABSTRACT. In some people, visual stimulation evokes auditory sensations. How prevalent is this, and can it affect performance of visual and auditory tasks?

We measured auditory versus visual ‘Morse code’ sequence matching in 40 randomly-sampled adults. When asked whether they had heard faint sounds accompanying the flash stimuli, 16% responded 'Yes'. These same participants performed significantly better than ‘No’ respondents on visual sequence matching, as if their concurrent auditory sensations benefited visual performance (Saenz & Koch, 2008). But in a separate test, we found that any such benefit for visual sequencing was balanced by a cost for detecting faint auditory signals in the context of irrelevant visual stimulation, regardless of reported awareness. Thus even when subliminal, visually-evoked auditory sensations may affect detection of real sounds.

The high prevalence of subjective reports of ‘hearing’ visual flashes greatly exceeds the estimated prevalence of other typical synaesthesias (e.g. 2-4%; Simner et al, 2006). Our objective results suggest that subliminal visually-evoked auditory sensations may affect an even larger population. Such prevalence might be explained by the greater natural correlation between visual and auditory stimuli, compared to other more arbitrary associations typical of synaesthesia.

11:00
Does the sense of agency occur when tactile feedback is substituted for proprioceptive feedback?
SPEAKER: unknown

ABSTRACT. When people view their left hand in a mirror positioned along the midsagittal plane while moving both hands synchronously, the hand in the mirror visually captures the unseen right hand’s position. This is called the mirror illusion. The illusion evokes the sense of agency; that is, the participant’s sense of controlling their own body. In Gallagher’s model (2000), this sense occurs when the sensory feedback predicted by the forward model of body movement matches the actual sensory feedback. Since this model does not address multimodality, it is unclear whether the sense of agency occurs when information is incongruent between multimodal sensory feedback, particularly vision, proprioception and tactile sensation. To answer this question, a 2 x 2 factorial design experiment was performed using the mirror illusion (the unseen hand’s movement: voluntary or relaxed; vibration on the unseen hand: applied or not). Questionnaire results showed that the sense of agency was present in the relaxed vibrating condition, as well as in both voluntary conditions, indicating that the sense of agency may occur when visual and tactile feedback are supplied without proprioceptive feedback. This suggests possible feedback types that may evoke the sense of agency, and that the criteria for the sense of agency may depend on feedback context.

11:00
Cross-modal insights into the controversies of conceptual knowledge representation and temporal pole asymmetry
SPEAKER: unknown

ABSTRACT. How is conceptual knowledge represented? Do the temporal poles form a bilateral unified representational system for conceptual knowledge, or is there left/right specialization? To address this hotly debated issue and its generalization to non-visual modalities, we developed a novel cross-modal approach.

Methods: FMRI was conducted in a Siemens 3T scanner with a custom tablet system for the presentation of tactile information. The conceptual information was presented tactilely through Braille text, and its retrieval from memory was expressed through two non-visual modes – Braille writing and blind-drawing. Blind subjects read Braille paragraphs describing objects, faces, scenes and navigation sequences; then expressed their comprehension of each by i) non-visual (Braille) writing-from-memory, and ii) non-visual drawing-from-memory (20s/task).

Results/Conclusions: Comprehension of the Braille text concepts expressed through Braille writing-from-memory produced strong left-lateralized response at the temporal pole, while their expression through drawing-from-memory produced a right-lateralized response in the mid-anterior temporal lobe. In both cases, the corresponding regions in the opposite hemispheres were strongly suppressed. This first Braille writing and Braille-derived drawing study thus reveals a distinctive form of counterposed hemispheric specializations in the anterior temporal lobe. Furthermore, it extends the issue of conceptual knowledge representation beyond the visual modality for both encoding and retrieval.

11:00
A population response model of spatial crowding over time
SPEAKER: unknown

ABSTRACT. Visual crowding, the inability to identify an object when it is surrounded by clutter, places a fundamental limit on object recognition in peripheral vision. Crowding is thought to arise because the features of a target and nearby flankers are represented within common receptive fields in early visual cortex and thus cannot be individuated. However, recent work from our lab and others has challenged this view by revealing temporal dependencies of crowding. For example, if a brief onset asynchrony is introduced such that a target is presented 50ms after the onset of flankers, the deleterious effect of crowding is greatly reduced. These data have been taken as evidence for a role of top-down processing in crowding. Here we present a new computational model of crowding of orientation signals that can account for these observations in a feed-forward framework. We model the responses of populations of orientation-selective visual neurons and predict the perceptual reports made by observers in a difficult crowding task. By incorporating the neurophysiological finding that orientation tuning changes over time, our model simulates the temporal dependence of crowding phenomena. Our model thus explains recent crowding data without invoking top-down processes.

11:00
A role of cutaneous inputs in self-motion perception (2): Does the wind decide the direction of perceived self-motion?
SPEAKER: unknown

ABSTRACT. Perceived self-motion has mainly been investigated in vision, but Murata et al. (2014) reported that wind (a cutaneous stimulus) combined with vibration (a vestibular stimulus) also elicited perceived self-motion, which they referred to as “cutaneous vection”. Here we compared perceived self-motion in cutaneous vection with actual body transfer. We manipulated two factors: wind direction (from the front of or from behind the participant) and transfer direction (forward, backward, or vibration alone). We used two bladeless fans to deliver the cutaneous stimulus to the participant’s face and a DC motor to deliver vibration to the participant’s body. The floor beneath the participant could move to and fro. Onset latency, cumulative duration and ratings of subjective strength were measured. The participant was also asked to point in the perceived direction. When the wind direction was consistent with the transfer direction, latency was significantly shorter and ratings were significantly higher than in the other conditions. When vibration alone was presented with wind, the perceived direction depended on the wind direction. When wind blew on the participant from the front and behind simultaneously, the perceived direction was ambiguous. Wind direction thus contributed to determining the perceived direction of self-motion.

11:00
Optimal parameters of the treatment procedures for rehabilitation and development of binocular functions in different cases
SPEAKER: unknown

ABSTRACT. To increase the efficiency of functional treatment aimed at rehabilitation and development of binocular functions, one should choose the parameters of the procedures taking into account individual characteristics of the patient. We tried to find optimal treatment conditions for two groups of patients who had concomitant strabismus – convergent with hypermetropia (group CH, N=31) and divergent with myopia (group DM, N=23) – and underwent surgical operation just before our study. The main purpose of the treatment was to achieve fusion and the formation of a clear and stable binocular single image. The improvement of binocular functions was assessed quantitatively by measuring the ratios of monocular and binocular visual acuity values and accommodation ranges at four distances. Among other findings, it was revealed that, at the beginning of treatment, to obtain noticeable progress, one should use a specific binocular optical correction differing from the monocular one and depending on the anamnesis. Thus, at a distance of 0.5 m, in group DM, the required differences between optimal binocular and monocular optical correction varied from 0 to -4.5 D; in group CH, corresponding differences varied from 0 to +3.5 D; however, in the majority of cases, they were in the range from +2.0 to +2.5 D.

11:00
Boundary extension effect is larger in tilt shift photographs
SPEAKER: unknown

ABSTRACT. In boundary extension (BE) experiments, people report remembering content that was originally beyond the edges of a studied view (Intraub & Richardson, 1989). Close-up views are known to yield larger BE errors than wide-angle views. We tested the hypothesis that larger BE is caused not by field of view, but by perceived distance. As people move through an environment, their view of proximal space changes more profoundly than their view of distant objects; therefore, extrapolation of the visual input is more useful in proximal space. To manipulate perceived scene depth we used tilt-shifted photographs, because they have the same layout and evoke the same semantic references, but give the impression of toy-like scenery. We presented participants (N=25) with a brief study view, followed by a mask, and asked them to adjust the zoom level of the test scene to match the study view. The protocol consisted of 12 distant scenes, 12 tilt-shift versions of different distant scenes and 12 fillers. We found that, irrespective of the initial study view zoom level (+/-10%), responses for tilt-shift scenes were biased by 2-3% towards a more zoomed-out view. The observed pattern is consistent with the hypothesis that boundary extension is larger in proximal scenes.

11:00
Depth echolocation task in novice sighted people
SPEAKER: unknown

ABSTRACT. Some blind people have developed a unique technique, called echolocation, to orient themselves in unknown environments. Specifically, by self-generating a clicking noise with the tongue, echolocation allows blind people to perceive some characteristics of objects, such as material, shape and size, and how these vary with distance, ultimately gaining knowledge about the external environment. It is not clear to date whether sighted individuals can also develop this technique. Here, we tested the ability of novice sighted participants to perform an echolocation task in which the position of an object in depth had to be estimated. Participants repeated the task in three different sessions. The first was a training session, in which participants received feedback about their performance. In the next two sessions no feedback was given. Participants were able to perform the task with a high proportion of correct responses already in the first session after training. More interestingly, an improvement in precision and accuracy was observed in the second and third sessions, suggesting that echolocation can be progressively learnt by sighted individuals.

11:00
Object substitution masking, stimulus noise, and perceptual fidelity
SPEAKER: unknown

ABSTRACT. In Object Substitution Masking (OSM) a mask surrounding, simultaneously onsetting with, and trailing a target leads to a reduction in target perceptibility (Di Lollo et al., 2000). It has been questioned whether this process is due to target substitution or the addition of noise to the percept (Pödor, 2012). Two experiments examined this issue using an adjustment task in which a test Landolt C is presented and participants rotate it to match the target Landolt C shown during the trial (typical OSM paradigms use 2-4 alternative forced choice); the dependent measure was the angle of error. In Experiment 1 the effect of a trailing OSM mask (80ms-320ms) is compared against that of adding stimulus noise of varying densities (25%-75%) to the target location. Both manipulations (OSM, stimulus noise) produced a similar change in the distribution of errors compared against a baseline (0ms trailing mask, 0%-noise). The pattern is consistent with both mask manipulations reducing the fidelity of the target percept. In Experiment 2 the OSM and stimulus noise manipulations were varied factorially. Here the two manipulations had combinatorial effects on the error distribution. Implications are discussed regarding the mechanisms of OSM and the consequences of OSM for target perception.

11:00
Color Vision Deficiency Test using Multi-primary Image Projector
SPEAKER: unknown

ABSTRACT. This study develops a simple and effective color vision deficiency test using a multi-primary image projector. The multi-primary projection system is mainly configured with a light source component and an image projection component. The programmable light source can reproduce any spectral curve. Spatial images are then generated using a digital mirror device (DMD) chip that rapidly controls the intensity of the light source spectra in the 2D image plane. Consequently, the multi-primary image projector can reproduce a 2D spatial image with arbitrary spectral power distributions. As in an anomaloscope test, our system generates circle stimuli. The upper side of each circle is a mixture of 545 nm green light and 665 nm red light; the bottom side is 590 nm monochromatic yellow light. Using the multi-primary image projector, we simultaneously presented fifteen different circles consisting of various mixture ratios for the upper-side stimuli and various intensity levels for the bottom-side stimuli. We then tested for color vision deficiencies by having observers find the distinguishable colored circles. Through experiments using color vision deficiency simulation glasses, we confirmed that our system can realize a simple and effective test for color vision deficiencies.

11:00
Modeling the development of visual perception with computational vision
SPEAKER: unknown

ABSTRACT. From childhood, humans acquire increasingly complex visual skills that support their social development. Triggered by a presumably innate capability to perceive the presence of interacting agents, human perception evolves and focuses on the quality of the observed motion. Hence, children learn to decode others’ action goals, and also to categorize different classes of actions on the basis of motion features. The long-term goal of our work is to model the development of visual perception with computational tools, bridging computer vision, cognitive science, and robotics. We start from the earliest stages of human development and focus on the use of coarse motion models for discriminating between biological and non-biological dynamic events. In particular, we propose a model inspired by the Two-Thirds Power Law (Viviani & Stucchi, 1992) and discuss its empirical validity in the context of video analysis. We then proceed to the estimation of the similarity between actions and, as an add-on, we infer classes of affine movements. The analysis includes an evaluation of the tolerance of our models to viewpoint changes. Our computational tools will be exploited to improve robotic interaction skills and, in perspective, to drive further empirical research on human vision.
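The Two-Thirds Power Law cited above relates tangential velocity to curvature as v = K·κ^(-1/3) (equivalently, angular velocity A = K·C^(2/3)). A minimal sketch of how a video-analysis pipeline might test for it, with the function names and the synthetic data being illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def two_thirds_law_velocity(curvature, K=1.0):
    """Tangential velocity predicted by the Two-Thirds Power Law:
    v = K * kappa^(-1/3)."""
    return K * np.power(curvature, -1.0 / 3.0)

def power_law_exponent(speed, curvature):
    """Estimate the exponent beta in v = K * kappa^beta by log-log regression.
    Biological motion is expected to yield beta close to -1/3; tracked
    trajectories that deviate strongly suggest non-biological dynamics."""
    beta, log_K = np.polyfit(np.log(curvature), np.log(speed), 1)
    return beta

# Synthetic check: speeds generated exactly by the law recover beta = -1/3.
kappa = np.linspace(0.1, 5.0, 200)
v = two_thirds_law_velocity(kappa, K=2.0)
print(round(power_law_exponent(v, kappa), 3))  # -0.333
```

In practice, speed and curvature would be estimated from tracked image trajectories before the regression step.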

11:00
Combining body ownership illusions and time delay adaptation in virtual reality environments
SPEAKER: unknown

ABSTRACT. The promise of virtual reality is that it is possible to explore perception under extraordinary and impossible circumstances; some such circumstances may induce unusual percepts. Recently, body ownership/presence illusions have been studied using avatars in virtual reality environments; a hallmark of illusory presence is a strong physiological reaction to harm done to the avatar. A special manipulation that can be introduced in VR is time delay adaptation. Adapting to 250 msec time delays in flight simulators and then removing the delay suddenly induces an extraordinary causality-violation aftereffect: the pilot believes the aircraft maneuvered before the pilot moved the controls (Cunningham et al., Psychological Science, 2001). It should be possible to combine these causality-violation aftereffects with presence illusions to create a premonition-of-death-of-the-avatar illusion. Operating an avatar running in a maze induced a strong body ownership illusion. A pursuing drone shoots the avatar in the back, and the operator experiences the shots through a tactile vest. A time delay between the movements of the operator and the avatar is gradually increased and adapted to. Unfortunately, time-delay adaptation interferes with the presence illusion; additional multisensory feedback is being programmed to compensate for the effects of time-delay adaptation and restore the presence illusion.

11:00
Simple reaction times to stimuli in virtual 3D space
SPEAKER: unknown

ABSTRACT. Reaction times (RTs) to simple visual stimuli depend on several stimulus properties. Recently, converging evidence has shown that larger stimulus size evokes faster simple RTs. This effect seemingly depends on the stimulus’ perceived size rather than on physical stimulus properties, and it is typically investigated using visual size illusions. In contrast, the present investigation was conducted using stereo head-mounted displays. A circular reference plane consisting of 12 spheres was rendered, and an additional target sphere was presented in the plane’s center, either in the same depth plane or displaced (near, far). In two conditions the target sphere was modulated such that either physical or perceived size was constant across depth planes. Constant perceived size was expected to evoke constant RTs across space, while in the constant physical size condition RTs were expected to decrease with increasing depth (and perceived size). However, the results show the opposite pattern: in both conditions RTs increased from near to far target positions. This finding is at odds with recent investigations on simple RTs and perceived size. Apparently, the relative stimulus position in space as well as the physical stimulus size exert an influence on simple RTs.

11:00
Briefly presented visual search tasks reveal superior parallel processing in individuals with autism spectrum disorder
SPEAKER: unknown

ABSTRACT. The mechanisms underlying the superior visual search skills of individuals with autism spectrum disorder (ASD) remain controversial. The present study compared the performance of individuals with ASD and controls in briefly presented (160 ms) search tasks in which participants were asked to determine instantaneously the presence or absence of a pre-defined target among distractors. The short presentation method allows us to assess how quickly and accurately participants process multiple stimuli simultaneously, rather than focusing on stimuli serially. We found that, overall, the ASD group achieved faster reaction times regardless of set size, with higher accuracy than the controls, in a typical conjunction search task. The superior performance of those with ASD persisted in a hard search in which the target feature information was ineffective in prioritizing likely-target stimuli. The results indicate that the search superiority of individuals with ASD derives neither from differences in feature-based attention nor from serial search processes. Unlike in conventional models of visual search, in which only basic visual features such as color and orientation are processed at the parallel processing stage, individuals with ASD presumably distinguish a target on the basis of more complex visual information at this stage.

11:00
The perceptual integrability of 3D shape
SPEAKER: unknown

ABSTRACT. 3D shapes often strike us as unitary. This is also assumed in a well-known experimental paradigm, the attitude gauge figure task, in which observers match the attitude of a virtual probe to the local attitude of a pictorial surface. To perform global analyses on the perceived 3D shape (such as affine correlations), the local attitudes are integrated into a continuous 3D surface. This procedure requires the assumption that the surface is perceptually integrable. In this study we investigated this assumption. We rendered two shapes (a Gaussian surface and a torso) using five different material models: matte, mirror, glass, glossy and velvet. Three observers performed the attitude gauge figure task. Sampling was 229 and 249 points for the two stimuli, respectively, and the number of repetitions was 3. We quantified integrability by taking closed line integrals along the vertices; for a physical surface, these line integrals vanish. Overall, we found that matte and glossy surfaces result in the most ‘stable’, integrable percepts. Glass and mirror surfaces caused more ‘cracks’ in the perceived surface, i.e. were less integrable. However, this was only found for the abstract Gaussian shape; for the torso, we did not find any difference between materials.
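The loop-integral diagnostic used above can be sketched numerically: treat the measured attitudes as sampled surface slopes (p = dz/dx, q = dz/dy) and compute the closed line integral of p dx + q dy around each grid face, which vanishes for a field that comes from an actual surface. A minimal illustration, with the grid, test surface, and function names assumed for the example rather than taken from the authors' procedure:

```python
import numpy as np

def face_loop_integrals(p, q):
    """Closed line integral of (p dx + q dy) around each unit grid face,
    using trapezoid-rule contributions along the four edges. For an
    integrable (true) surface these integrals vanish; nonzero values mark
    'cracks' where the attitudes admit no single continuous surface."""
    top    = 0.5 * (p[:-1, :-1] + p[:-1, 1:])   # left -> right edge
    right  = 0.5 * (q[:-1, 1:]  + q[1:, 1:])    # downward edge
    bottom = -0.5 * (p[1:, 1:]  + p[1:, :-1])   # right -> left edge
    left   = -0.5 * (q[1:, :-1] + q[:-1, :-1])  # upward edge
    return top + right + bottom + left

# Slopes derived analytically from a depth map are integrable by construction.
y, x = np.mgrid[0:20, 0:20] * 0.1
z = np.exp(-((x - 1) ** 2 + (y - 1) ** 2))      # Gaussian bump surface
p_ = -2 * (x - 1) * z                           # dz/dx
q_ = -2 * (y - 1) * z                           # dz/dy
resid = np.abs(face_loop_integrals(p_, q_)).max()
print(resid < 1e-2)  # True: residuals are only discretization error
```

Applying the same computation to measured gauge-figure attitudes would flag non-integrable regions of the perceived surface.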

11:00
Perceptual compensation of pursuit-induced retinal motion in infantile nystagmus
SPEAKER: unknown

ABSTRACT. Infantile nystagmus (IN) produces constant involuntary horizontal oscillations of the eyes. Individuals with IN are thought to compensate for this by comparing retinal estimates of the continual image motion with extra-retinal estimates of eye velocity. This is similar to how typical observers are able to interpret image motion during smooth pursuit, but whether individuals with IN are able to do so for these larger, more deliberate eye movements remains unexplored. We conducted a monocular velocity-nulling task on 11 adult IN participants and compared their performance with age-matched controls. Participants followed a pursuit target moving at 8°/s, during which time a random dot pattern was presented for 500 ± 50ms. Dot velocity was adjusted to yield the Point-of-Subjective-Stationarity (PSS), which in typical observers is the point at which the dot pattern appears stationary. Preliminary findings show the PSS was small and positive for both groups, with a possible directional asymmetry in individuals with IN allied with their beat direction. Compensation for pursuit therefore appears similar and incomplete in both groups (the Filehne illusion), suggesting that the processing of pursuit-induced retinal motion is relatively normal in IN, despite the continual oscillation of the eyes.

11:00
The neural origins of visual crowding as revealed by event-related potentials and high-frequency oscillatory dynamics
SPEAKER: unknown

ABSTRACT. Visual crowding is the difficulty in identifying a target in the presence of nearby flankers. Most neurophysiological studies of crowding have employed functional neuroimaging, whose low temporal resolution leaves a key question unanswered: does crowding arise at the earliest stages of visual processing (e.g., V1-V2) or at later stages (e.g., V4)? Here, we used a classic crowding paradigm in combination with electroencephalography (EEG). We manipulated the critical spacing between the peripheral target and flankers, while ensuring proper control of basic stimulus characteristics. Analyses focused on event-related potentials (ERPs) and oscillatory activity in the beta (15-30 Hz) and gamma (30-80 Hz) bands. We found that the first sign of a crowding-induced modulation of EEG activity was a suppression of the N1 component (~240ms post-stimulus), in agreement with a recent study by Chicherov et al. (2014). Oscillatory analysis revealed an early stimulus-evoked gamma enhancement (~100-200ms) that, however, was not influenced by the amount of crowding. In contrast, a subsequent reduction in the beta band (~250-500ms) was observed for the strong relative to the mid crowding condition, and correlated with individual behavioral performance. Collectively, these findings show that crowding emerges at higher levels of the visual processing hierarchy.

11:00
The development of audio-visual integration processes in short-term memory for information used in literacy
SPEAKER: unknown

ABSTRACT. Although it is acknowledged that orthographic processing is important, learning the basic association (i.e. binding) between a shape (grapheme) and its sound is critical in the early stages of literacy development. This study reports an experiment measuring the accuracy with which children could judge whether two previously heard and seen events (individual items and item strings) were associated with one another. A sample of 87 children (Reception and Year 1), representing low and high ability in literacy skills, participated. The task involved a series of trials in which the child saw and heard two events sequentially (memory stage), each event consisting of a novel sound (or sound string) and a novel shape (or shape string) presented together. On same trials, the original binding was maintained; on different trials, a new binding was formed from one of the event sounds (or strings) and one of the event shapes (or strings). Not unexpectedly, the results clearly demonstrate that the younger age group of children have substantially more difficulty with the binding task. Further analyses suggest that the ability to bind is crucial to transitions in the development of writing skills in young children.

11:00
The effect of visual fatigue on clinical evaluation of vergence
SPEAKER: unknown

ABSTRACT. Complaints of visual fatigue increase after prolonged near work. This can be related to significant changes in the coordinated work of the accommodation and vergence systems, such as decreased accommodation and vergence range (Gur et al., 1994; Murata et al., 1996). The purpose of this study was to evaluate the effect of prolonged near work (computer and paper work >4 hours a day) on clinical measurements of the vergence response. Associated heterophoria (the vergence state resulting from accommodation-vergence interaction), negative and positive fusional vergence (vergence amplitude), and vergence facility (dynamics of the vergence response) were tested in 15 students (20-22 y., 11 with emmetropia and 4 with corrected myopia) using specially designed computerized tests. Dichoptic images were presented to each eye using red-cyan filters. The measurements were performed on five working days (in the morning and in the evening). Analysing the whole sample group, we observed no statistically significant changes in heterophoria, positive fusional vergence, or vergence facility at the end of the working day. Only negative fusional vergence demonstrated a statistically, but not clinically, significant decrease at the end of the working day. The results indicate that the vergence response is rather stable over the day.

11:00
Effect of mental practice on mental rotation after stroke: comparison between alphabet letters and hands
SPEAKER: unknown

ABSTRACT. Mental practice (MP) is a recent rehabilitation method derived from cognitive psychology. In typical MP, stroke patients control visual images of the body and hands to improve their motor function (e.g., hand movements). In this study, the effects of MP using hand images on mental rotation performance were investigated. Patients were assigned to three groups (control, normal rehabilitation program, and normal rehabilitation with MP). In the MP group, patients observed video-instructed MP on a tablet twice per week. The mental rotation task featured two types of visual stimuli: an F and a mirrored F, and right and left hands. These images were rotated 0, 90, 180, and 270 degrees. The average reaction times in the mental rotation task pre-intervention, after one month (post1), and after six months (post2) were compared. The results indicated no significant difference for F-shaped mental rotation in either the normal or the MP group. However, in the MP group, mental rotation performance improved at post1 and post2, whereas no improvement was found in the normal rehabilitation group. These results suggest that the effect of MP in stroke patients might be task-specific, depending on the cognitive function involved.

11:00
A preference for stereopsis in deep layers of human primary visual cortex
SPEAKER: unknown

ABSTRACT. Recovering depth from binocular disparity requires multiple computations, which are most likely implemented across early and higher visual areas. How these areas interact to support perception remains largely unknown. Here we used 7T functional magnetic resonance imaging at sub-millimeter resolution to examine laminar responses to stereoscopic stimuli in V1. To disentangle perceptual experience from disparity processing, we presented random-dot stereograms in correlated and anti-correlated form while observers (N=4) performed a Vernier detection task. We measured blood-oxygenation-level-dependent (BOLD) responses using a gradient- and spin-echo sequence with 0.8 mm3 resolution, and found an increased BOLD signal for correlated versus anti-correlated stereograms in deep, but not superficial, layers of V1. Control experiments ruled out biases in BOLD responses across cortical layers. By examining multivariate patterns in V1, we also found that voxels in deep layers are weighted most strongly when a classifier learns to discriminate activity patterns evoked by correlated vs anti-correlated stereograms. These results indicate a preference for disparities that support perception in the deep layers of V1. This is compatible with a role for recurrent circuits in stereopsis, either via local connections within V1 or via feedback connections from higher visual areas, where neural signals are closely related to perception.

11:00
Superior sensitivity for horizontal but not vertical audio localization in sighted children
SPEAKER: unknown

ABSTRACT. Audio localization is a complex spatial ability that matures during development. Compared to localization of visual stimuli, localization of audio stimuli is more complex, since the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields (Ahveninen et al., 2014). Here we study how static audio localization is influenced by the presentation plane (horizontal or vertical) and by the response requested (verbal or motor) during development. The setup consisted of 16 aligned loudspeakers (4x4 cm each), covered by tactile sensors, that could be positioned in the horizontal or the vertical plane. We asked forty-five blindfolded children aged between 6 and 10 years to complete two audio tasks in which they had to indicate the position of a sound presented in the horizontal or the vertical plane with a verbal (i.e. saying the corresponding number) or motor (i.e. touching the box) response. We found that, irrespective of the method employed (verbal or motor), all children were consistently more precise in audio localization on the horizontal axis (P<0.01). These results suggest that humans have superior sensitivity for audio localization in the horizontal plane already at a young age.

11:00
Orientation discrimination is superior in individuals with autism spectrum conditions (ASC)
SPEAKER: unknown

ABSTRACT. Atypical perception, such as hyper-sensitivity to some types of visual stimuli (Tavassoli et al., 2013), is commonly reported in individuals with autism spectrum conditions (ASC). In addition, several studies have found sensory discrimination to be altered in ASC. For instance, somatosensory discrimination (Blakemore et al., 2006) and pitch discrimination (Bonnel et al., 2003) have both been found to be enhanced in ASC. Here, we investigated whether orientation discrimination is also enhanced in ASC.

We measured oblique orientation discrimination in 48 individuals with ASC, and 48 control participants matched on age, gender, and non-verbal reasoning ability. Orientation discrimination thresholds were significantly lower in adults with ASC (M= 5.81, SD=2.26) than those without (M=6.88, SD=2.37; t(94)=-2.267, p=.026).

This study demonstrates that oblique orientation discrimination is superior in individuals with ASC. Determining the cause of atypical perception in ASC may help cast light on the neural underpinnings of the condition. As neural inhibition is closely implicated in the tuning of orientation selective neurons, our future work will address whether neural inhibition may also be atypical in individuals with ASC.

11:00
Lateralization of visual functions
SPEAKER: unknown

ABSTRACT. Lateralization of different visual functions has mostly been studied for each function in isolation. It remains unclear whether lateralization of one visual function relates to lateralization of another. Moreover, for a number of visual functions, lateralization was found by some researchers but could not be replicated by others. In our lab, too, we were able to replicate lateralization of some visual functions (face perception and global/local perception), but did not find convincing results in favor of lateralization of other functions (categorical perception of colors (the ‘lateralized Whorf effect’), categorical and coordinate spatial relation processing), or found only marginal effects (spatial frequency perception). We hypothesize that, while some individuals can be strongly lateralized for a certain visual function, this is not necessarily the case for all people. Because statistics are generally computed at the group level, the participant group needs to be composed such that enough (strongly) lateralized participants are included in order to find a significant lateralization effect. To circumvent this random element of group composition, we study lateralization of a number of visual functions at the individual level. This also enables us to examine correlations between lateralization quotients for these functions.

11:00
“Feeling by seeing”: Eliciting haptic sensing by a non-attentive visual method: Psychophysical haptic – visual transformation functions
SPEAKER: unknown

ABSTRACT. We investigate a sensory-substitution phenomenon whereby unattended peripheral dynamic visual stimuli elicit haptic sensation. Remotely operated systems such as surgical robots lack the haptic feedback essential for operation, which limits their regular use. Participants were requested to maintain various stylus pressures while tracking routes on a pressure-sensing tablet or on a virtual surface above the tablet. Routes, feedback and stylus location were displayed on a wide screen. The level of stylus pressure varied the color of the route trace, which served as “attended” feedback. A remote pulsating ellipse reflecting stylus pressure served as peripheral (“unattended”) feedback. Following acquisition trials, performance was examined for different feedback conditions (color, peripheral, none). Results indicated better performance in unattended-peripheral feedback trials than in no-feedback trials. In study one, the color and frequency feedback parameters were coupled to stylus pressure using a linear transformation function. Well-established psychophysical principles indicate that logarithmic transformation functions, adapted to specific sensory modalities, elicit optimal perceptual responses. In study two, the color and frequency feedback parameters were therefore coupled to stylus pressure using a logarithmic transformation function. Comparison of the two studies indicates significantly higher performance levels when using the logarithmic function, but faster learning when using the linear function.

11:00
Eye movements during obstacle crossing in people with Parkinson’s disease who fall: Influence of disease severity and visual contrast
SPEAKER: unknown

ABSTRACT. INTRODUCTION: Negotiating obstacles is a complex task for people with Parkinson’s disease (PD) due to a plethora of motor symptoms that worsen with disease progression. Visual deficits in PD impede safe obstacle negotiation and increase the risk of falls (van der Marck et al., 2014, Parkinsonism and Related Disorders). Increasing the saliency of obstacles may improve the interpretation and negotiation of complex environments. AIMS: To quantify the association between eye movements and disease severity in PD participants who have previously fallen (PD-fallers) whilst negotiating obstacles of varying contrast. METHODS: 18 PD-fallers were asked to walk over an obstacle (HxWxD: 15x60x2 cm) of either low or high contrast. Eye movements (number of saccades and fixation duration) were recorded using a mobile eye-tracker (Dikablis, 25Hz). Spearman correlations described the association between eye movements and disease severity (UPDRSIII). Adjusted significance was accepted at p<.01. RESULTS: UPDRSIII was negatively associated with the number of saccades irrespective of obstacle contrast (rho=-.66, p=.003) and positively associated with fixation duration when obstacle contrast was high (rho=.69, p=.002). DISCUSSION: Reduced visual exploration was associated with more severe PD motor symptoms. Improving obstacle saliency offers the potential to prolong visual attention to task-relevant stimuli when motor deficits are high.

11:00
Investigating Sound Content in Early Visual Cortex
SPEAKER: unknown

ABSTRACT. V1 neurons receive non-feedforward input from lateral and top-down connections (Muckli & Petro, 2013). Top-down inputs to V1 originate from both visual and non-visual areas, such as auditory cortices. Auditory input to early visual cortex has been shown to contain category-specific information related to complex natural sounds (Vetter, Smith, & Muckli, 2014). However, this categorical auditory information in early visual cortex was examined in the absence of visual input (i.e. subjects were blindfolded); the representation of categorical auditory information in visual cortex during concurrent visual stimulation therefore remains unknown. Using functional brain imaging and multivoxel pattern analysis, we investigated whether auditory information can be discriminated in V1 during an eyes-open fixation paradigm, while subjects were independently stimulated with complex aural and visual scenes. We also investigated similarities between auditory and visual stimuli in V1, comparing categorically-matched top-down auditory input with feedforward visual input. Lastly, we compared top-down auditory input to V1 with top-down visual input by presenting visual scene stimuli with the lower-right quadrant occluded. We suggest that top-down expectations are shared between modalities and contain abstract categorical information. Such cross-modal information could facilitate spatio-temporal expectations or, more generally, the brain’s inference about the external world (Mumford, 1991).

11:00
Evidence for attenuated predictive signalling in schizophrenia
SPEAKER: unknown

ABSTRACT. Positive symptoms of schizophrenia, such as delusions and hallucinations, are thought to arise from an alteration in the predictive coding mechanisms that underlie perceptual inference. Here, we aimed to empirically test the hypothesized link between schizophrenia and perceptual inference. 20 patients with schizophrenia and 27 healthy controls matched for age and gender took part in a functional magnetic resonance imaging (fMRI) experiment that assessed the influence of beliefs on the perception of an ambiguous structure-from-motion stimulus. Compared to healthy controls, schizophrenia patients reported perception of the ambiguous stimulus to be less biased by beliefs. This effect was paralleled by weaker belief-related activity in orbitofrontal cortex, a region that has previously been implicated in the generation and maintenance of beliefs. Our results indicate that in schizophrenia the influence of higher-level predictions, such as beliefs, on perceptual inference might be weakened. We suggest that attenuated predictive signaling during perceptual inference may provide the starting point for the formation of positive symptoms in schizophrenia.

11:00
Framing can enhance the perceived depth of a picture
SPEAKER: unknown

ABSTRACT. We examined the effect of framing on a picture’s apparent depth. Sixteen observers rated the apparent depth of 60 pictures with a frame placed 13.0 cm in front of a display, while another 16 participants evaluated the same set of pictures without a frame. The pictures were rated on a 0–4 scale, with 4 indicating the greatest depth. The 32 observers were then presented the same picture with and without a frame, placed side-by-side, and they judged which of the two had greater depth for all 60 pictures. Observers performed depth ratings both before and after depth judgment. Before depth judgment, mean scores for the 15 higher-rated pictures with frames were higher than those for the same pictures without frames, but mean scores for the 45 lower-rated pictures with frames were almost the same as those for the same pictures without frames. After depth judgment, 82% of the pictures were scored higher when they were presented with frames. Moreover, the mean proportion of observers that chose framed pictures as having more depth was 75%. These results indicate that framing a picture enhances its perceived depth, suggesting that framing makes distance cues less reliable and pictorial depth cues more effective.

11:00
Asymmetric effects of stereoscopic depth on simultaneous lightness contrast
SPEAKER: unknown

ABSTRACT. As the perception of lightness is modulated by relatively high-level stimulus configurations, it is surprising that stereoscopic depth of the test patches does not have strong effects on simultaneous lightness contrast (Gibbs & Lawson, 1974; Menshikova, 2013). Here we report results that demonstrate a partial but substantial effect of the depth configuration. A classical configuration of two grey patches on adjacent black and white surroundings was shown on a computer screen stereoscopically using LCD shutter goggles (nVidia 3D Vision). The lightness of the patch on the black background was manipulated, and the point of subjective equality was measured by the method of constant stimuli. The lightness contrast was substantially enhanced when the patch on the black background lay behind the background, irrespective of the depth of the patch on the white background. There was no such effect when the patch was in front of the black background. We suggest that, in the critical condition, the patch appeared to be located in a dark room. The reason for this marked asymmetry is not clear, but the results demonstrate a particular case in which stereoscopic depth acts as a configuration cue in simultaneous lightness contrast.

11:00
The Effect of Observation Distance on Space Configuration of Targets for Gaze Perception
SPEAKER: unknown

ABSTRACT. Gaze perception causes overestimation of amplitude and underestimation of depth distance (Mori & Watanabe, 2014). We investigated how observation distance influences the spatial configuration of targets in gaze perception. Participants observed still images of life-size human faces displayed on a monitor from a distance of 1 m or 4 m, judged the gaze point from the still images, and placed markers at the judged gaze points on the floor. The configuration of targets (gaze points) is defined by the amplitude and the direct distance from the origin, i.e. the monitor. Estimating the linear regression between the physical and the perceptual configuration, based on amplitude and direct distance, revealed a stronger tendency to overestimate the amplitude and underestimate the direct distance when observing from the shorter distance. Estimating the magnification ratio of the coordinate values with an affine transformation showed that observation from the shorter distance produced a tendency to underestimate the depth distance, while the lateral distance was not influenced by observation distance. These results suggest that observation distance affects the perceived depth of the spatial configuration of targets in gaze perception.

11:00
The EEG correlates of stimulus-induced spatial attention shifts in healthy aging.
SPEAKER: unknown

ABSTRACT. Young adults typically display a processing advantage for the left side of space (“pseudoneglect”), but older adults display either no strongly lateralised bias or a preference towards the right (Benwell et al., 2014; Schmitz & Peigneux, 2011). We have previously reported an additive rightward shift in the spatial attention vector with decreasing landmark-task line length and increasing age (Benwell et al., 2014). However, there is very little neuroimaging evidence showing how this change is represented at the neural level. We tested 20 young (18-25) and 20 older (60-80) adults on long vs short landmark lines whilst recording activity using EEG. The peak “line length effect” (long vs short lines) was localised to the right parieto-occipital cortex (PO4) 137ms post-stimulus. Importantly, older adults showed additional involvement of left frontal regions (AF3: 386ms & F7: 387ms) for short lines only, which may represent the neural correlate of this rightward shift. These results align with the HAROLD model of aging (Cabeza, 2002), in which brain activity becomes distributed across both hemispheres in older adults to support successful performance.

11:00
Invisible Aftereffects: is awareness necessary?
SPEAKER: unknown

ABSTRACT. One of the most intriguing questions is how awareness influences the processing of visual information. Conscious access to the properties of physical stimulation is considered a crucial factor in determining perception, yet the exact relationship remains elusive. One way to address this issue is to test whether perceptual aftereffects develop fully when awareness is abolished during adaptation. Here, we manipulated the visibility of the adapting stimuli using crowding to investigate how diminished awareness affects the magnitude of the dynamic motion aftereffect (MAE). Random-dot displays, moving at various levels of coherence, served as target, flanking, and adapting stimuli. To examine how the perceptual and physical attributes of stimulation interact with perception, we tested the MAE under full-, high- and low-visibility adaptation conditions, with and without crowding. Psychometric measurements were based on the observers’ performance in a directional motion discrimination task. Our results showed that crowding severely impaired motion discrimination ability and reduced the MAE to a lesser extent (full adaptation condition). Physically identical and perceptually similar stimuli produced MAEs of unequal strength (high visibility condition). In addition, the MAE persisted even with invisible adaptors, indicating that both perceptual and physical factors influence perception.

11:00
Time contraction during delayed visual feedback of hand action
SPEAKER: unknown

ABSTRACT. Congruent visual feedback increases perceived duration of hand action (Press et al., 2014). Action-outcome congruence is fundamental to sense of agency (the feeling that I am causing an action) and contributes to time distortion (Haggard et al., 2002). We therefore hypothesized that sense of agency over visual feedback of the moving hand would increase perceived duration of action. Participants moved their hand to imitate models of hand poses. To manipulate sense of agency, we provided video feedback (3000 ms duration) of their hand movement, with spatio-temporal biases (spatial: upright or inverted; temporal: 50−1500 ms delays). Participants then judged whether the video was of short or long duration in comparison with videos presented in previous trials (including practice trials). They also reported whether they felt in control of the hand movement in the video. Delayed videos were judged as “short” and “no agency” more frequently than synchronous videos (50-ms delay). Our results showed subjective time contraction caused by delayed visual feedback of hand action, suggesting that sense of agency modulates time perception during action.

11:00
Shading Beats Binocular Disparity in Depth from Luminance Gradients
SPEAKER: unknown

ABSTRACT. We investigated how perceived depth can be determined by shading and disparity cues. The target, designed to simulate a uniform corrugated surface under diffuse illumination, had a sinusoidal luminance modulation (1.8 cy/deg, contrast 20%-80%) modulated either in phase or in opposite phase with a sinusoidal disparity of the same corrugation frequency, with disparity amplitudes ranging from 0′ to 20′. The observers’ task was to adjust the binocular disparity of a comparison random-dot stereogram to match the perceived depth of the target. The perceived target depth increased with luminance contrast and was specific to luminance phase, but was largely unaffected by the disparity modulation of the target. These results suggest that human observers can use the diffuse illumination assumption to perceive depth from luminance gradients alone, without making an assumption about light direction. Remarkably, the observers gave a much greater perceived depth weighting to the luminance shading than to the disparity modulation of the targets, which cannot be explained by a Bayesian cue-combination model weighted in proportion to the variance of the measurements for each cue alone. Instead, the results suggest that the visual system uses disjunctive mechanisms to process these two types of information rather than combining them according to their likelihood ratios.
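For readers unfamiliar with the model the abstract argues against, the standard Bayesian (inverse-variance weighted) cue-combination rule can be sketched as follows. The depth estimates and variances below are illustrative values, not the study's data, and the function name is our own.

```python
def combine_cues(estimates, variances):
    """Inverse-variance weighted (maximum-likelihood) cue combination.

    Each cue's weight is proportional to its reliability (1/variance);
    the combined variance is lower than that of any single cue.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined_estimate = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_variance = 1.0 / total
    return combined_estimate, combined_variance

# Illustrative (hypothetical) depth estimates: shading says 10 units,
# disparity says 4; here disparity is the more reliable cue (lower variance),
# so the combined estimate is pulled towards 4.
est, var = combine_cues([10.0, 4.0], [4.0, 1.0])
```

Under this rule, the more reliable cue should dominate; the abstract's point is that observers weighted shading more heavily than such reliability weighting predicts.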

11:00
Inducing the preferred retinal locus of fixation
SPEAKER: unknown

ABSTRACT. Patients suffering from central vision loss can still acquire visual information using parafoveal vision. They fixate an object eccentrically at a preferred retinal locus of fixation (PRL). Depending on the properties of the vision loss, or the nature of the visual task, PRL positions differ in their efficiency in acquiring visual information, and patients do not always choose the most efficient PRL position. The present study investigates whether a PRL can be induced at a specific position. Central vision loss of 6 deg was simulated in 10 healthy subjects, and PRL training was performed in a set of visual tasks in four one-hour training sessions, separated by at least 24 hours. Performance was tested in a reading task throughout the training. In five of the subjects (induced group), every time a target was placed in the right half of the visual field, it was shifted to the left visual field, thus inducing a left visual field PRL position. After training, subjects of both groups had developed a PRL. Furthermore, induced-group subjects placed targets in the left visual field, as intended by the inducing procedure. Thus, this study demonstrates that PRLs can be induced at a specific position.

11:00
Selectivity of face perception to horizontal information across the lifespan (from 6 to 74 years old)
SPEAKER: unknown

ABSTRACT. Face recognition in young human adults preferentially relies on the processing of horizontally-oriented visual information. We addressed whether the horizontal tuning of face perception is modulated by the extensive experience humans acquire with faces over the lifespan, or whether it reflects an invariably prewired processing bias for this visual category. We tested 296 subjects aged from 6 to 74 years in a face matching task. Stimuli were upright and inverted faces filtered to preserve information in the horizontal or vertical orientation, or both (HV) ranges. The reliance on face-specific processing was inferred based on the face inversion effect (FIE). FIE size increased linearly until young adulthood in the horizontal but not the vertical orientation range of face information. These findings indicate that the protracted specialization of the face processing system relies on the extensive experience humans acquire at encoding the horizontal information conveyed by upright faces.

11:00
Spatial attention to graphemes in grapheme-color synesthesia
SPEAKER: unknown

ABSTRACT. Grapheme-color synesthesia is a rare perceptual phenomenon in which an individual’s perception of letters or numbers is associated with sensations of color. Using EEG alpha oscillations (9-11 Hz) and a spatial priming task, we investigated whether color-inducing graphemes attract attention in synesthetes. The participants were shown real-colored or achromatic color-inducing graphemes in either the left or the right visual field and performed an orientation judgment on a Gabor patch that was subsequently presented at the same or the opposite location. Achromatic non-color-inducing graphemes were shown as a baseline condition. Responses to both real-colored graphemes and color-inducing graphemes were faster than those in the baseline task. Color-inducing graphemes, but not real-colored graphemes, induced an asymmetric pattern of alpha activity, with a relative power decrease in left posterior areas and a corresponding increase over right posterior sites. This asymmetry did not depend on the presentation location of the grapheme. We discuss the alpha power modulations in the context of spatial shifts of attention.

11:00
Can substitution explain crowding? A study of error distribution in letter crowding.
SPEAKER: unknown

ABSTRACT. Introduction. Visual crowding refers to the impaired recognition of a flanked target and has been explained as flankers substituting for the target. According to this substitution hypothesis, crowding should alter the error distribution. Here we measured letter confusion matrices (LCMs) and compared the dispersion of errors in uncrowded and crowded letter recognition. Methods. Thirty-three (Experiment 1) and 28 (Experiment 2) observers performed a 10-AFC letter identification task at 8° eccentricity in the right visual field. Stimuli were 10 Sloan letters subtending 5° (Experiment 1) or 1° (Experiment 2). Two horizontal flankers were presented at 1° (Experiment 1) or 1.5° (Experiment 2) center-to-center distance in crowded conditions. Flankers were phase-intact letters in Experiment 1 and phase-perturbed letters in Experiment 2. Accuracy was maintained at ~30% by adjusting target contrast. Results. Error trials were tabulated in LCMs. Joint entropy, a measure of error dispersion, was significantly higher in crowded conditions (Experiment 1 = 6.17±.025 (Mean±SE); Experiment 2 = 6.23±.026) than in uncrowded conditions (Experiment 1 = 5.25±.052; Experiment 2 = 5.06±.059). Conclusion. Substitution cannot explain the higher dispersion of the error distribution in crowding with phase-perturbed flankers. Phase-perturbed flankers may instead have altered the error distribution through compulsory averaging of low-level visual signals.
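The joint entropy measure used above is the standard Shannon entropy of the normalised confusion matrix. A minimal sketch (the toy 3-letter matrices are illustrative, not the study's data):

```python
import math

def joint_entropy(confusion_matrix):
    """Joint entropy H(S, R) in bits of a stimulus-response confusion matrix.

    Cell counts are normalised to a joint probability p(s, r);
    H = -sum p * log2(p). Higher values mean errors are more dispersed
    across the response alternatives.
    """
    total = sum(sum(row) for row in confusion_matrix)
    h = 0.0
    for row in confusion_matrix:
        for count in row:
            if count > 0:
                p = count / total
                h -= p * math.log2(p)
    return h

# Illustrative 3-letter example: errors concentrated on one confusion
# give lower joint entropy than errors spread evenly across responses.
concentrated = [[8, 2, 0], [0, 10, 0], [0, 0, 10]]
dispersed    = [[4, 3, 3], [3, 4, 3], [3, 3, 4]]
```

On this toy example the dispersed matrix yields the higher joint entropy, matching the abstract's interpretation of higher entropy in crowded conditions.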

11:00
The reduced visual orientation discrimination in children with autism spectrum disorders (ASD) is specific to the cardinal axis
SPEAKER: unknown

ABSTRACT. Better angular resolution along cardinal than oblique axes (the ‘oblique effect’) is a well-known phenomenon in visual perception. GABA-ergic inhibitory circuits of visual cortex are of particular importance for orientation discrimination and its modulation by axis position. Individuals with ASD are characterised by deficits in this inhibitory circuitry, which may affect their ability for visual orientation discrimination and the magnitude of the ‘oblique effect’. These perceptual features have not previously been investigated in children with ASD. In the current study we examined the ability for line orientation discrimination in 15 high-functioning boys with ASD (aged 7-15 years) and 21 age- and IQ-matched neurotypical (NT) boys. The orientation discrimination threshold was measured separately for the vertical (90º) and oblique (45º) axes using circular gratings (diameter 7°; spatial frequency 3 cycles/degree; contrast 100%; mean luminance 3.3 lux). We found a reduced oblique effect in boys with ASD, driven by their decreased sensitivity to orientations along the cardinal axis. No group difference in oblique orientation threshold was detected. The oblique and cardinal orientation thresholds were correlated in the ASD but not the NT sample. Our results suggest a specific impairment of the mechanisms determining cortical orientation anisotropy in children with ASD.

11:00
Effect of Luminance Contrast on Perceived Depth from Disparity
SPEAKER: unknown

ABSTRACT. According to the disparity energy model (Fleet et al., 1996, Vision Res.), disparity energy depends on both the binocular disparity and the luminance contrast of the stimulus. To test this prediction, we used rectangular random dot stereograms to investigate the effect of luminance contrast on perceived depth from disparity. The disparity between the left and right patterns was modulated horizontally as a cosine wave (1 or 3 cy/deg) to create the percept of a corrugated surface. The maximum test disparity ranged from 0 to 20 arc min while the luminance contrast ranged from 5% to 80%. The observer adjusted the length of a horizontal line to match the perceived depth difference in the test. At each contrast, perceived depth increased with disparity up to ~10 arc min and then decreased with further increases in disparity. Both the maximum perceived depth and the slope of the perceived-depth change increased with luminance contrast. Our results show that luminance contrast profoundly affects perceived depth from binocular disparity. The data also suggest a soft threshold, such that perceived depth drops to zero below about 10% contrast and is independent of contrast above about 30%. These effects are not compatible with a simple disparity energy concept.

11:00
Saccades towards targets of different somatosensory modalities
SPEAKER: unknown

ABSTRACT. Saccades to somatosensory targets have longer latencies, and are less accurate and precise than saccades to visual targets. But how do different somatosensory target modalities influence the planning and control of saccades? Participants fixated a start location and initiated a saccade as fast as possible in response to a touch of either the index or the middle fingertip of the left hand. In a static block, the hand remained at a target location in space for the entire block and the touch was applied at a fixed time after trial onset. In a moving block, the hand was first actively moved to the same target location and the touch was then applied immediately. Thus, in the moving block additional kinesthetic information about the target location was available. We found shorter latencies and faster saccades in the moving compared to the static block, but no differences in accuracy and precision of saccadic endpoints. The shorter latencies in the moving block were not due to the moment of the touch being predictable as was confirmed in a second experiment where the touch occurred unpredictably after trial onset. These findings suggest that kinesthetic information enhances saccade planning, but not control, towards tactile targets.

11:00
Individual variability in visual acuity improvement due to binocular fusion and accommodation training
SPEAKER: unknown

ABSTRACT. Visual acuity improvement as a result of fusion and accommodation training was analyzed in 118 patients aged 10-28 years. These patients constituted four nosological groups: hypermetropic, myopic, and two strabismic – convergent with hypermetropia and divergent with myopia (groups H, M, CH, DM). Measurements were performed at distances of 0.3, 0.5, 1.0, and 5.0 m before and after 10 sessions of functional treatment. In all groups, the effect of training was significant. A common feature of all the groups was the dependence of the treatment effect on distance, with a peak at 1 m. However, there were also distinctive differences between groups and between patients within each group, evidently determined by the specifics of the anomalies and by their power. Thus, in group DM, improvement was found in all patients and was of similar magnitude at all distances, while in several patients of group M, training had no effect at all or an effect revealed only at 1 m. This difference could be due to significant development of binocular accommodation in group DM during training, whereas in group M this capability was already close to its possible peak before training.

11:00
The Role of Stereoscopic Depth Cues in Place Recognition
SPEAKER: unknown

ABSTRACT. The recognition of places can be based on a variety of cues, such as landmark configuration or raw snapshot information (Gillner, Weiß, Mallot, 2008). Here we address the question of whether place recognition is also possible with pure depth information. To test this, we designed a virtual environment presented as a dynamic pattern of random dots with limited lifetime. A mirror stereoscope was used to ensure that participants could obtain a 3D impression from stereoscopic and motion parallax cues. Presented in this way, the stimulus excluded all cues but depth information. Participants performed a ‘return-to-cued-location’ task in two conditions (rich, textured environment vs depth only, random dot). Results show that place recognition based on depth information is possible, but subjects’ performance improved when more cues were available. A pre-test of participants’ stereo vision (no motion parallax) showed no correlation in performance with the main experiment. In a second study we therefore tested participants in a monocular condition with motion parallax as the only available depth cue. The results indicate that place recognition is still possible, but performance is markedly reduced compared to the stereo condition. Motion parallax seems to play only a minor role.

11:00
Location of a visual object is processed in multiple frames of reference: An ERP study
SPEAKER: unknown

ABSTRACT. The location of an object in space is relative: we can say that a ball is in front of us if we reference its location to ourselves (egocentric), or we can say that the ball is next to the window if we reference it to the room (allocentric). These two distinct frames of reference are thought to work in parallel, but so far studies have not shown how the encoding of object location unfolds over time in different frames of reference. For this purpose we designed an ERP experiment in which 38 participants were placed in an immersive virtual cross maze and event-related brain potentials (ERPs) were measured. They had to collect reward objects by turning left or right from a starting point, which was either the South or the North alley. This way we were able to contrast the egocentric and allocentric coding of reward object location in the ERPs. Coding of object location was observable in the amplitudes of the P1–N1 complex. We found that allocentric coding started slightly earlier (85-110 msec) than egocentric coding (100-140 msec). These results show that the two frames of reference indeed activate at almost the same time, at early stages of object processing.

11:00
The effect of high resolution letters on legibility for persons with low vision
SPEAKER: unknown

ABSTRACT. Certain persons with low vision state that high-resolution displays are easily visible; although most of us can perceive the specifics of these detailed edges, some persons cannot. Are high-resolution displays nonetheless effective in helping persons who cannot perceive details? Ohnishi and Oda (2014) reported that a higher contrast in the fundamental frequency component for recognition (three cycles per letter (cpl); Solomon & Pelli, 1994) improved legibility in a high-resolution letter image. This study aimed to clarify the effect of resolution on legibility for persons with low vision. Gray-scale images of letters sized 1.106 to 2.740 degrees of arc were presented to five participants with low vision at seven smoothness levels (6, 8, 12, 16, 24, 32, and 48 blocks/letter). Contrast thresholds for recognition were determined using the staircase method at each smoothness level. An ANOVA showed a significant main effect of smoothness, with tendencies similar to the results for people with normal vision (Ohnishi & Oda, 2014). Although the participants were unable to resolve the fine edges, their contrast thresholds for smooth letter images were lower than those for grainy images. This indicates that letter images with higher resolution were legible and beneficial for persons with low vision.
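The staircase methods mentioned in this and the following abstract are standard adaptive threshold procedures. As one common variant (not necessarily the exact rule used in either study), a 2-down/1-up update, which converges on ~70.7% correct, can be sketched as:

```python
def two_down_one_up(level, step, correct_streak, correct):
    """One trial's update of a 2-down/1-up adaptive staircase.

    'level' is the stimulus intensity (e.g. contrast), 'step' the change
    applied per reversal rule; names and the fixed step size are
    illustrative. Returns (new_level, new_correct_streak).
    """
    if correct:
        correct_streak += 1
        if correct_streak == 2:          # two consecutive correct -> make harder
            return level - step, 0
        return level, correct_streak
    return level + step, 0               # any error -> make easier
```

Running this rule trial by trial drives the stimulus level towards the intensity at which the observer is correct on about 70.7% of trials; the threshold is usually estimated from the last several reversal points.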

11:00
Orientation discrimination is not altered in children with autism spectrum conditions.
SPEAKER: unknown

ABSTRACT. Atypical sensory perception is common in both adults and children with autism spectrum conditions (ASC; Ben-Sasson et al., 2009). In addition, sensory discrimination thresholds, including somatosensory (Blakemore et al., 2006) and auditory (Bonnel et al., 2003) thresholds, are reduced (enhanced) in individuals with ASC. In vision, adults with ASC have been found to have lower orientation discrimination thresholds (Dickinson et al., in prep). However, to the best of our knowledge, orientation discrimination has not been measured in children with ASC. As sensory symptoms are seen in both children and adults with ASC, we might expect to see a similar alteration in discrimination thresholds. We tested 52 children with ASC (mean=12.54 years, SD=3.02) and 52 control participants (mean=12.62 years, SD=2.87). Participants were matched on age, sex, and non-verbal reasoning ability. Orientation discrimination thresholds were measured using an adaptive staircase procedure. We found no significant difference in orientation discrimination thresholds between children with ASC (M=9.08, SD=4.03) and control participants (M=8.69, SD=3.41; t(102)=.52, p=.6). Therefore, whilst enhanced orientation discrimination may be present in adults (Dickinson et al., in prep), using a very similar task we do not find the same enhanced performance in children with ASC.

11:00
Deficits in visual and auditory Gestalt perception after stroke
SPEAKER: unknown

ABSTRACT. When perceiving the environment, humans group single elements into Gestalts using so-called Gestalt principles, which apply to visual as well as auditory stimuli. In this study we investigated the relationship between visual and auditory Gestalt perception and categorization, additionally exploring the influence of both attention and working memory. Experiments were conducted with patients suffering from unilateral middle cerebral artery (MCA) or temporal posterior cerebral artery (PCA) stroke and with healthy control subjects. They performed the Montreal Battery of Evaluation of Amusia (auditory Gestalt), a Gabor shape comparison (visual Gestalt), four categorization subtests (pictures, sounds, written words, spoken words), the D2 concentration endurance test, and a memory task. The results showed strong correlations between (a) visual and auditory Gestalt perception tests, (b) music- and speech-related material, and (c) Gestalt perception and categorization skills for healthy subjects. For patients we found only minor correlations but significantly worse performance in attention, working memory, and visual Gestalt perception. We propose that a network responsible for building visual and auditory Gestalts is closely connected to higher processing areas and modulated by attention. In the injured brain this network seems to be weakened due to decreased attention and working memory capacities.

11:00
Contour-integration deficits in intact visual field of hemianopia patients
SPEAKER: unknown

ABSTRACT. We investigated Gestalt perception in the intact visual field (VF) of hemianopia patients. Three patients and matched controls performed a yes-no figure detection task. Gabor patches of one orientation, making up the outline of a square, were embedded in randomly oriented “background” Gabor patches. The continuity of the square outline was modified by changing the orientation of 4 to 12 of the 16 Gabor patches. In addition, background density (BD) varied from low to high. Figure detection in the intact visual field was impaired in a patient with a temporal-parietal lesion but not in a patient with an occipital lesion. Both patients produced frequent false positives when only background elements or fragmented squares were presented. ‘Pathological completion’ occurred more frequently (i) at higher BDs, (ii) when the fragment ends faced the blind VF, and (iii) for central compared to peripheral presentation. The patient with an optic tract lesion had almost normal figure detection at low BD but impaired performance at higher BDs; no ‘pathological completion’ was observed. Our findings indicate that crowding engenders contour-integration deficits in the “intact” VF of hemianopia patients. We attribute these deficits to malfunction of the cortical stage of processing in the visual system.

11:00
Comparing the role of concavities and convexities in haptics and in vision
SPEAKER: unknown

ABSTRACT. In vision it is generally easier to detect mirror-reflectional symmetry than translational symmetry. However, by altering figure-ground factors it is possible to make translation easier to detect than reflection. This finding has been interpreted as revealing the role of concavities and convexities in visual shape perception and part decomposition (Baylis & Driver, 1995). However, vision is not the only modality that we use to perceive shape. Our sense of active touch (haptics) also allows us to efficiently extract shape information, identify objects and detect symmetry. Haptics, though, acquires information in a slower and more serial manner than vision. Since vision and touch differ in how information is acquired, even if the same task and stimuli are used the final percept may be different. Here, we investigated how the assignment of contour as a concavity or a convexity influenced the detection of reflectional and translational symmetry across vision and touch. Our results suggest that concavities and convexities play different roles in symmetry perception as a result of differences in how information is extracted across the two modalities.

11:00
ERP evidence of reduced spatial selectivity in those with high levels of self-reported autistic traits
SPEAKER: unknown

ABSTRACT. A number of studies have shown that individual differences in visual cognition correlate with autistic traits in the general population. For example, individuals with high levels of autistic traits are more efficient visual searchers and show a larger amplitude of an ERP component reflecting selective attention (N2pc) than individuals with lower levels of autistic traits. However, it is not clear whether this difference is associated with target detection, distractor suppression, or both. Therefore, we measured N2pc, PD (distractor suppression) and NT (target selection) amplitudes alongside a self-report measure of autistic traits. Forty-five neurotypical students were recruited. Participants had either high (N = 22, AQ ≥28) or low (N = 24, AQ ≤11) levels of autistic traits. We found a significantly larger N2pc in those with high levels of autistic traits. There was no difference in NT amplitude, but PD amplitude was significantly reduced in the participants with high levels of autistic traits. These results suggest that the allocation of spatial attention differs in those with high levels of autistic traits compared to those with fewer autistic traits. Specifically, these data provide further evidence for reduced distractor suppression in those with high levels of autistic traits.

15:00-16:00 Session 26

Individual Growth & Difference (Disorders, Development & Aging) / Vision & Other Senses / Basic Visual Mechanisms (Binocular Vision, Depth Perception & Fovea vs. Periphery)

Location: Mountford Hall
15:00
An fMRI study investigating the contribution of visual features to an intersubject correlation (ISC) of brain activity during observation of a String Quartet

ABSTRACT. This study used fMRI to explore the brain response of 18 novice observers during audiovisual presentation of a string quartet with free viewing of the stimulus. The quartet was presented visually as a group of four stick figures of the upper body, observed from a static viewpoint. The ‘Quartetto di Cremona’ performed a 114-second segment of the allegro of Schubert’s String Quartet No. 14 in D minor. The data were initially analysed using inter-subject correlation (ISC), which revealed ISC across observers in auditory, visual, and visual motion regions (BA37). The ISC in visual motion regions was unexpected given the free viewing, but could be explained if the physical motion of every musician was correlated with brain activity in the visual motion region. To explore this, we regressed the bow velocities of individual musicians, and the average of all four bows, against brain activity; the results confirmed that each musician individually, as well as the average, correlated with the visual motion region. Finally, loudness, a feature known to covary with bow speed on stringed instruments, also correlated with the visual motion region, raising the possibility of multisensory contributions to this visual activity.

15:00
Noise reveals abnormal global integration of motion and form in strabismic amblyopia
SPEAKER: unknown

ABSTRACT. Abnormal motion and form processing along the dorsal or ventral pathway has been reported in amblyopia. In the current study, we characterised visual processing in both pathways concurrently using equivalent stimuli in the presence of noise. Six anisometropes, six strabismics, and 12 visually normal observers monocularly discriminated the global direction of a random dot kinematogram (motion) and the orientation of a Glass pattern (form), where the individual directions or orientations of local elements were drawn from normal distributions with a range of variances that served as noise. The direction/orientation discrimination threshold without noise was measured first, followed by the threshold variance measured at multiples of the direction/orientation threshold. Overall, form thresholds were higher than motion thresholds for all observers regardless of noise level. The thresholds were modelled to separate the effects of local and global processing in the respective pathways. The analyses showed that anisometropic performance for both form and motion was identical to normal (p > .5), whereas strabismic performance for both form and motion was poorer than that of the normal eyes (p < .01). Nested model testing suggested that the poorer performance of the strabismic eyes was due to deficits in global integration, reflected in a lower efficiency parameter.

15:00
James Jurin (1684–1750): A pioneer of crowding research?
SPEAKER: unknown

ABSTRACT. James Jurin wrote an extended essay on distinct and indistinct vision in 1738. In it he distinguished between ‘perfect’, ‘distinct’ and ‘indistinct vision’ as perceptual categories and his meticulous descriptions and analyses of perceptual phenomena contained observations which are akin to crowding. Remaining with the concepts of his day, however, he failed to recognize crowding as separate from spatial resolution. We present quotations from Jurin’s essay and place them in the context of the contemporary concerns with visual resolution and crowding.

15:00
Putting visual reference frames in conflict to study the horizontal effect: a visual anisotropy
SPEAKER: unknown

ABSTRACT. When viewing natural stimuli, people perceive oblique orientations more easily than horizontal orientations, with vertical in between, a phenomenon known as the “horizontal effect”. Here we investigate whether this process changes with head tilt, when the anisotropy of natural scenes changes with respect to retinal coordinates. We used a matched-contrast paradigm in which participants adjusted the scalar of the oriented bandwidth of a test stimulus to match their perception of the bandwidth strength in a reference stimulus. Both test and reference stimuli were noise images constructed by combining random phase spectra with a 0.2-17 cpd amplitude spectrum with slope -1, mirroring that of natural scenes. The 45° orientation bandwidth was centered on 0°, 45°, 90°, or 135° for the test stimulus and on 112.5° for the reference stimulus. Participants were seated upright or with their head tilted 45° to the right. Our results show that the horizontal effect can follow either retinocentric or geocentric coordinates under head tilt, depending on the subject. This suggests that the anisotropy of gain control in the horizontal effect is not ‘hard-wired’ in early visual orientation encoding, but rather is a plastic mechanism influenced by subjects’ perceptual response to head tilt.

15:00
A computational model for stereopsis and its relevance to binocular rivalry
SPEAKER: unknown

ABSTRACT. We present a global computational model for stereopsis, which derives from our earlier idea (Geier, 1998). One fundamental constraint is introduced to substitute for the numerous constraints employed by conventional approaches (ordering, smoothness, etc.). The model searches directly for a 3D object that corresponds best to the texture of both the left and right images. This is similar to Gregory’s (1970) object hypothesis: the visual system hypothesises possible 3D objects and selects the best-fitting one. Correspondence is measured by the surface constraint: if the two retinal images of a given stereo pair are projected onto the surface of the original 3D object, the projected images will perfectly overlap on the regions of the object’s surface. The computational goal is to find the 3D surface that best satisfies the surface constraint. The surface constraint provides an exact criterion for testing the solution of stereopsis: if and only if the true 3D surface is found, the correlation between the two images projected onto the surface is 100%. Therefore, no other constraints are necessary. If no corresponding 3D surface is found, binocular rivalry occurs, which, in the present theoretical framework, is merely a side effect of not finding the correct object hypothesis.

15:00
Perception of Affect from Audiovisual Stimuli in Individuals with Subclinical Autistic-like Traits and Anxiety
SPEAKER: unknown

ABSTRACT. Despite growing evidence of the presence of anxiety in ASD, it remains unclear how this comorbidity modulates the perception of affect. To address this, we studied the recognition of affect from face-voice stimuli in individuals with subclinical autistic traits and comorbid anxiety. Auditory (A) stimuli were selected from the Montreal Affective Voices. Visual (V) stimuli of around 3 s duration were extracted from the BP4D Spontaneous Facial Expression database. Audiovisual (AV) stimuli were constructed by combining the auditory and visual information. The BP4D face database is new and has not previously been paired with the Montreal Affective Voices. Participants were entered into the study based on their scores on the AQ for autistic traits and the STAI for anxiety. From this, a 2-by-2 grouping of high and low autistic traits and high and low anxiety was formed, with low anxiety and low autistic traits as the baseline. We examined the ability of participants to identify seven different affects (neutral, happiness, sadness, fear, anger, surprise, and disgust) from A, V, and AV stimuli. Data collection is ongoing, and preliminary results for the baseline group show recognition rates of 87%, 57%, and 84% for the A, V, and AV stimuli respectively.

15:00
Saccadic adaptation in aging
SPEAKER: unknown

ABSTRACT. Saccades – rapid eye movements that bring objects of interest onto the fovea – are recalibrated such that if the target is shifted mid-flight, saccade amplitude adapts to compensate for the shift. This phenomenon is known as saccadic adaptation. In the present study we investigated the characteristics of saccadic adaptation in elderly people. Twenty-four healthy elderly subjects (aged 60-75 years) and twenty young controls (aged 18-32 years) participated in our experiments. We measured saccadic adaptation and saccadic suppression of displacement concurrently. Results showed longer latency and lower accuracy in the elderly, but no difference in trial-by-trial adaptation or perceptual performance between the two groups. This suggests that plasticity mechanisms survive despite general saccadic modifications with age.

15:00
Perceptual cancellation of stimulus saliency under dichoptic viewing conditions
SPEAKER: unknown

ABSTRACT. When different stimuli are presented to each eye, the resulting percept is typically an unstable mix of parts from each stimulus. Normally, the composition of the percept is mutually exclusive between stimuli. One notable exception to this rule of mutual exclusivity is abnormal fusion (Wolfe, 1983), as observed with briefly presented stimuli. We used such abnormal fusion to hide target shapes in plain sight. Stimuli were composed of Gabor micropatterns, geometric singletons (square/circle), or coloured discs. Target stimuli contained a highly salient object shape, defined by feature contrast against a homogeneous background. Feature contrast was, however, reversed between the eyes. When presented dichoptically, perceptual fusion of both stimuli would therefore attenuate target saliency up to the point of complete invisibility, whereas normal rivalry would leave the target shape visible. We find marked anisotropies in target saliency for different between-eyes configurations of target definition. The anisotropies did not depend on presentation time (150 ms/850 ms). Target detection performance dropped to chance in many conditions, even when each eye was presented with a perfectly visible target object. Our results can be explained neither by abnormal fusion nor by normal rivalry alone, and point to higher-level influences on stimulus saliency under dichoptic viewing conditions.

15:00
The relationship between orienting and zooming of spatial attention in autism spectrum disorder
SPEAKER: unknown

ABSTRACT. Autism spectrum disorder (ASD) has been consistently associated with different types of dysfunction in spatial attention. Previous studies have independently demonstrated impairments in rapid orienting, in disengaging, and in zooming-out of the attentional focus. However, a more ecological examination of the deployment of visual attention should involve both orienting and zooming mechanisms. In the present study, we examined the relationship between orienting and zooming in children affected by ASD (n=22) and typically developing (TD) peers (n=22). To this aim, we modified a classical spatial cuing paradigm, presenting two small or large cues at opposite sides of the visual hemifield. Subsequently, one of these cues was briefly flashed, with the cue-target delay varied in order to measure the time course of attentional orienting. Results demonstrate that only in trials in which attention initially had to be zoomed out did the ASD group manifest sluggish attentional orienting toward the cued location. This evidence was also supported by a correlation between individual rapid-orienting ability in the large-cue condition and clinical scores of autistic symptomatology. In conclusion, the impairment in rapid orienting that affects individuals with ASD may reflect a primary difficulty in zooming-out of the attentional focus.

15:00
Contour Interpolation: Normal development and the effect of early visual deprivation
SPEAKER: unknown

ABSTRACT. We studied the development of contour interpolation by testing 6-year-olds, 9-year-olds, and adults on the interpolation of subjective and occluded contours across variations in size and support ratio (the ratio of physically present to interpolated contour length). Interpolation improved significantly with age, and both types of contour were affected equally by spatial constraints during early childhood. However, while interpolation of occluded contours became more precise and less dependent on support ratio by adulthood, interpolation of subjective contours improved less and became even more tied to support ratio. To examine the role of early visual experience, we tested adults who had been treated for bilateral congenital cataracts and adults who had suffered the additional disadvantage of uneven competition between the eyes because the cataracts were unilateral. Only the unilaterally deprived patients later showed deficits in contour interpolation. Together, these findings indicate that perceptual interpolation improves significantly with age and that early bilateral deprivation does not prevent the normal construction and/or preservation of the neural architecture underlying interpolation. In contrast, early unilateral deprivation for as little as the first 2–3 months of life is sufficient to compromise the architecture mediating contour interpolation.

15:00
Motivation modulates haptic softness exploration
SPEAKER: unknown

ABSTRACT. Haptic softness perception requires active movement control. Stereotypically, softness is judged by repeatedly indenting an object’s surface. In an exploration without any constraints, participants freely control the number of indentations and the forces exerted. We investigated the influence of motivation and task difficulty on these two parameters. Participants performed a 2AFC discrimination task for stimulus pairs taken from two softness categories and with one of two difficulties. Half of the participants explored all stimulus pairs from one softness category in one block, allowing expectations about the softness category, while the other half explored all stimulus pairs in random order. We manipulated the participants’ motivation by associating a monetary value with each correct response for half of the experiment; in the other half, performance was unrelated to payment. We found higher exploratory forces in the high-motivation condition. The number of indentations was influenced by task difficulty but not by motivation. Furthermore, motivational effects were modulated by the existence of softness expectations. Taken together, these results indicate the existence of motivational effects on movement control in haptic softness exploration. Consequently, since the executed movements influence sensory intake, top-down signals affect how we gather bottom-up sensory information.

15:00
The Dimpled Horopter Explained by the Strategy of Binocular Fixation
SPEAKER: unknown

ABSTRACT. The deviation of the empirical Retinal Corresponding Points (RCP) from the geometrical ones has the perceptual role of easing the solution of the stereoscopic matching problem, improving binocular vision (Howard & Rogers, 2002; Schreiber et al., 2008). From this perspective, the actually perceived disparities are likely to play an important role in the development of these correspondences. Exploiting a purposely designed database of binocular fixations (disparity patterns and stereo-images), we obtained the Mean Disparity Pattern (MDP) experienced by a fixating observer in near space. The MDP was used as a plausible RCP pattern to derive the 3D Empirical Horopter (EH). Uniformly distributed fixations result in a flat, top-away tilted EH, in accordance with experimental deviations. To analyze more natural fixation distributions, we obtained binocular saliency maps from eye-movement recordings of three subjects during free exploration of the stereo-images. The EH derived from a “saliency-weighted” MDP preserves similar characteristics in the peripheral field of view, but exhibits a dimpling in its central part, in agreement with experimental observations (Fogt & Jones, 1998). This result points to a possible influence of the fixation strategy on RCP development, strengthening the local-shift over the flattening hypothesis (Hillis & Banks, 2001).
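A saliency-weighted MDP is, in essence, a weighted average of the per-fixation disparity patterns. A minimal numpy sketch, with assumed array shapes and illustrative names (not the authors’ code):

```python
import numpy as np

def mean_disparity_pattern(disparity_maps, saliency=None):
    """Mean disparity pattern over fixations: a plain average for uniformly
    distributed fixations, or a weighted average when each fixation carries
    a saliency weight."""
    d = np.asarray(disparity_maps, dtype=float)   # (n_fixations, H, W)
    if saliency is None:
        return d.mean(axis=0)                     # uniform fixations
    w = np.asarray(saliency, dtype=float)         # (n_fixations,)
    return np.tensordot(w, d, axes=1) / w.sum()   # saliency-weighted

# Equal weights must reproduce the uniform-fixation average.
maps = np.arange(24, dtype=float).reshape(3, 2, 4)
assert np.allclose(mean_disparity_pattern(maps),
                   mean_disparity_pattern(maps, [1.0, 1.0, 1.0]))
```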

15:00
Spatiotemporal Visual Processing in school-age children and adults
SPEAKER: unknown

ABSTRACT. We compared the performance of children and adults in detecting low-contrast stimuli with spatiotemporal properties suited to activating the dorsal stream at an early level, in order to assess the developmental timing of the magnocellular relative to the parvocellular component of the visual system. The spatial frequency of the Gabor patches used as stimuli was either 0.5 or 4 cycles/deg. Gabors could be presented static, flickering (10 Hz, 20 Hz, 30 Hz) or drifting (5-6 deg/sec). Contrast thresholds were estimated using a two-interval forced-choice task in which an adaptive psychophysical procedure tracked the 79%-correct point of the participant’s psychometric function. Results show reduced sensitivity in children, relative to adults, for low-spatial-frequency static Gabors. In children, sensitivity to flickering Gabors is also reduced, selectively at the lowest spatial frequency, and decreases as the temporal frequency is increased from 10 to 20 Hz. Finally, with drifting Gabors we find, only in children, higher contrast thresholds at short than at long durations. Altogether, these results support the suggestion of protracted development of visual processes dependent on the early dorsal stream, which is sensitive to very low spatial frequencies and medium-high drift speeds and is characterized by a high temporal-modulation cut-off.
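A 79% tracking level is the convergence point of a three-down one-up staircase (79.4% correct; Levitt, 1971). The sketch below is a generic implementation of that rule, not the authors’ exact procedure; the step size and starting contrast are illustrative:

```python
def staircase_step(contrast, correct_run, was_correct, step=0.05):
    """Three-down one-up rule: decrease contrast after three consecutive
    correct responses, increase it after any error.  The rule converges
    near the 79.4%-correct point of the psychometric function."""
    if was_correct:
        correct_run += 1
        if correct_run == 3:
            return max(contrast - step, step), 0   # make the task harder
        return contrast, correct_run
    return contrast + step, 0                      # make the task easier

# Illustrative run: three hits lower the contrast, a single miss raises it.
c, run = 0.5, 0
for response in (True, True, True):
    c, run = staircase_step(c, run, response)
assert abs(c - 0.45) < 1e-9
c, run = staircase_step(c, run, False)
assert abs(c - 0.5) < 1e-9
```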

15:00
Is adding a new class of cones to the retina sufficient to cure colour blindness?
SPEAKER: unknown

ABSTRACT. New genetic methods have made it possible to substitute cone pigments in the retinas of adult non-human primates. Doing so influences the animals’ visual abilities, but so far no studies have unambiguously demonstrated that experimental animals can make new, higher-dimensional colour distinctions. Simply put, it has been shown that animals that underwent gene treatment can now – in addition to finding a red ball – also find a green ball (on a greyish background). However, it has not been shown that the animals can distinguish a red ball from a green one. For most people, that ability would be the primary reason for wanting to undergo a treatment for colour blindness in the first place, for instance because their colour blindness prevents them from pursuing a specific career. It is important to point out such possible limitations, to avoid unwarranted expectations in both clinicians and patients. To explain the origin of our concerns, we simulate how replacing the pigment of some cones is expected to influence the behavioural tests used so far. The simulations show that these tests do not provide conclusive evidence that the animals acquired the ability to make new, higher-dimensional chromatic distinctions.

15:00
Visual performance linked to cortical magnification differences in deaf and hearing adults
SPEAKER: unknown

ABSTRACT. Loss of one sensory modality can result in enhancement of the remaining senses. For example, congenitally deaf adults perform better than hearing adults in many peripheral visual tasks (Pavani & Bottari, 2012). Previous studies have linked visual performance with cortical magnification differences within primary visual cortex (V1) (Duncan & Boynton, 2003; Schwarzkopf & Rees, 2013). We hypothesised that eccentricity-dependent neural differences might also be evident in deaf adults. Participants included fifteen congenitally, profoundly deaf adults and fifteen age-matched hearing controls, all without visual deficits. Cortical magnification functions in V1 were measured in each individual using wide-field fMRI retinotopic mapping (±72°) to capture the far periphery. Visual performance was compared in the same participants using a 2AFC random dot motion detection task at varying eccentricities. Cortical magnification functions in V1 were shallower in deaf participants, representing relatively greater neural territory devoted to peripheral processing than in hearing participants. Deaf individuals were also more sensitive to motion in the periphery than hearing participants. Differences in the distribution of neural processing across the visual field in V1 may underpin behavioural differences in deaf and hearing adults. These findings provide evidence of plasticity in visual cortex following early profound hearing loss.

15:00
Neurodegeneration of the optic radiations and corpus callosum in Normal-Tension Glaucoma
SPEAKER: unknown

ABSTRACT. Purpose: There is debate about whether glaucoma is exclusively an eye disease or also a brain disease. To further examine this issue, we used Diffusion Tensor Imaging (DTI) to study white matter integrity in a Japanese glaucoma population. This population has a significantly higher incidence of normal-tension glaucoma, in which optic nerve damage occurs in the absence of the elevated eye pressure that characterizes the most common form of glaucoma. Methods: We performed DTI in 23 participants with normal-tension glaucoma and 31 age-matched healthy controls. We used voxel-wise tract-based spatial statistics (TBSS) to compare fractional anisotropy (FA) of the brain’s white matter between the patient and control groups. Results: FA was significantly lower in glaucoma patients in a cluster in the right occipital lobe (p<0.05; FWE-corrected). This cluster may comprise fibers of both the optic radiation (OR) and the forceps major. An additional exploratory analysis suggested involvement of the forceps major, the left and right ORs and the corpus callosum (p<0.09; FWE-corrected). Conclusions: Glaucoma in this specific population is associated with white matter abnormalities in the OR as well as the forceps major. In particular, the latter finding suggests cortical neurodegeneration in glaucoma that is hard to reconcile with purely retinal neurodegeneration.

15:00
The contributions of various aspects of motion parallax when judging distances while standing still
SPEAKER: unknown

ABSTRACT. Even when standing still, moving one’s arm to indicate the position of an object causes head movements that provide cues about the distance to the object. Three cues can be identified: the eye movements needed to keep fixating the object, the orientation of the object with respect to the line of sight, and the object’s position in the retinal image with respect to the background. These cues might be particularly useful in the absence of binocular vision. Is this so for any of them? Participants indicated the position of a virtual target that was presented either binocularly or monocularly. The position and size of the target changed across trials, with pairs of trials in which the same target was presented at the same location, except that one or more of the three above-mentioned cues indicated that the target was either 10 cm nearer or further away. Results indicate that the three cues presented together explained about 11% of the judged distance under monocular conditions but only 2% under binocular conditions, although participants did not move their heads more in the monocular conditions. For monocular targets, when the three cues were presented in isolation, their combined effects were much smaller than when they were presented together.

15:00
Sensory limitation on reading speed: individual differences and experience-dependent changes
SPEAKER: Deyue Yu

ABSTRACT. Visual-span size imposes a bottom-up sensory limitation on reading speed (Legge, 2007). The supporting evidence was mainly obtained by varying stimulus properties and making within-subject comparisons. Here we investigate whether individual differences in reading speed for text with fixed stimulus properties can be attributed to sensory differences, and whether experience-dependent changes in visual-span size can account for corresponding changes in reading speed. Six groups of subjects participated (a no-training group and five groups trained on letter recognition with different procedures at 10° in the lower field). Reading speed and visual-span size were measured in a pre- and a post-test at 10° in both lower and upper fields. We found a significant correlation between pre-test visual-span size and reading speed for both locations (r ≥ 0.45, p ≤ 0.003). Recognizing one extra letter per fixation is associated with a 29% faster reading speed. There is also a significant relationship between post-pre changes in visual-span size and changes in reading speed (r ≥ 0.37, p ≤ 0.02). Reading speed increases by an extra 25% for each additional letter improvement in visual-span size. These results indicate that both individual differences and experience-dependent changes in reading speed can be partially accounted for by differences or changes in sensory limitations.
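The per-letter effect sizes can be turned into a back-of-the-envelope predictor. The sketch below assumes the percentages compound multiplicatively, which is our illustrative assumption rather than a claim from the study:

```python
def predicted_speed(base_wpm, extra_letters, gain_per_letter=0.29):
    """Scale a baseline reading speed by ~29% for each additional letter
    recognized per fixation (multiplicative compounding assumed here
    purely for illustration)."""
    return base_wpm * (1.0 + gain_per_letter) ** extra_letters

# Hypothetical 100-wpm reader gaining one or two letters per fixation:
assert abs(predicted_speed(100, 1) - 129.0) < 1e-6
assert abs(predicted_speed(100, 2) - 166.41) < 1e-6
```

The same form with `gain_per_letter=0.25` would describe the training-related 25%-per-letter relationship.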

15:00
The effect of viewing angle on visual crowding
SPEAKER: unknown

ABSTRACT. The ability to recognise objects is strongly influenced by the presence of neighbouring objects (flankers), a phenomenon known as visual crowding. Changes to the viewing angle of a crowded configuration can alter the relationship between the target and flankers in several ways: the relative disparity, perceived size and spatial separation between the target and flanker stimuli are all modified. Individually, these changes have previously been shown to differentially influence the degree of visual crowding. Here we investigate their combined effect by changing the orientation of the object surface. Visual acuity was measured using Tumbling E optotypes presented in isolation (isolated acuity, IA) or in a row of five (crowded acuity, CA). Isolated and crowded acuity were measured over a range of viewing angles (-60 to 60 deg) at fixation and 3 deg below fixation. Our results show that while isolated acuity, crowded acuity and the spatial extent of crowding all increased with viewing angle, crowding ratios (CA/IA) were relatively invariant to the orientation of the object surface. The retinal image transformations that accompany changes in the orientation of the object surface provide a good account of our results.

15:00
Hearing through your eyes: modulation of the visually-evoked auditory response by transcranial electrical stimulation
SPEAKER: unknown

ABSTRACT. Evidence is emerging that some people can 'hear' visual events as sounds (Saenz & Koch, 2006). This ability may benefit performance of tasks such as judging whether pairs of 'Morse-code' sequences of flashes match or differ. Here we investigated the neurophysiological basis of this auditory-recoding ability.

Twenty participants, including musicians and synaesthetes, received 10Hz Transcranial Alternating Current stimulation (or sham stimulation) over Occipital versus Temporal scalp sites, while performing a sequence matching task for Visual versus Auditory pairs.

On average, occipital stimulation impaired Visual but benefited Auditory performance, relative to sham. Temporal stimulation had the opposite effect, albeit weaker. The above effect of Occipital stimulation was largest in individuals whose performance was most consistent with auditory recoding of flashes (i.e. better Visual relative to Auditory sequencing).

Our results suggest, counterintuitively, that the ability to recode flashes as sounds may depend more on occipital than on temporal cortex. This is consistent with evidence that occipital stimulation may evoke non-visual sensations, such as tongue sensations (Kupers, 2006). This framework tentatively explains why, in our study, occipital stimulation removed the benefit for encoding visual sequences yet aided unimodal auditory sequencing, by disrupting competing inputs from extra-auditory areas.

15:00
Blur adaptation, blur sensitivity and visual load
SPEAKER: unknown

ABSTRACT. Blur is an important dimension of image quality. Increasing our knowledge of blur perception is important because of its relevance to visual acuity, the control of accommodation and other visual functions. The aim of our study was to find out how blur adaptation influences blur sensitivity, and to evaluate the effect of visual load on blur sensitivity because of its connection to accommodative functions. We evaluated different blur-perception thresholds (just noticeable blur, target recognition, clear image perception) as the blur level was gradually changed, before and after additional adaptation to optical defocus. A Gaussian blur filter was used to simulate different blur levels. To evaluate the effect of visual load, we compared blur sensitivity before and after at least five hours of visual load at near distance (reading). The results showed that adaptation to optical blur (1.0 D simulated myopia) increased blur sensitivity: it decreased blur thresholds by 10-48%, depending on the specific threshold and refractive group. We did not observe a statistically significant change in blur sensitivity after near-distance visual load.

15:00
Difference of impressions on kimono fabrics by LCD view, actual view, and tactile feels
SPEAKER: unknown

ABSTRACT. Electronic commerce using web services has expanded in recent years, including sales in the textile industry. Since the commercial value of a textile depends highly on its tactile feel, expressing tactile feel on web sites is an important problem for this industry. This research investigates differences in human impressions of kimono fabrics when viewing digital images of the fabrics on a liquid crystal display, viewing the actual fabrics on a table, and touching the fabrics with the hands. Sixteen Japanese respondents, seven men and nine women, carried out an experiment with six different kimono fabrics. The respondents were asked to rate their impressions of heaviness, thickness, gorgeousness and fineness for the three viewing or touching cases. The results show a tendency for viewing some fabrics on an LCD to yield significantly heavier and thicker impressions than the other two cases of viewing or touching the same fabrics. This suggests that the material feel of textiles is not sufficiently transmitted via web services and should be compensated for by some method in the electronic commerce of textiles.

15:00
The Iterative Amsler Grid (IAG): A procedure to measure image distortions in Age-Related Macular Degeneration (AMD)
SPEAKER: unknown

ABSTRACT. Age-related macular degeneration is a major cause of severe visual dysfunction in the elderly. Early detection of the symptoms can be crucial for intervention. One manifestation of AMD is metamorphopsia, a condition in which straight lines are perceived as curved and wavy. Assessment is usually made with a printed Amsler grid of straight horizontal and vertical lines, where perceptual irregularities indicate a macular problem. In order to quantify the location and severity of distortions, we recently developed a computer-based iterative procedure in which line segments at different grid locations are presented in isolation and interactively adjusted until they are perceived as straight. The feasibility of the procedure was tested on control participants, who could reliably correct deformations simulating metamorphopsia while maintaining central fixation. Fixation stability was a challenge, however, for AMD patients with foveal damage, leading to difficulties in completing the task with isolated test fields. In order to facilitate fixation, we developed a new variant of the IAG procedure in which a low-contrast grid, continuously updated to reflect the subjective corrections at each iteration step, is displayed behind the test field to serve as a reference. We present some initial data collected with this revised method.

15:00
Characterization of an adaptive optics SLO based retinal display for cellular level visual psychophysics
SPEAKER: unknown

ABSTRACT. Adaptive optics scanning laser ophthalmoscopy (AOSLO) can image single photoreceptors in vivo. Owing to its scanning nature, visual stimuli can be encoded into the imaging beam with high-speed acousto-optic modulation, thereby creating an acutely focused visual display directly on the retina. Modulation accuracy for benchmark stimuli in a multi-wavelength AOSLO (imaging: 840 nm; stimulation: 543 nm light) was measured using a high-speed Si analog photodetector sampled at 1.25 to 5 GHz. The smallest full-contrast stimuli presentable were on the order of 3 pixels across in raster-scanning coordinates. Optical modelling confirms that this size would place almost all light within the dimensions of a single cone inner-segment diameter. The maximum light-intensity contrast for extended stimuli achieved in our setup was ~0.99 (Michelson, or about 355:1), a level limited by the extinction ratio of the acousto-optic device used for optical switching. Residual light leak (~4.3 cd/m2 at 543 nm, ~4100 rhodopsin isomerizations per second) through these switches likely saturates any rod photoreceptor contribution. Therefore, AOSLO-based microstimulation has enough spatial resolution to drive individual cone photoreceptors in the living eye, allowing investigators to probe the relationship between retinal structure and visual function at the single-cell level.
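The two contrast figures quoted relate through the standard conversion between Michelson contrast and a max:min intensity ratio, C = (r - 1)/(r + 1). A quick illustrative check:

```python
def michelson_from_ratio(r):
    """Michelson contrast (Imax - Imin) / (Imax + Imin) for a max:min
    luminance ratio of r:1."""
    return (r - 1.0) / (r + 1.0)

def ratio_from_michelson(c):
    """Max:min luminance ratio implied by a Michelson contrast c."""
    return (1.0 + c) / (1.0 - c)

# A 355:1 ratio corresponds to a Michelson contrast of ~0.994, i.e. ~0.99.
assert round(michelson_from_ratio(355), 2) == 0.99
assert abs(ratio_from_michelson(michelson_from_ratio(355)) - 355.0) < 1e-9
```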

15:00
Auditory Working Memory Modulates Unconscious Visual Processing
SPEAKER: unknown

ABSTRACT. The modulatory effects of working memory (WM) on attention and selection have mainly been investigated within the visual modality. It remains unknown whether similar WM effects can occur across different sensory modalities. Here we probed this issue by introducing auditory stimuli into a modified delayed match-to-sample paradigm. Participants first held the sound of an animal (dog or bird) in WM and then detected an image of an animal that was rendered temporarily invisible by means of continuous flash suppression. We found that animal images that matched the sound held in WM emerged from suppression into awareness significantly faster than those that did not. This effect could not be explained by cross-modal priming, as passively listening to these sounds failed to affect unconscious visual processing. Moreover, the modulation effect was not found when visual words (“dog” or “bird”) rather than sounds were held in WM, indicating that the observed cross-modal effect might not be mediated by the semantic associations between the auditory and visual stimuli. Our findings together suggest that the top-down modulation of WM on information processing can operate across different modalities (i.e., vision and audition), independent of semantic association and conscious awareness.

15:00
The Oblique Effect is Not Altered By Aging
SPEAKER: unknown

ABSTRACT. Neurophysiological studies in monkeys have found that aging significantly degrades the orientation selectivity of visual cortical neurons (Schmolesky et al., 2000; Yu et al., 2006). However, psychophysical studies of human observers have found that aging has very small effects on orientation perception. Specifically, orientation discrimination thresholds and the orientation selectivity of pattern masking are very similar in older and younger adults (Betts et al., 2007; Delahunt et al., 2008; Govenlock et al., 2009). However, those psychophysical studies used horizontal and vertical gratings, not oblique gratings. Our perception of oblique contours differs in many respects from our perception of horizontal and vertical contours (Appelle, 1972), and the cortical mechanisms that encode oblique and cardinal orientations appear to differ (e.g., Edden et al., 2009). Hence, it is possible that age-related changes in orientation coding may be greater for oblique contours. To test this hypothesis, we measured orientation discrimination thresholds in 20 younger (20-28 years) and 20 older (70-79 years) adults with high-contrast (80%) vertical and oblique 3 cy/deg sine-wave gratings. Mean discrimination thresholds for both vertical and oblique gratings, and the magnitude of the oblique effect, did not differ between age groups. Thus, aging appears not to alter the oblique effect.

15:00
Investigating scene feedback to foveal and peripheral V1 using fMRI
SPEAKER: unknown

ABSTRACT. Object and scene information can be recovered from non-stimulated foveal and peripheral V1 cortex, respectively (Williams et al., 2008; Smith & Muckli, 2010). These findings demonstrate feedback to V1. Scene feedback to foveal rather than peripheral cortex remains unexplored. In an fMRI experiment, we presented three scenes, with either the central or lower-right portion occluded by a mask, or both. This prevented scene-specific feed-forward stimulation of foveal, peripheral, or both cortical regions respectively. Therefore, we could examine feedback to these regions. Using SVM classifiers, we decoded scene information in non-stimulated foveal and peripheral regions, either with one or both portions occluded. We replicated scene decoding in non-stimulated peripheral V1. We could also decode scenes in non-stimulated foveal cortex. In both regions, decoding was possible with the double occluder. Further, the presence of the foveal occluder could be classified in occluded peripheral V1. Nonetheless, double and single occluder patterns remained generalizable, as training the classifier in peripheral V1 with a single occluder enabled successful scene decoding with a double occluder. We suggest that both foveal cortex and peripheral V1 receive scene feedback, and that feedback to peripheral V1 is influenced by, but is not critically dependent upon, processing in foveal cortex.

15:00
A role of cutaneous inputs in self-motion perception (1) : Is perceived self-motion equal to an actual motion ?
SPEAKER: unknown

ABSTRACT. We recently confirmed that self-motion perception can be elicited by cutaneous stimulation accompanied by vestibular inputs (Murata et al., 2014). The present study further investigated the functional characteristics of such cutaneously induced self-motion, with a particular focus on the influence of actual transfer movement that stimulates the vestibular system. The experimental conditions were: two wind conditions [with or without], serving as cutaneous stimulation applied to participants’ faces, and two transfer conditions [with (i.e., the floor was moved forward) or without (i.e., vibration alone)], serving as vestibular stimulation applied to participants’ bodies. A pedestal fan and a DC motor were used to stimulate each system. In experiment 1, participants were asked to press a response button when they perceived self-motion. Onset latency and the accumulated duration of self-motion were measured for each trial. Self-motion was observed in all conditions, with a stronger effect in the “with” conditions than in the “without” conditions. In experiment 2, participants were asked to point in the direction in which they perceived they had been transferred. The perceived direction was consistent with the actual body transfer, coupled with the wind, while it was variable in the “without” transfer condition. An important role of cutaneous inputs in eliciting forward self-motion was highlighted.

15:00
Distinct mechanisms of audio-visual asynchrony adaptation operate over different timescales
SPEAKER: unknown

ABSTRACT. To maintain a coherent percept of the external environment, the brain combines information encoded by different sensory systems. This process, however, is complicated by temporal uncertainties due to different delays in the sensory pathways. One way in which the brain appears to compensate for these uncertainties is through temporal recalibration: both prolonged and rapid adaptation to a regular inter-modal delay alter the point at which subsequently presented stimuli are perceived to be simultaneous. However, it remains unknown how the magnitude and duration of this aftereffect change with adapting duration. Here, we adapted subjects to a fixed temporal delay (150 ms) between pairs of brief co-localised auditory and visual stimuli (1-512 pairs). Subjects’ estimates of perceived asynchrony on subsequent test trials revealed that both the magnitude and duration of the induced biases increased with adapting duration. Using an ‘adapt-deadapt’ procedure, we also investigated whether asynchrony adaptation reflects a single mechanism or multiple mechanisms operating over different timescales. The perceptual aftereffects of asynchrony adaptation to 512 stimulus pairs were initially cancelled by 32 pairs of deadaptation to an equal but opposite asynchrony, but subsequently reappeared with further testing. Our results suggest that adaptation to asynchronous auditory-visual stimuli involves multiple, distinct mechanisms operating over different timescales.
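The adapt-deadapt pattern (cancellation followed by re-emergence) is the signature of a two-state system in which a fast process learns and forgets quickly while a slow process does both slowly (cf. Smith et al., 2006, in the motor domain). A minimal simulation under assumed learning and retention rates; all parameter values are illustrative, not fitted to these data:

```python
def two_state_adaptation(schedule, a_f=0.8, b_f=0.3, a_s=0.99, b_s=0.02):
    """Two adaptive states driven by the error between the imposed
    asynchrony and the current total recalibration: a fast state (f)
    that learns and decays quickly, and a slow state (s) that does
    both slowly.  Returns the total recalibration after each trial."""
    f = s = 0.0
    trace = []
    for asynchrony in schedule:
        error = asynchrony - (f + s)
        f = a_f * f + b_f * error   # fast process
        s = a_s * s + b_s * error   # slow process
        trace.append(f + s)
    return trace

# 512 adapt trials (+150 ms), 32 deadapt trials (-150 ms), then neutral tests.
trace = two_state_adaptation([150.0] * 512 + [-150.0] * 32 + [0.0] * 64)
assert trace[511] > 100            # large aftereffect after adaptation
assert trace[543] < trace[511]     # brief deadaptation cancels it...
assert trace[560] > trace[543]     # ...but the aftereffect then rebounds
```

The rebound arises because deadaptation mostly drives the fast state; once testing resumes, the fast state decays and unmasks the residual slow state.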

15:00
A Mixed-Methods Analysis Assessing the Effects of Wearing an Organic Light Emitting Diode (OLED) Sleep Mask on Sleep and Psychological Wellbeing
SPEAKER: unknown

ABSTRACT. Organic light emitting diode (OLED) sleep masks are of potential use for slowing the progress of diabetic retinopathy (Czanner et al., 2015), but little is known about their effect on sleep quality and psychological wellbeing. The purpose of this study was to use mixed methods, combining sleep diaries with standardised questionnaires (CESD, GHQ, PSQI), to examine these effects. We ask two questions. 1. What are the similarities and differences between diary data on sleep quality and quantity, questionnaire data on sleep, and automatically recorded mask-wearing hours? 2. Does mask wear influence psychological wellbeing and sleep quality as recorded in the diaries? We find broad similarities between diary, self-report and mask data. Our preliminary analysis shows: 1. Mask use was not associated with changes in wellbeing as assessed with the questionnaires. 2. The sleep diaries showed that mask comfort, rather than light, was the most important factor in sleep quality. 3. Sleep quality, as recorded in the diaries, was affected by mask use in a small number of participants. Our study demonstrates the value of combining sleep diaries with objective and self-report measures of wellbeing, sleep quality and mask usage when evaluating sleep mask acceptability, as these complement sleep questionnaires.

15:00
Changes in amplitude of accommodation for school-age children during the day
SPEAKER: unknown

ABSTRACT. School-aged children spend a lot of time doing near work. Long hours of reading can affect some visual functions, for example the accommodation system, and can lead to asthenopia and fatigue. Previous studies have shown that reading distance is shorter for younger than for older children (Wang, Bao et al., 2013), meaning that younger children accommodate more than older children when reading. Taking these data into account, we might expect the amplitude of accommodation to be reduced more over the day for younger than for older children. We wanted to test this hypothesis. Amplitude of accommodation was measured in children aged 7-15 years using the subjective push-up technique, with the RAF Near Point Rule (a rod with a movable target). Measurements were made before and after lessons, and distance visual acuity was also measured in both sessions. Results showed that visual acuity does not change significantly during the day. However, the amplitude of accommodation decreased over the day, on average by 0.8 D, and the changes were larger for younger than for older children. We conclude that most children experience significant visual fatigue during the day, and it is important to ensure that they take regular visual breaks to rest their eyes.

15:00
The impact of active shutter glasses viewing upon horizontal motor fusion amplitudes
SPEAKER: unknown

ABSTRACT. Much of the literature evaluating the impact of viewing stereoscopic displays upon binocular functions has focused upon passive methods of presentation (Emoto et al., 2004; Fortuin et al., 2011; Wee et al., 2012). The aim of this study was to evaluate any change in fusional amplitudes under dichoptic viewing conditions through active shutter glasses (ASG), compared to free space and to conditions of reduced luminance without active shutter alternation. Fifteen visually normal participants (mean age 22.56 ± 1.16 years) had their horizontal motor fusion break and recovery amplitudes measured at 75 cm while viewing superimposed white letters, an H (right eye) and an E (left eye), in free space, through ASG, and through 5% LTF spectacles.

Base-in (BI) fusional break/recovery amplitudes were reduced when wearing ASG or 5% LTF spectacles (median decrease = 4∆; p = 0.005 for ASG, p = 0.006 for 5% LTF spectacles). Amplitudes did not vary significantly between the ASG and 5% LTF viewing conditions. Increasing esophoria was associated with poorer BI fusional amplitudes (rho = -0.663, p = 0.007). Esophoric individuals may therefore be likely to experience visual discomfort during stereoscopic viewing via this medium.

15:00
Luminance signals interfere with echolocation in sighted people
SPEAKER: unknown

ABSTRACT. Echolocation is the ability to perceive the environment by making sonar emissions and listening to returning echoes. For people, it has been suggested that echolocation may draw not only on auditory processing, but also on recruitment of ‘visual’ cortex (Arnott, Thaler, Milne, Kish, & Goodale, 2013; Thaler, Arnott, & Goodale, 2011). The recruitment of ‘visual’ cortex for echolocation may be driven by neuroplastic changes associated with vision loss and/or by an individual’s ability to recruit ‘visual’ cortex for processing of non-retinal input (Thaler, Wilson, & Gee, 2014). Here we used an interference paradigm to further explore the role of ‘visual’ cortex in echolocation. Specifically, if echolocation relies on co-opting ‘visual’ cortex, we expect retinal visual signals to interfere with echolocation. Twelve sighted echo-naive participants used mouth-click based echolocation to discriminate the sizes of objects. Participants wore black-out goggles fitted with LEDs. The goggles blacked out vision of the environment at all times, but the LEDs, when switched on, provided luminance input. Participants’ echolocation accuracy scores were significantly reduced in conditions where luminance input was provided, compared to conditions where it was not. This result is consistent with the idea that echolocation may draw on recruitment of ‘visual’ cortex.

15:00
Mixed percepts within binocular rivalry for luminance- and contrast-modulated gratings
SPEAKER: unknown

ABSTRACT. During binocular rivalry, contrast-modulated (CM) gratings with correlated binary noise generate mainly mixed rather than exclusive percepts (Skerswetat et al., 2014, VSS poster). Mixed states comprise piecemeal and superimposed percepts, which may reflect different levels of binocular integration. We investigated the composition of the mixed percepts and the effects of different noise correlations on binocular rivalry of luminance-modulated (LM) and CM gratings. Orthogonal LM and CM gratings were presented dichoptically with correlated, uncorrelated, and anti-correlated binary noise. The stimuli were 2 degrees in diameter with a spatial frequency of 2 c/deg. Participants indicated via button presses whether an exclusive, piecemeal, or superimposed percept was seen. Visual exclusivity for LM stimuli was significantly higher [p<0.05] in all noise conditions compared to CM stimuli. Visual exclusivity was also higher with uncorrelated and anti-correlated noise than with correlated noise [p<0.05], especially for CM stimuli. The proportion of piecemeal percepts (rivalry) within the mixed percepts was higher for LM than for CM stimuli. The highest proportion of superimposed percepts, which imply binocular integration rather than rivalry, was found for CM stimuli with correlated noise. The results suggest that CM stimuli might be processed in a visual area that receives more binocular input than the site processing LM stimuli.

15:00
The relationship between visual functions and reading performance in children with reading difficulties

ABSTRACT. Learning to read is a milestone during primary school and the foundation for lifelong learning as well as participation in society (OECD, 2010). From a visual perspective, reading is a very complex task and requires the effortless and rapid processing of fine visual details (Hyvärinen, 2013; Trauzettel-Klosinski, 2002). This study aims to explore how children with reading difficulties (3rd to 5th grade) cope with reading demands, and to assess their visual preconditions for reading. The focus is on examining whether they show specific strategies that relate to their distinct visual conditions and to the properties of the presented reading task. The data collection covers three dimensions, combining quantitative and qualitative measures: vision, reading and strategy use. The analysis focuses on exploring intrapersonal as well as interpersonal differences in reading with regard to the assessed visual functions and the specific reading texts. The collected data show that the visual preconditions of children have to be taken into account when reading difficulties occur. Vision is commonly taken for granted, and the influence of visual functioning on reading is underestimated.

15:00
The contribution of motion to shape-from-specularities
SPEAKER: unknown

ABSTRACT. To infer the 3D shape of an object, the visual system often relies on 2D retinal input. For ideal specular surfaces, the retinal image is a deformation of the surrounding environment. Since many configurations of shape and environment can potentially generate the same retinal image, 3D specular shape estimation is ambiguous. A relative motion between object, observer and environment produces dynamic information on the retina, called specular flow. From a computational point of view, this specular flow may diminish perceptual ambiguities. For this research, two novel, smooth shapes were rendered with two different environment maps (a forest and a city) and under two motion conditions (static and dynamic). In the dynamic condition, the surface and the observer were kept relatively static, but the surrounding environment map was rotated at sinusoidal speed around the vertical axis, generating ‘flowing’ reflections on the surface. Eight observers performed the gauge figure task (attitude probe) with these stimuli. The analysis of variance indicated that both inter-observer correlations and correlations with the 3D input model were higher for the static presentation than for the dynamic one. Results imply that specular flow, despite offering a computational advantage, is not beneficially used by the human visual system.

15:00
What is the underlying nature of the perceptual deficit in adult poor readers?
SPEAKER: unknown

ABSTRACT. It has been suggested that an impairment of the dorsal stream leads to a selective deficit in perceiving global motion, relative to global form, in poor readers (Braddick et al., 2003). However the underlying nature of the perceptual deficit is unclear. It may reflect a difficulty with motion detection, temporal processing, or the integration of local information both across space and over time. To resolve this issue we administered four, diagnostic, motion and form tasks to a large (N=150), undifferentiated sample of adult readers to characterise their perceptual abilities. A composite reading score was used to identify groups of relatively good and poor readers (the upper and lower thirds of our sample, respectively). Results showed that poor readers’ coherence thresholds were significantly higher than those of good readers on a random-dot global motion task, but not a static global form task nor a spatially one-dimensional global motion task. Crucially, poor readers were significantly worse than good readers on a temporally-defined global form task, requiring integration across two spatial dimensions and over time. Thus the perceptual deficit in poor readers may be indicative of a difficulty combining local visual information across multiple dimensions, rather than a motion or temporal processing impairment.

15:00
Visual-spatial-motor integration in a cross-section of primary-aged children: implications for assessing risk of dyslexia
SPEAKER: unknown

ABSTRACT. Dyslexia is a common condition characterized by difficulties with reading and writing despite adequate intelligence, education and motivation. Many individuals with dyslexia also have problems integrating visual information over space and time, and/or with motor control; however, whether sensory and motor deficits underlie phonological difficulties in dyslexia, or merely co-exist with them, remains a topic of debate. We used a novel, computer-based “dot-to-dot” (DtD) task to explore visual-motor integration in 253 children (aged 4-10 yrs, mean = 5.69; 114 female) from three primary schools in Edinburgh, UK, and its relationship with phonological and cognitive skills known to be compromised in dyslexia. We found that: (1) DtD accuracy was significantly correlated with Phonological Awareness, Rapid Automatized Naming (RAN) and Digit Span (arguably the best predictors of dyslexia); (2) DtD accuracy was the single most predictive variable for phonological awareness out of all the measures (accounting for 41% of the variance), more than RAN and Digit Span; (3) children deemed at “high” risk of dyslexia according to existing screening tools (e.g. LUCID-Rapid) performed significantly less accurately than those deemed at “low” risk. Follow-up testing of the youngest, pre-reading children will indicate whether or not poor visual-motor integration performance predicts later reading problems.

15:00
Multimodal effects of color and aroma on predicted palatability of semisolid and liquid milk beverages
SPEAKER: unknown

ABSTRACT. We previously quantified the effect of color and aroma on the "predicted palatability" of tea before drinking. To examine the corresponding effects for jellies and milk beverages, we used four types of liquids (milk beverages) and four types of semisolids (jellies made from the milk beverages), all commercially available, as visual stimuli: milk, strawberry milk, green tea soy milk and coffee with milk. As olfactory stimuli, we used four types of flavor samples: vanilla, strawberry, green powdered tea and chocolate. These stimuli were evaluated by twenty participants in their twenties. Each visual stimulus was presented in a plastic-wrapped glass, and each olfactory stimulus was soaked into absorbent cotton in a brown bottle. In the visual evaluation experiment, participants observed one milk beverage or jelly without any olfactory stimulus. In the olfactory evaluation experiment, they smelled a flavor sample without any visual stimulus. Finally, in the visual-olfactory evaluation experiment, they observed one of the milk beverages or jellies while smelling an olfactory sample. The evaluated items were "predicted sweetness, sourness, bitterness, umami taste, saltiness" and "predicted palatability". The results show that the weighting factor of color in evaluating "predicted palatability" was smaller for milk beverages than for jellies.

15:00
What crowding tells us about schizophrenia
SPEAKER: unknown

ABSTRACT. Visual paradigms are versatile tools for investigating the pathophysiology of schizophrenia. Contextual modulation refers to a class of paradigms in which a target is flanked by neighboring elements that either deteriorate or facilitate target perception. It is often proposed that contextual modulation is weakened in schizophrenia, i.e., facilitating contexts are less facilitating and deteriorating contexts are less deteriorating compared to controls. However, results are mixed. In addition, facilitating and deteriorating effects are usually determined in different paradigms, making comparisons difficult. Here, we used a recently established crowding paradigm in which both facilitation and deterioration effects can be determined within a single paradigm. We found a main effect of group: patients performed worse than controls in all conditions. However, when we discounted for this main effect, facilitation and deterioration were comparable to those of controls. Our results indicate that contextual modulation can be intact in schizophrenia patients.

15:00
Simulation Fidelity Affects Perceived Comfort
SPEAKER: unknown

ABSTRACT. The effects of changes in the qualities of sounds upon comfort are well known; however, the effect of fidelity within a simulated environment upon sound comfort is less well documented. This study provides an insight into the effects of fidelity on sound comfort, employing two types of fidelity: Audio-Functional and Environmental. Participants carried out a tracking task as described in Meyer, Wong, Timson, Perfect, and White (2012), and completed a questionnaire assessing presence and comfort. The first hypothesis stated that both presence and comfort increase in higher-fidelity settings. The second hypothesis proposed that performance in the tracking task would also be affected by the level of fidelity experienced by the participant. An increased level of fidelity did indeed have a positive effect upon presence and comfort ratings for both Audio-Functional and Environmental fidelity. An effect of fidelity on performance was also found for Environmental but not Audio-Functional fidelity. This indicates that comfort is significantly affected by the fidelity of the setting in which audio stimuli are presented. These results have implications for research testing audio comfort, furthering the concept that there is more to assess than the qualities of the audio alone.

15:00
Attachment style dimensions are associated with neural activation during projective activity
SPEAKER: unknown

ABSTRACT. Numerous studies show the existence of a frontal and parieto-temporal neural circuit involved in projection, remembering the past and understanding the intentions of others (Theory of Mind). A recent study using Rorschach plates showed the existence of a fronto-parietal neural circuit during the attribution of meaning to these stimuli. Moreover, numerous studies show that attachment styles may manifest as selective biases toward certain types of emotional information in the environment. The hypothesis of the present study is that the neural correlates of responses to non-structured stimuli are modulated by scores on the attachment dimensions. Electroencephalographic activity of 27 subjects was recorded with a 256-channel HydroCel Geodesic Sensor Net during a visual task consisting of structured and non-structured figures. Subsequently, the subjects were administered the Attachment Style Questionnaire (ASQ). During the presentation of the non-structured figures, the “Relationships as secondary” score was negatively associated with activation of the limbic areas, and “Need for approval” was negatively correlated with activation of the prefrontal cortex and limbic areas; no association was found during the presentation of the structured figures. The findings demonstrate that attachment style can modulate brain activation during projective activity in the Rorschach test.

15:00
Closed loop accommodation response to step changes in disparity vergence upon stereoscopic displays
SPEAKER: unknown

ABSTRACT. Stereoscopic displays present unnatural stimulus demands for accommodation and vergence. The aim of the present study was to measure the closed loop accommodation response to step changes in disparity vergence on a stereoscopic display. Ten young adult subjects (mean age: 21.3±2.4 years) with normal visual function participated. The stereoscopic display (Zalman ZM215W) presented a high contrast Maltese cross target at 40cm with disparity vergence demands of 6Δ of convergence or divergence presented in a random order at 10s intervals. Vergence responses were measured continuously using an infrared limbal reflection eyetracker (Skalar IRIS 6500 Simulink) while accommodation responses were recorded continuously and simultaneously with a specially modified infrared, open field autorefractor (Shin-Nippon SRW-5000). A small (mean±sd 0.49±0.17D) but significant (p<0.01) step response of accommodation was noted for both convergent and divergent disparity stimuli. During the subsequent 10s fixation period, the accommodation response drifted back to the original level in both conditions. Disparity vergence responses induce changes in phasic accommodation in the absence of any accommodation stimulus. When the vergence response is maintained, the accommodation system returns to the response level required for the display position. Stereoscopic displays produce paradoxical responses in the accommodation system through the vergence accommodation crosslink.

15:00
Unfaithful mirror: A new procedure to decrease the sense of ownership and agency of one's own face
SPEAKER: unknown

ABSTRACT. Previous studies have demonstrated that the representation of one's own body is based on the integration of sensory inputs. Illusions emerge from the manipulation of these inputs, such as the Rubber Hand Illusion (Botvinick & Cohen, 1998) and the Enfacement Illusion (Sforza et al., 2010). The synchronization between different stimulations (tactile, visual and proprioceptive) is particularly important for these two illusions. The present study introduces a new paradigm to verify whether an asynchronous stimulation between sight, touch and proprioception can induce a decrease of the sense of ownership (SoO) and agency (SoA) in relation to one's own face. We compared different types of inconsistency between modalities, and additionally collected measures of depersonalisation, locus of control, hallucination-proneness and delusional beliefs. Results (N=60) suggest, first, that the SoO and SoA over one's own face are decreased when visual feedback is presented with a delay of a few seconds. Second, an inconsistency between proprioception and sight induces a stronger decrease of the sense of ownership than an inconsistency between touch and sight. Third, contrary to expectations, participants high on a depersonalisation continuum seem to have a stronger sense of ownership of their face, independently of the type of feedback.

15:00
Characterising individual differences with Bayesian Models: an example using autistic traits and motion perception
SPEAKER: unknown

ABSTRACT. Bayesian models describe how perceptual experience arises from the optimal combination of noisy sensory information and prior knowledge about the world. Here we probed whether individual differences in perceptual experience are best explained by differences in priors or in sensory sensitivity. Priors cannot be assessed directly, but Pellicano & Burr (2011) suggested that individuals with Autism Spectrum Disorders may have flatter prior distributions, so we used autistic traits as a potential proxy. Trait measures were collected alongside psychophysical assessment of the perceived slowing of moving stimuli pursued by eye (the Aubert-Fleischl phenomenon, AFP), because the AFP can be modelled using Bayesian principles (Freeman et al., 2010). Autistic traits were negatively correlated with the AFP, and this relationship was strengthened after controlling for variance associated with motion thresholds during pursuit and fixation. The correlation between threshold differences and the AFP was also significant after controlling for variance in autistic traits. Finally, thresholds and autistic traits did not correlate. Taken together, these results suggest that individual differences in both thresholds and prior distributions contribute separately to differences in perceptual experience. This supports Pellicano & Burr's hypothesis, but also suggests that differences in sensory sensitivity could sometimes mask the relationship between priors and perceptual experience, and vice versa.
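The prior-versus-likelihood trade-off invoked in such models reduces, under Gaussian assumptions, to a precision-weighted average. A minimal sketch, assuming a zero-mean slow-speed prior and a noisy Gaussian speed measurement (all numbers are illustrative, not values from the study):

```python
# Bayesian combination of a Gaussian likelihood (mean = measured speed)
# with a zero-mean Gaussian slow-speed prior. The posterior mean is the
# precision-weighted average of the two means; a flatter prior (larger
# sigma_prior) pulls the percept less strongly toward zero.

def posterior_speed(measured, sigma_like, sigma_prior):
    """Posterior mean speed for a zero-mean Gaussian prior."""
    w_like = 1.0 / sigma_like ** 2    # likelihood precision
    w_prior = 1.0 / sigma_prior ** 2  # prior precision
    return (w_like * measured) / (w_like + w_prior)

# Same measurement, two hypothetical observers differing only in prior width:
narrow = posterior_speed(10.0, sigma_like=2.0, sigma_prior=4.0)   # pulled toward 0
flat = posterior_speed(10.0, sigma_like=2.0, sigma_prior=20.0)    # near veridical
```

This is the sense in which a "flatter prior" predicts more veridical (faster) perceived speed, and in which changes in sensory noise (sigma_like) can mimic or mask changes in the prior.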

15:00
Both character size and spacing affect readability of Japanese
SPEAKER: unknown

ABSTRACT. Purpose: To investigate the effects of character size and spacing on the reading of Japanese texts by manipulating the aspect ratio and spacing of characters. Methods: Twenty-five participants read aloud 30-letter-long, easy-to-read Japanese sentences printed in a variety of fonts, as quickly and precisely as possible. Stimulus fonts were based on the IWATA UD Gothic; the width-to-bounding-box ratio was varied from 64% to 100%, and the edge-to-edge character spacing from 0% to 20%. Reading speed was measured for a range of print sizes, and three readability indexes, i.e., maximum reading speed (MRS), critical print size (CPS) and reading acuity (RA), were calculated and compared among the different font settings. Results and Discussion: All three indexes were strongly affected by character spacing and weakly by aspect ratio. By and large, character spacing influenced RA and CPS positively, but MRS negatively. When center-to-center spacing was fixed, aspect ratio showed no effect on RA except at 64%, beyond which size seemed effective. Moreover, our findings showed a systematic effect of edge-to-edge spacing rather than center-to-center spacing. The results indicate that not only spacing but also size affects readability.

15:00
Reformulating Motion Parallax as a source of 3-D information
SPEAKER: Brian Rogers

ABSTRACT. Reverspectives provide compelling evidence that perspective information can overrule disparities in the perception of 3-D structure (Rogers and Gyani, 2010), but the role of motion parallax in Reverspectives is unclear. The similarity of motion parallax and binocular stereopsis as sources of 3-D information suggests that the parallax created by Reverspectives is similarly overruled by perspective (Papathomas, 2007), but this assumes that objects remain stationary during side-to-side head movements. Using ‘virtual’ Reverspectives, in which we can independently manipulate both disparities and parallax motions, we found that the visual system makes no such ‘stationarity’ assumption. However, parallax motions are not merely subordinated to perspective information. Rather, they increase both the amount and the vividness of the perceived depth, compared to static viewing. Moreover, when the perspective information is weak and stationary observers perceive the 3-D structure specified by the disparities, the depth switches as soon as the observer begins to move, and observers then perceive the 3-D structure specified by the weak perspective. These results suggest that motion parallax is not just a powerful source of information about 3-D structure, but is better understood as a variant of the kinetic depth effect (KDE) rather than a strict analogue of binocular stereopsis.

15:00
Fusional demand and stereoacuity
SPEAKER: unknown

ABSTRACT. Fine stereopsis depends on a number of factors, including the requirement that both eyes share a common visual direction, controlled using motor fusion. Varying the level of fusional demand may affect stereoacuity; we aimed to investigate this by adjusting visual alignment using a clinically used mirror stereoscope (synoptophore). A synoptophore was modified by adding LCD monitors in order to assess dynamic and static stereopsis under zero and controlled fusional stress. A four-alternative spatial forced-choice (AFC) adaptive procedure was used to measure thresholds for foveally presented stimuli (1 s duration). Stereoacuity was defined by fitting a psychometric function (Weibull) to the data, taking the 72.41% correct response level as threshold. Fourteen of the 40 subjects assessed (aged 18-28) were able to provide a reliable result in each condition tested and were included in the analysis. Means (SD) in arc seconds for static and dynamic stereopsis were 99 (56) and 37 (19) in the unstressed conditions, and 80 (21) and 31 (21) in the stressed conditions. Our main finding was that fusional stress did not affect stereoacuity (p = 0.154, two-way ANOVA), whereas dynamic cues led to better thresholds than static cues (p < 0.001); there was no significant interaction (p = 0.44). If an individual's eye alignment is well controlled, it is unlikely to affect stereoacuity levels.
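The 72.41% criterion is not arbitrary: for a Weibull psychometric function with a four-alternative guess rate of 0.25, performance at the scale parameter alpha equals 0.25 + 0.75(1 - 1/e) ≈ 0.7241, regardless of the slope. A minimal sketch of this parameterisation (the parameter values below are illustrative, not the authors' fitted values):

```python
import math

# Weibull psychometric function for a 4AFC task. At x = alpha the
# predicted proportion correct is guess + (1 - guess)*(1 - 1/e),
# which for guess = 0.25 is ~0.7241 - hence the threshold criterion.

def weibull_4afc(x, alpha, beta, guess=0.25):
    """Proportion correct at stimulus level x (e.g. disparity in arcsec)."""
    return guess + (1.0 - guess) * (1.0 - math.exp(-((x / alpha) ** beta)))

# Illustrative parameters: alpha = 40 arcsec, beta = 2.
p_at_alpha = weibull_4afc(x=40.0, alpha=40.0, beta=2.0)  # ~0.7241
```

Because the value at x = alpha does not depend on beta, thresholds defined this way are comparable across conditions with different slopes.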

15:00
The effects of 2-D and 3-D configuration on stereoscopic depth magnitude percepts
SPEAKER: unknown

ABSTRACT. The perception of closure in a stereoscopic figure degrades suprathreshold depth estimates; this disruption can be eliminated by manipulating grouping cues that contribute to the percept of a closed figure. Here, we systematically evaluate the impact of specific 2-D contextual properties including connectedness, collinearity, and proximity to understand their role in this phenomenon. Concurrently, we assessed their interaction with a new stereoscopic grouping principle ‘good stereoscopic continuation’. We used closed rectangular stimuli and systematically varied 2-D and 3-D contextual cues. In all conditions, observers estimated the amount of depth between the vertical edges of the rectangle viewed in a stereoscope. Our results show that the dramatic effects of closure on perceived depth do not require that the lines are physically connected, but critically depend on the degree of collinearity and location of the horizontal connector. Both collinearity and corners (L-junctions) are necessary to create the percept of a closed object, and reduce depth estimates. Importantly, these effects critically depend on the presence of good stereoscopic continuation (a 3-D grouping cue). Taken together, our results highlight the often-neglected role of stereoscopic depth in the perception of object form.

15:00
Changing depth cue reliance by playing videogames
SPEAKER: unknown

ABSTRACT. We tested the cue reliance of regular videogame players (VGP, N=21) and those who do not play (NVGP, N=13). We used stimuli in which stereo, texture and outline cues sometimes conflicted (Buckley & Frisby, 1993) and found that VGPs relied more on stereo cues in cue-conflict stimuli than NVGPs. This finding seems counterintuitive, as videogames are rich in monocular, not binocular, depth cues. We then took a group of NVGPs (N=14) and measured their cue reliance before and after playing videogames; one game was rich in monocular cues (QUAKE), the other was not (TETRIS). Games were played on a monitor at the same distance as the cue reliance test. We found that 30 minutes of playing either game changed cue reliance in the direction of the VGPs. The cue content of the games appeared unimportant, and we suggest the changes are simply due to adaptation to the display distance. We discuss our findings in the context of Shah et al. (2003), who found stereo cue reliance similar to our VGP group in laparoscopic surgeons, and Rosser et al. (2012), who found improved performance on a laparoscopic surgery trainer when preceded by 6 minutes of videogame play.

15:00
The development of the perception of visuotactile simultaneity
SPEAKER: unknown

ABSTRACT. We measured the typical developmental trajectory of the visuotactile simultaneity window by testing adults and four age groups of children (7, 9, 11, and 13 years of age). We presented a visual flash on an LCD screen and a tactile tap to the right index finger, separated by various SOAs; participants reported whether the two stimuli were simultaneous or not. Compared to adults, 7- and 9-year-olds made more simultaneous responses when the tap led the flash by 300 ms or more and when the tap lagged the flash by 200 ms or more, and they made fewer simultaneous responses at the 0 ms SOA. The point of subjective simultaneity occurred on the tactile-leading side, as in adults, by 7 years of age. However, the window of visuotactile simultaneity became progressively narrower from 7 to 11 years of age, at which point it was adult-like. Together, the results demonstrate that adult-like precision in perceiving visuotactile simultaneity develops after 9 years of age. This developmental pattern is similar to that found for the perception of audiovisual simultaneity (Chen, Shore, Lewis, & Maurer, 2015), except that adult-like performance is reached by 11 rather than 9 years of age.

15:00
Spatial-frequency tuning of the steady-state pattern-onset visual evoked potential: The topography of the “notch”
SPEAKER: unknown

ABSTRACT. Steady-state checkerboard-onset visual evoked potentials are a popular tool for objective testing of visual acuity. Their amplitude-versus-check-size tuning exhibits a so-called “notch” in many subjects, i.e. a reduced amplitude at intermediate check sizes despite the pattern being easily visible. Explanations include superposition of responses from separate cortical sources, resulting in a spatially variable cancellation of the scalp potential. We tested the assumption that some combinations of reference and active electrode should avoid the notch, or that a notch at one location might be counterpoised by an increased response at a different location (an ‘anti-notch’). We recorded steady-state VEPs to 7 check sizes with a multi-channel EEG system, computed all pairwise bipolar derivations, and performed frequency-space response extraction. Sizeable notches occurred in 11 out of 41 subjects. Anti-notches were mostly outside the occipital half of the scalp, were small and did not cluster spatially, or the check sizes of notch and anti-notch did not match. Anti-notches thus can serve neither individually nor collectively as a compensation for the notches. Re-referencing, or selecting specific bipolar derivations, does not provide a solution to the notch problem. Assuming the validity of the superposition hypothesis, the results favor global surface cancellation caused by very local combination of cortical sources.
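Frequency-space response extraction of the kind mentioned amounts to projecting each epoch onto a complex exponential at the stimulation frequency, yielding an amplitude and phase per channel (or bipolar derivation). A minimal sketch on a synthetic signal; the sampling rate, stimulation rate and amplitude below are assumptions for illustration, not the authors' recording parameters:

```python
import cmath
import math

# Single-frequency DFT: correlate the epoch with exp(-i*2*pi*f*t) to
# extract the steady-state response amplitude and phase at the stimulus
# frequency. With an integer number of cycles per epoch this is exact.

def response_at(signal, freq, fs):
    """Single-sided amplitude and phase of the component at `freq` Hz."""
    n = len(signal)
    acc = sum(s * cmath.exp(-2j * math.pi * freq * k / fs)
              for k, s in enumerate(signal))
    amp = 2.0 * abs(acc) / n
    return amp, cmath.phase(acc)

fs = 500.0      # sampling rate (Hz), illustrative
f_stim = 7.5    # pattern-onset rate (Hz), illustrative
t = [k / fs for k in range(1000)]  # 2 s epoch = 15 full cycles
signal = [3.0 * math.sin(2 * math.pi * f_stim * tk) for tk in t]
amp, _ = response_at(signal, f_stim, fs)  # recovers the 3.0 amplitude
```

Applying this per channel to all pairwise channel differences gives the bipolar-derivation amplitudes compared across check sizes in the abstract.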

15:00
Differential sensitivity to surface curvature polarity in 3D objects is not modulated by stereo disparity
SPEAKER: unknown

ABSTRACT. It has previously been shown that observers are more sensitive to detecting changes in concave relative to convex curvature in the bounding contour of 2D shapes. Here we examined two related issues: (1) whether this differential sensitivity to curvature polarity extends to the surfaces of three-dimensional (3D) objects, and (2) whether the detection of surface curvature polarity is modulated by stereo disparity. We created 3D-rendered 'asteroid-like' stimuli, keeping the silhouette constant but modifying part of the object surface by introducing or removing a concave or convex region, or by extending or reducing an existing one. In two separate experiments, we asked participants to discriminate between two sequentially presented 3D shapes under either mono or stereo viewing conditions. The results showed that, analogous to curvature detection in the 2D bounding contour, participants are significantly better at discriminating between objects if changes occur in a concave region compared to a convex one. We also found observers to be significantly more accurate at detecting changes when curved regions were introduced or removed than when they were extended or reduced in magnitude. Surprisingly, we found no effect of viewing condition. These findings provide further evidence of the functional status of concave regions in 3D shape representation.

15:00
Exogenous Spatial Attention in Adults with ADHD is Intact
SPEAKER: unknown

ABSTRACT. ADHD is a disorder characterized by maladaptive hyperactive, inattentive, and impulsive behavior. We explored whether exogenous (transient) attention improves orientation discrimination in ADHD and whether it differentially affects processing at locations across the visual field. We tested 14 adults with ADHD and 14 controls (age- and gender-matched) binocularly on a 2-AFC orientation discrimination task. Four tilted Gabor patches appeared briefly along the vertical and horizontal meridians. To manipulate attention, participants were presented with either a valid cue or a neutral cue (one or four peripheral precues). Participants reported the orientation of the Gabor indicated by a response cue. Responses were significantly more accurate and faster for the valid- than the neutral-cue condition for both groups. The magnitude of the attentional benefit did not differ between the two groups. Moreover, both groups exhibited canonical performance fields (better performance along the horizontal than the vertical meridian, and at the lower than the upper vertical meridian) and similar effects of attention at all locations. These results illustrate that exogenous attention remains functionally intact in adults with ADHD.

15:00
Bimodal perceptual integration facilitates representation in working memory
SPEAKER: unknown

ABSTRACT. Cue integration incorporating multiple sensory modalities is a common process when perceiving natural (multimodal) stimuli. While the perceptual facilitation of multimodal stimuli is thought to result from combining the unimodal cues, the integrative mechanism of multimodal attention and the build-up of a memory representation remain unclear. The aim of the current study was to investigate the supporting effect of bimodal (visual-auditory) stimulation on representation in a working memory stage. To assess memory operations quantitatively, we tested participants in a 2-back task with five repetitions (blocks) per modality using a complete within-subject design (3 modalities x 5 blocks). In the unimodal conditions, unfamiliar visual patterns (containing six randomly distributed black dots) and auditory stimuli (chords recorded from a piano and a guitar) were presented in a sequence, and participants compared each stimulus with the one presented two positions earlier to indicate whether they were identical or not. In the bimodal task, both kinds of unimodal stimuli were presented simultaneously. Comparing d-prime values, we found a significant improvement of working memory processing (compared to both unimodal conditions) when bimodal stimuli were presented. Further, the data show an improvement of task performance over time in each condition.
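The d-prime comparison used in this study can be sketched as follows; the log-linear correction and the cell counts in the test are illustrative choices of ours, not taken from the abstract.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for a match/non-match (2-back) judgement.

    Applies the log-linear correction (add 0.5 to each cell) so that
    perfect hit rates or zero false-alarm rates stay finite.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

When hit and false-alarm rates are equal, d' is zero; higher hit rates combined with lower false-alarm rates give larger d'.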

15:00
The role of crowding in Object Substitution Masking.
SPEAKER: unknown

ABSTRACT. Object Substitution Masking (OSM) is a phenomenon wherein a surrounding mask (typically four dots) that onsets with a target but lingers after its offset can significantly reduce target perceptibility. OSM was originally claimed to be effective only when the target was not the focus of attention, i.e. when presented in large set-size displays (Di Lollo et al., 2000). The increased number of distractors was argued to influence the time taken for focal attention to reach the target. More recent evidence, however, found OSM to be independent of set size once constrained performance in the smallest set-size condition was accounted for (Argyropoulos et al., 2013; Filmer et al., 2014a). We recently replicated the set-size effect in OSM; importantly, though, this effect was found to be an artefact of crowding at larger set sizes (Camp & Pilling, ECVP 2014). Here we further explore this crowding effect. In four experiments we show that crowding interacts with OSM irrespective of set size, and that the effect scales with eccentricity, as is characteristic of crowding. The pattern of the crowding interaction with OSM shows parallels with the phenomenon of “supercrowding” (Vickery et al., 2009). The findings are discussed in terms of the position of OSM within the hierarchy of object processing.

15:00
The dissociation of different measures of cortical inhibition in the visual system, and their use for non-invasive monitoring of epilepsy susceptibility
SPEAKER: unknown

ABSTRACT. Epilepsy is believed to arise in many cases from deficits in cortical inhibitory mechanisms. Recently, it has been suggested that the psychophysical phenomenon known as surround suppression may reflect cortical inhibition. This raises the possibility that visual psychophysics could offer a non-invasive way of assessing disease state in epilepsy. One theory suggests that the timing of seizures may reflect fluctuations in the quality of cortical inhibition. We recruited 153 healthy volunteer controls and 50 patients with clinically confirmed epilepsy. Two different stimulus paradigms (a motion direction discrimination task (Tadin et al., 2003) and a contrast detection task (Serrano-Pedraza et al., 2012)) were used to derive surround suppression indices. Indices from the motion direction discrimination task showed a highly significant reduction with increasing age in both groups, whereas indices from the contrast detection task did not. The two measures of surround suppression were not correlated across the populations. Surround suppression indices derived from the motion direction discrimination task, but not the contrast detection task, showed a significant difference between controls and patients with frequent seizures: patients with a higher frequency of seizures showed higher indices of cortical inhibition. This suggests that the motion discrimination version of surround suppression does indeed capture some aspect of the pathology in epilepsy.

15:00
Luminance prevailed over disparity in depth discrimination
SPEAKER: unknown

ABSTRACT. To understand how the visual system combines monocular and binocular cues in depth perception, we investigated the effect of luminance gradient (a monocular cue) on disparity (a binocular cue) discrimination. The stimuli, designed to simulate a uniform corrugated surface under diffuse illumination, had a sinusoidal luminance modulation (0.26 or 1.8 cy/deg, contrast 20%-80%) modulated either in phase or in opposite phase with a sinusoidal disparity of the same corrugation frequency. The disparity amplitudes ranged from 0’ to 20’. We used a two-alternative forced-choice paradigm, in which two stimuli with different disparities were presented simultaneously in each trial and participants chose the one with greater depth contrast. Depth discrimination threshold was measured at 75% accuracy with a dynamic staircase algorithm (Kontsevich & Tyler, 1999). In the no-luminance-cue condition, which used a random dot stereogram rather than a sinusoidal grating, thresholds increased monotonically with disparity. When a luminance gradient was introduced, thresholds were greater than in the no-luminance condition and stayed the same across all disparity levels. This result suggests either that the presence of the luminance cue inhibited the disparity mechanism or that observers used a combination of luminance and disparity for depth discrimination, with the luminance cue in the dominating role.
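The study uses the Bayesian adaptive procedure of Kontsevich & Tyler (1999); as a simpler illustration of how a staircase can target 75% correct, here is a weighted up-down rule (Kaernbach, 1991) run against a simulated 2AFC observer. The observer model and all parameter values are invented for this sketch, not taken from the study.

```python
import random

def weighted_staircase(p_correct, start=10.0, s_down=0.5, trials=400, rng=None):
    """Weighted up-down staircase targeting 75% correct.

    The step up after an error is three times the step down after a correct
    response, so the track equilibrates where p * s_down = (1 - p) * 3 * s_down,
    i.e. p = 0.75. `p_correct(x)` gives the simulated observer's probability
    of a correct response at stimulus level x.
    """
    rng = rng or random.Random(0)
    x, track = start, []
    for _ in range(trials):
        correct = rng.random() < p_correct(x)
        x = max(0.0, x - s_down if correct else x + 3 * s_down)
        track.append(x)
    tail = track[trials // 2:]      # discard the initial descent from `start`
    return sum(tail) / len(tail)    # threshold estimate
```

With a cumulative-Gaussian 2AFC observer whose 75%-correct point lies at level 5, the staircase estimate should land near 5.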

15:00
The effect of age on confidence and accuracy in eye-witness judgements
SPEAKER: Helen Kaye

ABSTRACT. Groups of younger (under 40) and older (over 40) adults viewed a film of a staged incident involving two protagonists and were subsequently presented with separate video line-ups for each target. One line-up was target present (TP) and the other target absent (TA). Participants reported their confidence that each line-up member had been seen in the film. Consistent with previous findings that older participants make more false identifications than younger participants (Memon et al., 2003), the older group produced higher confidence ratings for foils in the TA condition than did the younger group. In the TP condition most participants gave the highest confidence rating to the target; however, older participants gave significantly higher ratings to foils seen before the target than did the younger group. There was no difference between the groups in confidence ratings for foils seen after the target. After identifying the target, older eyewitnesses expressed much less confidence in the foils actually being the target. The results suggest that older eyewitnesses may have different expectations or be more susceptible to social pressure to make a positive identification.

15:00
Investigating visual integration in Autism Spectrum Disorders using collinear facilitation with temporal masking
SPEAKER: unknown

ABSTRACT. Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impaired social interaction. Altered perceptual experiences are also common, possibly due to reduced integration of local information into a global percept. To explore this claim we investigated visual integration in adults with ASD using a collinear facilitation (CF) task. In CF, a faint Gabor target is easier to detect when flanked by high-contrast, co-aligned Gabors compared to a no-flanker condition. CF is mediated by propagation of excitatory signals from flanker to target cells along horizontal connections in V1, as well as by feedback from higher visual areas. To investigate whether horizontal or feedback signals are altered in ASD, we manipulated the timing of the flankers relative to the target so that the flankers occurred before, simultaneously with, or after the target. CF in the last condition is more reliant on feedback than in the other timing conditions. Both ASD and control groups (n=24) showed significant CF for all three timings, which was maximal when flankers and target occurred simultaneously. There were no differences between the two participant groups. These results suggest that, for these CF conditions, horizontal and feedback contributions are similar for ASD and control participants.

15:00
Separate neural representations of visual and haptic object size
SPEAKER: unknown

ABSTRACT. The brain must interpret sensory input from diverse receptor systems to produce explicit estimates of object properties. Consider estimating size from vision and haptics: signals at the retinas, and stretch receptors in the hand, must both be transformed into reportable size estimates. We tested for commonalities and distinctions in the neural representations supporting this. Using fMRI, we measured brain activity during size judgements, while participants either felt objects with the left or right hand, or viewed the same sizes on a screen. Stimulus discriminability was controlled with psychophysical pilot data. Separate MVPA searchlight analyses for vision and haptics identified brain regions meeting three criteria: 1) different sizes produced reliably different patterns of activity; 2) more similar sizes produced more similar patterns of activity; 3) these criteria held across hands and hemifields. Haptic and visual tasks engaged separate but adjacent frontoparietal regions. Specifically, right parietal cortex represented haptic size irrespective of the hand used, while visual size was more strongly represented in left parietal cortex. These findings reveal object size representations that are abstract (generalised across hand and hemifield), yet specific to each modality. These regions may hold mid-level object representations that mediate between peripheral signals and fully abstract size coding.

15:00
Dual focus contact lenses produce inaccurate steady state accommodation responses in emmetropic subjects
SPEAKER: unknown

ABSTRACT. The progression of myopia may be inhibited by manipulating peripheral retinal blur using dual focus contact lenses (DF). Previous studies have shown deficiencies in the accommodation response in myopic subjects although the effect of DF upon the accommodation response has not been examined in detail. The aim of this study is to assess the accuracy of steady-state accommodation responses with DF lenses. Ten emmetropic subjects (mean age 22.3±2.0 years) were fitted with a concentric DF with plano power in the central 3mm and +2.50D in the peripheral zone. The accommodation response was measured continuously with a specially modified infrared autorefractor (SRW-5000, Shin-Nippon) at stimulus levels of 0D, 2D and 4D, while subjects viewed a high contrast Maltese cross target with and without the DF. The steady state accommodation response showed significant variability in the DF condition for the 0D (p<0.01) stimulus level. Frequency spectrum analysis revealed this was due to significantly larger low frequency drift in the accommodation response with the DF. The use of dual focus contact lenses causes inaccuracy in the steady-state accommodation response in emmetropic subjects. It is important that future work examines the accommodation response in myopic subjects when wearing these contact lenses.

15:00
Visual search in migraine
SPEAKER: Louise O'Hare

ABSTRACT. Migraine is associated with differences in visual performance across a range of tasks between attacks. A lack of inhibition has been suggested as a possible reason for these differences. Specifically, migraine groups are more susceptible to the effects of external noise than healthy volunteers (e.g. Tibber et al., 2014). Although visual search should depend on inhibitory processing, migraine groups have not shown a deficit on this task (e.g. Wray et al., 1995; Palmer and Chronicle, 1998; Conlon and Humphreys, 2001), possibly because pop-out visual search tasks may not contain sufficient noise to cause difficulties for migraine groups. Using a conjunction feature search task based on Zhaoping and May (2007), we explored the effect of adding task-irrelevant noise to the stimulus on performance in migraine and control groups.

15:00
Ocular Accommodation and Depth Position in Depth-Fused 3D Visual Perception
SPEAKER: unknown

ABSTRACT. Stereoscopic depth perception relies on physiological factors such as binocular parallax, convergence, accommodation and motion parallax. For a natural 3D display, we have previously reported depth-fused 3D (DFD) visual perception (Suyama et al., 2004). In DFD perception, two images that differ only in luminance and are displayed at front and rear planes are perceived as a single image at one depth. A continuous depth change is perceived when the luminance ratio between the two images is continuously changed according to the 3D image depth, so that 3D images are perceived between the two planes. It is known that DFD images cause little visual fatigue (Ishigure et al., 2004) and produce motion parallax (Takada et al., 2007). However, the accommodative response to DFD images has not been clarified. In this study, we examined the accommodation mechanism and verified that it operates during viewing of DFD images, by measuring accommodation in subjects who gazed at DFD images. The results suggest that changes in the luminance ratio shifted the perceived depth of the DFD image and that the accommodative focus followed this shift. This trend was seen in both young and middle-aged subjects. In the future, we will develop a natural telecommunication system using DFD perception.

15:00
Eigen-Adaptation and Distributed Representation of 2-D Phase, Energy, Scale and Orientation in Spatial Vision
SPEAKER: unknown

ABSTRACT. Distributed representations (DR) of cortical channels are pervasive in models of spatio-temporal vision. A central idea that underpins current innovations of DR stems from the extension of 1-D phase into 2-D images. Neurophysiological evidence, however, provides tenuous support for a quadrature representation in the visual cortex, since even-phase visual units are associated with broader orientation tuning than odd-phase visual units (J. Neurophysiol., 88, 455–463, 2002). We demonstrate that applying the steering theorems to a 2-D definition of phase afforded by the Riesz transform (IEEE Trans. Sig. Proc., 49, 3136–3144), extended to include a scale transform, allows one to interpolate smoothly across 2-D phase and pass from circularly symmetric to orientation-tuned visual units, and from more narrowly tuned odd-symmetric units to even ones. Steering across 2-D phase and scale can be orthogonalized via a linearizing transformation. Using the tilt after-effect as an example, we argue that effects of visual adaptation can be better explained by an orthogonal rather than a channel-specific representation of visual units, because the orthogonal representation explicitly accounts for isotropic and cross-orientation adaptation effects, from which both direct and indirect tilt after-effects can be explained.
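A minimal sketch of the Riesz-transform construction the abstract builds on: the two odd-symmetric components are obtained by multiplying the image spectrum by the Riesz kernel, giving a 2-D local phase and orientation (the monogenic signal). This assumes the input is already band-passed, and sign conventions for the kernel vary across authors; it is an illustration, not the authors' model.

```python
import numpy as np

def riesz_local_phase(img):
    """2-D local phase and orientation via the Riesz transform
    (monogenic signal, Felsberg & Sommer, 2001).

    The two odd-symmetric components come from multiplying the image
    spectrum by the Riesz kernel (i*wx/|w|, i*wy/|w|); `img` is assumed
    to be band-passed already.
    """
    rows, cols = img.shape
    wy = np.fft.fftfreq(rows)[:, None]
    wx = np.fft.fftfreq(cols)[None, :]
    mag = np.sqrt(wx ** 2 + wy ** 2)
    mag[0, 0] = 1.0                      # avoid division by zero at DC
    spectrum = np.fft.fft2(img)
    rx = np.real(np.fft.ifft2(spectrum * (1j * wx / mag)))
    ry = np.real(np.fft.ifft2(spectrum * (1j * wy / mag)))
    odd = np.hypot(rx, ry)               # odd (edge-like) energy
    phase = np.arctan2(odd, img)         # 2-D local phase
    orientation = np.arctan2(ry, rx)     # local orientation
    return phase, orientation
```

For a pure vertical cosine grating, the local phase is 0 at luminance peaks and pi/2 at zero crossings, and the odd energy lies entirely along the x-component.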

15:00
Visual and haptic detection of mirror-reflected contours and repeated contours within one object versus across two objects
SPEAKER: unknown

ABSTRACT. For vision, detecting mirror-reflectional symmetry across two contours is usually easier if both belong to the same object rather than to two different objects, whereas the opposite occurs for repetition (Koning & Wagemans, 2009). We investigated whether the same interaction is found when novel, planar shapes are explored by active touch (haptics). We varied modality (haptics or vision), regularity (mirror-reflection or repetition), objects (one or two) and axis orientation (vertical or horizontal). For both modalities, performance was better overall for mirror-reflection than for repetition. For vision we replicated the interaction between type of regularity and number of objects for both stereoscopic and monocular presentation. In contrast, for haptics there was a one-object advantage for repetition as well as for mirror-reflection. Thus the perception of regularity appears to differ across vision and haptics depending on whether the regularity is found within a single object or across different objects. Other modality-specific factors were also important, such as whether the visual stimuli were shown from top-down versus angled views, and whether one versus two hands were used to feel the haptic stimuli. These findings demonstrate the powerful influence of how we acquire information and explore stimuli on our perception of regularity.

15:00
Ultra-rapid categorization of meaningful real-life scenes in people with and without ASD
SPEAKER: unknown

ABSTRACT. The Reverse Hierarchy Theory (RHT) of Hochstein and Ahissar (2002) assumes that ultra-rapid categorization of visual information (a paradigm developed by Thorpe and colleagues, 1996) involves the extraction of the global gist of an image. People with Autism Spectrum Disorder (ASD) are generally outperformed by the typically developing population in tasks that require such global information processing. We tested a group of high-functioning adults with and without ASD on an explorative test battery of different ultra-rapid categorization tasks. As reported in a previous study (Vanmarcke & Wagemans, 2015), gender differences (women better than men) were clearly present on all categorization tasks (except for the motor baseline task). However, contrary to our expectations, people with ASD performed worse only when the stimuli required the categorization of scenes depicting social interactions. These results argue against a general deficit in ultra-rapid gist perception of visual information in people with ASD. Rather than supporting the Weak Central Coherence (WCC) hypothesis (Frith & Happé, 1994; Happé & Frith, 2006), which argues for a slower, more effortful extraction of global meaning in people with ASD, this finding suggests a more specific problem with the fast processing of social relations.

15:00
Stimulating the Aberrant Brain: Predisposition to Anomalous Visual Distortions Reflects Increased Cortical Hyperexcitability in Those Prone to Hallucinations: Evidence from a tDCS Brain Stimulation Study
SPEAKER: unknown

ABSTRACT. Clinical and neurological research has suggested that increased predisposition to anomalous perceptual experience can result from increases in cortical hyperexcitability. However, such studies are often based on subjective questionnaire measures alone. The present study examined the role of cortical hyperexcitability in healthy individuals predisposed to anomalous hallucinatory visual experiences by manipulating the level of excitability in the visual cortex via transcranial direct current stimulation (tDCS). Sixty participants completed questionnaire measures indexing their predisposition to anomalous perceptions. They also took part in a computer-based pattern-glare task (viewing visually irritating gratings) across three separate tDCS sessions (sham/anodal/cathodal) applied over the visual cortex. Participants reported the number of phantom visual and somatic distortions experienced while viewing these highly irritating gratings. Those predisposed to anomalous experiences reported more visual distortions from viewing the grating stimuli even under sham conditions. In addition, these individuals responded more strongly to excitatory stimulation of the visual cortex (reporting more visual distortions as a result of such stimulation), yet more weakly to inhibitory stimulation of the same brain regions. Collectively, these findings are consistent with a hyperexcitable cortex being associated with proneness to report more visual distortions and hallucinations even in non-clinical samples.

15:00
Auditively induced Kuleshov Effect. On multisensory integration in movie perception.
SPEAKER: unknown

ABSTRACT. Almost a hundred years ago, the Russian filmmaker Lev Kuleshov conducted his now famous editing experiment in which an audience was presented with film scenes alternating between a neutral face and several objects/people. It is said that the audience interpreted the unchanged facial expression differently depending on the object. For instance, the face appeared to look sad when juxtaposed with a coffin but hungry when presented next to a soup bowl. This interaction effect has been replicated and dubbed the “Kuleshov effect”. In this study we explored the role of sound in the evaluation of facial expression in a movie. Thirty participants watched different clips of faces that were intercut with neutral scenes. The neutral scenes featured happy music, sad music, or no music at all. This was crossed with facial expressions of happy, sad, or neutral. We found that while sad music had only a weak effect on the evaluation of the facial expression, happy music led participants to judge the face as significantly happier. We conclude that music can be used as an additional cue to evaluate movie scenes and give meaning to ambiguous situations.

16:00-18:00 Session 27A: Multisensory perception

.

Location: B
16:00
White matter connections of the vestibular and visual-vestibular insular cortex
SPEAKER: Anton Beer

ABSTRACT. The parieto-insular vestibular cortex (PIVC) is a central area of the human cortical vestibular network. Moreover, the posterior insular cortex (PIC), posterior to PIVC, seems to be relevant for visual-vestibular interactions. Here, we investigated the structural connectivity of PIVC and PIC using probabilistic fiber tracking based on diffusion-weighted magnetic resonance imaging (MRI) of 14 healthy people. Both brain regions were identified in each hemisphere by functional MRI. Bithermal caloric stimulation served to identify PIVC, whereas PIC was defined by its response to visual motion. Probabilistic tracking based on diffusion-weighted MRI was performed with seeds in PIVC and PIC, respectively. Track terminations were mapped to the cortical surface of a standard brain. White matter connections of PIVC and PIC showed overlapping track terminations to each other, the insular/lateral sulcus, superior temporal cortex, and inferior frontal gyrus. However, we also observed significant differences in the connectivity fingerprints of PIVC and PIC. PIVC tracks primarily projected to the posterior parietal cortex, the frontal operculum, and Heschl's gyrus, whereas PIC tracks primarily terminated in the temporo-parietal junction, superior temporal sulcus and the inferior frontal cortex. These results suggest that PIVC and PIC have partially overlapping and partially distinct profiles of white matter connectivity.

16:15
Prioritizing speed over accuracy in audiovisual integration of threatening stimuli
SPEAKER: Karin Petrini

ABSTRACT. Non-verbal communication is essential to survival and successful social behavior. Integrating multisensory non-verbal signals can improve the accuracy and the speed with which humans perceive and react to others’ emotions (Collignon et al., 2008). Whether these aspects of multisensory facilitation are intrinsically linked together or one may prevail over the other depending on the social situation is unknown. We asked 16 participants to discriminate as quickly as possible the level of angriness between two clips under three different sensory conditions: visual (biological motion of a walker), auditory (sound of footsteps), and audiovisual (both). We tested whether there was a reduction in participants’ response variability and reaction time (RT) for the audiovisual condition, as predicted by the maximum likelihood estimation and violation of race model inequality, respectively. While no evidence of audiovisual reduction in response variability was found, we did find evidence of audiovisual reduction in RT when compared to either auditory or visual condition. This reduction exceeded that predicted by the race model for the fastest quantiles of the RT distribution, pointing to a real interaction between modalities. This indicates that under threatening social situations a multisensory mechanism facilitating a speeded reaction is prioritized over one facilitating a robust percept.
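The race model inequality test (Miller, 1982) referred to above can be sketched as follows; the empirical CDFs are computed from raw RT lists, and the RT values in the test are invented for illustration.

```python
def race_model_violation(rt_a, rt_v, rt_av, t):
    """Evaluate Miller's (1982) race model inequality at time t:
    P(RT_av <= t) <= P(RT_a <= t) + P(RT_v <= t).

    Returns the amount by which the empirical multisensory CDF exceeds
    the (capped) race model bound; positive values indicate a violation,
    i.e. evidence for genuine audiovisual interaction rather than
    statistical facilitation between independent unimodal channels.
    """
    def cdf(rts, t):
        return sum(r <= t for r in rts) / len(rts)
    bound = min(1.0, cdf(rt_a, t) + cdf(rt_v, t))
    return cdf(rt_av, t) - bound
```

In practice the bound is evaluated at the fast quantiles of the RT distribution, where violations (as reported in the abstract) are diagnostic.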

16:30
Vision shares spatial attentional resources with haptics and audition, yet attentional load does not disrupt visuotactile or audiovisual integration.
SPEAKER: Basil Wahn

ABSTRACT. Human information processing is limited by attentional resources. Two questions discussed in multisensory research are 1) whether there are separate spatial attentional resources for each sensory modality and 2) whether multisensory integration is influenced by attentional load. In two experiments, we investigated these questions using a dual task paradigm: Participants performed two spatial tasks (a multiple object tracking task and a localization task) either separately (single task condition) or simultaneously (dual task condition). In the localization task, we presented the localization cues in different sensory modalities: In Exp. 1, participants received either visual, tactile, or redundant visual and tactile location cues, whereas in Exp. 2, they received either visual, auditory, or redundant visual and auditory cues. In both experiments, we found a substantial decrease in participants’ performance in the dual task condition relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues’ modality. Furthermore, participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that 1) vision shares spatial attentional resources with haptics and audition and 2) visuotactile and audiovisual integration occurs at a pre-attentive processing stage.

16:45
Visual-haptic cue combination in adults with autism spectrum condition
SPEAKER: Daniel Poole

ABSTRACT. Performance in several multisensory tasks is predicted by a statistically optimal maximum likelihood estimation (MLE) model, in which unisensory cues are combined additively with the weight of each cue determined by its reliability. This combination rule ensures that variance in the multisensory estimate is minimised (Ernst & Banks, 2002). In the present study we investigated visual-haptic cue-combination in participants with autism spectrum condition (ASC) as there is evidence that multisensory processing is altered in this group (e.g. Stevenson et al, 2014). Thirteen adults with ASC and a matched neurotypical (NT) control group took part in a visual-haptic cue integration task (Gori, Del Viva, Sandini & Burr, 2008). Participants judged the height of blocks presented visually, haptically, or via both senses (multisensory). Multisensory performance was compared to predictions from MLE and a model (SCS) in which participants switch stochastically between cues from trial-to-trial (Nardini, Jones, Bedford & Braddick, 2008). Multisensory variability estimates were significantly higher than predicted by MLE for both ASC and NT groups. However, for both groups variance was well predicted by the SCS model. The failure to replicate optimal multisensory cue combination in NT participants and possible directions for future research in ASC will be discussed.
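The MLE prediction being tested can be written out directly (Ernst & Banks, 2002): each cue is weighted by its inverse variance, and the combined variance is the harmonic combination of the unimodal variances. A minimal sketch with illustrative numbers, not the study's data:

```python
def mle_combination(est_v, var_v, est_h, var_h):
    """Reliability-weighted (MLE-optimal) visual-haptic combination
    (Ernst & Banks, 2002). Each cue is weighted by its inverse variance;
    the combined variance is never larger than either unimodal variance.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_h)
    w_h = 1.0 - w_v
    combined_estimate = w_v * est_v + w_h * est_h
    combined_variance = (var_v * var_h) / (var_v + var_h)
    return combined_estimate, combined_variance
```

With equally reliable cues the combined estimate is their mean and the variance halves; making one cue more reliable pulls the estimate toward it, which is the signature the MLE model predicts and the SCS model lacks.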

17:00
Impaired audio spatial abilities in blind adults: a behavioural and electrophysiological study

ABSTRACT. The role of visual information in the development of spatial auditory abilities is still debated. Several studies have suggested that blind people can partially compensate for their visual impairment with greater sensitivity in the other senses (Lessard, 1998; Roder, 1999). However, in previous studies (Gori, 2013) we showed that early visual deprivation can impair performance on auditory spatial, but not temporal, bisection tasks. Here we investigate the neural correlates of these impairments: we study cortical activations by comparing behavioural and electrophysiological parameters reflecting spatial and temporal perception in 16 congenitally blind and 16 normally sighted participants. Specifically, we test the hypothesis that visual deprivation affects the neural processing of audio-spatial more than audio-temporal information. On the one hand, we confirm (Gori, 2013) that blind participants show good temporal bisection performance but lower spatial bisection abilities compared with sighted controls. On the other hand, electrophysiological data reveal differences in the scalp distribution of brain electrical activity between the two groups, reflecting weaker tuning of early spatial processing mechanisms in the blind subjects. Together, the behavioural and electrophysiological differences suggest that compromised functionality of brain areas in the blind may contribute to an impairment of auditory spatial skills such as those required in audio-spatial bisection.

17:15
How well do we know whether we are seeing or hearing? On the robustness of modality discrimination.

ABSTRACT. Although perception is underpinned by multimodal integration, we are normally aware of the modality of each element of experience. If sensory signals are “tagged” as visual, auditory, etc., how robust is the modal tag? We tested whether the modalities of individual visual and auditory stimuli, randomly interleaved, can be distinguished close to detection thresholds. We compared detection of stimuli and identification of their modality, using a 2-alternative forced-choice localization task and a simple detection task (with confidence rating as the criterion for detection). Results showed that: 1) detection performance for each stimulus is entirely unaffected by random interleaving of modalities; 2) surprisingly, identification of modality is, if anything, more reliable than stimulus detection. We also examined the effect of backward visual masking on the detection of visual and auditory stimuli, and on the identification of modality. Visual masking raised visual but not auditory thresholds, and did not affect the ability to identify modality. These results suggest that: 1) attentional capacity is not shared between sensory modalities; 2) the perceptual distinction between modalities is extraordinarily robust, right down to threshold. This implies that modality "tagging" occurs at a very early stage in sensory processing.

17:30
We all live in the anisotropic submarine – differences in perceived distance anisotropy through the senses.

ABSTRACT. People tend to perceive distances above them as larger than physically equal distances in front of them, a phenomenon called perceived distance anisotropy. The aim of this research was to examine possible differences in this anisotropy between various sensory modalities. We compared results from five experiments in which participants matched distances of stimuli in horizontal and vertical directions. In the first three experiments participants (14, 13 and 15) matched distances visually: in experiments 1 (upright position) and 2 (lying on the left side) they changed viewing direction by moving their head, while in the third they did so by moving their body. In the fourth experiment (upright position) 15 participants matched distances with their hand (proprioceptive information), while in the fifth 16 participants matched distances auditorily. We computed the ratio between vertically and horizontally estimated distances and used it as the dependent variable. Results show significant effects of experiment (sensory modality), stimulus distance, and their interaction. When visual or auditory information is combined with both proprioceptive and vestibular information, anisotropy is larger than when visual information is combined with proprioceptive or vestibular information only, or when only proprioceptive information is used. Anisotropy is enlarged when the system combines both proprioceptive and vestibular information with other sensory modalities.

17:45
Neural population codes, decisional confidence and cross-modal facilitation
SPEAKER: Derek Arnold

ABSTRACT. When making perceptual decisions, humans evidently have a reliable trial-by-trial estimate of encoding success, as reported confidence typically correlates well with objective performance in the absence of feedback, missed trials, or concentration lapses. When making perceptual decisions, people also benefit from encoding physically redundant information in separate sensory modalities. We explored the possibilities that these observations might be inter-related, and driven by neural population coding. For global direction and orientation judgments, we found performance can be equated for stimuli containing different ranges of physical signals (by modulating mean signal magnitudes), but this failed to equate confidence, which was disproportionately undermined by increasing signal range. For audio-visual rate judgments, our data suggest cross-modal coding benefits are contingent on unimodal signals having elicited different levels of confidence. We contend that these observations are explicable in terms of encoded signal precision being estimated on a trial-by-trial basis from the range of differently-tuned mechanisms active during evidence accumulation. This has a disproportionate influence on confidence relative to sensitivity, demonstrating relatively independent transforms of sensory information for the underlying computations. Moreover, encoded signal range can inform decisions when more than one cue is encoded in different sensory modalities, explaining why cues can be optimally weighted.

16:00-18:00 Session 27B: Networks and coding


Location: A
16:00
Mesolimbic confidence signals guide perceptual learning in the absence of external feedback

ABSTRACT. It is well established that perceptual learning can occur without external feedback. Current theories of learning, according to which we learn from the consequences of our actions, have difficulty explaining such feedback-free perceptual learning. Here we tested the hypothesis that perceptual learning may be guided by self-generated confidence signals that serve as internal feedback. Functional magnetic resonance imaging (fMRI) was conducted while human participants performed a challenging visual perceptual learning task. They did not receive feedback on their performance, but reported their confidence after each trial. We devised a novel computational model in which perceptual learning was guided by the combination of a confidence-based reinforcement signal and Hebbian plasticity. Model-based fMRI data analysis showed that activation in mesolimbic brain areas reflected pre-stimulus anticipation of confidence and signaled a subsequent stimulus-related confidence prediction error, revealing a striking similarity in the neural coding of internal confidence-based and external reward-based feedback. Importantly, mesolimbic confidence prediction error modulation predicted individual learning success, establishing the behavioral relevance of these self-generated feedback signals. Together, our results provide evidence for an important role of confidence-based mesolimbic feedback signals in perceptual learning and extend reinforcement-based models of learning to cases where no external feedback is available.

16:15
Statistical determinants of sequential visual decision-making
SPEAKER: József Arato

ABSTRACT. Apart from the raw visual input, people’s perception of temporally varying ambiguous visual stimuli is strongly influenced by earlier and recent summary statistics of the sequence, by its repetition/alternation structure, and by the subject’s earlier decisions and internal biases. Surprisingly, neither a thorough exploration of these effects nor a framework relating them exists in the literature. To separate the main underlying factors, we ran a series of nine 2-AFC visual decision-making experiments. Subjects identified serially appearing abstract shapes at varying levels of Gaussian noise (uncertainty), appearance probabilities and repetition-alternation ratios. We found a) an orderly relationship between appearance probabilities on different time-scales, the ambiguity of stimuli, and perceptual decisions; b) an independent repetition/alternation effect; and c) a separation of bias effects on RT and decision, suggesting that only the latter is appropriate for measuring cognitive effects. We confirmed our main human results with behaving rats making luminance-based choices between stimuli appearing at different locations. These findings are compatible with a probabilistic model of human and animal perceptual decision making, in which not only are decisions taken so that short-term summary statistics resemble long-term probabilities, but higher-order salient structures of the stimulus sequence are also encoded.

16:30
Connective field mapping in a hemispherectomized patient

ABSTRACT. Background: Hemispherectomy patients are an interesting group in which to study whether and when visual field maps and their connections change after damage to the visual system. We studied a 16-year-old girl whose left hemisphere was removed at age three. Using population receptive field (pRF) mapping (Dumoulin & Wandell, 2008), Haak et al. (2014) found normal visual field maps in the early visual cortex of this patient, but an enlarged foveal representation and much smaller population receptive fields than normal in extrastriate cortical areas. Method: Here, we applied connective field modelling (Haak et al., 2013) to the functional Magnetic Resonance Imaging data of this patient and three controls. This method models the activity of voxels in one part of the brain (e.g., V2, the target area) as a function of activity in another part of the brain (V1, the source area). Results: Our results indicate that connective fields in the early visual cortex of the hemispherectomized patient appear normal. In contrast, those in extrastriate regions are, on average, relatively small compared to those in the controls. Conclusion: This finding suggests that the origin of the smaller extrastriate pRFs found in the previous study may lie in deviant cortico-cortical connectivity.
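The connective field idea can be sketched on simulated data; the 1-D cortical coordinate, Gaussian field shape and grid search below are simplifications for illustration, not the actual fitting procedure of Haak et al. (2013):

```python
import numpy as np

# Minimal sketch of connective field modelling: a target voxel's time
# course is predicted as a Gaussian-weighted sum over source-area
# voxels laid out on a 1-D cortical coordinate. All data are simulated.
rng = np.random.default_rng(0)
pos = np.linspace(0, 10, 50)             # source voxel positions (a.u.)
src = rng.standard_normal((50, 120))     # source time courses (voxels x time)

def predict(center, sigma):
    """Predicted target time course for a connective field (center, sigma)."""
    w = np.exp(-(pos - center) ** 2 / (2 * sigma ** 2))
    return w @ src

# Simulated target voxel: true connective field centred at 4.0, sigma 1.0.
target = predict(4.0, 1.0) + 0.1 * rng.standard_normal(120)

# Grid search for the centre/size whose prediction correlates best.
best = max(((c, s) for c in np.linspace(0, 10, 41)
            for s in (0.5, 1.0, 2.0)),
           key=lambda p: np.corrcoef(predict(*p), target)[0, 1])
```

The fitted centre lands near the true value of 4.0; in the real method the "smaller extrastriate connective fields" of the abstract correspond to smaller fitted sigma values.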

16:45
Increased stimulation of the non-classical receptive field region results in more information in occluded V1.
SPEAKER: Yulia Revina

ABSTRACT. Most input to V1 is non-feedforward, instead originating from lateral and feedback connections. Using functional magnetic resonance imaging (fMRI) and multivariate pattern analysis (MVPA), Smith & Muckli (2010) showed that non-feedforward stimulated regions of V1 (i.e. responding to an occluded portion of a scene) contain contextual information about the surrounding natural scene, fed back from higher visual areas.

We investigated how much of the surrounding scene needs to be visible to induce meaningful feedback to V1. Participants viewed two natural scenes, in feedback (occluded lower right quadrant) and feedforward (corresponding quadrant visible) conditions. We modulated the visibility of the surrounding scene with a grey mask punctured with Gaussian bubbles of varying sizes over the surround (Gosselin & Schyns, 2001). Using V1 voxel patterns responding to the quadrant, we decoded the two scenes in the different conditions.

We found that a large amount of the surrounding scene needs to be visible to induce meaningful feedback to non-stimulated V1. Secondly, feedforward MVPA classification is better when more surround information is available. Lastly, showing the full image throughout the experiment enhances feedback on a particular trial, if enough spatial context is available, supporting the importance of both spatial and temporal context.

17:00
Bayesian models of perception

ABSTRACT. The notion that perception involves Bayesian inference is an increasingly popular position taken by many researchers. While the approach provides great insight into perceptual processes, it has also received strong criticism. In order to evaluate the sometimes grandiose claims made by advocates of Bayesian methods it is crucial to cut through the seemingly complex methods and focus upon the core theoretical claims being made. These claims will be introduced and misconceptions dispelled. Probabilistic graphical models are presented as a concise yet powerful visual notation with which to express Bayesian explanations of perception. The concepts are exemplified with the alternative-forced-choice and yes/no tasks, and a set of resources are provided for those inspired to apply Bayesian modelling to other perceptual phenomena.
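As an illustration of the kind of model the talk introduces, here is a minimal Bayesian observer for a yes/no detection task; the Gaussian likelihoods, flat prior and parameter values are generic textbook assumptions, not details from the talk:

```python
import math

# Minimal sketch of a Bayesian observer for a yes/no detection task.
# The stimulus is either absent (mean 0) or present (mean mu), with
# Gaussian sensory noise of SD sigma; all values are illustrative.
mu, sigma, prior_present = 1.0, 1.0, 0.5

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_present(x):
    """P(present | measurement x) via Bayes' rule."""
    like_p = normal_pdf(x, mu, sigma)
    like_a = normal_pdf(x, 0.0, sigma)
    num = like_p * prior_present
    return num / (num + like_a * (1 - prior_present))

# With equal priors the observer says "yes" whenever x > mu / 2,
# where the two likelihoods cross.
```

The same graphical-model structure (latent stimulus, noisy measurement, posterior over the latent) extends to the alternative forced-choice case by adding a second measurement per trial.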

17:15
Individual scene, category and depth information is fed back to retinotopically non-stimulated subsections of early visual cortex.

ABSTRACT. Even without meaningful direct feedforward input, early visual cortex contains information about context, suggesting that other brain areas provide context via feedback (Smith & Muckli, 2010; Petro et al. 2014). However, the level of specificity contextual feedback provides is unclear. Activity in non-stimulated portions of early visual cortex may represent feature predictions, which would allow for the discrimination of individual scenes. Alternatively, this activity may only provide information about more abstract higher-level scene groupings. To investigate these possibilities, we blocked feedforward input to subsections of retinotopic visual cortex while participants viewed 24 scenes. Scenes spanned six categories and two depths – higher-level properties which group scene representations in early visual cortex during feedforward stimulation (Walther, et al. 2009; Kravitz, et al. 2011). We examined response patterns in non-stimulated V1 and V2 using fMRI and multi-voxel pattern analyses. Individual scenes, category, and depth were all decoded from non-stimulated areas, and scene decoding errors were uniformly distributed, not concentrated within category or depth. These results indicate that non-feedforward processing in early visual cortex is specific to individual scenes, while retaining higher-level structure – ruling out the possibility that feedback to V1 is only higher-level information.

17:30
Learning disparity-tuned complex-cell-like models using Independent Subspace Analysis
SPEAKER: David Hunter

ABSTRACT. Using a simple linear model, simple-cell-like receptive fields can be learned by a variety of statistical techniques (Hyvärinen, Hurri, & Hoyer, 2009; Olshausen, 1996). These techniques produce models with sparse responses to natural images (Olshausen, 1996). Complex cells in the visual cortex cannot be characterised by a simple linear model, but can be characterised as an additive combination of simple linear models (Schwartz, Pillow, Rust, & Simoncelli, 2006). Hyvärinen and Hoyer (2000) showed that similar models can be learned using a subspace analysis technique, in which the space of image patches is divided into independent subspaces. The model attempts to learn a set of features such that responses within a subspace are distributed orthogonally and responses between subspaces are sparse. We used this technique to learn a set of binocularly tuned ‘complex-cell’ models from samples of natural binocular image patches (Hibbard, 2008). We found that many of the learned ‘complex cells’ could be classified as disparity tuned, exhibiting invariance to phase in each eye and a preference for a particular phase difference between the two eyes. We conclude that subspace analysis can learn models tuned for disparity, but also that other, non-disparity-tuned cells emerge whose properties are not yet fully understood.
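The subspace ("energy") response at the heart of this approach can be sketched with a toy quadrature pair; the hand-built Gabor filters below are stand-ins for the features that ISA would learn from natural images:

```python
import numpy as np

# Minimal sketch of a complex-cell-like subspace response: the unit
# sums squared responses of the linear filters spanning its subspace.
def gabor(size, freq, phase):
    x = np.arange(size) - size / 2
    envelope = np.exp(-x**2 / (2 * (size / 4) ** 2))
    return envelope * np.cos(2 * np.pi * freq * x + phase)

size, freq = 32, 0.1
f_even = gabor(size, freq, 0.0)          # even-symmetric filter
f_odd = gabor(size, freq, np.pi / 2)     # odd-symmetric (quadrature) filter

def subspace_response(signal):
    """Sum of squared projections onto the subspace filters."""
    return (signal @ f_even) ** 2 + (signal @ f_odd) ** 2

# Phase invariance: shifting the phase of a matched grating barely
# changes the response, as for a complex cell.
x = np.arange(size) - size / 2
r0 = subspace_response(np.cos(2 * np.pi * freq * x))
r1 = subspace_response(np.cos(2 * np.pi * freq * x + np.pi / 3))
```

In the binocular case each subspace contains filters for both eyes, and the preferred interocular phase difference of the learned pair is what makes a unit disparity tuned.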

17:45
Attention as Gibbs sampling

ABSTRACT. Much work has been done on the how, what, and where of attention, but less on the why. Here we propose a framework, based on the information theory of gambling, that attempts to answer both how and, importantly, why. We argue that attention should optimise the rate of reward; given certain knowledge of the probability of reward, this predicts that we fixate with a density proportional to that probability. Given uncertainty in this probability, we instead sample from a model of it and then assume the sample is correct (Thompson sampling). There are two main problems with this proposal. Firstly, the chicken-and-egg problem: the probability that a given feature is associated with reward depends on the task; the probability that a given task is rewarding depends on the features present. Secondly, many sources of relevant information are represented by different cortical areas, and these areas each have different "views" of the world. We show that by treating cortical areas as (Dirichlet) variables, we can use Gibbs sampling to sample from the full joint reward distribution, and that inhibition of return speeds convergence. We relate the predictions of this model to the results of visual search experiments.
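The Thompson-sampling step described above can be sketched with a two-location bandit; the Beta posteriors and reward probabilities are illustrative assumptions, not details from the abstract:

```python
import random

# Minimal sketch of Thompson sampling over fixation locations: sample
# a reward probability for each location from its posterior, act as if
# the sample were true, then update from the observed outcome.
random.seed(0)

true_reward = [0.2, 0.8]          # hidden reward probabilities per location
alpha = [1.0, 1.0]                # Beta posterior parameters per location
beta = [1.0, 1.0]

def choose():
    """Sample a reward probability per location; fixate the argmax."""
    samples = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
    return samples.index(max(samples))

counts = [0, 0]
for _ in range(2000):
    i = choose()
    counts[i] += 1
    reward = random.random() < true_reward[i]
    alpha[i] += reward
    beta[i] += 1 - reward
# Fixations concentrate on the more rewarding location over trials.
```

The full model in the talk replaces this single Beta posterior with Dirichlet variables per cortical area and uses Gibbs sampling over their joint distribution; the decision rule per sample is the same.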

16:00-17:45 Session 27C: 3D vision, depth and stereo


Location: C
16:00
The finite depth of visual space inferred from perspective angles

ABSTRACT. Retinal images are perspective projections of the visual environment. Despite this, it is not self-evident that visual space is a perspective representation of physical space. Analysis of underlying spatial transformations shows that visual space is perspective only if its depth is finite. Three subjects judged perspective angles, i.e. angles perceived between parallel lines in physical space, between real rails of a straight, disused, railway track. The subjects also judged the perspective angle from pictures taken from the same point of view. Perspective angles between real and depicted rails ranged from 27% to 83% of their angular size in the retinal image. Perspective angles prescribe the distance of vanishing points in visual space. Computed distances were all shorter than six meters! The extent of a hypothetical space inferred from perspective angles does not match the depth of visual space, as it is perceived. The incongruity between perspective angles and depth of visual space is huge but apparently so unobtrusive in human vision that it has remained unnoticed. The current results argue against methods that have been used to measure the geometry of visual space. The mismatch between perceived angles and distances casts doubt on the concept of a consistent visual space.

16:15
3-D Perception from Anomalous Motion perceived in Still Figures

ABSTRACT. Human vision has the ability to perceive 3-D from motion. Recently, volume was added as a new category to the previously reported depth, shape, structure and surface (Cheng, 2010, Optical Review 17-5 439-442; Cheng, 2011, Optical Review 18-4 297-300). We found that 3-D perception could be obtained not only from continuous real motion but also from the velocity field produced by the repetition of one-stroke apparent motion, or of a piece of motion over time with a suitable ISI (Cheng, 2014, Psychology Research 4-9 685-692). The anomalous motion perceived in still figures (http://www.ritsumei.ac.jp/~akitaoka/) was explained by a mechanism similar to that in the velocity field (Idesawa, 2010, Optical Review 17-6 557-561); 3-D perception could therefore be expected from the anomalous motion perceived in still figures. We examined pictures of suitably distributed anomalous-motion elements differing in direction and strength, in which the other cues (perspective, occlusion, pictorial and shading) were almost entirely removed; 3-D percepts, except volume, were then demonstrated. Using the Poggendorff probe (Wang, 2008, Appl. Phys. Express 1 078001), it was shown that surface perception could be obtained with the anomalous motion but not without it.

16:30
Insect stereo vision demonstrated using virtual 3D stimuli

ABSTRACT. Insect stereo vision demonstrated using virtual 3D stimuli

Stereo or 3D vision is a marvellous visual ability, but the mechanisms that underlie it have only been studied in vertebrates. Stereopsis has been demonstrated in only one invertebrate – the praying mantis – but no detailed investigation has been possible because of a lack of techniques. We developed stereo displays for insects ('insect 3D cinemas') using a polarization-based approach and a spectral-content-based approach, and tested the utility of both at delivering an illusion of 3D to praying mantises. We found that the polarization-based approach failed to deliver an illusion of 3D to the mantis, but the spectral-content-based approach clearly succeeded. With the latter approach, mantises struck at targets that were 10 cm away from them when the disparity of the target stimuli simulated a distance of 2.5 cm, but not when the disparity was reversed or zero. We thus definitively demonstrate insect stereopsis and open up novel avenues of research into invertebrate-specific algorithms of depth perception and the parallel evolution of stereo computation.

16:45
The interaction between familiar size and stereoscopic depth cues
SPEAKER: Paul Hands

ABSTRACT. Stereoscopic three-dimensional (S3D) technology is rapidly advancing in the entertainment and medical fields. As with all fast-developing technology, problems arise that need solving. One important issue in S3D is cue conflict: the brain receiving different depth information from different signals. One such conflict, between familiar size and stereo information, may lead to the belief that something is incorrectly displayed; in industry this issue is widely referred to as miniaturisation. Although the conflict is well known in the commercial world of S3D, it has received little attention in research. We considered whether humans favour a familiar size or a stereo depth cue. Using a credit card displayed in S3D, we varied size and disparity information to test which cue was preferred. Subjects were required to decide whether the card appeared bigger or smaller than a standard credit card. Analysis using probability heat maps revealed that humans favour familiar size cues and often ignore disparity. Mathematical modelling verified a heavier weighting toward the familiar size cue, reflecting the weaker reliability of the disparity cue. This could have repercussions for medical operations that use S3D technology, if the displayed image has size distortions.

17:00
A stereoscopic look at frequency tagging: Is a single frequency enough?

ABSTRACT. Stereopsis has been extensively researched, primarily by psychophysics in humans and by electrophysiological recordings in animals. Comparatively few human studies employ electroencephalography (EEG). An exception is a study by Norcia & Tyler (1984), which used frequency tagging to isolate cortical mechanisms sensitive to binocular disparity. We now return to this technique using high-density EEG, which permits source localisation. Our display is a mirror stereoscope with two identical CRT monitors; the stimuli were dynamic random-dot stereograms (white and black dots on a grey background) updated every 8.3 ms, presented in 4 s trials. The disparity profile formed a horizontal grating with an amplitude of 0.065° and a spatial frequency of 0.25 cycles/deg. The tagging frequency is half the grating inversion rate. Grating trials were interleaved with control trials, consisting of two transparent planes at +/-0.065° disparity. From 11 participants providing about 67 trials, our preliminary analysis indicates significantly higher coherency at integer harmonics of the tagging frequency, but no significant difference in spectral amplitude, when compared to control trials.

Based on our results, we suggest that a single tagged frequency component is enough for reliable analysis, when a high-level function such as disparity processing is investigated.
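The frequency-tagging analysis can be sketched on a simulated trace; the sampling rate, tagging frequency and harmonic amplitudes below are illustrative, not the study's values:

```python
import numpy as np

# Minimal sketch of frequency-tagging analysis: recover response
# amplitude at harmonics of a tagging frequency from a noisy trace.
fs, tag, dur = 500.0, 2.0, 4.0          # Hz, Hz, seconds (4 s trials)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

# Simulated EEG trace: responses at the 1st and 2nd harmonics + noise.
trace = (1.0 * np.sin(2 * np.pi * tag * t)
         + 0.5 * np.sin(2 * np.pi * 2 * tag * t)
         + 0.3 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(trace)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# Amplitude at the tagged harmonics stands out against neighbouring bins.
```

A 4 s trial gives 0.25 Hz frequency resolution, so the tag and its harmonics fall on exact FFT bins; coherency analysis additionally uses the phase consistency of these bins across trials.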

17:15
The perception of straightness and parallelism in extended lines
SPEAKER: Olga Naumenko

ABSTRACT. Rogers and Naumenko (2014) reported that observers’ judgements of the alignment of three artificial ‘stars’ projected onto a planetarium dome was biased substantially when the angular separation of the outer stars was > 60˚ (horizontal azimuth). In the first of three new experiments, observers adjusted the curvature of extended (90˚ horizontal azimuth) lines projected onto the planetarium dome until they appeared to be straight. There were substantial biases away from the veridical, great circle loci. Biases decreased with increasing elevation of the lines and were almost eliminated when the horizon was not visible. These results are consistent with the curved appearance of straight-line jet-trails across the sky and our explanation of the New Moon illusion (Rogers and Anstis, 2015). Judgements of when a pair of lines appeared to be both straight and parallel were also biased by the presence of the horizon: their separation being biased towards a constant angular separation. A similar pattern of results was found when observers adjusted the curvature of a set of multiple vertical or horizontal lines. These results show that the perceived straightness and parallelism of extended lines depends crucially on the perceived distance of the surface on which those lines are seen.

17:30
The functional significance of stereopsis does not follow a developmental trajectory

ABSTRACT. Accurate judgement of depth plays a fundamental role in a number of activities. Furthermore, an advantage for binocular viewing and good stereoacuity has been demonstrated in hand-eye coordination tasks. It is unknown whether this functional significance of stereopsis follows a developmental trajectory.

We sought to determine how motor performance is impacted by (a) monocular vs. binocular viewing, (b) stereoacuity and (c) age. Seventy-two children, aged 4 - 11 years, performed three different motor tasks (ball-catching, bead-threading and balancing on a beam) both binocularly and monocularly. Crossed and uncrossed stereoacuity thresholds were measured using the TNO stereotest. The scores for each activity (balls caught, beads threaded and foot touchdowns) were standardised and analysed using a linear mixed model.

The relative utility of binocular viewing was most important for catching (average z-score difference of .95 between binocular and monocular viewing) and least important for balance (z-score difference of .26). However, stereoacuity only affected balance, with individuals with poor stereopsis demonstrating worse postural stability. Performance on all motor tasks improved with age, but there was no age-dependent effect of binocular vs. monocular viewing or stereoacuity, indicating that the functional significance of stereopsis is not moderated by age.

18:00-23:00 Session : Goodbye Party
Location: St Luke's Church