ECVP2015: EUROPEAN CONFERENCE ON VISUAL PERCEPTION
PROGRAM FOR MONDAY, AUGUST 24TH

09:00-11:00 Session 5A: Beyond Veridicalism: alternatives to conventional vision theory

Location: A
09:00
Alternatives to veridicalism in vision theory

ABSTRACT. Conventional vision theory is based on the assumption that the visual system provides the organism with a representation of the outside world which is as veridical as possible. Examples of this widespread (but sometimes hidden) assumption are found in formulations of vision as inverse optics (e.g., emphasis on the recovery problem) and Bayesian inference (e.g., optimality of cue combination). In addition to suffering from deeply rooted ‘physics envy’ and misguided philosophical assumptions about realism versus idealism and objectivism versus subjectivism, this line of reasoning denies a crucial problem posed by our phenomenal awareness: Often the world around us does not look like we know it from physics textbooks. Fortunately, phenomenology provides a respectable philosophical alternative, which has also inspired serious contenders to conventional vision theory (e.g., Gestalt psychology, user-interface theory). Many of these alternatives do more justice to how we experience the world, without denying its evolutionary origin (e.g., ecological fitness of behavior, perception-action coupling). This introduction to the symposium provides the necessary theoretical background to understand what is at stake. It also offers a preview of the theoretical alternatives to veridicalism, and of how these can inspire new empirical work on a variety of topics (e.g., depth perception, amodal completion).

09:20
Template structure of visual awareness

ABSTRACT. The animal biology of visual awareness has been a major topic of ethology from the early twentieth century on (Nobel Prize 1973 for Konrad Lorenz, Nikolaas Tinbergen and Karl von Frisch, as immediate successors of Jakob von Uexküll). For psychology the work on vertebrates is perhaps of primary interest. Here we concentrate on the topic of releasers. Animals often react in stereotypical ways that – from an anthropocentric perspective – appear really odd in view of their visual acuities. For instance, a bird may “take a brick for an egg” or “take fishes for its chicks”. Such “releasers” apparently act as all-powerful templates that fully overrule “inverse optics”. Are humans unique among vertebrates in seeing things “as they are”? As we found, the answer is perhaps humiliating from an anthropocentric perspective. Human visual awareness is rife with “template” objects that are hugely unlike the “correct” interpretation of the optical data. We show a number of examples that might well be experienced as shocking. Such findings necessarily result from experimental phenomenological methods, as psychophysics proper will not reveal them – which is probably why the textbooks do not mention them.

09:40
Where do we see?

ABSTRACT. Many theories of visual perception assume there is an external, independent reality full of pre-existing objects. The supposed function of vision is to veridically represent this reality so we are aware, internally, of these objects and their properties. But a perceptually independent reality contains no point of view; it is perspectiveless and lacks perceptual properties. Once subject to an act of perception it ceases to be independent, and so is never truly perceived. This poses an obvious and serious problem for any veridical account of perception. The problem is avoided, however, if we renounce the assumed distinction between internal perception and external reality. The implications for vision science would be profound, and potentially beneficial. Rather than internally perceiving an external object, the object is understood to be located internally and externally at the same time, as is the associated act of perception. The object and its perception become identical and spatiotemporally distributed. The conventional causal chain, in which the object gives rise to the percept, can equally well be inverted: the percept gives rise to the object, thus rendering the inverse problem in optics redundant. Although contrary to current scientific orthodoxy, this approach nevertheless may have explanatory value.

10:00
Perception, Inverse optics, and Probabilistic inference
SPEAKER: Manish Singh

ABSTRACT. Visual perception is commonly treated as inverse optics—its supposed goal being to “undo” the effects of optical projection/rendering and “recover” the “true” world properties. The standard Bayesian framework for vision (SBFV) models inverse optics by inverting conditional probabilities: if one knows p(Image | Scene), one can estimate p(Scene | Image), assuming some knowledge of p(Scene). SBFV has intuitive appeal; however it makes some strong assumptions. It effectively assumes that the space of perceptual interpretations is identical with (or isomorphic to) the objective world. The posterior mapping can then be assumed to be inverting the rendering map. These assumptions are ultimately unjustifiable, however. First, there are various perceptual phenomena that cannot be understood simply as inverting optics/physics. Second, a consideration of evolving perceptual systems makes it clear that any general model of perception must allow for different possible relationships between perceptual spaces and the “objective world”. Probabilistic inference is still an appropriate and powerful tool for modeling perception, but it now takes place strictly between representational spaces. For biological organisms, perception thus acts as a user interface that allows effective (i.e. fitness enhancing) interaction with a world that remains fundamentally unknown to the organism. [Joint work with: Donald Hoffman; Jacob Feldman]
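The inversion of conditional probabilities described in this abstract can be illustrated with a minimal numerical sketch. This is not the speakers' model; the two scene hypotheses and the likelihood and prior values are invented purely for illustration:

```python
import numpy as np

def posterior(likelihood, prior):
    """Bayes' rule over a discrete hypothesis set:
    p(scene | image) ∝ p(image | scene) * p(scene)."""
    unnormalised = likelihood * prior
    return unnormalised / unnormalised.sum()

# Two hypothetical scene hypotheses (e.g. "convex" vs "concave")
# that could both have rendered the observed image.
likelihood = np.array([0.8, 0.4])   # p(image | scene)
prior      = np.array([0.7, 0.3])   # p(scene)
post = posterior(likelihood, prior)
```

With these invented numbers the posterior favours the first hypothesis (0.56/0.68 ≈ 0.82), showing how a prior over scenes is needed before the rendering map can be "inverted" at all – the assumption the abstract calls into question.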

10:20
The reliability of experience and the experience of reliability
SPEAKER: Paul Hibbard

ABSTRACT. Conventional theories of depth perception make a number of assumptions about how cues are encoded, combined and used. Firstly, they assume that estimates are unbiased, and that noise is Gaussian and uncorrelated. Secondly, they assume that the goal of combining cues is to minimise the variance in the resulting estimate. Finally, they assume that all cues provide the same kind of information about 3D shape. While typical psychophysical methods cannot determine whether perception is veridical, other methods have demonstrated significant perceptual biases. Psychophysical and physiological results have also revealed a variety of representations of many aspects of depth. This raises the question of how information from a multitude of qualitatively different representations gives rise to our unitary perceptual experience, and how this information is used in perceptual and behavioural tasks. I will outline how such an array of representations can be combined in the perception of depth, in a way that links perception and action, and places reliability at the heart of phenomenology.
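The variance-minimising cue combination that these conventional theories assume has a simple closed form: each unbiased cue estimate is weighted by its inverse variance. A minimal sketch of that standard rule (the depth values and variances below are invented for illustration):

```python
import numpy as np

def combine_cues(estimates, variances):
    """Minimum-variance linear combination of unbiased, independent,
    Gaussian cue estimates. Weights are reliabilities (1/variance);
    the combined variance is never larger than the best single cue's."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    combined_estimate = np.sum(weights * estimates) / np.sum(weights)
    combined_variance = 1.0 / np.sum(weights)
    return combined_estimate, combined_variance

# e.g. a disparity cue (depth 10, variance 1) and a texture cue (depth 14, variance 4)
est, var = combine_cues([10.0, 14.0], [1.0, 4.0])
```

Here the combined estimate (10.8) sits closer to the more reliable disparity cue, and the combined variance (0.8) is below either cue's own variance – exactly the optimality assumptions the abstract lists before questioning them.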

10:40
The immediate visual quality of visibility: how the visual system communicates confidence

ABSTRACT. As has been well appreciated since the early days of Gestalt psychology, visual perception is a highly non-local process in which context plays a pivotal role. In this regard, there is no fundamental difference between the visual perception of occluded and non-occluded object regions. The former (amodal perception) is just a particularly extreme case where the visual system creates representations of objectively invisible object regions based on context information alone. Why, then, are unoccluded object regions, but not occluded ones, experienced as visible? I will argue that the seemingly obvious answer – that unoccluded image regions are experienced as visible because they are visible (and vice versa for occluded image regions) – is misleading. This conclusion is based on experimental evidence showing that occluded image regions can be experienced as directly visible if the sensory evidence is sufficiently conclusive. Instead, I propose that the immediate visual experience of visibility is better understood as an “icon” in the “user interface” furnished by the perceptual system, which informs the organism about the trustworthiness of the current sensory evidence available for perceptual inferences. This explains, amongst other things, how “kinesthesis can make an invisible hand visible” (Dieter et al., 2014, Psych. Sci. 25(1), 66-75).

09:00-11:00 Session 5B: Visual perception research for use in the vision clinic

Location: B
09:00
Cortical Organization for Binocular Pattern Vision
SPEAKER: Andrew Parker

ABSTRACT. The 30 or more visual cortical areas form a sophisticated pattern recognition system, able to extract structure flexibly from the retinal inputs. The system is quick and automatic, but demands large resources within the cerebral cortex. It is argued that binocular stereoscopic vision must be regarded not just as delivering a sensation of depth but also as an embedded component of cortical pattern vision. Unlike colour or motion, binocular disparity signals are widely distributed throughout many cortical visual areas. There is a formal similarity between the feature pairings that define stereo depth and those that enable detection of symmetry or repeated-element patterns. Proposal of a common cortical circuitry suggests an explanation for the observation that clinical patients with amblyopia often have deficits in pattern vision that accompany the better-known losses of binocularity. High-resolution cortical imaging with 7-Tesla MRI in neurotypical humans reveals locations within early visual cortical areas (V2, V3, V3a) with highly reproducible BOLD responses to binocular disparity. The size and shapes of these regions are comparable with scaled versions of structures that have been identified anatomically and physiologically in the monkey visual system. These findings are relevant to any attempts to restore binocular function.

09:20
ASTEROID: Accurate STEReoacuity measurement in the eye clinic
SPEAKER: Jenny Read

ABSTRACT. Stereoacuity measurements are regarded as the gold standard for assessing binocular vision function (Elliott & Shafiq, 2013). Accordingly, children with binocular visual disorders such as strabismus and amblyopia routinely have their stereoacuity measured with one of the clinical stereotests currently available, e.g. the Frisby, Randot or Titmus stereotests. These tests all suffer from severe limitations: They offer only a limited number of discrete test levels; viewing distance has to be controlled by the clinician; they are unengaging so children are typically unwilling to complete very many trials. The ASTEROID project aims to develop a new stereotest on a 3D tablet computer. The tablet uses autostereo (parallax barrier) technology so that no 3D glasses are required. The test uses a dynamic random-dot stereogram to avoid monocular cues and present the most rigorous test of stereopsis. The computer controls task difficulty via a Bayesian adaptive staircase, while the front camera is used to monitor viewing distance. Most importantly, the stereotest task will be embedded in a fun game which uses colours, sounds and animation to keep children engaged and responsive. In this way, we aim to measure stereoacuity with unprecedented accuracy and precision, providing clinicians with high-quality information about their patients’ vision.
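A Bayesian adaptive staircase of the kind mentioned here maintains a posterior over candidate thresholds, places each trial where the posterior currently points, and updates after each response. The sketch below is a generic grid-based version run against a simulated observer – an illustration of the principle under assumed psychometric parameters, not the ASTEROID implementation:

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_correct(level, threshold, slope=0.5, guess=0.5, lapse=0.02):
    """Assumed 2AFC psychometric function (cumulative-normal core)."""
    return guess + (1.0 - guess - lapse) * norm_cdf((level - threshold) / slope)

def bayesian_staircase(true_threshold, n_trials=200, seed=1):
    rng = random.Random(seed)
    grid = [i * 0.02 for i in range(1, 151)]   # candidate thresholds 0.02..3.0
    log_post = [0.0] * len(grid)               # flat prior, log scale

    def posterior_mean():
        m = max(log_post)
        w = [math.exp(lp - m) for lp in log_post]
        return sum(g * wi for g, wi in zip(grid, w)) / sum(w)

    for _ in range(n_trials):
        level = posterior_mean()               # place trial at the posterior mean
        correct = rng.random() < p_correct(level, true_threshold)
        for i, g in enumerate(grid):           # Bayesian update of each hypothesis
            p = p_correct(level, g)
            log_post[i] += math.log(p if correct else 1.0 - p)
    return posterior_mean()

estimate = bayesian_staircase(true_threshold=1.0)
```

With a fixed seed the run is reproducible, and the posterior mean settles close to the simulated observer's true threshold; the efficiency of this trial placement is what lets such tests extract a threshold from relatively few trials.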

09:40
Short-term monocular deprivation alters early components of Visual Evoked Potentials

ABSTRACT. It has recently been shown that 150 minutes of monocular deprivation alters visual perception in adult humans, causing the deprived eye to dominate the dynamics of binocular rivalry. We investigated the neural mechanisms underlying this homeostatic plasticity by measuring transient visual evoked potentials (VEPs) on the scalp of adult humans during monocular stimulation (visual stimuli: horizontal sinusoidal gratings, size 4°, S.F. 2 cpd, contrast 64%, presentation duration 100 ms, inter-stimulus interval 500-1000 ms) before and after 150 minutes of monocular deprivation. Deprivation strongly affected the amplitude of the earliest component of the VEP (C1), which originates in primary visual cortex (confirmed by source localization analysis). C1 amplitude increased (+66%) for the deprived eye, while it decreased (-45%) for the non-deprived eye. We further report that following monocular deprivation, the amplitude of the peak of the evoked alpha power increased on average by 34% for the deprived eye. These results indicate that short-term monocular deprivation strongly alters ocular balance in the primary visual cortex of adult humans, demonstrating a high level of residual homeostatic plasticity, likely mediated by a change of cortical excitability.

10:00
How does inattentiveness affect threshold estimates in children?

ABSTRACT. When assessing the visual abilities of children, researchers tend to use psychophysical techniques that have been designed for use with adults. Yet, children’s poorer attentiveness might bias the thresholds obtained by these methods. Here, we quantified inattentiveness in children aged 6 to 7 years (n = 31), 8 to 9 years (n = 39) and adults (n = 19), and used simulations to assess its effect on speed discrimination thresholds collected using three psychophysical techniques: Method of Constant Stimuli (MCS), QUEST and a 1-up 2-down Levitt staircase. As expected, children had more attentional lapses than adults, across all techniques. Lower speed discrimination thresholds were obtained using QUEST compared to MCS and Levitt staircases, but the difference between techniques did not interact with age group. Next, we used Monte Carlo simulations to model the effect of different levels of inattentiveness on thresholds obtained with each technique. Consistent with our empirical data, MCS and Levitt staircases underestimated sensitivity when lapse rates were increased above 0%. QUEST was more robust to lapses, providing more accurate estimates of sensitivity. These results demonstrate that threshold estimation techniques vary in their robustness to inattentiveness, which has important implications for assessing the visual perception of children and clinical groups.
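The Monte Carlo logic can be sketched as follows – a hypothetical 2AFC observer whose lapses produce random responses, run through a 1-up 2-down staircase. This is an illustration of the approach, not the authors' simulation code; all parameter values are assumed:

```python
import math
import random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def respond(level, threshold=1.0, sigma=0.3, lapse=0.0, rng=random):
    """Simulated 2AFC observer: on a lapse the response is a coin flip,
    otherwise it follows a cumulative-normal psychometric function."""
    if rng.random() < lapse:
        return rng.random() < 0.5
    return rng.random() < 0.5 + 0.5 * norm_cdf((level - threshold) / sigma)

def levitt_staircase(lapse, n_trials=4000, start=2.0, step=0.05, seed=7):
    """1-up 2-down staircase; threshold estimate = mean reversal level."""
    rng = random.Random(seed)
    level, streak, direction = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        if respond(level, lapse=lapse, rng=rng):
            streak += 1
            move = -1 if streak == 2 else 0      # 2 correct -> make harder
            if streak == 2:
                streak = 0
        else:
            streak, move = 0, 1                  # 1 wrong -> make easier
        if move:
            if direction and move != direction:
                reversals.append(level)          # staircase changed direction
            direction = move
            level += move * step
    tail = reversals[6:]                         # discard early reversals
    return sum(tail) / len(tail)

t_attentive = levitt_staircase(lapse=0.0)
t_lapsing = levitt_staircase(lapse=0.2)          # 20% lapse rate
```

Because lapses add errors even on easy trials, the staircase converges at a higher stimulus level for the lapsing observer, i.e. its threshold is overestimated and sensitivity underestimated – the pattern this abstract reports for MCS and Levitt staircases.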

10:20
CVIT 3-6, a screening test for cerebral visual impairment in young children

ABSTRACT. The lack of tailored neuropsychological tests for Cerebral Visual Impairment (CVI) makes diagnosis and rehabilitation complex. The available instruments often measure only one aspect of visual functioning, results are confounded by comorbid cognitive or motor difficulties, and tests are often too complex for young children. To address these problems, we developed the Cerebral Visual Impairment Test for 3 to 6 year olds (CVIT 3-6). Test development reflected a cross-talk between vision research and clinical relevance. Six pilot studies with 100 children resulted in 14 subtests covering object recognition, degraded object recognition, motion perception and local-global processing. Norm data were collected from 250 children. Based on this sample, age-specific percentile scores can be calculated for each new patient entry. Validity and reliability were evaluated in children with CVI, intellectual disabilities, low acuity or typical development. Test-retest reliability and Cronbach’s alpha were determined. Internal validity was evaluated by confirmatory factor analyses. Discriminant validity was assessed by comparing performance between groups and correlating CVIT 3-6 performance with visual acuity, intelligence and autism. Convergent validity was evaluated by correlating CVIT 3-6 performance with the other measures of visual functions.

10:40
Spatial mapping of retinal correspondence in strabismus
SPEAKER: Zahra Hussain

ABSTRACT. Strabismus (misalignment of the visual axes of the eyes) can lead to visual confusion and diplopia. To avoid these problems, stimuli projecting onto a region of the deviating retina can be actively inhibited (inter-ocular suppression) or realigned with corresponding retinal positions in the fixating eye (anomalous retinal correspondence). Here we describe a dichoptic matching method for mapping the degree of retinal correspondence between the eyes at multiple locations in the visual field in normal and strabismic observers. Two dots, each viewed by a different eye, are positioned at corresponding locations in the visual field with reference to a central fixation cross, seen by both eyes. This positional mapping method provides good quantification of the degree of retinal correspondence in strabismus. We will present evidence for: (1) systematic shifts in positional localization that correspond to the subjective angle of squint; (2) magnified biases in regions of the central visual field that are consistent with classic patterns of suppression; and (3) a left-right hemifield asymmetry in positional correspondence that is well-correlated with the direction of squint. We will discuss these binocular distortions of visual space with reference to the associated cause of early visual deprivation.

09:00-11:00 Session 5C: Ecological validity in social eye movement research

Location: C
09:00
Temporal dynamics of social attention in face-to-face situations
SPEAKER: Megan Freeth

ABSTRACT. In face-to-face eye-tracking studies, researchers often use total fixation times on areas of interest to assess social attention. However, other measures can also be used to investigate potentially more subtle differences in social attention strategy. Eye-tracking data from a face-to-face interaction study will be presented which aimed to assess potential differences in social attention strategy between individuals who were classified as being high or low in autistic traits. No differences in overall fixations in various areas of interest were observed. However, there were clear differences in temporal dynamics of eye-movements. Individuals who were high in autistic traits exhibited reduced visual exploration overall, as demonstrated by shorter and less frequent saccades during the face-to-face interaction. Differences were not accounted for by social anxiety. Thus it is proposed that multiple eye-tracking measures should be used to understand more of the subtleties of visual attention strategy. Such measures may be less under conscious control and therefore less susceptible to modification in response to observer effects.

09:15
Context dependence of attentional capture
SPEAKER: Amelia Hunt

ABSTRACT. A critical issue in attention research is the precise degree to which measures of human orienting behaviour in laboratory tasks scale up to more naturalistic settings. We have been running a series of experiments that attempt to bridge this gap by measuring attentional biases in each participant in two different contexts: 1) speeded responses to stimuli presented repeatedly on a computer screen, and 2) fixation behaviour in an unconstrained and task-free context (for example, while they are sitting in the waiting room, ostensibly before the formal experiment begins). The aim is to evaluate the stability of attentional biases in a group of participants within one context or the other, as well as in the same individual across both contexts. In experiments examining the effect of faces and facial expressions of emotion, statistically robust attentional biases can be measured in both the lab and the waiting room context. However, the stimulus categories that produce stable biases differ across contexts, and a bias in a particular individual does not generalize from one context to another. The results illustrate the context-sensitivity of attention and advise caution in over-generalizing from lab to life.

09:30
The social presence effect of wearing an eye tracker: Now you see it, now you don't

ABSTRACT. People often behave differently when they know they are being watched. It was reported previously that wearing an eye tracker can serve as an implied social presence and cause individuals to avoid looking at particular stimuli. This presents a methodological challenge to researchers who wish to use eye trackers to understand the function of human attention in real world settings. Our recent work reveals that the implied social presence effect of eye trackers can dissipate quickly, in less than 10 min. However, drawing an individual's attention back to the fact that they are wearing an eye tracker can reactivate its social presence effect. This suggests that eye trackers induce a transient social presence effect, which is rendered dormant when attention is shifted away from the source of implied presence.

09:45
The interpretation of gaze in two-way social interactions
SPEAKER: Tom Foulsham

ABSTRACT. Visual attention allows us to select important stimuli in a complex and social environment. One way to study this in ecologically valid social settings is to present (increasingly-complex) stimuli, such as images and video of people, and record what people look at. Another way is to record attention in real interactions, but these may lack experimental control. I will describe a different approach, where we ask participants to make inferences while they watch the eye and head movements of others. For example, in one study, observers were able to correctly guess which of four items a participant preferred by watching a replay of their eye movements. Moreover, when participants knew that they were being watched and were asked to mislead the observer, they spontaneously changed their behaviour. Thus we can investigate the communicative function of gaze in a two-way, but controlled, situation. Recent extensions of this approach show that participants modulate their eye and head movements differently when hiding their preferences from an observer, presumably because head movements are a more salient cue. Visual behaviour depends on the social context, and studying this across effectors is a rich avenue for future research.

10:00
How attention is shaped by beliefs about other people

ABSTRACT. We are highly tuned to each other’s visual attention. Gaze cueing experiments, for example, show that a face looking in one direction will trigger attention in that direction. However, it is not clear whether the effect of such social cues is due to the appearance of a face, or the belief that the face represents the attentional focus of another person. We investigated this question by changing participants' beliefs about the social nature of a cue in an inhibition of return paradigm. Participants believed that a red dot either reflected the attentional focus of another person seated behind them or was randomly placed by the computer. We found that inhibition of return effects were amplified when the cue had a social meaning. Moreover, we found that these inhibition of return effects were modulated top-down by participants' beliefs about their partners' competence level and the task they were engaged in. Despite previous claims that attentional inhibition is ‘blind’ to such factors, when a cue was imbued with a social context it exerted a stronger influence over low-level visual attention.

10:15
Looking at people in real life: Methods for investigating social attention

ABSTRACT. The study of visual attention often relies on eye tracking technologies in order to measure where and when participants look. Using eye trackers poses some challenges, however, not the least of which is that they can make participants aware that their looking behaviour is being monitored and this may change how people behave, especially in social settings. In this talk, we report findings which demonstrate that participants look at another person in real life very differently than when the same person is presented in a pre-recorded video. This suggests that using potentially obtrusive computer-based methodologies to measure attention to images of people may fall short of capturing real-life attentional behaviour. To measure everyday social attention nonintrusively, we recently implemented a simple but powerful paradigm using only a confederate and a hidden video camera. In so doing, we demonstrate that pedestrians will covertly attend to nearby others, and modify their overt reactions based on social norms. These findings open new doors for attention researchers to study how people behave in complex situations and demonstrate the utility of both new technologies and creative, accessible methodologies.

11:00-12:00 Session 6: Vision & Cognition; The Human Face; Visual Art, Attraction & Emotion

Vision & Cognition (Expertise, Learning, Memory & Decisions) / The Human Face (Detection, Discrimination & Expression) / Visual Art, Attraction & Emotion.

Location: Mountford Hall
11:00
Preferential inputs of luminance signals for visual working memory
SPEAKER: unknown

ABSTRACT. Working memory (WM) is the ability to encode and temporarily maintain information. There is some evidence that early perceptual processes make an important contribution to successful WM performance (Haenschel et al., 2007). However, perceptual contributions to WM are not yet fully understood. In several experiments we investigated whether stimuli specifically designed to engage luminance or the cone-opponent chromatic mechanisms in the visual system would differentially influence WM performance. We obtained detection thresholds and contrast thresholds necessary to remember the stimuli at three WM loads and used this to calculate WM ratios. Participants performed a delayed discrimination task in which they had to remember up to 3 subsequently presented abstract shapes, while we recorded event-related potentials (ERPs). The intensity of the stimuli was fixed at a suprathreshold level based on the baseline measurements of discrimination thresholds. The resulting ratio of load 3 to load 1 contrast threshold was lower for luminance compared to chromatic contrast. Performance on the delayed discrimination task decreased with WM load but this decrease was lower at higher WM loads for luminance isolating stimuli. The results point to the differential contribution of different cone signals to WM performance, highlighting the importance of early encoding in these tasks.

11:00
Spatial vision research in contemporary art practice: No room for 'perceptual errors'
SPEAKER: unknown

ABSTRACT. An unusual and misleading lack of correspondence between a perceptual object and an object in the external world is often referred to as 'perceptual error', or as 'non-veridical' perception. On a phenomenological level, there is no clear difference between veridical and non-veridical perception – usually, measurements or stimulus manipulations of some kind are necessary to reveal the deceptive nature of a perceptual experience. Here, we present a range of art installations that address the concept of 'perceptual error'. In particular, these installations are characterized by a reduced availability of potentially corresponding external world categories, enabling observers to focus on perceptual experience itself. For example, viewing high intensity, flickering lights with the eyes closed gives rise to rich perception of patterns and objects of varying levels of abstraction. The arrangement of decontextualized light spots in dark spaces triggers the perception of vivid yet highly mysterious objects. Small spatio-temporal changes of line patterns yield rapid perceptual switching between 2D and 3D objects. By highlighting perceptual experience without corresponding external world categories, these works of art challenge notions of 'perceptual error' and 'veridicality'. We suggest that the installations are useful for experiential and theoretical education, investigating vision and phenomenal reality.

11:00
It is more difficult to judge global properties of shapes described by vertices than by curvature extrema
SPEAKER: unknown

ABSTRACT. Contours are important for perceiving solid shape. Along contours, extrema of curvature specify surface curvature. Vertices of polygons are a special case of extrema: when a vertex is perceived as convex (or concave) it is processed as a positive maximum (or negative minimum). A corner enhancement phenomenon would predict faster responses to probes located near vertices. We used polygons and their smoothed versions to compare vertices and extrema in two tasks involving global properties of shape. In Experiment 1 observers discriminated stimuli with bilateral symmetry from random stimuli. The contours were either closed forming a single object, or faced each other forming two separate objects. In Experiment 2 observers indicated whether a pair of stimuli were identical (translation) or different. In both experiments the presence of vertices or curvature extrema was task irrelevant. Because the visual system is tuned to processing smooth curvature, we expected lower performance on RT, accuracy and sensitivity (d prime) for polygons. In both experiments, when stimuli were regular, smooth contours led to better performance. Perception of global shape from contours was harder when the convexities and concavities were vertices as opposed to curvature extrema. These findings are discussed in relation to theories of shape representation.

11:00
Visual recognition memory for aerial photographs
SPEAKER: unknown

ABSTRACT. People are able to memorize a large set of natural scenes and real-world objects (e.g., Konkle et al., 2010), for which there exists a massive stored knowledge base. In comparison, poorer memory performance can be expected for stimuli, such as aerial photographs, with which most people have little experience. We have examined visual recognition memory for orthogonal (generally, less familiar scenes) and oblique (more familiar scenes) aerial images in expert and untrained groups of participants. The participants first memorized images of urban environments. Afterward, they were shown pairs of images and indicated which of the two they had seen. The results show that experts who use aerial photographs on a daily basis can extract domain-relevant information significantly better than untrained viewers. Moreover, experts not only better remember the gist of the scenes portrayed, but they also more efficiently encode and recall specific details about aerial photographs. The same data pattern was found for all types of land use and for all scene scales. In comparison, there was no significant difference in performance between first-year geography students and first-year psychology students.

11:00
Averaging effects in spatial working memory do not depend on stored ensemble statistics
SPEAKER: unknown

ABSTRACT. Recall from visual working memory shows averaging effects. For example, the recalled position of a memorised item is biased toward the average location of all items in a memory array. A recent suggestion is that averaging reflects an attempt to optimise single-item recall by exploiting ensemble statistics. This proposal predicts that the average location is memorised independently from that of individual items. We compared normal subjects' perceptual estimates of the centre of mass (COM) of three-stimulus dot arrays, COM from recall and single items from recall. Perceptual estimates of COM showed a systematic bias toward the array's incenter; COM recall did not show this bias. The precision of COM recall was lower than COM perceptual estimates and higher than single item recall. In a right hemisphere patient with left hemianopia and neglect, COM perceptual estimates were systematically biased contralesionally, while COM recalls were biased ipsilesionally, confirming the dissociation between perception and recall. These findings suggest that COM is recalled by averaging the memorised items' positions rather than by retrieving its memorised perceptual estimate. Averaging in spatial recall may arise instead from a reference frame transformation, ensuring that the relative position of the item in the sample array is recalled.

11:00
Processing of Depth-Inversion Illusions: The special case of faces
SPEAKER: unknown

ABSTRACT. For the class of depth-inversion illusions (DII), perceived depth is opposite to stimulus physical depth - distant points on the stimulus appear closer than near points; thus convexities/concavities are perceived as concavities/convexities. Examples: hollow masks, “Termespheres” and reverse-perspectives, where painted scenes elicit a 3-D percept whose depth is opposite to the 3-D painted surface [Papathomas; Spatial Vision, 2007]. Possible explanation for DIIs: top-down influences, either specific knowledge of objects (such as 3-D faces), or general knowledge embodied as rules (such as perspective, or bias for convexity), influence the final percept [Gregory; Phil. Trans. R. Soc. B, 2005]. Interesting question: Can humans overcome such top-down influences and obtain depth-inverted percepts for stimuli in which top-down influences impede, rather than facilitate, depth inversion? Examples of such stimuli are normal, convex 3-D faces and “proper-perspectives”, in which the depicted perspective cues are consistent with the depth of the physical surface. Answer: Humans can overcome such top-down influences for proper-perspectives and Termespheres, but not for human faces. Together with evidence that human masks display a stronger inversion effect than perspective scenes [Papathomas, Bono; Perception, 2004], these results argue that 3-D faces are represented and processed differently than non-face 3-D stimuli.

11:00
Negative emotional objects cause pupil dilation despite low signal-to-noise conditions.
SPEAKER: unknown

ABSTRACT. Semantically relevant regions attract attention, especially in emotional images, even in low signal-to-noise conditions. However, if visual noise is high, the semantic meaning of an image is not recognized. The emotional response, measured as pupil dilation, might precede the recognition of semantic meaning. Semantic regions of interest (ROIs) in neutral, positive, and negative images were selected by observers. Images were transformed by adding pink noise in several proportions and presented in a sequence in a free-viewing paradigm. Pupil dilation was compared between fixations in the semantic ROIs and fixations in the background. Pupil dilation differed between fixations in semantic ROIs and in the background, depending on valence, with larger dilation for negative than for neutral and positive images. In semantic ROIs the significant difference between negative and other images emerged at a lower signal-to-noise level (80% noise) than in the background (70% noise). At 80% noise the average accuracy of scene identification was only 13% above chance, indicating that the content was not fully recognized. The results show that negative images affect physiological functioning even when their conscious identification is prevented by obscuring factors such as pink noise.

11:00
Impaired identity discrimination in developmental prosopagnosia as measured with steady state visual evoked potentials in an oddball task.
SPEAKER: unknown

ABSTRACT. Individuals with developmental prosopagnosia (DP) have severe face recognition deficits without any history of neurological damage. A reduced ability to discriminate between different faces could suggest that this deficit is partly due to a disruption of face processing at the level of the structural encoding of face identity. To test whether DPs have impaired identity discrimination of unfamiliar faces, a fast oddball paradigm was used to determine whether periodic changes of identity elicited steady-state visual evoked potentials (SSVEPs) as recorded with EEG. Faces were presented at a frequency of 5.88 Hz in 60-second sequences. At every fifth face in the sequence the identity of the face changed; thus the identity oddball discrimination frequency was 1.18 Hz. EEG frequency analysis at occipito-temporal electrodes demonstrated increased power at the 1.18 Hz oddball frequency and its harmonics in the control group, suggesting that identity changes were discriminated. DPs demonstrated a significantly attenuated oddball discrimination response at the same electrode sites, suggesting reduced detection of the identity changes. The presentation rate of the faces (approximately 170 ms per face) and the topography of the discrimination response suggest that this identity change discrimination impairment occurs at early structural encoding stages of the face-processing hierarchy.
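The frequency-tagging logic here (base rate 5.88 Hz, an identity change every fifth face, hence an oddball at 5.88/5 ≈ 1.18 Hz) can be sketched numerically. The simulated signal, amplitudes, and duration below are illustrative assumptions, not the recorded EEG:

```python
import numpy as np

fs, T = 200.0, 125.0                 # sampling rate (Hz), duration (s); illustrative
t = np.arange(int(fs * T)) / fs

base = 5.88                          # face presentation frequency (Hz), ~170 ms per face
oddball = base / 5                   # identity change every 5th face -> 1.176 Hz

# Toy "EEG": responses at both tagged frequencies plus Gaussian noise.
rng = np.random.default_rng(0)
sig = (1.0 * np.sin(2 * np.pi * base * t)
       + 0.5 * np.sin(2 * np.pi * oddball * t)
       + 0.3 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(sig)) ** 2 / t.size

def power_at(f):
    """Power at the FFT bin nearest to frequency f."""
    return power[np.argmin(np.abs(freqs - f))]
```

In such an analysis, power at the base and oddball bins (and their harmonics) stands far above the neighbouring noise bins, which is the signature the authors compare between groups.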

11:00
Avoid Fishing: Data-driven selection of regions-of-interest in EEG/MEG studies that avoids inflating false positive rates
SPEAKER: unknown

ABSTRACT. Visual phenomena and their neural mechanisms are commonly studied with EEG measurements (e.g., the N170 face-sensitive component). In analyses of EEG/MEG data, it is often difficult to know, a priori, precisely where effects will occur on the scalp and in time and frequency. To overcome this, researchers often identify regions-of-interest (ROIs) for testing, but have been criticized for sometimes using biased, data-driven methods and thereby inflating Type I error rates. Using simulations, we demonstrate an ROI-selection method that is data-driven but nonetheless maintains the Type I error rate at five percent. Furthermore, it avoids statistical corrections and reduces the need for precise a priori information. We identify the ROI using the aggregated grand average (AGA) wave, which is orthogonal to the experimental contrast. Importantly, we show that common methods for computing orthogonal waveforms for ROI selection can inflate Type I error rates and demonstrate how to overcome this with the AGA. Finally, we show that using the AGA can exceed the statistical power of common ROI-selection methods based on a priori information from independent studies. Our results demonstrate a simple, unbiased and data-driven ROI-selection method which is relevant for many visual EEG/MEG studies.
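The core idea, selecting the ROI from a waveform that is orthogonal to the condition contrast, might be sketched as follows. The array layout, window size, and peak criterion are our assumptions for illustration; the authors' exact AGA computation may differ.

```python
import numpy as np

def select_roi_aga(data, win=5):
    """Sketch of data-driven ROI selection via an aggregated grand
    average (AGA).  data: (subjects, conditions, channels, timepoints)
    ERP array.  Averaging over subjects AND conditions yields a wave
    that carries no information about any between-condition contrast;
    the ROI is a window around its absolute peak."""
    aga = data.mean(axis=(0, 1))                      # channels x timepoints
    ch, tp = np.unravel_index(np.argmax(np.abs(aga)), aga.shape)
    t0, t1 = max(0, tp - win), tp + win + 1
    return ch, (t0, t1)
```

Because the AGA collapses across conditions, where the ROI lands cannot be influenced by the condition difference being tested, which is why such a selection need not inflate the Type I error rate.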

11:00
Three-dimensional Reconstruction of Traditional Chinese Calligraphy Arts for Realistic Perception
SPEAKER: unknown

ABSTRACT. Calligraphy of East Asian characters is an important and appreciated aspect of East Asian culture, especially traditional Chinese culture. However, this art is often neglected nowadays. Since 3D details can realistically present the real-world surface structure of an object under various illuminations and viewing angles, we propose a 3D surface texture reconstruction method based on photometric stereo to provide users with a more realistic perception of digital art. Calligraphy normally comprises varied reflectance properties and rough surface geometry. With detailed 3D surface geometry of the calligraphy available, calligraphy fans, students and academic researchers can investigate calligraphic styles more easily and effectively. Experiments have been performed on traditional Chinese calligraphy from different historical periods, and the reconstructed 3D results offer the community a convenient and unique way of perceiving this art.
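Photometric stereo recovers per-pixel surface orientation from images taken under different light directions. Below is only the textbook Lambertian least-squares step for a single pixel; the pipeline in the abstract presumably adds robustness for the non-Lambertian reflectance of ink and paper, which this sketch does not attempt.

```python
import numpy as np

def photometric_stereo(L, I):
    """Classic Lambertian photometric stereo for one pixel.
    L: (m, 3) unit light directions; I: (m,) observed intensities.
    Solves I = L @ (albedo * n) in the least-squares sense and
    returns (albedo, unit surface normal)."""
    g, *_ = np.linalg.lstsq(L, I, rcond=None)   # g = albedo * normal
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

# Synthetic check: recover a known normal from 3 light directions.
L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]]) / np.array([[1.0], [np.sqrt(2)], [np.sqrt(2)]])
n_true = np.array([0.0, 0.0, 1.0])
I = 0.8 * L @ n_true                            # albedo 0.8, flat surface
```

With three or more non-coplanar lights the system is (over)determined, and the recovered normal field can then be integrated into the height map used for relighting under arbitrary viewpoints.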

11:00
The Own-Race Bias for Face Recognition in Malaysians and Whites
SPEAKER: unknown

ABSTRACT. The own-race bias (ORB) is the phenomenon whereby people's ability to recognise faces from their own race is better than for faces from other races. Studies of this phenomenon (Meissner & Brigham, 2001) have mostly employed White and Black participants and stimuli. We explored the ORB in Malaysian and White observers by comparing their recognition performance in an old/new face recognition task involving own- and other-race faces. Participants viewed a number of faces to remember during the learning phase and subsequently viewed half of the previously presented faces intermixed with distracter faces. During the recognition phase, participants were required to determine whether each face had been seen in the learning phase. Recognition accuracy, sensitivity and response bias were measured in 94 young adults (26 Malaysian Chinese, 23 Malaysian Malay, 22 Malaysian Indian, and 23 Western Caucasian) to examine their face-processing ability. Broadly in line with findings from previous studies on the ORB, the results generally showed that young adults had superior face recognition performance for own-race faces across the different ethnic groups. However, faces of Indian Malaysians did not seem to produce an ORB, whereas the Chinese-White ORB was particularly pronounced.
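Sensitivity and response bias in an old/new task are typically quantified with signal-detection measures (d' and criterion c). A generic sketch follows; the log-linear correction for extreme rates is a common convention, and the study's exact computation is not stated in the abstract.

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """Sensitivity (d') and response bias (c) from old/new counts.
    hits/misses: responses to old faces; fas/crs: false alarms and
    correct rejections to distracters.  A log-linear correction
    (add 0.5 to each cell) avoids infinite z-scores at rates 0 or 1."""
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)      # corrected hit rate
    far = (fas + 0.5) / (fas + crs + 1)          # corrected false-alarm rate
    d = z(hr) - z(far)                           # sensitivity
    c = -0.5 * (z(hr) + z(far))                  # criterion (bias)
    return d, c
```

Higher d' for own-race than other-race faces is the signature of the ORB under this framing; c separates genuine discrimination from a mere tendency to respond "old".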

11:00
Oscillatory mechanisms involved in the coding of temporal errors
SPEAKER: unknown

ABSTRACT. We constantly learn and update our beliefs about when events we cause will occur. Several studies have shown a link between frontocentral theta oscillations and errors. Here, we investigated whether similar mechanisms can be used to code temporal errors. In each trial, two audiovisual cues were sequentially presented with an interval of 1 s. Participants (n=17) were instructed to produce a third stimulus following the same rhythm, by pressing a button at the moment of their choice. Their action caused the presentation of a tone after a brief delay (50 ms) that was increased, after a random trial, to between 350 ms and 750 ms, and remained constant until the end of the block. Our results showed that the error in the first trial after a delay change was significantly larger than the error in the subsequent trials within each block (F(5,45)=100.9, p<0.001). This error rapidly decreased as a function of trials. We found strong theta power increases and phase reset in frontocentral electrodes; however, only the theta phase reset significantly decreased as a function of trial (cluster-stat 24.55, cluster-p<0.001). Our results suggest an almost instantaneous adaptation to temporal changes between actions and consequences, and a dissociation between phase and power mechanisms in error coding.
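Phase reset of this kind is commonly quantified as inter-trial phase coherence (ITPC), the length of the mean resultant vector of per-trial phase angles at a given time and frequency. The snippet below shows only that standard measure; the study's actual time-frequency pipeline is not specified in the abstract.

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase coherence: length of the mean resultant
    vector of per-trial phase angles (radians).
    1 = perfect phase reset across trials, 0 = uniformly random phase."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))

aligned = itpc([0.1, -0.05, 0.02, 0.08])          # tightly clustered phases
rng = np.random.default_rng(0)
scattered = itpc(rng.uniform(-np.pi, np.pi, 1000))  # uniform random phases
```

A decrease in ITPC over trials with stable power, as reported here, is what dissociates a phase-reset mechanism from an additive power increase.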

11:00
Can Taiwanese political parties be categorized by face, even without external contour and mouth?
SPEAKER: unknown

ABSTRACT. Rule and Ambady (2010) found that Republicans and Democrats can be differentiated by face. The present study aims to replicate and extend this finding. In Experiment 1, university students differentiated 100 gray-scale full-face photos of candidates of the two major political parties in Taiwan, the Kuomintang (KMT) and the Democratic Progressive Party (DPP), 50 each; responses to candidates whose party the participants already knew were removed. An open question, "How did you guess?", was asked at the very end. Experiment 2 recruited participants aged between 25 and 55 with identical stimuli. Experiment 3 tested another group of university students with cropped-face photos, and Experiment 4 with photos devoid of the mouth and chin area. Based on d'-corrected accuracy, the present study found that 1) KMT and DPP candidates could also be differentiated by face, but 2) cropping the face made this difficult; 3) removing the mouth and chin area had no effect; 4) "good guessers" made face-to-trait inferences from sociopolitical heritage, while "bad guessers" resorted only to observable features such as gender, age, and smile; and 5) only good guessers benefited from age. In sum, Rule and Ambady's results were replicated, and new aspects of identifying perceptually ambiguous social groups were discovered.

11:00
How well is emotion recognized in faces 15 degrees in the periphery, and where do people look when allowed to fixate the face?
SPEAKER: unknown

ABSTRACT. Twenty participants were presented with Ekman and NimStim emotional faces centrally and 15 degrees in the periphery at 4 locations around fixation. Participants named the facial emotion aloud from a set of seven while maintaining fixation, so the face remained in the periphery. They were then permitted to view the face freely, fixate where they liked, and change their named emotion if they wished. Eye movements were monitored with an ASL eye tracker. Accuracy for emotion discrimination was around 60% in the periphery and around 80% for central viewing. Happiness was always well recognized, while fear, surprise, anger, disgust, sadness and neutral tended to be confused. When participants were allowed to look at the face directly, they mostly tended to fixate first fairly centrally and then look to the eyes or mouth depending on the emotion. These findings concur with those of Bayle, Schoendorff, Henaff, & Krolak-Salmon (2011) and Palermo and Coltheart (2004).

11:00
Unsupervised visual statistical learning in the newborn chick (Gallus gallus)
SPEAKER: unknown

ABSTRACT. Statistical learning is the ability to track probabilistic structure in the sensory input in order to organize and interpret the environment. For instance, human infants are capable of extracting statistical information from both linguistic (e.g. artificial languages) and nonlinguistic inputs (e.g. streams of shapes). Besides being robust enough to operate across domains and modalities, statistical learning has also been reported in some nonhuman species, reinforcing the idea of a domain-general learning process. In the present study, we exposed visually naïve newborn chicks to a computer-presented visual stream of shapes whose ordering was defined by transitional probabilities within and between shape pairs. No reward was provided to the animals, enabling us to investigate statistical learning as a form of unsupervised learning. Afterward, we tested chicks' discrimination of the familiar sequence from a random presentation (Exp. 1) or from a novel combination (Exp. 2) of the familiar shapes. In both experiments, chicks recognized their familiar stimulus, suggesting that this species shows an early sensitivity to the probabilistic structure underlying complex visual stimuli. Our results provide the first evidence of visual statistical learning in an avian species, highlighting abilities similar to those of human infants and promoting the idea that statistical learning is the result of convergent evolution.

11:00
Unsolvable, yet insightful: The appeal of indeterminate and ambiguous artworks
SPEAKER: unknown

ABSTRACT. Indeterminate, ambiguous, or hidden images defy automatic identification but sometimes offer rewarding insights — such as the emergence of a familiar object within a random pattern in hidden images, the so-called “Aesthetic Aha” (Muth & Carbon, 2013). Do insights affect the appreciation of artworks as well — even if they never provide a determinate, final interpretation? Here we report a study that aimed to differentiate between the effects of ambiguity, the solvability of ambiguity, and the strength of insights on appreciation. Via multidimensional measurement of appreciation and a subsequent multilevel modelling analysis, we found that modern and contemporary artworks were preferred with regard to liking, interest, and affect if they featured a high degree of ambiguity and if they provided the potential for strong insights. The estimated solvability of ambiguity, however, did not affect liking and was actually negatively related to interest and powerfulness of affect. We suggest that art perception differs from progressive problem solving in that it is characterized by repeated changes in semantic instability during elaboration — some of these dynamics are marked by rewarding gains of insight. Such dynamics might be crucial for long-term fascination with and appreciation of artworks that are not easy-on-the-mind.

11:00
Extracting social information from the visual image of bodies
SPEAKER: unknown

ABSTRACT. A critical aspect of everyday social interactions is understanding who other people are and how we might expect them to behave. To date, neuroimaging studies have focussed on characterising the function of segregated patches of cortex along the ventral visual stream during person perception. It remains largely unknown, however, how such “body patches” functionally couple with other brain regions. Using fMRI and functional connectivity analyses, we investigated the hypothesis that person perception involves a distributed neural circuit, extending beyond the ventral visual stream. Silhouettes of neutral and trait-implying (muscular or overweight) bodies were presented to participants. When observing bodies that give rise to a social inference compared to neutral bodies, univariate analyses showed greater engagement of extrastriate and fusiform body areas (EBA, FBA). Additionally, there was stronger functional coupling between right FBA and right EBA, as well as between bilateral body patches and posterior parietal cortex. The results suggest that when extracting social information from another’s body, there is increased connectivity within the person perception network, as well as between person perception network and a dorsal attention network. These findings underscore the importance of considering functional interactions within an extended neural network when investigating functionality of the ventral visual stream.

11:00
Metacognitive sensitivity in visual working memory is determined by more than the integrity of the original memory trace
SPEAKER: unknown

ABSTRACT. There is a vast amount of research on metacognition in long-term memory (Koriat, 2007), but little is known about metacognitive performance in visual working memory. In two experiments we investigated the effect of either response time or contextual congruency on metacognitive performance in a visual short-term memory (VSTM) task. Participants were tested on their memory for the orientation of Gabor patches over 5-second delay periods. After their initial response, participants rated on a 4-point scale either the vividness of their memory or their confidence in their response. Metacognitive sensitivity (i.e. the difference between introspective ratings for accurate and inaccurate trials) decreased in conditions in which participants' RTs were artificially speeded or delayed compared to baseline, most likely because of the additional working-memory load of having to remember an additional task rule. Moreover, confidence ratings showed increased metacognitive sensitivity when the context (i.e. a colored ring surrounding the stimulus) changed compared to when the context at encoding and retrieval remained identical, possibly due to the alerting effect of the contextual change. However, accuracies were unaffected. These results indicate that metacognitive sensitivity in VSTM depends on more factors than just the integrity of the original memory trace.

11:00
Effects of noise and scene-target spatial congruence on visual exploration and target identification in a perceptual decision-making process.
SPEAKER: unknown

ABSTRACT. In visual search, visual exploration and target identification depend on both noisy sensory input and prior knowledge, such as the location of objects in the environment. These processes can be seen as evidence-collection steps leading to a decision about whether the target is present. We focused on how these two processes contribute to the global decision-making process when searching real-world visual scenes. We used a mouse-tracking methodology capturing components of the decision-making process through mouse trajectories (Freeman & Ambady, 2010), combined with eye-tracking measures reflecting exploration (scanning time) and identification (verification time). Indoor scenes with additive noise or no noise were presented. They included a target at a probable or an improbable location, or no target. The participants had to respond to the target's presence or absence. Our results indicate degraded response times and mouse trajectories biased toward the “target absent” response for improbable target locations, these effects being mainly mediated by increased scanning times. Degrading the scene also impaired response times, but this effect was mediated by increased verification times. This illustrates how distinct perceptual processes influence the global decision-making process by integrating the available sources of information differently.

11:00
Effects of Using Detailed Illustrations in Textbook Design on Science Learning
SPEAKER: unknown

ABSTRACT. In recent years, an increasing number of textbooks have incorporated illustrations to accompany the text. Although colorful and realistic designs are considered entertaining, some studies have indicated that colors and realistic details in illustrations may be distracting to readers. This study examined the influence of illustrations on reading behaviors and learning outcomes. In Experiment 1, the subjective ratings of participants on learning interests showed a preference for illustrations over simple pictures. In Experiment 2, participants read 8 human anatomy lessons: 4 lessons contained detailed illustrations and 4 lessons contained simple pictures. Participants completed a comprehension test and an evaluation questionnaire after reading each lesson. Eye-tracking data indicated that the participants generally started viewing pictures earlier and spent more time on the detailed illustrations than they did on the simple pictures. However, the participants did not obtain high test results or give high ratings on enjoyment when they read texts with detailed illustrations. By contrast, the participants exhibited high learning outcomes when they spent more time on the simple pictures. Spending more time on detailed illustrations did not cause high learning outcomes. The results suggested that although using detailed illustrations might be visually appealing, a simpler design may facilitate effective learning.

11:00
The development of face and object processing in childhood
SPEAKER: unknown

ABSTRACT. There has been much debate over whether face and object recognition develop at different rates during childhood. Some researchers suggest that face processing is not mature until the early teenage years, far later than object recognition; others suggest that face recognition is mature relatively early (by 5 yrs), and subsequent improvements reflect more general cognitive development. Recently, it has been suggested that both ideas are correct, but that face memory and face matching develop at different rates. This study addressed this hypothesis by examining face and object (bicycle) matching and memory in primary-school children (5-11 yrs). In the memory test, children were asked to learn 4 or 6 faces and bikes; in the matching test, children completed a 3AFC simultaneous matching task. Both memory (N = 134) and matching (N = 432) showed significant improvements with age, but neither task showed an interaction between age and stimulus type — that is, memory and perception of faces showed a similar developmental trajectory to general object memory and perception. This suggests that both face memory and face matching are mature early in childhood (< 5 yrs), and subsequent increases in performance are related to general cognitive development rather than face-specific processes.

11:00
Kitsch: Is it better than its reputation? Comparing explicit and implicit aesthetic processing
SPEAKER: unknown

ABSTRACT. Explicit and implicit attitudes coexist in the human mind (Wilson, Lindsey, & Schooler, 2000). Due to impression management and social desirability, conscious and unconscious representations may differ (Crowne & Marlowe, 1960). Discrepancies are particularly likely for judgmental or controversial constructs, e.g. kitsch. The word “kitsch” is used to scorn something as sentimental, simplistic and aesthetically worthless (Kulka, 1996). A study (N=31) was conducted to compare implicit and explicit kitsch judgements: Participants explicitly rated 20 pictures showing either richly decorated cups or plain bowls on three dimensions: kitschy-plain, beautiful-ugly and valuable-worthless. Subsequently, the same set of stimuli was used in a multi-dimensional Implicit Association Test (md-IAT) targeting the same three dimensions (Gattol, Sääksjärvi, & Carbon, 2011). As expected, cups (kitschy objects) were explicitly rated as more kitschy, more ugly and less valuable than bowls (plain objects). In the md-IAT, only the dimension kitschy-plain was selective in terms of cups and bowls. Surprisingly, neither beauty nor value was selectively associated with either object category (kitschy cups or plain bowls). Assuming that impression management accounts for such discrepancies, we speculate that participants were reluctant to admit that they felt susceptible to kitschy objects. Ultimately, this could imply that kitsch is better than its reputation.

11:00
Differential effects of task anticipation on liking of familiar surfaces
SPEAKER: unknown

ABSTRACT. Repeated exposure to a stimulus results in increased affect towards the stimulus (the mere exposure effect). However, the effects of task anticipation during exposure are not well known. Furthermore, although the mere exposure effect has been shown for a variety of stimulus types, it has not been shown for surface images. We investigated the influence of anticipating two different tasks during repeated exposure to photographs of wood surfaces. Prior to stimulus exposure, half of the subjects were instructed to expect a recognition task, and the other half to expect a liking-rating task. All subjects were then exposed to the stimuli. Subsequently, subjects gave liking and recognition-confidence ratings to novel stimuli and to stimuli they had viewed once, three times or nine times. Preliminary results showed that stimuli were liked more and recognized better the more often they had been seen. Furthermore, the results suggest that liking ratings for novel images did not differ between the two groups. However, for images seen only once, the group anticipating a recognition task showed less liking than the group anticipating a liking task. This difference disappeared with increased exposure. The implications of the results are discussed.

11:00
Intensity of the facial expressions influences the aftereffect of facial expressions
SPEAKER: unknown

ABSTRACT. Previous studies have shown that adaptation to a facial expression impairs recognition of the same expression. In two separate studies, we investigated whether facial expression aftereffects depend on expression intensity. Across experiments, participants were presented with an adaptation stimulus for 5 seconds, followed by a 100 ms blank and a 200 ms test stimulus. Adaptation stimuli consisted of expressions (anger, fear, sadness, and happiness) at three intensity levels (low, medium, and high), and test stimuli consistently portrayed low-intensity expressions. Intensity levels were produced by morphing expressive and neutral images of the same individual. Participants were asked to categorize the expression of the test stimulus. In experiment 1 identity was consistent across adaptation and test stimuli, whilst in experiment 2 identity changed between the adaptation and test phase. Results replicated the impairment when intensity levels were maintained across the adaptation and test phase, even when intensity levels were low. More importantly, higher expressive intensity of the adaptation stimulus increased the impairment in recognizing the test stimuli. This suggests that expressive intensity is critical to recognizing facial expressions, and that the impairment is not caused by a contrast between the intensities of adaptation and test stimuli.

11:00
How do emotions affect visual semantic search?
SPEAKER: unknown

ABSTRACT. The influence of the emotional valence of words on search among apparently meaningless sets of elements was investigated. The subjects (n=28) had to identify and name lexical units located in a 15x15 matrix filled with letters. The hidden words were emotionally positive, negative or neutral (the emotional valence of the words was established in a preliminary study). Each matrix contained 10 words of similar emotional valence. The order of the matrices varied, and the subjects did not know the emotional status of a matrix in advance. The number of correctly named words, errors, and basic eye-movement indicators were recorded. The results show that positive emotional coloring of words enhances the effectiveness of the search: on average, subjects found one more word in an emotionally positive matrix. The dwell time on relevant areas of interest was also longer, and the total number of fixations and the number of regressive saccades were higher, when dealing with positive matrices rather than with negative or neutral ones. Evidently, finding the first word created an emotional priming effect that shaped the organization of the subsequent search.

11:00
Either vertical or horizontal stripes on clothing make the wearer look slimmer.
SPEAKER: unknown

ABSTRACT. There is a widespread belief that wearing horizontal stripes makes individuals look fatter. Thompson and Mikellidou (2011) showed that the Helmholtz and Oppel-Kundt illusions persist when applied to cylinders and pictures of a body, suggesting that the belief is wrong. Swami (2012), however, reported that body size was recalled as larger when horizontal stripes were worn. We investigated how stripes affect judgements about body size when bodies are presented in real space. Observers (n = 197 and 407 for Exp. 1 and 2, respectively) viewed mannequins dressed in horizontal, vertical, or diagonal stripes. We asked observers to rank the mannequins by perceived slimness. In experiment 1, the mannequin appeared slimmest in horizontal stripes and less slim in diagonal stripes than in the other stripes. In experiment 2, which included a no-stripe condition, the mannequins in striped clothing looked slimmer than the mannequin in uniform grey clothing. The results suggest that any stripes on clothing make the wearer look slimmer. This could be because the curves of the stripes on a body serve as a cue to depth perception, enhance the perceived depth of the body, and thereby lower the perceived relative body width.

11:00
Top-down and bottom-up effects on the visual N1 category differences.
SPEAKER: unknown

ABSTRACT. Visual ERPs are modulated by both top-down factors (e.g. attention) and low-level differences (e.g. amplitude spectrum). However, it is not clear how these factors modulate the animal/non-animal category differences appearing in the visual N1 component. The aim of our study was to examine the role of amplitude spectrum and the effect of categorization task on the N1 component. Stimuli were images of animals and vehicles containing either an "X" or an "O" in the background. The stimulus set was presented in two versions: they were equalised either only for luminance or both for luminance and amplitude spectrum. Thirty-five participants had to categorize whether images depicted an animal or a vehicle or whether they contained an "X" or an "O". The N1 amplitude was larger for vehicles compared to animals but only for the stimuli that were equalised for both luminance and amplitude spectrum. This indicates that amplitude spectrum differences between categories did not affect the N1 category difference. Furthermore, the categorization task modulated the N1 amplitudes for animals and vehicles. Therefore, the results indicate that both top-down and bottom-up processes modulate the N1 category difference. However, the amplitude spectrum does not play an important role in the animal-vehicle categorization.

11:00
Gender difference in 3D face recognition
SPEAKER: Jisien Yang

ABSTRACT. Considerable evidence reveals that women outperform men in face recognition, while numerous studies also show that men surpass women in three-dimensional mental rotation. This study addresses this gender difference by examining how men and women perform in face recognition tasks that require 3D mental rotation. In four experiments, participants were required to match a front-view face to its corresponding depth-rotated (90-degree) image, with a different kind of stimulus in each experiment. Experiment 1 used hand stimuli to obtain a baseline of mental-rotation ability for men and women. Experiments 2 to 4 adopted face profiles, face silhouettes, and photographic negatives of faces, respectively, to explore 3D face representation in males and females. The results show that women are better than men at recognizing depth-rotated profiles. However, the gender difference disappears when depth-rotated silhouettes are used: the performance of men and women is comparable when they are required to match front-view faces to their depth-rotated silhouettes. This suggests that the male advantage in mental rotation vanishes when the stimuli favor females. The results cast doubt on the predominant notion that males are generally better than females at 3D mental rotation.

11:00
The effect of head orientation on face detection in natural images as evidenced by fast periodic visual stimulation
SPEAKER: unknown

ABSTRACT. The speed and accuracy of face detection may depend on higher-order variations, e.g., head orientation. Using fast periodic visual stimulation (FPVS; Rossion et al., 2015), face-detection responses at high-level cortical areas were compared for full-front vs. 3/4 head views. High-density electroencephalogram (EEG) was recorded from 16 observers presented with 12 40-s sequences containing natural images of objects flickering at 12.0 Hz (F). Natural face images were introduced at F/9 = 1.33 Hz (‘oddballs’). In Conditions 1 and 2, all faces were posed full-front or at 3/4 view, respectively. In Condition 3, the oddball alternated between full-front and 3/4 views (F/18 = 0.67 Hz). In all conditions, significant responses were recorded at 1.33 Hz and its harmonics, mainly over the right occipito-temporal cortex, confirming high-level face-detection responses. Interestingly, Condition 3 also showed significant responses at 0.67 Hz and its harmonics over the same cortical region, implying differentiable face-detection responses to full-front vs. 3/4 views. Time-domain analysis revealed a sequence of face-selective components emerging at 130–150 ms, with peak latencies ~12 ms earlier for full-front than 3/4 views. These findings indicate that a full-front view confers an advantage in face detection, arising partly from a faster high-level brain response.

11:00
Inhibitory mechanisms for visual learning in the human brain
SPEAKER: unknown

ABSTRACT. Successful interactions in our environments entail extracting targets from cluttered scenes and discriminating highly similar objects. Previous fMRI studies show differential activation patterns for learning to detect signals in noise vs. to discriminate fine feature differences. However, fMRI does not allow us to distinguish excitatory from inhibitory contributions to learning. Recent Magnetic Resonance Spectroscopy studies link GABA, the main inhibitory neurotransmitter, to perceptual and learning processes. Here, we test the role of GABA in visual learning tasks. We trained observers to discriminate radial from concentric Glass patterns that were either presented in background noise or were highly similar to each other. We then correlated behavioural improvement in these tasks with GABA measurements in the ventral visual cortex. Our results show a significant positive correlation of GABA with behavioural improvement for the fine feature task, but a significant negative correlation for the signal-in-noise task. These high vs. low GABA concentrations related to learning-dependent improvements may account for the decreased vs. increased fMRI signals previously observed for the respective learning tasks. Thus, our findings suggest that learning to discriminate fine feature differences may enhance the tuning of feature-selective neurons through inhibition, while learning to see in clutter may entail gain changes in large neural populations.

11:00
Perception of emotional body expressions depends on concurrent involvement in social interaction
SPEAKER: unknown

ABSTRACT. Many theories about the perception of emotions from body movements hypothesize a joint activation of brain structures involved in emotion perception and motor execution during social interaction (Wolpert, 2003; Wicker, 2003). This implies that bodily emotions should be perceived as more expressive when observers are involved in social motor behavior. METHODS: To test this hypothesis, participants judged the emotional expressiveness of an avatar (shown on an HMD) that reacted to their own motor behavior; within a balanced design, these judgments were compared with those for simple observation without motor involvement. Expressiveness of the movements (10 angry and 10 fearful examples) was controlled by morphing (5 steps), using a probabilistic generative model (Taubert, 2012), with morphing levels optimized individually for each actor. RESULTS: Emotional expressiveness of the stimuli was rated higher when participants were involved in the action than during pure observation (F(1,17) = 8.701, p < 0.01, N = 18). This effect was particularly prominent for anger expressions. CONCLUSION: Consistent with theories of embodied emotion perception, involvement in social motor tasks seems to increase the perceived expressiveness of bodily emotions.

11:00
Face-shape facilitates detection of fearful facial expressions.
SPEAKER: unknown

ABSTRACT. The contribution of face-shape information to the detectability of happy and fearful facial expressions was evaluated using a temporal two-interval forced-choice paradigm. Stimuli were greyscale images of faces with happy or fearful expressions matched for luminance. Neutral and expressive images were morphed to create a range of intensities (0-100%). All images were manipulated so that only the internal features of the face were visible. The shape of the face was either preserved or removed. Face-shape was preserved using a mask that followed the natural outline of the face. Face-shape was removed using an oval mask. Stimuli were presented for 200ms. One interval contained the neutral (0%) face and the other the expressive face (ranging from 0-100%). Observers indicated the interval that contained the image that was more expressive. Accuracy increased as intensity increased for both shape-masked and oval-masked faces. Psychometric functions describing performance with shape-masked faces were steeper than, or shifted to the left of, those describing oval-masked faces for fearful expressions. This suggests that face shape facilitates the detection of fearful facial expressions. Future research is aimed at understanding the relative importance of external features (e.g. face shape) and internal features (e.g. eyes and mouth) in emotion recognition.

11:00
Limited processing capacity for extracting mean emotion from multiple faces
SPEAKER: unknown

ABSTRACT. Previous studies have demonstrated that humans can extract the mean emotion from multiple faces (Haberman & Whitney, 2007). However, the boundaries of multiple facial expression processing are largely unknown. In this study, we tested the processing capacity of mean emotion representation using the simultaneous-sequential paradigm. Each set consisted of 16 faces conveying a variable mixture of happy and angry expressions and was presented for 100 ms. Participants judged on a continuous scale the perceived average emotion of each face set. In the simultaneous condition, the 16 faces were presented concurrently; in the sequential condition, two sets of 8 faces each were presented successively. Results showed that emotion intensity judgments varied parametrically with the ratio of happy to angry faces. Importantly, performance in the sequential condition was better than in the simultaneous condition, even when the duration was extended to 500 ms or the set size was decreased to 8, revealing limited-capacity processing. Accordingly, we conclude that participants can extract the mean emotion from multiple faces shown concurrently, but that this process is capacity limited and thus differs from the processing of lower-level visual features such as size (Attarha, Moore, & Vecera, 2014).

11:00
I know what you’re doing!: Awareness of other people’s intentions interferes with cognitive task performance
SPEAKER: unknown

ABSTRACT. In shared environments it can be advantageous to have an awareness of the goals and intentions of others. Recent research has found that co-actors form a representation of each other’s tasks even when neither necessary nor beneficial to their own performance. The current study used a novel method to investigate task interference between individuals who have differing intentional relations to a jointly attended stimulus. Pairs of participants were shown the same stimulus (a letter surrounded by two squares of different colours, superimposed at 0 and 45 degrees) on a shared display. Each participant was given their own instruction set asking them to indicate whether a specific conjunction of features was present in the stimulus. Both co-actors were looking for a vowel (shared criterion) in addition to an individually assigned colour present in either of the squares (non-shared criterion). Reaction times and error rates were influenced by which of the co-actor’s target features were present in the stimulus, despite being irrelevant to task goals. Importantly, this was only the case when participants were explicitly aware of their co-actor’s instructions. These findings provide evidence that it is difficult to suppress irrelevant representations of a co-actor’s task, even when detrimental to individual performance.

11:00
Eye-tracking of primates’ preference for curvature
SPEAKER: unknown

ABSTRACT. There is growing evidence that human visual preference for curvature is a universal trait that can be traced back to our biological heritage. This preference has been hypothesized to result from sharp transitions in contour conveying a sense of threat (Bar & Neta, 2006). While the evolutionary nature of this preference has not been properly explored, a modified two-alternative forced-choice task (Munar, Gómez-Puerto, & Gomila, 2014) has allowed us to find a preference for curvature in a non-Western population (Gómez-Puerto, Munar, Acedo, & Gomila, 2013) and among captive chimpanzees and gorillas (Munar, Gómez-Puerto, Call, & Nadal, submitted). To further explore the nature of this preference, we analysed the gaze patterns of five primate species (human, chimpanzee, bonobo, orang-utan, and gorilla) presented simultaneously with curved and sharp-contoured versions of the same stimuli. Preliminary analyses reveal that curved stimuli were looked at longer, and fixated faster, than their sharp counterparts. These results go against what would be expected if sharp contours were perceived as threatening, leading us to believe that, in accordance with recent findings (Palumbo, Bertamini, Gheorghes, & Galatsidas, 2014), it might be attraction to curvature, rather than aversion to sharpness, that determines primate preference for curvature.

11:00
Dissociation of detection and evaluation of facial expressions in adolescence
SPEAKER: unknown

ABSTRACT. Holistic face perception, i.e. the mandatory integration of featural information across the face, has been considered to play a key role in recognizing emotional face expressions (e.g., Tanaka et al., 2002). However, despite their early onset, holistic processing skills continue to improve throughout adolescence (e.g., Schwarzer et al., 2010) and might therefore modulate the evaluation of facial expressions. We tested this hypothesis using an attentional blink (AB) paradigm to compare the impact of happy, fearful and neutral faces in adolescents (10-13 years) and adults on subsequently presented neutral target stimuli (animals, plants and objects) in a rapid serial visual presentation stream. Adolescents and adults were equally reliable when reporting the emotional expression of the face stimuli. However, the detection of emotional but not neutral faces imposed a significantly stronger AB effect on the detection of the neutral targets in adults than in adolescents. In a control experiment we confirmed that adolescents rated emotional faces lower in valence and arousal than adults did. The results suggest a protracted development of the ability to evaluate facial expressions, which might be attributed to the late maturation of holistic processing skills.

11:00
Could a red pen really lower maths test scores? An investigation of colour-driven cognitive effects.
SPEAKER: unknown

ABSTRACT. The influence of colour on cognition has long been studied (e.g. Pressey, 1921), but contradictory claims surround the effects of colour on performance. Elliot, Maier, Moller, Friedman and Meinhardt (2007) proposed that in an achievement context (e.g. a maths test) the perception of red impedes performance by inducing avoidance motivation. However, replications of the effect are scant, especially in the UK, and some suffer from a lack of stimulus colour control. We report five experiments that attempt to replicate the red effect in an achievement context across a range of settings: online, in school classrooms, and in the laboratory. In each experiment, stimuli were carefully specified and calibrated to ensure that they varied in hue but not luminance or saturation. Only one experiment replicated the red effect: participants who were primed with a red stimulus (relative to white) for 5 s scored worse on a subsequent verbal task. However, replication and extension of this experiment failed to reproduce the effect. Explanations for the findings are discussed, including: the effect is not present in a UK population; the effect requires very specific methodology; the effect does not generalise to applied settings; and/or the original body of work overestimates the prevalence of these effects.

11:00
Adaptation to natural dynamic facial emotional expressions

ABSTRACT. Visual adaptation to computer-animated dynamic facial emotional expressions shifts the perception of a subsequent briefly presented ambiguous expression away from the adaptor (de la Rosa et al., 2013). We explored this adaptation aftereffect with ecologically valid, human-posed dynamic expressions. Using a high-speed video camera, we recorded a female model performing transitions from a happy to a sad expression and vice versa. In Experiment 1, participants adapted (5000 ms) to the first and last static images of both transitions, depicting intense recognizable expressions, followed by an interstimulus interval (100 ms) and one of ten intermediate ambiguous images from the same transition (50 ms). In Experiment 2, adaptors were video sequences starting from the image perceived as happy in 50% of trials in the static condition and ending at either the happy or the sad end of the transitions; thus, half of the dynamic adaptors were time-reversed. In Experiment 3, adaptors were full dynamic transitions from one intense expression to the other, both forward in time and time-reversed. The aftereffect was obtained in the static and half-dynamic conditions, but not in the full-dynamic condition, suggesting no advantage of dynamic over static expressions and no effect of time reversal on adaptation. These results extend the previously reported primary role of overall emotional context to adaptation to naturally expressed dynamic transitions between basic emotions.

11:00
The time-course of behavioral positive and negative compatibility effects within a trial
SPEAKER: unknown

ABSTRACT. The analysis of mean response times using ANOVA can conceal more than it reveals. Here we study the temporal dynamics of behavioral positive (PCE) and negative (NCE) compatibility effects within a trial of the masked (arrow) priming paradigm using event history analysis, a distributional method for studying the shape of time-to-response distributions which explicitly takes into account the unidirectional passage of time. In Experiment 1 we manipulate prime type (no prime, compatible, incompatible) and mask type (no mask, relevant, irrelevant, random lines) and keep the prime-mask and mask-target SOAs constant. Without a mask, a PCE emerges between 120 and 360 ms after target onset. With a mask, a NCE emerges between 200 and 360 ms after target onset. In Experiment 2 we manipulate the SOAs and prime type, and keep mask type constant. The results show that the onset of the NCE is time-locked to mask onset, and that it is preceded by a prime-triggered PCE when the prime-mask SOA is long. We conclude that the NCE is caused by inhibition of premature response tendencies in response to the mask, and not by the mask activating the opposite response or by backward masking of the prime.
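The distributional approach described above can be illustrated with a minimal life-table sketch (a generic discrete-time estimator, not the authors' analysis code; the response times and bin edges below are hypothetical): the hazard in each time bin is the proportion of trials producing a response in that bin among trials that had not yet produced a response at the bin's start.

```python
def discrete_hazard(rts, bin_edges):
    """Life-table estimate of the discrete-time hazard of responding:
    h(t) = P(respond in bin t | no response before bin t).

    rts:       list of response times (one per trial)
    bin_edges: increasing bin boundaries, e.g. [0, 200, 300, 400]
    """
    hazards = []
    at_risk = len(rts)  # trials still without a response
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        events = sum(1 for rt in rts if lo <= rt < hi)
        hazards.append(events / at_risk if at_risk else 0.0)
        at_risk -= events
    return hazards
```

Plotting such hazard curves per condition is what lets the onset of a PCE or NCE be localized in time rather than averaged away.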

11:00
The Influence of Fear-Inducing Stimuli on Learning of Visual Context
SPEAKER: unknown

ABSTRACT. Fear-inducing visual stimuli (e.g., spiders) capture visual attention. The present study examined the influence of fear-inducing stimuli on the learning of visual context. Participants searched for a Landolt square defined by color and gap location (e.g., a red square with a gap on the left or right). Within each Landolt square, an image of a spider (fear-inducing) or a mushroom (fear-irrelevant) was presented. For half of the visual arrays, the locations of the target and the distractors, as well as the images within them, were fixed across trials; for the other half, they were determined randomly on each trial. Search time was shorter when the target contained a fear-inducing image than when it contained a fear-irrelevant image, and when the distractors contained fear-irrelevant images than when they contained fear-inducing images, indicating strong attentional capture by fear-inducing stimuli. Search time was also shorter when the search array was fixed than when it was random, yielding a contextual cuing effect. However, the effect depended on whether the images within the target and distractors were of the same kind, not on whether they were fear-inducing. Therefore, fear-inducing stimuli have little influence on implicit learning of visual context.

11:00
The Importance of Diagonal Axes in Aesthetic Appreciation
SPEAKER: unknown

ABSTRACT. Previous empirical evidence supports balance as an important factor contributing to aesthetic preference. In this research, dynamic balance with respect to various axes was manipulated to evaluate the contribution of each axis and of combinations of axes. The material introduced by Wilson and Chatterjee (2005), composed of circles of unequal sizes, was used. In the first experiment, participants rated their aesthetic preference for a set of patterns randomly generated by the computer. The results show that the two diagonal axes are important. In the second experiment, according to the axis of balance, there were four single-axis conditions (vertical, horizontal, and the two diagonals), two double-axis conditions (vertical-horizontal, diagonal), and two control conditions (low balance, medium balance). The double-axis/diagonal condition received higher preference scores than the control conditions, whereas the double-axis/vertical-horizontal condition showed no similar advantage; this result is consistent with the first experiment. The single-axis/vertical condition also scored higher than the control conditions, implying its unique importance. Overall, the results indicate the importance of diagonal axes in aesthetic preference, which may be explained by the expressive power of dynamics.

11:00
Are visual threats prioritised in the absence of awareness? A meta-analysis involving 2559 observers.
SPEAKER: unknown

ABSTRACT. Many behavioral observations suggest that humans can evaluate the threat content of unconsciously presented visual stimuli. For instance, in the masked visual probe paradigm, threatening stimuli rendered invisible by backward masking can nonetheless capture spatial attention. In binocular rivalry and breaking continuous flash suppression paradigms, different stimuli are presented to each eye and compete for awareness. Increased perceptual dominance of threatening vs. non-threatening stimuli has provided evidence of a threat-related bias in these tasks. Here, we provide a meta-analysis of the evidence for a threat-related bias in visual processing from these three experimental paradigms. Across paradigms, the overall effect size was small (k = 57, N = 2559, dz = 0.30, 95% CI [0.17, 0.43], p < .001), with substantial heterogeneity explained by the experimental paradigm and the type of threat stimulus used. Interestingly, when fearful faces were removed from the analysis, the remaining threat-related bias was trivially small and non-significant (dz = 0.07, p = 0.35), and we found no consistent evidence for a threat bias for other stimulus categories (e.g. angry faces, IAPS images, threatening words). Furthermore, our analyses provide quantitative evidence that poor control of awareness, low-level stimulus confounds, and response biases undermine the evidence for apparently “unconscious” threat-sensitive effects.
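For readers unfamiliar with the effect size reported above: dz is the mean within-subject difference divided by the standard deviation of those differences. A minimal sketch (the data in the test are hypothetical, and the confidence interval uses a common normal approximation to the variance of dz, which is an assumption, not the meta-analytic model used in the study):

```python
import math
from statistics import mean, stdev

def cohens_dz(diffs):
    """Within-subject effect size: mean difference / SD of differences."""
    return mean(diffs) / stdev(diffs)

def dz_ci95(dz, n):
    """Approximate 95% CI for dz, using the normal approximation
    Var(dz) ~ 1/n + dz^2 / (2n). An illustrative simplification."""
    se = math.sqrt(1.0 / n + dz**2 / (2.0 * n))
    return dz - 1.96 * se, dz + 1.96 * se
```

A meta-analytic mean such as the dz = 0.30 above would additionally weight each study's dz by its precision, which this sketch omits.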

11:00
A likelihood distribution of d′ in a signal detection experiment

ABSTRACT. The signal detection method of a psychophysical experiment allows us to measure sensitivity (d′), separately from response bias, in a deterministic way. I will show how a likelihood distribution of the real d′ can be derived from the results of a single experiment (and also a probability distribution of the measured d′ if the real d′ and the response bias are given). The derivation is based on the assumptions that the number of trials in the experiment is finite and that the trials form two parallel sequences (signal and noise conditions) of Bernoulli trials. The likelihood distribution allows results of the signal detection experiment to be used efficiently for Bayesian inference. I will also discuss how the criterion affects the measured d′, what to do if the hit or false-alarm rate is 0% or 100%, and how the proposed method differs from other computational methods for deriving distributions of d′.
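As an illustrative sketch of the idea (not the author's derivation): under the equal-variance Gaussian signal detection model, the two Bernoulli sequences give a joint likelihood for d′ and the criterion c, from which a likelihood curve for d′ alone can be obtained. The parameterisation p_hit = Φ(d′/2 − c), p_fa = Φ(−d′/2 − c) and the flat prior over a criterion grid are assumptions made here for illustration.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binom_pmf(k, n, p):
    """Probability of k successes in n independent Bernoulli trials."""
    return math.comb(n, k) * p**k * (1.0 - p)**(n - k)

def joint_likelihood(hits, n_signal, fas, n_noise, dprime, criterion):
    """Joint likelihood of (d', c): hit and false-alarm counts are
    binomial with p_hit = Phi(d'/2 - c) and p_fa = Phi(-d'/2 - c)."""
    p_hit = phi(dprime / 2.0 - criterion)
    p_fa = phi(-dprime / 2.0 - criterion)
    return binom_pmf(hits, n_signal, p_hit) * binom_pmf(fas, n_noise, p_fa)

def dprime_likelihood(hits, n_signal, fas, n_noise, dprime, criteria=None):
    """Likelihood of d' with the criterion summed out over a grid,
    i.e. under a flat prior on c (an illustrative assumption)."""
    if criteria is None:
        criteria = [i * 0.05 - 2.0 for i in range(81)]
    return sum(joint_likelihood(hits, n_signal, fas, n_noise, dprime, c)
               for c in criteria)
```

For example, with 40/50 hits and 10/50 false alarms, this likelihood curve peaks near the conventional point estimate d′ = z(0.8) − z(0.2) ≈ 1.68, while also quantifying the uncertainty that the point estimate hides.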

11:00
Average faces: Skin texture more than facial symmetry predicts attractiveness perceptions of female faces
SPEAKER: unknown

ABSTRACT. It is well documented that average faces (composites) receive higher attractiveness judgments than the original faces from which they were created. The objective of this study was to determine whether, and to what extent, a more even skin texture or a higher facial symmetry (both mediated by averaging faces) affects attractiveness judgments of a female composite. Furthermore, we were interested in understanding the microgenesis of attractiveness assessment processes. Facial stimuli were displayed with varying presentation times (32 ms, 65 ms, 200 ms, and undefined time). One hundred participants (60 female), aged 18 to 39 years, assessed 16 original faces, the manipulated versions of those faces (skin texture and symmetry), plus an all-in-all composite comprising all 16 original faces, on 4 variables: attractiveness, prettiness, sexiness, and age. Skin texture, but not facial symmetry, significantly predicted the attractiveness, prettiness, sexiness, and age judgments of the all-in-all composite face. We also observed an interesting effect of presentation time: whereas original faces were rated lower the longer the presentation time, the all-in-all composite benefited from increasing presentation time. The overall results indicate that the process of facial attractiveness appreciation (i) is mainly triggered by skin appearance, and (ii) shows a clear microgenetic development.

11:00
Motion makes fearful facial expressions more detectable
SPEAKER: unknown

ABSTRACT. The relative importance of dynamic and static emotion signals from facial expressions was evaluated using a temporal two-interval forced-choice paradigm. Stimuli were black-and-white images of faces with a happy or fearful expression. A range of signal strengths (0-100%) of expressions was created by morphing neutral and expressive images. Dynamic stimuli were generated using a sequence of frames, each containing an increasingly expressive image. One interval contained the comparison face (50%) and the other contained the test face (varied from 20% to 100%). Observers indicated the interval that contained the more expressive image. The percentage of times the test face was judged as more expressive increased as signal strength increased. Psychometric functions describing performance with dynamic fearful stimuli are shifted to the left of functions describing dynamic happy stimuli. This suggests that emotion signals conveyed by dynamic fearful faces are more salient than signals conveyed by dynamic happy faces. Dynamic stimuli with a fast rate of change at stimulus onset are shifted to the left of those with a slow rate of change. This suggests that ‘fast’ dynamic stimuli are more salient than ‘slow’ dynamic stimuli. Static fearful and static happy emotion signals are equally salient.

11:00
The menstrual cycle influences attention to evolutionarily relevant visual stimuli: an eye-tracking study.
SPEAKER: unknown

ABSTRACT. Due to variations in hormone levels, women are more sensitive to reproductively relevant stimuli during the pre-ovulatory (follicular) phase, while in the post-ovulatory (luteal) phase they are sensitive to stimuli related to the risk of pregnancy termination. Female participants (N = 20) were tested in the luteal and in the follicular phase. Progesterone level was measured from saliva samples. Images from six categories were presented: Threat, Disgusting objects, Children, Erotic scenes, Low-calorie food, and High-calorie food. Images were segmented into a ROI (e.g., the aggressor in a Threat image) and background. The number of fixations in the ROI (capture of attention) as well as first-pass durations (hold of attention) were compared between the two menstrual phases. In the luteal phase, the first fixation fell more often in the key regions of Children (t(19) = 2.4, p = .026) and Threat images (t(19) = 3.0, p = .007) than in the follicular phase. This tendency was sustained during following fixations for Threat images (t(19) = 2.2, p = .042). First-pass duration was shorter for Disgusting objects in the luteal than in the follicular phase (t(19) = 2.18, p = .042). Thus, the phase of the menstrual cycle influenced automatic and rapid capture of attention, showing that top-down processes can influence even first fixations.

11:00
The rewarding value of attractive faces: modulating effects of emotion, eye-gaze and empathy.
SPEAKER: unknown

ABSTRACT. Human faces convey important social and biological signals, and people work to control their exposure to them in motivated viewing paradigms (Aharon et al., 2001). Attractive faces are generally more rewarding than non-attractive ones but this is tempered by negative emotions that typically signal avoidance behaviour. Specifically angry attractive faces, though rated as aesthetically pleasing, are treated like unattractive faces with reduced viewing times and lower associated reward (Jaensch et al., 2014). Here we explore the modulating effects of other emotion categories (happy, fear and anger) for faces with direct and averted gaze. Eye-gaze can signal potential threats/interests in the environment and offers insights into the intentions of others and therefore modulates the typical approach vs. avoidance response in the perceiver. Results (N = 28 heterosexual males) indicated that a happy expression on an otherwise unrewarding unattractive face rendered it more rewarding than attractive faces with negative expressions despite being rated as less attractive than all attractive faces. Eye-gaze manipulations resulted in the anticipated pattern with positive emotions and direct eye-gaze, and negative emotions and averted eye-gaze more rewarding respectively than their opposing combinations. Finally we found intriguing associations between empathy, aesthetic assessments and motivated viewing behavior across conditions.

11:00
Facial glossiness and age estimation
SPEAKER: unknown

ABSTRACT. The goal of this study was to investigate the mechanism of human age perception from skin texture. In Experiment 1, a facial image of a female in her 20s was selected as having the average skewness of the image statistics of all 38 faces; those 38 images came from various age groups, from the 20s to the 60s. The PSE of brightness was calculated by the method of adjustment for each subject while the skewness of the test facial images was varied. In Experiment 2, we constructed two stimulus groups, standard stimuli and test stimuli, from a skewness baseline of 19 selected images, then shifted the skewness of the test stimuli in the positive and negative directions. In each trial, subjects selected the older-looking face in a 2AFC task in which two faces were displayed simultaneously. Our results showed that participants tended to perceive faces with lower image skewness (whose image histogram was skewed to the right) as older than faces with higher skewness. The results suggest that a facial image with matte skin is perceived as belonging to an older person than a facial image with glossy skin, showing a tendency similar to previous studies on general objects such as vegetables.

11:00
Blue color enhances the performance in creativity tasks

ABSTRACT. The effect of color, especially red and blue, on perceptual/cognitive processes has been controversial (e.g. Mehta & Zhu, 2009). The present study investigated whether red or blue color affects human cognitive performance. METHODS: 66 students with normal color vision participated in the experiment. They were randomly assigned to either the red or the blue color condition. In a standard classroom, participants engaged in a series of cognitive tasks: a word memory task, anagrams, proofreading, verbal association, figure association, and a color preference questionnaire. The tasks and stimuli were presented on a 2 m x 1 m screen by an LCD projector (EIKI, LC-XB41), with a background color of either red (47.0 lx, x=0.59, y=0.37) or blue (56.5 lx, x=0.14, y=0.06). Responses to the tasks were all written on an answer sheet. RESULTS: Performance was not consistent between the red and blue conditions; e.g., percent correct in the memory task was better in the red condition (p<.1), while more responses were produced in the verbal association task in the blue condition (p<.05). To examine qualitative properties of the responses, the creativity of responses in the two association tests was evaluated by third parties. The resulting creativity scores were consistently higher in the blue than in the red condition.

11:00
The effect of disfiguring features on covert and overt attention to faces
SPEAKER: unknown

ABSTRACT. It is well documented that facial disfigurements can generate avoidance responses in observers towards the afflicted person, yet less is known about the effect of a facial disfigurement on attention to and perception of faces. In two experiments we studied overt and covert attention to laterally presented face stimuli that contained a unilateral disfiguring feature (a simulated port-wine stain), an occluding feature, or no salient feature. In Experiment 1, observers’ eye movements were tracked while they explored laterally presented faces which they had to rate for attractiveness. Overt attention, as measured by the pattern of fixations on the face, was significantly affected by the presence of a facial disfigurement or an occluder. In Experiment 2, we used a covert orienting task with a bilaterally presented target and distractor to measure the interference effect induced by a distractor face (disfigured, occluded, or normal) on a nonface target discrimination task. The presence of a face increased response times to the target stimulus, but this interference was not modulated by the presence of a salient feature (disfigurement or occluder). Together, these results suggest that the presence of salient features affects overt but not covert processing of faces.

11:00
The Interplay Between Emotions and Cognitive Task Performance
SPEAKER: unknown

ABSTRACT. Recently, it has been suggested that both visual working memory accuracy and IQ are tightly connected with individual emotional characteristics. However, little is known about how emotional states change during cognitive tasks. We expected individuals’ emotional states to depend on task performance, such that subjects with higher accuracy in cognitive tasks would report more positive emotional states. Seven emotional state reports were collected during a 2-hour session, along with a trait emotion questionnaire, a change-detection visual working memory task, a Raven's Matrices test of fluid intelligence, and additional emotion and aptitude tasks. In our pilot sample (n = 48) we find that emotional state characteristics correlate with task performance more strongly than emotional traits do. High-performing subjects (as measured on either the change-detection or the Raven's task) maintained the same levels of valence, arousal, and feelings of control (dominance) throughout the experimental procedure. In contrast, low-performing subjects started out the same emotionally but declined in emotional state and dominance as the working memory task progressed; their state improved again when the difficult tasks ended. Our preliminary results demonstrate that task difficulty affects emotional valence, arousal, and dominance, and that emotional predispositions can contribute to this.

11:00
Attribution of emotional state of mind modulates the size of facial expression aftereffects
SPEAKER: unknown

ABSTRACT. Aftereffects following adaptation to facial expressions are well documented, but less is known about the influence of the perceived emotional state of mind of the adapted actor on these aftereffects. To investigate this further, we tested participants’ adaptation to both genuine and faked facial expressions of joy and anger that were matched for intensity. On each trial, participants first assessed whether a facial expression was faked or genuine. They received feedback about their judgement ensuring they held the correct belief about the emotional state of mind of the actor. Participants next adapted to this facial expression for either 500ms, 5s or 8s, in a between-subject design. Responses to the neutral test expression of the same actor were measured on a 5-point Likert scale (including neutral). Following 5s and 500ms adaptation, aftereffects to genuine expressions of joy and anger were significantly larger than those to faked expressions of joy and anger. This ’advantage’ for genuine expressions disappeared following 8s adaptation, where equally strong aftereffects were obtained. These findings suggest that adaptation to facial expressions is influenced by emotional state attribution, but that this effect is short-lasting.

11:00
Effects of Material Appearance on Visual Memory
SPEAKER: unknown

ABSTRACT. This study investigates the relationship between the material appearance of images and both long- and short-term visual memory. We collected 20 images spanning 10 material categories from the Flickr Material Database. All images were converted to grayscale to eliminate the influence of color. Subsequently, 16 participants with normal vision viewed the 20 images sequentially projected on a screen. Each image was presented for 5 s, and a 5 s mask followed each image. After 5 min, participants viewed a sequence of 40 images, which included 20 dummy images, and indicated whether they remembered each image. After 3 days, participants viewed another sequence of 40 images, which included a different set of 20 dummy images, and again indicated whether they remembered each image. Participants also rated eight perceptual qualities of all the test images. Our results showed that material appearance influenced the change in recognition rate between short- and long-term memory. The material categories of foliage and stone revealed interesting results in that performance was significantly better for long- as opposed to short-term memory. Additionally, performance in recognizing images with higher perceptual quality ratings of “naturalness” and “prettiness” was better for long- compared with short-term memory.

11:00
The effect of prime-target congruence on subsequent prime perception: An ERP study
SPEAKER: unknown

ABSTRACT. The effect of prior experience on the perception of congruent stimuli (the priming effect) has been shown in a wide range of experiments (Henson, 2009). Nevertheless, there is a lack of data concerning the impact of stimulus presentation on existing memory traces, related or unrelated to these stimuli. The present study was conducted in order to fill this gap. Thirty-six subjects participated in the experiment. At the first stage we recorded ERPs in response to the visual presentation of word stimuli (targets) preceded by primes that were either related or unrelated to them (supraliminal associative priming). At the second stage we recorded ERPs in response to repeated presentation of the primes from each prime-target pair. The second-stage results showed that repeated presentation of primes from non-associated word pairs is accompanied by an increase in ERP positivity in the 400-500 ms temporal window (N400 component), which is presumably related to a decrease in neural activity (Grill-Spector et al., 2006). We explain the obtained results within the predictive coding approach (Friston, 2003). The priming effect can be regarded as prediction error minimization caused by “imposing” a specific prediction upon a subject through prime presentation. A mismatch between such a prediction and the sensory input causes rearrangement of the memory traces associated with the prediction.

11:00
Face and background colour effect on facial expression perception
SPEAKER: unknown

ABSTRACT. Facial colour varies depending on emotional state. Our previous study showed that facial colour affects the perception of facial expression. On the other hand, Young, Elliot, Feltman, and Ambady (2013) reported that a red (background) colour enhances the perception of an anger expression. In this study, we compared face and background (BG) colour effects on facial expression identification. Fear-to-anger morphed faces were presented in face and BG colour conditions (face conditions: gray BG with a bluish- or reddish-coloured face; BG conditions: red or blue BG with a neutral-coloured face; control: gray BG with a neutral-coloured face). Participants identified the facial expression as fear or anger regardless of its face and BG colour. Our results showed that expression identification was influenced by both face and BG colours. Intermediate morphs of reddish-coloured faces, or of faces on a red BG, tended to be identified as anger expressions, while those of bluish-coloured faces, or of faces on a blue BG, tended to be identified as fear expressions. Facial colour effects were significantly greater than BG colour effects, even though the colour saturation of the face was lower than that of the BG. These results suggest that facial colour is intimately related to the perception of facial expression, beyond a simple colour effect.

11:00
Effect of viewpoint and face visibility in whole body expression recognition
SPEAKER: unknown

ABSTRACT. An appropriate understanding of others’ emotions is crucial for guiding our social behaviour, yet it is unclear what the relative importance of facial and bodily cues is, and to what extent this relationship is influenced by viewpoint. In this study participants viewed images of actors expressing different emotions from three different viewpoints (frontal view, mid-profile, profile) with the face either visible or masked, whilst eye movements were recorded. Behavioural data revealed emotion- and viewpoint-specific advantages in accuracy. Regardless of emotion and face visibility, the head and body were viewed more than the arms, hands or legs, although the relative proportion of gaze allocated to each body region varied with viewpoint. Unlike for facial expressions, our findings suggest no viewpoint invariance in body expression perception. Instead, the results seem more consistent with the use of a viewpoint-dependent holistic gaze strategy for extracting emotion-specific postural cues.

11:00
Weber's law in iconic memory
SPEAKER: unknown

ABSTRACT. Vision is considered a rich and detailed phenomenal experience. Yet, previous studies showed that conscious report is bottlenecked by visual short-term memory [VSTM] and attention. What is the fate of unattended visual information? Is it consciously perceived? Here, we address these issues by comparing perceptual characteristics of VSTM and iconic memory [IM]. IM is considered a larger-capacity store with a shorter temporal duration, largely unaffected by selective attention. Indeed, when a spatial cue is introduced before IM decay, information can still be transferred to VSTM. We combined a change detection task with classical psychophysical measurements to calculate the JNDs for size discriminations elicited in IM and VSTM. The results showed spatial resolution differences between the two memory stores; JNDs increased linearly with object size, in line with Weber’s law, and had similar linear slopes. However, when the cue was introduced at a time at which information in IM is thought to have decayed, the representation did not obey Weber's law, although performance still remained above chance level. These findings suggest that size representations in IM are perceptual and obey general principles of psychophysics. Furthermore, we suggest the information available for size discrimination has similar perceptual properties, regardless of whether it has been selectively attended to.
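[Editorial illustration] The linear growth of JNDs with object size described above is the signature of Weber's law (ΔS = k·S). A minimal sketch of estimating the Weber fraction k from JND measurements; the function name and data values are hypothetical, not taken from the study:

```python
import numpy as np

def weber_fraction(sizes, jnds):
    """Fit JND = k * size (least squares through the origin).

    A constant k across sizes is the signature of Weber's law;
    a poor linear fit (as reported here for decayed iconic
    memory) indicates a violation.
    """
    sizes = np.asarray(sizes, dtype=float)
    jnds = np.asarray(jnds, dtype=float)
    # Least-squares slope for a line constrained through the origin
    return float(np.dot(sizes, jnds) / np.dot(sizes, sizes))

# Hypothetical data in which JNDs are proportional to size (k = 0.1)
sizes = [1.0, 2.0, 4.0, 8.0]
jnds = [0.1, 0.2, 0.4, 0.8]
print(weber_fraction(sizes, jnds))  # 0.1
```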

11:00
Intensive visual perceptual learning may increase the specificity of task improvement
SPEAKER: unknown

ABSTRACT. Visual perceptual learning is defined as an improvement in performance of a visual task, following repeated exposure. The extent to which learning is specific to a trained spatial location depends upon task conditions. Here, we aimed to determine the potential for improvement following a single-day training regime. Two groups of participants were trained for 10 sessions in a single visual hemifield on an adaptive motion coherence task. Subjects either completed 2 sessions per day over 5 days (5-day group) or 10 sessions in a single day with brief breaks (1-day group). Both groups were assessed on the motion coherence task before and after training, in both visual hemifields. Across training sessions, both groups showed learning of the motion coherence task, though this occurred earlier for the 1-day group (Session 4 instead of Session 10). At assessment, the 5-day group improved in both the trained and untrained visual hemifields, but the 1-day group showed learning specific to the trained hemifield. Thus, there was a significant difference in performance for the untrained, but not the trained, hemifield. These results suggest that condensing learning into a shorter period of time may increase the specificity of learning compared to longer-term regimes.

11:00
Visual Preference for Curvature and Art Paintings: Some Data
SPEAKER: unknown

ABSTRACT. The visual preference for curvature is a human phenomenon that has been found in numerous studies. After the success of Bar and Neta (2006) in finding a preference for curvature using sharp-angled and curved versions of the same objects, our research group replicated those results using the same stimuli but with a forced-choice task in an approach-avoidance framework. With this new task, the preference for curvature was also found at short exposure times: 40 and 80 milliseconds. Next we applied the same paradigm using art paintings. Pairs of similar abstract art images (a curved version and a sharp-angled one) were created. We used both color and black-and-white paintings. Only a weak effect was found in the color pairs at the 40 ms exposure time. After these results we have revised the paradigm by (a) modifying some edges in the sharp-angled images to obtain a more analogous set of curved images and (b) using a Likert scale with the aim of simulating art appreciation.

11:00
Perceiving the Ukraine Crisis is a matter of visual depiction
SPEAKER: unknown

ABSTRACT. Recent descriptions of the Ukraine Crisis have varied from escalating to de-escalating East vs. West scenarios, often accompanied by protagonists of either side (NATO or Russia). We investigated whether small variations in the visual depiction of a conflict may influence our perception, cognitive evaluation and, finally, attitudes towards the conflict. We therefore used a 2×2 factorial design in which participants received a description of the actual conflict, accompanied by a picture of either Putin or Obama and titled either “The path to war” or “The path to peace”. Study 1 (n=131) was performed within the two days immediately after the Minsk 2 convention, when the further course of action of the East/West allies was very precarious. Study 2 (n=134) used the same design, but was conducted (again within two days) four weeks later, when the conflict had abated. We found significant gender differences in the perception of the conflict depending on whether Obama/Putin or war/peace was displayed. Additionally, attitudes towards the conflict differed as a function of the visual changes as well as the time of data collection (Study 1 vs. Study 2). The results show that even very subtle manipulations of the visual depiction of the Ukraine Crisis are sufficient to affect people’s mindsets and attitudes regarding this complex conflict.

11:00
Cross-cultural differences of fixation patterns in the perception of human faces
SPEAKER: unknown

ABSTRACT. Photos of faces of Caucasoids (Russians) and Mongoloids (Tuvans) were presented on a monitor. The subjects were instructed to read psychological characteristics from the photos. Exposure time was 3 seconds. The subjects’ gaze direction was registered using an SMI RED-M tracker. The number and duration of visual fixations on the “Top/Middle/Bottom” and “Left/Right” facial zones were calculated. Fixations were detected using the LowSpeed algorithm: min. fixation time 50 ms; max. variance 1°. Subjects were 22 Russians (Moscow) and 26 Tuvans (residents of Kyzyl, Tuva Republic). The following statistically significant differences were found: the Russian sample demonstrated a longer average fixation time on the left half of the face and the middle zone when viewing faces of both races, and a greater number of fixations on the right half of the face and the upper zone when viewing European faces. In the Tuvan sample, the number of fixations on the left half of the face and the middle zone when viewing faces of both races was significantly greater than that of the Russian subjects. The study was supported by the Russian Federation Presidential grant for young scientists, project no. MK-7445.2015.6.

11:00
Likelihood Estimation of Places in Local Environments
SPEAKER: unknown

ABSTRACT. Place recognition is based on long-term memory codes providing local position information. We propose a Maximum-Likelihood model of place recognition taking into account stored and perceived landmark distances and bearings. Stored landmark distance is assumed to be based on triangulation and is therefore veridical. Landmark distances perceived during homing are assumed to be hyperbolically compressed (Gilinsky, 1951). We evaluate the model with experimental data collected from place recognition experiments in a virtual reality setup. Three groups of participants learned a goal location within three different configurations of four distinguishable landmarks (parallelogram, irregular with large distance variation between landmarks, irregular with homogeneous distances). In the subsequent test phase the participants navigated to the goal location, but the environment was now covered by “ground fog” removing all environmental information except the landmarks themselves. Error ellipses were elongated towards the most distant landmark and, in the irregular conditions, showed a systematic bias in the same direction. The model reproduces the ellipse orientations if we assume that distance measurements are less noisy than bearing measurements for near distances (from about 30 m); it also reproduces the systematic biases via the hyperbolic compression of perceived distances (Gilinsky’s A = 90 m).
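[Editorial illustration] Gilinsky's (1951) hyperbolic compression, with the A = 90 m estimate quoted above, can be sketched as follows; this is an illustrative snippet, not the authors' implementation, and the function name is ours:

```python
import numpy as np

def gilinsky(d, A=90.0):
    """Hyperbolically compressed perceived distance (Gilinsky, 1951).

    Perceived distance D = A * d / (A + d): near distances are almost
    veridical, while far distances saturate towards the constant A
    (estimated as 90 m in the abstract above).
    """
    d = np.asarray(d, dtype=float)
    return A * d / (A + d)

# Near distances are barely compressed; far ones strongly so
print(gilinsky(10.0))   # 9.0
print(gilinsky(90.0))   # 45.0
print(gilinsky(900.0))  # ~81.8, approaching the asymptote A = 90
```

In the model, this compression of perceived (but not stored) landmark distances is what produces the systematic biases towards the most distant landmark.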

11:00
Lateral Presentation of Faces Alters Overall Viewing Strategy
SPEAKER: unknown

ABSTRACT. Previous work on expression categorisation has typically used centrally presented images (Eisenbarth & Alpers, 2011; Guo, 2012; Pollux, Hall, & Guo, 2014), often accompanied by a central fixation cross; the bias toward the centre of the image that is introduced by this method is corrected for by removing first fixations from analysis. However, this correction may not be sufficient, as evidence from natural scenes demonstrates a screen-centre bias, which significantly increases the number of fixations to the centre of the screen regardless of image location (Bindemann, 2010). The current study provides evidence that laterally presenting faces produces an overall shift in the viewing strategies used by participants, significantly reducing fixations to the nose and increasing fixations to the eyes and mouth, in a way that is not accounted for by first fixations alone. We suggest that this shifted viewing pattern is closer to a natural viewing pattern that is not distorted by a screen-centre or central-fixation-cross bias.

11:00
Responses of ERPs and eye movements to the recognition of clusters of facial expressions
SPEAKER: unknown

ABSTRACT. To detect human emotional activity, event-related potentials (ERPs) and electro-oculograms (EOGs), as measurements of eye movement, were recorded in response to images of 7 typical facial expressions from the JACFEE database (Matsumoto & Ekman, 1988). Two emotional clusters were created using ratings of emotional impressions based on the Affect Grid scale (Russell et al., 1989). The biological indices were then compared between clusters. Differences in ERP waveforms were observed between 132.5-195.0 ms in the central area (Cz) and between 142.5-192.5 ms in the frontal area (Fz). The cross power spectral density (CPSD) of two-dimensional eye movements was analysed, and its magnitude was compared between the two clusters at several time bins, spanning from 160 ms before stimulus onset to 540 ms after it. Differences were observed from a time point 60-220 ms after stimulus onset, while the differences in frequency ranges, such as the powers of frequency-range factors (1.9-2.5 Hz and 3.1-3.8 Hz), gradually increased across consecutive time bins. These results suggest that both ERPs and EOGs can serve as indicators of the progress of recognising clusters of facial emotions.

11:00
Multiple target location learning in repeated visual search: adaptation or new learning?
SPEAKER: unknown

ABSTRACT. Repetition of display arrangements enables faster visual search, an effect known as implicit contextual cueing. However, the effectiveness of the cueing effect depends heavily on the consistency between bottom-up perceptual input and context memory: re-locating targets to unexpected locations within an unchanged distractor context completely abolishes contextual cueing, and the gains deriving from the invariant context recover only very slowly with increasing exposure to the changed displays. The present study investigated whether a change of global display (color) features facilitates recovery of contextual cueing to re-located targets. The crucial manipulations were a change of the target location across training and test, in addition to changing the color of the search items. It was found that contextual cueing was almost as large in test as in training with color changes, as compared against a baseline condition with no color changes (in which the effect was severely reduced). An additional single-display analysis showed that the transfer of cueing was due to enhanced learning of repeated displays and to adaptation of previous contextual-cueing displays. However, only the latter effect was statistically reliable. We conclude that color changes help recovery of contextual cueing after target location changes by fostering adaptation of old target displays.

11:00
(No) Role of emotions in Emotion Induced Blindness
SPEAKER: unknown

ABSTRACT. The Attentional Blink (AB) is the impairment in reporting the second of two targets (T2) when they are presented in RSVP. AB, however, is not observed when T2 immediately follows the first target (T1), an effect called lag-1 sparing. Lag-1 sparing does not occur in Emotion Induced Blindness (EIB), which uses a paradigm very similar to AB, differing in the type of stimulus used (pictures instead of letters/words) and the number of targets (only T2, which is preceded by an emotional picture). Lag-1 sparing is theoretically important for understanding the temporal limits of attention. Systematically comparing EIB and AB would offer insights into the mechanisms underlying lag-1 sparing. In three experiments, we systematically eliminated the differences between the AB and EIB paradigms. First, we replicated the standard EIB effect; then we made the emotional distractor a target (T1); and finally, both T1 and T2 were made non-emotional. There was no significant difference in accuracies between the three experiments, suggesting that EIB is just AB with pictures and that the use of pictures instead of letters is critical for the absence of lag-1 sparing. Emotion does not have a special role in EIB.

11:00
Burke’s fallacy: Is there a male gaze in empirical aesthetics?
SPEAKER: unknown

ABSTRACT. Edmund Burke (2008/1757) described two types of aesthetic appreciation: Beauty evokes tender feelings of affection and the sublime inspires us with delightful horror. For Burke the sublime is per se the more powerful aesthetic experience. However, literature on gender differences in aesthetic appreciation suggests that women are generally less susceptible to the sublime. We tested this hypothesis using 60 picture details from a triptych by Hieronymus Bosch. 150 participants rated these stimuli in terms of threat (respectively safety) and liking. Moreover, state and trait anxiety as well as state depression were assessed. Across all participants safety and liking were positively correlated (R=.45). Yet, this correlation was higher for women (Rfemales=.70) than for men (Rmales=.22). Gender differences were particularly pronounced among participants in a good mood. We conclude that Burke’s dichotomy of the beautiful and sublime is in fact confounded with gender-related aesthetic preferences and that his proclivity for the sublime reflects a “male gaze” (Mulvey, 1975). Burke’s fallacy is discernible in empirical aesthetics today: Although women display a greater openness to aesthetics (Costa, Terracciano & McCrae, 2001) and tend to prefer simple artworks with untroubled subjects (Chamorro-Premuzic et al., 2010), empirical aesthetics focuses on complexity, cognitive mastery, and aesthetic awe.

11:00
Progressively removing high spatial frequencies: the impact on performance when searching for cancer in chest x-rays.
SPEAKER: unknown

ABSTRACT. Clinicians are often concerned that their performance may be affected if display quality is suboptimal, so for any medical image perception task it is useful to determine the task-related parameters of image quality and relate these to observer performance. We determined the effect of removing high spatial frequencies on the cancer detection performance of two groups of observers with different levels of expertise. A test bank of chest radiographs was created using a wavelet packet transform to progressively remove the high spatial frequencies. 149 first-year undergraduate psychology students and 31 third-year undergraduate radiography students each viewed 20 images from the test bank of 100 radiographs (10 normal and 10 containing a single lung nodule, with 5 levels of decomposition for each image). Receiver operating characteristic (ROC) results demonstrated that only with the most blurred images did the radiography students' performance fall to little more than chance, whereas the psychology students' performance was little more than chance across all images, irrespective of the degree of blurring. These findings demonstrate that, for those with some expertise in looking at chest radiographs, cancer detection performance is not affected until an image is severely decomposed.

13:30-15:00 Session 7A: Surface and texture

.

Chair:
Location: B
13:30
The effect of ambiguity of material perception on the mode of color appearance
SPEAKER: Ichiro Kuriki

ABSTRACT. The “mode of color appearance” (mode) is a concept suggesting that variations in a medium that emits, transmits, or reflects light can cause differences in color appearance. For example, the same light that appears brown (or gray) when reflected from a given object surface may appear orange (or white) when emitted from a light source. The present study investigates the relationships between material perception and perceived mode, especially in terms of luminosity. In the experiment, a computer-generated image of a spheroid was presented with surrounds of various luminance levels. The spheroid was rendered with a surface texture of either matte gray (three simulated reflectance levels) or one of two fabrics. The participants were asked to evaluate the luminosity (mode) and perceived reflectance of the object. The results show that mode perception remained stable in the surface mode when the material identity was disambiguated. Luminosity was fit by a linear function of the CIE L* value of the object surface when the material of the surface was identifiable, whereas the mode perception of the same object could vary with surround luminance when the surface property was ambiguous.

13:45
Gloss perception of photographs and real multi-material objects

ABSTRACT. Gloss perception experiments mainly investigate unicoloured surfaces. However, there is evidence that the albedo can strongly influence perceived gloss (Pellacini, Ferwerda, & Greenberg 2000). We investigated glossiness judgments within a single surface containing multiple albedos. In addition, we tested real surfaces and photographs of these surfaces. Real surfaces were polystyrene sheets that were spray-painted in five colours and five gloss levels. Surfaces either had a uniform albedo or were split into two halves painted in two different colours. In a first experiment participants rated the overall gloss of the bicoloured surfaces, and we found that overall gloss was close to the mean of the ratings of the corresponding unicoloured surfaces. Similar results were obtained for both real surfaces and photographs. In a second experiment participants rated only the left or right half of the bicoloured surface. For the photographs displayed on a monitor, the perceived gloss of one half of the surface was not significantly influenced by the other half. However, for the real surfaces, there was a simultaneous gloss contrast effect: lighter colours appeared less glossy when presented next to a darker colour. Differences between real surfaces and photographs might come from conflicting 3D cues on the monitor.

14:00
Modifying material appearance with bandsifting operators

ABSTRACT. To understand the impact of subband statistics on material appearance, we have explored a range of techniques for manipulating images. We split an image up into subbands using an edge-aware variant of the Laplacian pyramid. Each subband is further split into positive vs. negative signs and low vs. high amplitudes. We explore the range of effects we can get by “sifting” (i.e. keeping or rejecting) information on the basis of amplitude and sign. This pipeline includes, as special cases, some common manipulations such as sharpening, blurring, and denoising. It is difficult to explore the full space of possible operators, but we have found a useful subset. Consistent with prior work, gloss can be increased or decreased by manipulating the high-amplitude positive coefficients. Depending on the subbands, this can look like sparkly gloss or a broader, burnished gloss. Negative coefficients of low amplitude tend to correspond to pigmentation, dirt, and wear. Other combinations yield apparent changes in lighting, making the light appear directional or broad. As long as one stays within reasonable limits, the manipulations appear natural, i.e., one has the impression of viewing an unaltered picture of a natural object.
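[Editorial illustration] The sifting idea can be sketched on a single subband; this is a toy snippet, not the authors' pipeline (which uses an edge-aware Laplacian pyramid). The median-based amplitude split and the function name are our assumptions:

```python
import numpy as np

def sift(subband, sign=+1, high=True, keep=True, thresh=None):
    """Keep or reject subband coefficients by sign and amplitude.

    Coefficients are partitioned into positive/negative sign and
    high/low amplitude (split at the median absolute value here,
    an assumption), and the selected group is kept or zeroed.
    """
    if thresh is None:
        thresh = np.median(np.abs(subband))
    sign_mask = (subband > 0) if sign > 0 else (subband < 0)
    amp_mask = (np.abs(subband) >= thresh) if high else (np.abs(subband) < thresh)
    selected = sign_mask & amp_mask
    return np.where(selected if keep else ~selected, subband, 0.0)

# Boosting high-amplitude positive coefficients, which the abstract
# associates with increased gloss
band = np.array([[0.9, -0.2], [0.1, -1.1]])
glossy = band + 0.5 * sift(band, sign=+1, high=True)
```

In a full pipeline, the sifted subbands would be recombined by collapsing the pyramid; sharpening, blurring, and denoising fall out as special cases of which groups are kept or rejected.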

14:15
Surface reflectance and motion characteristics affect perceived bumpiness of 3D-rotating objects

ABSTRACT. Dynamic visual information projected onto the retina (optic flow) facilitates 3D shape recognition. While the optic flow generated by a moving diffusely-reflecting (matte) object is directly linked to its first-order shape properties, the flow generated by a specular object is tightly related to its second-order shape characteristics (Koenderink and van Doorn, 1980). Dovencioglu et al. (2015) demonstrated that reflectance-dependent optic flow yields differences in the perceived local curvature of rotating matte and specular objects. Here we investigated these perceptual differences in a global shape task. Stimuli were bumpy spheres with the object boundary masked by a Gaussian aperture. The bumpiness level was varied by adjusting the amplitudes of randomly applied sinusoids. We measured ‘percent judged bumpier’ in a 2IFC task, where the reference object was always specular and of intermediate bumpiness. Seven observers completed 5 (bumpiness) x 3 (material: specular, matte, intermediate) x 3 (rotation axes) x 30 (repetition) trials. Results indicate that matte objects were judged as less bumpy than specular ones. Moreover, unlike for matte objects, the perceived bumpiness of specular objects was not affected by the object’s rotation axis, suggesting that specular flow characteristics remain largely robust across different types of object motion.

14:30
Neural Representation of Spectral Densities in IT Cortex

ABSTRACT. The inferotemporal cortex (IT) has been shown to be crucially involved in the processing of complex stimuli such as objects, scenes and faces. Recently, the representation of synthetic fractals was decoded in IT (O’Connel & Chun, 2014), suggesting for the first time that what IT may really process is the complexity of information from the energy and phase offset of an image. Here we used multi-voxel pattern analysis (MVPA) to examine how IT represents information about the energy and phase spectra obtained from images of faces and environmental scenes. To this aim, we used functional magnetic resonance imaging (fMRI) to record the blood oxygen level dependent (BOLD) response while participants were exposed to images of faces or environmental scenes whose energy was spectrally matched to several distributions, including white, pink (1/f) and Brownian noise. Our preliminary findings suggest that the energy of 1/f noise was reliably decoded in the right hemisphere in both the scene-selective (PPA) and face-selective (FFA) areas of IT. The specific energy distribution of face stimuli could not be decoded from activation in either the FFA or the PPA.

14:45
Absolute and relative spatial frequency tuning in V1 neurons

ABSTRACT. Several psychological studies have reported that the spatial frequency (SF) tuning of human perception varies depending on stimulus size. This size-dependency of SF tuning may contribute to perceived object “size constancy”. A recent study of neurons in the macaque inferotemporal cortex (IT) reported that a population of IT neurons decreased their peak SF with increasing stimulus size (Inagaki and Fujita, 2011). They concluded that some IT neurons are tuned not to absolute SF (cycles/°) but to relative SF (cycles/image). However, comparatively little is known about whether early visual neurons are tuned to relative or absolute SF. In the present study, we investigated the effects of stimulus size on the SF tuning of V1 neurons using a drifting sinusoidal grating stimulus. A subpopulation of V1 neurons in the superficial layers (layer 2/3) exhibited nearly perfect relative SF tuning, while most layer 4 neurons exhibited absolute SF tuning. We also found that the neurons with relative SF tuning showed a time-varying peak SF, shifting from low to high. These results suggest that the transformation from absolute to relative SF tuning starts in V1, by converging inputs from absolute-SF-tuned neurons with different peak SFs.
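[Editorial illustration] The distinction between absolute SF (cycles/°) and relative SF (cycles/image) is a simple rescaling by stimulus size; a minimal sketch with hypothetical numbers (the function name is ours):

```python
def relative_sf(absolute_sf_cpd, image_size_deg):
    """Convert absolute SF (cycles/deg) to relative SF (cycles/image).

    A relative-SF-tuned neuron keeps cycles/image constant as the
    stimulus is rescaled, so its preferred cycles/deg must fall as
    image size grows, which is the size-dependent peak-SF shift
    described in the abstract.
    """
    return absolute_sf_cpd * image_size_deg

# The same relative SF (8 cycles/image) at two stimulus sizes
print(relative_sf(4.0, 2.0))  # 8.0 cycles/image for a 2-deg stimulus
print(relative_sf(2.0, 4.0))  # 8.0 cycles/image for a 4-deg stimulus
```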

13:30-15:00 Session 7B: Attention

.

Location: A
13:30
Simulating spatial auditory attention in a gaze contingent display: The virtual cocktail party

ABSTRACT. The ability to make sense of cluttered auditory environments is convincingly demonstrated in the so-called cocktail party effect. A speech signal of interest can be better separated from competing speech signals and background noise when listeners have normal binaural cues to the spatial location of the speaker. However, in most media applications, including virtual reality and telepresence, the audio information is impoverished. We hypothesized that a listener's spatial auditory attention could be simulated based on visual attention. Since interlocutors typically look at their conversational partner, we used gaze as an indicator of current conversational interest. We built a gaze-contingent display that modified the volume of the speakers' voices contingent on the current region of overt attention. We found that a rapid increase in amplification of the attended speaker combined with attenuation but not elimination of competing sounds (partial rather than absolute selection) was most natural and improved source recognition. In conclusion, audio gaze-contingent displays offer potential for simulating rich, natural social and other interactions in virtual environments.

13:45
Social orienting in gaze leading: A mechanism for shared attention

ABSTRACT. Here we report a novel social orienting response that occurs after viewing averted gaze. We show that when a person looks from one location to an object, attention then shifts towards the face of an individual who has followed the person’s gaze to that same object (Experiments 3-5). That is, contrary to a ‘gaze following’ effect, attention instead orients in the opposite direction to observed gaze. Thus, those that follow our eye-gaze capture our attention. This ‘gaze leading’ effect emerges only in active, object-oriented, tasks. The effect is not present when the object is not an image of a real-world artifact, but a mere fixation cross (Experiment 2) and is reversed when the task is passive (Experiment 1; i.e. a standard gaze ‘cueing’ effect). Thus, the context in which our eye-gaze is followed is crucial to how our attention orients. We propose that the gaze leading effect implies a mechanism in the human social cognitive system for detecting when one’s gaze has been followed, in order to establish ‘shared attention’ and maintain the ongoing interaction.

14:00
Feed-forward feature-based attention modulates attentional capture and gaze capture by irrelevant onsets in visual search

ABSTRACT. Attention and eye movements are to a large extent controlled by our intentions and goals, so that, for instance, task-irrelevant items will attract attention and the gaze only (or much more strongly) when they are similar to task-relevant stimuli. Yet, one type of stimulus has been described as being exempt from goal-driven top-down modulation: suddenly appearing stimuli or ‘onsets’ seem to be able to attract attention and the gaze automatically, by virtue of their stimulus attributes and without necessarily being similar to the target. In the present study we examined whether irrelevant onset stimuli can indeed capture attention and the gaze independently of top-down intentions, by asking participants to search for a color target that could be either red or green, with the target color announced prior to each trial by a word cue. The results of separate eye tracking and EEG experiments showed that attentional capture and gaze capture by the irrelevant onset distractor were strongly modulated by whether its color matched or mismatched the word cue, demonstrating that onset capture is strongly modulated by top-down, goal-driven processes. These results argue against the view that capture by onsets is exempt from top-down, feature-based attention.

14:15
Why don’t we see the gorilla? Looking in the wrong place, attending to the wrong objects, or doing the wrong task?

ABSTRACT. Observers counting basketball passes often do not notice an unexpected "gorilla" (Simons & Chabris, 1999). They notice it more often when counting black-team passes (83%) than white-team passes (42%). Supposedly, when counting black-team passes, the gorilla’s similarity to the attended team leads to its capture by attention, and subsequent conscious perception. However, other attentional factors may play a role. We find that: (1) Fixations differ in important ways when counting black- vs. white-team passes. "Black-team fixations" land closer to the gorilla (m=6.9 deg horizontal distance) than "white-team fixations" (m=10.0 deg, t(57)=2.31, p=0.02, display=40x30 deg). (2) However, observers with a known gorilla discrimination task (150 ms presentations of individual video frames) are equally good with either white-team fixations (d’=2.30) or black-team fixations (d’=2.27). (Umbrella woman d’>3.25.) (3) Naïve observers (n=11) with white-team fixations, attending to the black team for a numerosity task (static images, 150 ms), rarely notice anything unusual (54%), whereas with black-team fixations (n=10) they often do (80%). These results suggest that attentional selection of similar items is not the whole story. Rather, an interaction between looking in the wrong places and not knowing the “real” gorilla-detection task helps render the gorilla invisible.

14:30
Perceiving Crowd Attention: consensus gaze following in human crowds

ABSTRACT. An effective way to receive information from others in a group is to observe their attentional focus through gaze direction (e.g. the gaze-cueing effect). In many cases, however, the gaze directions within a crowd are not unanimous. We aimed to determine how humans process conflicting group-attention information, and modified the gaze-cueing paradigm into a human-crowd version. A group of human avatars was first presented, and the gaze-cue orientations in the crowd were manipulated from consistent (all avatars gazing in the same direction) to inconsistent (1 to 5 out of 10 avatars with the opposite gaze orientation). Participants then had to indicate whether a subsequent probe item on either side (cued or not cued by the crowd gaze) contained two or three dots. The results demonstrated that the majority’s gaze orientation attracted most of the attentional resources, and that the distribution of attention was in accordance with the proportion of individuals sharing the same gaze orientation in the group. As the degree of divergence increased, the difference in attentional resources between the two orientations increased. This is the first study to explore the mechanism of attention distribution behind gaze following when crowd attention is in conflict.

14:45
Motion direction is processed automatically

ABSTRACT. Motion may be prioritised in visual processing since it is highly biologically relevant. Here we test the idea that motion is processed automatically. In the first study, an arrow pointing to the left or right contained a flowfield of left- or right-moving dots. Participants responded to either arrow direction or motion direction. There was a congruity effect (CE) for both tasks, suggesting automaticity. However, reaction time (RT) was faster for the arrow task, and arrow direction produced a larger CE on the motion task than vice versa. Analyses of the RT distribution by means of a binning procedure showed a significant CE for the motion task across all RTs. For the arrow task, a significant CE emerged only at late RTs. In the second study, we presented small translating arrows instead of the large static arrow, so that arrow direction and motion direction were both local properties. We replicated the previous result that arrow direction interfered more and earlier than motion direction, suggesting that the results are not explained by a global-over-local advantage. Time-course analysis of the CEs revealed that motion direction is processed automatically and interfered at late response times, consistent with the neural mechanisms supporting shape and motion processing.
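The RT-binning procedure described in this abstract can be sketched roughly as follows. This is a generic illustration of a quantile-bin (delta-plot style) analysis, not the authors' actual code; the function name, bin count, and example values are assumptions.

```python
import statistics

def ce_by_rt_bin(congruent_rts, incongruent_rts, n_bins=4):
    """Estimate the congruity effect (CE) within RT bins.

    Each condition's RTs are sorted and split into n_bins equal-sized
    bins; the CE per bin is the difference of bin means
    (incongruent - congruent), so a positive value indicates interference.
    """
    def bin_means(rts):
        rts = sorted(rts)
        size = len(rts) // n_bins
        return [statistics.mean(rts[i * size:(i + 1) * size])
                for i in range(n_bins)]

    inc = bin_means(incongruent_rts)
    con = bin_means(congruent_rts)
    return [i - c for i, c in zip(inc, con)]
```

A CE that appears only at late RTs, as reported here for the arrow task, would show up as values near zero in the fast bins and positive values in the slow bins.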

13:30-15:00 Session 7C: Vision preference and emotion

.

Location: C
13:30
Biological foundations of adult colour naming and preference are revealed by infants’ response to colour.
SPEAKER: Anna Franklin

ABSTRACT. Across cultures, there are commonalities in how colour lexicons categorise colour, and common patterns of colour preference (e.g., Kay & Regier, 2003; Hurlbert & Ling, 2007). Here, we investigate the origins of these commonalities in adult colour naming and preference by studying colour categorisation and colour preference in 4-6 month old infants. Hues were systematically sampled around the hue circle at maximum chroma, in steps greater than infants’ chromatic discrimination thresholds at 4-6 months. Whether pairs of hues were distinguished in infant memory was measured with a novelty preference method (e.g., Franklin & Davies, 2004). Looking times to individually presented hues were also measured. Infants distinguished blue-green, green-yellow, purple-red and blue-purple colour differences, but did not distinguish hues within the blue, green, purple or yellow-red regions even when hue differences were maximised. The distinctions that infants made align with common fault lines in the world’s colour lexicons and with cone-contrast mechanisms. Infants also looked longer at colours that adults prefer, and both infant looking and adult preference were greater with more positive S-(L+M) cone-contrast. Overall, the findings suggest that commonalities in adult colour cognition are present in infancy and can be explained by low-level mechanisms of the visual system.

13:45
Missing it, and missing it badly: negative affect induced by missed changes in change blindness paradigm

ABSTRACT. Observers who fail to locate visual search targets despite long gaze dwell times tend to dislike those targets afterwards (Kristjansson & Chetverikov, 2015). We used a novel mouse-contingent change blindness (CB) paradigm to test whether this effect contributes to CB via negative feedback. Observers studied a 5x5 matrix of artificial traffic signs repeatedly presented for 250 ms with 83 ms blank intervals. On each repetition, one of the signs changed in color, orientation, or content. The visible area was limited (5 degrees), centered on the mouse position, and began to shrink if the mouse position did not change, so observers had to move the mouse to see the stimuli. A trial ended when observers found the changing target or when the mouse stayed on the target (or on a randomly selected item on catch trials) long enough to see both the original and the changed target. In the latter case, observers were asked to select the traffic sign they liked or disliked (counterbalanced) among the target and the two closest distractors. We found that, compared to catch trials, “missed but looked at” targets were disliked, arguing both for implicit change detection in change blindness and for a role of affective feedback in perception.

14:00
Low-level Image Properties correlate with Personal Traits in Artificial Face Images

ABSTRACT. Recently, low-level image statistics have been associated with several properties of faces, such as emotion, attractiveness and age. In this study, I investigated how such statistics are associated with the personality traits attributed to faces. To this end, I used artificial face images from seven databases created by the Todorov research group (Todorov et al., 2013). Each database included images of 25 maximally distinct face identities that were manipulated on seven different traits (Attractiveness, Competence, Dominance, Extraversion, Likeability, Threat, and Trustworthiness) for shape and reflectance, in the positive and the negative direction, respectively. I used established measures of low-level image properties that have previously been associated with aesthetics, such as Fourier power and slope, and PHOG self-similarity and complexity, amongst others. Interestingly, there were similar patterns of correlations with low-level image properties for Extraversion, Threat and Dominance, as well as for Attractiveness and Likeability. The results indicate that controlled changes in the morphology and colour of faces lead to specific changes in low-level properties. It is therefore conceivable that these low-level image properties play a role in the subjective perception of personality traits.

14:15
Functional integration of neural signals during person perception

ABSTRACT. The role of the ventral visual stream in person perception has been studied extensively. Segregated patches of cortex show selective responses to images of faces and bodies. Less research to date has investigated functional integration between neural signals during person perception. I review recent fMRI studies from my lab that have investigated the hypothesis that functional interplay within the ventral visual stream, as well as with extended brain circuits, underpins the representation of identity. First, I will show that when observing others and overtly inferring social traits (e.g., friendly, kind), body-selective patches are functionally coupled with the theory-of-mind network. Second, I will show how detection of social signals from body cues, such as an overweight or muscular physique, involves functional coupling within body-selective circuits, as well as between body patches and dorsal parietal cortex. These data firmly support the view that the representation of identity engages a distributed neural network that is not restricted to segregated processing units in the ventral visual stream. Instead, during person perception, category-selective responses show coupling with each other, as well as with attention-orienting and inferential mechanisms. These findings highlight the importance of considering distributed and connected brain circuits when investigating the ventral visual stream.

14:30
Exogenous cuing of attention increases preference for abstract shapes

ABSTRACT. There is growing interest in the role of attention in preference formation. It is known that people attend to rewarding stimuli. We examined the opposite effect: does orienting attention increase preference? We used an exogenous cuing task followed by an explicit rating of preference. In Experiment 1 an uninformative cue appeared left or right, followed by an abstract pattern. Participants quickly classified the patterns’ regularity and then rated preference on a 9-point scale. One group overtly attended the patterns, whereas the other group maintained fixation throughout the whole trial. Patterns at cued (valid) locations were liked more than patterns at invalid locations (validity effect). Interestingly, this effect was observed only with an overt shift of attention. In Experiment 2 validly and invalidly cued patterns were tested against a baseline (no-cue condition). There was a ‘valid > no-cue > invalid’ trend both in performance and in preference. By manipulating several parameters in further experiments we observed that the preference modulation reflected the cueing effect on attention. We conclude that attention and preference are reciprocally related. Moreover, the activation of a sensorimotor response (i.e. overt orienting towards or away from cued locations) is critical: people move their eyes to preferred stimuli and, in turn, shifting the gaze increases preference.

14:45
The Effectiveness of Augmented Reality in Enhancing the Experience of Visual Impact Assessment for Wind Turbine Development

ABSTRACT. A Visual Impact Assessment (VIA) is a formal requirement for proposed building developments likely to significantly affect the landscape. A key component of a VIA is the production of visual materials illustrating the development. But how effective are they? Sixty-seven participants were told about a planned wind turbine development near our University and that the proposed site could be viewed from a location on campus. The illustrations used were: a printed static photograph simulating the proposed development (adhering to current UK guidelines), the same photograph presented on the screen of a laptop, and an augmented reality simulation on a tablet computer which included an animated wind turbine superimposed on the scene. Participants were asked to rate various aspects of these simulations (e.g. clarity, trustworthiness) as well as to state their overall preference. The augmented reality simulation received the best ratings on all aspects and was the most preferred. The reasons given included that the animation gave a better idea of what the wind turbine would look like “in situ” and that it was easier to alternate viewing between the real scene and the simulated scene. These results will inform future government guidelines on materials used in VIAs.

15:00-16:00 Session 8

Vision & Cognition (Expertise, Learning, Memory & Decisions) / The Human Face (Detection, Discrimination & Expression) / Visual Art, Attraction & Emotion

Location: Mountford Hall
15:00
Perceptual training of faces in rehabilitation of acquired prosopagnosia

ABSTRACT. Despite the large number of studies on prosopagnosia, there have been few attempts at rehabilitation. Here, we determine whether perceptual learning can be used in the rehabilitation of acquired prosopagnosia. Nine acquired prosopagnosics completed a 12-week face training program and a 12-week control task. Patients were presented with three faces and were asked to determine which of the bottom two faces most resembled the top face. A staircase procedure was used to tailor the training to each patient, in which the faces became increasingly similar as performance improved. Training began with neutral front-on faces, and varied across expression and viewpoint as the training progressed over the weeks. Following training, we observed an improvement for trained faces (23%), as well as for new untrained expressions (28%) and viewpoints (20%), but less so for untrained identities (8%). With the exception of untrained identities, improvements were significantly larger following training than following the control task. Finally, we found no relationship between lesion location and the benefit of training. In summary, perceptual learning may be a useful tool for improving face recognition in acquired prosopagnosia, allowing generalizability across viewpoints and expressions, but may be limited in its generalizability to new identities.
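The adaptive procedure described above, in which faces become more similar (harder) as performance improves, can be sketched as a simple staircase. This is a hypothetical illustration under assumed parameters (a two-down/one-up rule, a similarity scale in [0, 1], a fixed step size), not the training program actually used.

```python
class Staircase:
    """Two-down/one-up staircase over a 'similarity' level in [0, 1].

    Higher similarity means the distractor faces more closely resemble
    the target face, i.e. a harder trial. Two consecutive correct
    responses make the task harder; any error makes it easier.
    (Rule and parameters are illustrative assumptions.)
    """

    def __init__(self, similarity=0.5, step=0.05):
        self.similarity = similarity
        self.step = step
        self._streak = 0  # consecutive correct responses

    def update(self, correct):
        """Record one response and return the next trial's similarity."""
        if correct:
            self._streak += 1
            if self._streak == 2:  # two in a row: increase difficulty
                self.similarity = min(1.0, self.similarity + self.step)
                self._streak = 0
        else:  # error: decrease difficulty
            self.similarity = max(0.0, self.similarity - self.step)
            self._streak = 0
        return self.similarity
```

A two-down/one-up rule of this kind converges on the difficulty level at which the patient is correct on roughly 71% of trials, keeping the task challenging but achievable throughout training.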

15:00
The effect of repeated exposure of abstract visual patterns on aesthetic preference in children and adults

ABSTRACT. According to the two-step attributional theory, repeated exposure to stimuli enhances the subjective feeling of perceptual fluency, which in turn creates a preference for old over new stimuli. However, previous research on the mere exposure effect in children, compared to adults, has shown quite inconsistent findings. In the present study we investigated the effect of repeated exposure to abstract visual patterns on aesthetic preference in participants from three age groups (adults, 13- and 9-year-olds). In the familiarization phase, abstract visual patterns were presented in a heterogeneous sequence, at an optimal duration (100 ms), with a frequency of 2, 5 or 10 exposures. In the second phase of the study, participants rated their aesthetic preference for the presented and new visual stimuli on a seven-step bipolar beautiful-ugly scale. The results showed that the positive effect of mere exposure on preference was obtained only in the group of adults. In the group of 13-year-olds the effect of mere exposure was not significant, while in the youngest age group the effect was reversed, i.e. respondents preferred new over old stimuli. We conclude that the two-step account is not appropriate for interpreting the findings obtained in children, and we discuss alternative interpretations.

15:00
Timed object naming in Russian language

ABSTRACT. Picture naming norms have been collected in different countries, as language-specific factors have large effects on performance in this task (Bates et al., 2003). The aim of this study was to acquire picture naming norms for Russian. Thirty-three native Russian speakers aged 18 to 22 years (23 female) named each of 520 black-and-white pictures of objects as quickly as possible. The pictures were taken from the standard IPNP database set (Szekely et al., 2002). The dominant name, name agreement score, percentage of valid and invalid answers, as well as instances when a subject could not name the object, were recorded for each picture. The mean naming times for dominant words and the overall mean naming times for each picture were calculated. Comparison of the Russian scores to English norms identified pictures for which naming rate and other variables differed significantly, as well as pictures that did not differ significantly from the English norms. Some of the depicted objects were less familiar to native Russian speakers than to other groups. Thus, it is necessary to use norms collected for Russian in studies with native Russian-speaking participants.

15:00
Cross-dimensional correspondences in perception enhance short-term memory for congruent but not incongruent shape-elevation and shape-pitch feature pairs

ABSTRACT. Recent research exploring perceptual correspondences between and within sensory modalities has demonstrated behavioural advantages when to-be-judged targets are congruent with concurrent distractors (Spence, 2011). This phenomenon holds for a range of perceptual dimensions (e.g. shape-pitch, size-weight, etc.). The experiments reported here go beyond current perceptual experience and instead explore the influence these correspondences have on short-term memory. Two experiments tested the idea that congruent dimensions would enhance recognition. In a change detection task participants encoded either six concurrently presented shape-elevation objects (Expt 1: e.g. angular-high, rounded-low) or four sequentially presented pitch-shape pairs (Expt 2: e.g. high-angular, low-rounded). Pairings in both experiments were equally likely to be congruent or incongruent. A single pair was tested after a 2 s delay. For shape-elevation, performance was influenced by the congruence between the elevation of the object and its shape, with congruent pairings enhancing performance significantly and with a large effect size. For pitch-shape, the congruence effect on change detection was again significant, with congruent pairs enhancing memory. The results imply that congruent correspondences across the perceptual features of an object or event improve memory. This may be the result of attention prioritising congruent pairings, or of object-based binding mechanisms dedicated to the typical feature pairings of an object.

15:00
Lesions of the Medial Occipito-Temporal cortex affect spatial binding of sensory and memory data

ABSTRACT. Occipito-temporal cortex is parcellated, with medial regions having a greater representation of the visual periphery than regions along the lateral aspect. Imaging studies have suggested disparate functions for the medial regions. These proposals do not generally account for the dramatic impairments of attention and memory displayed by patients with strokes in medial occipito-temporal cortex. We examined a middle-aged man who had suffered bilateral posterior circulation strokes involving the medial occipito-temporal cortex. The patient showed impaired recognition of compound objects when constituent parts were rearranged, but not, for example, when their shape was changed. The patient was impaired when recalling the color of an object from visual working memory, but only when the object was identified by its location rather than its shape. He showed a specific liability to long recall delays, with an increase in spatial binding errors. The patient had no difficulty discriminating large and small objects, or performing a facial discrimination task, as long as the task did not require the appraisal of fine spatial relations between facial features. We conclude that medial occipito-temporal cortex is crucial for the spatial binding of perceptual and memory information, in part because of its role in maintaining stable spatial representations over time.

15:00
Representational Space of Cave Paintings and Petroglyphs

ABSTRACT. Four experiments examined memory for cave paintings and petroglyphs, and tested the hypothesis of Dobrez (2013) that reports of looming in such rock art are due to representational momentum. Participants briefly viewed a target photograph of a cave painting or petroglyph, and then a probe photograph of the same cave painting or petroglyph was presented. The viewpoint in the probe could be closer than in the target, the same as in the target, or farther than in the target. Participants judged whether the viewpoint in the probe was (a) the same as or different from the viewpoint in the target or (b) closer than, the same distance as, or farther than the viewpoint in the target. Experiments 1-2 presented photographs of rock art depicting various entities, and Experiment 3 presented photographs of rock art involving hand prints and stencils. In all experiments, memory for the originally viewed target was displaced away from the observer; this is not consistent with representational momentum, but is consistent with boundary extension. It is suggested that looming effects arise only after continued inspection and reflect a mismatch between the remembered initial information (which has been displaced due to boundary extension) and current perceptual information.

15:00
Anatomical dissociations of forward and backwards semantic processing in the cerebellum using theta burst stimulation.

ABSTRACT. The cerebellum is an area commonly involved in the learning and prediction of motor sequences. Recently, some researchers have begun to study its involvement in other higher-order functions, such as visual semantic processing from words. Argyropoulos (2011), for example, stimulated medial areas of the cerebellum, producing a boost in semantic priming. This effect has been interpreted as demonstrating that the cerebellum does not only model associations between perceptual and motor units but also operates at higher levels, including semantic associations. In this study we asked whether the cerebellum has a role not only in the forward associations normally studied in semantic priming (i.e., DOG-BONE), but also in backward ones (BONE-DOG). To do so, we replicated the previous study by Argyropoulos and extended it to the evaluation of backward priming pairs. Results demonstrated a clear anatomical dissociation, with an increase in backward priming after stimulating the left cerebellum and an increase in forward priming after stimulating the right cerebellum. These results are discussed with respect to current models of the cerebellum and its role in semantic processing.

15:00
Contributions of feature shape and surface cues to facial expression perception

ABSTRACT. Theoretical accounts of facial expression recognition emphasise feature shapes as the dominant visual cue, with surface reflectance and texture cues providing less useful information (Bruce & Young, 2012). Recently, however, behavioural adaptation studies have suggested an important role for surface-based cues in expression perception (Benton, 2009). To address this issue, we created stimuli with shape-only (normalised reflectance) or surface-only (normalised shape) information for expressions of five basic emotions (fear, anger, disgust, sadness and happiness). In the first experiment, we found that images containing only shape or only surface cues had a similar impact on the categorisation of facial expressions. However, we also found that the importance of shape and surface information differed between expressions. Expressions of fear and anger relied more on surface information, while sadness and happiness depended strongly on shape information. In the second experiment, we explored the role of feature shape and surface properties in perceptual similarity judgements of facial expression. Using a regression analysis, we again found that feature-shape information provides a strong cue for expression perception, but surface cues also significantly predict perceptual similarity ratings of expressions. Our findings clarify how shape and surface cues contribute to the perception of facial expressions.

15:00
The perceptual costs and benefits of learning to multitask

ABSTRACT. Perceptual learning refers to the ability to improve perceptual performance through practice. Most work in this field has focused on the rules governing the transfer of learning between different perceptual tasks, or on the magnitude of learning with different temporal patterns of stimuli. Here we examine the perceptual costs and benefits of learning multiple perceptual tasks at the same time, with stimuli drawn from the same or different visual dimensions. Using the method of single stimuli, one group of observers simultaneously practiced discriminating the mean orientation (45 deg.), spatial frequency (2.5 cycles/deg.) and envelope size (SD 0.66 deg.) of a Gabor over eight daily sessions. Three other groups independently practiced discriminating multiple mean orientations (30, 45, 60 deg.), spatial frequencies (1.5, 2.5, 3.5 cycles/deg.) or envelope sizes (SD 0.33, 0.66, 0.99 deg.) over eight days. A numeric cue indicated the implicit mean to be discriminated on the next trial. When stimuli were drawn from independent visual dimensions, observers learned with the same rate and magnitude on all tasks, but when stimuli were drawn from the same visual dimension, there was a cost for learning on multiple tasks. Our results demonstrate that there is no cost to multitask perceptual learning when stimuli lie on independent visual dimensions.

15:00
Detection and Recognition of Emotional Facial Expressions in Peripheral Vision

ABSTRACT. Successful perception of facial expressions of emotion is a fundamental social skill in humans that facilitates social interaction by allowing an observer access to information about a signaller’s mental states. In order to fully characterize the signalling properties of facial expressions, we need to understand the environmental constraints under which they can be recognized (e.g. Schmidt & Cohn, 2001). Signals of danger or threat may be important to recognize in the periphery of the visual field (VF) or at greater viewing distances, due to the evolutionary importance of their correct perception. Here we investigated how perception of the basic facial expression categories (Ekman & Friesen, 1976) changes across different eccentricities in the VF (up to 30 deg). Participants performed both an emotion identification (which emotion?) task and an emotion detection (emotion vs. neutral) task. We found that in emotion identification, happy and surprise were the best-recognized emotions extending into the periphery, in agreement with our previous work (Smith & Schyns, 2009). Interestingly, in the detection task, happy and surprise but also fearful faces were detected above chance in the periphery. We interpret our results with respect to stimulus salience, evolutionary pressures, and neurobiological theories of fearful face signalling.

15:00
Predicting perceived visual complexity using objective image properties and eye-movement measurements

ABSTRACT. The visual complexity of stimuli is thought to influence many experimental measures, such as detection rate, reaction time and aesthetic evaluation. The goal of this work is to predict the perceived visual complexity of images by combining objective computational measures (e.g., image compression rate, edge detection) with individual eye-movement statistics (e.g., fixation duration, saccade length). Sixty-three participants were presented with abstract paintings and abstract black-and-white patterns in two blocks for a 5 s free-viewing period while eye movements were recorded. After viewing each image, they were asked to rate its visual complexity on a 7-point scale. Subsequently, we calculated several objective parameters of image complexity and eye-movement statistics for each trial. A combination of ranking by AICc (Akaike information criterion corrected for finite sample size), VIF (variance inflation factor), and data sub-sampling was used to obtain parsimonious linear mixed-effects models predicting perceived visual complexity. For both groups of stimuli, the best models combined one or more objective computational measures of complexity with eye-movement statistics. We therefore conclude that while objective measures like image compression offer a convenient and often sufficient approximation of visual complexity, eye-movement measures can help to further improve predictions.

15:00
Further experimental investigations into perception and recognition of two-colour pictures of faces at isoluminance

ABSTRACT. First evidence for the deactivation of configurational face-information processing at isoluminance was shown by Bliem (1993). Here, a further elaboration of fine-tuned experimental conditions for investigating the detailed role of isoluminance in face processing is presented. In the first experiment, the subjective isoluminance point for two-colour faces was determined by measuring the threshold at which perception of faces was worst. In the second experiment, subjects had to recognize faces under positive and negative luminance contrasts of +/-80%, +/-8% and +/-4%, each for chromatic and achromatic two-tone conditions, and under isoluminant two-colour conditions. In the inspection phase, faces had been presented as original multi-tone grey-level pictures. Results show a significantly higher error score at isoluminance compared to all positive contrast conditions, but no significant difference between isoluminance and the negative contrast conditions. The findings reveal a deactivation of global gestalt-like processing at isoluminance and under all negative contrast conditions. Furthermore, the findings clearly reveal the exclusive importance of positive contrast for activating the face-specific 3D shape-from-shading that underlies successful global face perception and recognition. These interpretations are in accordance with an integrated neural model of global (magno-supported) and local (parvo-supported) face-information processing (Bliem, 2010).

15:00
Unmasking backward masking of emotional faces
SPEAKER: unknown

ABSTRACT. Experimental techniques to render emotional facial expressions subliminal have included brief target presentations, backward masked by emotionally neutral face pictures (Esteves & Öhman, 1993; Whalen et al., 1998). To measure pupillary changes induced by different emotional expressions (whilst keeping pre- and post-exposure luminance constant), we presented a target emotional face sandwiched between forward and backward presentations of a neutral face. Twenty participants were presented with a neutral face for 1000 ms, followed by a 10 ms exposure of a target face (2 blocks: fear/neutral; happy/neutral), finally masked by a neutral face for 4 seconds. They indicated via button press whether an emotional face was present in the display. Participants were expected to perform at just above chance in identifying masked target emotions. However, under these masking conditions most participants were aware of features (teeth and eyes) and were well above chance at identifying all emotions. Performance was best for emotional faces, and happy faces were more reliably identified than fearful ones. We conclude that effective backward masking of faces is unmasked by the inclusion of a forward neutral mask, presumably because the contrast between facial features (e.g. teeth) allows participants to more readily detect a change between neutral and emotional faces.

15:00
The face N170 is mostly sensitive to pixels in the contralateral eye area
SPEAKER: unknown

ABSTRACT. Recently, we quantified the coding function of the N170 single-trial variability in possibly the simplest socially relevant task: face detection (Rousselet et al. 2014). On each experimental trial, observers saw face and noise pictures sparsely sampled with Gaussian apertures (“bubbles”; Gosselin & Schyns 2001). Using reverse correlation and mutual information, we found that the presence of pixels around the eye contralateral to the recording electrode modulated single-trial ERPs at lateral-occipital electrodes, and most strongly the N170. Here, in three control experiments, each involving 6 subjects, we show that this result holds for faces of different sizes, for faces with different emotional expressions, and after contrast normalisation. However, the absolute N170 contralateral eye sensitivity differed across face sizes, and was delayed and strongly reduced by contrast reversal. Because contrast reversal preserves local edges and eye saliency, but affects the distribution of contrasts across the face, the lower brain sensitivity to eye pixels in that condition suggests that this sensitivity reflects some form of feature processing in a face context, possibly tuned to a particular face size. We conclude that, in a face detection task, the N170 predominantly reflects the encoding of a single feature: the contralateral eye.

15:00
Compressed subjective duration of social interactions mediated by oxytocin
SPEAKER: unknown

ABSTRACT. Communication through body gestures permeates our daily life. Efficient perception of the message therein reflects one’s social cognitive competency. Here we report that such competency is manifested temporally as shortened subjective duration of social interactions: motion sequences showing agents acting communicatively are perceived to be significantly shorter in duration as compared with those acting independently. The strength of this effect is negatively correlated with one’s autistic-like tendency. Critically, intranasal oxytocin administration fosters the temporal compression effect in socially less proficient individuals whereas the administration of atosiban, a competitive antagonist of oxytocin, abolishes the very effect in socially proficient individuals. These findings clearly demonstrate that perceived time, rather than being a faithful representation of physical time, is highly subjective and imprinted with one’s social traits. Moreover, they highlight the role of neuropeptides in mediating time perception, which has rarely been studied thus far.

15:00
Under-stimulation at untrained orientation may explain orientation specificity in perceptual learning
SPEAKER: unknown

ABSTRACT. Perceptual learning (PL) can transfer completely to an orthogonal orientation if the latter is exposed through an irrelevant task (Zhang et al., 2010). PL is thus more likely rule-based cognitive learning. Here we used a continuous flash suppression (CFS) paradigm to investigate why PL is orientation specific in the first place. Foveal orientation training was combined with various conditions of orthogonal orientation exposure that was rendered subconscious with dichoptic flashing white noise. Observers first reported the color of a dot centered on the noise, without knowing of the presence of the subconscious orthogonal Gabor. This bottom-up exposure produced partial transfer. In a top-down “exposure” condition, observers guessed whether a Gabor/letter-C was presented while no Gabor was actually shown, which failed to produce transfer. However, when the orthogonal Gabor was actually present, transfer was complete with this combined bottom-up and top-down exposure. These results indicate that bottom-up orientation exposure is required for learning transfer, and that orientation specificity may result from under-stimulation of untrained orientations, which are either unstimulated or suppressed during training. Although top-down influence alone has no impact on transfer, it can boost the effect of bottom-up exposure, so that high-level learning can functionally connect to new orientation inputs for complete learning transfer.

15:00
Startling fluency? Testing effects of processing fluency on affect-modulated startle
SPEAKER: unknown

ABSTRACT. Processing fluency is usually associated with increased affective value (e.g., Reber, Schwarz, & Winkielman, 2004) but can also be interpreted as amplifying the initial affective value of a stimulus, as in the Fluency Amplification Model (FAM; Albrecht & Carbon, 2014). To examine the impact of fluency on a stimulus's affective value at the psychophysiological level, we used an affective startle modulation paradigm (for guidelines, see Blumenthal, 2005). Photographic stimuli varying in valence (7 categories from negative to positive), arousal (high vs. low), and processing fluency (original pictures vs. blurred variants) were presented while startle responses were elicited by 105 dB white-noise bursts (50 ms) at different SOAs. The eye-blink component was measured via facial EMG. Consistent with previous findings, results showed a modulation of startle intensity by affective stimulus content at late startle SOAs, with negative stimuli leading to greater startle responses than positive ones. Although fluency did not seem to affect startle reflex intensity, we found an effect of fluency on the latency of the startle reflex for short SOAs. Implications for the link between fluency and the affective value and subsequent post-processing of stimuli are discussed.

15:00
The D-Scope®: Beyond Veridicality

ABSTRACT. This paper introduces a new audio-visual medium in the form of the D-Scope®, ‘a system of apparent motion using concrete physical objects lit sequentially’ (Trope, 2014). Animation occurs in the D-Scope® without an intervening lens or a framing screen, so that it occupies the same environment as participants. Apparent motion is simply derived from mapped illumination revealing objects in turn. Attention is mediated, but the objects are real. There is no strobe light or any shuttering device, just objects hung carefully in a void and lit sequentially to produce apparent motion. This paper considers the implications of what it means if an image is not ‘a representation of the external form of a person or thing in art’ but is instead constructed from concentrated and condensed reality (so not an image), and our perception of it is assembled from a continuity of fleeting moments. Here the properties of real/unreal become meaningless and we are dealing with a visual order constructed of abstract purity combined with structural unity. We live in a state of flux and the D-Scope® captures this. It is change made manifest.

15:00
Cues to Gender in Children’s Faces
SPEAKER: unknown

ABSTRACT. The human face carries information about the individual’s identity, age, gender, race and emotional state; however, it is not always evident which facial properties provide the most information about each of these characteristics. Here we examine the ability of young adults to identify the gender of children aged 7-11 from colour photographs (hair covered), and we compare our psychophysical results to a principal components analysis (PCA) of the geometric properties of the same faces. In a two-interval forced-choice paradigm, participants chose either the male (N=40) or the female (N=40) face, where face pairs were matched for age. Findings revealed that this population can identify the gender of children at rates significantly better than chance; in general, performance was in the 75% correct range. A significant (p<0.001) effect of face age was seen in all test conditions (accuracy was worst with 11-year-old face photos). Surprisingly, participants choosing the “female” face performed significantly better than those choosing the “male” face from the same face pairs (p<0.001). The contribution of individual differences in faces, face geometry (from the PCA), gender of participants, amount of experience with children, and self-report of cues attended to will be considered.
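A PCA of geometric face measurements, as used here for comparison with the psychophysics, reduces to an SVD of the mean-centred data matrix. The sketch below uses synthetic numbers (80 faces, 20 measurements, one dominant shared direction); the real analysis used measured face geometry.

```python
import numpy as np

def pca(measurements):
    """PCA via SVD of the mean-centred data matrix (faces x geometric measures)."""
    X = measurements - measurements.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)   # fraction of variance per component
    return X @ Vt.T, Vt, explained

# Synthetic "geometry": 80 faces x 20 measurements with one dominant
# shared direction of variation (purely illustrative numbers)
rng = np.random.default_rng(6)
shared = rng.normal(size=(80, 1)) @ rng.normal(size=(1, 20))
geometry = shared + 0.3 * rng.normal(size=(80, 20))

component_scores, components, explained = pca(geometry)
```

Component scores for each face can then be correlated with behavioural measures such as gender-identification accuracy.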

15:00
Attentional capture by emotional distractor faces differs between adults’ and children’s facial expressions

ABSTRACT. Threat-related stimuli capture attention efficiently and automatically. Previous studies suggest that threat-related emotional faces capture attention independent of their task-relevancy (Hodsoll, Viding, & Lavie, 2011). Williams and Mattingley (2006) reported that angry male faces captured more attention than angry female faces, and argued that the potential for physical threat is critical for this male advantage. We therefore hypothesized that faces carrying a lesser threat attribution, such as children's faces, capture less attention. To test this hypothesis, we used both adults' and children's emotional faces as task-irrelevant distractors in a visual search task. In Experiment 1, participants were asked to detect an adult male face among adult female faces and to report its orientation. Three facial expressions (neutral, angry, and happy) were used for both target and distractor faces. Experiment 2 used the same task, except that children's faces served as stimuli. The results showed that angry faces were detected significantly faster than other emotional faces (Experiment 1); however, there was no corresponding effect for children's angry faces (Experiment 2). These results suggest that attentional capture by angry faces is based on their threat value: unlike adults' angry faces, non-threatening children's angry faces captured less attention.

15:00
Tracking Perceptual Uncertainty in Rapid Serial Visual Presentations
SPEAKER: unknown

ABSTRACT. Optimal multisensory integration (Ernst & Banks, 2002) and dyadic interaction (Bahrami et al., 2010) rely on computing the uncertainty of visual information. But this is at odds with decision-making models disregarding evidence reliability (Ratcliff & McKoon, 2009) and with data showing that variance is severely underweighted in confidence judgments (Zylberberg, Roelfsema & Sigman, 2014). Here, we asked whether subjects can track the reliability of serially presented visual information and how it guides choice and confidence. Participants (N=20) observed a sequence of 30 tilted Gabor patches in rapid serial visual presentation at 4 Hz. The orientation of the patches was drawn from uniform distributions with different variance. Subjects were instructed to report the grand-average tilt of the patches compared to the vertical meridian (clockwise or counter-clockwise) and to state their confidence (from 1 to 6). We observed that objective and subjective performance decreased with increasing variance. We explain these results using a learning model that updates both mean and variance, but with different rates. We propose rules for combining these quantities in order to reproduce the observed patterns of choice and confidence. Overall, our results provide novel insights into how uncertainty is tracked by the visual system and communicated as confidence.
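The model class described above (running estimates of mean and variance, updated by delta rules with different learning rates) can be sketched as follows. The rates, stimulus ranges and sequence lengths are illustrative assumptions, not the fitted values.

```python
import numpy as np

def track_stats(samples, alpha_mean=0.2, alpha_var=0.05):
    """Delta-rule tracking of mean and variance with separate learning rates
    (a sketch of the model class described; the rates here are illustrative)."""
    mu, var = 0.0, 1.0
    for x in samples:
        err = x - mu
        mu += alpha_mean * err              # fast update of the running mean
        var += alpha_var * (err**2 - var)   # slower update of the variance
    return mu, var

rng = np.random.default_rng(1)
# Two orientation streams drawn from uniform distributions of different variance,
# mimicking the low- and high-variance Gabor sequences (units: degrees of tilt)
low = rng.uniform(-5, 5, size=3000)
high = rng.uniform(-25, 25, size=3000)

mu_low, var_low = track_stats(low)
mu_high, var_high = track_stats(high)
```

The tracked variance can then feed a confidence rule (e.g. confidence decreasing with estimated variance), reproducing the drop in subjective performance for high-variance streams.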

15:00
An optimal stimulation method for SSVEP-based research and BCIs
SPEAKER: unknown

ABSTRACT. Recent studies, especially brain-machine interface (BMI) studies, often use the EEG component elicited by a flickering visual stimulus, the steady-state visually evoked potential (SSVEP). Nevertheless, the most suitable frequency, duty cycle, and stimulator are still incompletely understood. We investigated differences in SSVEPs elicited by different stimulators: a cathode ray tube (CRT), a liquid crystal display (LCD), and a VIEWpixx monitor. We recorded SSVEPs while presenting a flickering square at three frequencies (7.14 Hz, 10 Hz and 12.5 Hz) and two duty cycles (10% and 50%). SSVEPs elicited by stimuli on the CRT and the VIEWpixx showed a similar tendency: the SSVEP amplitude at the harmonics was significantly higher than at the fundamental frequency for the 50% duty cycle, except at 12.5 Hz. Only on the LCD was the fundamental-frequency power larger at the 10% duty cycle than at 50%. These differences underline the importance of the choice of stimulator and duty cycle for SSVEP research.
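The fundamental-versus-harmonic amplitudes compared here can be read off an amplitude spectrum of the EEG epoch. A minimal sketch with a synthetic 12.5 Hz signal (sampling rate, epoch length and amplitudes are illustrative, not the recording parameters of the study):

```python
import numpy as np

fs = 1000.0                     # sampling rate (Hz); illustrative
f0 = 12.5                       # flicker frequency, one of those used
t = np.arange(0, 4.0, 1 / fs)   # 4 s epoch -> 0.25 Hz frequency resolution

# Toy "EEG": energy at the fundamental and its second harmonic plus noise
rng = np.random.default_rng(2)
eeg = (2.0 * np.sin(2 * np.pi * f0 * t)
       + 1.0 * np.sin(2 * np.pi * 2 * f0 * t)
       + rng.normal(scale=0.5, size=t.size))

amps = np.abs(np.fft.rfft(eeg)) / t.size   # amplitude per bin (= A/2 for a sine)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return float(amps[np.argmin(np.abs(freqs - f))])

fundamental = amp_at(f0)
harmonic = amp_at(2 * f0)
```

Comparing `fundamental` and `harmonic` across stimulators and duty cycles is the kind of contrast reported in the abstract; with a 4 s epoch, 12.5 Hz and 25 Hz fall exactly on spectral bins.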

15:00
Recognising the same face in different contexts: Testing within-person face recognition in typical development and in autism
SPEAKER: unknown

ABSTRACT. Research on unfamiliar face recognition has focused almost exclusively on the ability to tell different people apart, rather than the ability to recognise the same face across varying images. Here, we investigated ‘within-person’ face recognition across development and in autism. In Experiment 1, 77 typically developing children aged 6-14 years and 15 adults were given 40 different photographs of two distinct male identities and asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities (median = 15). Younger children divided images into more identities than adults and also made more errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, 32 cognitively able children with autism reported a similar number of identities and made similar numbers of errors to 32 typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in the ratio of same-identity to different-identity matches. We suggest that the immature performance in typical children and children with autism arises from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity.

15:00
The influence of teeth-exposure on attentional bias to angry faces in the dot-probe task
SPEAKER: unknown

ABSTRACT. Dot-probe studies consistently show that high trait anxious individuals (in contrast to low trait anxious individuals) have an attentional bias towards threatening faces. However, little is known about the influence of perceptual confounds of specific emotional expressions on this effect. Teeth-exposure was recently recognized as an important factor in a closely related paradigm (the face-in-the-crowd paradigm), leading us to investigate the effect of exposed teeth on attentional bias to angry faces. Participants (N = 78) were asked to classify probe stimuli that were preceded by two simultaneously presented faces, one angry and the other neutral. The probe either appeared at the location of the angry face (valid condition) or the location of the neutral face (invalid condition). Half of the angry faces exposed their teeth; the remaining ones did not. Afterwards, participants completed the trait anxiety scale of the STAI. The teeth-exposure manipulation moderated the relationship between attentional bias and trait anxiety. For angry faces with non-exposed teeth, we found a positive correlation (r = .44) of trait anxiety with the attentional bias score (i.e., response times invalid minus response times valid). However, we found no influence of trait anxiety on attentional bias to angry faces with exposed teeth.
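The bias score defined above is simply mean invalid RT minus mean valid RT. A sketch with simulated data (all numbers are hypothetical, chosen only to mimic the reported positive correlation for non-exposed teeth):

```python
import numpy as np

def bias_score(rt_invalid, rt_valid):
    """Attentional bias score as defined in the abstract:
    mean RT for invalid trials minus mean RT for valid trials."""
    return float(np.mean(rt_invalid) - np.mean(rt_valid))

rng = np.random.default_rng(3)

# One hypothetical participant: slower when the probe replaces the neutral face
rt_valid = rng.normal(500, 30, size=40)     # probe at angry-face location (ms)
rt_invalid = rng.normal(520, 30, size=40)   # probe at neutral-face location (ms)
one_bias = bias_score(rt_invalid, rt_valid)

# Across 78 hypothetical participants, let the bias grow with trait anxiety,
# mimicking the reported correlation for faces with non-exposed teeth
anxiety = rng.normal(40, 10, size=78)
biases = 0.8 * (anxiety - 40) + rng.normal(0, 15, size=78)
r = float(np.corrcoef(anxiety, biases)[0, 1])
```

A positive `r` of roughly the reported magnitude emerges; in the exposed-teeth condition the abstract reports no such relationship.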

15:00
Computational analysis of visual complexity and aesthetic appraisal reflected in eye-tracking data
SPEAKER: unknown

ABSTRACT. We investigated whether subjective visual complexity (VC) and aesthetic pleasure (AP) of images are reflected in eye movement parameters. Participants (N=26; 13 females) explored greyscale car front images (N=50) while their eye movements were recorded. Following each image exposure (10 sec), image VC and AP were rated on 9-point scales. We found that fixation count positively correlated with VC, while AP correlated with dwelling time. The two eye movement parameters were best fit by linear functions and correlated negatively (r = -.507, p < .001). Subjective ratings of AP and VC were also related and followed an inverted U-shape function (cf. Berlyne, 1971) best fit by a quadratic equation: car fronts with perceived plain design (low complexity) and those with too elaborate design (high complexity) were judged as less aesthetically pleasant. The function peak was more pronounced for females, indicating their greater preference for medium-complexity images. On average, women also made more fixations on car images while dwelling for shorter times than men. The findings suggest that image visual complexity is reflected in fixation count, an objective eye-movement measure. VC affects image aesthetic appraisal, a relationship conjectured to be mediated by perceptual fluency (ease of processing) of an image (Orth & Wirtz, 2014).
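The inverted-U relation reported here corresponds to a quadratic fit of pleasure on complexity with a negative leading coefficient. A minimal sketch on synthetic ratings (peak location, noise and coefficients are illustrative, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical ratings: pleasure peaks at medium complexity (inverted U)
vc = np.linspace(1, 9, 50)                                   # visual complexity
ap = 7 - 0.35 * (vc - 5) ** 2 + rng.normal(0, 0.3, vc.size)  # aesthetic pleasure

# Quadratic best fit, as in the abstract's inverted-U model
c2, c1, c0 = np.polyfit(vc, ap, 2)
peak_vc = -c1 / (2 * c2)   # complexity at which predicted pleasure peaks
```

A negative `c2` confirms the inverted-U shape, and `peak_vc` gives the preferred medium complexity; the abstract reports that this peak was more pronounced for female participants.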

15:00
Frontal transcranial random noise stimulation improves the acquisition of verbal knowledge

ABSTRACT. Acquisition of verbal knowledge is a crucial skill, and people spend considerable amounts of time and energy learning new information. Brain areas involved in working memory can be stimulated to improve performance in various cognitive tasks, such as n-back and attentional tasks. However, it is still unknown whether the stimulation of these areas can improve the acquisition of verbal knowledge – encoding information into semantic memory. As working memory is supported by cortical areas in the frontal and parietal lobes, we determined which of these areas is directly involved in verbal knowledge acquisition. Transcranial random noise stimulation (tRNS) was used to answer this question. Participants carefully read a passage and were then required to write down anything they could remember about it. This memory task took place five minutes and seven days after the initial reading of the passage. Participants were stimulated using tRNS over the frontal or parietal lobes; a placebo group was also included. Frontal, but not parietal, stimulation substantially improved memory performance compared to the placebo condition. This effect was evident both immediately and in the long term. These results suggest that the frontal cortex contributes to the encoding of verbal information within semantic memory.

15:00
Uncomfortable images prevent lateral interactions in the cortex from providing a sparse code
SPEAKER: unknown

ABSTRACT. The visual system is optimised to process natural scenes with few active neurons (a sparse code). Visual discomfort is associated with a hyperneuronal response of the visual cortex. Previous work suggests that certain categories of uncomfortable/unnatural images produce large and less sparse activity in a bank of units with Gabor-like receptive fields (Hibbard & O’Hare, 2015). We develop a model of the visual cortex involving not only Gabor-like receptive fields but also lateral interactions, both excitatory and inhibitory (Penacchio, Otazu & Dempere-Marco, 2013). It is known that lateral interactions are central in producing a sparse code. We measured human ratings of discomfort for a wide range of images and found that the judgments are well predicted by the sparsity of the model network activity. The lateral interactions in the model increase the correlation with discomfort. Our findings suggest that an architecture including lateral interactions fails to produce a sparse representation of uncomfortable images. The greater neural activity is consistent with the hyperBOLD response to uncomfortable images.
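Sparseness of model activity can be quantified in several ways; the abstract does not name the measure used, so the sketch below assumes one common choice, Hoyer's (2004) index, applied to synthetic response vectors standing in for sparse (natural-image-like) versus dense (uncomfortable-image-like) codes:

```python
import numpy as np

def hoyer_sparseness(r):
    """Hoyer (2004) sparseness index: 1 for a one-hot vector, 0 for uniform.
    (An assumed measure; the abstract does not specify its sparsity metric.)"""
    n = r.size
    l1 = np.sum(np.abs(r))
    l2 = np.sqrt(np.sum(r ** 2))
    return float((np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1))

rng = np.random.default_rng(5)
n_units = 1000
# Heavy-tailed (Laplacian) responses stand in for a sparse code to natural
# images; Gaussian responses stand in for denser, less sparse activity
sparse_like = rng.laplace(scale=1.0, size=n_units)
dense_like = rng.normal(scale=1.0, size=n_units)

s_sparse = hoyer_sparseness(sparse_like)
s_dense = hoyer_sparseness(dense_like)
```

The heavier-tailed response distribution scores as sparser, illustrating the kind of population-activity statistic the abstract relates to discomfort ratings.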

15:00
Color induction in the face using eye shadows of desaturated colors
SPEAKER: unknown

ABSTRACT. Among makeup professionals, two different methods are used to improve skin complexion. It is still an open question how the color difference between facial and makeup colors affects the perceived complexion. The present study examined the effect of eye shadows on the complexion. Eight modestly saturated colors of almost equal physical saturation were used as eye shadows. Stimuli consisted of two different Japanese female faces, one slightly reddish, the other slightly yellowish. The lip color was fixed at a natural red. Twenty-three Japanese female students judged the redness of the reddish face and the yellowness of the yellowish face by a paired-comparison method; they also judged the lightness of the faces. The results are as follows: (1) some reddish colors with relatively greater positive a* (CIE L*a*b* redness) made the reddish face more reddish, and the yellowish color with relatively greater positive b* made the yellowish face more yellowish. (2) The face without any eye shadow was judged as the face with the highest lightness value (L*). Although the colors used as eye shadows were rather desaturated, assimilation effects of the complexion into the eye shadows were observed. The results are related to the echo illusion (Morikawa, 2012) in hue and chroma.

15:00
White-matter connectivity for learning of hierarchical structures
SPEAKER: unknown

ABSTRACT. Acquiring new skills (e.g. language, music) and navigating in novel environments involves extracting hierarchical structures, from simple repetitive patterns to more complex probabilistic combinations. However, little is known about the brain circuits that support this learning of hierarchical structures. Our previous fMRI work has implicated fronto-parietal regions in learning of simple repetitive structures and subcortical regions in contextual learning. Here we combine behavioural and Diffusion Tensor Imaging (DTI) measurements to test whether white-matter connectivity in these brain circuits changes with learning of hierarchical structures. Observers were trained with sequences that were determined by their frequency of occurrence, followed by sequences determined by their temporal context. We measured performance during training (i.e. learning rate) and fractional anisotropy (FA) – a DTI-derived estimate of white-matter connectivity – before and after training on these two sequence types. Regression analyses showed that for learning of simple repetitive structures, learning rate predicted increased FA in inferior parietal cortex. In contrast, for context-based sequences, learning rate predicted increased FA in the striatum (putamen). Our findings suggest that learning of hierarchical structures is implemented by brain connectivity changes in posterior parietal regions for frequency statistics and in subcortical regions for more complex probabilistic combinations.

15:00
Vita brevis, kitsch longae―When death is salient, kitsch appears less kitschy
SPEAKER: unknown

ABSTRACT. The word “kitsch” is used to dispraise something as nostalgic, overly simplistic and consoling. This judgmental term suggests that kitsch is liked because it helps people cope with uncertainty and negative emotions. This assumption was tested in an experiment (N=85) based on the mortality salience (MS) paradigm. Prior to rating 20 images of decorative and devotional objects in terms of liking and kitschiness, participants were randomly assigned to three different conditions: in two conditions participants were instructed to reflect on their own mortality, tested either solitarily (MS-solitary) or simultaneously in groups of three participants (MS-group). In the third condition, participants were seated in a wheelchair and asked to imagine how their life would change after a severe car accident (CA). Ratings from these three experimental conditions were compared with ratings of participants (N=50) who rated the same set of images in a neutral setting (NG). Liking did not vary across conditions. Kitsch ratings were lower in both MS conditions, but not in the CA condition. We see evidence that the salience of kitschiness is affected by the previous salience of mortality. One explanation is that existential threat makes kitsch more acceptable.

15:00
Facelikeness mediates individual-level discrimination for novel objects: Evidence from extensive training
SPEAKER: unknown

ABSTRACT. Visual expertise leads to fast and automatic individual-level recognition. However, the factors leading to expertise acquisition remain unknown. For instance, adults can learn to quickly and effortlessly name novel objects called Greebles (Gauthier & Tarr, 1997) but this finding may be attributable to the observation that Greebles have, at least at an abstract level, a typical facelike configuration of parts (two eyes above a nose, above a mouth). Here we tested whether the facelikeness of novel objects affects expertise acquisition. Two groups of 11 participants underwent extensive training (20 hours, 2 weeks) with a well-controlled set of novel objects (Willenbockel et al., 2014). The groups differed in whether the picture-plane orientation of the objects led to a facelike or nonfacelike configuration. Both groups reached the same level of performance on naming, verification and visual search tasks. However, the facelike group performed significantly faster in a matrix scanning task (Wong et al., 2009). This task placed a high demand on individual-level discrimination as it required fast and accurate scanning for changing target objects in an array of objects. These findings demonstrate an important role of facelikeness in shaping behavioural responses to novel objects despite the same amount of extensive training.

15:00
Help or hindrance: do facial expressions facilitate identification?
SPEAKER: unknown

ABSTRACT. Behavioural research suggests some degree of interdependence between the processing of facial identity and facial expressions, although this relationship remains unclear. We investigate this using an identity categorisation task that incorporates the extensive within-person variability represented by ambient photographic images. Following Jenkins, White, Van Montfort and Burton (2011), eighty-five participants sorted ambient facial images of 2 unfamiliar actors into piles, one for each separate perceived identity. Participants underwent two conditions (order counterbalanced across participants), one in which the images were expressive and one in which they were neutral. Expressiveness/neutrality was determined by prior experimentation. We used a measure that combined the number of identities into which images were sorted, along with the internal consistency of those identities. We found no evidence of a performance difference when comparing the neutral-first and expressive-first tasks, however there was a robust increase in performance on the neutral-second task – prior sorting of expressive faces confers a performance benefit. These findings suggest that successful identification of a newly-encountered face is substantially improved by viewing that face with a range of expressions, over and above the recognition rates arising from exposure to more canonical neutral expressions.

15:00
An encoding advantage for own-race faces in Taiwanese participants: A morphing face study
SPEAKER: unknown

ABSTRACT. The “other-race effect” (ORE) is a reliable, cross-cultural observation, but little is known as to the exact stage of visual processing at which the ORE may come into play. Using a morphing face technique, Walker and Tanaka (2003) reported a small but significant perceptual encoding advantage for own-race faces over other-race faces in Canadian Caucasians. Here we adopted a similar approach to explore the encoding advantage hypothesis in Taiwanese participants. The method of constant stimuli with swathes of morphed images of Caucasian and Asian male and female faces was used. In each trial, the participant viewed an Asian or Caucasian parent face followed by either the “same” face (0%) or a “different” morphed face (i.e., 10%, 20%, 30%, 40%, or 50% contribution from the other-race face) with equal probability. The results showed that Taiwanese participants (N=20) had a higher rejection rate in the Asian-parent condition than in the Caucasian-parent condition. Furthermore, when fitted with a 4-parameter sigmoidal function, the participants’ data exhibited a significantly lower discrimination threshold (22.76% for Asian vs. 44.22% for Caucasian) and a steeper slope (0.849 for Asian vs. 0.812 for Caucasian) in the Asian-parent condition. In sum, our results lend strong support to the encoding advantage hypothesis.
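Thresholds from a sigmoidal fit, as reported here, can be estimated even with a crude grid search. The sketch below fixes the floor and ceiling and fits only midpoint and slope to hypothetical response rates (the real study fitted a full 4-parameter function to the participants' data):

```python
import numpy as np

def logistic(x, x0, slope):
    """2-parameter logistic (floor 0 and ceiling 1 fixed for this sketch)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - x0)))

def fit_threshold(x, p):
    """Grid-search least-squares fit; returns the midpoint x0 (threshold)."""
    best_sse, best_x0 = np.inf, None
    for x0 in np.linspace(0.0, 60.0, 241):
        for slope in np.linspace(0.05, 1.0, 96):
            sse = float(np.sum((logistic(x, x0, slope) - p) ** 2))
            if sse < best_sse:
                best_sse, best_x0 = sse, x0
    return best_x0

morph = np.array([0, 10, 20, 30, 40, 50], dtype=float)  # % other-race morph
# Hypothetical "different" response rates: steeper rise and lower threshold
# for the own-race (Asian-parent) condition, shallower for the other-race one
p_own = np.array([0.03, 0.10, 0.48, 0.85, 0.96, 0.99])
p_other = np.array([0.03, 0.06, 0.15, 0.35, 0.55, 0.70])

threshold_own = fit_threshold(morph, p_own)
threshold_other = fit_threshold(morph, p_other)
```

The own-race condition yields the lower threshold, the qualitative pattern (22.76% vs 44.22%) reported in the abstract.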

15:00
Attentional control of positive and negative visual emotional distraction
SPEAKER: unknown

ABSTRACT. Emotional information attracts attention. This claim relies on inconsistent findings from studies that mostly use negative stimuli, tasks which target different stages of attentional selection (engagement, disengagement) and evoke different control strategies (information relevant vs. irrelevant). Here, we studied disruption of goal-relevant processing (i.e. capture) by negative and positive stimuli and mechanisms used to control this distraction. In Experiment 1, irrelevant negative, positive and neutral IAPS images, intact or scrambled, appeared peripherally on 50% of trials while participants identified central letters. Emotional images were more distracting than neutral ones but only when intact, supporting the role of content rather than low-level features. In Experiment 2, we manipulated the probability of distractor presentation (25% or 75%) to determine whether people can strategically control emotional distraction. A low distractor probability produced substantial emotional distraction. However, a high probability almost abolished neutral and emotional distraction, consistent with the use of a proactive control mechanism when frequent distractors are expected. Importantly, attenuation of distraction could not be attributed to the repetition of images. Proactive control seems equally effective at inhibiting emotional and neutral distractors, despite the biological relevance of the former. The different control strategies triggered by distractor probability may partly explain previous inconsistencies.

15:00
Influence of spatial frequencies on early stages of face processing: an event-related potentials study
SPEAKER: unknown

ABSTRACT. Face perception requires coarse configural information and finer featural information, transmitted respectively by low spatial frequencies (LSF) and high spatial frequencies (HSF). Results on the early visually evoked components P100 and N170 in response to LSF and HSF filtered images appear contradictory. In particular, the N170 amplitude has been found to be either larger or smaller for HSF compared to LSF face stimuli across studies. Discrepancies might stem from differing methodologies. The aim of this study was to investigate the time course of SF integration with controlled face stimuli and to provide information on how this information forms the facial representation. Face images, calibrated in luminance and contrast, were presented in four conditions (LSF (<8 cycles/face), middle-range SF (MSF, 8-16 cycles/face), HSF (>32 cycles/face), and non-filtered (NF)) during a gender discrimination task with ERP recording. Analysis showed that the P100 amplitude evoked by LSF stimuli was larger than that evoked by HSF stimuli. The N170 amplitude was smaller for HSF than for LSF stimuli. As expected, MSF and NF evoked similar amplitudes. In line with previous studies, LSF supporting configural information are processed first (P100) by the visual system, while HSF associated with featural information are of later relevance (N170). Results corroborate the coarse-to-fine hypothesis of SF integration.

15:00
Investigating visual stimuli processing under mortality salience on a microgenetic level
SPEAKER: unknown

ABSTRACT. The impact of processing death-related stimuli under mortality salience (MS) has been highlighted in Terror Management Theory (TMT): when MS is conscious, suppression of death thoughts has been shown, whereas death thoughts become highly accessible once MS has been removed from consciousness (distal defense). However, little research has examined visual stimulus processing under MS. We therefore constructed a change blindness task at a microgenetic level. Participants had to decide whether one of four different pictures had changed or not, with presentation times of 33, 67, 134 or 534 msec. Three types of change were implemented: no change, a neutral picture change, or a change from a neutral picture to a death-related picture. Before the task, participants were randomly assigned to either an MS treatment or a control topic, both followed by a delay task to investigate distal defense reactions. In accordance with TMT, we found evidence that MS-treated participants showed higher accessibility for death-related pictures at longer presentation times but, most importantly, when presentation times were very short, suppression of the perception of death-related pictures occurred. This finding is compatible with the idea that general perceptual processes under MS are also susceptible to a microgenetic development.

15:00
Varying curvature and angularity of architectural façades can influence aesthetic preferences
SPEAKER: unknown

ABSTRACT. Several studies have shown that people prefer curved stimuli to angular ones (shapes, objects, interior spaces; Bar and Neta, 2006, 2007; Vartanian et al., 2013). Our aim is to extend the study of the role of curves and angles in preference to architectural façades. We produced four different versions of the same building, varying its curvature (high and medium curvature; rectilinear and angular). Twenty-four female participants 1) made a forced preference choice between pairs of the stimuli; 2) evaluated each stimulus on 5-point Likert scales (liking, familiarity, complexity, stability, approach); 3) ranked all the stimuli from the most to the least preferred. Asymmetric multidimensional scaling on the forced choices showed that the highly curved façade was the most preferred, followed by the medium curved, angular and rectilinear ones. Multidimensional unfolding showed that the majority expressed higher preferences for the curved façades than for the angular and rectilinear ones. A repeated-measures ANOVA on the liking evaluations showed that the high-curvature façade was the most preferred, followed by the angular, medium curved and rectilinear ones. Our preliminary findings show that curvature influences preferences for architectural stimuli as well. Interestingly, the least preferred stimulus was not the angular but the rectilinear one.

15:00
On the genesis and processing of facial representations and prototypes
SPEAKER: unknown

ABSTRACT. Face researchers address a huge variety of perceptual and processing issues of faces while hardly addressing how exactly facial representations and prototypes are generated and on which experiences they are based. In the present study we investigated the process by which unfamiliar faces become familiar. Participants positioned 20 facial images of an unfamiliar individual, spanning a period of ∼60 years, on a checker-board with coordinates according to their similarity (in a between-participants design we employed 4 different unfamiliar persons, with one randomly assigned to each participant). Subsequently, participants had seven days to become familiar with these depictions through an online face-learning task. In a test-retest fashion, we repeated the initial positioning task one week later. Cluster analysis and multidimensional scaling revealed clear sub-prototypical clusters of the facial outward appearance of the persons, providing first hints that each developmental period of a person is reflected in genuine clustered prototypes, clearly opposing the idea of a single super-prototype spanning a person's entire life.

15:00
What you need is what you like - knowing target and distractor categories is sufficient for distractor devaluation
SPEAKER: unknown

ABSTRACT. Distractors are devalued compared to targets and novel items in search and localization tasks (Raymond, Fenske, & Tavassoli, 2003; Raymond, Fenske, & Westoby, 2005). This does not occur when distractors are expected to become relevant in a subsequent task (Dittrich & Klauer, 2012), suggesting that simply knowing the (ir)relevance of items is enough for devaluation to occur. To test this idea, we conducted a distractor devaluation task with evaluations added before, and also after, target localization. Halfway through the experiment we switched the roles of targets and distractors, in order to compare item-based evaluations in both roles. Results show that knowing an item's status as target or distractor suffices for distractor devaluation to arise. At the same time, committing errors in the localization task was also associated with devaluation, indicating that task performance has effects of its own. The effects of target status and erroneous performance both appeared to be restricted to the second run, which suggests that the role reversal of targets and distractors was required for both types of devaluation. Distractor devaluation persisted after the localization task ended, suggesting inflexibility in terminating devaluation once an item's irrelevance has ended.

15:00
Letters in the forest: global precedence effect disappears for letters but not for non-letters under reading-like conditions
SPEAKER: unknown

ABSTRACT. Normally skilled reading involves special processing strategies for letters, which are habitually funneled into an abstract letter code. Previous studies predict an analytic strategy for the processing of letters, while non-letters are preferentially processed via a holistic strategy. The global precedence effect (GPE) seems to contradict this assumption, since with compound hierarchical figures, including letter items, faster responses are observed to the global than to the local level of the figure, as well as an asymmetric interference effect from the global to the local level. We argue that for letters these effects depend on presentation conditions; only when those conditions elicit the processing strategies automatized for reading is an analytic strategy for letters, in contrast to non-letters, to be expected. We compared the GPE for letters and non-letters in central viewing, with the global stimulus size close to the functional visual field in whole-word reading (6.5° of visual angle) and local stimuli close to the critical size for fluent reading of individual letters (0.5° of visual angle). Under these conditions, the GPE remained robust for non-letters but disappeared for letters. We interpret these results as consistent with the view that reading is based on resident analytic visual processing strategies for letters.

15:00
Conversation, a risky business: Naturalistic conversation changes risk behaviour and loss perception
SPEAKER: unknown

ABSTRACT. We often hold conversations whilst performing other everyday activities. However, conversation can interfere with our ability to perform tasks that require our attention (e.g., Strayer, Cooper & Drews, 2004). In the current study we used the Columbia Card Task (Figner, Mackinlay, Wilkening & Weber, 2009) to assess the effect of conversation on risk taking behaviour and the perception of loss events. The number of cards turned over on each trial was used as a measure of the level of risk taking and we also recorded physiological measures (Electrodermal Activity). Whilst conversing, participants: i) made riskier decisions (turned over more cards), ii) were less likely to use information designed to assist them in their decision making, and iii) were significantly less likely to experience a physiological response to loss events, suggesting that conversation reduced the emotional impact of making a bad decision. The results are considered in terms of the influence of conversation on attention, perception and decision making, and the impact on real-world performance such as driving behaviour.

15:00
Developing perceptual scales to measure properties of aesthetic attributes
SPEAKER: unknown

ABSTRACT. The inherently subjective nature and semantic connotations of ‘beauty’ make aesthetic judgements notoriously difficult to capture experimentally in quantitative and objective measures. We present here the results of a set of experiments that use a simple computer-based procedure to rank the strength of a perceived quality, such as complexity, regularity, or symmetry, in a set of synthetically generated stimuli. These attribute variations were defined by simple mathematical rules, for instance by generating fractal patterns with varying degrees of complexity. In an iterative process, the stimulus with the smallest (or the largest) perceived strength of the particular property is selected by the participant and eliminated from the set for the next presentation: for a set of 5 stimuli a rank order is established in 4 simple decisions. This ranking is repeated 8 times with newly generated stimulus sets, yielding robust data that can be collected quickly and easily in large numbers from naive participants. We use this method to compare how synthetic patterns generated with different algorithms can be used to capture perceptual aesthetic attributes. With complexity, for instance, we find that fractal patterns generate more robust judgements than frequency-modulated (1/f) noise patterns.
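The iterative elimination procedure described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the function name and the simulated observer (whose perceived strength simply equals the stimulus value) are our own assumptions.

```python
def rank_by_elimination(stimuli, pick_weakest):
    """Return stimuli ordered weakest-first using n-1 sequential participant decisions."""
    remaining = list(stimuli)
    order = []
    while len(remaining) > 1:
        weakest = pick_weakest(remaining)  # one decision per presentation
        remaining.remove(weakest)          # eliminated from the next presentation
        order.append(weakest)
    order.extend(remaining)                # the final stimulus needs no decision
    return order

# Simulated observer: perceived complexity equals the stimulus value,
# so a set of 5 stimuli is ranked in 4 decisions.
ranked = rank_by_elimination([3, 1, 4, 5, 2], pick_weakest=min)
```

For 5 stimuli this yields a full rank order from only 4 choices, which is why the procedure is quick enough to repeat 8 times per participant.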

15:00
Distributed adaptation facilitates long-term face aftereffects
SPEAKER: Thomas Ditye

ABSTRACT. Adaptation is a neural mechanism supporting the optimisation of visual processing on the basis of previous experiences. The perceptual short-term effects of adaptation have been studied extensively; however, recent evidence from the study of long-term aftereffects also points towards a significant role of adaptation in learning about the visual properties of the world in the long run. Here we tested the effects of different adaptation protocols on face distortion aftereffects along multiple time scales. In all experiments, adaptation to a distorted image of a famous person biased participants' perception of the original face. These shifts were as large and as stable in a group of participants who followed a distributed adaptation protocol (10 x 1 min of adaptation within 1 h) as in a group who underwent blocked adaptation (1 x 10 min within 1 h). However, follow-up experiments revealed modulations of this pattern when the degree of visual interference during the interval between different parts of the experiments was varied (low versus high visual interference). These findings suggest that different neural mechanisms operate in perceptual short-term aftereffects as opposed to long-term adaptation, the latter being more closely associated with stable and ongoing adjustments of the sensory systems.

15:00
Pupil size is a sensitive indicator of motivation-driven modulation of arousal in the macaque in a visual discrimination task
SPEAKER: unknown

ABSTRACT. Pupil size under constant luminance is thought to reflect arousal in humans. To validate pupil size measurements as an indicator of arousal in the macaque we manipulated motivation (and hence arousal) by predictably varying reward size when one monkey was performing a visual discrimination task. Simultaneously his pupil size was monitored. The available reward was large after the monkey made correct choices on three consecutive trials and all following correct trials until the next error, when reward size was set back to small. Across 89 daily sessions we confirmed that the available reward size modulated the monkey’s motivation: his psychophysical threshold was lower (improved performance) on trials with large available reward (p < 10⁻³). To see if the pupil size was also modulated by motivation, we computed the trial-by-trial mean pupil size during the fixed duration stimulus presentation period. Statistically, the modulation of the pupil size was even stronger than that of performance: pupil size was larger on trials with large available reward than small available reward (p < 10⁻¹²). This effect was independent of stimulus position and choice. Therefore our results suggest that pupil size is more sensitive to motivation-driven modulation of arousal of the monkey than task performance.

15:00
Influence of facial skin movement and viewpoints on age perception
SPEAKER: unknown

ABSTRACT. In previous studies of facial age perception, age was evaluated from static frontal views of neutral, inexpressive faces. We conducted an experiment to find out whether facial motion affects perceived age group by comparing dynamic displays to static images. 80 Japanese females in their twenties to sixties were recruited as models, and their regulated facial movements (stretching vertically/horizontally, puffing out their cheeks, etc.) were used for stimulus movies. In addition, four simultaneous video recordings provided different viewpoints. In the dynamic condition, movies were presented as continuous changes in expression, whereas in the static control condition, static images were selected from each relevant movie of each model at the maximum intensity of facial expression. Participants observed each face in a trial and judged by 2AFC whether the face belonged to the first or second half of its age group (an age group being a 5-year step). The results indicated a difference in age estimation between dynamic and static faces: dynamic faces were perceived as older than static ones. Moreover, viewpoint dependence was also found in the estimated age groups.

15:00
Overlapping neural codes: Individual frontal voxels are more likely to be re-used if the encoded stimuli are more distinct
SPEAKER: unknown

ABSTRACT. In everyday life we need to select relevant information and ignore distraction. A circuit of frontal and parietal areas, including the anterior cingulate cortex (ACC), is believed to support this process by adjusting its responses to selectively process information that is currently relevant (Duncan, 2001). In different tasks, the same neurons may be “re-used” to code different information. Evidence from non-human primates suggests that the extent to which neurons can be re-used between tasks depends on the similarity of the stimuli (Cromer, Roy & Miller, 2010). To examine this in humans, we developed a variant of multivoxel pattern analysis for functional magnetic resonance imaging data. In two independent data sets, we established the multivoxel codes for different visual stimuli and then assessed the extent to which the same voxels were used in each code. We predicted a counter-intuitive increase in the extent to which voxels were re-used when the encoded stimuli were more different from one another. Indeed, in the ACC, a larger proportion of voxels were re-used to code dissimilar visual objects than to code different aspects of the same visual objects. This suggests that the flexibility of population coding depends on the demands of the task.

15:00
Influence of viewpoints on facial age perception with eye movement analysis during making a judgment
SPEAKER: unknown

ABSTRACT. The influence of viewpoint on facial age perception is largely unknown. In this study, we focused on the relationship between the viewpoint of a face and the eye movements made while judging age. 280 Japanese females in their twenties to sixties were recruited as models, and their three-dimensional facial shape data were obtained with a stereo camera system. Images of 7 viewpoints of the face, including frontal and various horizontal/vertical oblique views, were created as stimuli. An ABX paradigm was adopted as the task during gaze recording: two faces A and B were presented successively, and observers were asked to select as X the one of A and B that looked more aged. A frontal face was always displayed as A/B, paired with a face of another model as B/A in one of the angled views (including the frontal view). ROI analysis of the eye movements indicated that observers looked more intensely at the eye and mouth areas of the frontal face, whereas gaze tended to be dispersed over other areas of the same person's face in angled views. In addition, the effects of eye movement and of evaluated age varied across age groups.

15:00
Eye movement strategies are not optimal: people simply employ reasonable but idiosyncratic search strategies
SPEAKER: unknown

ABSTRACT. The ideal searcher framework postulates that during visual search the observer directs attention so as to maximise knowledge of a target’s location, taking retinal sensitivity into account. As the human visual field maintains high resolution further horizontally than vertically, one implication is that humans are likely to make many short eye movements in an approximately horizontal direction and then an occasional vertical jump. This behaviour has been observed previously in a visual search task by Najemnik and Geisler (2005, 2008). We tested this hypothesis by repeating their search task and, as their data were based on two participants, we increased the number of participants in order to assess whether this behaviour is a general strategy. We took a novel analytical approach using Hidden Markov Models, which allow the identification of clusters (i.e., short horizontal saccades and long vertical saccades) in the data, as well as the transitions between clusters. However, the two expected clusters could not be identified from the full dataset. Rather, we found that strategies vary across participants. Based on these results, we theorise that priming may have influenced the predominance of particular eye movements, and we present some initial results investigating this hypothesis.
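As a simplified stand-in for the authors' Hidden Markov Model analysis (which is not given in the abstract), the clusters-and-transitions idea can be illustrated by labelling each saccade as horizontally or vertically dominated and tallying transitions between the two labels — the quantities an HMM would estimate as states and transition probabilities. All function names and the toy saccade vectors below are our own assumptions.

```python
def label_saccade(dx, dy):
    """Classify a saccade vector by whether it is closer to horizontal or vertical."""
    return 'h' if abs(dx) >= abs(dy) else 'v'

def transition_counts(saccades):
    """Label each saccade and count transitions between consecutive labels."""
    labels = [label_saccade(dx, dy) for dx, dy in saccades]
    counts = {('h', 'h'): 0, ('h', 'v'): 0, ('v', 'h'): 0, ('v', 'v'): 0}
    for a, b in zip(labels, labels[1:]):
        counts[(a, b)] += 1
    return labels, counts

# Toy sequence: mostly horizontal saccades with one vertical jump.
labels, counts = transition_counts([(2, 0), (3, 1), (0, 4), (1, 0), (2, 1)])
```

The ideal-searcher prediction corresponds to a high `('h', 'h')` count with rare excursions through `'v'`; a real HMM additionally models the saccade amplitudes within each state as emission distributions.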

15:00
Angry faces do not have privileged access to awareness: Evidence from the attentional blink paradigm

ABSTRACT. When two targets are embedded in a rapid sequence of distractors, identification of the first target (T1) disrupts processing of the second target (T2) when it is presented within about 500 ms after T1, a phenomenon known as the attentional blink (AB). Interestingly, with schematic faces, Maratos, Mogg and Bradley (2008) found that, compared with neutral or happy faces, angry faces presented as T2 were less affected by the AB, suggesting a priority for angry faces over neutral or happy ones. Here, we examined the magnitude of the AB using photos of animal faces as T1 and photos of angry, happy and neutral faces as T2. Distractor stimuli were photos of neutral upside-down faces. Note that we used toothy and non-toothy happy and angry faces: toothiness of the emotional faces was a between-subjects factor. Results showed that, in the non-toothy condition, the AB was larger when T2 was an angry face than when it was a happy face. In the toothy condition, there was no difference in AB between angry and happy faces. Oddly enough, there was little, if any, AB for neutral faces in either condition. Taken together, these findings suggest that, contrary to the results with schematic faces, angry faces do not have privileged access to awareness.

15:00
Adaptation to Perceived and Imagined Facial Gender
SPEAKER: unknown

ABSTRACT. Perceptual adaptation studies show that prolonged exposure to a face (adaptor) typically results in a contrastive face aftereffect (FAE) where a subsequently presented face appears less like the adaptor. As a process recruiting perceptual brain areas, visual imagery is expected to generate similar FAEs. Recent studies of imagery adaptation to facial gender have however yielded inconsistent results. While some experiments report contrastive FAEs, other experiments report no effect or atypical (i.e. non-contrastive) FAEs. In a recent study, D’Ascenzo et al. (2014) observed atypical FAEs, in which androgynous faces appeared more feminine after imagining female faces of recently familiarised strangers than after imagining male faces. Our study aims to replicate this observation and to investigate the effect of familiarity on adaptation to perceived and imagined facial gender. We found stronger adaptation for celebrities than unfamiliar faces in both perception and imagery tasks, and no evidence of atypical FAEs for imagery. These findings suggest that familiarity may have a modulatory effect on adaptation to perceived and imagined facial gender.

15:00
Active and passive exploration of faces
SPEAKER: unknown

ABSTRACT. In most face recognition studies, learned faces are shown without a visible body to passive participants. Here, faces were attached to a body and participants were either actively or passively viewing them before their recognition performance was tested. 3D-laser scans of real faces were integrated onto sitting or standing full-bodied avatars placed in a virtual room. In the ‘active’ learning condition, participants viewed the virtual environment through a head-mounted display. Their head position was tracked to allow them to walk physically from one avatar to the next and to move their heads to look up or down to the standing or sitting avatars. In the ‘passive dynamic’ condition, participants saw a rendering of the visual explorations of the first group. In the ‘passive static’ condition, participants saw static screenshots of the upper bodies in the room. Face orientation congruency (up versus down) was manipulated at test. Faces were recognized more accurately when viewed in a familiar orientation for all learning conditions. While active viewing in general improved performance as compared to viewing static faces, passive observers and active observers - who received the same visual information - performed similarly, despite the absence of volitional movements for the passive dynamic observers.

15:00
Attending redundant information increases the precision of visual working memory for complex stimuli
SPEAKER: unknown

ABSTRACT. Recent evidence suggests visual working memory (VWM) capacity is inversely related to object complexity (Alvarez & Cavanagh, 2004). In this study, we compared memory for objects comprising one or two luminance gratings. Stimuli were circularly vignetted sinusoids comprising high (3.2 CPD), low (0.8 CPD), or both spatial frequencies. Trials contained a test and a probe stimulus with an inter-stimulus interval of 2000 ms. In Experiment 1, test and probe stimuli were identical except for a change in orientation. In Experiment 2, test stimuli contained two gratings, and probe stimuli could contain one or two gratings at different orientations. Observers matched the orientation of the probe to the remembered test stimulus, and a mixture model was used to estimate the best-fitting Gaussian and uniform components of the observed errors (Bays, Catalao & Husain, 2009). In Experiment 1, estimates of precision for objects containing one or two gratings were equivalent. In Experiment 2, precision was greater for probes containing two gratings than for those containing a single grating. The results indicate that redundant sources of information can increase the precision of VWM for complex objects. Importantly, this benefit was obtained only when observers were required to attend to both gratings in the test stimulus to complete the task.
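The mixture-model analysis cited above (Bays, Catalao & Husain, 2009) can be sketched as a maximum-likelihood fit of a Gaussian-plus-uniform mixture to the orientation errors: the Gaussian component captures remembered trials (its width is the inverse of precision) and the uniform component captures random guesses. This is a minimal illustration under our own assumptions — simulated data, a simple non-circular Gaussian, and illustrative parameter values — not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_mixture(errors, range_deg=180.0):
    """MLE fit of (sigma in deg, guess rate) to orientation report errors."""
    def nll(params):
        sigma, guess = params
        gauss = np.exp(-0.5 * (errors / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        unif = 1.0 / range_deg                      # uniform over the +/-90 deg range
        like = (1 - guess) * gauss + guess * unif   # mixture likelihood per trial
        return -np.sum(np.log(like + 1e-12))
    res = minimize(nll, x0=[15.0, 0.1], bounds=[(1.0, 90.0), (0.0, 0.99)])
    return res.x  # (sigma_hat, guess_hat)

# Simulate 2000 trials: 20% guesses, remembered trials with sigma = 10 deg.
rng = np.random.default_rng(0)
n, true_sigma, true_guess = 2000, 10.0, 0.2
is_guess = rng.random(n) < true_guess
errors = np.where(is_guess,
                  rng.uniform(-90, 90, size=n),
                  rng.normal(0, true_sigma, size=n))
sigma_hat, guess_hat = fit_mixture(errors)
```

With enough trials, the fit recovers the generating parameters; in the study, a lower `sigma_hat` for two-grating probes is what "greater precision" refers to.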

15:00
The visual information driving familiarity and identity judgements from faces.
SPEAKER: unknown

ABSTRACT. The assessment of another as familiar and the ability to establish their unique identity are the two components that underlie our remarkable face-recognition abilities. A considerable body of work has explored the neural underpinnings of these abilities and debate remains regarding whether they are dissociable, i.e., the separate constituents of a dual process, or rather constitute different aspects of the same retrieval process. Even less is known about the specific visual information that is used to determine the familiarity of a face and/or to identify it by name. Here we sought to establish the critical information underlying participants’ judgments of facial familiarity and identification. To this end we created a new standardised stimulus set comprising 6 personally familiar faces (3 male) and 12 unfamiliar faces from members of the teaching and research staff at Birkbeck College. We then applied the Bubbles reverse-correlation methodology to establish the information driving correct performance in each task. 29 final year Birkbeck students participated in the study. Results indicated that markedly different information underlies familiarity and identity judgements, with familiarity driven by lower spatial frequency broad facial cues (eyes, mouth and face shape), whereas identity decisions rely on fine details in the eyes and mouth.

15:00
Oculomotor inhibition and the preference rating of abstract visual patterns
SPEAKER: unknown

ABSTRACT. Do different levels of oculomotor inhibition, induced using the minimally delayed oculomotor response (MDOR) task (Wolohan & Knox, 2014), alter the values participants attach to symmetrical and random abstract patterns? Participants completed 240 trials in which, after a randomised fixation time (0.5-1.5 s), a pro-saccade target was displayed for either 200 ms or 1000 ms (display time, DT; target 5° left/right, randomised). Participants were instructed to saccade to the target position at target offset, prior to which a 3.5° square random or symmetrical abstract pattern was displayed for 140 ms at fixation. After each trial participants rated the pattern from 1 (“not liked very much”) to 9 (“liked very much”). Saccade latency was modulated by DT (200 ms: 449±96 ms, mean±SD, indicating high levels of oculomotor inhibition; 1000 ms: 315±62 ms). A repeated-measures ANOVA on the pattern ratings (DT vs. pattern type, within subjects) showed significant main effects of DT (F(2,13) = 3.781, p = 0.035, ηp² = 0.21) and pattern type (F(2,13) = 16.798, p < 0.001, ηp² = 0.55), with no interaction between factors (p = .9, ηp² = .004), suggesting that high levels of inhibition depressed pattern ratings. These results are consistent with a linkage between affective responses and (at least) oculomotor inhibition.

15:00
Preconscious processing of facial attractiveness under continuous flash suppression
SPEAKER: unknown

ABSTRACT. Facial attractiveness is an important biological and social signal in social interaction. Although recent research has provided ample evidence regarding spontaneous appraisal of facial attractiveness, it is still unclear whether evaluation of facial attractiveness is restricted to conscious appraisal. To test the possibility that facial attractiveness is processed even at a preconscious level, we used a continuous flash suppression (CFS) paradigm in which monocularly viewed stimuli are erased from visual awareness because of a continuous flash-masking image presented to the other eye. We presented faces to each participant’s non-dominant eye while presenting a continuous flash to his or her dominant eye. The faces were attractive, neutral, or unattractive. We measured (i) the time a face needed to overcome CFS and emerge into awareness and (ii) the duration for which a face was being consciously perceived. Results revealed that attractive faces broke CFS faster than unattractive faces, and attractive faces were consciously perceived longer than unattractive faces. These results suggest that evaluation of facial attractiveness occurs even at a preconscious level, and attractive faces shorten suppression. Further, attractive faces captured greater attention and prolonged the duration of awareness of faces at a post-conscious level.

15:00
Not only excitation but also inhibitory processing is carried over into the subsequent task
SPEAKER: unknown

ABSTRACT. It is known that prior mental activity crucially affects performance on a subsequent task: the carry-over effect (e.g., Hine & Itoh, 2014). There are two accounts of carry-over effects. First, excitation of an appropriate process in a prior task could lead to a greater propensity to use that process in the subsequent task. Second, inhibition of an inappropriate process could likewise carry over. Most previous studies focus on the excitation of an appropriate process. Here, to investigate the possibility that inhibitory processing is carried over into a subsequent task, we conducted an experiment in which participants performed a global or local Navon task (Navon, 1977) and then took a Stroop task as an indicator of inhibitory processing. The global Navon task requires reading the large letter in a Navon figure, which is a large letter composed of small letters. The local Navon task, in contrast, requires reading the small letters and therefore demands inhibitory processing. Our results showed that accuracy in the Stroop task was higher after the local Navon task than after the global Navon task. This indicates that inhibition of inappropriate processing is carried over into the subsequent task.

15:00
Variations in implicit social learning in the typically-developed population.
SPEAKER: unknown

ABSTRACT. The ability to implicitly or spontaneously learn about others’ pro- or antisocial dispositions on the basis of a multitude of social cues is crucial for effective social interaction. The aim of the current study was to investigate variations in implicit social learning abilities in typically-developed individuals with few or many autistic traits, as assessed by Autism Quotient (AQ) questionnaire. In the learning phase, participants repeatedly observed two different identities whose gaze direction and facial expression were manipulated to convey either a pro- or anti-social disposition towards the observer. These dispositions were determined by specific cue contingencies, of which participants were crucially not aware (as confirmed in the debrief). In the test phase, participants showed specific biases in their perceptual report of morphs of the two identities, suggesting that others’ dispositions can be learned implicitly, that is, without awareness of the cue contingencies. Importantly, this ability was correlated with AQ scores; participants with higher AQ scores showed significantly less implicit social learning. Future research will examine whether individuals with autism are even more impaired in implicit social learning ability, which may explain their difficulties in adjusting behavior to social demands, and whether it is limited to the social domain.

15:00
Dynamic Facial Expression Recognition in Low Emotional Intensity and Shuffled Sequences
SPEAKER: unknown

ABSTRACT. Several studies have focused on the dynamic aspects of facial expression recognition. However, the effects of properties of dynamic facial expressions such as low emotional intensity and presentation in shuffled sequences remain unclear. In this study, using naturally unfolding dynamic facial expressions produced by a real expresser, we created one condition with regularly ordered sequences depicting dynamic changes from the onset of the expression to its peak, and another containing only low emotional intensity (onset to 40%), and then shuffled these sequences. We asked participants to identify the emotion in each sequence and measured recognition accuracy and response times. The findings show that although recognition accuracy was slightly low for the expression of fear, overall accuracy was significantly above chance even when the sequences were composed of shuffled frames. Additionally, shuffling had no effect on response times, but longer response times were observed for negative facial expressions with low emotional intensity. These results suggest that humans are sensitive to dynamic facial expressions with low emotional intensity and subtle changes in facial movements, and that an ordered temporal flow is not required for dynamic facial expression recognition.

15:00
Testing the effects of the familiarity and symmetry in facial attractiveness
SPEAKER: unknown

ABSTRACT. In two experiments we investigated the effects of symmetry (a perceptual factor) and familiarity (a memory factor) on facial attractiveness. From photographs of original, slightly asymmetric faces, symmetric left-left (LL) and right-right (RR) versions were generated. Familiarity was induced in the learning block through repeated presentation of twelve original (asymmetric) faces. In the test block of Experiment 1, 14 participants rated the attractiveness of twelve familiar faces (previously presented in the learning block), twelve novel original faces, and the LL and RR versions of all faces (24 × 3 in total). The same procedure was repeated in Experiment 2: twenty-eight participants were presented with the same faces, but the faces in the learning and test blocks were altered with respect to Experiment 1. Analysis of variance in both experiments revealed main effects of symmetry and a symmetry × familiarity interaction. Post hoc tests indicated that the original asymmetric faces were the most attractive in both experiments and in both familiarity conditions. These results suggest that facial attractiveness is not positively associated with symmetry, but rather with more "natural" slight asymmetry. Additional analyses showed that RR symmetric versions are more attractive for familiar faces, while LL versions are more attractive for unfamiliar faces.

15:00
What personal factors lead to individual differences in categorizing facial expressions of emotion?
SPEAKER: unknown

ABSTRACT. Individuals vary in perceptual accuracy when categorizing facial expressions, yet it is unclear how these individual differences are related to the cognitive processing stages of facial information selection, acquisition and interpretation. By presenting face images displaying the six basic facial expressions of emotion at different intensities, we measured expression categorization performance in 104 healthy adults. Categorization accuracy was then correlated with information selection (gaze allocation to diagnostic local facial regions) and interpretation abilities (personal traits assessed with the Autism Quotient, an anxiety inventory, and a self-monitoring scale). The observers' gaze allocation had a clear impact on categorization accuracy for some expressions displayed at medium/high intensities. Specifically, longer gaze at the eyes or nose region was coupled with more accurate categorization of happy/disgust/surprise or sad expressions, respectively. Regarding personal traits, a higher anxiety level was associated with greater categorization accuracy across all expressions, whereas a higher autistic score was coupled with better recognition of sad but worse recognition of angry expressions. Furthermore, an individual's anxiety level was positively correlated with the amount of gaze at the nose region for all expressions except happy. The results suggest that both facial information selection and interpretation capabilities contribute to individual differences in expression categorization within non-clinical populations.

15:00
Internet Based Measurement of Visual Expertise in Radiological Skill
SPEAKER: unknown

ABSTRACT. The correct identification and diagnosis of abnormalities from radiographs is one of the best examples of real-world expertise in visual tasks. Accordingly, understanding and measuring the development of this skill has attracted great interest from visual perception researchers and radiology instructors alike. However, significant challenges remain in collecting behavioural data from trainees and experts who are dispersed geographically and whose availability is limited. We therefore developed a web-based task to measure visual diagnostic skill. Participants viewed a selection of pre-assessed radiographs and were required to identify and localise skeletal abnormalities. Forty-two final-year medical students at the University of Sheffield and 12 consultant paediatric radiologists from across Europe completed the two-stage task. As expected, consultants were significantly more accurate at identifying abnormalities than medical students, and their localisation of abnormalities was significantly more precise. However, in contrast to previous research, we found that the experts took longer over the task than the novices. The results validate the use of a web-based platform for studies of visual cognition expertise in a real-world domain. Future work will add eye-tracking measures to the behavioural task, while the ease of data collection will allow both longitudinal and large-scale behavioural datasets to be collected.

15:00
Investigating the relationship between human-likeness and eeriness for prosthetic hands
SPEAKER: unknown

ABSTRACT. In 1970, Mori hypothesised the existence of an ‘uncanny valley’, whereby stimuli falling short of being fully human are found to be creepy or eerie. Previously we demonstrated that more human-like artificial hands are rated as more eerie than clearly mechanical or real hands (Poliakoff et al., 2013). Here, we compared eeriness ratings (N=40 participants) for prosthetic hands pre-selected as more or less human-like, as well as mechanical and real hands. The less realistic prosthetic hands were rated as more eerie (mean = 6.59) than the more realistic prosthetic hands (4.90), the mechanical hands (4.49) and the real hands (1.23). In addition, the orientation of the hands (first person vs. third person) did not significantly affect the ratings. Thus, the notion of an uncanny valley (or peak of eeriness) was supported for the less realistic prosthetic hands, but the more realistic prosthetic hands were not uniformly found to be eerie. Indeed, the ratings of more realistic prosthetic hands varied more between individuals than the other categories, suggesting that individual differences in responses to prosthetic limbs, including familiarity, would be a fruitful avenue to investigate. These findings have implications for the design of more realistic or acceptable prosthetic hands.

15:00
Slots or resources? It depends on the type of visual memory.
SPEAKER: unknown

ABSTRACT. Is short-term memory essentially based on slots or resources? The slot model claims that there are a limited number of slots with a fixed precision, while the resource model posits that resources can be distributed gradually, causing a trade-off between capacity and precision. The current research investigated this question with a change detection task using oriented bars. To evaluate the precision of the memory representation, we varied the size of the change (30°, 60°, or 90°). Furthermore, we precued a location to evoke different attentional strategies. We focused on two types of short-term visual memory: Fragile Memory (FM) and Visual Working Memory (WM). FM is the visual memory available before visual interference, and WM is the memory available after visual interference. It is generally found that FM has a much larger capacity than WM. Interestingly, we found that for FM, precision at the invalidly precued and uncued locations was similar. In contrast, for WM, precision at the invalidly precued location was lower than precision at the uncued locations. This suggests that the slots-versus-resources debate may not have a single answer: FM may be based on slots with a fixed precision, while WM may be based on limited resources.

15:00
Attention! Now That I’ve Got Your Attention Let Me Sway Your Judgement: Irrelevant, Salient Stimuli and Extreme Outliers Affect Decisions On Value
SPEAKER: unknown

ABSTRACT. We often have to make decisions on the basis of multiple sources of information. Previous work has found that people are able to accurately integrate values presented in Rapid Serial Visual Presentation (RSVP) streams to make an informed judgement of the overall value of the stream (Tsetsos, Chater & Usher, 2012). In this study we investigated whether people's value judgements can be influenced by salience-driven attentional processes. Experiments 1 and 2 examined whether the presentation of irrelevant, salient red items in a stream influenced the accuracy of the perceived value of the stream. The results showed that an irrelevant high- or low-value red item led people to judge the stream as having a higher or lower overall value, respectively, compared to when the red item was absent. Experiments 3 and 4 showed that extreme outliers presented in the RSVP stream captured attention automatically, leading to less accurate report of subsequent items in the stream. Taken together, the results show that people's valuations can be swayed by salient items and that outlier items automatically capture attention, leading to over-weighting of extreme values and less accurate judgements of value.

15:00
Impaired configural processing for other-race faces revealed by a Thatcher illusion paradigm
SPEAKER: unknown

ABSTRACT. Thompson (1980) described the Thatcher illusion, in which participants perceive upright faces with inverted eyes and mouths as grotesque but fail to do so when the faces are inverted, presumably as a result of disrupted configural processing in inverted faces. Furthermore, other-race faces are processed less configurally (more featurally) than own-race faces (e.g., Meissner & Brigham, 2001), and Thatcherisation and inversion lead to stronger impairments in configural processing of own- compared to other-race faces (e.g., Hahn et al., 2011). The present study tried to identify differences in processing own- and other-race faces (thatcherised or not), not only in upright or inverted positions but also at intermediate rotation steps of 30°, following Carbon et al. (2007). Data from 20 participants showed a characteristic sudden increase in reaction times on correct trials for own-race faces once they were rotated by more than 90° (impairing configural processing), whereas there was no such difference for other-race faces. Accuracy was worst for thatcherised other-race faces, with a steep decrease starting between 30° and 60° of rotation. Between 150° and 180°, performance increased slightly, a minor effect Carbon et al. (2007) had already observed in prosopagnosics (non-experts in face processing). Overall, the results argue for non-expertise-based, probably featural, processing of other-race faces.

15:00
Do Great Apes also Prefer Curved Visual Objects?
SPEAKER: unknown

ABSTRACT. Several experiments have demonstrated a human preference for curved over sharp contours. We think that this kind of preference, and similar ones, might have partially given rise to the emergence of aesthetics as a phylogenetic trait. However, this does not necessarily mean that this preference is shared with other species. Given that nonhuman primates also exhibit visual preferences, it is conceivable that humans' preference for curved contours is grounded in perceptual and cognitive mechanisms shared with extant nonhuman primate species. The main aim of the present study was to test this possibility by comparing, under similar conditions, humans' and great apes' preferences for curved and sharp contours. Our results revealed that both humans and apes showed a preference for round (as opposed to sharp) objects, albeit under different presentation conditions. In particular, humans preferred round objects under brief presentation conditions but not under free viewing time conditions. In contrast, apes showed the reverse pattern, preferring curved objects under free viewing conditions but not under brief presentation conditions. Thus, our results cannot refute the possibility that such preferences evolved independently and converged on a common adaptive solution.

15:00
Threatening stimuli do not narrow attentional scope
SPEAKER: Yue Yue

ABSTRACT. The present study, using the flanker paradigm, explored whether threatening targets would narrow attentional scope. A flanker compatibility effect was expected solely for non-threatening targets, since threat had been found to narrow attentional scope and interfere with flanker processing (Fenske & Eastwood, 2003). We employed photographs of felines and canines, either threatening or not. The target images were presented as singletons or with flankers that were compatible or incompatible with the target in either valence (threatening, non-threatening) or animal category. Participants performed two tasks, categorizing whether the target was (1) a 'cat' or a 'dog' and (2) threatening or non-threatening. Significant results in processing speed were found only for the animal classification task. (i) A consistent target valence effect emerged: threatening targets slowed down performance. (ii) Regardless of target valence, a facilitating flanker (compatibility) effect was found, but only when the target and the flankers were identical images. This latter finding implies that perceptual features of the stimuli, rather than their emotional content, affect target processing speed (cf. Horstmann, Borgstedt, & Heumann, 2006). Overall, the present findings do not support the hypothesis that attentional scope is narrowed by exposure to threatening targets, and indicate that task requirements considerably influence performance.

16:00-17:45 Session 9A: The changing visual system: development and ageing

.

Location: B
16:00
Aging and Perception

ABSTRACT. The developed world is aging faster than ever before. Even in the absence of neurodegenerative disease, aging affects all kinds of human functions, including perception and cognition. In most perceptual studies, one paradigm is tested, and it is usually found that older participants perform worse than younger participants. Implicitly, these results are taken as evidence that there is one aging factor per individual determining his/her overall performance level. Here, we show that visual and cognitive functions age differently. We tested 131 older participants (mean age 70 years) and 108 younger participants (mean age 22 years) on 14 perceptual tests (including motion perception, contrast and orientation sensitivity, and biological motion perception) and 3 cognitive tasks (WCST, verbal fluency and digit span). Young participants performed better than older participants in almost all of the tests. However, within the older group, age did not predict performance; i.e., a participant could have good results in biological motion perception but poor results in orientation discrimination. It seems that there is not a single "aging" factor but many.

16:15
Tracking developmental shifts in facial expression processing strategies
SPEAKER: Louise Ewing

ABSTRACT. Children are widely accepted to process faces and facial expressions of emotion differently from adults, with adult-like processing expertise continuing to develop into early adulthood. Few studies, however, have explored the manner in which children succeed in their face categorizations, and in particular the processing strategies they implement. Here we investigated the development of processing strategies for the categorization of emotional expressions with a large developmental sample: 65 young children (aged 5-8), 65 middle-aged children (aged 9-10), 52 older children (aged 11-13) and 20 adults. Across experimental trials we generated subsampled versions of expressive faces (fear, sadness, happiness, anger) by randomly sampling information from the images (across different spatial frequency bands and different locations in each image) using the Bubbles paradigm (Gosselin & Schyns, 2002). Results reveal clear, age-related shifts in the use of visual information during expression categorization, which differ across the four emotions. Even the youngest children are adult-like in the way they selectively extract the critical information for happiness judgments, but processing strategies for fear and sadness are refined across development. Children's performance with angry faces was relatively poor, but where participants were successful, they relied on similar information to the adults.

16:30
Development of the other-race effect in school-age Taiwanese children: Using a morphing face paradigm

ABSTRACT. Previous studies on the other-race effect (ORE) in school-age children have mostly focused on recognition memory tests. Here we explored the encoding advantage hypothesis (Walker & Tanaka, 2003) in school-age Taiwanese children using a near-threshold face matching task. A total of 102 5- to 12-year-old children and 22 adults were tested with a sequential face matching task. The method of constant stimuli was used, with swathes of morphed images of Caucasian and Asian female faces. In each trial, the participant viewed an Asian or Caucasian parent face followed, with equal probability, by either the "same" face (0%) or a "different" morphed face (i.e., 15%, 30%, 45%, or 60% contribution from the other parent face). The psychometric functions for rejection rates in the Asian- and Caucasian-parent conditions were fitted with a sigmoidal function separately. The adults exhibited a smaller discrimination threshold and a sharper slope in the Asian condition, supporting the encoding advantage hypothesis. The younger children (aged 5-8) did not yet exhibit an encoding advantage for own-race faces; the advantage appeared to emerge around ages 9-10 and became apparent around 11-12. In sum, school-age children made steady progress in discriminating own-race faces while their ability to discriminate other-race faces remained relatively unchanged.
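The kind of analysis this abstract describes can be sketched in a few lines: fit a sigmoidal (here logistic) psychometric function to "different" responses across the morph levels and read off a discrimination threshold and slope. The data values and the simple grid-search fit below are illustrative assumptions, not the authors' data or method.

```python
# Hypothetical sketch: fit a logistic psychometric function to
# "different"-response rates at the morph levels from the abstract
# (0%, 15%, 30%, 45%, 60%). All data values are made up.
import math

def logistic(x, threshold, slope):
    """Predicted rejection ("different") rate at morph level x (%)."""
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

def fit_logistic(levels, rates):
    """Brute-force least-squares grid search; returns (threshold, slope)."""
    best_t, best_s, best_err = 0.0, 0.0, float("inf")
    for t10 in range(0, 601):          # threshold 0.0 .. 60.0 in 0.1 steps
        for s100 in range(1, 101):     # slope 0.01 .. 1.00
            t, s = t10 / 10.0, s100 / 100.0
            err = sum((logistic(x, t, s) - r) ** 2
                      for x, r in zip(levels, rates))
            if err < best_err:
                best_t, best_s, best_err = t, s, err
    return best_t, best_s

levels = [0, 15, 30, 45, 60]
rates = [0.05, 0.20, 0.55, 0.85, 0.95]   # illustrative rejection rates
threshold, slope = fit_logistic(levels, rates)
```

A smaller fitted threshold and a steeper slope would correspond to the own-race advantage reported for the adults.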

16:45
Neural correlates of face recognition in the first hours of life
SPEAKER: Carlo Lai

ABSTRACT. Behavioral studies have suggested that newborns show communicative competences and can visually recognize a previously seen face from birth. Neurobiological evidence of this ability has been demonstrated by four months of life, and there is considerable scepticism about whether the necessary neurobiological maturation is present before three months of life. Twenty-three newborns performed the following visual procedure (11 newborns, mean age 4.7 hours, SD = 3.3, were included in the analyses): a face was presented for 60 s (Target); then 50 trials of the Target, 50 trials of Unknown faces and 50 trials of a neutral stimulus were shown, each trial lasting 2 s. Event-related potential (ERP) analysis showed an amplitude difference in response to the Target vs. Unknown faces over a left occipito-temporal montage from 300 ms, and a shorter latency in response to the Target compared to Unknown faces. Time-frequency analysis showed higher Beta1-band activity in response to the Target compared to Unknown faces at 500-600 ms over occipito-temporal sites. Connectivity results showed greater involvement of the fusiform gyrus for the known face. The findings suggest that newborns can discriminate a familiar face from a stranger's from birth; this result has relevant clinical implications for the possibility of finding early neural markers for psychopathologies such as autism.

17:00
The effect of age on visual decisions and consequences for models of bi-stable visual perception

ABSTRACT. We studied the effect of age on visual perceptual decisions about bi-stable stimuli. We used two different stimuli: bi-stable rotating spheres and a binocular rivalry stimulus. At onset, both stimuli can evoke two different percepts: for the sphere, clockwise or anti-clockwise rotation, and for the binocular rivalry stimulus, a percept that switches between the stimuli in the two eyes. The stimuli were presented intermittently for 1 second with a range of inter-stimulus intervals (0.1-2 seconds). Subjects ranged in age from 18 to 73 years and were instructed to indicate which of the two percepts dominated at each onset of the bi-stable stimulus. Our results show that perceptual choices are more stable in older subjects for the binocular rivalry stimulus but not for the bi-stable rotating spheres. The results will be discussed in the context of current models of bi-stable visual perception.

17:15
Aging modifies the direction of the assumed light source
SPEAKER: Ayelet Sapir

ABSTRACT. When judging the 3D shape of a shaded image, observers generally assume that the light source is placed above and to the left of the stimulus. This leftward bias has been attributed to hemispheric lateralization or to experiential factors shaped by the observers' handedness, learning and usual scanning direction. As aging is known to be associated with a loss of hemispheric lateralization in functional and resting-state signals, in the current study we measured the effect of aging on the assumed light source direction. A group of older adults over the age of 60 and a group of young adults judged the relative depth of a central hexagon surrounded by six shaded hexagons. We found a significant effect of age on the light source bias, with the older participants exhibiting a significantly decreased leftward lighting bias compared to the young participants. This result could be well accounted for by the diminished hemispheric lateralization that occurs with aging.

17:30
Sensitivity to horizontal structure and face identification in developmental prosopagnosia and healthy aging

ABSTRACT. Sensitivity to horizontal structure in human faces is related to identification performance in young, healthy observers (Pachai, Sekuler and Bennett, 2013). Here, we explored this relationship in developmental prosopagnosic subjects (DP) and older observers, two populations for which face identification is notably impaired. Specifically, we measured the performance of four groups in a 6-AFC identification task: older observers (mean age = 75), younger controls (mean age = 20), DPs (mean age = 43), and DP-matched controls (mean age = 43). On each trial, the target face was band-pass filtered to retain only horizontal, only vertical, or all orientation components. Additionally, target viewpoint either matched the response screen faces (i.e. front-facing) or was angled slightly to the side. Across all groups, sensitivity to horizontal structure, relative to vertical, was correlated with overall identification accuracy. Further, the older and DP groups performed significantly worse than their corresponding controls, their performance was reduced further when viewpoint variation rendered image matching impossible, and this additional decrement corresponded with decreased horizontal sensitivity, relative to vertical. These results extend the body of evidence relating selective horizontal processing to human face identification, and may have implications for alleviating the identification deficits experienced by many populations.

16:00-18:00 Session 9B: Attention: brain mechanisms

.

Location: A
16:00
Perceptual load degrades population orientation tuning in early visual cortex
SPEAKER: Luke Palmer

ABSTRACT. It is well established that when attending to a task high in perceptual load, visual cortex responses to unattended stimuli are reduced (for reviews see Lavie, 2005; Lavie et al., 2015). Furthermore, it was shown recently (de Haas et al., 2014) that the coding of location by neuronal populations in early visual cortex is less precise under high (vs. low) load. Here we investigate the coding of orientation in retinotopic cortex under low and high levels of perceptual load using voxel-based orientation tuning functions (VTFs; Serences et al., 2008; Saproo et al., 2010). Perceptual load was manipulated with an RSVP task at fixation demanding either a single-feature search (low load) or a conjunction-of-features search (high load), while sinusoidal gratings of varied orientations were presented intermittently in the periphery. Localised cortical responses to these gratings were extracted and used to construct VTFs. In agreement with recent psychophysical work (Stolte, Bahrami, & Lavie, 2014), we report reduced amplitude and increased bandwidth of orientation response profiles in primary visual cortex under high perceptual load. These findings suggest that perceptual load not only lessens visual cortical responses to stimuli outside the focus of attention, but also degrades neural population tuning to stimulus orientation.

16:15
Do early sensory P1 event-related potential modulations actually reflect oculomotor inhibition of return?
SPEAKER: Jason Satel

ABSTRACT. A great deal of work has investigated the relationship between modulations of early sensory P1 event-related potentials (ERPs) and inhibition of return (IOR). However, these studies have discouraged eye movements, resulting in an actively suppressed oculomotor system and an input-based form of IOR. In the real world, eye movements are rarely suppressed and the ‘true’ form of output-based IOR arises (Taylor & Klein, 2000). Recent experiments using combined eye tracking and electroencephalography have investigated oculomotor IOR by incorporating eye movements before targets appear (e.g., Satel, Hilchey, Wang, Story, & Klein, 2013). Although P1 modulations still arise when there is repeated peripheral stimulation, these reductions appear only in retinotopic coordinates (when there is no IOR), not in spatiotopic coordinates (when there is IOR). When there is not repeated peripheral stimulation (as when central arrows are used as stimuli), equivalent IOR is still generated, but there are no P1 reductions. Later modulations of the Nd component do, however, still arise in conjunction with IOR in spatiotopic coordinates and with endogenous stimuli. We propose that modulations of later ERPs, such as Nd and perhaps N2pc, reflect ‘true’, oculomotor IOR, whereas P1 modulations are simply the result of repeated peripheral stimulation.

16:30
Goal-directed orienting and target-set maintenance in the fronto-parietal attention network.
SPEAKER: Joyce Vromen

ABSTRACT. A network of frontal and parietal regions has been implicated in visual attentional control. However, the respective contributions of different brain regions to the sub-processes underlying attentional control remain to be clarified. In the current study, we used a blocked functional magnetic resonance imaging (fMRI) design varying search difficulty and target-set complexity in a visual search task to distinguish the functional variation in activity for goal-directed orienting and goal maintenance. Increased target-set complexity led to a greater response in the middle and superior frontal gyri as well as in the inferior parietal lobule, whereas increased search difficulty led to a greater response in the precuneus and the middle frontal and occipital gyri. Thus, the current study provides evidence that goal-directed orienting and goal maintenance draw on different brain regions, with a potential integrating role for the middle frontal gyrus (Brodmann area 6), previously implicated in the planning of complex actions. Distinguishing the sub-processes involved in attentional control and their neurofunctional underpinnings is particularly helpful for differentiating attentional disorders and enhancing the specificity of treatments.

16:45
Effects of constant and variable target colours in one-, two-, and three-colour search
SPEAKER: Anna Grubert

ABSTRACT. Attentional target selection is less efficient in multiple-colour relative to single-colour visual search (Grubert & Eimer, 2013; 2015), demonstrating capacity limitations of top-down search templates. These limitations may be even more pronounced when target colours are not constant, but change across trials, so that new search templates have to be activated on each trial. This prediction was tested in two experiments that compared target selection efficiency during one-, two-, and three-colour search. Target colours were indicated by pre-cues, and either remained constant or varied randomly across trials. RTs increased as a function of the number of possible target colours, and were slower in the variable relative to the constant colour presentation conditions. The N2pc component (an electrophysiological index of attentional target selection) emerged later as the number of target colours increased, confirming the costs of multiple-colour search templates on selection speed. However, the N2pc onset latency delays during multiple-colour search were identical with constant and variable target colours. We conclude that the additional RT costs for multiple-colour search that arise when target colours are variable are not generated during target selection, but during subsequent target identification and response selection stages.

17:00
The role of alpha oscillations in the Attentional Blink

ABSTRACT. The Attentional Blink (AB) phenomenon occurs when two targets, embedded within a rapidly-presented sequence of distractors, must be identified. When the second target occurs within around 500 ms of the first, participants frequently miss it. Recently, there has been some focus on the role of alpha (8-14 Hz) oscillations in the brain, their role in temporal attention, and more specifically in the AB. Previous studies show resting-state alpha predicts AB, and that alpha power immediately prior to masked stimulus presentation can predict stimulus perception; however, the role of pre-stimulus alpha in the AB has received relatively little attention. We measured continuous EEG during both resting state and an AB task. There was considerable individual variation in AB magnitude; we failed to replicate MacLean et al.’s (2012) finding that resting-state alpha predicted AB, but found a strong relationship between pre-stimulus alpha power and AB. Comparing AB to no-AB trials, alpha power was higher for trials in which an AB was present. In addition, during stimulus presentation, alpha phase differed markedly for AB compared to no-AB trials. The results support both attentional overinvestment and alpha entrainment accounts of the AB.

17:15
The influence of context on visual selectivity as indexed by the N2pc

ABSTRACT. Previous studies showed that attention can not only be top-down tuned to a target’s physical feature (e.g., orange), but also to its relative attributes (e.g., “redder”; Becker, 2012). In the present study, participants (n=24) searched for a target with a pre-defined color among target-context elements (e.g., an orange target in a yellow context, which is “redder”). Spatially unpredictive precues exhibiting various cue and cue-context colors were presented to probe whether attention was tuned to the physical or relative target color. Critically, we measured the N2pc component of the event-related potential to assess whether a cue attracted attention. We found that cues with the same relative color as the target elicited N2pcs, even when they had different physical colors (e.g., a red cue in orange context, which is “redder”, but has a different physical color). Conversely, cues with the opposite relative color produced no N2pcs, even when they had the same physical color (e.g., an orange cue in a red context, which is “yellower”, but physically matching). Evidently, attention can be top-down tuned to relative target features, so that only cues with the same relative features attract attention. Importantly, these results demonstrate that the N2pc is sensitive to these context-dependent attributes.

17:30
The focus of spatial attention during encoding determines the capacity and precision of visual face memory
SPEAKER: John Towler

ABSTRACT. Visual working memory capacity is widely assumed to be around three or four items. For complex objects such as faces, memory capacity is often restricted to a single item. There is debate about whether such severe capacity limitations arise during memory encoding or during the subsequent memory matching process. I will discuss recent event-related brain potential (ERP) studies designed to ascertain the locus of this capacity limitation. Participants had to encode one or two faces in memory displays and compare them to a subsequent test face. In two experiments, faces had to be encoded either simultaneously or sequentially. Specific ERP components were measured to track the attentional selection of faces (N2pc), the sustained maintenance of these faces in working memory (CDA), and the speed and precision of face identity matching processes (N250r). ERP findings reveal that focal attention is necessary to maintain individual faces in working memory. We suggest that the focus of sustained spatial attention during the initial encoding and maintenance of individual faces causes limitations in the precision and capacity of visual face memory.

17:45
Differential brain activity in overt and covert attention shifts: Evidence from co-registered eye-tracking and EEG
SPEAKER: Louisa Kulke

ABSTRACT. Attention improves visual processing and can be shifted either overtly, by making an eye-movement, or covertly, without fixating the attended object. Patient groups commonly show an impaired ability to disengage from stimuli and shift to peripheral targets, indicating that disengagement requires additional neuronal control. Most previous EEG studies instructed subjects to covertly shift attention, thereby suppressing natural saccades. This study compared covert and overt attention shifts, both with and without a competing central target (requiring disengagement). Participants performed an attention shift task in which they either manually responded to peripheral targets while maintaining fixation (covert) or made a saccade towards them (overt). EEG and eye-tracking were combined to simultaneously measure brain activity and saccades. Event-related potentials differed between overt and covert shifts of attention. An early fronto-central positivity was greater for covert shifts, and occipital responses showed significantly longer latencies for shifts requiring disengagement. These results provide insights into the mechanisms of attention shifts in a natural context, and can be used for testing non-verbal populations such as infants and children with communication disorders. The fronto-central component might reflect activation of the frontal eye fields as a crucial difference between covert and overt shifts of attention, reflecting attentional control.

16:00-17:45 Session 9C: Lightness and brightness

.

Location: C
16:00
An Empirical Model for Local Luminance Adaptation in the Fovea
SPEAKER: Peter Vangorp

ABSTRACT. The visibility and perception of contrast depend strongly on the state of luminance adaptation in early vision mechanisms. Naka and Rushton (1966) modeled luminance adaptation of individual photoreceptors, or of responses pooled across a small retinal region. However, most adaptation models used in computer graphics and vision assume a larger ad-hoc pooling radius. In this work we propose an empirical model of local luminance adaptation, based on new psychophysical experiments. A custom high-dynamic-range display was built to produce contrasts in excess of 100,000:1 and luminances above 5,000 cd/m^2. The display was used to measure baseline adaptation due to full-field luminance stimuli, and the adaptation due to various patterns of disks and rings. We discuss the predictive power of several candidate models, ranging from simple Gaussian-weighted luminance averaging to general averaging kernels in the log-luminance domain. We found that the pooling radius is smaller than the ad-hoc radius used in many applications, but larger than the extent of the laterally interconnecting retinal neurons. This suggests that luminance adaptation is also pooled in receptive fields in the LGN or the visual cortex. Our predictive model can be used as an adaptation function in existing applications in vision and graphics.
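[Editor's note] The Naka-Rushton response function and Gaussian-weighted log-luminance pooling cited in this abstract have standard textbook forms; the sketch below is an illustrative minimal implementation with assumed parameter names (`L_half`, `sigma`), not the authors' fitted model.

```python
import numpy as np

def naka_rushton(L, L_half, n=1.0, R_max=1.0):
    # Naka-Rushton (1966) saturation function:
    # R = R_max * L^n / (L^n + L_half^n), with L_half the
    # semi-saturation luminance set by the adaptation state.
    Ln = np.power(L, n)
    return R_max * Ln / (Ln + np.power(L_half, n))

def pooled_log_luminance(log_lum, sigma):
    # Gaussian-weighted average of a 1-D log-luminance profile:
    # the simplest candidate pooling model the abstract compares.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    w = np.exp(-x**2 / (2 * sigma**2))
    w /= w.sum()
    return np.convolve(log_lum, w, mode="same")
```

At L equal to the semi-saturation luminance the response is exactly half of R_max, which is how such models tie the operating point of early vision to the locally pooled adaptation luminance.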

16:15
Cortical model of object-centered lightness computation explains contrast and assimilation in a luminance gradient illusion
SPEAKER: Michael Rudd

ABSTRACT. I have recently proposed an object-centered model of cortical lightness computation (Rudd, 2010, 2013, 2014) in which local directed steps in log luminance are encoded by oriented spatial filters in early visual cortex, then the filter outputs are appropriately integrated along image paths directed towards the target at a subsequent cortical stage. A contrast gain control mechanism adjusts each filter’s gain on the basis of the outputs of other nearby filters. Here, I explain how this model accounts for the Phantom Illusion (Galmonte, Soranzo, Rudd, & Agostini, submitted), a new luminance gradient illusion in which either an incremental or decremental target can be made to appear as an increment or decrement, depending on gradient width. For wide gradients, incremental targets appear as increments and decremental targets appear as decrements. For narrow gradients, the reverse is true. I frame my explanation in the context of earlier modeling results that highly constrain the model parameters. These constraints reinforce the conclusion that the lightness computations must be object-centered and thus ‘midlevel.’ This conclusion is consistent with the model hypotheses that long-range contrast integration occurs in or beyond cortical area V4, following midlevel cortical computations related to image segmentation (completion, border ownership) in V2.

16:30
Illumination layout of the scene influences visual sampling

ABSTRACT. Illumination tends to vary in natural visual scenes, incorporating well-lit areas and darker regions. These darker areas have lower relative contrast and may therefore provide less detailed information. Recent research in the lightness domain, using both eye-tracking methodology and psychophysics, confirmed that there is a tendency to avoid shadowed regions when estimating color. We investigated a similar tendency but instead employed high-level visual judgment tasks. In two experiments we presented photographs of human faces half covered with a shadow. In the first experiment, we digitally modified photographs in order to obtain well-controlled and equal distributions of lower-contrast areas, while in the second experiment we used photographs that already contained natural shadows. We asked our participants to judge the beauty, age and profession of the depicted people. Using a state-of-the-art eye tracker, we measured the first fixation, the number of fixations and the dwell time, both at the level of the whole image and for pre-defined regions of interest (such as the eyes, the mouth, etc.). For all measures and for all tasks, a clear preference to fixate the non-shadowed regions was found. This suggests that visual sampling preferences for well-lit areas extend beyond low-level visual tasks.

16:45
Effects of lateral interactions and adaptation on color and brightness induction
SPEAKER: Romain Bachy

ABSTRACT. We introduce a new method for measuring color and brightness induction that separates lateral interactions from adaptation effects, and examines the effect of edge blur between test and surround. Observers fixated a central annulus (0.66 to 2.0°) on a 12°x12° surround. Annulus edges were sharp or blurred; surround color was modulated as a 0.5 or 4.0 second half-sinusoid between mid-gray and each of six poles of DKL space (light/dark, red/green, yellow/blue). When annulus color was steady at mid-gray, observers perceived an induced shift toward the opposite pole. The magnitude of the shift was estimated by nulling it with a fraction of the surround modulation added to the annulus, using a double-random 2AFC staircase. Trial blocks alternated surround modulation to opposite poles to maintain adaptation at mid-gray. This method revealed strong induction effects across 6 observers. There were no consistent induction asymmetries along any axis for fast presentations. For slower presentations, dark-light induction increased to further reduce asymmetry, suggesting asymmetry is not due to adaptation or to lateral interactions. Edge properties did not change induction magnitude for fast or slow presentations. Lateral interactions underlying induction are thus symmetric for color and brightness, and involve spatially opponent filters of modest widths instead of edge extraction.

17:00
Luminance and color correlations allow lightness constancy through a veiling luminance without borders

ABSTRACT. Seen through a veiling luminance (additive layer) without borders, lightness constancy is zero for a Mondrian but 100% for a 3D still life (Gilchrist & Jacobsen, 1983). We report nine experiments using a new apparatus that allows a light source to be placed either at the virtual location of the eye (thus eliminating both cast and attached shadows) or 25 degrees away from the eye. Three main findings. (1) Shadows produced significantly better lightness constancy (blacks look blacker; veil perceived), because adding a veil creates a positive correlation between contrast at shadow boundaries and the luminance of the underlying surface. (2) Adding colored patches to the Mondrian had no effect while adding colored objects to the 3D scene produced better constancy, due to a different correlation: Adding a veil creates a positive correlation between the saturation gradient across a curved colored object and the luminance gradient across it. (3) A weakly colored veil (24% purity) produced better constancy than a neutral veil, due to a further correlation: In this case the veil creates a negative correlation between the saturation gradient across a curved achromatic object and the luminance gradient across it. Control conditions showed that these correlations are used.

17:15
Coming to Terms with Lightness, Brightness, and Brightness Contrast: It's Still a Problem
SPEAKER: Mark McCourt

ABSTRACT. Lightness, defined as perceived reflectance, is under-specified because it can refer to three types of judgments. When an illumination boundary is visible, observers demonstrate three independent dimensions of achromatic experience: perceived intensity (brightness), perceived intensity ratio (brightness-contrast), and perceived reflectance (lightness). Under homogeneous illumination, achromatic experience reduces to two dimensions and lightness judgments are based on either brightness or brightness-contrast. We use the term inferred-lightness to refer to the independent dimension of lightness judgments that exists when there is a visible illumination boundary, in order to emphasize that these lightness judgments are inferential and not directly based on appearance. Although the three types of lightness judgments are not comparable, they are frequently conflated due to the underspecified definition of lightness as perceived reflectance. This confusion is compounded by inconsistent recognition, by both observers and experimenters, of when an independent dimension of inferred-lightness is available for matching. Clearly distinguishing when lightness matches are based on appearance (and do not differ from brightness or brightness-contrast matches) versus when lightness matches are based on the independent dimension of inferred-lightness resolves confusion in the literature and provides insight into the mechanisms that are employed to tackle the fundamental inverse problem of vision.

17:30
Scotopic lightness perception
SPEAKER: Robert Ennis

ABSTRACT. The anchoring theory of lightness proposes that a white percept is linked to a scene’s brightest surface. Much evidence for some form of anchoring exists, but mainly from photopic conditions and hardly any from scotopic conditions, when only rods respond. We tested whether anchoring occurs under scotopic conditions. We printed 10 chips, equally spaced on the CIELUV L* scale (under D65). 6 naive observers first viewed the maximally and minimally reflecting chips at 277 cd/m^2 and memorized their percepts as 100% white and 0% white. Observers adapted to four light levels (~1.2x10^(-4) (scotopic), ~1.8 (mesopic), 28, and 277 cd/m^2) and viewed the chips, one at a time, in two randomly-ordered blocks for foveal and peripheral fixation. Dark adaptation lasted twenty minutes. Observers reported the perceived amount of white per chip, in 10% steps along their memorized white scale. Under photopic conditions, observers assigned an average rating of 92.86% to the maximally reflecting chip. Under mesopic conditions, observers assigned an average rating of 54.64% to that chip. Under scotopic conditions, they assigned an average rating of 45%. This agrees with our phenomenological observation that the white chip appears gray under scotopic viewing. We propose that "pure" cone activation drives white perception.
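[Editor's note] The chips in this study are spaced equally in CIE L*, which is nonlinear in relative luminance; the standard CIE inverse formula makes the resulting reflectance spacing concrete. The sketch below is illustrative only: the exact L* endpoints of the authors' chip set are assumed (10 to 100 in steps of 10), not taken from the abstract.

```python
def lstar_to_Y(L_star):
    # CIE lightness inverse: relative luminance Y/Yn from L*.
    # (The L* definition is shared by CIELUV and CIELAB.)
    if L_star > 8.0:
        return ((L_star + 16.0) / 116.0) ** 3
    # Linear segment near black; 903.3 ~= (29/3)^3 per the CIE standard.
    return L_star / 903.3

# Ten chips equally spaced in L* (assumed endpoints for illustration):
# equal perceptual steps correspond to compressive steps in reflectance.
chips = [lstar_to_Y(L) for L in range(10, 101, 10)]
```

Note that L* = 100 maps to a relative luminance of 1.0 (the reference white), while L* = 50 corresponds to roughly 18% relative luminance, which is why equal-L* chip sets cluster toward low reflectances.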

18:30-19:30 Session 10: CRS Lecture

CRS Lecture Glyn Humphreys

Location: A
18:30
The integrative self

ABSTRACT. Self-reference is known to influence a wide variety of cognitive processes - from perception and attention through to memory. Here we present an over-arching account of how at least some forms of self-reference effects come about. We argue that self-reference acts to enhance integration in information processing. We present examples supporting this argument based on the effects of self-reference on (i) binding memories to their source, (ii) integrating different parts into whole objects, (iii) switching associations from one stimulus to another, and (iv) integrating different processes (attention and decision making). The results suggest that self-reference provides a form of 'integrative glue' across several levels of processing.