10:00 | Perceived duration of motion aftereffects is longer for individuals with more vivid visual imagery PRESENTER: Alan L. F. Lee ABSTRACT. Visual imagery is the ability to generate mental images in one’s “mind’s eye”. By studying how visual imagery ability is related to certain aspects of visual perception, we can better understand the processes underlying visual imagery. One such aspect that has been overlooked in the literature is the motion aftereffect (MAE). In the present study, we compared perceived MAE duration in individuals with high and low levels of self-reported vividness of visual imagery. Using an online questionnaire (VVIQ2, Marks, 1995), we obtained VVIQ scores from ~160 valid participants, from whom we recruited 36, with 17 in the high-VVIQ group (M = 122) and 19 in the low-VVIQ group (M = 94.2). On each trial, participants viewed, as the adapting stimulus, a random-dot kinematogram (RDK; 200 black and white dots; speed = 1.6 deg/s) for 50 seconds. The adapting RDK was presented to either the left or the right eye only, with the order randomized across trials. After a one-second blank screen, we presented the last frame of the adapting RDK for 25 seconds as the test, either to the adapted eye or to the non-adapted eye. During the test, participants pressed one of two arrow keys to indicate the perceived MAE direction, holding the key down for as long as the MAE remained perceptible. We found that perceived MAE duration was significantly longer in the high-VVIQ group (M = 9.86 s, SD = 6.74 s) than in the low-VVIQ group (M = 4.45 s, SD = 4.12 s; p = .008, Cohen’s d = 0.97). The group difference was weaker, but still significant, when the MAE was tested in the non-adapted eye (Cohen’s d = 0.82) rather than the adapted eye (Cohen’s d = 1.06). Our results suggest that motion perception is closely related to visual imagery ability. |
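[Editor's sketch] An effect size like the one reported above can be checked from the summary statistics alone. The snippet below assumes the standard pooled-SD formula for independent groups, which the abstract does not specify; plugging in the reported means, SDs, and group sizes gives d ≈ 0.98, matching the reported 0.97 up to rounding of the published values.

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(pooled_var)

# Summary statistics reported in the abstract (high- vs. low-VVIQ groups)
d = cohens_d(9.86, 6.74, 17, 4.45, 4.12, 19)
print(round(d, 2))  # prints 0.98
```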
10:15 | Neural Correlates of Unconscious Prior Experience Utilization in Disambiguating Ambiguous Stimuli PRESENTER: Po-Jang Hsieh ABSTRACT. Disambiguation, also known as one-shot learning, is crucial for human evolution as it allows for adapting to ambiguous threats with limited exposure. Previous studies showed how prior information affects recognizing ambiguous stimuli consciously, but it remains unclear whether prior experience can be automatically or unconsciously applied to disambiguate stimuli. This research used fMRI to monitor brain activity during the Mooney Images Paradigm under discontinuous flash suppression. Findings replicated effects observed consciously and revealed neural disambiguation even in unconscious conditions, particularly in occipital and temporal regions like V1, V2, FG, IT, and MT. This finding suggests the brain automatically applies prior experiences even without conscious perception, indicating a disparity between neural and conscious recognition. |
10:30 | Dissociable effects of attentional modulation and perceptual suppression in V1 PRESENTER: Chen Liu ABSTRACT. Although attentional modulation in the primary visual cortex (V1) is well established, the effect of perceptual suppression in V1 remains controversial. To address this issue and to better understand the relationship between attention and consciousness, we employed 0.8-mm isotropic resolution CBV and BOLD fMRI at 7 Tesla to investigate the effects of attentional modulation and perceptual suppression in human V1. A 2 x 2 factorial design was used to control attention and awareness independently. Red Mondrian masks and green gratings were presented in alternating frames at adjacent but non-overlapping locations, either in the same eye (visible) or in different eyes (invisible). Subjects either attended to and reported visibility of the green grating (attended), or performed a letter detection task at fixation (unattended). Stimuli were presented in 30-s blocks, alternating with 18-s fixations. An attentional cue was presented 2 s before each stimulus block. In all four conditions, fMRI time courses showed a transient response followed by a sustained plateau. Attentional modulation and perceptual suppression of the transient response showed a double dissociation: the effects of perceptual suppression were identical between attended and unattended conditions, while the effects of attentional modulation were identical between visible and invisible conditions. These findings demonstrate the independent effects of attentional modulation and perceptual suppression on V1 activity, which may reconcile the discrepancies in previous studies (Watanabe 2011, Science; Yuval-Greenberg 2013, J Neurosci). |
10:45 | Predicting the modality and intensity of imagined sensations from EEG measures of oscillatory brain activity PRESENTER: Derek Arnold ABSTRACT. Most people can experience imagined images and sounds. There are, however, large individual differences, with some people reporting that they cannot experience imagined audio or visual sensations (Aphants), and others reporting unusually intense imagined sensations (hyper-phantasics). These individual differences have been linked to activity in sensory brain regions, driven by feedback. We therefore predicted that imagined sensations should be associated with distinct frequencies of oscillatory brain activity, as these can provide a signature of interactions within and across regions of the brain. We have found that we can decode the modality of imagined sensations with a high success rate (~75%), live while people participate in an experiment, and can provide neurofeedback on this to motivate participants. Moreover, replicating many past studies, we have found that the act of engaging in audio or visual imagery is linked to reductions in oscillatory power, with prominent peaks in the alpha band (8–12 Hz). This, however, did not predict individual differences in the subjective intensity of these imagined sensations. For audio imagery, intensity was instead predicted by reductions in the theta (6–9 Hz) and gamma (33–38 Hz) bands, and by increases in the beta (15–17 Hz) band. Visual imagery intensity was predicted by reductions in the lower (14–16 Hz) and upper (29–32 Hz) beta bands, and by an increase in mid-beta (24–26 Hz) activity. Our data suggest there is sufficient ground truth to the subjective reports people use to describe their imagined sensations, such that these are linked to the power of distinct rhythms of brain activity. |
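[Editor's sketch] Band-limited oscillatory power of the kind analysed above can be estimated from an EEG trace with a simple periodogram. The snippet is illustrative only; the abstract does not describe the authors' actual pipeline, and the sampling rate and band edges here are arbitrary choices for the demonstration.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean periodogram power within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# A pure 10 Hz sine concentrates its power in the alpha band (8-12 Hz)
fs = 250                      # sampling rate in Hz (arbitrary for the demo)
t = np.arange(0, 4, 1 / fs)   # 4 s of signal
sig = np.sin(2 * np.pi * 10 * t)
alpha = band_power(sig, fs, 8, 12)
beta = band_power(sig, fs, 15, 17)
```

In practice one would average such band-power estimates over epochs and electrodes before relating them to imagery reports.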
11:00 | The mystery of continuous flash suppression: A two-photon calcium imaging study in macaque V1 ABSTRACT. Continuous flash suppression (CFS) has been widely used to study subconscious visual processing: the perception of a high-contrast stimulus presented to one eye is suppressed by flashing Mondrian noise presented to the other eye. It remains elusive whether and how the responses of V1 neurons, most of which receive binocular inputs, are affected by CFS. To address this issue, we employed two-photon calcium imaging to record the responses of thousands of superficial-layer V1 neurons to a target under CFS in two awake, fixating macaques. The target was a circular-windowed square-wave grating (d = 1°, SF = 3/6 cpd, contrast = 0.45, drifting speed = 4°/s). The flashing stimulus was a circular Mondrian noise pattern (d = 1.89°, contrast = 0.50, TF = 10 Hz). The stimuli were presented for 1000 ms with 1500-ms intervals. The square-wave grating at various orientations was first presented alone to either eye to identify orientation-tuned V1 neurons and estimate their respective ocular dominance indices (ODI). Then the grating target and the flashing noise were presented dichoptically to measure neuronal responses under CFS. The results show that the flashing noise nearly completely wiped out the orientation responses of neurons preferring the noise eye or both eyes: the population orientation tuning functions were suppressed, with unmeasurable or very wide bandwidths. The flashing noise also significantly suppressed the orientation responses of neurons preferring the grating eye, but to a lesser degree; their tuning bandwidths were still measurable, increasing from 11–13° to 19–21° (half-width at half-height). The neuronal responses under CFS can be fitted with a gain control model in which the flashing noise produces ODI-dependent and partially orientation-tuned intra- and inter-ocular inhibition. 
Consequently, the high-contrast stimulus may not be rendered subconscious by the flashing noise, as many studies have assumed. Instead, its representation is severely compromised under CFS and likely unable to enter consciousness. |
11:15 | Veridical and consciously perceived information interact in guiding behavior PRESENTER: Marjan Persuh ABSTRACT. A compelling proposition suggests that our visual systems distinguish between information for perception and information for action, yet consensus remains elusive despite numerous studies using visual illusions in which object properties diverge between the veridical and the perceived. In response priming, evidence hints that only the physical attributes of prime stimuli govern motor responses. Across three experiments, we investigated the interplay of physical and consciously perceived locations in response priming, leveraging the well-known flash-lag illusion, in which a briefly flashed disk and moving bars that are physically at the same location appear displaced from one another. Participants rapidly responded to the location of a target disk presented above or below static bars. In the first experiment, we held the physical location of the prime disk constant so that the disk and the moving bars appeared at the same spot. Responses to the target disk consistently showed a bias induced by the prime disk, indicating that the illusory perception of the prime location primed rapid motor responses. In the second experiment, we inverted the physical and perceived locations of the prime. After estimating the illusion size for each participant, we presented the prime disk either above or below the moving bars, aligning its perceived location with the moving bars. Motor responses were moderated by the physical location of the disk, revealing that the visuomotor system utilized the veridical prime location. In the third experiment, we juxtaposed physical and perceived locations, situating them on opposite sides of the moving bars. Under this arrangement, motor responses remained unaffected by primes. Our experiments underscore that visuomotor systems integrate both sources of information, veridical and consciously perceived location, to guide behavior. |
Multisensory integration is one of the key functions for achieving stable visual and non-visual perception in our daily lives. However, comprehensively understanding how the brain integrates different types of modal information remains a challenging problem. How does our visual system extract meaningful visual information from retinal images and integrate it with information from other sensory modalities? Recent technologies such as virtual reality (VR) and augmented reality (AR) can provide scalable, interactive, and immersive environments for testing the effects of external stimulation on our subjective experiences. What do these technologies bring to our research? We invite world-leading scientists in human perception and performance to discuss the psychological, physiological, and computational foundations of multisensory integration, and methodologies that provide insight into how non-visual sensory information enhances our visual experience of the world.
Organizers: Hiroaki Kiyokawa (Saitama University) and Juno Kim (University of New South Wales)
[11] Friendliness and Hostility in Action: Encoding/Decoding Principles and Cultural Influence ABSTRACT. In an unfamiliar culture, when verbal communication is ineffective, body movements help us tell friends from foes. However, how actions express intention, how that intention is understood, and how culture influences this fundamental and critical capacity remain to be clarified. We approached these questions by recruiting professional performers (42 Japanese, 41 Taiwanese) to our lab. We instructed them to interact with an imaginary friendly alien (scenario 1) or an imaginary hostile alien (scenario 2) for 10 seconds. Because the extraterrestrials neither understood human culture nor spoke human languages, performers needed to communicate solely through body language. Their movements, recorded with a motion capture system (Vicon), were converted to dynamic point-light animations. An additional 53 observers (24 Japanese and 29 Taiwanese) viewed these animations on an online experiment platform and reported the perceived friendliness/hostility on a scale from 0 (friendly) to 100 (hostile). Subjective reports indicated that similar cues were used to express (by performers) and detect (by observers) intent, ranging from motion-related cues such as kinematics (e.g., slow = friendly, fast = hostile) and posture (open = friendly, closed = hostile) to abstract cues such as intention (e.g., aggression, greeting gestures) and imagined context (e.g., perceived emotion, daily action). Intensity ratings showed that perceived hostility was significantly higher for hostile animations than for friendly animations, and positive correlations between Japanese and Taiwanese raters' intensity ratings (95.42%, p < 0.001 for JP animations; 94.89%, p < 0.001 for TW animations) indicated high consistency in perceiving these two modes across the two cultures. 
Interestingly, Japanese observers perceived higher hostility than Taiwanese observers when Japanese performers interacted with an antagonistic alien, highlighting a culture-specific sensitivity toward negative expression. To summarize, our results provide the first account of the encoding and decoding principles of friendliness-hostility body expression. Our demonstration of cultural modulation encourages future research in this direction. |
[13] Culture Matters: Performance and Perception of Human-like Body Motion between Taiwan and Japan PRESENTER: Xiaoyue Yang ABSTRACT. Humans are sensitive to conspecific movements and might have a distinct way of perceiving whether a motion is human-like. Because cultural norms heavily modulate our non-verbal communication, we investigated whether there is any cultural impact on expressing and detecting human-like body movements. 43 Japanese and 41 Taiwanese professional performers were instructed to use whole-body movements to demonstrate that they were real humans (rather than AI or machines) while their motion was recorded by a motion capture system (Vicon) in our laboratory. The recordings were processed into 57-point dynamic point-light displays (PLDs) to remove factors unrelated to motion (e.g., face, background). An additional 99 observers (50 Taiwanese and 49 Japanese) viewed all PLDs and judged whether each was made by a real human or by AI. In interviews, Taiwanese performers reported most frequently using kinematic cues to convince viewers of their human identity (e.g., smooth, continuous, flexible, soft movements) as opposed to those of non-human agents (e.g., rigid, repetitive). Japanese performers utilized more contextual cues (e.g., festival dances, children’s games, shared human experiences) in their motion. An objective assessment of the PLDs via Motion Energy Analysis (MEA) also revealed interesting cultural characteristics: Taiwanese PLDs contained significantly higher motion energy than Japanese PLDs (p < .001). Among Taiwanese PLDs, the most human-like peaked at 1–2 Hz and the most AI-like peaked at 0–0.25 Hz, whereas Japanese PLDs contained consistent energy across all frequency bands. For humanness perception, Taiwanese observers were significantly more likely than Japanese observers to report seeing a real human (p < .041), regardless of the origin of the PLDs (JP/TW), suggesting a more dominant role for observers’ cultural backgrounds than for performers’. 
Our study provides the first report of culture-specific body expressions used to differentiate real human motion from AI-generated movement. It highlights how environmental factors modulate non-verbal communication for both senders and receivers. |
[15] Exploring Mental Health Self-stigma, Self-identification, and Person Perception in a Subclinical Population ABSTRACT. Objectives: Individuals with high levels of internalized mental health stigma are more likely to have poorer health outcomes and to engage in discriminatory behaviors against others with mental ill health (MIH). However, self-identifying as having MIH can buffer against the negative impacts of self-stigma. This study investigates the influence of mental health self-stigma and self-identification on person perception in a subclinical population in Singapore. Method: Participants (N = 83) rated 36 person images paired with descriptions of MIH symptoms or non-MIH behaviors on trustworthiness and competence. They then completed measures of self-stigma, self-identification, anxiety, and depression. For robustness, faces and descriptions were randomly paired for each participant, with an equal number of faces per race and sex presented. Results: When person images were paired with MIH symptoms, they were rated significantly less competent and trustworthy than when paired with non-MIH behaviors. Participants with higher self-stigma scores rated stimuli significantly lower on competence but not trustworthiness. A novel two-way interaction between MIH symptom labels and self-identification on trustworthiness ratings was observed: the decrease in trustworthiness ratings for person images paired with MIH labels was gentler among participants with higher self-identification. Conclusion: The findings support the persistence of mental health stigma and suggest that stigma is internalised in competence-relevant domains. The results also highlight the benefits of positive self-identification in reducing stigmatizing behaviors, shedding new light on the potential positive effects of peer support in individuals with subclinical MIH experiences. 
This study contributes to the limited research on self-stigma and person perception in subclinical populations and underscores the importance of addressing mental health stigma and fostering positive self-identification. |
[17] Does premenstrual syndrome affect emotion recognition? ABSTRACT. Premenstrual syndrome (PMS) is characterised by recurring physical and affective symptoms that arise during the luteal phase of a woman’s menstrual cycle. Mood disturbances are often reported in PMS, and we examined whether these might have repercussions for social cognition, specifically the ability to interpret facial expressions. Forty-one participants were grouped as individuals with (N = 23) or without (N = 18) PMS, based on scores calculated from a daily record of menstrual symptoms over two consecutive menstrual cycles. In a subsequent menstrual cycle, all participants completed questionnaires measuring several dimensions of mood (e.g., anger, depression), once during their follicular phase and once in their luteal phase. At the same two sessions, they viewed a series of facial expressions intended to convey one of several emotions (happy, angry, disgusted, or fearful) at varying intensities (i.e., subtle to extreme), and classified them with a keypress. Severities of negative affect (p = .022), anger (p = .005), depression (p = .032), and total mood disturbance (p = .030) were generally higher during the luteal phase than the follicular phase, and this increase was comparable between the two participant groups. As for expression classification, all participants found it difficult to accurately recognise subtle expressions of disgust (p ≤ .004) during the luteal phase relative to the follicular phase. Further, individuals with PMS exhibited some unique negative biases: they classified neutral faces (p = .003) and subtle intensities of anger (p ≤ .013) as angry more often during the luteal phase than in the follicular phase. These biases were not observed in individuals without PMS. Our findings suggest that classification of subtle facial expressions is generally unstable across the menstrual cycle. 
More importantly, PMS may introduce distinguishable biases on emotion classification that do not appear to be contingent on mood disturbances. |
[19] Parity and gender influences in mental jigsaw puzzles: A secondary eye-tracking study ABSTRACT. This secondary analysis, derived from a preprint (https://doi.org/10.31219/osf.io/vkctj) and preregistered (https://osf.io/5yux6 and https://osf.io/r2j7a), investigates the influence of parity and gender on mental jigsaw puzzles (FT, fitting task) and traditional object mental rotation tasks (MT, matching task). While previous research noted behavioral gender and parity differences in the MT, the FT remains underexplored. Thirty college students (14 female, 16 male; balanced for gender-specific analysis) engaged with three-dimensional objects analogous to electrical connectors, either in the FT (a male part fixed at 0° paired with a female part) or the MT (a male part fixed at 0° paired with another male part). Parity was adjusted by mirroring the male objects at 0° (congruent, incongruent). Participants, responding via foot pedals, were tested across six rotational angles (0°, 60°, 120°, 180°, 240°, 300°) on both rotation sides (left, right), with their eye movements tracked over 576 trials. Using both parametric and non-parametric repeated measures ANOVA, distinct behavioral patterns were observed: in the FT, incongruent conditions exhibited significantly shorter reaction times (RTs) at 240° and longer RTs at 0°, 60°, and 300° on the left side, and at 300° on the right side (all p < .05). The MT, under incongruent conditions after collapsing rotation sides, showed increased RTs at 60° and 300° (all p < .01). Error rates were higher under congruent conditions for both tasks, particularly at mid-range angles (120°–240°; all p < .001), extending to 300° in the FT on the left side. Longer fixation times were noted under congruent conditions in the FT, and more fixations per saccade in the MT under incongruent conditions (both p < .05). Despite extensive analysis, no significant gender differences were observed across these metrics in the congruent condition (all p > .05), suggesting minimal impact of gender in these tasks. 
These findings imply nuanced similarities and differences in behavioral parity trends and cognitive strategies across tasks, emphasizing the need for further investigation. |
[14] Cultural Variance in the Emotion Perception of Body Actions by Asian Performers ABSTRACT. While many studies have examined cultural variance in the emotion perception of facial expressions, there is relatively little research on cultural variance in emotion perception conveyed through body actions. Previous studies on facial expressions suggested an in-group advantage, wherein people perform better at recognizing emotions expressed by members of their own cultural group. However, these studies mainly focused on emotion recognition, neglecting other important dimensions, including arousal and valence. Building on the hypothesized in-group advantage for emotion perception of body actions and incorporating additional dimensions of emotion, the present study investigated the perception of seven emotions (i.e., joy, sadness, anger, fear, disgust, surprise, and contempt) expressed by Asian performers across four dimensions (emotion recognition, confidence in recognition, arousal, and valence). Participants (Asian: N = 41; non-Asian: N = 26) completed an online experiment in which they watched 70 motion videos and reported their emotion perceptions. Results revealed that Asian participants had higher accuracy and confidence in recognizing sadness, anger, and surprise. Beyond recognition differences, Asian observers reported higher perceived arousal for joy, sadness, disgust, and contempt, and higher perceived negativity for sadness, anger, and contempt, indicating cultural variance across multiple emotional dimensions. While significant relationships were found between cultural contact and the perception of certain emotions, no significant relationship was found between individualist tendency and emotion perception, emphasizing cultural exposure rather than attitudes towards self and community. 
This study contributes to cross-cultural studies in emotion perception, calling for further investigation into potential variances in the underlying neural mechanisms. |
[16] Prolonged Visual Perceptual Changes Induced by Short-term Dyadic Training: The Influential Roles of Confidence and Autistic Traits in Social Learning ABSTRACT. As social creatures, we are naturally swayed by the opinions of others, which largely shape our attitudes and preferences. However, whether social influence can directly impact our visual perceptual experience remains debated. We designed a two-phase dyadic training paradigm where participants first made a visual categorization judgment and then were informed of an alleged social partner’s choice on the same stimulus. Results demonstrated that social influence significantly modified participants’ subsequent visual categorizations, even when they had been well-trained prior to the dyadic training. This effect persisted for an extended period of up to six weeks. Diffusion model analysis revealed that this effect stemmed from perceptual processing more than mere response bias, and its strength was inversely related to the participants’ confidence and autistic-like tendencies. These findings offer compelling evidence that our perceptual experiences are deeply influenced by social factors, with individual confidence and personality traits playing significant roles. |
[18] Different cognitive mechanisms underlie absolute and relative evaluation of images ABSTRACT. In a life filled with digital data, we often need to choose a few pictures for display from among hundreds. There are two possible ways to evaluate images. One is to rate the preference for each image one after another (absolute evaluation). The other is to compare two images and decide which is preferable (relative evaluation). If preference is uniquely determined, both evaluations should rest on the same mental process. Interestingly, however, some studies have shown that the two methods yield different evaluations. In this study, we investigated what causes the difference between the two types of evaluation. We conducted behavioral experiments in which we recorded participants’ facial expressions and EEG signals while they performed absolute and relative image evaluation tasks. The experiments showed that preference scores from the absolute and relative evaluations were not correlated, and the relative difference in absolute-evaluation scores for a given pair of images was sometimes opposite to the preference obtained from relative evaluation. These results suggest that distinct cognitive mechanisms underlie relative and absolute evaluations. Next, we trained a machine learning model to predict absolute and relative evaluation results from facial expressions and EEG signals. For both types of evaluation, a participant’s own facial/EEG features successfully predicted that participant’s evaluations. Furthermore, for absolute evaluations, features from other participants improved prediction performance, suggesting that there are common facial/EEG features across participants. 
In contrast, for relative evaluations, features from other participants did not improve predictive performance, suggesting that facial/EEG signals related to relative evaluation differ greatly across participants. In summary, behavioral data analysis and machine learning analysis together show that absolute and relative evaluations rely on different mechanisms. |
[20] Empowering Attires: The Role of Clothing in Countering Stereotypes ABSTRACT. As economic inequality grows, understanding how status cues shape social perception becomes increasingly crucial. The pervasive stereotype of incompetence continues to profoundly shape the way women and minorities navigate social interactions, with studies showing the strategic use of language to convey competence. Likewise, clothing conveys competence through subtle economic status cues. However, the role of attire in countering stereotypes remains underexplored. This study thus investigates how attire is employed across gender and race to project competence and mitigate negative stereotypes. In two studies, participants (Study 1: 20 Black and 20 White men, 20 Black and 20 White women; Study 2: 50 Black and 50 White men, 50 Black and 50 White women, all residents of the U.S.) read 20 scenarios describing social situations. Half demanded competence presentation (competence-relevant situations; e.g., presenting at an exhibition), whereas the other half involved lower stakes (competence-irrelevant situations; e.g., gathering with friends). For each situation, participants chose an outfit from five options randomly drawn from a gender-specific pool of ~75 clothing images. The clothes and situations were rated for professionalism by two separate groups of independent raters. To ensure robustness, separate clothing images were used for each study, and Study 2’s protocol and predictions were preregistered. Initial results revealed that competence-relevant situations prompted more professional attire choices. Black and female participants chose more professional attire in competence-irrelevant situations. Black women opted for more professional clothes in competence-relevant situations than White women, while White men opted for more casual attire than Black men in competence-irrelevant situations. Additional studies examine the motivations behind these clothing choices. 
By shedding light on how different groups use attire to strategically navigate their social landscape, this research underscores the potential of clothing to counter negative stereotypes and contributes to our understanding of the subtle yet powerful ways in which they are resisted. |
[22] Trustworthiness Judgement in Short Videos is Influenced by Speakers’ Facial Emotion and Attire PRESENTER: Zihao Zhao ABSTRACT. Previous research found that attire and emotion influence social attributes of face images, such as trustworthiness and competence. However, their effects on the trustworthiness of short videos on social media platforms such as TikTok remain unclear. The current study explored how the speaker’s facial emotion and clothing influence trustworthiness judgements of short videos. Open-source computer vision algorithms were used to transform static real-person images into videos of realistically speaking individuals. Thirty-two participants (mean age = 32.16, 17 female) viewed 192 short videos (4 clothes × 3 emotions × 2 display modes × 2 speaker genders × 4 news contents) in random order and, after each video, rated the trustworthiness of the speaker and of the content on a scale from 0 (lowest trust) to 100 (highest trust). Repeated measures analysis of variance showed that speaker and content trustworthiness ratings were significantly influenced by emotion (F(1.57, 48.74) = 17.42, p < 0.001; F(1.57, 48.63) = 6.78, p = 0.005, respectively) and uniform (F(1.80, 55.71) = 8.48, p = 0.001; F(1.90, 58.99) = 4.97, p = 0.011, respectively). Pairwise comparisons with Bonferroni correction found that angry speakers were rated as less trustworthy than happy (t = 4.11, p = 0.001) or neutral speakers (t = 5.69, p < 0.001). Angry speakers’ content was also less trusted than neutral speakers’ (t = 3.49, p = 0.004). Furthermore, speakers in doctor uniforms were trusted more than those in casual clothes (t = 4.24, p = 0.001). Content from speakers in doctor uniforms also received higher trustworthiness ratings than content from speakers in casual clothes, but the difference was not significant (t = 2.59, p = 0.086). The current study found that both emotion and attire influence judgements of trust toward a short video’s speaker and content. 
The findings provide insights into the underlying mechanisms of trust judgements and fake news detection. |
14:00 | Rhythmic TMS over human right parietal cortex strengthens visual size illusions PRESENTER: Lihong Chen ABSTRACT. Rhythmic brain activity has been proposed to structure visual processing. Here we investigated the causal contributions of parietal beta oscillations to context-dependent visual size perception, as indexed by the classic Ebbinghaus and Ponzo illusions. On each trial, rhythmic TMS was applied over the left or right superior parietal lobule in a train of five pulses at beta frequency (20 Hz). Immediately after the last pulse of the stimulation train, participants were presented with the illusory configuration and performed a size-matching task. The results revealed that right parietal stimulation significantly increased the magnitudes of both size illusions relative to control vertex stimulation, whereas the illusion effects were unaffected by left parietal stimulation. The findings provide clear evidence for the functional relevance of beta oscillations to the implementation of cognitive performance, supporting a causal contribution of parietal cortex to the processing of visual size illusions, with a right-hemispheric dominance. |
14:15 | Verification of Hermann grid illusion using machine learning PRESENTER: Yuto Suzuki ABSTRACT. Humans experience various types of optical illusions, and multiple approaches are being used to explain their mechanisms; machine learning is one promising method. In this study, we attempted to clarify the mechanism of the Hermann grid illusion by reproducing the illusion with machine learning. In the evaluation experiment, observers rated the strength of the illusion on a seven-point scale for 568 Hermann grid illusion images with different grid thicknesses, numbers of intersections, angles, and brightness contrasts. We then trained a convolutional neural network (CNN) on each image and its rated illusion strength, predicted the illusion strength of test images, and computed the prediction accuracy. In addition to the conventional model, we created models incorporating an ON-center receptive-field structure (which is thought to underlie the illusion), an orientation-selective structure, and a structure combining both. We compared the accuracy of these four models in a machine-learning experiment. The evaluation experiment confirmed that the Hermann grid illusion exhibits orientation selectivity. In the machine-learning experiment, the models incorporating visual-system structures achieved more stable accuracy than the conventional model, supporting their validity, and combining the structures appeared to increase accuracy further. Together, the evaluation and machine-learning experiments suggest that the Hermann grid illusion may involve a mechanism based on ON-center receptive fields and orientation selectivity. Further model improvements are needed to clarify the mechanism fully. |
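The ON-center receptive-field structure mentioned in this abstract is commonly modeled as a difference-of-Gaussians filter. The following pure-NumPy sketch is illustrative only (it is not the authors' model; the grid geometry, kernel parameters, and probe coordinates are all made up) and shows why such a filter responds less at street intersections, the locations where the illusory grey spots appear:

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.0, sigma_s=3.0):
    """ON-center difference-of-Gaussians kernel: narrow excitatory
    center minus broad inhibitory surround (parameters illustrative)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround = np.exp(-r2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return center - surround

def convolve_same(img, k):
    """'Same'-size 2-D convolution via zero padding (slow but simple)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

# Toy Hermann grid: black 30-px squares separated by 10-px white streets
img = np.ones((120, 120))
for y in range(0, 120, 40):
    for x in range(0, 120, 40):
        img[y:y + 30, x:x + 30] = 0.0

resp = convolve_same(img, dog_kernel())
street = resp[35, 55]        # probe on a street between two squares
intersection = resp[35, 35]  # probe at a street intersection
print(street, intersection)
```

At the intersection the inhibitory surround sees more white (four white arms instead of two), so the net response drops, matching the perceived darkening at intersections.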
14:30 | Classical orthonormal polynomials as activation functions for implicit neural representations to preserve high frequency sharp features. PRESENTER: Annada Prasad Behera ABSTRACT. Neural networks with ReLU activation functions, although popular in many machine learning applications, are strongly biased towards reconstructing low-frequency signals. Higher-frequency representations are essential to manifest sharp features in images and 3D shapes. The current strategy for enhancing high-frequency representations in neural networks is to use sinusoidal activations with increasing frequency. However, such activations introduce periodicity into the network, since sinusoidal functions are periodic, and produce "ringing artifacts" in image and 3D shape reconstructions due to the Gibbs phenomenon. Noting that sinusoids of increasing frequency are only one example of a more general class of complete orthogonal systems that can approximate arbitrary functions, the authors explored other "classical" orthogonal systems (Legendre, Hermite, Laguerre, and Tschebyscheff polynomials) as activation functions to address these issues, and compared them against sinusoidal functions and non-orthogonal systems such as power series. In this study, the authors demonstrate how these functions can be used as a neural network layer, compare their convergence rates across optimizers and increasing polynomial degrees, and assess their accuracy in various applications, including solving differential equations, image reconstruction, 3D shape reconstruction, and learning implicit functions. |
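One plausible reading of "orthogonal polynomials as activation functions" can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation: the tanh squashing onto the polynomials' natural domain and the random mixing coefficients are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

class LegendreActivation:
    """Hypothetical activation layer: squash pre-activations into
    [-1, 1] (the natural domain of Legendre polynomials), then
    evaluate a learnable mixture sum_k c_k * P_k(x)."""
    def __init__(self, degree, seed=None):
        rng = np.random.default_rng(seed)
        self.coef = rng.normal(size=degree + 1)  # trainable in a real network

    def __call__(self, x):
        return legendre.legval(np.tanh(x), self.coef)

act = LegendreActivation(degree=5, seed=0)
x = np.linspace(-3.0, 3.0, 7)
y = act(x)  # elementwise, same shape as the input

# Orthogonality on [-1, 1]: the integral of P_2 * P_3 vanishes,
# which is what distinguishes these systems from a raw power series.
t = np.linspace(-1.0, 1.0, 100_001)
p2 = legendre.legval(t, [0, 0, 1])      # P_2
p3 = legendre.legval(t, [0, 0, 0, 1])   # P_3
ortho = np.sum(p2 * p3) * (t[1] - t[0])
print(y.shape, ortho)
```

Unlike sinusoids, Legendre polynomials are non-periodic on their domain, which is the property the abstract appeals to for avoiding periodicity artifacts.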
14:45 | The inhibition of return and the eye fixation patterns for perceiving bistable figures PRESENTER: Chien-Chung Chen ABSTRACT. Bistable figures give rise to two different percepts between which observers’ perception spontaneously reverses. Evidence suggests that visual attention plays an important role in object perception because it helps us selectively focus on certain features within a figure. According to the saliency model proposed by Itti and Koch (2000), visual attention is attracted to locations of high saliency. After attention stays at the same location for a while, the local saliency is suppressed, which makes visual attention shift to a new location; this is called inhibition of return (IOR). Based on this, we assumed that a bistable figure contains several features, each implying a different interpretation of the figure. IOR can make our attention shift between regions containing these different features, and thus drive percept reversal. We used an eye-tracker to record observers’ eye movements during observation of the duck/rabbit figure and the Necker cube, and also recorded their percept reversals. In Exp. 1, we found that different fixation patterns accompanied different percepts, and that fixation shifts across regions occurred before percept reversals. This supports the idea that what we perceive depends on where we look. In Exp. 2, we examined the influence of inward bias on the duck/rabbit figure and found that it had a significant effect on the first percept, but this effect diminished over time. In Exp. 3, we added a mask to the attended region to remove the local saliency. This manipulation increased both the number of percept reversals and the number of fixation shifts across regions. That is, a change in local saliency can cause a fixation shift and thus reverse our perception. |
15:00 | Interoception affects the moving rubber hand illusion PRESENTER: Hiroshi Ashida ABSTRACT. The rubber hand illusion (RHI) refers to the phenomenon in which a hand-like object is felt to be one’s own hand after synchronous visuo-tactile stimulation of one’s own hand and the object. We have suggested that emotional states can affect the RHI, and speculated that interoception might mediate this link (Kaneno & Ashida, 2023). Tsakiris et al. (2011) reported that people with lower interoceptive sensitivity are more susceptible to the RHI, but this remains controversial, with many studies failing to replicate it. In this study, we examined the relationship between interoception and a variant of the RHI induced by active movement of the participant’s finger (the “moving RHI”, Kalckert & Ehrsson, 2012). The participant’s index finger was linked to the rubber-hand finger so that the participant’s invisible finger movement was reflected in visible movement of the rubber finger. Similar but asynchronous finger motion, produced by the experimenter, served as a control. The RHI was quantified as the difference between questionnaire scores under the synchronous and asynchronous conditions. We measured interoception in two ways: interoceptive accuracy (IA) with the conventional heartbeat counting task, and interoceptive sensitivity (IS) with questionnaires on broader aspects of interoception. We found that the moving RHI was stronger for those with higher interoceptive sensitivity, an effect that was not evident for the classic RHI. This pattern is apparently opposite to that of Tsakiris et al. (2011), but is consistent with the finding of Ma et al. (2023) of a stronger out-of-body experience with a virtual avatar in the higher-IA group when active walking was involved. Participants’ own movement appears to be crucial for linking interoceptive and exteroceptive information in the sense of body ownership. 
Our results also suggest the need for multiple interoception measures, as a single measure alone may not always be reliable. |
14:00 | The Effect of Dynamic Visuospatial Working Memory on Motor Control PRESENTER: Garry Kong ABSTRACT. Working memory is regarded as a key pillar of human cognition, supposedly because it acts as a foundation upon which other cognitive abilities can build. Despite this, there is very little definitive proof that any aspect of working memory enables another cognitive function. Here, we demonstrate that visuospatial working memory is bidirectionally linked to fine motor control: fine motor control is impaired when visuospatial working memory is loaded, and performing a fine motor control task impairs visuospatial working memory. We used a dual-task paradigm in which participants viewed a memory stimulus, then moved their finger from one side of a touchscreen monitor to a dot on the other side. On some trials, once the participant began their finger movement, the destination dot was translated vertically, and the participant had to adjust their movement to land on the new location. Once their finger reached the destination, they were asked to recall the memory stimulus. The memory stimulus was either a moving trail of dots (dynamic) or three colored dots (static). When the memory stimulus was dynamic, the time required to adjust to the change in destination increased compared to trials with either no memory load or a static one. Furthermore, not only did the mere possibility of needing to adjust the planned finger motion decrease memory accuracy, but actually correcting the movement impaired memory even more. For the static memory stimulus, the time required to adjust to the change in destination increased compared to no load, but memory accuracy was not affected by either the possibility of needing to adjust the motion or actually correcting it. We conclude that there is a bidirectional relationship between dynamic visuospatial working memory and fine motor control. |
14:15 | Second responses in visual working memory experiment ABSTRACT. Visual working memory (VWM) allows detailed visual information to be stored on a short time scale. Despite intensive research, there is no consensus on the nature of representations in VWM. According to slot models, memory is constrained by a highly limited number of discrete memory units in which element information is stored. This predicts that memory performance should have high-threshold, all-or-none characteristics. Detection-theory-based resource models, on the other hand, assume that VWM can store more elements, but that precision is limited by noise; on this view, VWM performance should have low-threshold, degrees-of-certainty characteristics. In this study, a two-response n-alternative forced-choice technique originally developed in psychophysics was applied to a VWM change detection task. Observers were shown n Gabor elements (1.5 cpd) in randomized orientations for 200 milliseconds. After a blank retention interval (1500 milliseconds), the elements reappeared with one of them changed in orientation, and the observer’s task was to indicate the changed element. In addition to the first, main response, observers were allowed to make a second choice (the second-best guess). The number of elements was varied (3, 4, 6, and 8). The slot and resource models make different predictions for the accuracy of the second response when the first response is incorrect. According to the slot model, incorrect responses occur when the changed item was not stored in VWM, so second-response performance should be at chance level. The resource model holds that incorrect responses are caused by noise, so second responses should be above chance level. The results show that second-response performance was significantly above chance for all set sizes. 
However, it was also below the level predicted by a simple resource model limited by independent Gaussian noise. Nonetheless, more elaborate resource models could explain the results. |
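The slot model's chance-level prediction for the second response can be made concrete with a small simulation. This is a minimal sketch under idealized blind-guessing assumptions, not the models fit in the study: if the changed item was not stored, both responses are guesses, so second-guess accuracy given a wrong first guess should sit at 1/(n-1):

```python
import numpy as np

def simulate_slot_second_guess(n, trials=50_000, seed=0):
    """Simulate second-guess accuracy, conditioned on a wrong first
    guess, for a trial in which the changed item was NOT stored
    (the slot model's account of incorrect first responses)."""
    rng = np.random.default_rng(seed)
    second_hits = wrong_firsts = 0
    for _ in range(trials):
        changed = rng.integers(n)   # which element actually changed
        first = rng.integers(n)     # blind first guess among all n
        if first == changed:
            continue                # first guess correct -> not analyzed
        wrong_firsts += 1
        remaining = [i for i in range(n) if i != first]
        second = remaining[rng.integers(n - 1)]  # blind second guess
        second_hits += (second == changed)
    return second_hits / wrong_firsts

for n in (3, 4, 6, 8):  # the set sizes used in the study
    est = simulate_slot_second_guess(n)
    print(n, round(est, 3), "chance =", round(1 / (n - 1), 3))
```

Second-response accuracy above this 1/(n-1) baseline, as the abstract reports, is therefore evidence against a pure slot account.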
14:30 | Decoding the Enigma: Benchmarking Humans and AIs on the Many Facets of Working Memory PRESENTER: Mengmi Zhang ABSTRACT. Working memory (WM), a fundamental cognitive process facilitating the temporary storage, integration, manipulation, and retrieval of information, plays a vital role in reasoning and decision-making tasks. Robust benchmark datasets that capture the multifaceted nature of WM are crucial for the effective development and evaluation of AI WM models. Here, we introduce a comprehensive Working Memory (WorM) benchmark dataset for this purpose. WorM comprises 10 tasks and a total of 1 million trials, assessing 4 functionalities, 3 domains, and 11 behavioral and neural characteristics of WM. We jointly trained and tested state-of-the-art recurrent neural networks and transformers on all these tasks. We also include human behavioral benchmarks as an upper bound for comparison. Our results suggest that AI models replicate some characteristics of WM in the brain, most notably primacy and recency effects, as well as neural clusters and correlates specialized for different domains and functionalities of WM. Our experiments also reveal limitations in the ability of existing models to approximate human behavior. This dataset serves as a valuable resource for the cognitive psychology, neuroscience, and AI communities, offering a standardized framework to compare and enhance WM models, investigate WM's neural underpinnings, and develop WM models with human-like capabilities. Our source code and data are available at: https://github.com/ZhangLab-DeepNeuroCogLab/WorM |
14:45 | Visual Working Memory Load Impairs Detection Sensitivity: A Re-entry Load Account PRESENTER: Chi Zhang ABSTRACT. Recent studies have shown that an increased load on visual working memory (VWM) impairs visual detection, indicating that VWM load can influence visual perception. In this study, we investigated whether it is the VWM load itself (the VWM load account) or the volume of re-entry signals generated by VWM representations (the re-entry load account) that impacts visual detection. The re-entry load account posits that the volume of feedback (re-entry) signals, rather than the VWM load, modulates detection sensitivity. To explore this question, participants performed a visual search task while simultaneously detecting a peripheral meaningless shape during the maintenance phase of a VWM task, in which they held four discs in VWM. A critical aspect of the study was that, in half of the trials, these four discs could form two subjective contours, a configuration in which re-entry signals are particularly strong. The VWM load was expected to be significantly reduced in the with-contour condition compared to the without-contour condition, whereas the re-entry load was expected to be significantly higher in the with-contour condition. The VWM load account therefore predicts that detection of the peripheral shape would be worse in the without-contour condition than in the with-contour condition, whereas the re-entry load account predicts the opposite. Across three experiments, we consistently found evidence supporting the re-entry load account. This suggests that VWM's influence on visual perception is mediated through re-entry signals, rather than being a direct result of the representational load alone. |
15:15 | Neural mechanisms of feature binding in working memory PRESENTER: Yang Cao ABSTRACT. Working memory (WM) is acknowledged as a system capable of manipulating stored information for upcoming goals, albeit with a limited capacity. Binding various features into a unitary entity in WM is crucial for enhancing its capacity to effectively support ongoing cognitive tasks. However, the neural mechanisms governing binding in WM remain unsettled. To gain a comprehensive understanding of the neural mechanism underlying feature binding in WM, we employed a change detection task with color-location conjunctions as stimuli, together with functional magnetic resonance imaging (fMRI). Participants were asked to memorize two types of information: the binding of color and location (the binding condition), or either the color or the location information alone (the either-memory condition). The neural activity corresponding to each condition was modeled through graph-based network analysis, enabling us to construct functional brain networks and conduct a comprehensive whole-brain analysis of the neural activity involved in feature binding. The results identified a collaborative network operating through a central workspace encompassing the somatomotor area (SMA), insula, and prefrontal cortex (PFC), underpinning the effective processing of bindings. Within these regions, we observed increased local efficiency and stronger connections during binding. Notably, connections within this workspace significantly correlated with condition classification (binding vs. memorizing features separately) and with behavioral performance. Among these regions, the SMA, characterized by a shorter intrinsic timescale, responded more rapidly to visual input, carried rich temporal information with more connections, and potentially served as the starting point of the binding process. 
These results highlight a dedicated workspace with robust internal connections, facilitating successful binding through collaborative regional interactions. |
15:30 | Neural Substrates of Working Memory Maintenance PRESENTER: Sirui Chen ABSTRACT. Working memory (WM) serves as a crucial yet limited memory buffer, temporarily storing and flexibly manipulating information in real time. One hypothesis suggests that items are retained in memory through sustained rhythmic activities, particularly alpha (8-12 Hz) and theta (4-7 Hz) oscillations. However, using multivariate methods, researchers have successfully tracked memory content from the topographic distribution of alpha, but not theta, activity, leaving the role of theta oscillations in memory maintenance an open question. To investigate this, we measured oscillatory brain activity while participants completed a classic WM task that involved memorizing the location of a target shape. They additionally completed a search task, which served to establish the period of encoding/localizing the target. This setup allowed us to determine when the maintenance period began in the WM task, as it shared the same encoding/localizing period with the search task. We employed an inverted encoding model (IEM) to track the neuronal selectivity corresponding to the attended/memorized location, and examined phase distributions to explore the synchronization of oscillations during memory maintenance. Considering recent evidence linking saccadic eye movements to WM, we also incorporated horizontal electro-oculogram (EOG) data into our analysis to gain a more comprehensive understanding of the mechanisms involved in WM maintenance. The results showed that IEM decoding based on alpha oscillations tracks memory maintenance and is correlated with theta phase, implying that theta oscillations may serve to control the information maintained in WM. We also found sustained horizontal eye movements during memory maintenance, independent of the neural oscillations. 
Notably, these identified neural correlates of WM maintenance differentiated between high and low performance, emphasizing their roles in successful memory maintenance. Overall, we provide evidence for the distinct functions of alpha and theta within WM, as well as for the participation of the oculomotor system in WM maintenance. |
15:45 | Linking behavioral and neural estimates of trial-by-trial working memory information content PRESENTER: Ying Zhou ABSTRACT. How is working memory (WM) information represented in the brain? Neural and computational models have used data aggregated over hundreds of trials to argue for different perspectives on how population neural activity encodes individual memories. The two main perspectives are information-rich representations, as in probabilistic coding models (a probability distribution over the whole feature space), and information-sparse representations, as in high-threshold models (a precise feature value) or drift models (a value with a confidence interval unrelated to the direction of drift). The use of aggregate data represents a key inferential bottleneck that critically limits the ability to adjudicate between different formats of individual memory coding in WM. This study used a method that links behavioral and neural estimates of WM representations on individual trials. We asked participants (n = 12) to memorize a motion direction over a brief delay. After the delay, instead of making a single report about the memorized direction, they indicated their memory by placing 6 “bets”, resulting in a distribution over the 360° direction space that reflected their probabilistic memory representation on individual trials. Additionally, we used a Bayesian decoder to estimate the posterior of the memorized direction given the fMRI signal during memory maintenance on individual trials. Comparing the shapes of the behavioral and neural estimates on individual trials, we found significant correspondences in their mean and width and, critically, a significant correspondence in their asymmetry. The correspondences were found across the visual hierarchy, with meaningful WM representations in occipital, parietal, and frontal regions. 
These results indicate that (1) individual WM representations are complex probability distributions containing more information than can be deduced from aggregate data; and (2) neural WM representations carry rich and complex information, including meaningful asymmetry that influences behavior. |
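The trial-wise comparison of mean, width, and asymmetry described above can be illustrated with basic circular statistics. This is a hypothetical sketch: the bet values and the particular skewness statistic are illustrative choices, not the study's analysis pipeline.

```python
import numpy as np

def circular_moments(angles_deg):
    """Mean, width (circular SD), and a skewness-like asymmetry index
    for a small set of direction reports, e.g. six 'bets' on one trial."""
    a = np.deg2rad(np.asarray(angles_deg, dtype=float))
    z = np.exp(1j * a)
    mean = np.angle(z.mean())            # circular mean direction
    R = np.abs(z.mean())                 # resultant length (concentration)
    width = np.sqrt(-2 * np.log(R))      # circular standard deviation (rad)
    # one common circular-skewness statistic: mean sine of doubled deviations
    dev = np.angle(np.exp(1j * (a - mean)))
    skew = np.mean(np.sin(2 * dev))
    return np.rad2deg(mean), width, skew

# six hypothetical bets on one trial, clustered near 90° with a rightward tail
print(circular_moments([80, 85, 88, 92, 100, 120]))
```

Computing the same three summaries from the decoded fMRI posterior on the same trial would then allow the trial-by-trial behavioral-neural correspondence the abstract describes.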
16:00 | Unveiling the neural dynamic of the interaction between working memory and long-term memory PRESENTER: Zhehao Huang ABSTRACT. Recent research has challenged the conventional understanding of fixed working memory (WM) capacity by demonstrating an increase in WM capacity for real-world objects with prolonged encoding time. This suggests a potential interaction between WM and long-term memory (LTM), whereby LTM aids WM storage, extending its capacity temporally. However, direct evidence supporting this interaction is still lacking. To explore this phenomenon, we conducted a study measuring WM capacity at different encoding times while recording intracranial electroencephalogram (iEEG) signals. Our behavioral results showed that prolonged encoding time correlated positively with enhanced performance. Expanding upon these observations, our iEEG results unveiled nuanced changes in neural oscillatory patterns. Specifically, we observed a concomitant increase in the duration of excitatory high-frequency (60-140 Hz) and inhibitory low-frequency (8-30 Hz) signals as encoding time lengthened. Further dissecting the neural dynamics during the encoding period, we discovered intriguing patterns of temporal representation synchronization and phase-frequency coupling. Correct trials exhibited heightened temporal representation synchronization and stronger phase-frequency coupling compared to incorrect trials, and these differences grew with prolonged encoding durations. We calculated the Granger causality between low- and high-frequency signals, revealing that the high-frequency signal exhibited predictive capability over the low-frequency signal. Notably, we further focused on the hippocampus (a brain region associated with the LTM system) and observed that only contacts in this region showed activity specifically linked to behavioral performance. 
Overall, our findings indicate that prolonged encoding time induces systematic neural activity linked to high-frequency signals, primarily occurring in the hippocampus, which subsequently enhances WM capacity. |
16:15 | Different states of hippocampus during the formation of new memories PRESENTER: Yuanyuan Zhang ABSTRACT. Memory plays an important role in supporting various cognitive processes. Although the hippocampus (HPC) has long been deemed a core brain region for the formation of new memories, direct neuronal evidence supporting its involvement in memory formation is still lacking. To investigate how the HPC works during memory formation, the present study analyzed intracranial electroencephalogram (iEEG) recordings obtained from eighteen neurosurgical patients engaged in a simple working memory task, in which participants had to memorize two fixed orientations (e.g., 45° and 135°) across all trials, involving the repetition of memorizing these two orientations over time (i.e., memory rehearsal). Behaviourally, rehearsal of the two fixed orientations significantly improved memory performance. With a multivariate approach, we successfully decoded orientation memory from theta (4-8 Hz) power in the HPC, middle temporal gyrus (MTG), and prefrontal cortex (PFC), but not in the inferior parietal cortex (IP). In the first section (first 90 trials), orientation memory was represented in the neocortex (MTG and PFC) before being detected in the HPC; this pattern was reversed in the second section (last 90 trials), where orientation was initially detected in the HPC before being represented in the neocortex. This suggests that the HPC encoded information from the neocortex before forming a long-term store, after which it transitioned to aiding memory retrieval. Moreover, we detected more ripple activity when the HPC became involved in memory retrieval in the second section, and hippocampal gamma coupled to ongoing theta in the first section, suggesting a stable link between gamma and theta oscillations during the formation of new memories. 
Altogether, these findings provide compelling neuronal evidence for the involvement of the HPC in memory formation and shed light on how it operates. |
15:30 | The Significance of Complexity in the Appreciation of Abstract Artworks and Music PRESENTER: Rongrong Chen ABSTRACT. The theory of Taste Typicality suggests that individuals’ typical aesthetic tastes exhibit a consistent pattern across different modalities, serving as a crucial factor in understanding the diverse aesthetic experiences among the general population (Chen et al., 2022). Building upon prior research on the role of visual complexity in shaping individuals' visual preferences, here we aim to further investigate the impact of complexity in shaping collective aesthetic preferences across both visual and auditory domains. To evaluate visual aesthetic appreciation, we instructed 28 undergraduate students (16 males and 12 females) to instinctively select their preferred painting from a pair of Ely Raman's abstract artworks presented simultaneously for a brief duration of 500 ms. A higher selection rate for paintings with higher image complexity would suggest a preference for complexity in visual aesthetics. To evaluate auditory aesthetic appreciation, participants were exposed to Western tonal music for a mere five seconds before engaging in a simple go/no-go choice reaction task. Longer delays in reaction time on the go/no-go task were interpreted as a signal of heightened engagement with the preceding music. Our findings revealed a notable inclination towards complexity, as evidenced by a selection rate significantly above chance, particularly for paintings with higher entropy in their image statistics (one-sample t-test: t = 2.49, p = 0.019). Participants also displayed a preference for more intricate musical compositions, as evidenced by a significant delay in reaction time on the go/no-go task compared to simpler music (p < 0.001). Moreover, individuals who preferred images with a greater range of color and brightness variations also tended to prefer complex music (r = 0.39, p = 0.039). 
These findings hold promising implications for the enhancement of various applications by integrating personalized color and music choices that align with users' preferences for complexity. |
15:45 | Chthulucene visions: the contemporary obsession with seeing, and the denial of the tangible ABSTRACT. This paper investigates the notion of vision in the age of the Chthulucene, the era that begins with our awareness of environmental crisis, following Donna Haraway’s teachings (Haraway, 2016). I inquire into the contemporary reliance on the sense of sight, fostered by the massive adoption of digital devices, and observe how the imposed dependency on digital devices and online connection for socio-economic purposes has drastically rerouted the meaning of sensorial experience (Flusser, 2000). I argue that the apparent freedom in the use of these digital gadgets is a baleful trap, with potential consequences including a reduced scope and nuance of the lived sensorium (Classen, 2012); a reduced sense of emplacement (Howes, 2005); the ties of the current hyper-digital obsession to accelerated reality (Stiegler, 2014); bio-technologies of bodily control and monitoring (Haraway, 1990); and the obsession with communication technology and systems for the regulation of human behaviour (Foucault, 1995). By recording first-hand experimental investigations into the perception of various species of time (mechanical, integral, etc.), of space (in terms of human perception), and of the human body (Merleau-Ponty, 2005), this experiment relies on feedback from the author’s own body to bring the processing of sense-phenomena into focus. One example is an experimental design the researcher calls soundography, in which space is probed and mapped through sound (Pellegrini, 2022). This examination documents an attempt to regain what the body may have lost in the massive adoption of today’s technical and digital prosthetics. The researcher’s experience over the course of the investigation suggests this methodology could be extended across a broad range of sensory input and applied pedagogically or therapeutically in other fields. 
Is there an alternative pathway for digital technology and devices to become a positive addition to our freedom of choice, rather than a grid of pre-established options (Zielinski, 2006)? |
16:00 | A Study on the Relationship between Poster Image Design Techniques and Topics PRESENTER: Yung-En Chou ABSTRACT. In Taiwan's design education, many departments encourage students to participate in international design competitions to accumulate design experience, and poster works are considered one of the highest forms of artistic design. The way poster images are used affects people's understanding of the posters. Because the topic of the Taiwan International Student Design Competition (TISDC) varies each year, this study explores the relationship between images and topics. Using content analysis, this study examines the image designs of the gold-medal poster works in the visual design category of the TISDC from 2008 to 2023, categorizing them according to Peirce's semiotic typology of signs (icon, index, symbol), which can be regarded as applications of perception and attention and as concrete expressions of people's thoughts and actions. The results show that symbol is the most frequently used sign type, at 59%, while icon accounts for 27%. Additionally, we find that images are clearly related to topics. Through analysis, we categorized the poster topics into four categories: "Innovation," "Social," "Cultural," and "Environmental." In the "Innovation" topic, all three sign types (icon, index, and symbol) were applied. In the "Cultural" topic, icon and symbol were employed. For both the "Environmental" and "Social" topics, only symbol was utilized. Therefore, it is recommended that designers consider the topic when designing imagery. |
16:15 | Modeling color preference based on the distance from the memory colors in a color space PRESENTER: Songyang Liao ABSTRACT. The “mere exposure effect” describes how unreinforced exposure increases positive affect toward a novel stimulus. It has been shown to account for preferences for a wide range of stimuli, yet the possibility that it affects color preference has rarely been examined. We hypothesized a relationship between color preference and the memory colors of frequently encountered objects: colors closer (more similar) to the memory colors in a perceptually uniform color space would be preferred, and vice versa. To test this hypothesis, we first estimated the abstract memory colors of frequently encountered fruits and vegetables and then examined the relationship between color preference and memory color. Our findings partially supported the hypothesis, revealing that mere exposure induced a preference for red, yellow, and green, whereas no such effect was observed for purple and blue. Subsequently, a multiple regression model was developed based on this memory-preference relationship, using the color's location in CIELAB color space and its distance to the memory colors of the fruits and vegetables, together with a constant k. This model explained 64% of the variance in our data, and all variables made a significant contribution. We propose that the differential preferences observed for red, yellow, and green compared to blue and purple could be attributed to ecological adaptations over the course of primate evolution. Colors signaling ripeness or nutritional richness, or those made familiar by prior exposure, are more likely to attract individuals because of the associated benefits. Conversely, blue and purple, which primates did not rely upon as food cues, reflect distinct color preference strategies. |
Talk delivered via Zoom.
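A stripped-down version of the distance-based model in the last abstract can be sketched as follows. Everything here is fabricated for illustration: the memory colors, the toy preference data, and the distance-only design matrix are assumptions (the study's full model also included the color's CIELAB coordinates as predictors).

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical memory colors of familiar fruits/vegetables in CIELAB (L*, a*, b*)
memory_colors = np.array([
    [53.0,  80.0,  67.0],   # a tomato-like red
    [87.0,  -9.0,  83.0],   # a banana-like yellow
    [46.0, -52.0,  50.0],   # a leaf-like green
])

def nearest_memory_distance(lab):
    """Euclidean ΔE*ab from a color to its closest memory color;
    CIELAB is roughly perceptually uniform, so ΔE ≈ dissimilarity."""
    return np.min(np.linalg.norm(memory_colors - lab, axis=1))

# toy dataset: preference decreases with distance, plus rating noise
colors = rng.uniform([20, -60, -60], [90, 60, 60], size=(100, 3))
d = np.array([nearest_memory_distance(c) for c in colors])
pref = 70 - 0.3 * d + rng.normal(0, 3, size=d.size)

# fit preference = k + w * distance by ordinary least squares
X = np.column_stack([np.ones_like(d), d])
(k, w), *_ = np.linalg.lstsq(X, pref, rcond=None)
print(round(k, 1), round(w, 2))  # w negative: farther from memory colors -> less preferred
```

The fitted slope recovers the built-in negative distance-preference relationship, which is the qualitative pattern the abstract reports for red, yellow, and green.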