Understanding what actions and traits facilitate social media campaigning on transgender issues through perceptions of the BBC and its media coverage
ABSTRACT. Social media is a powerful tool for both support of and attacks on the transgender community. It offers a place to express identity but also facilitates the spread of divisive news stories. How traditional news outlets like the BBC report on trans issues can ignite divisive online discourse about gender recognition policy and transgender rights.
Smaller studies have hinted at how these debates take shape – news sharing, emotional reactions, and group dynamics all play a part. But how do campaigns gain traction on social media, and what fuels these opinion clashes online?
This research examines transgender community subreddits: what members think about the BBC, how they react to its transgender coverage, and how they communicate.
Using Reddit as a data source, this research employs a qualitative analysis of posts and comments on relevant subreddits. Theoretical frameworks include the Online Disinhibition Effect, Uses and Gratifications Theory, Social Identity Theory, the MAD Model of Moral Contagion, and Moral Outrage.
This study aims to go beyond what we already know about how trans people experience news and social media. It could help researchers, traditional media companies, social media platforms, and the trans community itself understand the powerful forces that make these online spaces so influential.
This paper is currently in the data-analysis stage and will be completed by 30 April.
A Mixed Methods Investigation of Social Media Use and Perceptions of Online Toxicity Among LGBTQ+ Young Adults
ABSTRACT. Individuals who identify as LGBTQ+ experience disproportionately high levels of cyberbullying and online toxicity compared to individuals who do not identify as a gender or sexual minority. The heightened prevalence of adverse mental health outcomes, including suicidality, within LGBTQ+ communities underscores the importance of identifying and mitigating online toxicity targeting these populations. Such research is timely, critical, and potentially lifesaving.
The aim of the present study was to gain insight into platform-based differences and more nuanced aspects of online toxicity targeting LGBTQ+ social media users. We employed a mixed methods approach to develop an online survey that included quantitative data as well as open-ended qualitative responses. Young adults (N = 400; age: M = 22.42, SD = 1.97) who self-identified as LGBTQ+ completed an online survey through Prolific about their social media use, perceptions of general and LGBTQ+-specific toxicity on social media, and beliefs about mechanisms for reducing such toxicity. Quantitative results indicated that YouTube was the platform used most regularly within the sample (86.3% of respondents), followed by Instagram (73.5% of respondents), TikTok (66.3% of respondents), Twitter (64.3% of respondents), and Reddit (61.8% of respondents). Among regular users of the respective platforms, Twitter, TikTok, Reddit, and Facebook were perceived as the highest in general toxicity; Facebook, Twitter, and Reddit were rated the highest in toxicity directly targeting LGBTQ+ individuals. Proportionally, however, Instagram, Twitter, YouTube, and TikTok were perceived as containing more LGBTQ+-specific toxicity than general toxicity. The exclusion of transgender individuals, deliberate misuse of preferred pronouns, and homophobia based on religious beliefs were the most common forms of LGBTQ+-specific toxicity reported.
A total of n = 160 participants provided open-ended responses for qualitative analysis. Five coders reviewed a subset of responses to develop an initial list of themes for each open-ended question. Each response was then independently coded by two research team members, with discrepancies resolved by a third team member. In line with the quantitative results, the targeting of transgender individuals was a predominant theme in the qualitative analyses. Common themes with respect to how toxicity targeting LGBTQ+ users is typically addressed within LGBTQ+ spaces included platform/moderator responses, negative responses such as “attacking” and “doxing” toxic users, and positive responses attempting to educate toxic users on LGBTQ+ issues. The most frequently reported responses to LGBTQ-specific toxicity outside of LGBTQ+ spaces were a complete lack of response or insufficient level of response by a platform and negative responses such as “attacking” and “doxing” toxic users. Overall, these findings not only shed light on the unique social media experiences of LGBTQ+ individuals, but also illuminate potential recommendations for improving the online experiences and mental health outcomes of these marginalized individuals.
Facilitating Invariance on a measure of FoMO between Blacks and Whites in the United States: Validity and Reliability of the FoMOs-5
ABSTRACT. The purpose of this study was to validate the existing 10-item Fear of Missing Out scale (FoMOs-10) for use specifically with African American individuals in the United States. This self-report questionnaire has been used in several research studies, albeit with varying degrees of reliability between diverse groups. For example, while the initial validation study reports using international samples of respondents, evidence is lacking regarding the applicability of the results to non-majority cultural groups. Research on the FoMOs-10 also exists with diverse international groups, as evidenced in studies where the measure has been translated and adapted into other languages. Of note, the FoMOs-10 authors conceptualized the fear of missing out (FoMO) phenomenon as a unidimensional construct, yet most of the other studies utilizing the FoMOs-10 report various multidimensional models of FoMO for their samples. Within the United States in particular, this suggests that there could be divergence between how Black and White people respond to the questionnaire’s items, especially given known differences in how anxiety and social motivation manifest for non-majority individuals. Accordingly, it is difficult to generalize FoMOs-10 results to the African American population with confidence, and this limits the research and clinical utility of the measure.
In this presentation, we will highlight a two-year process of collecting data over four separate rounds, between 2021 and 2023, in an effort to test and ultimately refine the FoMOs-10 to ensure psychometric equivalence and determine the appropriateness of the measure for use with Black or White individuals in the United States. Each round of data collection received institutional review board approval and was conducted completely online using the Qualtrics survey platform. After providing informed consent, respondents completed a demographic questionnaire and, at minimum, the FoMOs-10. Data from Round 1 (N=946) were part of a larger data collection effort examining FoMO and potential correlates such as the need to belong, problematic phone and social media use, and clinical indicators of problematic mental health such as anxiety and depression. Analyses of these data from just the Black and White respondents demonstrated a latent factor structure which differed from that of the initial validation of the FoMOs-10. Principal components and parallel analyses both suggested that FoMO, as assessed via the FoMOs-10, comprised two components which collectively accounted for just over 60% of the variance observed. The second round of data collection (N=184) included a confirmatory factor analysis based in part on the results from Round 1, with results suggesting either a two- or three-factor solution, although goodness-of-fit indices were in stronger support of the two-factor model. Round 3 (N=521) pulled in all of the items from the original pool tested by the authors of the FoMOs-10 during their item-winnowing phase and included alternative wording for five items that performed poorly in previous rounds of analyses, with results again suggesting a two-factor model and goodness-of-fit indices strongest for a shortened five-item version of the FoMOs. Factor 1 consists of only two items that collectively seem to tap into an external social comparison aspect of FoMO, with the second factor consisting of three items representing an internalized aspect of social comparison. The fourth round of data collection (N=387) was used solely for invariance testing: whereas variance between Black and White individuals was found on the FoMOs-10, invariance was observed on the resulting FoMOs-5.
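For readers unfamiliar with the Round 1 dimensionality analyses, the sketch below illustrates Horn's parallel analysis in Python with numpy; the simulated responses, the 95th-percentile retention rule, and all variable names are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def parallel_analysis(data, n_iter=1000, percentile=95, seed=0):
    """Compare eigenvalues of the observed correlation matrix against
    eigenvalues from random data of the same shape (Horn's parallel analysis)."""
    rng = np.random.default_rng(seed)
    n_obs, n_items = data.shape
    obs_eig = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand_eig = np.empty((n_iter, n_items))
    for i in range(n_iter):
        rand = rng.standard_normal((n_obs, n_items))
        rand_eig[i] = np.linalg.eigvalsh(np.corrcoef(rand, rowvar=False))[::-1]
    threshold = np.percentile(rand_eig, percentile, axis=0)
    n_components = int(np.sum(obs_eig > threshold))
    return obs_eig, threshold, n_components

# Hypothetical example: 946 respondents x 10 Likert-type items (1-5),
# matching the Round 1 sample size and the FoMOs-10 item count.
responses = np.random.default_rng(1).integers(1, 6, size=(946, 10)).astype(float)
eigs, cutoffs, k = parallel_analysis(responses)
print(f"Retain {k} component(s); observed eigenvalues: {np.round(eigs, 2)}")
```

Components are retained only where the observed eigenvalue exceeds what random data of the same dimensions would produce, which guards against over-extraction relative to the Kaiser rule.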
Results collectively speak to the validity of the FoMOs-5 for use with Black and White individuals in the United States. The measure can be reliably used in research or clinically oriented studies where the FoMO phenomenon is of interest, either as an independent or a dependent variable. The five-item version of the measure is psychometrically sound while maintaining the core items initially developed by the FoMOs-10 authors. When using the FoMOs-5 in future studies, researchers can be confident that any differences in responding observed between diverse groups of people are not due to measurement error but rather reflect clinically significant differences in the outcome variable of focus.
Exposing Online Islamophobia: Insights from Corpus Linguistic and Thematic Analyses
ABSTRACT. In an increasingly interconnected digital world, the emergence of numerous online platforms has created abundant opportunities for communication, collaboration and knowledge sharing. Yet this, in turn, has ushered in a new era of challenges marked by the pervasive presence of online harms, demanding innovative methods to elucidate the ever-evolving harms taking place online. This research was conducted as part of my ongoing PhD work and delves into the existence of Islamophobia on Twitter, employing a comprehensive approach that combines corpus linguistic analysis and thematic analysis.
A total of 102,290 tweets were examined to illuminate the dynamics of online Islamophobic discourse. The corpus linguistic analysis identified significant linguistic patterns associated with Islamophobic content on Twitter. This quantitative approach generated 16 multi-word units such as banislam, banmosques and againstislam. The corpus linguistic analysis also addressed anonymity, membership length, posting frequency and follower count, and their association with Islamophobic language on Twitter. Furthermore, a thematic analysis was conducted on a subset of 600 tweets to dig deeper into the underlying themes and behaviours characterising Islamophobia online. This qualitative exploration allowed for a nuanced understanding of the diverse manifestations of Islamophobia on Twitter. Five themes were generated: Islam is not a religion, western supremacy, the peaceful majority is irrelevant if silent, differential credibility, and propaganda of hate; evidence of pro-social behaviour was also explored.
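As an illustration of how multi-word units can be surfaced from a tweet corpus, here is a minimal corpus-linguistic sketch using NLTK's collocation tools; the toy tweets, the tokenisation rule, and the frequency threshold are assumptions for demonstration and do not reproduce the study's actual method or settings.

```python
import re
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Hypothetical stand-in corpus; the real study examined 102,290 tweets.
tweets = [
    "they want to #banislam and close every mosque",
    "signed the petition to banmosques in my town",
    "this account posts againstislam content daily",
]

# Simple lowercase tokenisation; hashtags and mentions stay as single tokens.
tokens = [t for tweet in tweets for t in re.findall(r"[#@]?\w+", tweet.lower())]

# Rank two-word units by log-likelihood ratio, a common corpus-linguistic
# association measure; the cut-off of 16 mirrors the number of multi-word
# units reported, not a principled threshold.
finder = BigramCollocationFinder.from_words(tokens)
finder.apply_freq_filter(1)  # raise this threshold on a real corpus
for pair in finder.nbest(BigramAssocMeasures.likelihood_ratio, 16):
    print(" ".join(pair))
```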
Discussions involved drawing parallels between the current research and part of my published work, which investigated Islamophobia on Twitter and YouTube during the COVID-19 pandemic. These comparisons revealed the evolution and persistence of themes associated with Islamophobic content across different online platforms. This research also touched upon the broader applicability of corpus linguistic methods in assessing other forms of online harms. It concluded by introducing potential recommendations, aiming to inform policy and suggest intervention strategies, to foster a safer and more inclusive online environment.
Pursuing Perfection or Striving Towards “Superwoman”? – Investigating the Roles of Social Media and Individual Differences in Eating Disorders
ABSTRACT. The popularity of social media-based lifestyle trends targeting young women has been on the rise in recent years. While such trends have had a pronounced influence on young women’s self-perceptions and dieting behaviors, one particularly salient example of this online archetype, the ‘That’ Girl social media trend, remains empirically uninvestigated. ‘That’ Girl can be defined as a social media-based lifestyle trend or wellness archetype that promotes (a) wellness, (b) productivity, (c) beauty, and (d) mindfulness. As of October 2023, the #thatgirl hashtag had garnered over 16.5 billion views on the TikTok social media platform, exemplifying the significant cultural reach of this online phenomenon. Despite the lack of prior research, the ‘That’ Girl social media trend demonstrates conceptual overlap with the high self-expectations underlying trait perfectionism, as well as several key components of an empirically-established concept known as the “Superwoman” Ideal (e.g., an emphasis on physical attractiveness, successful achievement across multiple social roles). Both trait perfectionism and the “Superwoman” Ideal have been linked to increased eating disorder symptomatology, predominantly among young women. Additionally, social media use, broadly, as well as social media trends, specifically, have been associated with the development of eating disorders in women, further underscoring the importance of examining interactions among social media usage, trait perfectionism, and the “Superwoman” Ideal in predicting maladaptive eating behaviors and mental health outcomes for women.
The current (pre-registered) study examined the relationships between endorsement of the “Superwoman” Ideal, disordered eating, and body dissatisfaction, with extent of social media use as a proposed moderator. Generalized perfectionism was also assessed as a theoretically related construct and an additional predictor of both disordered eating and body dissatisfaction. Measures of age, BMI, and use of specific social media platforms (e.g., image-based, video-based, text-based) were collected for exploratory purposes. These variables were assessed using a 12-minute online self-report survey of U.S. women ages 18-25 (N = 407) administered via Prolific. Participants described their eating behaviors using the 12-item short form of the Eating Disorder Examination Questionnaire (EDEQ), body-related feelings and cognitions using the Body Dissatisfaction subscale of the Eating Disorder Inventory (EDI), endorsement of the "Superwoman" Ideal using the 27-item "Superwoman" Scale (SWS), extent of social media usage using the 9-item General Social Media Usage subscale of the Media and Technology Usage and Attitudes Scale (MTUAS; modified), and perfectionistic cognitions using the 25-item Perfectionism Cognitions Inventory (PCI).
A series of moderation analyses revealed main effects of both “Superwoman” Ideal endorsement and social media use on disordered eating, whereas only social media use emerged as a significant predictor of body dissatisfaction. Specifically, stronger endorsement of the “Superwoman” Ideal and greater social media use were associated with higher levels of disordered eating symptomatology, and greater social media use was linked with increased body dissatisfaction. There was, however, no evidence that degree of social media use moderated the relation between endorsement of the “Superwoman” Ideal and either outcome. Exploratory analyses identified greater LinkedIn usage as a negative predictor and greater Twitter usage as a positive predictor of disordered eating and body dissatisfaction. For endorsement of the “Superwoman” Ideal, however, greater YouTube usage emerged as a negative predictor while greater Facebook and Pinterest usage emerged as positive predictors of enhanced “Superwoman” endorsement. This research contributes to a growing body of literature emphasizing the importance of considering social-identity and individual differences, as well as more nuanced aspects of social media use (i.e., particular platforms, content, behaviors, and use motives), when investigating the potential for negative mental health outcomes from social media engagement. These findings will ultimately aid in the future development of timely and more narrowly-focused social interventions for mitigating eating disorder pathology in women.
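The moderation analyses reported above follow a standard regression-with-interaction design; the sketch below shows one common way such an analysis is run in Python with statsmodels, using simulated data and hypothetical variable names (edeq, sws, mtuas) rather than the study's actual dataset or coefficients.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame; columns are illustrative stand-ins for the
# EDEQ outcome, "Superwoman" Scale, and general social media usage.
rng = np.random.default_rng(0)
n = 407
df = pd.DataFrame({
    "sws": rng.normal(size=n),    # "Superwoman" Ideal endorsement
    "mtuas": rng.normal(size=n),  # extent of social media use
})
df["edeq"] = 0.3 * df["sws"] + 0.2 * df["mtuas"] + rng.normal(size=n)

# Mean-center predictors so main effects are interpretable at average levels.
df["sws_c"] = df["sws"] - df["sws"].mean()
df["mtuas_c"] = df["mtuas"] - df["mtuas"].mean()

# The '*' operator expands to both main effects plus their interaction;
# a significant sws_c:mtuas_c coefficient would indicate moderation.
model = smf.ols("edeq ~ sws_c * mtuas_c", data=df).fit()
print(model.summary())
```

In this framing, the abstract's finding corresponds to significant main effects for both centered predictors with a non-significant interaction term.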
Harmful online content and cross-cultural challenges faced by adolescents: Findings from focus groups conducted in Brazil and Australia
ABSTRACT. Background: The internet has profoundly changed social dynamics and interpersonal relationships, and has greatly impacted the developmental processes (social and psychological) of children and adolescents. Researchers have been concerned about the influence that excessive internet use can have on the development of adolescents, especially when its use is accompanied by harmful behaviors and poor supervision by parents or other adults. Unregulated content such as sexual content, inappropriate health advice, strategies for committing suicide or hurting oneself, and content related to drug use all pose clear risks to the healthy psychosocial development and physical safety of adolescents.
While the literature has pointed out the importance of supervision when adolescents are online, there is a scarcity of evidence-based programs designed to help parents and professionals provide structured support that prepares adolescents psychosocially for the content they may encounter. There is also a lack of recent cross-cultural studies into the concerns adolescents and adults have about adolescents' use of the internet and social media, particularly for newer social media platforms such as TikTok and OnlyFans.
As part of an ongoing research project involving researchers from Brazil and Australia, this work-in-progress study will present the findings of soon-to-be-conducted focus groups. The focus groups will identify the self-reported contemporary concerns and experiences of adolescents in Brazil and Australia regarding harmful online content, as well as the perceived concerns of parents, educators and health service professionals, and the effectiveness of any supervision strategies they currently employ.
Method: Participants will be adolescents recruited from Brazil and Australia (aged 12-18; n=50), and parents, educators and health service professionals from Brazil and Australia (n=50). Separate focus groups for adolescents and adults (8-12 participants in each) will be conducted by an experienced researcher with skills in qualitative studies, with assistant researchers to aid in transcriptions and record/observe participant reactions during the process. All focus groups will include structured and semi-structured questions covering the key online harm themes identified in the literature, specific experiences of harmful content encountered online, and strategies adults have put in place to supervise or guide adolescent internet use. All interactions will be audio-recorded and transcribed, and data from Brazil will be translated into English to enable a collaborative process for conducting thematic analyses to identify recurring patterns, themes, and insights related to the research objectives, guided by the principles of grounded theory. Two independent coders will code the data, and discrepancies will be resolved through discussion to ensure reliability and validity of the findings.
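As one illustration of how the reliability of the double-coding procedure described above might be quantified (the abstract specifies discussion-based resolution rather than a particular statistic), the sketch below computes Cohen's kappa with scikit-learn on hypothetical labels.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical theme labels assigned by two independent coders to the
# same ten focus-group excerpts; labels are illustrative, not study data.
coder_a = ["risk", "risk", "supervision", "platform", "risk",
           "supervision", "platform", "risk", "supervision", "risk"]
coder_b = ["risk", "platform", "supervision", "platform", "risk",
           "supervision", "platform", "risk", "risk", "risk"]

# Cohen's kappa corrects raw agreement for chance; values above ~0.60
# are conventionally read as substantial agreement.
kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")
```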
Results: It is hypothesized that findings from the focus groups will be mostly consistent with previous literature, specifically regarding the types of risks and concerns for adolescents when online, but will provide novel insights into concerns around newer social media platforms, emerging safety concerns, and effective and ineffective supervision strategies. It is also hypothesized that there will be more similarities than differences between data collected in Brazil and Australia. Analysis of the data will inform the development of a youth and adult support model on how to engage psychosocially with problematic unregulated content. The development of this instructional material will include evidence-based psychoeducation and digital citizenship approaches that will inform parents/adults of effective ways to supervise and guide young people of differing ages online.
Overall findings may further provide insights to support parents and professionals in positive educational practices cross-culturally between English- and Portuguese-speaking populations, via the development of online video and community resources.
An Update on the State of Brain-Computer Interfaces
ABSTRACT. A Symposium Proposal for Cyberpsychology and Social Networking 2024
Chair and Discussant: Galen Buckwalter, PhD
If there is any challenger to artificial intelligence for the crown of most hyped yet feared technology, it is the brain-computer interface, or BCI. Proponents speak of the restoration of mobility and communication for disabilities such as spinal cord injury, stroke, and ALS, as well as enhancements to human performance, cognition and well-being. But few would deny the ethical, social, and security concerns that accompany any system that digitizes even crude metrics taken directly from human brains.
This symposium hopes to cut through the shroud of hype and fear around BCI with updates from scientists who are using different types of BCIs, while addressing the ethical issues posed and the lived experience of persons with advanced BCIs. This multidisciplinary gathering is poised to be a nexus of ideas, methodologies, and groundbreaking developments in neurotechnology.
Participants and topics
Justin Asbee, PhD (University of Arkansas) will discuss the Adaptive Neural Systems Neural-Enabled Prosthetic Hand (ANS-NEPH) system, which is designed for real-world use and wirelessly communicates sensor data from the hand to an implanted neurostimulator that activates specific nerve fibers, producing targeted sensations.
David Bjanes, PhD (Caltech) will discuss the neuroprosthetics program at Caltech which uses implanted Utah arrays to allow the brain to communicate directly with external devices, such as computers or prosthetics, bypassing traditional neuromuscular pathways. Sensory stimulation is also delivered.
Nigel Pederson, MD (UC Davis) will discuss Intracranial EEG (iEEG) monitoring and stimulation, advanced techniques used in neuroscience and clinical neurology to study brain activity and treat neurological disorders, respectively. These methods involve the direct placement of electrodes on or within the brain, providing a highly detailed and localized view of electrical activity that cannot be achieved with non-invasive methods like scalp EEG.
Scott Kerick, PhD (US Army Research Laboratory) will discuss promising findings from frontal theta neurofeedback training. This involves a process where participants receive real-time feedback on their frontal theta brainwave activity. The feedback system monitors the participant's brainwave activity and provides immediate feedback, usually in the form of visual or auditory cues, indicating the level of frontal theta power being generated.
Korosh Mahmoodi, PhD (Carnegie Mellon University) will present on complexity synchronization and EEG. Complexity synchronization refers to the coordination and interaction of complex systems or processes, such as brain activity measured through electroencephalography (EEG). Analyzing the synchronization of EEG signals at different frequencies and brain regions can help understand how information is processed and integrated in the brain.
Galen Buckwalter (psyML) will discuss his experience as a research psychologist/participant with a currently implanted BCI, covering the entirety of the human experience of undergoing the rigors and gratifications of the process.
Summary: Engaging a diverse array of stakeholders, from researchers and clinicians to engineers and policymakers, this symposium is purposefully designed to foster collaborative innovation in the neurotechnological sphere. More than showcasing individual technologies, we will explore the integration of these approaches into composite treatment and research strategies, tackling the multifaceted challenges and opportunities that lie ahead. By bringing together a community committed to leveraging neurotechnology for better patient outcomes, we aim to inspire collective action toward a future enriched by the promises of neuroscience and psychology.
Charge delivered via multi-channel intra-cortical micro-stimulation (ICMS) modulates intensity and reaction times in human primary sensory cortex in two participants
ABSTRACT. Intracortical brain-machine interfaces (BMIs) hold tremendous potential both as medical devices and for scientific discovery of cortical circuitry, cognitive functionality and the mechanisms of information representation. For patients who have suffered spinal cord injuries (SCI), neurodegenerative diseases, or loss of speech or sensation, BMIs may enable the restoration of lost functionality and improvements in quality of life and autonomy. For scientists and engineers, BMIs offer a unique opportunity to observe single-unit firing activity of neurons in locations throughout the cortex, such as motor (M1), posterior parietal (PPC), somatosensory (S1) and prefrontal (PFC) cortices. By combining precise temporal and spatial resolution signals with advanced signal processing techniques, BMI devices can decode an extraordinary amount of detailed information: motor planning and intent, high-level cognitive goals, speech and language, and dysregulated neural activity. Furthermore, bi-directional BMIs can write information into cortical networks through electrical stimulation, creating novel sensory and visual percepts and stabilizing dysregulated neural networks. Following the twin aims of restoration and scientific discovery, our work focuses on three main areas: motor control, sensory restoration and the neural representation of high-level cognitive functionality (such as goals and intention, speech and language).
In addition to the typical cortical targets for motor BMIs (such as primary motor cortex for low-level kinematic and trajectory information for commanding joint angles and limb position), high-dimensional cognitive signals are available from posterior parietal cortex (PPC). Even small populations of PPC neurons exhibit “partial mixed-selectivity”, encoding a rich, high-dimensional mix of variables including task goals, bilateral limb representation, speech and language, coordinate frames, grasp and object properties, and visual and somatosensory information. As BMI devices rapidly progress to control higher degrees of freedom, we have investigated these representations towards integrating more intuitive ways of restoring control of dexterous movements for SCI patients.
Bi-directional BMIs can create naturalistic somatosensory feedback, vital for restoring fluid, dexterous control. By modulating the activity of somatosensory (S1) populations in the brain via spatial or temporal patterns of intra-cortical micro-stimulation (ICMS), experiences of cutaneous and proprioceptive sensations can be evoked on the hand or arm. Utilizing novel multi-channel stimulation patterns, we have measured comparable reaction times between ICMS and intact sensation, identified optimal integration windows between visual stimuli and ICMS, and characterized ICMS intensity responses and somatotopic projected fields over eight years of implantation.
Speech brain-machine interfaces translate brain signals into words or audio outputs, enabling communication for people who’ve lost their speech abilities due to diseases or injury. While important advances in decoding vocalized, attempted and mimed speech have recently been made, our investigations of high-level cognitive functions in posterior parietal discovered single neuron activity modulated by internal speech, devoid of movement. We’ve demonstrated an online internal speech decoder, capable of reaching an average of 79% accuracy (chance level 12.5%) in our first participant.
Three human tetraplegic participants were implanted with NeuroPort microelectrode arrays (two participants with six, one with four) across a range of cortical targets in somatosensory, motor, pre-motor, pre-frontal, and posterior parietal cortices. Participants were able to control virtual end-effectors, such as a cursor, and achieve 3D control of a prosthetic hand. Sensory feedback via single- and multi-channel electrical stimulation patterns elicited naturalistic somatosensory percepts. This work builds on our investigations using BMIs to discover the building blocks of future therapeutic devices. These findings are significant advances toward the development of state-of-the-art sensorimotor and cognitive BMIs.
ABSTRACT. This presentation describes stereoelectroencephalography and is part of the Brain Computer Interface symposium. Stereoelectroencephalography (sEEG) is a method by which neurologists approach the invasive investigation of epilepsy in patients with focal epilepsy who do not adequately respond to anti-seizure medications. The method rests principally in the analysis of semiology – the array of a patient’s subjective experiences and objective behaviors that occur during a seizure. Combined with scalp EEG recordings, neuroimaging, and other methods, hypotheses are reached about which networks of the cerebral cortex are likely involved in the seizures, enabling a plan to be created for the implantation of depth electrodes through small burr holes in the skull into the cerebral cortex, and sometimes subcortical structures.
The intracerebral electrophysiology obtained from depth electrode placement has high spatial resolution, unlike the low spatial resolution of scalp EEG. After implantation, the patient is monitored in a hospital setting for several days to weeks. The sEEG captures data during both spontaneous seizures and when seizures are induced by direct electrical stimulation. This electrophysiological data is then analyzed to identify the likely origin region or network of seizure onset and early seizure propagation. This information is crucial for determining the feasibility and approach for surgical interventions by the neurosurgeon, such as resective surgery, radiofrequency or laser ablation, and neuromodulation.
This presentation will include work in the Epilepsy and Systems Neuroscience Laboratory at the UC Davis Department of Neurology. Work in the lab utilizes sEEG to study and treat epilepsy. Research focuses on the dynamics and interactions of large-scale brain networks, particularly in cognition, sleep, and epilepsy. Collaborating with colleagues, the Epilepsy and Systems Neuroscience Laboratory has integrated virtual reality (VR) technology with sEEG to study neural correlates of recognition memory and déjà vu. This innovative approach allows researchers to immerse patients in virtual environments while recording brain activity through sEEG.
This combination of sEEG and VR promises to provide more ecologically relevant stimuli for cognitive research and perhaps for some clinical settings, such as the elicitation of reflex seizures (that occur in response to a particular perceptual experience). By using VR to create controlled, immersive experiences, the Epilepsy and Systems Neuroscience Laboratory can better study the brain's response in more naturalistic settings, potentially leading to improved therapeutic strategies for epilepsy and other neurological disorders.
Peripheral Neurostimulation to Elicit Percepts at the Hand: Technology to Investigate Tactile Sensations, Their Interpretation, and Their Impact on Sensorimotor Function
ABSTRACT. For people with upper limb amputation, current prosthetic hand technology is insufficient to meet their needs. In particular, the lack of tactile feedback from the prosthesis makes it difficult to manipulate objects with the hand. Notably, the lack of tactile information leads to an over-reliance on visual feedback and greatly increases the attentional demands of everyday tasks.
Our research group is addressing this clinical need for enhanced prosthetic technology by providing sensations in real-time to intuitively convey information from sensors in the prosthetic hand. This approach builds upon the observation that nerve fibers that carry information from the hand can be electrically stimulated (using electrodes in or on the upper arm) to elicit sensations that are referred to the phantom hand – that is, they are felt as if they are coming from the missing hand.
The Adaptive Neural Systems Neural-Enabled Prosthetic Hand (ANS-NEPH) system includes a neurostimulator and electrodes implanted in the upper arm, an instrumented prosthetic hand, and a prosthesis-mounted electronics module. The prosthesis-mounted electronics module receives sensor information from the hand prosthesis and wirelessly transmits computed stimulation values to an implanted neurostimulator, which utilizes fine-wire longitudinal intrafascicular electrodes (LIFEs). LIFEs enable the neurostimulator to selectively activate small groups of fibers within a peripheral nerve fascicle and produce localized percepts in the phantom hand. With all components either fully implanted or built into the prosthetic device, the system is designed to be easily worn and used in real-world environments. Under an investigational device exemption from the US Food and Drug Administration, a first-in-human multi-site clinical trial (NCT03432325) is currently being conducted with the ANS-NEPH system.
The primary focus of this research and development effort is to create a system that provides high functionality and allows use of the prosthesis in an intuitive manner, reducing attentional demands. While our goal is to provide a functional and easy-to-use system, this technology also provides an extraordinary set of capabilities to control sensations. By modulating stimulation, we can perform novel and informative investigations of tactile sensations, how they are interpreted, and how they impact function and body image.
In this presentation, we describe specific features of the system and how they enable studies that may provide new knowledge of sensory processing, sensorimotor control, and sensorimotor learning – and how they may inform the next generation of neural-enabled sensory technologies. For example, researchers can explore differences between synchronous and asynchronous feedback as well as percepts that are congruent or incongruent with the task performed. Information from an array of sensors in the hand can be manipulated and integrated to change the nature and quality of elicited sensations. By utilizing this system in conjunction with neural imaging technologies, such as fNIRS or EEG, we can characterize the impact of neural stimulation on brain activity. Since the ANS-NEPH system can be used on a regular basis over extended periods, this system also enables long-term investigations of sensory learning and sensorimotor learning that can include longitudinal characterization of brain activity patterns associated with specific sensations and motor tasks. Finally, sensations referred to the phantom hand may impact body image and the sense of embodiment of the prosthesis.
In summary, neurostimulation of peripheral nerves provides an opportunity to further our understanding of tactile sensation as well as its role in motor control and motor learning. This paradigm might, in turn, produce new knowledge that can accelerate the development of prosthetic technology that approaches the functionality of a biological hand.
Acknowledgements:
This work was supported by U.S. Army Medical Research Acquisition Activity (W81WXH1910839) and the National Institute of Biomedical Imaging and Bioengineering (R01 EB023261).
ABSTRACT. Context: To alter a complex system, such as the brain, we need to have a control system with a compatible complexity. The complexity of a signal derived from a system, such as electroencephalography (EEG), is associated with the inverse power law (IPL) index of the distribution of the time distances τ between two consecutive events, ψ(τ) ∝ 1/τ^µ, with µ called the temporal complexity measure. The events are extracted from the signal under study as the times at which it passes from one amplitude level, or stripe, to another for a given number of stripes. The temporal complexity index is in the range of 2 < µ < 3, where most healthy biological systems have a µ close to 2; as µ departs from 2, the system loses its complexity towards either regularity (µ ≈ 1) or randomness (µ ≈ 3).
Two complex periodic signals with given complexity µ and periodicity T have been used to represent biological interacting driver and driven systems [1]. Each signal was produced by subordination of a cosine wave with a given T, i.e., the cosine wave in real time (t = 1, 2, 3, …) was stretched to event time (τ = τ1, τ2, τ3, …), where the τs belong to the distribution with the assigned IPL index µ. The driven signal was then treated as an agent (i.e., an entity that can sense its environment and change it through its decisions) that adapted the phase of the signal, using reinforcement learning, to coordinate with the driver signal. It has been shown that the driver signal with higher complexity can change the complexity/periodicity of the less complex driven signal [1]. This model was used to explain the experimental results of an arm-in-arm walking experiment in which a patient with a gait disorder walked with a healthy nurse, as an effective noninvasive rehabilitation.
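To make the subordination procedure concrete, the following is a minimal Python sketch, not the code from [1]: waiting times are drawn from an inverse power law by inverse-transform sampling, and the cosine advances one operational-time step per event, holding its value in between. The sampling scheme, the minimum waiting time τ0, and the hold-between-events convention are our assumptions.

```python
import numpy as np

def ipl_waiting_times(n_events, mu, tau0=1.0, rng=None):
    """Inverse-transform sampling of waiting times with psi(tau) ~ tau**(-mu)
    for tau >= tau0 (valid for mu > 1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.uniform(size=n_events)
    return tau0 * u ** (-1.0 / (mu - 1.0))

def subordinated_cosine(n_events, mu, T, t_max, rng=None):
    """Stretch a cosine from event time n to real time t: the wave advances
    one step of cos(2*pi*n/T) at each event and holds its value between
    events, yielding a complex periodic signal with indices mu and T."""
    taus = ipl_waiting_times(n_events, mu, rng=rng)
    event_times = np.cumsum(taus)
    t = np.arange(t_max)
    n_at_t = np.searchsorted(event_times, t)  # events elapsed by real time t
    return np.cos(2 * np.pi * n_at_t / T)

# Driven-signal parameters matching the Methodology section: mu = 2, T = 100.
signal = subordinated_cosine(n_events=5000, mu=2.0, T=100, t_max=2000)
print(signal[:10])
```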
Hypothesis: The agent used in [1] can modify a complex electroencephalographic signal for use as a noninvasive perturbation stimulus.
Methodology: We used one EEG channel recorded from a patient during brain surgery [2] and then let the agent of [1] interact with it, trying to anti-coordinate with it. Therefore, rather than the driver system being a subordinated cosine wave with a given µ and T, we used a signal (i.e., EEG) with unknown complexity. Moreover, rather than trying to coordinate, the agent tries to adaptively anti-coordinate with the driver signal. We set the complexity and periodicity of the driven signal to be µ=2 and T=100, respectively.
Results:
In Figure 1, the agent created a signal (red curve) that opposes the input EEG signal. The variance of the difference between the two signals is less than 0.001.
Conclusion: Our results confirm that an adaptive agent of [1] is able to interact with complex biological systems in order to alter their complexity and periodicity as non-invasive biofeedback. These preliminary results have implications for advancing future neurofeedback systems.
Integrating Insights from Human-Computer Interaction and Neuropsychology: Developing Novel Memory Tasks
ABSTRACT. Memory is a vital cognitive function impacting various aspects of human life, yet traditional neuropsychological tests often fall short in predicting real-world functioning. This symposium bridges the fields of Human-Computer Interaction (HCI) and Neuropsychology to introduce innovative memory tasks. Presentations will discuss novel approaches that blend insights from both disciplines, emphasizing ecologically valid assessments sensitive to individual differences and neurocognitive processes.
Central to our discussion is the utilization of technology to develop memory tasks that mirror real-world scenarios. By drawing upon HCI principles, novel assessments can incorporate immersive experiences and multisensory stimuli, capturing the complexity of memory function more accurately. Through gamification, these tasks can be made engaging and interactive, fostering participant motivation and adherence.
Our presentations will highlight technical advancements in memory assessment, such as the integration of eye tracking and EEG recordings within virtual environments. These tools not only provide rich data but also enable researchers to examine cross-modal learning dynamics and the temporal dynamics of memory decay.
By embracing these technological and conceptual innovations, we aim to advance memory assessment methodologies and deepen our understanding of memory function across diverse populations. Through collaborative efforts between Neuropsychology and HCI, we can pave the way for more effective interventions and treatments aimed at preserving and enhancing memory function.
Promises of Human Computer Interaction for Neuropsychology
ABSTRACT. Human-computer interaction (HCI) is a sub-specialty of computer science, emerging in the 1980s, concerned with understanding and optimizing the ways that people interact with computational systems. HCI melds methods from psychology and the behavioral sciences, principles from design, and technologies from computing. The applications and methods that have evolved in the field of HCI show immense promise for creating innovative measures for understanding cognitive and clinical phenomena and for introducing interventions. For instance, research in behavior change has been applied and adapted with great success in clinical and public health contexts using digital technologies — e.g., mobile applications that assist with smoking cessation or weight management. In our collaborative work, we explore how eye tracking and virtual reality technologies can be applied to better measure memory phenomena consistent with déjà vu and epilepsy, for instance. In this symposium, I will outline some highlights from the field of human-computer interaction and the research opportunities they present in neuroscience.
Novel Methods of Assessing Memory with Technology: Addressing the Need for New Theory and Measures in the Setting of Minimally Invasive Epilepsy Surgery
ABSTRACT. The rise of minimally invasive neurosurgical procedures (e.g., stereotactic laser ablation [SLA]) coupled with technological advances has revealed gaping holes in cognitive theory and our ability to thoroughly assess such constructs. Being able to create focal surgical destruction zones has revealed a mismatch between existing structure-function theory of the brain and post-surgical results. For example, the extant research literature often focuses on the involvement of the medial temporal lobes in memory or the fusiform gyrus in semantic memory/language, yet these highly precise lesional studies are showing theory to often be incomplete or incorrect. In the setting of SLA in epilepsy surgery, some of our worst post-surgical memory outcomes occur when extra-medial TL regions are destroyed rather than medial TL structures. This is likely because cognitive theory has been based on indirect, correlative measures of brain function (e.g., fMRI) or large lesions in the brain resulting from disease or surgery. Additionally, most clinical measures of cognitive and emotional functioning are kept simplistic in nature to allow for the most straightforward interpretation. For example, memory testing is usually done in a sensory domain-specific manner (e.g., visual vs. auditory) rather than allowing for integration of memory features (e.g., visual, auditory, semantic, autobiographical, and historical being integrated and assessed holistically). We highlight emerging weaknesses in theory as well as shortcomings in cognitive assessment, and present data to demonstrate how novel tests can be developed using videography, gamification, internet delivery to allow for longer windows of delayed recall, and updated theory to better assess neural network interactions.
Emory Multimodal Learning Task: A Gamified Cognitive Task for Serial Assessment of Memory
ABSTRACT. In recent years, technological advancements have enabled the creation of sophisticated cognitive assessments that simulate real-world scenarios to better understand and evaluate human memory functions. One of the promising developments in this field is the creation of gamified cognitive assessments, which have the potential to significantly enhance our ability to measure memory in individuals with neurological disorders. By engaging participants through virtual environments, these assessments unlock a multimodal evaluation of memory, allowing for the evaluation of various sensory modalities and meaningful semantic content.
Drawing on these advancements, we have created a gamified assessment tailored to explore memory vulnerability in neurological disorders. Comprising two core segments—a learning phase and a delayed recall segment—the assessment places participants into a virtual town, where they encounter videos depicting everyday scenarios. Through these engaging virtual experiences, participants learn novel information spanning faces, names, locations, and objects, mimicking real-life interactions and fostering memory integration. The subsequent delayed recall phase, facilitated by an interactive interface, prompts participants to retrieve and contextualize the previously presented information.
This dual-phase framework has undergone validation among epilepsy patients, revealing its efficacy in gauging the integration and recall of visuo-spatial, semantic, and episodic details. Notably, the findings underscore the potential of this gamified approach, not only in capturing intricate cognitive processes but also in refining cognitive assessments within clinical contexts.
Novel Methods of Assessing Memory with Technology: Addressing the Need for New Theory and Measures in the Setting of Minimally Invasive Epilepsy Surgery
ABSTRACT. The rise of minimally invasive neurosurgical procedures (e.g., stereotactic laser ablation [SLA], neuromodulatory devices, focused ultrasound) coupled with technological advances in computer science and videography has revealed gaping holes in both cognitive and emotional theory and in our ability to thoroughly assess such constructs. Being able to create extremely precise and focal surgical destruction zones has revealed a seeming mismatch between existing structure-function theory of the brain and post-surgical results. For example, the extant research literature often focuses on the involvement of the medial temporal lobes in memory or the fusiform gyrus in semantic memory/language, yet these highly precise lesional studies are showing theory to often be incomplete or even incorrect. In the setting of SLA in epilepsy surgery, we have found that some of our worst post-surgical memory outcomes occur when extra-medial TL regions are destroyed rather than medial TL structures. Similarly, language may be more disrupted by a temporal pole lesion. This is likely because cognitive theory has often been based on indirect, correlative measures of brain function (e.g., fMRI tasks) or large lesions in the brain occurring either as the result of disease (e.g., stroke) or surgery (anterior temporal lobectomy for epilepsy). Additionally, most of our clinical measures of cognitive and emotional functioning are kept simplistic in nature to allow for the most straightforward interpretation. For example, memory testing is usually done in a sensory domain-specific manner (e.g., visual vs. auditory memory) rather than allowing for integration of memory features (e.g., visual, auditory, semantic, linguistic, autobiographical, and historical all being integrated and assessed holistically).
Our presentation will review these emerging weaknesses in theory as well as the shortcomings in cognitive and emotional assessment. We will present data to highlight how novel tests can be developed using videography, artificial intelligence for precise scoring and interpretation, internet delivery to both further telehealth and to allow for longer windows of delayed recall, and updated theory to better assess neural network interactions. We will demonstrate the advantages of these novel tasks, providing preliminary case examples exhibiting our ability to measure untapped functions that were being missed when using traditional, classic psychometric measures.
Physiological response to a virtual reality simulation for preoperative stress inoculation
ABSTRACT. Background
This paper describes the development of an immersive virtual reality (VR) simulation designed to reduce preoperative state anxiety in patients undergoing breast cancer surgery. We evaluate the capacity of the simulation to induce an emotional response as measured by the participants’ galvanic skin response (GSR).
Preoperative anxiety is associated with a range of poor postoperative health outcomes. A few studies have measured the impact of VR-based preoperative stress inoculation (i.e., pre-emptive exposure to stressful environments) with mixed results. To our knowledge, this is the first fully interactive simulation of an oncology surgery induction procedure for stress inoculation, and the first preoperative VR study to measure emotional impact using GSR.
A previous paper on this initiative studied the feasibility and utility of the system with data from a case series (n=7, 4 randomized to the simulation condition) derived from a larger feasibility trial. This complementary paper describes the design of the simulation, especially as it pertains to factors influencing presence and immersion, and measures the impact of the intervention on quantifiable physiological responses in a larger cohort.
Design and Method
A custom VR simulation was designed to allow participants to experience the setting of an operating room and the key preoperative stages, all the way through the administration of general anesthesia. An iterative and collaborative development process incorporated the input of various subject matter experts. The simulation was delivered through an off-the-shelf Oculus Rift S headset.
Interactivity is provided through a self-avatar, animated with head and hand tracking, with which the various simulated medical personnel (e.g., the anesthesiologist) interact directly. Interactions include attaching a virtual pulse oximeter and a mask. The simulation progression is controlled either by participant action (e.g., reclining on the operating table when prompted) or triggered by the person administering the simulation in contexts where the patient would not have agency in a real setting. Operating room conditions such as noise and lighting were adjusted to mimic real-life conditions and validated by subject matter experts.
Eleven patients undergoing breast cancer surgery were randomized to the simulation condition and used the simulation two weeks prior to their surgery. GSR was selected as the physiological measure of choice, based on earlier research on phobia exposure therapy linking GSR with a sense of presence and with the effectiveness of the desensitization process. GSR was measured before the VR experiment to provide a baseline, and then during the simulation itself.
Results and Discussion
Out of the 11 participants randomized to the VR intervention group, 6 yielded usable GSR data. Three-minute samples for both the baseline and the intra-simulation measurements were compared with a 95% confidence interval on the mean. Five of the six participants showed a statistically and clinically significant increase (> 1 μS) in GSR. The sixth participant showed a small increase in GSR that was not clinically significant (< 0.1 μS).
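For illustration, the comparison described above can be reproduced in outline with scipy; the traces and sampling rate below are hypothetical stand-ins for the study's recordings, while the 1 μS criterion follows the abstract.

```python
import numpy as np
from scipy import stats

def mean_ci(x, confidence=0.95):
    """Confidence interval on the mean using the t distribution."""
    x = np.asarray(x, dtype=float)
    m, se = x.mean(), stats.sem(x)
    half = se * stats.t.ppf((1 + confidence) / 2, df=len(x) - 1)
    return m - half, m + half

# Hypothetical GSR traces in microsiemens, sampled at 4 Hz over 3 minutes
# (720 samples); real values would come from the GSR sensor export.
rng = np.random.default_rng(0)
baseline = rng.normal(2.0, 0.3, size=720)
in_vr = rng.normal(3.4, 0.5, size=720)

lo_b, hi_b = mean_ci(baseline)
lo_v, hi_v = mean_ci(in_vr)
delta = in_vr.mean() - baseline.mean()
print(f"baseline CI: ({lo_b:.2f}, {hi_b:.2f}) uS; VR CI: ({lo_v:.2f}, {hi_v:.2f}) uS")
print(f"increase of {delta:.2f} uS -> clinically significant" if delta > 1.0
      else f"increase of {delta:.2f} uS -> below the 1 uS threshold")
```

Non-overlapping intervals with a mean increase above 1 μS would correspond to the "statistically and clinically significant" pattern reported for five of the six participants.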
The iGroup Presence Questionnaire assessed the presence associated with the VR intervention, defined as "the sense of being in the virtual environment", along three subscales: involvement (range -12 to 12), spatial presence (range -15 to 15) and realism (range -12 to 12). The participants for whom there was GSR data (n=6) reported low involvement (mean = 1) and low realism (mean = 1) scores, but high spatial presence (mean = 8), for a medium combined presence score (total mean = 10).
Early results are encouraging, showing that the described simulation can induce a physiological response consistent with the participants’ subjective evaluation of presence. While this was a limited experiment, it provides a basis for a larger randomized controlled trial to be conducted in the future.
Human body odor modulation of affective processing of social-emotional virtual environments.
ABSTRACT. 1. Introduction
Recent research emphasizes the impact of olfactory stimuli on human social interactions, influencing behavior and decision-making processes. Both synthetic and naturally occurring odors play a significant role in modulating human responses within social-emotional contexts. Human body odors (HBOs) have been found capable of conveying socially relevant information such as cooperation and aggression, even triggering heightened anxiety in immersive virtual reality environments.
While much attention has been given to investigating emotion elicitation through visual stimuli like facial images and videos, little research has focused on how HBOs modulate affective processing in relation to other social-emotional cues. This gap highlights the promise of virtual reality methodology, which offers controlled environments to replicate social-emotional scenarios with heightened immersion.
In response, we have developed social-emotional virtual environments (VEs) aiming to elicit varied affectivity states. Our study seeks to investigate the potential influence of HBOs on emotional processing within these environments, utilizing emotional self-reports as the primary metric for assessment. This research contributes to understanding the intricate interplay between olfactory cues and human social dynamics.
2. Method
The study included 149 adults aged 18 to 55, divided into a group exposed to HBOs (N=77) and a control group (N=72). HBOs were previously collected from donors' armpit sweat while they watched emotional movie clips designed to elicit specific emotional states. The HBOs were then classified using standardized methods and stored. During the odor-exposure procedure, HBOs were released through a purposely developed olfactometer (with airflow distributed through nose-positioned vials).
Both groups experienced three VEs designed to evoke emotions of happiness, fear, and neutrality, using visual and acoustic stimuli within a semi-immersive CAVE system. Each VE lasted 60 seconds, featuring social interactions conveyed by pre-programmed avatars. The experimental group additionally experienced HBOs while viewing the VEs, in a randomized order. Participants then self-reported emotional valence and arousal using a 9-point Likert scale, providing insights into their affective responses.
3. Results
Results showed significant effects of the VEs on affectivity states, in terms of both valence and arousal. Participants reported emotional experiences aligned with the type of VE encountered, whether positive, negative, or neutral. Specifically, valence differed significantly between the positive and negative VEs and between the neutral and negative VEs, while the negative VE induced greater arousal than the positive and neutral ones. Moreover, participants in the HBO group generally reported higher valence and lower arousal compared to the non-HBO group. Interestingly, there was also a significant interaction between group and VE on emotional valence in the negative VE: a notable distinction emerged between the groups' responses to the positive versus negative VEs, and similarly between their responses to the neutral versus negative VEs.
4. Conclusion
The novelty of this study lies in its exploration of how HBOs influence affective processing within social-emotional VEs. Findings demonstrate the modulating effect of HBOs on the affective processing of social-emotional VEs in terms of perceived valence and arousal. Future directions include investigating differences across emotion-related HBOs in the affective processing of social-emotional situations.
Virtual Simulation of Magnetic Resonance Imaging as a method of exposure and preparation for the feared examination.
ABSTRACT. Introduction: MRI is an important non-invasive neuroimaging tool used for the detection of various abnormalities in brain morphology and functioning. The cooperation of the patient during this procedure is crucial, also given the time demands and financial costs. However, various concerns or insufficient knowledge of the procedure may lead to a patient's refusal to participate in the study or to complications during the examination (premature termination, excessive movements due to discomfort or panic, etc.) [1]. The occurrence of adverse events is even more pronounced in anxiety disorders.
The aim of this study was to design and validate a realistic VR experience simulating the magnetic resonance imaging procedure (vMRI) as a preparatory intervention for anxiety disorders. This procedure should make it possible 1) to educate patients and prepare them for the planned procedure, and 2) to provide experience of the individual steps of the procedure.
The study sample (n = 37, 21 females) included patients (n = 18) with claustrophobia or cleitrophobia and a healthy control group with no reported fear related to MRI (n = 19).
Methods: The VR environment and a 3D model of a 3T Siemens Prisma MRI machine were created based on the real MRI lab at NIMH in Czechia. The VR app was presented using a VR headset (HTC Vive Pro or Oculus Quest 2).
The study procedure included the following steps:
1) Before the vMRI exposure, all subjects responded to a set of questionnaires addressing current anxiety level (STAI-6 [2]), MRI-related anxiety (MRI Fear Survey Schedule, MRI-FSS [3]), claustrophobia severity (Claustrophobia Questionnaire, CLQ [4]), and subjective fear of the upcoming real-life MRI procedure.
2) During the vMRI, subjects rated their subjectively experienced anxiety (Subjective Units of Distress, SUDS scale 1-10) at six steps of the simulated MRI procedure (e.g., entering the control room, having the head coil mounted, during the scanning procedure, etc.).
3) After the vMRI, subjects reported their current anxiety level (STAI-6), anxiety experienced during the MRI procedure (MRI-Anxiety Questionnaire, MRI-AQ [5]), and fear of the upcoming real-life MRI procedure.
4) All subjects were scheduled for a real MRI procedure performed in a separate session. After the real procedure, the MRI-AQ questionnaire was repeated.
In this study, we aimed to validate the VR simulation of the MRI procedure as a tool for VR exposure and preparation for the real MRI examination. Two groups of subjects (phobic patients and healthy controls) were compared on subjective anxiety and various outcome measures. Our findings suggest the vMRI simulation can provoke MRI-related fear in phobic patients and could therefore potentially be used as a form of exposure therapy. Importantly, the tool was successful in decreasing patients' anxiety before the scheduled MRI procedure, enabling them to undergo the scan. Based on verbal reports, the simulation was useful in informing patients about the procedure. However, repeated exposure, including a mock MRI procedure, is needed to prepare phobic patients with severe claustrophobia for the real MRI examination.
Acknowledgements: The pilot study was supported by the Internal Funding Competition project of NIMH, grant no. 318A_2020; the ongoing research has been supported by the programme Cooperatio, Neuroscience, Charles University.
1. Munn Z, Moola S, Lisy K, Riitano D, & Murphy F. (2015). Radiography, 21(2), e59-e63.
2. Marteau TM, & Bekker H. (1992). British Journal of Clinical Psychology, 31(3), 301-306.
3. Lukins R, Davan IG, & Drummond PD. (1997). Journal of Behavior Therapy and Experimental Psychiatry, 28(2), 97-104.
4. Radomsky AS, Rachman S, Thordarson DS, McIsaac HK, & Teachman BA. (2001). Journal of Anxiety Disorders, 15(4), 287-297.
5. Ahlander BM, Årestedt K, Engvall J, Maret E, & Ericsson E. (2016). Journal of Advanced Nursing, 72(6), 1368-1380.
Democratizing Psychophysiological Research: Leveraging Open Resources for Experimental Settings and Data Analysis of Flow, Engagement, and Performance.
ABSTRACT. Introduction: Contemporary research often grapples with economic constraints and a shortage of specialized, up-to-date software tailored to specific research needs. The landscape is, however, evolving: the emergence of affordable wearable sensors, increasingly considered research-grade instruments thanks to their reliability and high sampling frequency, is a resource for researchers in psychology. Meanwhile, platforms such as GitHub, Hugging Face, PhysioNet, and Kaggle facilitate the democratization of science by enabling the sharing of scripts and datasets, thereby fostering collaboration and accelerating progress. Yet assembling resources for studies, particularly those integrating psychophysiology, remains complex, especially given the crucial need for synchronized data. Within this context, this work proposes the development of an experimental setup built from shared resources that allows the presentation of game stimuli and the synchronized recording of facial movements and cardiac data, with the aim of creating a new shareable instrument.
Objectives: The experimental goal for which this framework is designed is to elicit and detect states of flow and engagement and their associated physiological activations. As stimuli, several difficulty levels of a single Tetris-style video game are employed; the player's activity is saved and synchronized with data from the Polar H10 cardiac wearable device and video from the webcam. The final scenario can also be adapted to other research questions or theoretical constructs.
Framework Design: Key to our approach is the precise synchronization of data streams, through accurate timestamps and the use of established libraries such as LabStreamingLayer. By recording and storing essential metadata alongside physiological and behavioral data, our framework enables the reconstruction of temporal sequences, which is crucial for in-depth analysis and interpretation.
Drawing from shared code on GitHub, our framework is divided into three Python scripts, all intended to run on the same machine to ensure consistency in local timestamp management. 1) The Tetris component presents the stimuli and collects gaming performance and timestamps for various in-game events; it can also insert a countdown of the desired length to record an initial physiological baseline. 2) The Polar H10 module handles the Bluetooth connection with the device and the detection and saving of cardiac interbeat intervals (IBIs) with 1 ms precision, an effective sampling rate of 1000 Hz that exceeds the gold-standard requirement for Heart Rate Variability (HRV) calculations (250 Hz). Additionally, synchronization with activities and facial expressions enables the assessment of resting baseline states and reactions to specific stimuli/events. 3) The video component tags each frame with multiple timestamps, facilitating synchronization for analysis with facial expression recognition software.
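As a rough illustration of the synchronization idea, the sketch below pushes IBI samples through a LabStreamingLayer outlet using pylsl's shared local clock. It is not the authors' code: the stream and source names are invented, and the Bluetooth read is simulated rather than taken from a real Polar H10.

```python
# Toy sketch of timestamp synchronization using pylsl (LabStreamingLayer's
# Python bindings). Stream names are invented and the Bluetooth read is
# simulated; the real framework streams IBIs from a Polar H10.
import random
import time

from pylsl import StreamInfo, StreamOutlet, local_clock

# One-channel, irregular-rate stream carrying interbeat intervals in ms
info = StreamInfo(name="PolarH10_IBI", type="IBI", channel_count=1,
                  nominal_srate=0, channel_format="float32",
                  source_id="polar_h10_demo")
outlet = StreamOutlet(info)

def get_next_ibi():
    """Simulated Bluetooth read: returns one IBI in milliseconds."""
    time.sleep(0.8)
    return random.gauss(800.0, 50.0)

for _ in range(10):  # stream a few beats for demonstration
    ibi_ms = get_next_ibi()
    # Tagging each sample with the shared local clock lets game events and
    # video frames recorded on the same machine be aligned post hoc.
    outlet.push_sample([ibi_ms], timestamp=local_clock())
```

Because the game, the cardiac module, and the video recorder all stamp their data against the same local clock, temporal sequences can be reconstructed offline without dedicated synchronization hardware.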
Conclusions: The scripts have been structured into a GitHub repository according to FAIR (Findable, Accessible, Interoperable, Reusable) principles. The software provides a practical implementation for the study of psychophysiological responses. In conclusion, leveraging open resources for experimental setups and data analysis not only addresses economic and software limitations but also promotes collaboration and accelerates scientific progress in psychophysiological research.
Behavioral indicators of presence: Finger tapping, pupil size and machine learning.
ABSTRACT. The degree to which observers experience “presence” – the sense of being there – in virtual reality is known to correlate with outcomes in clinical, training, educational, and entertainment applications. The most widespread approach to measuring presence is to give users survey questions during or after their VR experience, which may disrupt the task, relies on recall of subjective experience, and reduces a prolonged experience to a single aggregate estimate. Although useful, surveys lack the granularity required to develop detailed cognitive models of presence, which has hindered theoretical progress. Many researchers have tried physiological measures, such as EEG or heart rate, to obtain a real-time gauge of the changing experience in a virtual environment, although this approach comes with its own challenges (e.g., difficulty obtaining and interpreting the measures). Our goal was to assess easy-to-collect behavioral measures as they occur over time, and to see how well a machine learning model can distinguish between degrees of presence. In two studies we evaluated finger tapping and pupillometry as potential behavioral indicators of presence. Finger tapping is an easy-to-collect indicator, and eye-tracking metrics like pupillometry have become the norm in VR head-mounted displays. We predicted that variance in inter-tap intervals (ITIs) and pupil size could predict presence condition, and that a feedforward neural-net classifier would be able to identify high versus low presence conditions at the individual-subject level based on these measures. In Experiment 1, participants walked the “virtual plank” while tapping to a rhythm, either at elevated height (high-presence condition) or on the ground (low-presence condition), to manipulate the degree of presence experienced (following earlier studies). Aggregate inferential results were mixed: standard presence-survey results indicated that plank height did correspond to these conditions (p = .04), while inter-tap-interval variance from finger tapping did not differ between the high and low presence conditions (p = .38). On the other hand, machine classification of the time-series data (finger-tapping rate and pupil size) strongly predicted presence condition at the individual participant level: the classifier identified the high or low presence condition at 77% accuracy from finger tapping, and pupillometry data yielded 70% accuracy. In Experiment 2, participants watched two 360-degree videos twice, with or without sound, to manipulate presence; each video was analyzed separately. Surveys confirmed that sound increased presence for both videos (all ps < .05), but pupil variance did not follow this pattern. The neural-net classifier was unable to replicate the high accuracy of Experiment 1, with accuracies of 57% and 55% for the two videos. Our results demonstrate that finger tapping is a promising indicator of presence in VR and is especially sensitive when analyzed via machine-learning classification. While results for pupillometry were mixed, we believe that pupillometry and other eye-tracking metrics merit further investigation with more refined machine-learning methods, potentially in combination with finger tapping or other behavioral indicators such as movement data.
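For concreteness, the sketch below shows the general shape of such a classifier: a small feedforward network (scikit-learn's MLPClassifier) trained on windowed behavioral features such as ITI variance and mean pupil size. It is not the authors' model; the data and feature choices are synthetic stand-ins.

```python
# Illustrative sketch, not the authors' model: a small feedforward network
# classifying high vs. low presence from windowed behavioral features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# One row per time window; columns = [ITI variance, mean pupil size]
X = rng.normal(size=(200, 2))
y = rng.integers(0, 2, size=200)  # 1 = high-presence condition

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
# The studies evaluated accuracy per participant; on synthetic noise this
# cross-validated accuracy will hover around chance (0.5).
print(cross_val_score(clf, X, y, cv=5).mean())
```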
Using Eye Gaze to Differentiate Internal Feelings of Familiarity in Virtual Reality Environments: Challenges and Opportunities
ABSTRACT. As reported in Castillon et al. [Human-Computer Interaction (HCI) International, in press], our group previously found that the sensation of familiarity in a virtual environment can be automatically detected using machine learning models trained on eye-gaze features. Twenty-six CSU students participated for course credit. Participants were immersed in virtual reality (VR) scenes in the HTC Vive Pro Eye while their eye-gaze patterns were recorded. First, in a study phase, participants were sequentially immersed within a series of distinct scenes for approximately 10 seconds apiece. Then, in the test phase, participants were sequentially immersed in a series of novel scenes, half of which contained the same spatial layout (i.e., arrangement of elements) as scenes from the study phase. As in prior research (Okada et al., 2023), in the test phase participants indicated when a scene felt familiar by pressing a button on the VR hand-controller. Each button press was accompanied by a verbal indication of whether the scene reminded them of anything specific from earlier or simply felt familiar without an identifiable reason. An issue with our prior work (Castillon et al.) is that we relied on the button press to indicate when familiarity was occurring; this design makes it difficult to ascertain at which point in time the eye-gaze features might reflect the VR hand-controller button press as opposed to the onset of the subjective sensation of familiarity. Even in immersive VR, where the eyes are fixated within the scene throughout, it remains possible that a small window of time prior to the button press manifests a physiological eye-gaze signal that corresponds to the behavioral response of pressing the button rather than to the sensation of familiarity. Moreover, the onset of familiarity happened quickly (usually within 3 seconds of a scene's onset), which also presents challenges for developing a pipeline to detect the transition into a state of familiarity. The classification of internal states using eye-gaze features is most common with mind-wandering, which typically happens deeper into a task and across a longer time window. The rapid onset of familiarity requires an alternative analytical approach to self-reports of familiarity states in VR environments. The current presentation focuses on addressing these challenges to improve the automatic detection of familiarity's onset in VR environments. One approach we explore is to examine eye-gaze patterns among scenes that had the same spatial layout as an earlier scene (and thus were experimentally familiarized) compared to scenes that did not (and thus were not experimentally familiarized). To account for the confounding effects of the button press, we first conduct this analysis considering only instances in which the button press occurred (indicating the subjective presence of familiarity) and then independently repeat the analysis under the converse situation, considering only instances in which the button press did not occur. By holding the button press constant across different potential levels of familiarity with a scene, the issue of how rapidly the button press occurred, and whether eye-gaze patterns reflect familiarity detection versus the behavioral response, is rendered moot, allowing a direct comparison of different levels of familiarity across eye-gaze patterns.
For scenes in which the button-press occurred, we also compare instances where recall of an earlier scene succeeded with instances where recall failed, allowing us to determine the feasibility of differentiating between instances of recall success and failure using eye-gaze patterns. This work helps to shed light on the eye-gaze correlates of several internal cognitive processes while simultaneously addressing some methodological challenges to developing a pipeline for automatically detecting the onset of a familiarity sensation from eye-gaze patterns.
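The sketch below illustrates the confound-control logic described above (not the authors' pipeline): gaze features are compared between layout-familiarized and novel scenes separately within button-press trials and within no-press trials, so that the motor response is constant within each comparison. File and column names are hypothetical.

```python
# Sketch of the confound-control comparison (hypothetical file/column names):
# contrast a gaze feature between layout-familiarized and novel scenes,
# separately within button-press trials and within no-press trials, so the
# motor response is held constant at each level of familiarity.
import pandas as pd
from scipy.stats import ttest_ind

trials = pd.read_csv("gaze_trials.csv")
# columns: participant, familiarized (bool), pressed (bool), mean_fixation_dur

for pressed, subset in trials.groupby("pressed"):
    fam = subset.loc[subset["familiarized"], "mean_fixation_dur"]
    nov = subset.loc[~subset["familiarized"], "mean_fixation_dur"]
    t, p = ttest_ind(fam, nov)
    print(f"button pressed={pressed}: t={t:.2f}, p={p:.3f}")
```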
ABSTRACT. The concept of presence has been a topic of interest in scientific literature for several decades, appearing in various fields including technology, neurophysiology and psychology. In general, it is defined as the feeling of being in a distant environment (Sheridan, 1989). In particular, the sense of presence has been studied as a subjective phenomenon distinct from the immersive properties of the technological environment used (Mestre, 2006). Assessing presence in cyberpsychology studies has led to the identification of a number of interesting avenues for further reflection.
Presence has been assessed in numerous studies utilizing virtual reality, particularly in psychotherapeutic settings. Theoretical concepts pertaining to the feeling of presence have facilitated the development of assessment tools, such as the Immersive Tendencies Questionnaire (Witmer and Singer, 1998), the Co-presence Questionnaire (Slater et al., 2001), the Social Presence Questionnaire (Biocca et al., 2001) and the ITC Sense Of Presence Inventory (Lessiter et al., 2001). Scales more suited to specific technologies are documented in the literature, including the Telepresence in Videoconference Scale (TVS) (Bouchard et al., 2011). Most of these tools have subscales to assess specific dimensions, including the realism of the environment, engagement, co-presence, physical presence, spatial presence and social presence.
The various assessment tools utilized in this context incorporate sensory aspects, and the development of physical and behavioral measures of presence is underway (Slater et al., 2021). However, the methodologies employed remain essentially self-reported and subjective, as is the case with the majority of psychometric tools.
In the context of the Internet of Things (IoT) and artificial intelligence (AI), it is becoming possible to quantify certain variables that can complement behavioral assessment. This opens the way to a richer operationalization of presence, a concept that is essential but still relatively poorly understood.
(a) UX, sensors and assessment of presence
Chair: Lise Haddouk
(b) Titles of all the presentations:
- “Various perceptual motor styles during locomotion in virtual reality”
Danping WANG1,2, Ioannis BARGIOTAS3, Yunchao PENG3, Nicolas VAYATIS3, Lise HADDOUK3, Pierre-Paul VIDAL3,2,1
1Plateforme d’Etude Sensorimotricité, CNRS UAR2009, Université Paris Cité, Paris 75006, France
2Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China.
3Centre Borelli, CNRS, SSA, INSERM, Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, Paris 75006, France
- “What we (don’t) know about Presence”
Pedro Gamito1, Jorge Oliveira1
1Lusofona University, HEI-Lab
- “Mental load at complex man-machine interface and the resulting enmachinment”
Pierre-Paul VIDAL1,2, Ioannis Bargiotas 1, 2, Dimitri Keriven1,2, Axel Roques1,2
1Centre Borelli, CNRS, SSA, INSERM, Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, Paris 75006, France
2Centre Borelli, CNRS, SSA, INSERM, Université Paris Saclay, Université Paris Cité, ENS Paris Saclay, Gif sur Yvette 91190, France
- "Multimodal assessment of anxiety: a complementary approach between IoT and psychometrics"
Donovan Morel1, Pierre-Paul Vidal1, Lise Haddouk1
1Centre Borelli, CNRS, SSA, INSERM, Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, Paris 75006, France
- “A meta-analysis of the effects of video and audio resolution on presence in virtual environments”
Julien Nelson1
1Centre Borelli, CNRS, SSA, INSERM, Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, Paris 75006, France
- “Pilot study on the use of a multimodal neurophysiological data collection device during a psychometric assessment task”
Lise Haddouk 1, Urme Bose1, Bryan Hilanga1, Danping Wang1, Yannick James2, Pierre-Paul Vidal1
1 Centre Borelli, CNRS, SSA, INSERM, Université Paris Cité, Université Paris Saclay, ENS Paris Saclay, Paris 75006, France
2 Thalès, France
(c) Names of all presenters:
Danping Wang
Pedro Gamito
Pierre-Paul Vidal
Donovan Morel
Julien Nelson
Lise Haddouk
Dreamscape Learn is a collaborative venture between Dreamscape Immersive and Arizona State University, merging the most advanced pedagogy with the entertainment industry’s best emotional storytelling. Dreamscape Learn redefines how we teach and learn in the 21st century, while aiming to eliminate student learning gaps.
Advanced AI Virtual Humans for Learning, Teaching and Intelligent Tutoring
ABSTRACT. Virtual humans are defined as characters that can act as intelligent agents to perform a multitude of tasks. They are more complex than simple chatbots in their depth of interaction, and they have graphical interfaces representing human-like characters. They have mostly been used in video games, movies, and tutoring or learning systems, where a user interacts with them through speech, text, and other multimodal inputs such as gestures. They represent a captivating frontier where diverse disciplines converge to revolutionize training effectiveness across domains ranging from clinical settings to educational institutions, museums, and virtual worlds. It is a growing area of both research and development that will become more prevalent with the rapid rise of deep neural network systems and large language models (LLMs). These virtual human agents embody cognition, carry learning objectives, and intertwine game mechanics and entertainment elements to elevate engagement and interest for effective learning.
How to effectively use virtual humans in training and tutoring systems and integrate them into educational technologies is a major research area, one that explores how to successfully apply these technologies to offer increased engagement for the student or user of the system. This includes work on how to design proper virtual human agents, on the relationships between the learning domains and the cognitive assets relevant to the student's learning, and on how to properly model these in the virtual human to communicate the learning objectives.
Traditional methods often struggle to captivate learners and facilitate deep understanding. Virtual humans and intelligent tutoring systems promise a powerful and creative alternative to classes or static learning environments, along with greater flexibility in delivering learning to those who might not otherwise be exposed to it. One important topic of this discourse is the fusion of creativity with AI, characterized by the creation of immersive, believable characters and systems that enhance engagement, foster learning objectives, and adapt and respond dynamically to user interactions, ensuring tailored learning experiences.
Despite these benefits, the effectiveness of integrating game and entertainment aesthetics to enhance engagement is still not well understood, nor are methods to evaluate the outcomes of using virtual humans in these systems, the gains resulting from the learning experiences, and the impact on the learner. Additionally, more discussion is needed of ethics, cultural sensitivity and diversity, and data privacy and security. How to integrate these systems into existing curricula and measure outcomes is also of great importance.
In this symposium, preliminary results will be presented on the effectiveness of using virtual humans as virtual patients for clinical therapy training, as learning characters in gaming environments, and as pedagogical agents in intelligent tutoring systems. Furthermore, there will be discussions on how to better design and build virtual agents, ways to enhance realism, methods for making generative virtual human agents, and how advanced AI can contribute to more creative learning environments. Virtual humans, intelligent tutoring systems, and their creative integration are central to this narrative. The symposium will also discuss building virtual standardized patients for clinical training and lessons learned from many years of work. The future potential of virtual human agents is so great that effort in this area will increase over the next several years, and exposure to the full range of issues is of great relevance to the medical field and all cyber and technological interfaces.
Enhanced Learning with Virtual Humans: Advancing Clinical Training with an AI-Based Grading System
ABSTRACT. Advanced technology has allowed us to take the next steps in improving clinical training. Next-generation training systems allow for faster training that improves practical skills and patient interactions. Traditional clinical training relies on subjective evaluations from trained professionals and professors, which can result in inconsistent feedback and limited scalability. This presentation will discuss the development of a virtual human platform that utilizes a Bidirectional Encoder Representations from Transformers (BERT)-based natural language processing (NLP) model. We will discuss how the system can easily be adapted to change the training scenario and quickly retrain the NLP model. The virtual human platform brings patient interactions to life, providing a learning experience that is much richer and more engaging than traditional training methods.
Clinicians in training often encounter barriers that keep them from accessing a broad range of patient interactions, which are essential for acquiring patient-interaction skills. Our system addresses this gap by providing scalable, consistent, and comprehensive training experiences. It utilizes a BERT-based NLP model to facilitate interactions with clinicians at all levels. The virtual human platform offers a simulation of various patient personalities and medical histories within a controlled environment. The BERT model processes the trainee's inputs and determines the virtual patient's responses, enabling realistic conversations that mirror real-life patient scenarios. Providing conversations that feel real, along with behaviors that mimic true human interactions, is crucial for deeply engaging students and enriching their learning experience.
One element commonly missing from simulations and Virtual Reality (VR) training systems is a grading system. A distinctive feature of our system is its grading mechanism, which evaluates clinical trainees' performance by comparing their responses to 'gold standard' questions: predetermined ideal questions crafted by trained experts. This comparison yields a comparability score for each question asked by the clinician; using thresholding, this score is converted into a more meaningful grade for the student. The grading process is automated and can assess question accuracy, relevance, and the comprehensiveness of the information gathered. The grading system also includes an after-action report that gives trainees the opportunity to review their entire dialogue with the virtual patient. These reports present the comparability scores and final grades, and they offer detailed feedback that highlights the clinicians' strengths and provides recommendations for areas of improvement. This reflective practice is crucial for improving learning and clinical skills over time. Given the variable nature of human conversation, full grading of a trainee's performance can only be completed after the interaction with the virtual human concludes; performing the analysis at the end of each session allows the system to thoroughly evaluate the trainee's responses and provide a comprehensive assessment.
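A minimal sketch of the similarity-and-threshold idea follows. The production system uses its own BERT-based pipeline; here a sentence-transformers model stands in for it, and the gold-standard questions and grade thresholds are invented for illustration.

```python
# Hedged sketch of similarity-and-threshold grading. A sentence-transformers
# model stands in for the system's custom BERT pipeline; the gold-standard
# questions and thresholds below are illustrative, not the real rubric.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

gold_questions = [
    "When did the pain first start?",            # hypothetical rubric items
    "Are you currently taking any medications?",
]
trainee_question = "Can you tell me when you first noticed the pain?"

gold_emb = model.encode(gold_questions, convert_to_tensor=True)
q_emb = model.encode(trainee_question, convert_to_tensor=True)

# Comparability score = best cosine similarity against any gold-standard question
score = float(util.cos_sim(q_emb, gold_emb).max())

# Thresholding converts the raw similarity into a coarser grade
grade = "full" if score >= 0.8 else "partial" if score >= 0.5 else "none"
print(f"comparability score = {score:.2f} -> credit: {grade}")
```

Running such a comparison over the full transcript after the session ends is what makes the automated after-action report possible.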
The development of this virtual human system was collaborative and involved input from educators. Feedback from initial tests with trained clinicians was instrumental in refining the system's verbal capabilities and grading algorithms. The scenarios included in the platform were developed in close cooperation with medical professionals to ensure their accuracy. Planned enhancements include advanced AI techniques and additional training scenarios. The inclusion of sentiment analysis of the clinicians' questions will make it possible to evaluate students based on the tone of their voice; analyzing this sentiment will provide feedback on how well they are building rapport with the patient, and the virtual human will also be able to use it to select the most appropriate emotion with which to respond. These advancements will further improve the system's applicability and educational effectiveness across various clinical specialties.
A Review of the Academic and Motivational Effects of Pedagogical Agents on K-12 Learners
ABSTRACT. Pedagogical agents (PAs) are visually present characters within multimedia learning environments. To date, meta-analyses have consistently shown that PAs can support learning (Castro-Alonso et al., 2021; Davis et al., 2023; Schroeder et al., 2013) and that they are particularly effective in K-12 learning contexts (Schroeder et al., 2013). However, it is less clear what impacts PAs may have on learners' motivation and whether their influence differs by specific motivation construct (e.g., the design of one agent may promote self-efficacy but not interest). In this presentation we will (1) present an updated review of research on PAs for K-12 learners over the last decade demonstrating a consistent positive impact on learning outcomes, (2) summarize our meta-analyses examining the effects of PAs on specific motivational constructs, and (3) present our ongoing work to design a toolkit and associated research plans for creating PAs that are welcoming, inclusive, and relatable to a wide range of learners.
ABSTRACT. ‘Consequential’ conversations, as used here, are those that involve challenging content, may lead to adverse outcomes, and require deft social interaction skills to navigate. The partner(s) in the conversation may be difficult to deal with, emotional or confused, or focused on an agenda. The topic of conversation may be sensitive, charged, controversial, or zero-sum. These conversations are not uncommon to professionals in law enforcement, security, military, business, and healthcare. Consider: Police officers de-escalating volatile situations, through dialog, with unstable individuals; service personnel managing encounters with foreign civilians to establish trust or gain intelligence; doctors speaking with adolescents regarding risky behaviors or family members regarding grave medical diagnoses.
Software applications designed to train or assess dialog within consequential conversations have, for many years, employed virtual characters. Virtual characters are, paradigmatically, multimodal embodied conversational agents—responsive partners with which a learner communicates to navigate a given situation or achieve a goal. Virtual character applications have various advantages, including their reproducibility, safety and controllability, ease of distribution, and objectivity, as well as the ability to introduce intelligent tutoring. Typically, such applications have engaged learners using realistic virtual participants in realistic settings. Realism has increased dramatically as technology and capability have improved, so that today's characters can be made lifelike in appearance, allow for natural-language interaction, and use advanced behavior models to react or respond to learner actions appropriately to the context.
Unexpectedly, perhaps, this researcher has moved toward lower fidelity in some recent applications. This change in approach derives from several conditions: difficulty in developing suitable models to meet learner expectations; resource and usability constraints; learner preferences; and a reappraisal of the purpose of training or assessment and the affordances of the underlying technology. The shift has been neither total nor abrupt but was, in retrospect, unforeseen. Lessons learned will be presented and discussed that might benefit developers of new applications addressing consequential conversations.
From Philosophy to Pixels: Tracing the Evolution of Emotion in Virtual Humans Across History and Technology to Create Dynamic Interplay
ABSTRACT. Philosophical discourse initiated by figures like Spinoza and Hegel continues to fuel debates about the intricate interplay between emotions and the mind-body relationship (Damasio, 1994). Contemporary theorists, notably Antonio R. Damasio, challenge the longstanding dichotomy between mind and body, advocating for a more integrated approach that blends neuroscience with philosophical inquiry (Damasio, 1994). This integrated perspective is crucial for developing virtual humans that authentically probe the complexity of human emotions.
In traditional methodologies, the creation of virtual humans has been limited to replicating human affect through compartmentalized techniques: facial expressions via Paul Ekman's Facial Action Coding System (FACS), bodily movements through Laban Movement Analysis (LMA), and vocal expressions enhanced by specific modulation and machine learning (ML) technologies (Ekman, 1978; Laban, 1966). This segmented approach often fails to capture the holistic expression of human emotions within digital constructs. Our research proposes and compiles comprehensive frameworks for generating emotional responses in virtual humans, tracing a lineage of movement analysis and taxonomies of viewer/actor relationships and emotional sensemaking that begins with South Asian taxonomies from 200 BCE and advances to the contemporary AI/ML techniques driving discourse around virtual expression today.
Our historical exploration of performative affect is enriched by cinematic and theatrical theory, particularly philosophical insights from the "post-cinema" discourse illuminated by Shane Denson and Julia Leyda (Denson & Leyda, 2016). In the terrain of what we might call "post-cinema," the Spinozan notion of affect melds seamlessly with emerging technologies, promoting a co-evolution with technology that Bernard Stiegler would call "technogenesis" (Stiegler, 1998). This paradigm shift, particularly for front-end designers, underscores the necessity of creating new, whole, and dynamic taxonomies inclusive of emerging cinematic grammar when designing virtual humans, in which designers incorporate findings from neuroscience, artificial intelligence, machine learning, and narratology and integrate them with real-time CGI workflows.
By embracing these integrative processes, we can adopt systems that embody Alfred North Whitehead's concept of "process thought"—a philosophy that sees reality and experience as fundamentally constituted by processes of becoming, continuous change, and interrelationships. This philosophical approach suggests that the design of, and interaction with, virtual humans should be dynamic and responsive, capable of evolving over time, reflecting the fluid, ever-unfolding nature of human experience and emotion. In this context, when a human and a virtual human engage in conversation, back-end and front-end computation must do more than create emotional responses for the virtual entity in real time. Instead, these systems should be dynamically theatrical and use real-time data to encourage humans to pose more thoughtful and profound questions to their virtual counterparts, enabling them to learn effectively from the responses they receive in a dialectic manner. This approach prioritizes continuous inquiry, adaptation, and interplay between subject and object, aligning with the process-oriented nature of philosophy in the realm of HCI. In turn, through multiple historic and industrial precedents found in our cultural products, we posit that it is imperative to engineer experiences that do not simply replicate human emotions but integrate complexity and theatrics to guide users into a psycho-social landscape that fosters deeper understanding and exploration of specific subjects or learning objectives.
References
Damasio, A. R. (1994). Descartes' Error: Emotion, Reason, and the Human Brain. New York: G.P. Putnam's Sons.
Denson, S., & Leyda, J. (Eds.). (2016). Post-Cinema: Theorizing 21st-Century Film. Falmer: REFRAME Books.
Ekman, P. (1978). Facial Action Coding System: Investigator's Guide. Palo Alto: Consulting Psychologists Press.
Laban, R. (1966). Choreutics. London: Macdonald & Evans.
Stiegler, B. (1998). Technics and Time, 1: The Fault of Epimetheus (R. Beardsworth & G. Collins, Trans.). Stanford University Press.
Whitehead, A. N. (1929). Process and Reality. New York: Free Press.
Advanced AI Agents for Virtual Humans and Patients
ABSTRACT. Achieving success in using virtual humans as proxies for patients, therapists, or any other embodied conversational character requires more advanced agent architectures that leverage the current state of the art in Artificial Intelligence, Large Language Models (LLMs), and associated systems such as natural language processing, voice recognition, voice generation, and face and gesture behavior generation. One goal across four decades of AI research into cognitive architectures, behavior trees, and state machines has been to endow agents with believable, human-like behavior and have them act either autonomously or under direction in tasks, to support learning objectives or foster greater engagement with the participant. Virtual humans have been used widely in applications ranging from non-player characters in games to museum guides and virtual patients for clinical therapist training. However, many challenges remain in designing, building, and using these systems: knowledge acquisition from subject matter experts, defining proper sets of question-and-answer pairs, changing the mental and physical states of the character over time, storing data across multiple interaction sessions, integration into learning platforms, and generating believable behaviors from the internal knowledge-representation states for embodied conversations, to name a few.
Recently there has been an explosion of LLMs that capture a broad swath of human knowledge and are capable of generative natural-language conversational discourse with humans on a variety of topics. There are also ways to fine-tune these models to be more specific to a subset of knowledge relevant to the task at hand; this has the added benefits of keeping the data secure and training the model in ways more relevant to the task. One area of research uses the LLM as an agent framework: the LLM acts as a knowledge base that is queried to retrieve relevant information, to describe tasks and break them down for reasoning processes, and to serve as a large memory store. It should be noted, however, that these LLM architectures still fall well short of the full range of human capabilities.
One key use of these LLMs is in the design process for building virtual human agents, particularly those acting as virtual patients. One of the greatest challenges is extracting the knowledge from a clinical session that is needed for a virtual patient system, so the character can present a specific mental disorder in a way appropriate for training. Another is ensuring that the information used in the character's design is relevant and realistic in terms of race, ethnicity, and gender. People have many nuances in the way they talk, both in voice and in content, and in their behaviors and how they interact with others; these are based on many attributes, including personality and emotional state. It is important to ensure that the symptoms and behavior of any specific psychological disorder being displayed are correct, to avoid falling into the uncanny valley of mental behavior and producing unrealistic, strange, or laughable behavior. It can be dangerous for a student learning about a disorder, such as PTSD, if the system generates conversations and behaviors that do not match in realistic ways, resulting in poor teaching environments and outcomes. The benefits of using such technologies, both in design and in live systems, nonetheless cannot be ignored, especially as LLMs and agent architectures grow more powerful. Work on developing these systems over the years will be discussed.
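As a minimal illustration of the persona-constrained approach discussed above, the sketch below wraps a chat-completion model in a reviewed patient persona. It is a toy loop, not any of the symposium systems; the model name, prompt, and client usage are assumptions, and a fine-tuned or retrieval-augmented model would replace them in practice.

```python
# Toy sketch of an LLM-backed virtual patient (illustrative only). The persona
# prompt constrains the model to a clinically reviewed presentation; the model
# name and prompt text are assumptions, not a real training system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

persona = (
    "You are a virtual standardized patient: a 34-year-old veteran presenting "
    "with symptoms consistent with PTSD. Stay in character, answer only what "
    "is asked, and never invent symptoms outside the reviewed case file."
)
history = [{"role": "system", "content": persona}]

while True:
    question = input("Clinician: ")
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Patient:", answer)
```

Keeping the persona in a single reviewed system prompt is one simple guard against the clinically incorrect or "uncanny" behaviors the abstract warns about, though it is no substitute for expert validation of the generated dialogue.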
ABSTRACT. Symposium Title: Maintaining FAA’s Culture of Safety Through Human Factors Research and Innovation in Light of New and Emerging Aviation Technologies.
The Federal Aviation Administration (FAA) is responsible for setting the minimum requirements and guidance that prescribe the medical standards and medical certification process for civil aviation in the United States (U.S.). Historically, pilot medical certification has been presumed to enhance the safety of the U.S. National Airspace System by restricting pilots with specified medical diagnoses. However, the lack of empirical evidence linking medical diagnoses to operational performance has cast doubt on the efficacy of risk management through this mechanism. In order to foster data-informed decision-making processes, the present study aims to establish a foundational understanding of the relationship between operational performance and medically assessed factors. This ongoing research will initially focus on gathering pilot performance data from various high-demand, safety-critical flight tasks, and analyzing them alongside neurocognitive and physiological metrics. The performance of our model is expected to vary depending on the specific task and metric, underscoring the importance of tailoring assessments to the operational context. While some operational tasks align with historically assessed neurocognitive metrics, others show stronger correlations with other human performance measures; our study will employ a combination of these measures. Our objective is to shed light on the intricate complexities involved in modeling operational performance based on medically assessed precursors, thereby opening avenues for enhancing the sensitivity and specificity of the medical certification process. Future data collection efforts will be required, including cross-validation across diverse operational environments and subpopulations of pilots with varying medical diagnoses.
Challenges Associated with Cognitive Assessment for Pilot Certification
ABSTRACT. Maintaining FAA’s Culture of Safety Through Human Factors Research and Innovation in Light of New and Emerging Aviation Technologies
Human factors play a crucial role in aviation safety. According to data published by the National Transportation Safety Board (NTSB), of the 11,739 aviation accidents with completed investigations that occurred between 2012 and 2021, 7,335 involved task performance, 3,339 were influenced by actions and decisions, and 1,263 were assessed as involving psychological factors. The NTSB parses the broader factors contributing to adverse aviation incidents into subcategories; the top two subcategory factors contributing to aviation accidents were issues controlling the aircraft and problems related to decision making and judgment. Maintaining the safety of the National Air Space (NAS) necessitates that individuals in safety-sensitive roles undergo medical certification, and certain conditions require that medical certificate applicants undergo neurocognitive evaluation. The FAA has developed specific test protocols for the evaluation of individuals with known risk factors that have the potential to adversely affect cognitive skills; protocols vary depending on the issue triggering the assessment. The goal of the evaluations is to assess whether the applicant evidences neurocognitive difficulties that are aviation-relevant and that present a risk factor for safe operator performance. The evaluation is conducted by board-certified or board-eligible neuropsychologists, and results may be reviewed by FAA neuropsychologists or consultants. The FAA provides training via a yearly seminar, and individuals experienced in providing FAA neuropsychological evaluations offer consultation to examining neuropsychologists to help with challenging cases. Despite the effort put forth to improve and refine the process, there are challenges involved in defining the aeromedical significance of neurocognitive test findings and in developing decision rules that can be clearly stated, applied consistently, verified, and refined. Currently, neuropsychologists make judgements about an applicant's performance on conventional tests and on an aviation-specific cognitive measure, about the relevance of low scores on specific measures, and about how to reconcile varying task performance on tests evaluating specific neurocognitive domains. Implied are judgements that translate test performance into risk assessment. Data exist on the cognitive demands associated with aviation and the cognitive capabilities associated with those demands. Nevertheless, the continued challenge is to translate test scores into a risk assessment for adverse or unsafe events that is data-driven and that reduces the subjectivity of the evaluation process. The process of risk assessment is itself complicated when adverse events occur infrequently, when their occurrence is affected by external factors, and when potentially significant errors do not culminate in a visible event that can be incorporated into a risk assessment model. In this presentation we discuss the current approach to incorporating neurocognitive and behavioral assessments into the medical certification process and issues in decision making. We will discuss information gleaned from the study of accidents. We present decision-making models related to sensitivity and specificity, population-based probabilities, and impairment rating methods. The presentation will underscore the need for continued study of how best to define aeromedically significant cognitive deficits and how the current project being undertaken by the FAA can inform and improve the medical certification process.
Cyber issues related to statistical modeling, databasing, and analytics will be noted. The longer-term potential for implementing modern assessment methods, including function-led techniques, will be discussed.
Data-Driven Methods of Modeling Human Function in Operations
ABSTRACT. Symposium: Maintaining FAA’s Culture of Safety Through Human Factors Research and Innovation in Light of New and Emerging Aviation Technologies.
Applied psychological research has traditionally operated within a theory-driven scientific rationale, modeling itself after numerous preceding natural sciences. However, mathematically modeling the ability of humans to perform complex tasks safely is challenging due to the various factors that can affect performance at a given point in time. Hence, the development of computational models should focus on risk assessment rather than on predicting the occurrence of a specific adverse event.
The current work on Pilot Functional Capability Assessment adopts a data-driven modeling approach and serves as an exemplar of integrating empiricism-dominant methods into applied research contexts. The goal of the research is to strengthen the ties between the measures used to evaluate cognitive capabilities for medical certification and the skills necessary to operate aircraft safely under different conditions. The presentation will expound upon the process of mapping applied research objectives onto the existing Cross-Industry Standard Process for Data Mining (CRISP-DM), and will discuss the fundamental shifts in methodological outcomes, assumptions, and pitfalls within the context of functionally assessing a pilot's operational capabilities.
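To make the data-driven, risk-focused framing concrete, the sketch below shows the kind of cross-validated model this approach implies: out-of-sample risk probabilities for degraded task performance estimated from medically assessed predictors, rather than predictions of specific adverse events. The data and feature names are entirely synthetic, not FAA data.

```python
# Entirely synthetic sketch of the data-driven framing: a cross-validated
# model that outputs out-of-sample risk probabilities for degraded task
# performance from medically assessed predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))  # e.g., processing speed, working memory, HRV, age
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=300)) > 1  # degraded-performance flag

# Out-of-sample risk probabilities are the quantity of interest for screening,
# rather than a binary prediction of any single adverse event
risk = cross_val_predict(LogisticRegression(), X, y, cv=5,
                         method="predict_proba")[:, 1]
print("AUC:", roc_auc_score(y, risk))
```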
Extended Reality at the Federal Aviation Administration: Exploring Potential Use for Functional Capability Assessment
ABSTRACT. Symposium Title: Maintaining FAA’s Culture of Safety Through Human Factors Research and Innovation in Light of New and Emerging Aviation Technologies.
The Federal Aviation Administration’s (FAA) primary goal is to provide the safest, most efficient aerospace system in the world, which it often seeks to do by integrating new, promising technologies into the aviation system. Continuous advancements in extended reality (XR) technologies have enabled new applications that could play a vital role in supporting this mission. In line with ongoing efforts related to modeling operational performance under various medical and environmental conditions, we will discuss some of the FAA’s current XR capabilities and explore the anticipated future application of virtual reality (VR) technology to pilot functional capability assessment. Additionally, there will be a focus on features (e.g., integrated eye-tracking, haptic feedback solutions) and benefits (e.g., footprint, portability, and cost relative to dedicated simulators) that has the potential to make VR an ideal candidate for this purpose. We will also discuss important human factors considerations gleaned from previous XR research that will need to be assessed (e.g., simulator sickness, device ergonomics, potential field-of-view limitations) before these technologies can be safely integrated in this context. Such research might pave the way for more efficient, accessible, and empirically validated assessments of medical certification requirements.
Federal Aviation Administration Aeromedical Computerized Cognitive Screening Research: Screening Tests Selection and Development of Pilot Normative Data
ABSTRACT. This paper is part of the symposium: Maintaining FAA’s Culture of Safety Through Human Factors Research and Innovation in Light of New and Emerging Aviation Technologies.
The remarkable level of safety of aviation operations within the United States (U.S.) National Airspace System is due in part to the requirements and guidance that prescribe the medical standards and medical certification procedures for civil aviation. In the U.S., the Federal Aviation Administration (FAA) is responsible for setting the requirements and guidance that prescribe the medical standards and medical certification procedures for civil aviation. These requirements are mandated by U.S. Code Title 49 SUBTITLE VII PART A subpart iii and articulated in the Guide for Aviation Medical Examiners. Healthy cognitive functioning is critical for the successful piloting an aircraft and for maintaining a safe aerospace system. Pilots operate in a cognitively demanding environment, making decisions under conditions of unique stress. The outcome of a pilot’s decisions has the potential to impact hundreds of lives; thus, it is critical that the pilot operates at optimal cognitive ability. Not all pilots are administered a cognitive screening test as part of the physical examination process, and the FAA does not require baseline cognitive testing. The FAA requires cognitive screening only for the evaluation of pilots with known or suspected neurological or neuropsychiatric conditions associated with the potential for aeromedically significant cognitive impairment. In cases such as head trauma, stroke, encephalitis, multiple sclerosis, substance dependence or abuse, other suspected acquired or developmental conditions, and certain medications used for treatment, additional cognitive screening by a qualified psychologist or neuropsychologist, with additional training in aviation-specific topics, may be required by the FAA for pilot medical certification. The primary purpose of cognitive screening is to identify cognitive deficits that would render the pilot unsafe to perform their duties. Current research by the FAA’s Civil Aerospace Medical Institute (CAMI) seeks to identify computer-administered test batteries that may be used for cognitive screening in the pilot medical certification process, and to develop pilot normative data for those tests. Computer-administered cognitive screening tests used for medical certification must meet FAA requirements for test security, method of test administration, breadth of cognitive domains assessed, and other factors. Further, cognitive screening tests must have pilot normative data for interpretation of test results; normative data represent the range of performance on each subtest of a cognitive screening test battery for a neuropsychologically intact pilot population. This presentation provides an overview of CAMI’s ongoing, multi-year research effort, including the test selection process and a discussion of the norming effort to collect data from 960 pilots.
Transference of Emerging Aviation Functional Assessment to Commercial Space Operations
ABSTRACT. This paper is part of the symposium: Maintaining FAA’s Culture of Safety Through Human Factors Research and Innovation in Light of New and Emerging Aerospace Technologies.
Commercial spaceflight missions have been made accessible to a broad range of participants with minimal formal experience, training, or exposure to spaceflight operations. The Federal Aviation Administration (FAA) mission statement to provide an efficient and safe National Aerospace System (NAS) and encompasses the provision for guidance and regulations to ensure safety. This expands the population of interest to spaceflight participants but not spaceflight crew members or ground crew. An informed-consent regime allows them to be informed and determine if they will fly on a launch vehicle. The increasing availability of commercial spaceflight operations offered to the public necessitates an updated understanding of the human factors and medical requirements that affects an individual’s ability to perform safety-critical roles during missions as it complies with 14 CFR § 460.15. Advancements in computational models set the foundation for the development of a functional assessment tool to determine individual’s physical, cognitive, and psychological readiness to safely preform spaceflight missions.
The presentation will discuss the current theory and methodology established under the FAA Human Factors Research Pilot Functional Capability Assessment and how it informs extended research on commercial spaceflight functional assessment and medical requirements. Additionally, the presentation will review the process by which decision-making algorithms support an assessment of whether an individual has the cognitive-behavioral resources to perform in both procedural and emergency spaceflight events. In closing, considerations for emerging aerospace technologies (extended reality and virtual reality) and their implications for the proposed cyber-based Spaceflight Functional Assessment will be discussed.
ABSTRACT. Despite extensive research addressing the technical aspects of cybercrime, there is a significant gap in understanding how psychological factors impact scam compliance in differing contexts. This review evaluates studies from diverse disciplines, including psychology, criminology, and behavioral science, to identify key personality, cognitive, emotional and social factors that influence victimization. Covering a broad spectrum of fraudulent activities—including investment fraud, relationship scams, mass marketing fraud and phishing—the review aims to provide an overview of recent empirical research focusing specifically on data collected from actual scam victims. The review applied the PRISMA-P methodology to systematically search and screen literature from multiple databases, identifying 18 empirical studies. Findings revealed that personality traits such as impulsivity and trust, cognitive factors such as authority bias, and other emotional and social risk factors are recurrent themes found to influence scam compliance. However, there is considerable variability in research methodologies, scam contexts, and reporting of results. This variability underscores the need for more detailed, context-specific investigations into different scam types, as the psychological factors that influence scam compliance differ by scam type and context. The review concludes with recommendations for future research, emphasizing the importance of examining specific scam contexts and improving study designs to better understand scam compliance.
Human-Centric Approaches to Cybersecurity: Integrating Psychological Insights and Emotional Engagement
ABSTRACT. In an era where cybersecurity threats are both prevalent and evolving, the integration of human-centric approaches has become critical in fortifying defenses against cybercrime. This paper explores the intersection of psychology and cybersecurity, emphasizing the role of emotional engagement and cognitive insights in enhancing security measures.
Despite advances in technological defenses, the human element remains a significant vulnerability. Psychological factors, including bounded rationality, hyperbolic discounting, and optimism bias, often lead individuals to underestimate risks and make poor security decisions. This behavioral economic perspective highlights the need for tailored interventions that address cognitive biases and promote better decision-making in cyberspace.
The impact of cybercrime extends beyond financial losses, imposing severe psychological costs on victims and their networks. Symptoms akin to post-traumatic stress disorder (PTSD) are common among cybercrime victims, underlining the necessity for psychological support alongside technical solutions.
To combat these challenges, it is important to begin integrating immersive technologies like virtual and augmented reality (VR/AR) into cybersecurity training. By engaging users emotionally and providing experiential learning environments, VR/AR can enhance the effectiveness of training programs, making them more compelling and memorable. Additionally, AI-driven worlds may further improve individual training. Gamification further contributes to this approach by transforming mundane security practices into engaging activities, thereby increasing user participation and retention.
The psychological well-being of cybersecurity professionals is crucial for maintaining robust defenses. The high-stress environment and constant vigilance required in cybersecurity roles lead to significant burnout and fatigue, impairing professionals' ability to respond to threats. Addressing these issues through stress management programs, psychological support, and systemic changes in organizational practices is essential for sustaining effective cybersecurity teams.
Moving forward, a comprehensive cybersecurity strategy must integrate technological advancements with human-centric approaches that leverage psychological insights and emotional engagement. By understanding and addressing the human factors in cybersecurity, we can build more resilient defenses and foster a culture of security awareness and psychological resilience.
Beyond Password Protection: How cultural values, perceived vulnerability and security self-efficacy influence cybersecurity behavior in the United Kingdom and Saudi Arabia
ABSTRACT. Concerns around cybersecurity are increasing globally, and previous research has emphasized the human factors that contribute to cybersecurity risks. However, cultural differences have been understudied in the context of such human factors, particularly in relation to moving beyond reductionist conceptualisations of culture or relying on nationality as a measure of cultural difference. With individuals’ behaviours and perceptions shaped by collective norms as well as psychological variables, this is an important area for further investigation. The current study examined the role of cultural values (using Schwartz’s Theory of Basic Human Values) and perceived vulnerability and security self-efficacy (linked to Protection Motivation Theory) alongside demographic variables in relation to cybersecurity behaviours among individuals in the United Kingdom (n=201) and Saudi Arabia (n=211). Participants completed self-reported measures via an online survey. A hierarchical multiple regression analysis indicated that demographic variables (gender, country and age) explained 8.7% of the variance in cybersecurity scores, which increased to 43.2% of variance explained with the inclusion of the main study variables. Results showed that cultural values linked to lower self-enhancement (and higher self-transcendence), as well as perceived vulnerability and security self-efficacy, predicted cybersecurity behaviours among individuals across distinct cultural and country contexts. Female participants also reported higher levels of cybersecurity behaviours overall. The findings are important in showing the relative importance of both cultural and psychological factors in relation to cybersecurity, which has implications for informing more relevant intervention and prevention strategies.
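For readers who want to see the shape of the analysis, the following is a minimal sketch of a two-step hierarchical regression of the kind described above; the file and column names are hypothetical, and this is not the authors' actual code:

```python
# Minimal sketch of a two-step hierarchical regression (hypothetical column
# names; not the authors' actual analysis code).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey.csv")  # hypothetical file: one row per participant

# Step 1: demographic variables only.
step1 = smf.ols("cybersecurity ~ gender + country + age", data=df).fit()

# Step 2: add cultural values and Protection Motivation Theory variables.
step2 = smf.ols(
    "cybersecurity ~ gender + country + age"
    " + self_enhancement + self_transcendence"
    " + perceived_vulnerability + security_self_efficacy",
    data=df,
).fit()

print(f"Step 1 R^2: {step1.rsquared:.3f}")           # ~.087 in the study
print(f"Step 2 R^2: {step2.rsquared:.3f}")           # ~.432 in the study
print(f"Delta R^2:  {step2.rsquared - step1.rsquared:.3f}")
```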
Ongoing research into the behaviors and perceptions of ethical hackers
ABSTRACT. The issue to be discussed:
Despite substantial advancements in product innovation and investments in cybersecurity measures, organizations continue to face challenges in implementing these measures effectively. The perpetual struggle to fortify digital domains against evolving threats highlights a significant vulnerability in cybersecurity strategies: the human element. A substantial portion of cybersecurity breaches can be attributed to human error or inadequate adherence to established security protocols. This persistent challenge underscores the critical need to understand the complex dynamics influencing the behavior of cybersecurity practitioners, also known as ethical hackers. Understanding these dynamics is vital, as these professionals are central to the proactive defense against cyber threats.
Brief background:
Cybersecurity has developed from a relatively obscure concern to a globally recognized risk, with estimates projecting cybercrime costs to reach $10.5 trillion annually by 2025. The human factor is a significant component of cyber risk, accounting for up to 95% of successful cyber-attacks due to errors or non-compliance by an organization's own staff. This scenario necessitates a deeper exploration into the psychological underpinnings that influence cybersecurity practitioners' acceptance and adherence to security measures. Traditional approaches have predominantly focused on technological solutions and policy frameworks. However, these strategies often overlook the individual and collective motivations of the practitioners who implement and manage these cybersecurity measures.
What my research is doing about it:
My ongoing doctoral research is dedicated to unraveling the underlying factors that influence the acceptance behaviors of cybersecurity practitioners, also known as ethical hackers. In August 2024, I will employ a mixed-methods approach that combines qualitative and quantitative research techniques (e.g., Q Methodology) at the world’s largest hacker conference, DefCon32. The study focuses on how practitioners' perceptions of hacking and cybersecurity influence their operational effectiveness. By analyzing factors such as individual behavioral influences, self-determination, personality traits, and organizational culture, the research seeks to develop a comprehensive understanding of what drives or deters cybersecurity practices among these professionals. This investigation includes examining established models such as the Technology Acceptance Model 3 (TAM3) and theories of behavioral intention to provide a nuanced analysis of the motivators and demotivators in cybersecurity behavior. I will discuss the research and preliminary outcomes at the 27th Annual Cyberpsychology Conference, if selected.
Implications/Outcomes:
The findings from this research are poised to offer significant contributions to both organizational and societal security. By identifying key determinants that influence cybersecurity practitioners' behavior, the study aims to inform the development of more effective cybersecurity strategies that account for the human factors influencing security practices. This understanding can lead to enhanced implementation of security measures, reduced vulnerability to cyber threats, and ultimately, a more robust digital security infrastructure. Additionally, the insights gained could contribute to the broader field of cyberpsychology, providing valuable knowledge that could influence future academic research and practical applications in cybersecurity policies and practices. The ultimate goal is to foster a cybersecurity culture that enhances practitioners' acceptance of and adherence to security protocols, thus strengthening the overall security posture of organizations.
I have spoken on the topic of my doctoral research into cybersecurity behaviors and the psychology of hackers; however, the nature and preliminary outcomes of my DefCon study will be new.
This could be a poster or oral presentation, though an oral presentation is preferred.
Cyberpsychology is not just cybersecurity, but there’s no cybersecurity without cyberpsychology
ABSTRACT. Our shared, digitally connected world keeps people connected despite demographic, geographic, and a wealth of potential cultural obstacles. However, everything that exists in cyberspace depends on both humans and hardware operating in the physical world that supports the online world. As in the physical world going back centuries, the digital world is made up of actors ranging from the nefarious to the altruistic: from black-hat to white-hat to grey-hat hackers, from the analysts whose job it is to maintain online safety, to, ultimately, the end user. As with most tools invented to increase productivity and foster pro-social benefits to society, bad actors will inevitably try to exploit the technologies people come to rely on to fulfill daily life tasks. From social engineering to cleverly disguised phishing attempts, misinformation, and fake news, combatting these cybersecurity risks is often accomplished through a combination of automated and human interventions.
The collective ‘we’ put our faith and trust in technologies we barely understand and whose potential consequences we cannot fathom. Beyond asking whether a piece of software delivers the anticipated result in a timely fashion, how is it that over the past 10-20 years the majority of people have willingly allowed the companies that develop algorithms, curate content, and profit from our data to remain lax about the safeguards they put in place? Most people regularly use digital tools such as social media and smartphone apps, and it is common to acknowledge and agree to terms of usage with no more than a cursory glance at what is actually being agreed to. With access to cyberspace and the conveniences it affords (e.g., banking, shopping, work, email) comes increased opportunity for malicious actors to prey upon people who exist in the physical world but operate with a casual recklessness regarding cybersecurity.
What makes the online world function, and how it is policed and maintained, are questions that go beyond most people’s knowledge or interest. The fast pace of life demands that people attend to other daily tasks, which can divert attention away from online safety best practices. Even without formally acknowledging our own culpability in the process, it is a safe assumption that many people either overtly or covertly hope their data are being safeguarded by corporations acting in socially responsible ways. Not only is this not always the case, but it is also time-consuming to stay aware of and maintain one's own cybersecurity. Yet we often find out the hard way that it is just as time-consuming to deal with the fallout when we, our employers, or larger macrosystems of society are negatively impacted by cybercrime.
Research suggests that people become more aware of cybersecurity threats and more engaged in best practices after being victimized online. Why, then, do people from the top down in an organization not maintain a greater sense of conscientiousness, when it is well known that cybersecurity breaches occur regularly at individual, micro, and macro levels alike? Why are holes in even the most complex and comprehensive cybersecurity infrastructures uncovered and exploited, despite both supervised and unsupervised safeguards? Cyberpsychology can help answer these questions and provide insight and advice on how to overcome human-based exploits. Psychological theory can help conceptualize the dynamic between digital technology, cybersecurity, and human behavior, since education alone may not produce the expected behavioral changes in individuals, governments, or businesses. This reality threatens to compromise everyone's hope for a safe and trustworthy cyberspace.
Various perceptual-motor styles during locomotion in virtual reality
ABSTRACT. Human perceptual-motor style refers to the individual variations in posture and movement during sensorimotor tasks, highlighting the adaptability and variability inherent in human behavior. This concept reflects the intricate interplay between perception, cognition, and motor control, all of which can be significantly influenced by environmental factors. Height exposure is one such factor that presents a unique challenge to motor control, eliciting a wide range of responses in individuals from mild discomfort to severe anxiety. These responses are categorized into three main types: physiological height imbalance, visual height intolerance, and acrophobia. Each of these responses can profoundly affect motor performance, resulting in altered postural control, reduced oculomotor and head movements, and changes in gait characteristics.
In our previous research, we focused on how perceptual-motor style is modulated during locomotion at height using a virtual reality (VR) environment. We identified significant variability among participants in both static markers (such as stable configurations of the head, trunk, and limbs) and dynamic markers (including jerk, entropy, sample entropy, and the Two-Thirds Power Law). Notably, 61% of participants exhibited changes in at least one variable related to their dynamic control of locomotion on the ground after being exposed to height.
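For reference, the Two-Thirds Power Law mentioned above is conventionally stated in the motor-control literature (this formulation is standard background, not taken from the abstract) as

$$A(t) = K\,C(t)^{2/3}$$

where $A(t)$ is the angular velocity of the movement, $C(t)$ the curvature of its path, and $K$ a velocity gain factor, so that movement slows in more highly curved path segments.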
The present study aims to explore how height exposure influences different perceptual-motor styles, with a specific focus on understanding the physiological mechanisms behind these changes, particularly the role of muscle activity. We recruited 22 individuals with no prior fear of heights or acrophobia. We used a VR video game to raise the level of anxiety during locomotion, with a view to later using the same protocol to detect and eventually treat fear of falling and falls in older people. In this VR game, “Richie's Plank”, the person walks along a narrow plank placed between two buildings at the height of the 30th floor. Using machine learning techniques, we analyzed 12 EMG features across 15 muscle groups, assessing how these patterns changed across different movement directions.
Our findings revealed significant changes in EMG patterns between ground-level and height conditions, with the most pronounced alterations observed in the upper body muscles. This suggests a gradient of EMG changes from lower to upper body segments as participants adjusted to height exposure. Applying a machine learning model to our EMG data resulted in high classification accuracies, with the best models achieving up to 88% accuracy in distinguishing between height and ground conditions. This underscores the potential of using machine learning techniques to identify and predict individual differences in perceptual-motor styles based on neuromuscular data.
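As an illustration of the classification step, the following is a minimal sketch of distinguishing height from ground conditions using EMG features; the abstract does not name the estimator, so a random forest and synthetic data shapes are assumed here:

```python
# Minimal sketch of classifying height vs. ground conditions from EMG
# features. The estimator and data are illustrative assumptions, not the
# authors' implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200
X = rng.normal(size=(n_trials, 12 * 15))  # 12 EMG features x 15 muscle groups
y = rng.integers(0, 2, size=n_trials)     # 0 = ground, 1 = height

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean CV accuracy: {scores.mean():.2f}")  # up to ~0.88 reported above
```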
In conclusion, this study provides new insights into the neuromuscular adaptations that occur during locomotion in height-induced challenging environments. The significant individual variability observed in EMG responses highlights the complex nature of perceptual-motor style adaptation and suggests that future research should continue to explore the diverse ways in which individuals respond to environmental challenges. Understanding these mechanisms has important implications for fields such as rehabilitation, sports science, and virtual reality applications, where tailored interventions based on individual perceptual-motor styles could optimize performance.
Multimodal assessment of anxiety: a complementary approach combining IoT and psychometrics
ABSTRACT. Several authors indicate that anxiety must be assessed using three different systems: physiological, psychological and behavioral (Lang, 1968; Barlow, 2000; Lawyer & Smitherman, 2004).
The integration of physiological data from wearable devices (Gomes et al., 2023) enables a more objective measurement of anxiety. Contextual factors are equally important, as they can influence psychological states and psychometrics (Reynolds & Suzuki, 2012; Lewin, 1942, 2013; Kaplan & Kaplan, 1989). To observe these phenomena, the use of virtual reality and the assessment of the feeling of presence (Haddouk & Missonnier, 2020) seem particularly suitable in cyberpsychology.
In this scientific context, we pose the following research question: What is the impact of variations in the virtual reality (VR) environment during a psychometric assessment on the anxiety state of young adults? We hypothesize that variation in the environment will lead to variation in the psychological and physiological state of anxiety.
We carried out an intra-individual longitudinal follow-up of healthy individuals aged 18 to 30 over four sessions, three of which were in virtual reality with anxiogenic (An), neutral (Ne) and soothing (So) environments.
The first session after participants’ recruitment consisted of a semi-structured inclusion interview, completion of personality (BFI-10, Courtois et al., 2020) and anxiety (STAI, Spielberger et al., 1983; HADS, Zigmond & Snaith, 1983) questionnaires to establish baseline measurements, evaluation of 45 virtual environments (VEs) to select suitable ones for the protocol, and a post-immersion semi-structured interview.
Participants then attended three VR sessions every two weeks, with session order (An, Ne, So) varying according to their assigned group. During VR immersion, participants completed the BFI-10, STAI, HADS questionnaires, and part of the ITC-SOPI (Lessiter et al., 2001). After immersion, they answered questions on emotional valence and intensity using a Likert scale (0-10), cybersickness (SSQ, Bouchard et al., 2011), and completed the remaining ITC-SOPI.
We performed both group-level and intra-individual analyses of longitudinal psychometric and physiological data, including case studies. At the group level, we used a Wilcoxon test to assess the effect of session on anxiety state during psychometric evaluation (a minimal test sketch follows these results), revealing:
- The HAD-A score was significantly higher in the session with the anxiogenic environment than in the session with the soothing environment (p = .004).
- The STAI score was significantly higher in the session with the anxiogenic environment than in the session with the soothing environment (p < .01).
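For illustration, here is a minimal sketch of such a comparison, assuming a paired (signed-rank) Wilcoxon design; the score arrays are hypothetical placeholders, not the study's data:

```python
# Minimal sketch of a paired Wilcoxon comparison between two VR sessions.
# Values are hypothetical; one STAI score per participant per condition.
import numpy as np
from scipy.stats import wilcoxon

stai_anxiogenic = np.array([52, 47, 58, 44, 61, 49, 55, 50])
stai_soothing   = np.array([38, 35, 41, 36, 45, 37, 40, 39])

stat, p = wilcoxon(stai_anxiogenic, stai_soothing)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```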
For intra-individual analyses, we examined two participants with similar anxiety profiles and scores, both exhibiting high extraversion and a tendency to feel anxious with difficulty managing it (BFI-10). Anxiety scores and heart rate data are presented in the following table, with session order corresponding to the columns:
We highlighted in bold the elements corresponding to a definite anxiety symptomatology (HAD-A), moderate anxiety (37-45), and strong anxiety (>45; STAI). At the cardiac level, the LF/HF ratio indicates whether the autonomic nervous system is predominantly sympathetic (LF, stress/physiological reaction) or parasympathetic (HF, relaxation/physiological rest). A ratio greater than 1 indicates sympathetic predominance.
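As background on this cardiac measure, the following is a minimal sketch of how an LF/HF ratio is conventionally computed from an evenly resampled inter-beat (RR) series, using the standard HRV bands (LF: 0.04-0.15 Hz; HF: 0.15-0.40 Hz); the signal below is synthetic and purely illustrative:

```python
# Minimal sketch of an LF/HF ratio from a synthetic RR series using the
# conventional HRV bands; real analyses would use recorded inter-beat data.
import numpy as np
from scipy.signal import welch

fs = 4.0                        # Hz: evenly resampled RR signal
t = np.arange(0, 300, 1 / fs)
# Synthetic RR series with an LF (0.10 Hz) and an HF (0.25 Hz) component.
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.10 * t) + 0.02 * np.sin(2 * np.pi * 0.25 * t)

freqs, psd = welch(rr - rr.mean(), fs=fs, nperseg=256)
lf_band = (freqs >= 0.04) & (freqs < 0.15)
hf_band = (freqs >= 0.15) & (freqs < 0.40)
lf = np.trapz(psd[lf_band], freqs[lf_band])
hf = np.trapz(psd[hf_band], freqs[hf_band])
print(f"LF/HF = {lf / hf:.2f}")  # > 1 suggests sympathetic predominance
```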
Our results suggest, at the group level, that the environment used during psychometric assessment influences psychological scores and states. Intra-individual analyses reveal interesting physiological reactions and differences between similar anxiety profiles, which may not be detected by psychometrics alone but could be valuable for practitioners.
Our findings should be considered with the limitations of our study, especially concerning the choice of neurophysiological sensors. However, this exploratory research proposes a new approach to studying human behavior, combining subjective and physiological data in a longitudinal follow-up. Using VR and psychometrics to assess anxiety directly in context while adopting a multimodal approach could help practitioners to study anxiety mechanisms more precisely and understand individual variations in anxiety manifestations.
Real-time estimation of workload at complex man-machine interfaces: the helicopter model
ABSTRACT. Although ill-defined, mental workload (MWL) remains a key concept in several industrial fields, such as aeronautics. According to the ergonomic conception, an optimal MWL is reached when the cognitive resources an operator uses to perform a given task match the operational requirements (task load). When task demands exceed the operator's available cognitive resources, cognitive overload arises, increasing the risk of work-related fatigue or accidents.
In this study, we introduce a novel predictive model to estimate the MWL of helicopter pilots in ecological conditions. The proposed model is a data-driven machine learning approach that determines the most accurate MWL predictors from a set of measured signals.
The model was applied to two realistic scenarios with heterogeneous expected MWL in a full-flight simulator: nine male professional helicopter pilots (age: 39 ± 5.7) were asked to self-evaluate their workload at key moments during the experiment while physiological and operational parameters were recorded. Using a leave-one-out paradigm, our algorithm shows good performance (AUC = 0.805 ± 0.085) and outputs an interpretable list of the most relevant features for MWL estimation.
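For illustration, here is a minimal sketch of a leave-one-pilot-out evaluation of this kind; the estimator choice, feature set, and labels are assumptions for the sketch, not the authors' implementation:

```python
# Minimal sketch of a leave-one-pilot-out evaluation with per-fold AUC.
# Model and data are illustrative; the abstract does not name the estimator.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(1)
n = 9 * 40                                 # 9 pilots x 40 time windows each
X = rng.normal(size=(n, 20))               # operational/physiological features
y = rng.integers(0, 2, size=n)             # high vs. low self-reported MWL
groups = np.repeat(np.arange(9), 40)       # pilot identifier per sample

aucs = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    model = GradientBoostingClassifier().fit(X[train], y[train])
    aucs.append(roc_auc_score(y[test], model.predict_proba(X[test])[:, 1]))
print(f"AUC = {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")
```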
In contrast to previous research, our findings show that a surprisingly good evaluation of MWL can be achieved with little to no information from the operator’s physiological records. That is, our machine learning model was able to correctly predict the pilots’ mental workload using only data coming from the device they were operating. Our results pave the way toward intelligent systems able to monitor the MWL of operators in real time, which should make online adaptive guidance possible (e.g., automatic adjustments of the information displayed).
The absence of physiological parameters among the model's favored features could mean that the human body is simply too complex to measure in this way. Can we really be sure that the recorded physiological parameters are shaped by mental workload alone? It is entirely possible that at some point in the experiment, some (or even all) pilots were thinking about something else rather than being fully focused on their tasks: perhaps they felt thirsty or hungry, or thought about their children or partner. It is even possible that, some time after failing a given task, they revisited it and told themselves, “I should have done it that way instead.” Hence, even when a pilot is fully focused on the experiment, there may still be pilot-specific noise in the physiological parameters. Measuring the machine instead of the operator therefore gives incomplete information, but it is more relevant to aeronautics, a domain in which safety is paramount and the question “is the aircraft being flown properly?” matters more than “is it difficult?”.
ABSTRACT. Presence is a multifaceted concept that transcends the mere physical act of being in a space. It’s about the experiential, psychological, and emotional aspects of being truly "there," whether in a physical, virtual, or augmented environment. As we gather at this international conference workshop to delve into the intricacies of presence, it’s essential to reflect on both what we know and, perhaps more importantly, what we don’t know about this elusive yet critical concept.
Presence, in its simplest form, is the sense of being in an environment, whether it is real, virtual, or imagined. In physical terms, presence is often taken for granted—being in a room, at an event, or in a conversation implies presence. However, in digital spaces, presence becomes more complex. Virtual presence, for example, can be described as the psychological state in which virtual experiences are perceived as actual experiences. It’s about how immersive technology—be it virtual reality (VR), augmented reality (AR), or mixed reality (MR)—can trick the mind into feeling as though it is physically present in a non-physical world.
Through years of research and technological advancements, we have gained significant insights into presence. We understand that presence is influenced by several factors, including the quality of the sensory information provided (visual, auditory, and haptic feedback), the user’s ability to interact with the environment, and the degree of immersion provided by the technology. For instance, higher resolution displays, spatial audio, and responsive interfaces contribute to a stronger sense of presence in virtual environments.
Moreover, we know that presence is not just about sensory input but also about cognitive and emotional engagement. A user is more likely to experience a deep sense of presence when they are cognitively and emotionally invested in the environment. This is why storytelling and narrative are powerful tools in creating presence; they provide context and meaning, making the experience more immersive.
Presence is also social. In both physical and virtual environments, the presence of others significantly impacts our sense of being. Social presence, the feeling of being with others, enhances the overall experience of presence, particularly in collaborative and communicative scenarios. This is why virtual meetings or multiplayer games can feel more "real" when we perceive others as genuinely being there with us.
Despite these advances, there remains much we don’t understand about presence. One of the major gaps in our knowledge is the subjective nature of presence. While we can measure physiological responses and observe behavior, the internal experience of presence is deeply personal and varies significantly from one individual to another. Why do some users experience strong presence in certain environments while others do not? The psychological and neural underpinnings of these differences are still not fully understood.
12:30-14:00 Lunch
Lunch will be held in Omni Hotel 2nd Floor - Rooms 6-8.
Join us for the Closing Ceremonies, where we’ll celebrate with Cocktails and Dinner following the remarks and session awards presentation. The night will be made even more special with a captivating performance by Scott Sixkiller Sinquah, a two-time World Champion Hoop Dancer, showcasing the rich traditions of Indigenous culture.