09:50 | Effects of Patient Care Assistant Embodiment and Computer Mediation on User Experience ABSTRACT. Providers of patient care environments face increasing demand for technological solutions that can improve patient satisfaction while remaining cost effective and practically feasible. Recent developments in smart hospital rooms and smart home care environments have immense potential to leverage advances in technologies such as Intelligent Virtual Agents, Internet of Things devices, and Augmented Reality to enable novel forms of patient interaction with caregivers and their environment. In this paper, we present a human-subject study in which we compared four types of simulated patient care environments across a range of typical tasks. In particular, we tested two forms of caregiver agency (a real person or a virtual agent) and two forms of caregiver embodiment (disembodied voice or embodied interaction). Our results show that, as expected, a real caregiver yields the most positive user perceptions, but an embodied virtual assistant is also a viable option for patient care environments, providing significantly higher social presence and engagement than the voice-only interaction known from current digital assistants. We discuss the implications for practitioners in this field. |
10:15 | Collaborative Virtual Reality for Laparoscopic Liver Surgery Training PRESENTER: Vuthea Chheang ABSTRACT. Virtual reality (VR) has been used in many medical training systems for surgical procedures. However, current systems are limited by inadequate interactions, restricted possibilities for patient data visualization, and a lack of collaboration. We propose a collaborative VR system for laparoscopic liver surgical planning and simulation. Medical image data is used for model visualization and manipulation. Additionally, laparoscopic surgical joysticks allow a camera assistant to cooperate with an experienced surgeon in VR. Continuous clinical feedback led us to optimize the visualization, synchronization, and interactions of the system. Laparoscopic surgeons were positive about the system's usefulness, usability, and performance. We also discuss limitations and potential for further development. |
10:40 | AR food changer using deep learning and cross-modal effects ABSTRACT. We propose an AR application that changes the appearance of food without AR markers by applying machine learning and image processing. Modifying the appearance of real food is difficult because food shapes are irregular and deform during eating. We therefore developed a real-time object region extraction method that combines two approaches in a complementary manner to extract food regions with high accuracy and stability. These approaches are based on color and edge information processing with a deep learning module trained on a small amount of data. In addition, we implemented several novel methods to improve the accuracy and reliability of the system. We then conducted an experiment whose results show that taste and oral texture were affected by visual textures. Our application changes not only the appearance of real food in real time but also its perceived taste and texture; in this sense, it can be termed an "AR food changer". |
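The complementary fusion of color and edge cues with a learned module that this abstract describes can be sketched roughly as below. This is an illustrative toy, not the authors' pipeline: the thresholds, the green-channel gradient as an edge proxy, and the stand-in `learned_prob` map (a placeholder for the deep learning module's output) are all assumptions.

```python
import numpy as np

def food_mask(rgb, learned_prob, color_lo=(120, 40, 20), edge_thresh=0.2):
    """Fuse a crude color mask, an edge-derived mask, and a learned
    probability map into a single food-region mask (illustrative only)."""
    rgb = np.asarray(rgb, dtype=float)
    # Color cue: pixels above assumed per-channel lower bounds.
    color = np.all(rgb >= np.array(color_lo), axis=-1)
    # Edge cue: gradient magnitude of the green channel as a cheap proxy.
    gy, gx = np.gradient(rgb[..., 1] / 255.0)
    edges = np.hypot(gx, gy) > edge_thresh
    # Complementary fusion: keep pixels the (hypothetical) network believes
    # in, and let the color cue fill regions the edge cue misses, and
    # vice versa.
    return (learned_prob > 0.5) & (color | edges)
```

In the actual system the learned map would come from the paper's small-data deep learning module; here it simply gates the union of the two hand-crafted cues.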
11:05 | Measuring User Responses to Driving Simulators: A Galvanic Skin Response Based Study ABSTRACT. Simulator technology has become popular for training, investigating driving activity, and performing research, as it is a suitable alternative to actual field studies. The transferability of results from driving simulators to the real world is a critical issue, both for later real-world risks and for the ethics of experiments. Moreover, researchers must trade off simulator sophistication against the cost of achieving a given level of realism. This study is a first step towards assessing the plausibility of driving simulator configurations of varying verisimilitude using drivers' galvanic skin response (GSR) signals, a widely used indicator of behavioural response. By analyzing GSR signals in a simulation environment, our results aim to support or contradict the use of simple low-fidelity driving simulators. We investigate GSR signals of 23 participants performing virtual driving tasks in 5 different simulation configurations. A number of features are extracted from the GSR signals after data preprocessing. With a simple neural network classifier, the prediction accuracy for distinguishing simulator configurations reaches up to 90% during driving. Our results suggest that participants are more engaged when realistic controls are used in normal driving, and are less affected by visible context in emergency situations. The implications for future research are that realistic controls are important for emergency situations, where research can be conducted with simple simulators in lab settings, whereas research on normal driving should be conducted with full context in a real driving setting. |
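As an illustration of the kind of windowed GSR feature extraction this abstract mentions before classification: the paper's exact feature set, sampling rate, and window length are not given here, so `FS`, `win_s`, and the four features below are assumptions.

```python
import numpy as np

FS = 4  # assumed GSR sampling rate in Hz

def gsr_features(signal, win_s=10):
    """Split a 1D GSR trace into non-overlapping windows and compute
    simple per-window features (illustrative feature set)."""
    win = win_s * FS
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = np.asarray(signal[start:start + win], dtype=float)
        dw = np.diff(w)
        # Count local maxima as a crude proxy for phasic GSR responses.
        peaks = int(((w[1:-1] > w[:-2]) & (w[1:-1] > w[2:])).sum())
        feats.append([
            w.mean(),   # tonic skin conductance level
            w.std(),    # variability within the window
            dw.max(),   # steepest rise (onset of a phasic response)
            peaks,      # number of local peaks
        ])
    return np.asarray(feats)
```

A feature matrix of this shape would then feed a small neural network classifier over the simulator-configuration labels, as the study describes.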
14:00 | Immersive Analysis of 3D Multi-Cellular In-Vitro and In-Silico Cell Cultures PRESENTER: Andreas Knote ABSTRACT. Immersive visualization has become affordable for many laboratories and researchers with the advent of consumer virtual reality devices. This paper introduces two corresponding scenarios: (1) modeling biological simulations of multi-cellular tumor spheroids, and (2) analysing spatial features in fluorescence microscopy data of organoids. Based on these, we derive a list of functional requirements for an immersive workbench that integrates both image data analysis and modeling perspectives. Three existing, exploratory prototypes are presented and discussed. Finally, we propose two specific applications of immersive technology to support the targeted user groups' research: one for the collaborative definition of segmentation "ground truth", and one for the analysis of spatial features in organoids. |
14:25 | Immersive Analytics of Large Dynamic Networks via Overview and Detail Navigation PRESENTER: Johannes Sorger ABSTRACT. Analysis of large dynamic networks is a thriving research field, typically relying on 2D graph representations. The advent of affordable head-mounted displays, however, has sparked new interest in the potential of 3D visualization for immersive network analytics. Nevertheless, most solutions do not scale well with the number of nodes and edges, and rely on conventional fly-through or walk-through navigation. In this paper, we present a novel approach for the exploration of large dynamic graphs in virtual reality that interweaves two navigation metaphors: overview exploration and immersive detail analysis. We thereby exploit the potential of state-of-the-art VR headsets, coupled with a web-based 3D rendering engine that supports heterogeneous input modalities, to enable ad-hoc immersive network analytics. We validate our approach via a performance evaluation and a case study with experts analyzing a co-morbidity network. |
14:50 | Using Visualization of Convolutional Neural Networks in Virtual Reality for Machine Learning Newcomers ABSTRACT. Software systems and components are increasingly based on machine learning methods, such as Convolutional Neural Networks (CNNs). Thus, there is a growing need for common programmers and machine learning newcomers to understand the general functioning of these algorithms. However, as neural networks are complex in nature, novel means of presentation are required to enable rapid access to their functionality. To that end, we examine how CNNs can be visualized in Virtual Reality (VR), as it offers the opportunity to focus users on content through effects such as immersion and presence. In a first exploratory study, we confirmed that our visualization approach is both intuitive to use and conducive to learning. Moreover, users indicated an increased motivation to learn due to the unusual virtual environment. Based on our findings, we propose a follow-up study that specifically compares the benefits of the virtual visualization approach to a traditional desktop visualization. |
15:15 | Coordinate: A spreadsheet-programmable augmented reality framework for immersive map-based visualizations ABSTRACT. Augmented reality devices are opening up a new design space for immersive visualizations. 3D spatial content can be overlaid onto existing physical visualizations for new insights into the data. We present Coordinate, a collaborative analysis tool for augmented reality visualizations of map-based data, designed for mobile devices. Coordinate pairs a spreadsheet-programmable web interface with a contemporary augmented reality infrastructure to create an easy-to-use tool that can provide spatial information to multiple users. Coordinate offers an immersive visualization experience that seeks to enrich presentations for business, education, and scientific discussions. |
15:30 | DatAR: Your Brain, your Data, on your Desk - A Research Proposal ABSTRACT. We present a research proposal that investigates the use of 3D representations in Augmented Reality (AR) to allow neuroscientists to explore the literature they wish to understand for their own scientific purposes. Neuroscientists need to identify potential real-life experiments that provide the most information for their field with the minimum use of limited resources. This requires understanding both the already-known relationships among concepts and those that have not yet been discovered. Our assumption is that providing overviews of the correlations among concepts through semantic graphs will allow neuroscientists to better understand the gaps in their literature and more quickly identify the most suitable experiments to carry out. We will identify candidate 3D semantic graph visualizations and improve upon these for a specific search task. We will also investigate the utility of different clustering algorithms for "summarizing" the large number of relationships in the literature. We describe our planned prototype 3D AR implementation and the directions we intend to explore. |
14:00 | Introduction (by workshop co-chairs) PRESENTER: Zerrin Yumak |
14:10 | Expressive Body Movement in VR: Why it Matters, What Matters and How We Get There ABSTRACT. Virtual Reality, and XR in most of its manifestations, allows us to project people and avatars into the same shared 3D environment. This opens new possibilities for character-based technologies to operate as powerful communication tools. Realizing this potential, however, requires getting the details right, as people are highly trained observers of nonverbal communication. This talk will begin by making the case for embodiment and its power in VR. I'll then summarize work we have done over the last few years aimed at understanding how movement variations correlate with people's perceptions of characters. Finally, I will chart possible technical paths forward in the continued development of expressive character technologies. |
15:05 | Data-driven Gaze Animation using Deep Learning |
15:25 | Ubiquitous Virtual Humans: A Multi-Platform Framework for Embodied AI Agents in XR ABSTRACT. We present an architecture and framework for the development of virtual humans for a range of computing platforms, including mobile, web, Virtual Reality (VR) and Augmented Reality (AR). The framework uses a mix of in-house and commodity technologies to support audio-visual sensing, speech recognition, natural language processing, nonverbal behavior generation and realization, text-to-speech generation, and rendering. This work builds on the Virtual Human Toolkit, which has been extended to support computing platforms beyond Windows. The resulting framework maintains the modularity of the underlying architecture, allows re-use of both logic and content through cloud services, and is extensible by porting lightweight clients. We present the current state of the framework, discuss how we model and animate our characters, and offer lessons learned through several use cases, including expressive character animation in seated VR, shared space and navigation in room-scale VR, autonomous AI in mobile AR, and real-time user performance feedback based on mobile sensors in headset AR. |
16:15 | Civil War Battlefield Experience: Historical event simulation using Augmented Reality Technology ABSTRACT. In recent years, with the development of modern technology, Virtual Reality (VR) has proven an effective means of entertainment and of encouraging learning. Users immerse themselves in a 3D environment to experience situations that are very difficult or impossible to encounter in real life, such as volcanoes, ancient buildings, or events on a battlefield. Augmented Reality (AR), on the other hand, takes a different approach by allowing users to remain in their physical world while virtual objects are overlaid on physical ones. In education and tourism, VR and AR are becoming platforms for student learning and tourist attractions. Although several studies have been conducted to promote cultural preservation, they mostly focus on VR for historical building visualization. The use of AR for simulating an event is relatively uncommon, especially for a battlefield simulation. This paper presents a work in progress, specifically a web-based AR application that enables both students and tourists to witness a series of battlefield events at the Battle of Palmito Ranch, located near Brownsville, Texas. With markers embedded directly into the printed map, users can experience the last battle of the American Civil War. |
16:35 | Crowd and procession hypothesis testing for large-scale archaeological sites ABSTRACT. Our goal is to construct parameterized, spatially and temporally situated simulations of large-scale public ceremonies. Especially in prehistoric contexts, these activities lack precise records and must be hypothesized from material remains, documentary sources, and cultural context. Given the number of possible variables, we are building a computational system, SPACES (Spatialized Performance And Ceremonial Event Simulations), that rapidly creates variations that may be assessed both visually (qualitatively) and quantitatively. Of particular interest are processional movements of crowds through a large-scale, navigationally complex, and semantically meaningful site, while exhibiting individually contextual emotional, performative, and ceremonially realistic behaviors. |
16:55 | Implementing Position-Based Real-Time Simulation of Large Crowds ABSTRACT. In recent years, many methods have been proposed for simulating crowds of agents. Regrettably, not all methods remain computationally scalable as the number of simulated agents grows. Such scalability is particularly important for virtual production, gaming, and immersive reality platforms. In this work, we provide an open-source implementation of the recently proposed position-based dynamics approach to crowd simulation. Position-based crowd simulation has been shown to run in real time and to scale to crowds of up to 100K agents, while retaining interesting dynamic agent and group behaviors. We provide both a non-parallel and a GPU-based implementation. We demonstrate our implementation on several scenarios, observing scalability as well as visually realistic collective behavior. |
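The core loop of position-based crowd simulation can be sketched minimally as below. This is a naive O(n²) toy for intuition, not the scalable GPU implementation the paper provides; the agent radius `R`, timestep `DT`, velocity blend, and iteration count are all assumed values.

```python
import numpy as np

R, DT = 0.25, 0.05  # assumed agent radius (m) and timestep (s)

def step(pos, vel, goal_vel, iterations=4):
    """Advance a 2D crowd one frame with position-based dynamics:
    predict positions, project out overlaps, recover velocities."""
    # 1. Blend current velocities toward each agent's goal and predict.
    vel = 0.9 * vel + 0.1 * goal_vel
    pred = pos + DT * vel
    # 2. Iteratively project pairwise non-overlap constraints by moving
    #    both agents apart along their separation direction.
    n = len(pred)
    for _ in range(iterations):
        for i in range(n):
            for j in range(i + 1, n):
                d = pred[i] - pred[j]
                dist = np.linalg.norm(d)
                overlap = 2 * R - dist
                if overlap > 0 and dist > 1e-9:
                    corr = 0.5 * overlap * d / dist
                    pred[i] += corr
                    pred[j] -= corr
    # 3. Derive velocities from the corrected positions (PBD update).
    vel = (pred - pos) / DT
    return pred, vel
```

The scalable implementations the paper describes replace the all-pairs loop with spatial hashing and run the constraint projections in parallel on the GPU.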
17:15 | CrowdAR Table - An AR Table for Interactive Crowd Simulation ABSTRACT. In this paper we describe a prototype implementation of an augmented reality (AR) system for accessing and interacting with crowd simulation software. We identify a target audience and tasks (access to the software in a science museum), motivate the choice of AR system (an interactive table complemented with handheld AR via smartphones), and describe its implementation. Our system has been realized in a prototypical implementation, verifying its feasibility and potential. Detailed user testing will be part of our future work. |
17:35 | Crowd Simulation: Overview and Future Challenges |
17:55 | Closing comments PRESENTER: Zerrin Yumak |