Tags: HoloLens, learner performance, multimodal data, XR-based simulation
Abstract:
This pilot study investigates the potential of multimodal data for predicting a user’s performance in an XR-based training simulation environment. XR-based learning simulations have rapidly gained popularity in training fields such as medicine, nursing, and STEAM education, owing to their ability to provide training opportunities in an immersive and authentic environment. In this study, multimodal data such as eye-tracking and behavioral data (task completion time, accuracy, head movement, and hand movement) were collected, along with participants’ perceptions of the simulation via a subjective survey questionnaire. We aim to investigate how and what kinds of multimodal data can be collected in an XR-based learning environment to optimize the training curriculum, improve the user experience, and enhance learning outcomes. To this end, we collected multimodal data on attention, cognitive load, and performance behavior (head movement, hand movement, and eye information) from 22 medical residents who participated in an XR-based simulation experience focused on strabismus diagnosis for resident training. As a result, we collected fifteen multimodal data types for attention and nine data types for performance behavior. These results can serve as foundational research for predicting learners’ performance states in XR-based training simulations.
Multimodal Data as User’s Performance Recording in XR-Based Training Simulation Environment