IEEE ICHMS 2021: IEEE 2ND INTERNATIONAL CONFERENCE ON HUMAN-MACHINE SYSTEMS
PROGRAM FOR WEDNESDAY, SEPTEMBER 8TH

08:30-09:00 Registration and Coffee

Get a coffee, pick up your name badge and goodie bag, or simply log in

09:20-11:00 Session 2: D1.1: Human Performance Modelling

Day 1,  Session 1

09:20
Examining Team Interaction using Dynamic Complexity and Network Visualizations
PRESENTER: Travis Wiltshire

ABSTRACT. Given the increasing complexity of many sociotechnical work domains, effective teamwork has become increasingly crucial. While there is evidence that face-to-face communication contributes to effective teamwork, methods for understanding the time-varying nature and structure of team communication are limited. In this work, we combine sensor-based social analytics from Sociometric badges (Rhythm Badge) with two visualization techniques (Dynamic Complexity Heat Maps and Network Visualizations) to advance an intuitive way of understanding the dynamics of team interaction. To demonstrate the utility of our approach, we provide a case study that examines one team’s interaction during a Lost at Sea simulation. We were able to recover transitions in the task and team interaction as well as uncover structural changes in team member communication patterns, which we visualize using networks. Taken together, this work represents an important first step toward optimizing team effectiveness by identifying critical transitions in team interactions.
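A minimal sketch of the network-visualization idea (toy turn-taking data, not the badge pipeline itself): directed edges count how often one member speaks right after another, one simple way to expose the structural communication patterns the abstract mentions.

```python
# Hedged sketch: build a who-responds-to-whom network from a speaking order.
# The 'turns' list is invented; real data would come from the badge audio.
import networkx as nx

turns = ["A", "B", "A", "C", "B", "B", "A"]     # speaking order over time
G = nx.DiGraph()
for prev, nxt in zip(turns, turns[1:]):
    if prev != nxt:                              # ignore continued speech
        w = G.edges[prev, nxt]["weight"] + 1 if G.has_edge(prev, nxt) else 1
        G.add_edge(prev, nxt, weight=w)

print(G.edges(data=True))   # who responds to whom, and how often
```

Comparing such networks across time windows is one way the structural changes described above could be surfaced.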

09:40
Maximal benefits and possible detrimental effects of binary decision aids
PRESENTER: Joachim Meyer

ABSTRACT. Binary decision aids, such as alerts, are a simple and widely used form of automation. The formal analysis of a user’s task performance with an aid sees the process as the combination of information from two detectors that both receive input about an event and evaluate it. The user’s decisions are based on the output of the aid and on the information the user obtains independently. We present a simple method for computing the maximal benefit a user can derive from a binary aid as a function of the user’s and the aid’s sensitivities. Combining the user and the aid often adds little to the performance the better detector could achieve alone. Also, if users assign non-optimal weights to the aid, performance may drop dramatically. Thus, the introduction of a valid aid can actually lower detection performance, compared to a more sensitive user working alone. Similarly, adding a user to a system with high sensitivity may lower its performance. System designers need to consider the potential adverse effects of introducing users or aids into systems.
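A hedged illustration of the underlying signal-detection arithmetic (my sketch, not the authors' exact formulation): if the user and the aid are modeled as independent equal-variance Gaussian detectors, the optimally weighted combination has sensitivity sqrt(d'_user² + d'_aid²), and mis-weighting a weak aid can pull performance below what the better detector achieves alone.

```python
# Sketch under the stated assumptions: user and aid as independent
# equal-variance Gaussian detectors with sensitivities d'.
from math import sqrt
from statistics import NormalDist

phi = NormalDist().cdf

def auc(d_prime):
    """Area under the ROC curve for an equal-variance Gaussian detector."""
    return phi(d_prime / sqrt(2))

def combined_d_prime(d_user, d_aid, w):
    """Effective sensitivity when the aid's evidence gets weight w
    and the user's evidence gets weight 1 - w."""
    return (w * d_aid + (1 - w) * d_user) / sqrt(w**2 + (1 - w)**2)

d_user, d_aid = 2.0, 1.0
w_opt = d_aid / (d_user + d_aid)   # optimal weights are proportional to d'

print(f"user alone:               {auc(d_user):.3f}")
print(f"optimal combination:      {auc(combined_d_prime(d_user, d_aid, w_opt)):.3f}")
print(f"aid over-weighted (0.8):  {auc(combined_d_prime(d_user, d_aid, 0.8)):.3f}")
```

With these numbers, the optimal combination only nudges performance above the user alone, while over-weighting the weaker aid drops effective sensitivity from 2.0 to about 1.46, mirroring the abstract's warning.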

10:00
IRTEX: Image Retrieval with Textual Explanations
PRESENTER: Sayantan Polley

ABSTRACT. In a Content Based Image Retrieval (CBIR) system, images are retrieved based on their content, such as color, shapes, and objects. CBIR is typically accomplished by extracting features, comparing image feature vectors with a query vector, and deriving rankings using a similarity measure. However, end users often experience a semantic gap between the notion of similarity used by the ranking model and the users’ perception of image similarity. Explainable AI (XAI) is an emerging research field that attempts to provide transparency for “black box” models, to make AI systems trustworthy and gain user trust. This work aims at building an Image Retrieval system with TEXtual explanations such as “The (global) results are similar to the query by X% due to shape, Y% due to color”. Local explanations are generated by comparing images with respect to the overlap between low level features such as color, shape, and regions (MPEG-7 features). Additionally, high level features such as background-foreground segmentation, deep learned features, and major key-points and objects identified (SIFT features) are used to enrich the explanations. We evaluate the quality of rankings on benchmark data-sets such as PASCAL VOC. The XAI facets of user satisfaction and usefulness of the system are evaluated in a lab based user study. Our results show that the semantic gap is better bridged using high level features, while low level features might be better suited for re-ranking the retrieved images.
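A minimal sketch of how an explanation string of the kind quoted above can be assembled from per-feature similarities. The feature extractors, vectors, and weights here are hypothetical placeholders; the paper's actual pipeline uses MPEG-7 and SIFT features.

```python
# Hedged sketch: turn per-feature similarity contributions into a
# textual explanation. Feature vectors below are random placeholders.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def explain(query_feats, result_feats, weights):
    """query_feats/result_feats: dict feature_name -> vector;
    weights: relative importance of each feature in the ranking model."""
    parts = {name: weights[name] * cosine(query_feats[name], result_feats[name])
             for name in query_feats}
    total = sum(parts.values())
    return "The result is similar to the query by " + ", ".join(
        f"{100 * v / total:.0f}% due to {name}" for name, v in parts.items())

rng = np.random.default_rng(0)
q = {"color": rng.random(64), "shape": rng.random(32)}
r = {"color": rng.random(64), "shape": rng.random(32)}
print(explain(q, r, {"color": 0.6, "shape": 0.4}))
```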

10:20
Cognitive Load and Productivity Implications in Human-Chatbot Interaction
PRESENTER: Stephan Schlögl

ABSTRACT. The increasing progress in artificial intelligence and the respective machine learning technologies has fostered the proliferation of chatbots to the point where today they are being embedded into various human-technology interaction tasks. In enterprise contexts, the use of chatbots seeks to reduce labor costs and consequently increase productivity. For simple, repetitive customer service tasks this already proves beneficial, yet more complex collaborative knowledge work seems to require a better understanding of how the technology may best be integrated. In particular, the additional mental burden that accompanies the use of these natural language based artificial assistants often remains overlooked. Cognitive load theory implies that unnecessary use of technology can induce additional extraneous load and thus may have a contrary effect on users' productivity. The research presented in this paper thus reports on a study assessing the cognitive load and productivity implications of human-chatbot interaction in a realistic enterprise setting. A/B testing of software-only vs. software + chatbot interaction, together with the NASA TLX, was used to evaluate and compare the cognitive load of two user groups. Results show that chatbot users experienced less cognitive load and were more productive than software-only users. Furthermore, they showed lower frustration levels and better overall performance (i.e., task quality) despite their slightly longer average task completion time.
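For readers unfamiliar with the instrument mentioned above, this is how a NASA-TLX overall workload score is conventionally computed: six subscale ratings are weighted by how often each subscale is chosen in 15 pairwise comparisons. The ratings below are made-up examples, not study data.

```python
# Standard weighted NASA-TLX computation (example numbers are invented).
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings, pair_wins):
    """ratings: 0-100 per subscale; pair_wins: times each subscale was
    chosen as more workload-relevant across the 15 pairwise comparisons."""
    assert sum(pair_wins.values()) == 15
    return sum(ratings[s] * pair_wins[s] for s in SUBSCALES) / 15.0

ratings   = {"mental": 70, "physical": 10, "temporal": 55,
             "performance": 30, "effort": 60, "frustration": 25}
pair_wins = {"mental": 5, "physical": 0, "temporal": 3,
             "performance": 2, "effort": 4, "frustration": 1}
print(nasa_tlx(ratings, pair_wins))   # overall weighted workload, 0-100 scale
```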

10:40
A brain-sensing fragrance diffuser for mental state regulation using electroencephalography
PRESENTER: An-Yu Zhuang

ABSTRACT. Human brain studies have shown that olfactory perception can regulate emotion and attention networks and prevent depressed mental states. Fragrance diffusers have been used as a potential appliance for reconciling mental conditions and achieving stress relief in daily life. Although perceiving fragrance is a complicated and subjective experience, studies have shown that it is possible to reveal a person's fragrance preferences from brain activity measured by electroencephalography (EEG). Moreover, using EEG to detect neural/mental states and apply them to human-machine interfaces has also been investigated for years. Therefore, this study has two aims: (1) to identify users’ fragrance preferences from EEG; (2) to develop a personalized fragrance diffuser, Aroma Box, which can detect three mental states from EEG (when a user feels depressed, stressed, or drowsy) and then release fragrances in real time to help the user recover from these states. To achieve this goal, we first extracted features and built a classifier to identify the user's fragrance preferences from EEG. Then we calculated indicators of brain states based on EEG frequency analysis. Finally, we deployed our algorithms in an in-house developed diffuser paired with a consumer 32-channel EEG headset, which was further implemented in a real-life working environment and evaluated for efficacy with two users.
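A sketch of the kind of frequency-based indicator the abstract alludes to. The authors' exact indices and thresholds are not given; the theta/beta ratio below is simply a commonly used drowsiness marker, and the sampling rate is assumed.

```python
# Hedged sketch: band-power indicators from one EEG channel via Welch's method.
import numpy as np
from scipy.signal import welch

FS = 250  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs=FS):
    """eeg: 1-D array for one channel; returns mean PSD per band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def drowsiness_index(eeg):
    """Higher theta relative to beta is a common drowsiness marker."""
    p = band_powers(eeg)
    return p["theta"] / p["beta"]

signal = np.random.randn(FS * 10)   # 10 s of fake single-channel EEG
print(drowsiness_index(signal))
```

A diffuser controller could release fragrance whenever such an index stays above a per-user calibrated threshold for some dwell time.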

11:00-11:30 Coffee Break
11:30-13:00 Session 3: D1.2: Interactive and Wearable Computing Systems (Special Session)

Day 1,  Session 2

11:30
Assessment of a textile portable exoskeleton for the upper limbs' flexion

ABSTRACT. Flexible exoskeletons are lightweight robots that surround the user's anatomy to either assist or oppose its motion. Their structure is made of light and flexible materials, like fabrics. The forces created by the robot are therefore transferred directly to the user's musculoskeletal system, which makes exosuits sensitive to sliding of the actuation, textile perturbations, and improper fitting to the user. LUXBIT is a cable-driven flexible exoskeleton that combines different fabrics and sewing patterns to promote its anatomical adaptation. The exoskeleton is intended for bimanual assistance of daily tasks and long-term usage. To this end, the system reduces the pressures applied to the user, and the misalignment with the user, by stacking textile patches. The patches enhance the functioning of the base garment and promote the transfer of the assistance forces. Additionally, LUXBIT has a compact actuation with deformable components to avoid restricting the user's motion, and it is made portable by an enhanced textile backpack. This paper shows the exoskeleton's benefits for trajectory and muscle activity when the user flexes the shoulder and elbow.

11:50
t-SNE and PCA in Ensemble Learning based Human Activity Recognition with Smartwatch
PRESENTER: Dipanwita Thakur

ABSTRACT. Smartwatch based Human Activity Recognition (HAR) is gaining popularity due to the habitually unhealthy behavior of the population and the rich built-in sensors of smartwatches. Raw sensor data is not well suited for classifiers to identify similar activity patterns. According to the HAR literature, handcrafted features are beneficial for properly identifying activities, but crafting them is time consuming and needs expert domain knowledge. Automatic feature extraction libraries give high-dimensional feature sets that increase computation and memory cost. In this work, we present an Ensemble Learning framework that exploits dimensionality reduction and visualization to improve performance. Specifically, the high dimensional features are extracted automatically using the Time Series Feature Extraction Library (TSFEL). Then, Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE) are used to reduce the dimension of the feature set and to visualize it, respectively. The relevant features extracted using PCA are fed to an ensemble of five different Machine Learning (ML) classifiers to identify six different human physical activities. We also compare the proposed method with three popularly used shallow ML methods. Self-collected smartwatch sensor data is used to establish the feasibility of the proposed framework. We observe that the proposed framework outperforms existing state-of-the-art benchmark frameworks, with an accuracy of 96%.
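A sketch of the PCA-plus-ensemble stage described above. The feature matrix is a random placeholder standing in for TSFEL output, and the three-member soft-voting ensemble is an illustrative stand-in for the paper's five classifiers.

```python
# Hedged sketch: PCA dimensionality reduction feeding a voting ensemble.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(600, 300)        # placeholder for TSFEL feature vectors
y = np.random.randint(0, 6, 600)    # six activity labels

ensemble = VotingClassifier([
    ("rf",  RandomForestClassifier(n_estimators=100)),
    ("svc", SVC(probability=True)),
    ("lr",  LogisticRegression(max_iter=1000)),
], voting="soft")

model = make_pipeline(StandardScaler(),
                      PCA(n_components=0.95),   # keep 95% of the variance
                      ensemble)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))
```

t-SNE would sit alongside this pipeline purely for visualization, since its embedding is not reusable for out-of-sample classification.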

12:10
FaceMask: a Smart Personal Protective Equipment for Compliance Assessment of Best Practices to Control Pandemic
PRESENTER: Raffaele Gravina

ABSTRACT. Disposable and reusable face masks represent one of the key items of personal protective equipment (PPE) against the COVID-19 pandemic, and their use in public environments is mandatory in many countries. Depending on the intended use, there exist different types of masks with varying levels of filtration. The World Health Organization (WHO) has developed a set of best practices and guidelines for the correct use of this fundamental PPE. Nevertheless, many people tend to neglect wearing the mask in the presence of other people and to unintentionally overuse the mask before replacement, which results in increased exposure to airborne infections. This paper proposes a smart wearable computing system, consisting of a reusable face mask augmented with sensing elements and wirelessly connected to a personal mobile device, that recognizes correct positioning on the face and can monitor other parameters such as usage time. Specifically, we realized a 3D printed mask prototype with a replaceable filter, equipped with a small embedded electronic device. The mask collects internal and external parameters including humidity, temperature, and volatile organic compounds (VOC) inside the mask, inertial motion, and external temperature and light. Collected data are transmitted over Bluetooth Low Energy to a smartphone responsible for signal pre-processing and position classification. Two machine learning algorithms were compared; results from real experiments showed that SVM performed slightly better than Naive Bayes, with 98% vs. 97% accuracy.
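A minimal sketch of the classifier comparison step on the phone side, using placeholder feature vectors (humidity, temperature, VOC, inertial features). The two algorithms match those named in the abstract; everything else is illustrative.

```python
# Hedged sketch: cross-validated comparison of the two classifiers named above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(500, 8)          # placeholder mask-sensor feature vectors
y = np.random.randint(0, 2, 500)    # 1 = mask positioned correctly, 0 = not

for name, clf in [("SVM", make_pipeline(StandardScaler(), SVC())),
                  ("Naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(name, scores.mean())
```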

12:30
Impedance-Based Feedforward Learning Control for Natural Interaction between a Prosthetic Hand and the Environment

ABSTRACT. Robotic prosthetic hands and arms often do not apply appropriate force and pressure, and do not provide tactile and proprioceptive feedback as accurate and precise as a human's, which makes prosthetic arms less user-friendly and less convenient. The lack of human-like tactile and proprioceptive feedback may also cause serious safety problems in the interaction between a prosthetic arm and the environment. This paper proposes a supervised learning-based solution to this problem, built around a support vector machine (SVM) classifier, that allows a synthetic hand or prosthetic arm to apply forces to the environment properly and to react to the forces the environment applies to the arm in the form of tactile and proprioceptive forces or pressures. As part of this goal, we create a glove instrumented with piezoelectric tactile sensors that fits over a hand, applies forces to the environment (an object grasped by a human subject wearing the glove), and records the applied forces and pressures along with proprioceptive and tactile feedback. In a simple user study, we subjectively evaluate the interaction between the environment and the human hand wearing the glove. Based on the user study results and the measured forces, we then outline a supervised learning algorithm, applied with a support vector machine, to classify natural and unnatural interactions between the glove (a stand-in for a prosthetic arm) and the object (environment). The trained classifier is then proposed as the basis of feedforward learning control for achieving human-like natural interactions between the prosthetic arm and the environment.
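A hedged sketch of the classification step: summary features computed from each grasp's force trace feed an SVM that labels the interaction natural or unnatural. The features, traces, and labels below are invented examples, not the paper's data.

```python
# Sketch: per-grasp summary features -> SVM natural/unnatural label.
import numpy as np
from sklearn.svm import SVC

def grasp_features(force_trace):
    """Peak force, mean force, and maximum loading rate for one grasp."""
    return [force_trace.max(), force_trace.mean(),
            np.abs(np.diff(force_trace)).max()]

rng = np.random.default_rng(1)
traces = [rng.random(200) * rng.uniform(1, 10) for _ in range(80)]
X = np.array([grasp_features(t) for t in traces])
y = (X[:, 0] < 6).astype(int)       # stand-in labels: 1 = natural grasp

clf = SVC().fit(X, y)
print(clf.predict([grasp_features(rng.random(200) * 3)]))
```

In the proposed feedforward scheme, such a classifier's output would correct the commanded grip force before, rather than after, an unnatural interaction develops.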

12:45
Advancing the Adoption of Virtual Reality and Neurotechnology to Improve Flight Training
PRESENTER: Evy van Weelden

ABSTRACT. Virtual reality (VR) has been used for training purposes in a wide range of industries, including education, healthcare, and defense. VR allows users to train in a safe and controlled digital environment while being immersed and highly engaged in a realistic task. One of its advantages is that VR can be combined with multiple wearable sensing technologies, allowing researchers to study (neuro)physiological and cognitive processes elicited by dynamic environments and to adapt the simulations on the basis of such processes. However, the potential of VR combined with neurotechnology to facilitate effective and efficient aviation training has not yet been fully explored. For instance, despite the growing interest in including VR in training programs for military and commercial airline pilots, it is still unclear how effective VR is in short- and long-term pilot training. This paper provides an overview of state-of-the-art research in VR applications for aviation training and identifies challenges and future opportunities. We particularly discuss the potential of neurotechnology for objectively measuring training progress and providing real-time feedback during VR flight tasks. Overall, VR combined with neurotechnology for flight training holds promise for maximizing individual learning progress.

13:00-14:00 Lunch Break - Day 1
15:00-15:30 Coffee Break - Day 1 - Afternoon
15:30-17:40 Session 5: D1.4: Autonomous and Assisted Driving

Day 1,  Session 3

15:30
A Classified Driver’s Lane-Change Decision-Making Model Based on Fuzzy Inference for Highly Automated Driving
PRESENTER: Muhua Guan

ABSTRACT. Many efforts have been devoted to modeling drivers’ lane-change decision-making process. However, most of them propose a general model and ignore drivers’ varied driving habits. In this study, a classified driver lane-change decision-making model based on fuzzy inference is proposed. A driving experiment was conducted to determine the membership functions. To accommodate drivers' various habits and preferences, the proposed model was classified into three types: aggressive, medium, and conservative. For model validation, a mathematical simulation was run to compare the classified fuzzy model with a conventional model proposed in a previous study. Simulation results showed that the classified fuzzy models could make differentiated lane-change decisions. Furthermore, the classified fuzzy models made more stable lane-change decisions than the conventional model. This study suggests the potential of using the proposed model in the design of highly automated driving for different driver types.
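A toy fuzzy rule in the spirit of the model described, to show how driver-type classification can scale a lane-change decision. The membership parameters and the driver-type scaling here are invented, not the paper's experimentally calibrated functions.

```python
# Hedged sketch: one Mamdani-style rule with triangular memberships.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def lane_change_desire(gap_m, rel_speed_kmh, driver="medium"):
    """Degree (0-1) to which 'change lane' is recommended."""
    gap_ok   = tri(gap_m, 10, 40, 80)           # acceptable gap to rear car
    too_slow = tri(rel_speed_kmh, -40, -20, 0)  # lead car slower than ego
    # Aggressive drivers accept smaller margins; conservative drivers larger.
    scale = {"aggressive": 1.2, "medium": 1.0, "conservative": 0.8}[driver]
    return min(1.0, scale * min(gap_ok, too_slow))  # rule: IF gap ok AND lead slow

print(lane_change_desire(gap_m=35, rel_speed_kmh=-18, driver="aggressive"))
```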

15:45
EEG-based Classification of Drivers Attention using Convolutional Neural Network

ABSTRACT. Accurate detection of a driver’s attention state can help develop assistive technologies that respond to unexpected hazards in real time and therefore improve road safety. This study compares the performance of several attention classifiers trained on participants’ brain activity. Participants performed a driving task in an immersive simulator where the car randomly deviated from the cruising lane. They had to correct the deviation, and their response time was taken as an indicator of attention level. Participants repeated the task in two sessions: in one session they received kinaesthetic feedback, in the other no feedback. Using their EEG signals, we trained three attention classifiers: a support vector machine (SVM) using EEG spectral band powers, and a Convolutional Neural Network (CNN) using either spectral features or the raw EEG data. Our results indicate that the CNN model trained on raw EEG data obtained under kinaesthetic feedback achieved the highest accuracy (89%). While using a participant’s own brain activity to train the model yielded the best performance, inter-subject transfer learning still performed well (75%), showing promise for calibration-free Brain-Computer Interface (BCI) systems. Our findings show that CNNs and raw EEG signals can be employed for effective training of a passive BCI for real-time attention classification.
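A minimal 1-D CNN over raw multi-channel EEG windows, sketching the kind of raw-input model the abstract describes. The architecture, channel count, and window length are my assumptions, not the authors' published network.

```python
# Hedged sketch: small 1-D CNN for binary attention classification.
import torch
import torch.nn as nn

class EEGNetMini(nn.Module):
    def __init__(self, n_channels=32, n_samples=500, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),
            nn.BatchNorm1d(16), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.classify = nn.Linear(32 * (n_samples // 16), n_classes)

    def forward(self, x):               # x: (batch, channels, samples)
        z = self.features(x)
        return self.classify(z.flatten(1))

model = EEGNetMini()
window = torch.randn(8, 32, 500)        # 8 two-second windows at 250 Hz
print(model(window).shape)              # -> torch.Size([8, 2])
```

For inter-subject transfer of the kind reported above, such a model would be pre-trained on pooled participants and then evaluated, or lightly fine-tuned, on a held-out participant.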

16:00
Driver-Vehicle Interaction: The Effects of Physical Exercise and Takeover Request Modality on Automated Vehicle Takeover Performance between Younger and Older Drivers
PRESENTER: Gaojian Huang

ABSTRACT. Semi-automated vehicles still require manual takeover intervention. For older drivers, age-related declines may make takeover transitions difficult, but the current literature on takeover and aging is mixed. Non-chronological age factors, such as engagement in physical exercise, which has been shown to mitigate perceptual and cognitive declines, may be contributing to these conflicting results. The goal of this pilot study was to examine whether age, physical exercise, and takeover request alert modality influence post-takeover performance. Sixteen younger and older adults were divided into exercise and non-exercise groups, and completed takeover tasks with seven different types of takeover requests. Overall, older adults in the physical exercise group had shorter decision-making times and lower maximum resulting jerk, compared to seniors in the non-exercise group. Takeover request type did not influence takeover performance. Findings may contribute to theories on aging and inform the development of next-generation automated vehicle systems.

16:15
Haptic interface for presenting enveloping force from remote obstacles in a personal vehicle
PRESENTER: Takuma Yabe

ABSTRACT. It is expected that small electric vehicles will be used in Shared Spaces, where driving them safely becomes an issue. In this research, a vehicle control interface is proposed that consists of a haptic joystick and five 1-degree-of-freedom manipulators mounted on the joystick. It applies forces to the driver's hand and fingers through the surrounding manipulators when obstacles enter the proximity area. The direction of an obstacle and the direction of the haptic pressure are linked so that the driver can determine the obstacle's direction without visual information. We show that the system enables drivers to recognize obstacles that should be watched out for, that the direction the HMI presents is recognizable, and that the system remains usable when the obstacle is in a blind spot.
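A sketch of the direction mapping the abstract implies: an obstacle's bearing selects which of the five manipulators presses, with force growing as distance shrinks. The sector layout, range, and force law are invented illustrations.

```python
# Hedged sketch: map obstacle bearing/distance to one of five actuators.
import math

N_ACTUATORS = 5
SECTOR = 2 * math.pi / N_ACTUATORS
MAX_RANGE_M = 2.0

def actuator_command(bearing_rad, distance_m):
    """Return (actuator index, force 0-1) for one detected obstacle."""
    if distance_m >= MAX_RANGE_M:
        return None                      # obstacle outside proximity area
    idx = int((bearing_rad % (2 * math.pi)) // SECTOR)
    force = 1.0 - distance_m / MAX_RANGE_M   # closer obstacle, stronger press
    return idx, force

print(actuator_command(math.radians(200), 0.8))   # e.g. rear-left obstacle
```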

16:30
Human reliability analysis in situated driving context considering human experience using a fuzzy-based clustering approach
PRESENTER: Chao He

ABSTRACT. Although increasingly advanced driver assistance systems (ADAS) are applied to driving, human driver reliability remains critical for driving safety, as human-related accidents account for the highest proportion of total accidents. Existing reliability approaches qualify human behavior in a static manner. In this contribution, dynamically changing situations are considered, using the situated driving context as an example for human reliability evaluation. The dynamic, situated driving context requires dynamic solutions for reliability evaluation. The cognitive reliability and error analysis method (CREAM) provides such an evaluation method for industrial fields, but adaptation is required when it is applied to situated contexts. Furthermore, human experience, an important factor for driving safety, must also be considered when human driver reliability is evaluated. In this contribution, three variables are selected to evaluate human driver experience (HDE) in the situated driving context. Meanwhile, a new list of common performance conditions (CPCs) characterizing the situated driving context is generated, owing to the limits of the original CREAM CPCs in this application. To determine the levels of the HDE variables and the newly generated CPCs, fuzzy neighborhood density-based spatial clustering of applications with noise (FN-DBSCAN) is applied to driving data to define the membership function parameters. HDE and a human driver reliability score (HDRS) in the situated driving context are then calculated quantitatively, and a new evaluation index, the human performance reliability score (HPRS), is defined. The results show that the proposed method can quantify and evaluate human driver reliability in real time.
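For orientation, a very rough sketch of the CREAM-style scoring idea that underlies the approach: each common performance condition pushes expected reliability up or down, and the totals select a control mode. The CPC list, effects, and thresholds below are invented; the paper replaces CREAM's original CPCs with situated, fuzzy-clustered ones.

```python
# Hedged sketch, loosely following CREAM's basic method (invented CPCs).
CPC_EFFECT = {"traffic density": -1, "visibility": 0, "road familiarity": +1,
              "time pressure": -1, "vehicle interface": 0}

reduced  = sum(1 for e in CPC_EFFECT.values() if e < 0)
improved = sum(1 for e in CPC_EFFECT.values() if e > 0)

# More 'reduced' CPCs imply a less reliable control mode
# (CREAM's strategic > tactical > opportunistic > scrambled ordering).
if reduced >= 4:
    mode = "scrambled"
elif reduced >= 2:
    mode = "opportunistic"
elif improved > reduced:
    mode = "strategic"
else:
    mode = "tactical"

print(mode, f"({improved} improved / {reduced} reduced CPCs)")
```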

16:50
Vehicle Autonomous Navigation with Context Awareness

ABSTRACT. Nowadays, many models for global robotic navigation exist that are capable of driving safely and autonomously and of reaching a set destination. However, most of them do not take into account information coming from the context in which the navigation occurs, resulting in a severe information loss. Without Context-Aware Navigation, it is not possible to build a model that lets the vehicle adapt its behaviour to the situation the way a human driver spontaneously does. A study is therefore needed on how to connect contextual information with the robot's control loop. Our solution uses semantic structures known as ontologies, which help the vehicle reason in real time and change its behaviour as a function of the given contextual information. After defining the Context of Navigation, we propose an approach to encoding Context Awareness in the Autonomous Navigation controller. Finally, the approach is put to the test in a simulator and the results are discussed.
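A toy illustration of the coupling the abstract argues for: contextual facts (a plain dict here stands in for an ontology and its reasoner) tighten the navigation controller's limits. The facts, rules, and parameters are all hypothetical.

```python
# Hedged sketch: contextual facts -> adapted controller parameters.
CONTEXT_RULES = {
    ("zone", "school"):  {"max_speed": 2.0, "pedestrian_margin": 1.5},
    ("weather", "rain"): {"max_speed": 3.5, "pedestrian_margin": 1.0},
}
DEFAULTS = {"max_speed": 5.0, "pedestrian_margin": 0.5}

def controller_params(facts):
    params = dict(DEFAULTS)
    for fact in facts:
        for key, value in CONTEXT_RULES.get(fact, {}).items():
            # take the most cautious value: lowest speed cap, widest margin
            params[key] = (min(params[key], value) if key == "max_speed"
                           else max(params[key], value))
    return params

print(controller_params({("zone", "school"), ("weather", "rain")}))
```

An actual ontology adds what the dict cannot: inferred facts (e.g. "school zone implies vulnerable road users") feeding the same parameter-adaptation step.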

17:05
How Pedestrians Perceive Autonomous Buses: Evaluating Visual Signals

ABSTRACT. With the deployment of autonomous buses, sophisticated technological systems are entering our daily lives, and their signals are becoming a crucial factor in human-machine interaction. The successful implementation of visual signals requires well-researched human-centred design as a key component of the new transportation system. The autonomous vehicle investigated in this study uses a variety of such signals: icons, LED panels, and text. We conducted a user study with 45 participants in a virtual reality environment in which four recurring communication scenarios between a bus driver and passengers had to be correctly interpreted. For each of the four scenarios, the efficiency and comprehension of each visual signal combination were measured to evaluate performance on different types of visual information. The results show that new visualization concepts such as LED panels lead to highly variable efficiency and comprehension, while text and icons were well accepted. In summary, we present the most efficient combinations of visual signals for four realistic scenarios.

17:20
Human-centric Autonomous Driving in an AV-Pedestrian Interactive Environment Using SVO
PRESENTER: Luca Crosato

ABSTRACT. As Autonomous Vehicles (AVs) become a reality, the design of efficient motion control algorithms will have to deal with the unpredictable and interactive nature of other road users. Current AV motion planning algorithms suffer from the freezing-robot problem, as they often tend to overestimate collision risks. To tackle this problem and design AVs that behave in a human-like way, we integrate a concept from psychology called Social Value Orientation into the Reinforcement Learning (RL) framework. The addition of a social term in the reward function allows us to tune the AV's behaviour towards the pedestrian from reckless to extremely prudent. We train the vehicle agent with a state-of-the-art RL algorithm and show that Social Value Orientation is an effective tool for obtaining pro-social AV behaviour.
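Social Value Orientation is conventionally expressed as an angle that trades off an agent's own utility against another's, which is one natural way the social reward term described above can be formed. The component rewards below are placeholders; the exact reward design is the paper's.

```python
# Hedged sketch: standard SVO-angle reward combination.
import math

def svo_reward(r_ego, r_pedestrian, phi_deg):
    """phi = 0: egoistic AV; phi = 45: prosocial; phi near 90: altruistic."""
    phi = math.radians(phi_deg)
    return math.cos(phi) * r_ego + math.sin(phi) * r_pedestrian

# The same situation scored by a reckless vs. a prudent vehicle agent:
r_ego, r_ped = 1.0, -0.5   # progress for the AV, discomfort for the pedestrian
print(svo_reward(r_ego, r_ped, 10))   # reckless: pedestrian barely counts
print(svo_reward(r_ego, r_ped, 60))   # prudent: pedestrian term dominates
```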