ISMCR 2020: 23RD INTERNATIONAL SYMPOSIUM ON MEASUREMENT AND CONTROL IN ROBOTICS
PROGRAM FOR THURSDAY, OCTOBER 15TH

11:45-12:00 Session 1: Opening remarks

Dr Zafar Taqvi (USA) - IMEKO TC17 (Measurement and Robotics) Chair, ISMCR General Chair

Prof. Hassan Charaf (Hungary) - Dean, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics

Location: Virtual Room A
12:00-13:40 Session 2: Multi-agent systems
Location: Virtual Room A
12:00
Artificial Intelligence based bus routing in urban areas
PRESENTER: Adonisz Dimitriu

ABSTRACT. Public transportation consists mostly of fixed transit systems, which have fixed stations, routes, and schedules. In this paper, a new approach to bus routing in public transportation is proposed, in which buses travel unbounded, adapting to passengers (not vice versa) by picking them up at their current locations and transferring them to their destinations. Bus routes must be adjusted to the layout of the passengers. This problem is close to the Dial-a-Ride Problem (DARP), but the solution is sought on real road-network graphs. The goal is to find a globally optimal set of paths for a given number of buses such that all passengers are transferred to their destinations while the average travel time is minimal. In this paper, a modified Max-Min Ant System (MMAS) algorithm is utilized.
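
As a rough illustration of the algorithm family (not the authors' implementation), what distinguishes a Max-Min Ant System from the basic Ant System is that only the best ant deposits pheromone and all pheromone values are clamped to a [tau_min, tau_max] range. A minimal sketch of that update step in Python, with all names and parameter values assumed for illustration:

    # Minimal Max-Min Ant System pheromone update (illustrative sketch only).
    # pheromone maps road-network edges (u, v) to floats; names are hypothetical.
    def mmas_pheromone_update(pheromone, best_tour, best_cost,
                              rho=0.02, tau_min=0.01, tau_max=5.0):
        for edge in pheromone:                      # evaporation on every edge
            pheromone[edge] *= (1.0 - rho)
        for edge in zip(best_tour, best_tour[1:]):  # only the best ant deposits
            pheromone[edge] = pheromone.get(edge, tau_min) + 1.0 / best_cost
        for edge in pheromone:                      # the defining MMAS clamp
            pheromone[edge] = min(tau_max, max(tau_min, pheromone[edge]))
        return pheromone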

12:20
SWARM based drone umbrellas to counter missiles
PRESENTER: Lajos Szarka

ABSTRACT. This paper presents a SWARM (PSO, Particle SWARM Optimization) based method to neutralize incoming missiles with drone units that form adaptive umbrellas to electroshock the missiles' control systems. Drone navigation has received increasing attention from the research community in recent years, while SWARM methods are still not widely used despite their strengths. This article provides a solution for neutralizing missiles with different trajectories and velocities using a self-forming, PSO-based drone swarm, which performs dynamic self-distribution driven by a radar-based heuristic. The results are statistically evaluated and discussed. The paper concludes with proposals for future research.
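
For reference (a generic sketch, not the paper's specific variant), the canonical PSO step moves each particle toward its personal best and the swarm's global best with random weighting:

    import random

    # Canonical PSO velocity/position update (illustrative; the parameter
    # values and particle representation are assumptions, not the paper's).
    def pso_step(particles, gbest, w=0.7, c1=1.5, c2=1.5):
        # each particle: {"pos": [...], "vel": [...], "best": [...]}
        for p in particles:
            for i in range(len(p["pos"])):
                r1, r2 = random.random(), random.random()
                p["vel"][i] = (w * p["vel"][i]
                               + c1 * r1 * (p["best"][i] - p["pos"][i])
                               + c2 * r2 * (gbest[i] - p["pos"][i]))
                p["pos"][i] += p["vel"][i]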

12:40
Optimal Feedback Strategy of a Superior Evader Passing Between Two Pursuers
PRESENTER: János Szőts

ABSTRACT. In many differential games, the optimal feedback strategies cannot be given explicitly, even if the optimal trajectories are known. This is the case in the game introduced by Hagedorn and Breakwell, in which a faster evader has to cross the gap between two pursuers that strive to capture it. The authors managed to integrate the optimal trajectories of this game analytically. In our previous work, we validated their result by analyzing the underlying theory and integrating the trajectories numerically. Here we present a problem-specific numerical method for calculating the optimal inputs of the pursuers and evader in any state, thus obtaining the optimal feedback strategies. The method relies on the numerical integration of trajectories and is computationally efficient enough to be applied in real time.
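
The abstract does not specify the integration scheme; purely as a sketch of the kind of numerical trajectory integration such a method typically relies on, here is a standard fourth-order Runge-Kutta step (names and signature assumed for illustration):

    import numpy as np

    # One RK4 step of dx/dt = f(x, u) for state x (np.ndarray) and input u.
    # A generic sketch, not the authors' problem-specific scheme.
    def rk4_step(f, x, u, dt):
        k1 = f(x, u)
        k2 = f(x + 0.5 * dt * k1, u)
        k3 = f(x + 0.5 * dt * k2, u)
        k4 = f(x + dt * k3, u)
        return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)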

13:00
Learning to Play Robot Soccer from Partial Observations

ABSTRACT. Reinforcement learning (RL) has undergone unprecedented evolution in the last few years, managing to surpass the human baseline in several high-profile board and computer games. Despite these successes, deep neural agents have remained absent from several challenging fields, such as robot soccer, a rather complex task involving simultaneous cooperation and competition between multiple agents. In this paper, we investigate the feasibility of playing robot soccer via deep reinforcement learning using an environment of our own making. This environment provides the agent with imperfect and incomplete observations to simulate the errors of a real vision pipeline. To improve the agent's performance on this task, an intrinsic reward module based on self-supervised learning is proposed. Our experiments show that the proposed extension significantly improves the agent.
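
One common self-supervised formulation of an intrinsic reward (sketched here as an assumption; the paper's actual module may differ) rewards the agent for visiting transitions that a learned forward model predicts poorly:

    import torch
    import torch.nn as nn

    # Curiosity-style intrinsic reward: the prediction error of a learned
    # forward dynamics model. Illustrative sketch only.
    class ForwardModel(nn.Module):
        def __init__(self, obs_dim, act_dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden),
                                     nn.ReLU(), nn.Linear(hidden, obs_dim))

        def forward(self, obs, act):
            return self.net(torch.cat([obs, act], dim=-1))

    def intrinsic_reward(model, obs, act, next_obs):
        with torch.no_grad():
            pred = model(obs, act)
        return ((pred - next_obs) ** 2).mean(dim=-1)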

13:20
A New Advantage Actor-Critic Algorithm For Multi-Agent Environments
PRESENTER: Gabor Paczolay

ABSTRACT. Reinforcement learning is one of the most actively researched fields of artificial intelligence right now. New algorithms are continually being developed, especially for deep reinforcement learning, where the selected action is computed with the assistance of a neural network. One of the subcategories of reinforcement learning is multi-agent reinforcement learning, where multiple agents are present in the world. In our paper, we modify an already existing algorithm, the Advantage Actor-Critic (A2C), to be suitable for multi-agent scenarios. Afterwards, we test the modified algorithm on our testbed, a cooperative-competitive pursuit-evasion environment.
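
For orientation, the core single-agent A2C objective that the paper builds on combines a policy-gradient term weighted by the advantage, a value regression term, and an entropy bonus. A minimal sketch (coefficients and names are assumptions; the paper's multi-agent modification is not reproduced here):

    import torch

    # log_probs, values, returns, entropy: tensors collected from a rollout.
    def a2c_loss(log_probs, values, returns, entropy,
                 vf_coef=0.5, ent_coef=0.01):
        advantages = returns - values
        policy_loss = -(log_probs * advantages.detach()).mean()
        value_loss = advantages.pow(2).mean()
        return policy_loss + vf_coef * value_loss - ent_coef * entropy.mean()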

14:30-15:00 Session 3: Plenary session
Location: Virtual Room A
14:30
Virtual stability analysis methods for vehicles

ABSTRACT. The steering wheel and the longitudinal acceleration/deceleration pedals are the two instruments available for the driver to interface with the movement of the vehicle. The aim of an EPAS (Electronic Power Assisted Steering) system is to make the steering wheel-induced lateral acceleration more responsive to driver input, as well as to provide a more convenient and comfortable driving experience. The interplay of the vehicle dynamics with the driver's perception and actuation through the EPAS interface constitutes a highly complex, intertwined closed-loop system. Given the obvious safety-critical nature of the steering system, it is crucial to find an efficient way of proving its stability and robustness. Unfortunately, nonlinear methods are infeasible for this problem. Therefore, we needed to look for more applicable semi-linear alternatives, which raises several practical questions regarding measurement execution and processing.

In this talk, we outline two possible open-loop methods, differentiated mostly by their thoroughness and the level of effort required. We also present some practical problems that had to be solved during the stability analysis. In essence, our stability analysis consists of

  • identifying the target vehicle, in several working points, based on noisy measurement data;
  • linearizing the software with respect to the identified vehicle model;
  • building the interconnection of the linear models for the stability analysis;
  • and finally performing the margin calculation (a sketch of this step follows the list).
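
As a hedged sketch of that final step only, the margin calculation on an interconnected linear model could be done with the python-control package; the transfer function below is a placeholder, not the speakers' model or toolchain:

    import control

    # Placeholder open-loop model standing in for the interconnection of the
    # identified vehicle model and the linearized software.
    L = control.tf([2.0], [1.0, 3.0, 2.0, 0.0])

    gm, pm, wg, wp = control.margin(L)  # gain/phase margins and crossover freqs
    print(f"gain margin: {gm:.2f}, phase margin: {pm:.1f} deg")
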
15:00-16:40 Session 4A: Artificial intelligence based algorithms
Location: Virtual Room A
15:00
A Deep Learning Based Classifier for Crack Detection with Robots in Underground Pipes

ABSTRACT. Underground utility pipes, especially sewer pipes, are prone to developing cracks due to aging, shifting soil, increased traffic, corrosion, and improper installation. A major challenge for utility operators is cost-effective periodic condition monitoring of their sewer networks. The existing industry-standard pipe condition monitoring system is based on passing a robot-mounted closed-circuit television (CCTV) camera through the pipe. The CCTV video feed is recorded and monitored by a trained operator, who annotates it at the locations of cracks and other structural imperfections. This system is both cost- and labor-intensive. In recent years, deep learning based systems have achieved success in vision-based object detection problems. In this project, we have collected pipe crack data from extensive field trials with CCTV-based systems in actual sewer networks. The noisy field data is cleaned up and used for training convolutional neural networks. We test the proposed model with validation data to determine its accuracy and effectiveness. The results indicate that a deep learning model can be used effectively to detect cracks in underground pipes.
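
The abstract does not give the network architecture; purely as an assumed illustration, a minimal binary crack/no-crack convolutional classifier in PyTorch might look like this:

    import torch.nn as nn

    # Minimal CNN classifier sketch; assumes 224x224 RGB frames extracted
    # from the CCTV video. The paper's actual architecture may differ.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 56 * 56, 64), nn.ReLU(),
        nn.Linear(64, 2),  # logits: crack / no crack
    )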

15:20
Simulation to Real Domain Adaptation for Lane Segmentation
PRESENTER: Márton Tim

ABSTRACT. As the cost of labelling and collecting real-world data remains an issue for companies, simulator training and transfer learning have slowly evolved to become the foundation of many state-of-the-art projects. In this paper, these methods are applied in the Duckietown setup, where self-driving agents can be developed and tested. Our aim was to train a selected artificial neural network for right-lane segmentation on a simulator-generated stream of images as a comparison baseline, then use domain adaptation to make it more precise and stable in the real environment. We tested and compared four knowledge transfer methods, including domain transformation using CycleGAN and semi-supervised domain adaptation via Minimax Entropy. To the best of our knowledge, the latter had not previously been tested on semantic segmentation; we show that it is indeed applicable and produces promising results. Finally, we show that it can also yield a model that fulfills our performance requirements of stability and accuracy. We show that the selected methods are all applicable to the simulation-to-real transfer learning problem, and that the simplest method delivers the best performance.
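
As a hedged sketch of the Minimax Entropy idea (one common formulation; not necessarily the exact setup used here): the classifier maximizes the entropy of predictions on unlabeled target images while the feature extractor minimizes it, usually via a gradient-reversal layer:

    import torch
    import torch.nn.functional as F

    # Entropy of unlabeled target predictions; maximized w.r.t. the classifier
    # and minimized w.r.t. the feature extractor (illustrative sketch).
    def target_entropy(logits):
        p = F.softmax(logits, dim=1)
        return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None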

15:40
Sim-to-real reinforcement learning applied to end-to-end vehicle control
PRESENTER: András Kalapos

ABSTRACT. In this work, we study vision-based end-to-end reinforcement learning on vehicle control problems, such as lane following and collision avoidance. Our controller policy is able to control a small-scale robot to follow the right-hand lane of a real two-lane road, even though its training was carried out solely in simulation. Our model, realized by a simple convolutional network, relies only on images from a forward-facing monocular camera and generates continuous actions that directly control the vehicle. To train this policy we used Proximal Policy Optimization, and to achieve the generalization capability required for real-world performance we used domain randomization. We carried out a thorough analysis of the trained policy by measuring multiple performance metrics and comparing them to baselines that rely on other methods. To assess the quality of the simulation-to-reality transfer learning process and the performance of the controller in the real world, we measured simple metrics on a real track and compared them with results from a matching simulation. Further analysis was carried out by visualizing salient object maps.
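
Domain randomization in this setting typically means resampling visual and physical simulator parameters before each training episode, so that the real world looks like just another variation. A sketch (all parameter names and ranges are assumptions, not those used in the paper):

    import random

    # Illustrative per-episode randomization of a simulator object `sim`.
    def randomize_sim(sim):
        sim.light_intensity = random.uniform(0.5, 1.5)
        sim.road_texture = random.choice(["asphalt_a", "asphalt_b", "worn"])
        sim.camera_noise_std = random.uniform(0.0, 0.02)
        sim.camera_pitch_deg = random.uniform(-2.0, 2.0)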

16:00
Data Augmentation Powered by Generative Adversarial Networks

ABSTRACT. Face identification projects must be based on a high-quality database of the faces to be recognized. The accuracy of the identification depends on the illumination and the diversity of the subjects' expressions, among other things. Naturally, a more diverse training database and data augmentation can help in reducing the loss. In this research, we attempt to increase the quality of few-shot face identification by using Generative Adversarial Network-based data augmentation techniques. This paper presents a novel method to embed images into a GAN's latent space and to use the augmented versions for few-shot learning.
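
One common way to embed an image into a GAN's latent space (a sketch under assumed names such as G.latent_dim; the paper's novel embedding method is not reproduced here) is to optimize the latent vector to reconstruct the target image:

    import torch

    # G: pretrained generator; img: target face image tensor of G's output shape.
    def embed(G, img, steps=500, lr=0.05):
        z = torch.randn(1, G.latent_dim, requires_grad=True)  # latent_dim assumed
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            loss = ((G(z) - img) ** 2).mean()  # pixel reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()
        return z.detach()  # perturbations of z then yield augmented variants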

16:20
Online RPG Environment for Reinforcement Learning
PRESENTER: Gabor Pasztor

ABSTRACT. Researchers have achieved significant success in controlling various board, arena, and strategy games with Deep Neural Networks in recent years. Nevertheless, there have been relatively few attempts to control Role-Playing Games (RPGs), which are perhaps conceptually closest to real-life environments. In this paper, a lightweight, easy-to-use online RPG is introduced to test and train deep reinforcement learning algorithms in a structured environment. A number of different models were trained in our environment, and an analysis of their results and behaviour is provided. The project is open source and available at https://github.com/szemenyeim/AIRPGEnv.
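
Environments of this kind usually follow the Gym interface; a minimal interaction loop under that assumption (the registered environment id below is hypothetical, see the repository for the actual API):

    import gym

    env = gym.make("AIRPGEnv-v0")  # hypothetical id, not confirmed by the abstract
    obs = env.reset()              # classic Gym API
    done = False
    while not done:
        action = env.action_space.sample()  # replace with a trained policy
        obs, reward, done, info = env.step(action)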

15:00-16:40 Session 4B: Virtual reality and 3D sensing
Location: Virtual Room B
15:00
Integrating Human Hand Gestures with a Vision-Based Feedback Controller to Navigate a Virtual Robotic Arm
PRESENTER: Jaydip Desai

ABSTRACT. This paper reports the design and development of a real-time IMU-vision-based hybrid control algorithm to interact with a 6-DOF Kinova virtual robotic arm. The Human-Robot Interaction (HRI) control scheme proposed in this paper utilizes the embedded gyroscopic sensor from a Myo Gesture Control Armband's inertial measurement unit and an 800×600 pixel resolution Microsoft HD camera. The algorithm uses a numerical discrete-time integrator and a mean filter to process the raw angular velocity data from the gyroscope. The processed data provides the angular displacements of the robotic arm's end-effector during the user's clockwise or counterclockwise actions along the x, y, and z axes. The end-effector (gripper) motion was controlled simultaneously by the roll action through threshold comparison in the algorithm. A vision-based feedback system was designed using a computer vision toolbox and a blob analysis technique to make the system more reliable and to control the end-effector distance while reaching for desired objects. The results demonstrated effective control of the 6-DOF virtual robotic arm using the gyroscopic information and user inputs. The virtual robotic arm stopped moving upon reaching 320 mm from the desired object, as expected. Across three different objects, the maximum error between the real and the measured distance was 15.3 cm, for the cylindrical object. Owing to its smooth control and arm-gesture controller, this technology has the potential to assist people with physical impairments or neurological disorders in performing activities of daily living with an assistive robotic arm in the near future.
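
The two signal-processing blocks named in the abstract are simple to sketch; assuming, for illustration only, a fixed-rate IMU stream and a sliding window of raw angular velocities:

    # Mean filter over a window of raw gyroscope readings, followed by a
    # discrete-time integrator accumulating angular displacement.
    # The 50 Hz sample period is an assumption, not taken from the paper.
    def mean_filter(omega_window):
        return sum(omega_window) / len(omega_window)

    def integrate_angle(angle, omega_window, dt=0.02):
        return angle + mean_filter(omega_window) * dt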

15:20
Combining immersion and interaction in XR training with 360-degree video and 3D virtual objects
PRESENTER: George Salazar

ABSTRACT. Research shows that XR experiences activate portions of the brain that facilitate learning. Case studies in industry bear this out, showing enormous benefits of XR when used to train employees, including reduced time to learn and complete a task, increased ability to perform the task correctly the first time, and better knowledge retention than with traditional training methods. The emerging technologies that fall under the umbrella term XR (virtual reality, augmented reality, mixed reality, and 360-degree video) can facilitate or enhance learning through different affordances. The degree of immersion and the ability to exercise agency through interaction enhance learning. 360-degree video is highly immersive, but it is not inherently interactive. It is theorized that by inserting interactive 3D objects into a 360-degree video environment, both immersion and interactivity can be achieved, with the result of enhanced learning. The objectives of this article are to create a prototype virtual environment very close to reality and to test the technical effectiveness of learning and situational awareness using 360-degree video together with 3D objects. An iterative human-centered design (HCD) process is proposed that integrates airplane pilots and training personnel as end-users in the co-creation of an aviation training system; it features the active involvement of users, a clear understanding of tasks, good perception between user and technology, designed interactions, and a multidisciplinary design. A few scripts were created in the prototype to improve the user experience (UX) of interacting with the user interface (UI) and to track the user's head movement in the environment to facilitate gaze control. The prototype app was presented to flight instructors and pilots using a Samsung S7 smartphone with a Samsung Gear VR by Oculus. The prototype also allows the experience to be viewed on a mobile device without VR glasses. The experience is based on a 360-degree video taken inside a Cessna 150, with a view of the panel. 3D objects were inserted and made interactive so the user can control them. A survey asking for suggestions to improve the next version of the prototype was given to the test subjects; it helped in analyzing the strengths and weaknesses of using a combination of 360-degree video and 3D objects for training. Preliminary findings suggest user acceptance of the system, improved pilot confidence before the actual flight, decreased errors, and better situational awareness. Checklists were executed more accurately and more quickly by the pilots. Using 3D virtual objects promotes better recognition of certain panel instruments, because they are made more visible and notable with the insertion of arrows and different colors for emphasis. Some shortcomings of the system included restricted movement in the video, with no changes of axis in front of the panel due to the fixed camera. Another shortcoming is that some pilots do not feel comfortable using VR glasses. In conclusion, the combination of 360-degree video and 3D virtual objects can help pilots train better, more safely, in less time, and with more confidence in the use of the aircraft checklist. The technology provides an alternative for those without access to an aircraft, permitting them to learn by themselves with the use of their smartphones.

15:40
Augmented Reality life-size flight panel for checklist training

ABSTRACT. A checklist before a flight demands deep knowledge of the aircraft and its panel, avionics, instruments, functions, and cockpit layout. The student training to be a pilot, or an advanced pilot aiming for an upgrade certification, must know each instrument very deeply, along with its position on the flight panel. Every second spent searching for the location of an instrument, button, or indicator adds up by the end of the checklist, resulting in a poor starting procedure. This paper presents the idea that Augmented Reality (AR) technology can help pilots improve their skills and learn the flight panel layouts of different aircraft. It is also anticipated that this technology would aid in developing more robust aircraft human interfaces. Augmented Reality is a technology that overlays virtual components on a real-world environment viewed through a device, adding virtual objects to the real world in real time during the user's experience. With AR, pilots can see a virtual flight panel in front of them and, using their smartphones, visualize and study each component before going to a real, physical aircraft. The AR training methodology used is Human-Centered Design (HCD), a multidisciplinary process that involves many stakeholders who contribute design skills, including engineers, flight instructors, and pilots. Key to the importance of HCD is the iterative process of continuous testing and evaluation of the system, which can flesh out problems with the AR application related to the aircraft the user is learning, or problems with the aircraft human interface itself. The future of AR depends on major innovations in all fields. In this paper, a life-size prototype was created and tested by flight instructors, pilots, engineers, and students, with the visualization of a virtual flight panel using Augmented Reality on their smartphones or tablets. The results showed that the virtual 3D model can be very realistic and useful to the pilot, with the ability to simulate instrument failures in order to check whether the pilot paid attention to the flight indicators, or whether the aircraft has deficiencies in its human interface design that should be corrected or worked around in flight. The prototype has animated controls and parts that support interactive and more complex tasks in different situations. This application can help the pilot be more confident, faster, and safer when flying. Spending less time on the checklist in a real aircraft or a flight simulator will decrease the cost of the process, and AR training will increase safety. AR also improves situational awareness (SA), the ability to perceive, comprehend, and project future actions in a scenario; both SA and mental models of the system are important in minimizing human error.

16:00
VR and Depth Camera based Human-Robot Collision Predictor System with 3-Finger Gripper Assisted Assembly Device
PRESENTER: Imre Paniti

ABSTRACT. It is known that Human-Robot Collaboration (HRC) performance improves in some assembly tasks when a robot emulates the effective coordination behaviours observed in human teams, but close interaction can cause collisions, which should be avoided. There are several methods that can be used to communicate the intention of the robot; however, these are mainly acoustic or visual signals. In this paper, a Virtual Reality and Depth Camera based system is presented in which vibration signals are used to alert the user to a probable collision with a robot equipped with a 2-Finger gripper. Experimental tests are carried out in an assembly task with another, 3-Finger, gripper, which functions as a Flexible Assembly Device.
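
A collision predictor of this general kind might run a proximity check between the human point cloud from the depth camera and sampled points of the predicted robot configuration; the following is an assumed, simplified illustration (all names and the threshold are placeholders, not the system's internals):

    import numpy as np

    # human_pts: (N, 3) points from the depth camera; robot_pts: (M, 3)
    # sampled points of the predicted robot pose, in a common frame.
    def collision_risk(human_pts, robot_pts, threshold=0.3):
        d = np.linalg.norm(human_pts[:, None, :] - robot_pts[None, :, :], axis=2)
        return d.min() < threshold  # True -> trigger the vibration alert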

16:20
Low-cost 3D Scanning Table for Educational Purposes
PRESENTER: Miroslav Kohut

ABSTRACT. Students of Robotics and Cybernetics have to acquire multiple skills in areas such as programming, hardware, and 3D vision. Many universities provide different types of hardware to let students gain practical, not just theoretical, experience. Universities have difficulty covering the whole spectrum of knowledge during exercises with a specific type of hardware, and teaching becomes more and more expensive. This paper describes a low-cost 3D scanning device that was created to cover the fields of robotics and 3D vision and can easily be used for educational purposes. The cost of the device is up to 150 Euro, and it can be used in multiple ways, such as microcontroller programming, servo control, 3D vision, 3D modelling, and building a better understanding of spatial transformations. The device has an open API and is fully compatible with ROS (Robot Operating System). This paper describes the function of the device and its integration into a specific subject of study. To ease integration into the course, an API description and an open-source package will be provided. The last part of the paper describes how to use the device to help students understand spatial and point cloud transformations, basic data filtering, and the mathematics required for obtaining a digital 3D object representation.
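
As a small worked example of the point cloud transformation such a device is meant to teach (an illustrative sketch, not the course material), a rigid transformation applies a rotation and a translation to every scanned point:

    import numpy as np

    # points: (N, 3) scan; R: (3, 3) rotation matrix; t: (3,) translation.
    def transform_cloud(points, R, t):
        return points @ R.T + t

    # Example: rotate a scan 90 degrees about the z axis before merging it
    # into the digital 3D object representation.
    Rz = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
    cloud = transform_cloud(np.random.rand(100, 3), Rz, np.array([0.0, 0.0, 0.1]))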