IAS-17: 17TH INTERNATIONAL CONFERENCE ON INTELLIGENT AUTONOMOUS SYSTEMS
PROGRAM FOR WEDNESDAY, JUNE 15TH

09:00-10:00 Session 6: Keynote talk
Location: Ban Jelačić
09:00
Learning from Human-Robot Interaction

ABSTRACT. Robots working in human environments need to learn from and adapt to their users. In this talk, I will describe the challenges of robot learning during human-robot interaction: what should be learned? How can a user effectively provide feedback and input? I will illustrate the challenges with examples of robots in different roles and applications, including rehabilitation, collaboration in industrial and field settings, and education and entertainment.

10:00-10:30 Coffee Break
10:30-12:10 Session 7A: (iS) Advanced and intelligent control design for underwater robots
Location: Ban Jelačić
10:30
(i) Acoustical Underwater Localization of a Remotely Operated Vehicle in Mariculture

ABSTRACT. Localization of underwater vehicles, namely remotely operated vehicles (ROVs), used in autonomous mariculture inspection applications represents a challenging problem. The need for accurate localization of an ROV is further emphasized by the often cluttered underwater environment of fisheries, where the many ropes and moorings around the net pens can entangle the ROV's tether. This paper presents an overview and preliminary results of the HEKTOR (Heterogeneous Autonomous Robotic System in Viticulture and Mariculture) project regarding ROV localization using acoustics mounted on an autonomous surface vehicle (ASV). The ROV and the developed ASV are described, together with the hardware and software integration of a short baseline acoustic localization system. Preliminary sea trial results show promising performance of the localization system, indicating that it could be used in autonomous net pen inspection missions.
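
For readers unfamiliar with acoustic positioning, the geometry behind such a range-based fix can be sketched compactly: the ROV position is recovered from ranges measured at receivers with known offsets. The Python sketch below is purely illustrative (the receiver layout, simulated ROV position, and solver are invented, not the HEKTOR implementation):

    import numpy as np

    # Hypothetical receiver layout on the ASV (metres) and a simulated ROV position.
    receivers = np.array([[1.0, 1.0, 0.0],
                          [1.0, -1.0, 0.0],
                          [-1.0, 1.0, 0.0],
                          [-1.0, -1.0, -0.2]])
    true_pos = np.array([3.0, 4.0, -8.0])
    ranges = np.linalg.norm(true_pos - receivers, axis=1)   # ideal range measurements

    def trilaterate(receivers, ranges, iters=30):
        """Gauss-Newton least squares on the range equations."""
        p = np.array([0.0, 0.0, -5.0])                      # initial guess below the ASV
        for _ in range(iters):
            diffs = p - receivers
            dists = np.linalg.norm(diffs, axis=1)           # predicted ranges
            J = diffs / dists[:, None]                      # d(range)/d(position)
            p = p - np.linalg.lstsq(J, dists - ranges, rcond=None)[0]
        return p

    print(trilaterate(receivers, ranges))                   # converges to ~[3, 4, -8]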

10:50
(i) A Virtual Online Simulator Design for the Docking of Unmanned Underwater Vehicle

ABSTRACT. In this paper, a virtual online simulator is designed for the docking simulation task of an Unmanned Underwater Vehicle (UUV), providing an economical way to evaluate the reliability of related algorithms before real underwater tests. A host computer and a lower computer are selected, with several units designed to send and receive data via TCP/IP communication in the virtual online simulator. The host computer runs the Simulink control program, which calculates the mathematical model, monitors the state of the UUV, and simulates or interacts with the real sensors during the docking process. The lower computer runs the Unity simulation software, which obtains the data from the host computer and implements the navigation and control of the UUV. The virtual docking scene is designed to display the simulation of the UUV on the screen, and four different navigation modes are designed to carry out the experiments on motion verification and underwater docking of the UUV. The experimental results show that the designed virtual online simulator can achieve the simulation of the docking task at a lower cost.
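
Since the abstract centres on the TCP/IP split between the two computers, a minimal sketch of such an exchange may be helpful. The port number and packet layout below are invented; the simulator's actual message format is not described in the abstract:

    import socket
    import struct
    import threading

    PORT = 50007        # arbitrary port for the example
    STATE_FMT = "<6d"   # x, y, z, roll, pitch, yaw as little-endian doubles

    def lower_computer():
        """Receives one UUV state packet, as the Unity side might."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind(("127.0.0.1", PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                data = conn.recv(struct.calcsize(STATE_FMT))
                print("lower computer received state:", struct.unpack(STATE_FMT, data))

    def host_computer():
        """Sends one simulated UUV state, as the Simulink side might."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect(("127.0.0.1", PORT))
            cli.sendall(struct.pack(STATE_FMT, 1.0, 2.0, -3.0, 0.0, 0.1, 1.57))

    t = threading.Thread(target=lower_computer)
    t.start()
    host_computer()
    t.join()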

11:10
(i) Simulation Environment for Underwater Vehicles Testing and Training in Unity3D

ABSTRACT. Autonomous Underwater Vehicle (AUV) design and operation are relatively new topics, and building a functional one often requires an iterative approach. Because of the risk of damaging costly hardware, the limited availability of infrastructure suitable for testing AUVs, and the significant time investment required, it is often impractical to conduct thorough live experiments for every incremental change. In this paper, the benefits of using a simulated environment for running software tests, training neural networks, and testing the AUV's performance in arbitrary scenarios are explored. By utilising Unity3D, a cross-platform game engine, a customized framework was developed that allows setting up environments simulating various aspects of underwater operation, such as buoyancy and caustics. This framework supports communication with AUV systems (collecting observations through simulated sensors and controlling simulated actuators), gathering datasets for offline training, and randomizing parts of the environment to test the system's robustness.
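
As a taste of the physics such a framework reproduces, simulated buoyancy reduces to Archimedes' principle. Unity scripting is done in C#, so the Python sketch below only mirrors the calculation, with all parameters invented:

    RHO_WATER = 1000.0   # kg/m^3, fresh water
    G = 9.81             # m/s^2

    def buoyancy_force(submerged_volume_m3):
        """Upward Archimedes force on the submerged part of the hull."""
        return RHO_WATER * G * submerged_volume_m3

    def net_vertical_force(mass_kg, submerged_volume_m3):
        """Positive values mean the vehicle drifts upward."""
        return buoyancy_force(submerged_volume_m3) - mass_kg * G

    # A hypothetical 30 kg AUV displacing 0.031 m^3 is slightly positively buoyant.
    print(net_vertical_force(30.0, 0.031))   # ~9.8 N upward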

11:30
(i) Sliding Mode Control for Underwater Multi-DoF Hydraulic Manipulator

ABSTRACT. The underwater hydraulic manipulator is a typical underwater operation tool. With the progress of industrial automation, the applications of underwater hydraulic manipulators are increasing, and some of them place higher demands on operation accuracy. However, due to the negative influence of joint coupling, underwater dynamics, model nonlinearity, and unknown external disturbances, the control performance of underwater hydraulic manipulators is limited, which gradually restricts their use in high-precision operations. In this paper, to achieve good control performance for a multi-DoF underwater hydraulic manipulator, a comprehensive dynamic model considering joint coupling and underwater dynamics is established, and a sliding mode controller is designed that copes with the negative influence of model nonlinearity and unknown external disturbances while guaranteeing the stability of the system. Finally, experiments are carried out to verify the effectiveness and reliability of the designed controller.
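
The sliding mode idea itself can be illustrated on a single joint. The sketch below is a generic textbook law with arbitrarily chosen gains, not the paper's multi-DoF hydraulic controller:

    import numpy as np

    LAMBDA = 5.0   # sliding-surface slope
    K = 20.0       # switching gain; must dominate the disturbance bound
    PHI = 0.05     # boundary-layer width that softens chattering

    def smc_torque(q, dq, q_des, dq_des, ddq_des, inertia):
        """Sliding mode law for one joint with surface s = de + LAMBDA*e."""
        e, de = q - q_des, dq - dq_des
        s = de + LAMBDA * e
        u_eq = inertia * (ddq_des - LAMBDA * de)   # equivalent control (nominal model)
        u_sw = -K * np.clip(s / PHI, -1.0, 1.0)    # saturated switching term
        return u_eq + u_sw

On the sliding surface s = 0 the tracking error decays as e(t) = e(0) exp(-LAMBDA t); the boundary layer PHI trades a small steady-state band for reduced chattering.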

11:50
Path Following for Underwater Inspection allowing Manoeuvring Constraints

ABSTRACT. A guidance system is proposed for underwater navigation and inspection of structures, enabling path-following control objectives with manoeuvring constraints such as velocity and orientation instructions. To document a vertical surface like a ship hull, a submerged drone benefits from manoeuvring with its heading perpendicular to the object, whereas during transit the most efficient choice is to align the heading towards the next waypoint. The proposed system is simulated using a small underactuated Remotely Operated underwater Vehicle (ROV) with control in surge, sway, heave, and yaw (4 DOF). It is based on Line Of Sight (LOS) steering laws and PID controllers for the 4-DOF motion control. The waypoints, together with a list of orientation and velocity instructions for the ROV, are generated using the Parametrised Rapidly exploring Random Graph (PRRG). The LOS vector is used for heading control during transit, whereas during inspection it is used for course control. The proposed framework is tested in simulation following 3D straight lines in a lawnmower pattern and a typical path for ship hull inspection. The simulations show that the paths generated with the proposed solution are viable for inspection tasks, taking into account the manoeuvring constraints posed by the inspection mission and the properties of the vehicle.
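
The LOS steering law at the core of such a guidance system is compact enough to sketch. The lookahead distance and waypoints below are invented, and the PRRG planner and 4-DOF controllers are beyond a snippet:

    import math

    LOOKAHEAD = 2.0   # metres, tuning parameter

    def los_heading(pos, wp_prev, wp_next):
        """Desired heading from the LOS steering law for a straight path segment."""
        path_angle = math.atan2(wp_next[1] - wp_prev[1], wp_next[0] - wp_prev[0])
        # Cross-track error: signed distance from the vehicle to the path line.
        dx, dy = pos[0] - wp_prev[0], pos[1] - wp_prev[1]
        cross_track = -dx * math.sin(path_angle) + dy * math.cos(path_angle)
        return path_angle + math.atan2(-cross_track, LOOKAHEAD)

    # A vehicle 1 m left of a path along +x is steered back toward it.
    print(math.degrees(los_heading((0.0, 1.0), (0.0, 0.0), (10.0, 0.0))))  # about -27 deg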

10:30-12:10 Session 7B: Collaborative robots (II)
Location: Ban Zrinski
10:30
Validation of shared intelligence approach for teleoperating telepresence robots through inaccurate interfaces

ABSTRACT. Telepresence robots can support people with special needs (e.g., people who cannot move) by letting them remotely interact with people and environments at a distance. In this application, users can communicate with the robot via alternative channels of communication, such as brain-machine interfaces, that are less accurate than traditional mediums and allow the user to send only a limited set of commands to the robot. To overcome these limitations, shared intelligence approaches have emerged that fuse the user's inputs with intelligence on board the robot, which interprets the user's commands with respect to the environment and gives the robot deliberative ability in choosing the next action to take. In this paper, we investigate how a shared intelligence system is affected by the kind of inaccurate user input interface. For this purpose, we compare a brain-machine interface with a more reactive keyboard endowed with the same percentage of noise. Overall, the results reveal comparable navigation performance in the two conditions except for accuracy (e.g., the number of target positions reached), indicating that the system provides similar assistance in both cases. However, differences between the two modalities emerge when correlating the performance with the navigation situation, suggesting different user inclinations (more in control vs. relying on the robot's autonomy) depending on the interface and the target to reach, and underlining the necessity of adapting the shared intelligence system to the user's real-time ability and the surrounding environment.

10:50
Uncertainty Estimation for Safe Human-Robot Collaboration using Conservation Measures

ABSTRACT. We present an online and data-driven uncertainty quantification method to enable the development of safe human-robot collaboration applications. Safety and risk assessment of systems are strongly correlated with the accuracy of measurements: distinctive parameters are often not directly accessible via known models and must therefore be measured. However, measurements generally suffer from uncertainties due to the limited performance of sensors, unknown environmental disturbances, or humans. In this work, we quantify these measurement uncertainties by making use of conservation measures, which are quantitative, system-specific properties that are constant over time, space, or other state-space dimensions. The key idea of our method lies in the immediate evaluation of incoming data during run-time with reference to conservation equations. In particular, we estimate violations of a priori known, domain-specific conservation properties and consider them the consequence of measurement uncertainties. We validate our method on a use case in the context of human-robot collaboration, thereby highlighting the importance of our contribution for the successful development of safe robot systems under real-world conditions, e.g., in industrial environments. In addition, we show how the obtained uncertainty values can be directly mapped onto arbitrary safety limits (e.g., ISO 13849), which allows compliance with safety standards to be monitored during run-time.
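
The key idea admits a compact illustration: evaluate a known conserved quantity on incoming data and treat its drift as an uncertainty estimate. The sketch below invents a toy conservation property (mechanical energy of an ideal pendulum); the system-specific measures used in the paper will differ:

    import numpy as np

    M, G, L = 1.0, 9.81, 0.5   # invented pendulum parameters

    def energy(theta, omega):
        """Conserved quantity: kinetic plus potential energy of an ideal pendulum."""
        return 0.5 * M * (L * omega) ** 2 + M * G * L * (1.0 - np.cos(theta))

    def uncertainty_estimate(theta, omega):
        """Spread of the conservation residual over a window of measurements."""
        e = energy(np.asarray(theta), np.asarray(omega))
        return np.std(e - e[0])           # violation of 'energy stays constant'

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 2.0, 200)
    w = np.sqrt(G / L)                    # small-angle natural frequency
    theta = 0.3 * np.cos(w * t) + rng.normal(0.0, 0.01, t.size)
    omega = -0.3 * w * np.sin(w * t) + rng.normal(0.0, 0.01, t.size)
    print(uncertainty_estimate(theta, omega))   # grows with the injected sensor noise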

11:10
Post-facto Misrecognition Filter based on Resumable Interruptions for Coping with Real World Uncertainty in the Development of Reactive Robotic Behaviors

ABSTRACT. In this paper we propose a resumable interruption framework for robotic applications which allows misrecognition signals to be "filtered" after their occurrence. Handling misrecognition is essential for deploying reactive systems in the real world, since being over-reactive to detection errors can lead to livelocks and stagnation. For example, constantly interrupting and resuming a picking task due to misrecognition can make the robot alternate between pre-grasping and grasping motions without ever completing the task. Our solution is based on resumable interruptions: interrupted procedures are continued from the exact preemption point if similar execution requests are received shortly after a cancellation order. This acts as a post-facto misrecognition filter, which stabilizes execution and ensures task completion. Compared with standard filtering, the post-facto approach delivers signals faster and allows recovery from misrecognition over a longer time window. The proposed system is verified through real robot experiments in dynamic and static environments.
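
The timing rule behind resumable interruptions fits in a few lines. The resume window and task structure below are invented for illustration and are not the authors' framework:

    import time

    RESUME_WINDOW = 1.0   # seconds; invented threshold for 'shortly after cancellation'

    class ResumableTask:
        """Resumes from the preemption point if re-requested shortly after a cancel."""

        def __init__(self):
            self.progress = 0           # index of the next sub-step to execute
            self.cancelled_at = None    # wall-clock time of the last cancellation

        def cancel(self):
            self.cancelled_at = time.monotonic()

        def request(self, steps):
            recent = (self.cancelled_at is not None and
                      time.monotonic() - self.cancelled_at < RESUME_WINDOW)
            if not recent:
                self.progress = 0       # old cancellation: treat as a genuinely new task
            for i in range(self.progress, len(steps)):
                print("executing", steps[i])
                self.progress = i + 1

    task = ResumableTask()
    task.progress = 1                   # pretend the task was preempted after "approach"
    task.cancel()                       # e.g. triggered by a misrecognized signal
    task.request(["approach", "pre-grasp"])   # arrives quickly: resumes at "pre-grasp"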

11:30
Gestural and Touchscreen Interaction for Human-Robot Collaboration: a Comparative Study

ABSTRACT. Close human-robot interaction (HRI), especially in industrial scenarios, has been extensively investigated for the advantages of combining human and robot skills. For effective HRI, the validity of currently available human-machine communication media and tools should be questioned, and new communication modalities should be explored. This article proposes a modular architecture that allows human operators to interact with robots through different modalities. In particular, we implemented the architecture to handle gestural and touchscreen input, respectively through a smartwatch and a tablet. Finally, we performed a comparative user experience study of these two modalities.

11:50
Development and Evaluation of Fiber Reinforced Modular Soft Actuators and an Individualized Soft Rehabilitation Glove

ABSTRACT. Recently, special attention has been given to soft actuators for rehabilitation support because of their high viscoelasticity and flexibility. Individualized support that takes into account users' morphological and biomechanical characteristics is the next critical step towards their use in rehabilitation practice. Compared with single-structure soft actuators, which use one actuator for several joints and thus must be designed and fabricated for each individual user, joint-modular soft actuators support each joint separately and are connected and adjusted by rigid connectors that are much easier to fabricate. Several modular actuators have been developed and reported; however, neither their support performance (such as bending angle and torque for joint support) nor their individual adaptability has been appropriately evaluated. In this study, we propose a new modular soft actuator that improves the bending support performance for individualized finger motion with enlarged chambers and matched fiber reinforcement. Moreover, we built a soft rehabilitation glove with the proposed modular soft actuators and conducted an objective evaluation of both the actuators and the glove using a dummy finger and a dummy hand. As a result, the new modular soft actuators showed better motion support performance, and the soft rehabilitation glove could support multiple grasping profiles. This is a step towards the use of soft actuators in rehabilitation practice.

10:30-12:10 Session 7C: Perception
Location: Ban Mažuranić
10:30
On Hand-Eye Calibration via On-Manifold Gauss-Newton Optimization

ABSTRACT. Perception of autonomous robotic systems is highly dependent on accurate fusion of data from multiple heterogeneous sensors. However, to maximally exploit the advantages of such setups, sensor data fusion necessitates accurate extrinsic calibration. In this paper, we propose a novel derivation of the Gauss-Newton-based iterative on-manifold batch solution to the hand-eye calibration problem. By adopting a special Euclidean group formulation of the objective function, we derive exact and approximate solutions and validate them via synthetic and real-world experiments. The results show that the accuracy of the proposed approximate solutions is on par with the exact solution and with alternative on-manifold iterative solutions. Moreover, due to the near commutativity of the hand-eye problem in low-noise scenarios, the proposed 0th-order approximation achieves up to 4 times faster execution, opening up practical possibilities for its use in more complex optimization techniques.
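
To make the structure of an on-manifold Gauss-Newton step concrete, here is a rotation-only toy version of the hand-eye equation AX = XB with a numerical tangent-space Jacobian (assuming SciPy). The paper works on the full special Euclidean group and derives the solutions analytically:

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def residual(Rx, pairs):
        """Stacked Log((Ra*Rx)^-1 * (Rx*Rb)); zero when Ra*Rx = Rx*Rb for all pairs."""
        return np.concatenate([((Ra * Rx).inv() * (Rx * Rb)).as_rotvec()
                               for Ra, Rb in pairs])

    def hand_eye_rotation(pairs, iters=10, eps=1e-6):
        """On-manifold Gauss-Newton, right-perturbed: Rx <- Rx * Exp(delta)."""
        Rx = R.identity()
        for _ in range(iters):
            r0 = residual(Rx, pairs)
            J = np.zeros((r0.size, 3))
            for j in range(3):              # numerical Jacobian on the tangent space
                d = np.zeros(3)
                d[j] = eps
                J[:, j] = (residual(Rx * R.from_rotvec(d), pairs) - r0) / eps
            delta = np.linalg.lstsq(J, -r0, rcond=None)[0]
            Rx = Rx * R.from_rotvec(delta)
        return Rx

    # Synthetic check: generate motion pairs from a known hand-eye rotation.
    rng = np.random.default_rng(1)
    Rx_true = R.from_rotvec([0.2, -0.4, 0.1])
    pairs = []
    for _ in range(5):
        Ra = R.from_rotvec(rng.normal(size=3))
        pairs.append((Ra, Rx_true.inv() * Ra * Rx_true))   # Rb = Rx^-1 * Ra * Rx
    print(hand_eye_rotation(pairs).as_rotvec())            # ~[0.2, -0.4, 0.1]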

10:50
YOLOPose: Transformer-based Multi-Object 6D Pose Estimation using Keypoint Regression

ABSTRACT. 6D object pose estimation is a crucial prerequisite for autonomous robot manipulation applications. The state-of-the-art models for pose estimation are convolutional neural network (CNN)-based. Lately, Transformers, an architecture originally proposed for natural language processing, have been achieving state-of-the-art results in many computer vision tasks as well. Equipped with the multi-head self-attention mechanism, Transformers enable simple single-stage end-to-end architectures for jointly learning object detection and 6D object pose estimation. In this work, we propose YOLOPose (short for You Only Look Once Pose estimation), a Transformer-based multi-object 6D pose estimation method based on keypoint regression. In contrast to the standard heatmap representation for predicting keypoints in an image, we directly regress the keypoint coordinates. Additionally, we employ a learned orientation estimation module to predict the orientation from the keypoints. Together with a separate translation estimation module, our model is end-to-end differentiable. Our method achieves state-of-the-art results on the YCB-Video dataset. Running at ~59 fps, our method is well suited for real-time applications.
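
In the classical pipeline, regressed keypoints determine the pose via PnP; the sketch below (assuming OpenCV) uses solvePnP as a stand-in for the paper's learned orientation and translation modules, with the object model, intrinsics, and poses invented:

    import numpy as np
    import cv2

    # Invented object model keypoints (metres) and pinhole intrinsics.
    object_pts = np.array([[0.05, 0.05, 0.0], [-0.05, 0.05, 0.0],
                           [-0.05, -0.05, 0.0], [0.05, -0.05, 0.0],
                           [0.0, 0.0, 0.05], [0.0, 0.0, -0.05]])
    K = np.array([[600.0, 0.0, 320.0],
                  [0.0, 600.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def pose_from_keypoints(image_pts):
        """Recover rotation and translation from regressed 2D keypoints via PnP."""
        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
        return rvec, tvec

    # Simulate 'regressed' keypoints by projecting the model with a known pose.
    rvec_true = np.array([0.1, 0.2, 0.3])
    tvec_true = np.array([0.0, 0.0, 0.5])
    image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)
    print(pose_from_keypoints(image_pts))   # recovers rvec_true, tvec_true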

11:10
People Tracking in Panoramic Video for Guiding Robots

ABSTRACT. A guiding robot aims to effectively bring people to and from specific places within environments possibly unknown to them. During this operation, the robot should be able to detect and track the accompanied person, trying never to lose sight of her/him. A way to minimize such losses is to use an omnidirectional camera: its 360° Field of View (FoV) guarantees that any framed object cannot leave the FoV unless occluded or very far from the sensor. However, the acquired panoramic videos introduce new challenges in perception tasks such as people detection and tracking, including the large size of the images to be processed, the distortion effects introduced by the cylindrical projection, and the periodic nature of panoramic images. In this paper, we propose a set of targeted methods that effectively adapt a standard people detection and tracking pipeline, originally designed for perspective cameras, to panoramic videos. Our methods have been implemented and tested inside a deep learning-based people detection and tracking framework with a commercial 360° camera. Experiments performed on datasets specifically acquired for guiding robot applications and on a real service robot show the effectiveness of the proposed approach over other state-of-the-art systems. With this paper, we release the acquired and annotated datasets and the open-source implementation of our method.
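
Among the challenges listed, the periodic nature of panoramic images has a simple textbook mitigation worth sketching (not necessarily the authors' method): wrap-pad the image with its opposite edge before detection, then fold the boxes back. The pad width below is arbitrary:

    import numpy as np

    PAD = 64   # pixels copied from the opposite edge; a tuning choice

    def pad_panorama(img):
        """Wrap-pad horizontally so objects crossing the seam appear contiguous."""
        return np.concatenate([img[:, -PAD:], img, img[:, :PAD]], axis=1)

    def fold_x(x, width):
        """Map an x coordinate on the padded image back to panorama coordinates."""
        return (x - PAD) % width

    img = np.zeros((480, 1920, 3), dtype=np.uint8)
    padded = pad_panorama(img)                  # shape (480, 2048, 3)
    # A box at padded x in [10, 70] folds to [1866, 6]; x_min > x_max flags a
    # seam-crossing detection that the tracker must treat as one object.
    print(fold_x(10, 1920), fold_x(70, 1920))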

11:30
Clustering-based refinement for 3D human body parts segmentation

ABSTRACT. A common approach to human body parts segmentation on 3D data involves the use of a 2D segmentation network followed by 3D projection. With this approach, several errors can be introduced into the final 3D segmentation output, such as segmentation errors and reprojection errors. Such errors are even more significant for very small body parts such as hands. In this paper, we propose a new algorithm that aims to reduce such errors and improve the 3D segmentation of human body parts. The algorithm detects noise points and wrong clusters using the DBSCAN algorithm and changes the labels of the points by exploiting the shape and position of the clusters. We evaluated the proposed algorithm on the 3DPeople synthetic dataset and on a real dataset, highlighting how it can greatly improve the 3D segmentation of small body parts like hands. With our algorithm, we achieved an IoU improvement of up to 4.68% on the synthetic dataset and up to 2.30% in the real scenario.
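
A hedged sketch of this kind of refinement (assuming scikit-learn): keep the largest DBSCAN cluster per body part and relabel the remainder from their nearest neighbours. This reassignment rule is a simplification of the shape- and position-based rule in the paper, and eps and min_samples are invented:

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.neighbors import NearestNeighbors

    def refine_part(points, labels, part, eps=0.05, min_samples=10):
        """Relabel spurious DBSCAN clusters inside one body-part label."""
        refined = labels.copy()
        idx = np.flatnonzero(labels == part)
        other = np.flatnonzero(labels != part)
        if idx.size == 0 or other.size == 0:
            return refined
        clusters = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[idx])
        if not (clusters >= 0).any():
            return refined                                    # all noise: leave as-is
        main = np.bincount(clusters[clusters >= 0]).argmax()  # largest cluster = the part
        wrong = idx[clusters != main]
        if wrong.size == 0:
            return refined
        nn = NearestNeighbors(n_neighbors=1).fit(points[other])
        _, j = nn.kneighbors(points[wrong])
        refined[wrong] = labels[other][j.ravel()]             # inherit nearest other label
        return refined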

11:50
Autonomous Exploration for 3D Mapping Using a Mobile Manipulator Robot with an RGB-D Camera

ABSTRACT. This study focuses on an exploration method for unknown environments that generates a 3D occupancy grid map using a mobile manipulator robot with an RGB-D camera. To determine the destination and camera pose, the boundary between known and unknown regions is detected as a frontier, and a frontier observation map is created that records the number of frontiers observable from a given position and pose. Based on this map, an approach is explored that efficiently generates the map with a short travel distance and time by actively changing the camera posture while moving. However, if the robot simply selects the position from which the largest number of frontiers can be observed as its destination, it may have to move back and forth to the same location many times when some frontiers are missed, resulting in a longer map generation time. To solve this problem, a method is proposed that modifies the frontier observation map based on distance and angle so that frontiers near the robot and in its direction of motion are observed preferentially. The effectiveness of this modification in determining the destination, in terms of path length and map generation time, is verified by simulations in several environments. In addition, the effect of actively changing the camera posture while the robot is moving is evaluated.
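
Frontier detection itself is compact. The sketch below works on a 2D occupancy grid slice (free = 0, occupied = 1, unknown = -1) and shows only the core test; the paper's 3D map, observation counting, and distance/angle weighting go further:

    import numpy as np

    FREE, OCC, UNK = 0, 1, -1

    def frontiers(grid):
        """Cells that are free and 4-adjacent to at least one unknown cell."""
        unknown = grid == UNK
        near_unknown = np.zeros_like(unknown)
        near_unknown[1:, :] |= unknown[:-1, :]
        near_unknown[:-1, :] |= unknown[1:, :]
        near_unknown[:, 1:] |= unknown[:, :-1]
        near_unknown[:, :-1] |= unknown[:, 1:]
        return (grid == FREE) & near_unknown

    grid = np.full((5, 5), UNK)
    grid[:3, :3] = FREE
    grid[1, 1] = OCC
    print(np.argwhere(frontiers(grid)))   # the free cells bordering the unknown region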

12:10-13:10 Lunch Break
13:10-14:10 Session 8: Keynote talk
Location: Ban Jelačić
13:10
Probabilistic and Machine Learning Approaches for Autonomous Robots and Automated Driving

ABSTRACT. For autonomous robots and automated driving, the capability to robustly perceive their environments and execute their actions is the ultimate goal. The key challenge is that no sensors and actuators are perfect, which means that robots and cars need the ability to properly deal with the resulting uncertainty. In this presentation, I will introduce the probabilistic approach to robotics, which provides a rigorous statistical methodology for solving the state estimation problem. I will furthermore discuss how this approach can be extended with state-of-the-art techniques from machine learning to bring us closer to the development of truly robust systems able to serve us in our everyday lives.
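
The canonical entry point to this methodology is the Bayes filter. A one-dimensional Kalman filter sketch (all noise parameters invented) shows how motion and measurement uncertainty are fused:

    def kalman_1d(mu, var, u, z, q=0.1, r=0.5):
        """One predict-update cycle: motion noise q, measurement noise r."""
        mu, var = mu + u, var + q     # predict: move by u, uncertainty grows
        k = var / (var + r)           # Kalman gain: trust in data vs. prediction
        return mu + k * (z - mu), (1 - k) * var

    mu, var = 0.0, 1.0
    for u, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:
        mu, var = kalman_1d(mu, var, u, z)
        print(round(mu, 3), round(var, 3))   # variance shrinks as evidence accumulates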

14:10-14:50 Session 9: Industry Session
Location: Ban Jelačić
14:10
Visage Technologies

ABSTRACT. https://visagetechnologies.com/

14:25
Gideon Ltd.
PRESENTER: Josip Cesic

ABSTRACT. https://www.gideon.ai/