ROBOT2023: 6TH IBERIAN ROBOTICS CONFERENCE
PROGRAM FOR WEDNESDAY, NOVEMBER 22ND

10:40-12:20 Session 1A: Collaborative Robotics
Location: Auditorium
10:40
A Collaborative Robot-Assisted Manufacturing Assembly Process

ABSTRACT. An effective human-robot collaborative process results in the reduction of the operator’s workload, promoting a more efficient, productive, safer and less error-prone working environment. However, the implementation of collaborative robots in industry is still challenging. In this letter, we compare manual and robot-assisted assembly processes to evaluate the effectiveness of collaborative robots while featuring different modes of operation (coexistence, cooperation and collaboration). Results indicate an improvement in ergonomic conditions and ease of execution without substantially compromising assembly time. Furthermore, the robot is intuitive to use and guides the user through the proper sequencing of the process.

11:00
A cognitive architecture for human-aware interactive robot learning with industrial collaborative robots

ABSTRACT. While programming industrial collaborative robots is becoming increasingly accessible, building adaptable and modular behaviors is still out of reach for most non-programmer experts. Allowing any human to teach cobots complex and personalized behaviors through natural means of communication would likely ease their acceptability and long-term integration in industry of all sizes. In this paper, we present a prototype of a robotic cognitive architecture for interactive task learning (ITL) integrating human preferences in a human-robot collaborative industrial context. The architecture is based on the integration of connectionist modules, such as deep learning modules, and symbolic semantic graphs, such as behavior trees, for modular skill representation and learning. An experimental validation was performed on a real UR10e industrial collaborative robot. The cobot is taught, online and incrementally, a simple task with variations based on human preferences.

11:20
Safety-based Stable Variable Impedance Controller for Interaction Tasks Involving Human Body Surfaces

ABSTRACT. This paper contributes to the design of Variable Impedance Controllers with safety guarantees for tasks involving contact between a robotic manipulator and a human body surface. In particular, the work focuses on a bathing task scenario with different contact and free-movement phases. Novel modulation laws based on safety criteria are introduced, and a stability and optimality condition is considered as the safety guarantee. To assess its fulfilment, the controller is formulated as a Linear Parameter Varying (LPV) system, so that the guarantees can be expressed as Linear Matrix Inequalities (LMI) that define a convex optimisation problem. Results are presented for a 7-DoF WAM robotic manipulator, both in simulation and on the real platform, including a comparison against constant-impedance and position-force switching controllers. The assessed Variable Impedance Controller presents stable behaviour while keeping the contact force closer to the desired one.

11:40
Trajectory generation using Dual-Robot haptic interface for Reinforcement Learning from Demonstration

ABSTRACT. In robot learning, techniques such as Learning from Demonstration (LfD) and Reinforcement Learning (RL) have become widely popular among developers. However, these approaches can result in inefficient strategies when it comes to training more than one agent interacting in the same space with several objects and unknown obstacles. To address this problem, Reinforcement Learning from Demonstration (RLfD) allows the agent to learn and evaluate its performance from a set of demonstrations provided by a human expert while generalising from them through RL training. In dual-robot applications, this approach is suitable for training agents that perform collaborative tasks. For this reason, a dual-robot haptic interface has been designed to produce dual-manipulation trajectories to feed an RLfD agent. Haptics allows high-quality demonstrations to be performed following an impedance-control approach. The trajectories obtained will be used as positive demonstrations so that the training environment can generate automatic ones. As a result, this dual-robot haptic interface will provide a few trajectory demonstrations of dual manipulation in order to train agents using RL strategies. The aim of this research is to generate trajectories with this dual-robot haptic interface to train one or more agents following RLfD paradigms. Results show that trajectories performed with this interface present less error and deviation than those performed with a non-haptic interface, increasing the quality of the training data.

12:00
A Robotic System to Automate the Disassembly of PCB Components

ABSTRACT. The disposal and recycling of electronic waste (e-waste) poses a significant global challenge. The disassembly of components is a crucial step in achieving an efficient recycling process, avoiding destructive methods. While manual disassembly remains the norm due to the diversity and complexity of components, there is a growing interest in automating the process to enhance efficiency and reduce labor costs. This study endeavors to automate the desoldering process and the extraction of components from printed circuit boards (PCBs) by implementing a robotic solution. The proposed strategy encompasses multiple phases, one of which involves the precise contact of the developed robotic tool with the components on the PCB. The tool was designed to exert a controlled force on the PCB component, thereby efficiently desoldering it from the board. Results demonstrate the feasibility of achieving a high success rate in removing PCB components, including those from mobile phones. The observed desoldering success rate is approximately 100% for the larger components.

10:40-12:20 Session 1B: Mobile Robotics
Location: Student Hub 1
10:40
The use of semantic knowledge in the creation of tasks for robotic agents, minimising human error

ABSTRACT. With the advancement of technology and the increasing application of robotic agents in many areas, they are being applied in increasingly complex and changing environments, moving away from standard (mainly static) work cells where operators and machines follow strictly defined boundaries and schedules. The non-deterministic workflow of these workspaces raises concerns about planning tasks and actions between resources, as the work environment is now dynamic. The traditional approach to human-robot interfaces is based on explicit programming and pre-defined commands. However, with advances in AI and natural language processing, incorporating semantic knowledge into the interface has become a promising avenue for enabling more natural, intuitive and context-aware interactions between humans and robots. Semantic knowledge and ontologies enable the interface to understand the context of the task-creation process, allowing robotic agents to consider the overall situation, the environment and any relevant previous tasks or actions, resulting in more context-appropriate task execution. Following these requirements, this study presents a solution for programming and assigning tasks and actions to robotic agents, in which semantic knowledge is used to minimise human error, taking into account environment changes such as the positioning of features and parts.

11:00
Geometric Pattern-based Computer Vision Positioning System

ABSTRACT. Visible Light Positioning refers to the estimation of position based on the acquisition of images of previously known reference beacons. This work proposes the usage of visible light sources, arranged in a specific geometric pattern, that allows for their identification and the subsequent estimation of the agent’s position with respect to the detected beacon. The light sources are considered to be point sources, which allows having reference light marks at considerable distances. The proposed approach is organized in two stages; the first stage corresponds to the identification of the light sources, and the second stage to the pose estimation of the agent. The algorithm is validated by simulation, testing the accuracy of the system as a function of the distance to the beacon, image resolution and uncertainty in the light sources region of interest. Furthermore, the error propagation of the proposed algorithm is verified in different conditions.

11:20
Map Merge and Accurate Localization in Multi-Robot Systems in Real Environments

ABSTRACT. This article presents an approach to map merging and precise position determination in multi-robot systems operating in real environments, using the matrix transformation technique and Particle Swarm Optimization (PSO). The primary objective is to combine information from multiple maps, represented by occupancy matrices, into a unique and comprehensive map. This map can be employed in applications that require cooperation and coordination among robots. To achieve the proposed objective, the PSO technique is applied to find the optimal values of rotation (\(\psi\)), translation along the x-axis (\(dx\)), and translation along the y-axis (\(dy\)) that optimize the map merging. The merging is performed based on correspondences identified between the maps, using the Jaccard Similarity algorithm. The PSO approach is employed in this work to determine the best possible overlap between the maps. After obtaining the transformation values, they are used to find and update the real positions of the robots on the resulting fused map. Consequently, the proposed technique provides an advanced and efficient solution for map merging and robot positioning in real environments. This approach opens doors for the application of effective cooperation techniques and intelligent navigation for multiple robots in complex and dynamically changing scenarios. An experiment video has been produced and can be accessed through the following link https://youtu.be/RLQsJhlnMuQ.
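The fitness function at the heart of such a PSO search can be illustrated with a toy sketch (not the authors' implementation): each particle encodes a candidate (ψ, dx, dy), and its quality is the Jaccard similarity of the occupied cells after transforming one map into the other's frame. The maps, cell coordinates, and transform values below are hypothetical.

```python
import math

def transform_cells(cells, psi, dx, dy):
    """Apply a 2D rigid transform (rotation psi, translation dx, dy)
    to a set of occupied-cell coordinates, rounding back to the grid."""
    c, s = math.cos(psi), math.sin(psi)
    return {(round(c * x - s * y + dx), round(s * x + c * y + dy))
            for x, y in cells}

def jaccard(cells_a, cells_b):
    """Jaccard similarity of two sets of occupied cells:
    |intersection| / |union| (1.0 means a perfect overlap)."""
    inter = len(cells_a & cells_b)
    union = len(cells_a | cells_b)
    return inter / union if union else 1.0

# Toy example: map B is map A translated by (2, 0).
map_a = {(0, 0), (1, 0), (2, 0), (2, 1)}
map_b = {(2, 0), (3, 0), (4, 0), (4, 1)}

# A PSO particle encodes (psi, dx, dy); its fitness is the overlap
# after transforming map B into map A's frame.
score = jaccard(map_a, transform_cells(map_b, 0.0, -2, 0))
print(score)  # 1.0 for the correct transform
```

A PSO run would simply evaluate this fitness for every particle and move the swarm toward the best-scoring transform.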

11:40
Category Theory for Autonomous Robots: The Marathon 2 Use Case

ABSTRACT. Model-based systems engineering (MBSE) is a methodology that exploits system representation during the entire system life-cycle. The use of formal models has gained momentum in robotics engineering over the past few years. Models play a crucial role in robot design; they serve as the basis for achieving holistic properties, such as functional reliability or adaptive resilience, and facilitate the automated production of modules. We propose the use of formal conceptualizations beyond the engineering phase, providing accurate models that can be leveraged at runtime. This paper explores the use of Category Theory, a mathematical framework for describing abstractions, as a formal language to produce such robot models. To showcase its practical application, we present a concrete example based on the Marathon 2 experiment. Here, we illustrate the potential of formalizing systems---including their recovery mechanisms---which allows engineers to design more trustworthy autonomous robots. This, in turn, enhances their dependability and performance.

12:00
Development of a Low-cost 3D Mapping Technology with 2D LIDAR for Path Planning Based on the A* Algorithm

ABSTRACT. This article presents the development of a low-cost 3D mapping technology for trajectory planning using a 2D LiDAR and a stepper motor. The research covers the design and implementation of a circuit board to connect and control all components, including the LiDAR and motor. In addition, a 3D printed support structure was developed to connect the LiDAR to the motor shaft. System data acquisition and processing are addressed, as well as the generation of the point cloud and the application of the A* algorithm for trajectory planning. Experimental results demonstrate the effectiveness and feasibility of the proposed technology for low-cost 3D mapping and trajectory planning applications.
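The planning step can be sketched generically (this is a standard A* on an occupancy grid, not the authors' code); the grid, start, and goal below are made up.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied),
    4-connected, with a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc),
                                path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(len(path) - 1)  # 6 moves around the obstacle row
```

In the system described above, the grid would be obtained by projecting the LiDAR point cloud onto an occupancy map before planning.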

10:40-12:20 Session 1C: Marine Robotics
Location: Student Hub 2
10:40
Ocean Relief-Based Heuristic for Robotic Mapping

ABSTRACT. Order picking has driven an increase in the number of logistics researchers. Robotics can help reduce the operational cost of such a process, eliminating the need for a human operator to perform trivial and dangerous tasks such as moving around the warehouse. However, for a mobile robot to perform such tasks, certain problems, such as defining the best path, must be solved. Among the most prominent techniques applied in the calculation of the trajectories of these robotic agents are potential fields and the A* algorithm. However, these techniques have limitations. This study aims to demonstrate a new approach based on the behavior of oceanic relief to map an environment that simulates a logistics warehouse, considering distance, safety, and efficiency in trajectory planning. In this manner, we seek to solve some of the limitations of traditional algorithms. We propose a new mapping technique for mobile robots, followed by a new trajectory planning approach.

11:00
Artificial Intelligence for Automated Marine Growth Segmentation

ABSTRACT. Marine growth impacts the stability and integrity of offshore structures, while simultaneously preventing inspection procedures. In consequence, companies need to employ specialists who manually assess each impacted part of the structure. Due to harsh subsea environments, acquiring large quantities of quality underwater data is difficult. To mitigate these challenges, a new data augmentation algorithm is proposed that generates new images by performing localized crops on regions of interest in the original data, expanding the total size of the dataset approximately sixfold. This research also proposes a learning-based algorithm capable of automatically delineating marine growth in underwater images, achieving up to 0.389 IoU and 0.508 Dice Loss. Advances in this area contribute to reducing the manual labour necessary to schedule maintenance operations on man-made submerged structures, while increasing the reliability and automation of the process.

11:20
Enhancing Underwater Inspection Capabilities: a Learning-based Approach for Automated Pipeline Visibility Assessment

ABSTRACT. Underwater scenarios pose additional challenges to perception systems, as the collected imagery from sensors often suffers from limitations that hinder its practical usability. One crucial domain that relies on accurate underwater visibility assessment is underwater pipeline inspection. Manual assessment is impractical and time-consuming, emphasizing the need for automated algorithms. In this study, we focus on developing learning-based approaches to evaluate visibility and identify issues in underwater environments. We explore various neural network architectures and evaluate them on datasets. Notably, the "ResNet18" model outperforms others, achieving a testing accuracy of 93.5% in visibility evaluation. In terms of inference time, the fastest model is "MobileNetV3 Small", with an inference time of 42.45 ms. These findings represent significant progress in enabling unmanned marine operations and contribute to the advancement of autonomous underwater surveillance systems.

11:40
Robust Adaptive Finite-time Motion Control of Underactuated Marine Vehicles

ABSTRACT. This paper focuses on the development of an adaptive backstepping control for underactuated marine vehicles. In order to estimate system uncertainty, a fuzzy system with a simple structure is utilized. A novel dynamical formulation has been developed for deriving the control signal. The Lyapunov function has been employed to formally prove the Semi-globally Practically Finite-time Stability of the overall closed-loop system. Simulations are conducted on an underactuated marine vehicle to assess the effectiveness of the proposed control approach. These simulations consider various challenging scenarios, including external disturbances, unmodeled dynamics, time-varying water currents, and lateral velocities. Furthermore, a comparison analysis is performed.

12:00
Development of an autonomous surface system for permanent and persistent monitoring of vehicles in the water column

ABSTRACT. The developments associated with Industry 4.0 [1] have reduced costs and increased technological capacity even in small work teams. The revolution rests on nine pillars [2]: Big Data & Advanced Analytics, Autonomous Robots, Simulation, Horizontal and Vertical System Integration, the Industrial Internet of Things (IoT), the Cloud, Additive Manufacturing, Augmented Reality (AR), and Cyber Security. This definition is not undisputed, and some authors simplify it to four main areas: Big Data & Advanced Analytics, Cyber Security, the Internet of Things (IoT) and Cloud Computing [3]. With the emergence of Industry 4.0, organizations must adapt and pursue the inclusion of these new developments using systems that are more efficient, customized, and autonomous. The Portuguese Navy, as a branch of the Armed Forces (FFAA), is following this progress, continuing to make resources available for research and development of technology that makes it possible to carry out its missions quickly, effectively, and safely. This work describes the process of designing an autonomous surface platform for monitoring the water column. First, a literature review was carried out on existing products in the market and on examples of the new autonomous vessels. Requirements were mapped to create a product strategy and to identify the operational employments and project limitations. A new vessel was designed, incorporating lessons learned and recent developments. This design was modelled in software that performed static analysis of the full structure, and hydrodynamic analysis was performed using various tools. The command and control (C2) system, as well as power generation and the distribution of energy on board the platform, is also addressed. The system is described in further detail in [4].

12:20-14:00 Lunch Break
14:00-15:00 Session K1: Keynote
Location: Auditorium
14:00
Understanding the environment and the users: towards mobile robot navigation and interaction in the real world

ABSTRACT. Despite the impressive advances in many areas of mobile robot navigation, robust autonomous operation still faces significant challenges for many real applications. Widely recognized aspects to be improved are related to robustness and task oriented perception. These research problems require complex theoretical work as well as practical testing and discovery. Understanding the environment, the user and the relations between them is not easy, and there are many issues that experiments in controlled settings and labs cannot reveal. What if the robot should be precisely positioned with respect to an object? If there is a user involved, a flexible approach may be preferred. What if the environment changes? What if the user believes that the system is working properly when it is not? And the other way around?

This invited talk will provide an overview of the limitations in the current State of the Art and promising works addressing them. I will present my experiences in several projects, focusing on practical lessons learned in a) drone inspection for airplane maintenance and b) assistive robots for elderly users in their own homes. While each application presents its own particularities, most of the presented ideas could be easily adapted to other scenarios and configurations.

15:00-16:00 Session 2A: Manufacturing - 1
Location: Auditorium
15:00
Hybrid Localization Solution for Autonomous Mobile Robots in Complex Environments

ABSTRACT. Mobile robot platforms capable of operating safely and accurately in dynamic environments can have a multitude of applications, ranging from simple delivery tasks to advanced assembly operations. These abilities rely heavily on a robust navigation stack, which requires stable and accurate pose estimations within the environment. The wide range of AMR applications and the characteristics of multiple industrial environments (indoor and outdoor) led to the development of a flexible and robust robot software architecture providing the fusion of different sensor data in real time. This paper presents a multi-localization system for industrial mobile robots in complex and dynamic industrial scenarios, based on different localization technologies and methods that can interact together and simultaneously.

15:20
Optimization of the Energy Consumption for Robotic Kitting in the Automotive Industry

ABSTRACT. This paper presents a comprehensive Integer Programming (IP) model designed to optimize the robotic kitting process in industrial automotive settings. Robotic kitting, involving the efficient assembly and preparation of kits using automated systems, plays a crucial role in modern manufacturing facilities. The proposed IP model considers various key aspects related to the cycle time, including preparation time for kit boxes on Automated Guided Vehicles (AGVs), picking time with Autonomous Mobile Robots (AMRs), image acquisition and processing time, AMR and AGV travel times, and removal time of empty component bins by AMRs. The objective is to minimize the energy consumption of AGVs in the kitting process, enhancing operational efficiency while ensuring accurate kit assembly. The formulation of the mathematical programming model allows for the consideration of flow-related activities, improving the adaptability and flexibility of the kitting process to varying order patterns. Numerical experiments demonstrate the effectiveness of the model in achieving key insights into AGVs' energy demand, contributing to advancements in mapping this process in industrial automation and logistics.
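The shape of the objective can be conveyed with a deliberately tiny stand-in (not the paper's IP model): choose a one-kit-per-AGV assignment that minimizes total energy. The energy matrix below is hypothetical, and a real instance would be handed to an IP solver rather than brute-forced.

```python
from itertools import permutations

# Hypothetical energy[i][j]: energy AGV i spends preparing and
# delivering kit j (travel, picking, bin-removal lumped together).
energy = [[4, 2, 7],
          [3, 6, 5],
          [8, 1, 9]]

def best_assignment(energy):
    """Brute-force the one-kit-per-AGV assignment with minimum total
    energy; p[i] is the kit assigned to AGV i."""
    n = len(energy)
    return min(permutations(range(n)),
               key=lambda p: sum(energy[i][p[i]] for i in range(n)))

p = best_assignment(energy)
print(p, sum(energy[i][p[i]] for i in range(3)))  # (0, 2, 1) 10
```

Brute force is exponential in the number of AGVs, which is precisely why the paper formulates the problem as an Integer Program.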

15:40
Comparison of pallet detection and location using COTS sensors and AI based applications

ABSTRACT. Autonomous Mobile Robots (AMRs) are seeing increased introduction in distinct areas of daily life. Recently, their use has expanded to intralogistics, where forklift-type AMRs are applied in many situations, handling pallets and loading/unloading them into trucks. One of these vehicles' requirements is that they be able to correctly identify the location and status of pallets, so that the forklift AMRs can insert their forks in the right place. Recently, some commercial sensors have appeared on the market for this purpose. Given these considerations, this paper presents a comparison of the performance of two different approaches for pallet detection: a commercial off-the-shelf (COTS) sensor from SICK and a custom-developed application based on Artificial Intelligence algorithms.

15:00-16:00 Session 2B: Perception & Manipulation - 1
Location: Student Hub 1
15:00
An Approach to Computer Vision Control of a Parallel Soft Gripper

ABSTRACT. Soft robotics has seen significant advancements in the past decade, offering compliant and adaptive solutions for various applications. The parallel soft gripper presented in this article, fabricated entirely using additive manufacturing techniques, incorporates flexible and compliant pneumatic actuators for safe interaction with delicate objects. Computer vision techniques are utilized, integrating a camera and image processing algorithms to extract object features and generate control signals. Experimental results demonstrate the gripper's improved adaptability, dexterity, and grasping performance compared to traditional rigid grippers.

15:20
Shape Control of Maneuvering Planar Formations Based on Distributed Deformation Minimization

ABSTRACT. This paper presents a novel scheme for controlling planar multirobot formations. We assume the multirobot team's overall motion is guided by a subset of independently moving leader robots. We propose a strategy to control the other robots, called followers, based on minimizing a distributed deformation cost. This cost is based on a team organization in triads, i.e., three-robot subsets. Our strategy allows the team to maintain a prescribed formation shape while maneuvering under the leaders' guidance during, e.g., collaborative object transport or navigation tasks. We also study how to restrict the leaders' dynamics to facilitate formation tracking by the followers under motion constraints. The control laws we propose are distributed, can be designed locally, and rely on relative position measurements only. We illustrate our scheme with simulations considering single-integrator and unicycle robot dynamics.

15:40
Adaptive Bayesian optimization for robotic pushing of thin fragile deformable objects

ABSTRACT. Robotic manipulation of deformable objects is challenging due to the great variety of materials and shapes. This task is even more complex when the object is also fragile, and the allowed amount of deformation needs to be constrained. For the goal of driving a thin fragile deformable object to a target 2D position and orientation, we propose a manipulation method based on executing planar pushing actions on the object edges with a robotic arm. Firstly, we obtain a probabilistic model through Gaussian process regression, which represents the time-varying deformation properties of the system. Then, we exploit the model in the framework of an Adaptive Bayesian Optimization (ABO) algorithm to compute the pushing action at each instant. We evaluate our proposal in simulation.

15:00-16:00 Session 2C: Robotics in Defense - 1
Location: Student Hub 2
15:00
Automatic people detection based on RGB and thermal imagery for military applications

ABSTRACT. Automatic detection of people in military applications offers numerous benefits that enhance situational awareness, operational efficiency, and overall security.

RGB images offer high resolution and color information, making them suitable for detailed visualization, but they are limited by lighting conditions and can struggle to detect people in low-light or nighttime scenarios. Additionally, camouflage and adverse weather can hinder their effectiveness. On the other hand, thermal images detect heat signatures, enabling people detection in darkness and adverse conditions, but they may lack detail, and their cost is higher. Both technologies have their merits, and a combined approach can provide a more comprehensive solution for people detection in various military and security applications.

This research contributes to the field of perception systems for military applications by harnessing the potential of AI and deep learning technologies. This paper presents a comparison of the performance of three people detectors based on the YOLOv8 architecture, using 1) RGB images, 2) thermal images, and 3) RGB and thermal images combined. A quantitative analysis allows the three models' performance to be compared in realistic and challenging scenarios. Additionally, a qualitative assessment is conducted to identify specific limitations and advantages associated with each approach, providing valuable insights for further improvement and optimization of these detection systems.
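One simple way to combine the two modalities is late fusion of the detector outputs: keep every RGB detection and add thermal detections that do not overlap any of them. This is an illustrative sketch under assumed (x1, y1, x2, y2) box formats, not necessarily the fusion strategy evaluated in the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse(rgb_dets, thermal_dets, thr=0.5):
    """Keep all RGB detections; add thermal detections that do not
    overlap any RGB box by more than `thr` (simple late fusion)."""
    fused = list(rgb_dets)
    for t in thermal_dets:
        if all(iou(t, r) < thr for r in rgb_dets):
            fused.append(t)
    return fused

rgb = [(10, 10, 50, 90)]
thermal = [(12, 12, 52, 92),    # same person, seen in both modalities
           (100, 20, 140, 95)]  # visible only in the thermal image
print(len(fuse(rgb, thermal)))  # 2
```

Training a single network on registered RGB-thermal pairs, as in the third configuration above, moves this fusion inside the model instead.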

15:20
The Automatic Seaman: From Speech2text To Text2Task

ABSTRACT. This paper introduces the development of an autonomous seaman system, leveraging sound-to-text processing, intent detection, slot filling and action control. The Whisper model was employed to process sound commands and transcribe them, while JointBert was used to extract intentions and fill relevant slots. For enhanced model performance, the Whisper model was fine-tuned using real voice data in the Portuguese language, while JointBert benefited from data generated by Chat GPT-3. To ensure effective interaction management and action execution, a directed graph structure was used as abstraction. The system’s performance was evaluated based on word error rate, intent accuracy, F1 score for slot filling and tasks accomplished. Experimental results showcased the effectiveness of our proposed approach, demonstrating accurate comprehension of sound commands and efficient action control. As a result, the autonomous seaman robot holds great promise for practical applications in automating diverse seafaring tasks. The improvement in the man-machine interface is very relevant for manned systems, but even more so for unmanned robotic systems.

15:40
Man-Machine Symbiosis UAV Integration for Military Search and Rescue Operations

ABSTRACT. Over the last few years, Man-Machine collaborative systems have been increasingly present in daily routines. In these systems, one operator usually controls the machine through explicit commands and assesses the information through a graphical user interface. Direct & implicit interaction between the machine and the user does not exist.

This work presents a man-machine symbiotic concept & system where such implicit interaction is possible, targeting search and rescue scenarios. Based on measuring physiological variables (e.g. body movement or electrocardiogram) through wearable devices, this system is capable of computing the psycho-physiological state of the human and of autonomously identifying abnormal situations (e.g. a fall or stress). This information is injected into the control loop of the machine, which can alter its behavior accordingly, enabling an implicit man-machine communication mechanism.

A proof of concept of this system was tested at the ARTEX (ARmy Technological EXperimentation) exercise organized by the Portuguese Army involving a military agent and drone. During this event the soldier was equipped with a kit of wearables that could automatically monitor several physiological variables and detect a fall during a mission. This information was continuously sent to the drone that successfully identified this abnormal situation triggering the take-off and a situation awareness fly-by flight pattern, delivering a first-aid kit to the soldier in case he did not recover after a pre-determined time period.

The results were very positive, proving the possibility and feasibility of a symbiotic system between humans and machines.

16:00-16:20 Coffee Break
16:20-18:00 Session 3A: Manufacturing - 2
Location: Auditorium
16:20
Experimental analysis of robot base frame identification methods

ABSTRACT. For many industrial applications (e.g. machining, drilling, pick-and-place), robots' poor absolute accuracy has to be enhanced through calibration processes. These processes involve measurement devices, mostly Laser Trackers, which measure the position of a Spherically Mounted Reflector, usually attached to the flange of the robot, in a virtual measurement frame associated with the Laser Tracker. However, calibration processes require knowledge of the position of the robot's flange in the robot's base frame. Thus, there is a need for robot base frame identification methods. The objective of this paper is to provide both qualitative and quantitative elements to determine which method is the most suitable for robot calibration. Five different methods are discussed and, based on a qualitative analysis, three of them are experimentally compared in terms of both repeatability and accuracy.

16:40
A Robotic Cable-Gripper for reliable inspection of transmission lines

ABSTRACT. Inspection of power transmission lines is often a highly hazardous activity, subject to uncertainties due to the system and environmental characteristics. The present study aims to develop a mobile robot to inspect transmission lines. The development process encompasses a series of fundamental steps. Initially, a highly realistic simulation environment is created, containing an authentic section of a transmission line. Subsequently, the simulated robot is designed and equipped with a camera and LIDAR sensor to inspect the simulated transmission line. Once the simulation results are validated, the real prototype of the robot is materialized. This approach allows for a precise evaluation of the robot’s performance, enabling necessary adjustments and enhancements to ensure the effectiveness of transmission line inspection. The prototype consists of a robotic gripper capable of conducting preventive and predictive inspections in transmission lines.

17:00
Development of a Controller for the FANUC S-420FD Industrial Robot: a description of the graphical user interface

ABSTRACT. This paper describes the development of a complete controller for the FANUC S-420FD 6-axis industrial robot. The robot's original controller presented failures that made it impossible to operate and negatively impacted academic and research activities. To solve this problem, the development of a new open-technology controller was proposed, together with the design of an intuitive and functional graphical interface allowing the programming, control and monitoring of the robot's parameters. The developed interface offers advanced features such as trajectory programming, custom parameter configuration, and real-time visualization of the robot's state. This work highlights the importance of efficient and affordable solutions for the maintenance of industrial robots in university environments, encouraging scientific and technological advancement in these areas of study.

17:20
Control of a Mobile Robot through VDA5050 Standard

ABSTRACT. This paper presents a software module capable of controlling an autonomous mobile robot and communicating with a ROS-based robot fleet manager using the VDA5050 standard, exchanging information via the MQTT communication protocol. The module aims at flexibility and control across different robot brands.
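As an illustration of the kind of exchange the VDA5050 standard defines, the sketch below builds a minimal VDA5050-style state message and its MQTT topic. The manufacturer name, serial number, and reduced set of state fields are assumptions for illustration only, not details from the paper; a real module would publish the payload with an MQTT client such as paho-mqtt.

```python
import json
import time

# Hypothetical identifiers; VDA5050 topics follow the pattern
# interfaceName/majorVersion/manufacturer/serialNumber/topic.
INTERFACE = "uagv"
VERSION = "v2"
MANUFACTURER = "AcmeRobots"   # assumed manufacturer name
SERIAL = "amr-001"            # assumed robot serial number

def state_topic() -> str:
    """MQTT topic on which the robot publishes its state."""
    return f"{INTERFACE}/{VERSION}/{MANUFACTURER}/{SERIAL}/state"

def build_state(header_id: int, x: float, y: float, theta: float) -> str:
    """Minimal VDA5050-style state message (a small subset of the schema)."""
    msg = {
        "headerId": header_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime()),
        "version": "2.0.0",
        "manufacturer": MANUFACTURER,
        "serialNumber": SERIAL,
        "agvPosition": {"x": x, "y": y, "theta": theta,
                        "positionInitialized": True},
        "operatingMode": "AUTOMATIC",
        "errors": [],
    }
    return json.dumps(msg)

# A real module would publish with an MQTT client, e.g. paho-mqtt:
#   client.publish(state_topic(), build_state(1, 2.5, 0.8, 1.57), qos=1)
```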

17:40
Force analysis in robotic screwing tasks

ABSTRACT. This paper describes the experiments carried out to understand the use of robotic tools for screw-driving tasks. First, the bolting tasks are described; next, several tests using a tool with external fingers to mitigate the torque in the robot's joints are analyzed to obtain the forces and torques involved; finally, experiments show the results obtained by combining robot control strategies with the design of task-specific tools using the external tool. The main conclusion of this work is that high screwing torques can be achieved, with low torques in the robot's joints, when the tool is designed appropriately for the bolting components and an efficient combination of position and force control modes is implemented on the robot. The origin of this study is related to the maintenance tasks to be performed in the future International Fusion Materials Irradiation Facility – Demo Oriented Neutron Source (IFMIF-DONES) facility, where operating bolts using robots is a key aspect.

16:20-18:00 Session 3B: Perception & Manipulation - 2
Location: Student Hub 1
16:20
Design of a modular soft tool for automatic seed sowing

ABSTRACT. Agriculture 4.0, a concept introduced in the last decade, aims to apply the knowledge gained in industry to various agricultural tasks, such as weeding and harvesting. Although several tasks remain challenging because they take place in unstructured environments, they have led to the implementation of various automation technologies, including classical robotics. However, this has not been the case for tasks like sowing, where the main problem lies in handling small and delicate objects such as seeds, making precise and damage-free manipulation difficult. Soft robotics is emerging as the next evolution of these traditional robotic systems, enabling delicate objects to be manipulated precisely in unstructured environments without bruising them. This article proposes a modular soft tool for the gentle and agile manipulation of seeds in the automation of sowing tasks.

16:40
Manipulation of deformable objects with a multi-robot system

ABSTRACT. When dealing with object manipulation, objects can be too large, heavy or difficult for a single robot to grasp. To improve the performance of such tasks, multiple robotic manipulators can be considered. This paper presents a cooperative multi-robot system capable of manipulating deformable objects in a coordinated and controlled manner, while also driving the object to the desired shape and configuration in space. Additionally, strategies that avoid collisions between obstacles and both the transported object and the mobile robots are implemented, extending its application to dynamic and unstructured environments. The algorithm is flexible with respect to the shape of the transported object and the number of manipulator robots, making it a versatile solution applicable in many different contexts depending on the needs. The PyBullet simulator visualises in real time the movement of the robots and obstacles, as well as the behaviour of the deformable object being transported.

17:00
3D Motion Estimation of Volumetric Deformable Objects from RGB-D Images Synthetically Generated by a Multi-Camera System

ABSTRACT. Estimating deformation in volumetric objects, particularly when occluded, is a pressing challenge in computer vision. We present McDeforms, a novel dataset synthesized from a multi-camera system in PyBullet, simulating three scenarios of volumetric object deformations. Alongside RGB-D images, our dataset provides ground-truth 3D coordinates of the object and camera specifications. We explore McDeforms' potential by evaluating two scene flow methods, Coherent Point Drift (CPD) and RAFT-3D, both of which competently estimate 3D flow across our simulations.
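Scene flow methods such as CPD and RAFT-3D are conventionally scored against ground-truth flow with the mean end-point error. The sketch below shows this standard metric; the accuracy threshold is a common convention, not a value taken from the abstract.

```python
import numpy as np

def mean_epe(pred_flow: np.ndarray, gt_flow: np.ndarray) -> float:
    """Mean 3D end-point error: average Euclidean distance between
    predicted and ground-truth flow vectors, each of shape (N, 3)."""
    return float(np.linalg.norm(pred_flow - gt_flow, axis=-1).mean())

def accuracy_strict(pred_flow: np.ndarray, gt_flow: np.ndarray,
                    thresh: float = 0.05) -> float:
    """Fraction of points whose end-point error falls below `thresh`
    (threshold chosen here for illustration)."""
    err = np.linalg.norm(pred_flow - gt_flow, axis=-1)
    return float((err < thresh).mean())
```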

17:20
Elastic contour mapping for the estimation of abrupt shape deformations

ABSTRACT. Estimating object deformations is important in various industrial applications, such as material testing or object shape control. However, existing methods assume smooth or gradual deformations and thus face a significant challenge when it comes to sudden changes in an object's shape. In this paper, we propose a novel approach using our FMM-based contour mapping framework to estimate the deformation mesh of texture-less objects that experience abrupt shape changes. By analysing the contour's geometry, we compute a map between the contours of consecutive deformation states, which we use as input to update the deformation mesh. Our experiments and comparisons demonstrate the effectiveness of our method in handling abrupt deformations and accurately estimating the object's deformation mesh.

16:20-18:00 Session 3C: Robotics in Defense - 2
Location: Student Hub 2
16:20
Cybersecurity Threats in Military Robotic and Autonomous Systems

ABSTRACT. The Russia-Ukraine war emphasised how digital technology and the information domain are increasingly integrated into modern battlefields. Military strategy began using Robotic and Autonomous Systems (RASs) to gain tactical and strategic advantages over conventional weaponry and kinetic military action. Evolution in these circumstances increased dependence on cyberspace, algorithms, automation, and robots, creating new opportunities and challenges. Using military RASs in physical warfare can provide relevant operational advantages but also open new vulnerabilities, increasing a system's attack surface. This research examines military RAS cybersecurity threats, potential vulnerabilities, how they affect the attack surface, and the importance of responsible and precise technical innovation. We conclude that engineers and developers must guarantee that technological innovations meet cybersecurity regulations, military needs, and wartime rules. Military and political leaders must make educated decisions to maximise gains and minimise risks from complex combat technologies. Additionally, military leadership should promote the revision of doctrine to accommodate the employment of new technologies on the battlefield. All stakeholders must ensure a coherent reflection and open discussion on the merits and perils of RASs.

16:40
Semantic Segmentation Network for Search and Rescue Scenes

ABSTRACT. Semantic segmentation is a common task in deep learning. Intensive research has produced a series of datasets dedicated to different settings such as indoor, outdoor, urban, and synthetic scenes, but none of them are related to search-and-rescue (SAR) scenes. In this work, we evaluate the U-Net convolutional neural network with ResNet-50 as an encoder for the segmentation of objects in SAR situations. For this purpose, the model is trained in three stages using transfer learning techniques: i) using the Cityscapes dataset with its 19 classes to evaluate the performance of the model, ii) using the subset of Cityscapes labels that correspond to SAR-related classes, and iii) using a self-developed dataset focused on SAR scenarios. The results obtained indicate good recognition of classes with a significant presence in the training images.
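Stage ii) of the training scheme amounts to remapping Cityscapes labels onto a reduced SAR-related class set. The sketch below shows one way such a remapping could look; the particular class subset, target IDs, and ignore index are assumptions for illustration, as the abstract does not list them.

```python
import numpy as np

# Hypothetical mapping from Cityscapes trainIds to a reduced SAR label set.
# The actual subset used by the authors is not specified in the abstract.
CITYSCAPES_TO_SAR = {
    0: 0,    # road       -> traversable ground
    8: 1,    # vegetation -> vegetation
    11: 2,   # person     -> person
    13: 3,   # car        -> vehicle
}
IGNORE_INDEX = 255  # common convention for pixels excluded from the loss

def remap_labels(mask: np.ndarray) -> np.ndarray:
    """Remap a Cityscapes trainId mask to the reduced SAR label set;
    all unmapped classes become IGNORE_INDEX."""
    out = np.full_like(mask, IGNORE_INDEX)
    for src, dst in CITYSCAPES_TO_SAR.items():
        out[mask == src] = dst
    return out
```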

17:00
APH-Yolov7t: Yolov7-tiny with Attention Prediction Head for Person Detection on Drone-Captured Search and Rescue Scenarios

ABSTRACT. Inspection and intervention by drones in rescue operations have received growing attention due to multiple causes, including natural and man-made events. Additionally, rapid advancements in vision sensors, object detection models, and AI-based methods can boost the success of rescue missions. Drone navigation involves object scale variations that create a computational load, and densely packed objects in the scene demand high-speed processing. To address these two issues, we propose the APH-Yolov7t method. In this paper, we introduce a new Attention-based Prediction Head for Yolov7-tiny. We also present evaluation results for Yolov7, a recent state-of-the-art convolutional neural network, used here for robust object detection in the context of drone navigation to detect persons on land and sea surfaces, helping to identify and rescue people in distress. Despite the high success rates of object detection models, visual complexities make detection on drone-captured images challenging, and this area remains under-explored. We used three existing search-and-rescue datasets of drone-acquired images specific to our objective. Results show that our APH-Yolov7t method was the most robust attention-based Yolo variant for our application, demonstrating consistently high performance in comparison to Yolov7-tiny. Evaluation results on all three datasets are reported. With this solution, we meet our requirements of a mean average precision (mAP50) of over 0.80 for the person class and operational performance of over 125 fps on a single Nvidia RTX 2080 Ti GPU.

17:20
ISR missions in maritime environment using UAS - Contributions of the Portuguese Air Force Academy Research Centre

ABSTRACT. This paper outlines the main research activities carried out by the Portuguese Air Force Academy Research Centre (CIAFA) in the domain of Unmanned Aircraft Systems (UAS) for Intelligence, Surveillance and Reconnaissance (ISR) missions in the maritime environment. Firstly, a general description of CIAFA is presented. Then, an end-to-end overview of CIAFA's contributions regarding UAS airframes, hardware, software and control system architectures for ISR maritime missions is presented. The wide range of contributions in the field of UAS demonstrates how CIAFA plays an important role in the context of military robotics.

17:40
An integrated method for landing site selection and autonomous reactive landing for multirotors

ABSTRACT. The use of Unmanned Aerial Vehicles (UAVs) in autonomous operations is an emerging technology with growing applications in several areas, such as agriculture, search and rescue (SaR), and even space exploration. Take-off and, in particular, landing are among the critical phases of operation. This paper proposes a landing site selection and control algorithm for an autonomous multirotor UAV. The goal is to land the UAV in safe locations as close as possible to a Point of Interest (PoI), mainly in unknown and unsafe terrains. The Landing Site Selection (LSS) algorithm uses terrain features from a 3D point cloud, Support Vector Machines (SVMs) to classify landing safety, and a cost function to compute the best landing site. The algorithm can be used both offline with a 3D map and online with data from a depth sensor. The states of the landing procedure are handled by a high-level state machine, and velocity controllers control the UAV. LSS was tested using 3D maps of real scenarios and data from a depth camera mounted on a real UAV, and the full autonomous landing system was tested in a simulated environment.
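The LSS trade-off between SVM safety classification and proximity to the PoI can be captured in a cost function over candidate sites. The sketch below is a hypothetical minimal version; the weights, function names, and the form of the cost are illustrative assumptions, not the authors' actual formulation.

```python
import math

# Illustrative weights: how strongly distance and estimated landing risk
# each contribute to the cost of a candidate site.
W_DIST = 1.0
W_RISK = 5.0

def landing_cost(site_xy, poi_xy, safety_prob):
    """Lower is better: a site near the PoI that the SVM classifies as
    safe (safety_prob close to 1.0) gets a small cost."""
    dist = math.hypot(site_xy[0] - poi_xy[0], site_xy[1] - poi_xy[1])
    return W_DIST * dist + W_RISK * (1.0 - safety_prob)

def best_site(candidates, poi_xy):
    """candidates: list of ((x, y), safety_prob) tuples; returns the
    candidate with the lowest cost."""
    return min(candidates, key=lambda c: landing_cost(c[0], poi_xy, c[1]))
```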