ROBOT2023: 6TH IBERIAN ROBOTICS CONFERENCE
PROGRAM FOR FRIDAY, NOVEMBER 24TH

09:20-10:20 Session K4: Keynote
Location: Auditorium
09:20
From Intuitive Immersive Telepresence Systems to Conscious Service Robots

ABSTRACT. Intuitive immersive telepresence systems enable transporting human presence to remote locations in real time. The participants of the recent ANA Avatar XPRIZE competition developed robotic systems that allow operators to see, hear, and interact with a remote environment in a way that feels as if they are truly there. In the plenary, I will present the competition tasks and results. My team NimbRo won the $5M Grand Prize. I will detail our approaches to the design of the operator station, the avatar robot, and the software. While telepresence enables a multitude of applications such as telemedicine and remote assistance, other scenarios require autonomy. In the second part of my presentation, I argue that consciousness is needed to adapt quickly to novel tasks in open-ended domains and to be aware of one's own limitations. I will present a research concept for developing conscious service robots that systematically generalize their knowledge to cope with novelty and that monitor themselves to obtain more information when needed, to avoid risks, and to detect and mitigate errors. This new generation of robots has much potential for numerous open-ended application domains, including assistance in everyday environments.

10:20-10:40 Coffee Break
10:40-12:20 Session 7A: Robotics in Education
Location: Auditorium
10:40
Vulcano: Using the Vulcano metaphor as a challenge to educational robotics

ABSTRACT. In this paper, we present an evolution of the Vulcan Robotics Competition, designed to promote interest in legged robots, where robots have to achieve goals on a structure that simulates the slope of a volcano. A rough, undulating surface creates an additional obstacle to their movement. This new challenge is more complex, and some hardware adjustments have been made to meet the demands of the competition. The challenge was tested in the context of a robotics festival, assessing the programming and mechanical skills of the participants. Participants received a prototype robot from the organisation to assemble and program. A final survey presents some data on how participants enjoyed the challenge while assessing its degree of complexity.

11:00
An Educational Kit for Simulated Robot Learning in ROS 2

ABSTRACT. Robot Learning is one of the most important areas in Robotics, and its relevance has only been increasing. The Robot Operating System (ROS) has been one of the most widely used architectures in Robotics, but learning it is not a simple task. Additionally, ROS 1 is reaching its end-of-life, and many users have yet to make the transition to ROS 2. Reinforcement Learning (RL) and Robotics are rarely taught together, creating greater demand for tools that teach all these components. This paper presents a learning kit that can be used to teach Robot Learning to students with different levels of expertise in Robotics. The kit works with the Flatland simulator using free, open-source software, namely the OpenAI Gym and Stable-Baselines3 packages, and contains tutorials that introduce the user to the simulation environment as well as how to use RL to train the robot to perform different tasks. User tests were conducted to better understand how the kit performs, showing very positive feedback, with most subjects agreeing that the kit provided a productive learning experience.

11:20
Teaching Reinforcement Learning Fundamentals in Vocational Education and Training with RoboboSim

ABSTRACT. The current paper presents an educational resource to introduce Vocational Education and Training (VET) students to the topic of Reinforcement Learning (RL) through a practical activity. Specifically, they have to program a Q-learning algorithm in Python to obtain a policy that allows a mobile robot to solve a task autonomously in an industrial-like setup. To this end, a 3D simulation platform called RoboboSim and the Python libraries of the Robobo educational robot are used. The resource has been developed within the scope of the Erasmus+ project AIM@VET, and it has been tested with a group of 12 VET students from Spain, Portugal, and Slovenia. The results have been successful: students with no previous background in RL learned its fundamentals through a purely practical methodology. In addition, some drawbacks were reported by teachers and students, and the resource was improved accordingly before being made available through the project website.

11:40
An open-source, low-cost UAV testbench for educational purposes

ABSTRACT. This paper presents a low-cost, open-source UAV testbench, developed with affordable materials and based on Arduino and Matlab. The project aims to develop a testbench that simulates the roll-angle behavior of a UAV, using only standard components and straightforward fabrication, with educational use in mind. The testbench is composed of a structure with 1 DoF of rotation, which represents the UAV, and two brushless motors installed on the structure, which control the roll angle. The platform controller is implemented on Arduino, which communicates in real time with a Graphical User Interface (GUI) developed in Matlab to change angle references, control modes, or controller gains. The project also includes a Matlab-Simulink model of the testbench, allowing a preliminary analysis of the different controllers through simulation. The proposed testbench can be helpful for introducing multirotor control to students. This paper presents the development of the testbench, as well as its validation with different controllers.

12:00
Design and Development of a Differential Drive Platform for Dragster Competition

ABSTRACT. Robotics competitions have been growing in recent years, since they have several positive impacts on students' education, such as the development of technical skills, teamwork, resilience, and decision making within the STEM skill set. The article highlights the significance of robotics competitions as platforms for fostering innovation and driving advancements in the field of robotics. It primarily focuses on the development of a robot in the Dragster category for the 2023 Portuguese Robotics Open. It outlines the strategies devised to tackle the competition's challenges and discusses the obstacles encountered along with the corresponding solutions. The article delves into the specific details of the challenges faced and the iterative processes undertaken to enhance the robot's performance and functionality. Building on the insights gained from the project, proposals for future iterations of the robot are presented, aiming to further augment its features and overall performance while sharing knowledge with other teams and the community.

10:40-12:20 Session 7B: Robotic Navigation - 1
Location: Student Hub 1
10:40
Performance Analysis of ORB-SLAM in Foggy Environments

ABSTRACT. Vision-based localization approaches, be it Simultaneous Localization and Mapping (SLAM) or Visual Odometry (VO), rely heavily on distinct features detectable and trackable across different frames. Therefore, state-of-the-art approaches utilize features that are scale-invariant, visible from different points of view, and tolerant to changes in light. However, the visibility of feature points is also affected by haze, mist, and fog, atmospheric phenomena that vary throughout the day, effectively hindering the performance of vision-based SLAM/VO approaches. In this work, we study the effect of fog on SLAM, particularly ORB-SLAM. We analyze the changes in the quality and quantity of the features under varying fog levels, as well as the quality of the eventual path generated by SLAM. We also show that the performance of SLAM in foggy conditions can be improved by defogging the images, though only to a limited extent depending on the amount of fog in the environment.

11:00
Socially reactive navigation models for mobile robots in dynamic environments

ABSTRACT. The objective of this work is to expand upon previous works, considering socially acceptable behaviours within robot navigation and interaction, and allow a robot to closely approach static and dynamic individuals or groups. The space models we developed are adaptive, that is, capable of changing over time to accommodate the changing circumstances often existent within a social environment. The space model's parameters' adaptation occurs with the end goal of enabling a close interaction between humans and robots. This work also further develops a preexisting approach pose estimation algorithm in order to better guarantee the safety and comfort of the humans involved in the interaction. The entire navigation system is then evaluated through both simulations and real-life situations. These experiments demonstrate that the developed space model and approach pose estimation algorithms are capable of enabling a robot to closely approach individual humans and groups, while maintaining considerations for their comfort and sensibilities.

11:20
Human motion trajectory prediction using the Social Force Model for real-time and low computational cost applications

ABSTRACT. Human motion trajectory prediction is a very important functionality for human-robot collaboration, specifically in accompanying, guiding, or approaching tasks, but also in social robotics, self-driving vehicles, and security systems. In this paper, a novel trajectory prediction model, Social Force Generative Adversarial Network (SoFGAN), is proposed. SoFGAN uses a Generative Adversarial Network (GAN) and the Social Force Model (SFM) to generate different plausible people trajectories, reducing collisions in a scene. Furthermore, a Conditional Variational Autoencoder (CVAE) module is added to emphasize destination learning. We show that our method makes more accurate predictions on the UCY and BIWI datasets than most current state-of-the-art models and also reduces collisions in comparison to other approaches. Through real-life experiments, we demonstrate that the model can be used in real time without GPUs to perform good-quality predictions at a low computational cost.

11:40
DeepRL-based Robot Local Motion Planning in Unknown Dynamic Indoor Environments

ABSTRACT. Robots are increasingly replacing humans in a variety of tasks, from industrial applications to space exploration. Difficulties in performing many tasks such as navigation, target recognition and obstacle avoidance must be overcome. This work proposes a novel Deep Reinforcement Learning approach to solve robot motion planning in environments populated by both static and dynamic obstacles. Dueling double deep Q-Network (D3QN) is exploited, utilizing a costmap representation. Prioritized experience replay, reward propagation and curriculum/transfer learning are evaluated. Evaluation was carried out in a Gazebo simulation environment. The presented results highlight the proposed framework's performance in both static and dynamic scenarios.

12:00
Parallel Curves Path Planning based on Tangent Segments to Concentric Circles

ABSTRACT. The navigation problem of autonomous vehicles in unstructured environments entails the ability of an autonomous vehicle to plan an obstacle-free path in real time between a sequence of waypoints. While there are several path planners in the literature based on optimization, sampling, and geometry, none of them are designed around the most commonly used sensors, such as LIDAR and RADAR, which report measurements in polar coordinates. Inspired by this, this work proposes an algorithm based on computational geometry for the path planning step of autonomous vehicle navigation in unstructured environments. The proposed algorithm was evaluated in simulation, and its results were compared with classical methods following the standardized BARN metrics. The proposed method achieves the lowest processing time while maintaining comparable performance on the spatial metrics. It also outperforms the others on the dispersion metric, suggesting it is a more robust method with more planning options for complex environments.

10:40-12:20 Session 7C: Robotics in Agriculture - 1
Location: Student Hub 2
10:40
Assessing Soil Ripping Depth for Precision Forestry with a Cost-Effective Contactless Sensing System

ABSTRACT. Forest soil ripping is a practice that involves disturbing the soil in a forest area to prepare it for planting or sowing operations. Advanced sensing systems may help in this kind of forestry operation to ensure ideal ripping depth and intensity, as these are important aspects that have the potential to minimise the environmental impact of forest soil ripping. In this work, a cost-effective contactless system, capable of detecting and mapping soil ripping depth in real time, was developed and tested in the laboratory and in a realistic forest scenario. The proposed system integrates two single-point LiDARs and a GNSS sensor. To evaluate the system, ground-truth data was collected manually in the field during the operation of a machine with a ripping implement. The proposed solution was tested in real conditions, and the results showed that the ripping depth was estimated with minimal error. The accuracy and ripping-depth mapping ability of the low-cost sensors justify their use to support improved soil preparation with machines or robots toward a sustainable forest industry.

11:00
Multispectral Image Segmentation in Agriculture: A Comprehensive Study on Fusion Approaches

ABSTRACT. Multispectral imagery is frequently incorporated into agricultural tasks, providing valuable support for applications such as image segmentation, crop monitoring, field robotics, and yield estimation. From an image segmentation perspective, multispectral cameras can provide rich spectral information, helping with noise reduction and feature extraction. As such, this paper concentrates on the use of fusion approaches to enhance the segmentation process in agricultural applications. More specifically, in this work, we compare different fusion approaches by combining RGB and NDVI as inputs for crop row detection, which can be useful in autonomous robots operating in the field. The inputs are used individually as well as combined at different times of the process (early and late fusion) to perform classical and DL-based semantic segmentation. In this study, two agriculture-related datasets are subjected to analysis using both deep learning (DL)-based and classical segmentation methodologies. The experiments reveal that classical segmentation methods, utilizing techniques such as edge detection and thresholding, can effectively compete with DL-based algorithms, particularly in tasks requiring precise foreground-background separation. This suggests that traditional methods retain their efficacy in certain specialized applications within the agricultural domain. Moreover, among the fusion strategies examined, late fusion emerges as the most robust approach, demonstrating superiority in adaptability and effectiveness across varying segmentation scenarios.

11:20
Vision-based Smart Sprayer for Precision Farming

ABSTRACT. Currently, sustainability stands out as one of the main topics of discussion. Climate change, limited resources, and population growth necessitate more effective management of agricultural methods to minimize food waste and soil degradation. Since the advent of agriculture, humans have resorted to chemicals to combat pests and reduce losses in farming. Over time, this practice has evolved into what we now refer to as pesticides, which can potentially harm the soil due to indirect application. Precision agriculture, on the other hand, leverages AI and image processing techniques to regulate the amount of chemicals used on specific types of plants. This approach minimizes losses and optimizes resource utilization. Previous studies reviewed in this article have demonstrated positive outcomes in this area. However, there remains room for improvement in the sprayer system's precision, accuracy, and mechanical aspects. This work aims to compile, study, execute, and test these enhancements in order to propose "GraDeS" (Grape Detection Sprayer), a solution that sprays pesticides exclusively on grape bunches within a crop detected by an AI-based algorithm, together with a sprayer design with two degrees of freedom. To this end, the following steps will be undertaken: (1) a systematic review of existing algorithms; (2) improvement of previous code and, consequently, of its results and accuracy; (3) study and design of a structure for the precision sprayer with two degrees of freedom; (4) testing and validation of the work developed.

11:40
Real-Time Drowsiness Detection and Health Status System in Agricultural Vehicles Using Machine Learning

ABSTRACT. This study presents a real-time monitoring system for preventing accidents in agricultural vehicles by monitoring the farmer and driving conditions. The system employs vital signals monitoring, using a bracelet to assess the farmer's health status, and drowsiness detection, utilizing a camera and Machine Learning to evaluate the level of drowsiness. In emergencies, the system communicates with a central station to identify the triggering factor. Driving conditions are monitored using inertial and GPS data. The focus of this paper is on the farmer's monitoring aspect. Health status is determined by analyzing Heart Rate and Oxygen Saturation values measured by the bracelet. While currently measuring two values, the system is designed to accommodate additional measurements. Multiple algorithms for driver drowsiness detection were tested, highlighting the need to consider different approaches in the final solution. This research proposes an integrated system to enhance safety and prevent accidents in agricultural vehicles, addressing the specific requirements of the farming industry.

12:00
A perception skill for herding with a 4-legged robot

ABSTRACT. While predators are important for the health of ecosystems, they can also pose challenges in certain situations, especially when they interact with human activities such as agriculture and livestock farming. This work proposes the design, development, and integration of a robotic skill to distinguish between different species, grouping them according to whether they are potential predators or harmless species for a flock of sheep. In this way, effective predator control in extensive livestock farming could be achieved. This skill is integrated into a cognitive architecture to help with scene understanding and is connected to the planning layer of a robotic sheepdog. Thus, if the perception system detects a potential predator, the action to be taken is to scare the predator and make the herd flee. Initial experiments on images taken by a 4-legged robot show promising results.

12:20-14:00 Lunch Break
14:00-16:00 Session 8A: Project Posters
Location: Auditorium
14:00
RECY-SMARTE: Sustainable approaches for recycling and re-use of discarded mobile phones

ABSTRACT. RECY-SMARTE: Sustainable approaches for recycling and re-use of discarded mobile phones

14:05
FREE4LIB: Feasible Recovery of critical raw materials through a new circular Ecosystem FOR a Li-Ion Battery cross-value chain in Europe

ABSTRACT. The aim of the Free4Lib project is to address the growing need to develop efficient and safe processes for the dismantling and sorting of End-of-Life (EOL) Lithium-Ion Batteries (LIBs). With the exponential increase in demand for LIBs in various sectors, there is concern about the proper handling of these batteries when they reach the end of their useful life. This stage of the project seeks to gain a detailed understanding of the dismantling and sorting processes of EOL LIBs to identify opportunities to improve their efficiency and safety through Human-Robot Collaboration (HRC).

14:10
NHoA: Never Home Alone

ABSTRACT. The Never Home Alone (NHoA) project aims to create an intelligent and social robotic system that assists elderly people in maintaining their independence in their homes and preventing loneliness and isolation by promoting contact within a connected network, simulating group therapy. The robot is an embodied social actor who actively intervenes to create an emotional connection with the user after sensing the social and emotional environment.

14:15
IntelliMan: AI-powered manipulation system for advanced robotic service, manufacturing and prosthetics

ABSTRACT. IntelliMan is a European-funded research project aiming to develop a robotic system for manipulating different elements with high performance and constant learning capabilities through a heterogeneous set of sensors, for advanced robotics, manufacturing, and prosthetic services. The system will be able to adapt to the characteristics of its environment and to decide how to execute a task autonomously, detecting flaws in its execution and requesting new knowledge through interaction. While ensuring safety and performance, IntelliMan's advances range from learning manipulation skills to abstracting descriptions of a handling task and discovering the functionality of an object.

14:20
Limb Climbing Robot Design for Diameter Adaptation

ABSTRACT. The rapid development of wearable technologies is increasing research interest in on-body robotics and wearable mechatronic systems, where relocatable robots can gather on-body sensor information or interact through motion. This work offers preliminary results towards developing a small robot that can move on human limbs. The proposed design consists of an open spring passive mechanism with a pivoting differential drive system, with two spherical wheels on one side and an actuated wheel on the other for grasping and stabilization. The combination of the open mechanism, the pivoting system, and the spherical wheels allows adaptability to size variations of the limb section. Furthermore, the mechanism can be easily put on or removed at any point on the limb without sliding the robot over the hand or foot. Simulated results indicate good steerability of the proposed design.

14:25
Fuzzy logic based decision-making for urban platooning on urban roundabout scenarios

ABSTRACT. This paper proposes a fuzzy-based decision-making framework for urban platooning in roundabout scenarios. By utilizing fuzzy logic to handle uncertainties and imprecise inputs, the framework adapts the behavior of platoon vehicles based on real-time traffic conditions, vehicle dynamics, and safety considerations. In addition, an MPC-based platoon-following controller is proposed to execute the actions defined by the decision-making approach. The approach is tested in the Carla simulator with successful results, proving the proposal is feasible for platoon handling in urban roundabouts.

14:30
NEXUS

ABSTRACT. A system based on autonomous drones, with persistent, long-term monitoring capabilities.

14:35
The GreenAuto 3D navigation system for mobile robots

ABSTRACT. The purpose of the GreenAuto Agenda is to transform the national automotive sector in the scope of the current transition to low-emission vehicles. The promoters of the Agenda are industrial and research & innovation (R&I) entities with extensive know-how related to the automotive industry, gained either by participating in the development, implementation, and industrialization of new vehicles and their components, or through experience in the development and implementation of production technologies employed by automotive original equipment manufacturers (OEMs).

The Agenda will create the technical and operational conditions for Peugeot Citroën Automóveis Portugal, S.A. (Stellantis Mangualde) to start the production of a new light commercial vehicle (LCV) in Portugal, namely a battery-electric light commercial vehicle (BE-LCV). To achieve this purpose while ensuring a high level of incorporation of national inputs, Stellantis Mangualde will be the end-user of the technologies and components to be developed by the industrial and R&I entities within the scope of the Agenda. With this in mind, the Agenda is composed of several work packages (WPs): WP 1 focuses on the industrialization of an innovative BE-LCV; WPs 2 to 7 focus on the development and industrialization of new vehicle systems and components; and WPs 8 to 10 focus on the development of manufacturing technologies required to drive down the still-higher cost of manufacturing electric vehicles compared to internal combustion engine (ICE) vehicles.

In a collaborative effort between business and R&D entities, WP 10 of the GreenAuto Agenda focuses on developing new solutions for autonomous systems typically used in the automotive industry, and PPS 18 (3D navigation system for mobile robots) fits the vision defined for WP 10: Automated Logistics for the Automotive Industry.

Thus, the overall goal of PPS 18 is to promote R&D&I activities for the development of a navigation system for Autonomous Mobile Robots (AMRs) that is robust in complex, real-world contexts and circumstances, based on the fusion of sensors: precision GPS, odometry, 3D cameras, gyroscopes, and accelerometers.

The proposed poster will present the architecture envisioned for this system and some of the technologies being researched to fulfil PPS 18 objectives.

14:40
Initial Comparison of Infrastructure-free Synchronization for Dynamic Meshes of Robots

ABSTRACT. This poster shows preliminary results concerning the synchronization among a team of mobile robots that establish dynamic links, aiming at supporting a TDMA framework for collision-free transmissions. It presents and compares two synchronization approaches that are infrastructure-free and self-organizing based on the principle of pulse coupling, namely RA-TDMA and DESYNC.

14:00-16:00 Session 8B: Robotic Navigation - 2
Location: Student Hub 1
14:00
Fuzzy logic based decision-making for urban platooning on urban roundabout scenarios

ABSTRACT. This paper proposes a fuzzy-based decision-making framework for urban platooning in roundabout scenarios. By utilizing fuzzy logic to handle uncertainties and imprecise inputs, the framework adapts the behavior of platoon vehicles based on real-time traffic conditions, vehicle dynamics, and safety considerations. In addition, an MPC-based platoon-following controller is proposed to execute the actions defined by the decision-making approach. The approach is tested in the Carla simulator with successful results, proving the proposal is feasible for platoon handling in urban roundabouts.

14:20
Fast tunnel traversal for ground vehicles by relative yaw estimation with neural networks

ABSTRACT. Underground environments are challenging for autonomous robotics, as they have a set of properties that make the most common self-localization and navigation methods struggle. We propose a system that allows wheeled autonomous vehicles to traverse tunnels at high speed. Our proposal is built around a Convolutional Neural Network (CNN) trained to predict the yaw that centers the robot in the tunnel as it traverses it. This exploits the generalization capabilities of CNNs, so that the method works in novel environments. The system achieves speeds of up to 10 m/s in novel simulated environments.

14:40
Safe Autonomous Multi-Vehicle Navigation using Path Following Control and Spline-based Barrier Functions

ABSTRACT. Autonomous driving is a consistent trend in discussions of future smart cities. Safety guarantees are a crucial concern in the design of vehicle navigation systems operating in heterogeneous environments such as streets and roads, in order to prevent severe harm to passengers and damage to expensive equipment. This chapter proposes a path-following control method for vehicle navigation in dynamic and fast-response situations, such as autonomous driving on a multi-lane highway, guaranteeing collision avoidance between pairs of vehicles and the invariance of vehicles within the lane limits. A Control Lyapunov Function (CLF) and Control Barrier Function (CBF) framework for achieving this objective is proposed and implemented by means of a Quadratic Programming (QP)-based controller. Convex barrier functions are proposed for modeling vehicle collisions, and B-Splines are used both as paths to be followed and as models for the lane limits. Simulation results using a Python-based framework for vehicle navigation are presented and discussed, demonstrating the viability of the proposed framework for autonomous driving.

15:00
Integration of a Free Navigation Autonomous Mobile Robot into a Graph and ROS-based Robot Fleet Manager

ABSTRACT. The need for interoperability between robots of different brands and navigation typologies is increasing, and this has led to the development of a new approach to empower a graph- and ROS-based robot fleet manager to manage free-navigation mobile robots. The approach is applicable to any free-navigation Autonomous Mobile Robot (AMR), with the necessary adaptations regarding the information provided by the different brands of robots. The OMRON LD-90 was the robot chosen for this implementation, and for this purpose a software module was developed that allows the exchange of data between a non-ROS-based mobile robot and a specific ROS-based robot fleet manager.

14:00-16:00 Session 8C: Robotics in Agriculture - 2
Location: Student Hub 2
14:00
A Mission Planner for Autonomous Tasks in Farms

ABSTRACT. This research introduces a Mission Planner, a route optimization system for agricultural robots. The primary goal is to enhance weed management efficiency using laser technology in narrow-row crops like wheat and barley and wide-row crops like beets and maize. The Mission Planner relies on graph-based approaches and incorporates a range of algorithms to generate efficient and secure routes. It employs three key algorithms: (i) Dijkstra algorithm for identifying the most optimal farm route, (ii) Visibility Road-Map Planner (VRMP) to select paths in cultivated fields where visibility is limited, and (iii) an enhanced version of the Hamiltonian path for determining the optimal route between crop lines. This Mission Planner stands out for its versatility and adaptability, owing to its emphasis on graphs and the diverse algorithms it employs for various tasks. This adaptability allows it to provide multiple functions, making it applicable beyond a specific role. Furthermore, its ability to adjust to different agricultural robot sizes and specifications is a significant advantage, as it enables tailored programming to meet safety and movement requirements specific to each robot. These research results affirm the effectiveness of the implemented strategies, demonstrating that a robot can confidently and effectively traverse the entire farm while performing weed management tasks, specifically laser-based weed management.

14:20
Pest Management in Olive Cultivation through Computer Vision: A Comparative Study of Detection Methods for Yellow Sticky Traps

ABSTRACT. This study compares two computer vision methods to detect yellow sticky traps using unmanned autonomous vehicles in olive tree cultivation. The traps aim to combat and monitor the density of the olive fly, an important pest that damages olive fruit, leading to substantial economic losses annually. The evaluated methods are traditional segmentation and state-of-the-art YOLOv8. A specific dataset was created to train and adjust the two algorithms. At the end of the study, both were able to locate the trap precisely. The segmentation algorithm performed well at short distances (50 cm), and YOLOv8 maintained consistent accuracy regardless of the tested distance. It is also demonstrated that both algorithms can be used in different situations, depending on the application's specific requirements, considering the trade-offs of accuracy and processing speed.

14:40
A Multisensor Factor-graph SLAM Framework for Steep Slope Vineyards

ABSTRACT. Steep slope vineyards pose specific challenges for autonomous robot navigation, therefore requiring accurate, robust, and scalable localization and mapping solutions. In addition, due to the unevenness of the terrain, the identification of traversable zones is crucial for safe operation, thus requiring a dense scene representation that captures these details. For these reasons, a novel SLAM architecture is presented in this work, characterized by a multi-sensor dual factor-graph framework that integrates, in real time, wheel odometry, IMU, LIDAR, and GNSS measurements, as well as heading and attitude data, generating a dense 3D map in point format. The proposed system was tested with datasets obtained from a real robot navigating in vineyards with different levels of slope, and benchmarked against state-of-the-art 3D LIDAR SLAM techniques. The presented results demonstrate superior performance over the compared methods, while maintaining overall map consistency and accuracy when matched with a reference model.

15:00
Mission Supervisor for food factories robots

ABSTRACT. Climate change, limited natural resources, and the increase in the world's population compel society to produce food more sustainably, with lower energy and water consumption. The use of robots in agriculture is one of the most promising ways to change the paradigm of agricultural practices. Agricultural robots should be seen as a way to make jobs easier and lighter, and also a way for people who do not have agricultural skills to produce their own food. The PixelCropRobot is a low-cost, open-source robot that can monitor and water plants in small gardens. This work modified some hardware of the existing physical version of the PixelCropRobot, developed mission supervision software, and created an interface based on Node-RED. The communication between the mission supervisor and the other components of the system uses ROS 2 and MQTT. The mission supervisor receives a prescription map with information about the respective mission and decomposes it into simple tasks. An A* algorithm then defines the priority of each mission, which depends on factors such as water requirements and distance travelled. The changes to the hardware and software were tested in a greenhouse, and the results showed that the PixelCropRobot and the mission supervisor software performed all tasks optimally. The robotic platform and software can be extended to perform other missions required for greenhouses and urban food factories.