ABSTRACT. Social robots are beginning to appear in our daily lives, yet deploying them is not as easy as one might imagine. We developed a human-like social robot, Robovie, and studied how to make it serve people in public spaces such as a shopping mall. On the technical side, we developed a human-tracking sensor network that enables us to robustly identify the locations of pedestrians. Given that the robot was able to understand pedestrian behaviors, we studied various forms of human-robot interaction in the real world. We faced many difficulties. For instance, the robot failed to initiate interaction with a person, and it failed to coordinate with its environment, for example by causing congestion around itself. To address these problems, we modeled various aspects of human interaction. Such models enabled the robot to better serve individuals and to understand people's crowd behavior, such as the congestion around the robot; however, they also invited a new problem, robot abuse. I plan to talk about a couple of studies in this line, as well as some of the successful services provided by the social robot in the shopping mall, hoping to provide insight into what social robots in public spaces will look like in the near future.
Path Planning of Multiple Automatic Guided Vehicles with Tricycle Kinematics Considering Priorities and Occupancy Time Windows
ABSTRACT. This paper addresses the problem of path planning for multiple automated guided vehicles (AGVs) that have the kinematic model of a tricycle and drive in a network of one-way roads. The goal of the path planning is that all the AGVs reach their goals in the shortest possible time, without any collision. The proposed path planning algorithm extends the A* algorithm and considers AGV priorities. An approach based on occupancy time windows is used to detect and resolve predicted collisions. The algorithm determines the waiting times and waiting locations that enable collision-free driving. The occupancy checking also takes the shape of the vehicle and its speed into account. The results of the path search algorithm are the paths for all the vehicles and the waiting times on the individual roads. The resulting paths are converted into action plans that are sent to the AGVs for execution. The applicability of the proposed path planning algorithm was evaluated in a simulation environment and on real small-scale AGVs.
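The abstract does not give implementation details, so the snippet below is only a minimal sketch of the occupancy time-window bookkeeping that such a prioritized scheme relies on, not the authors' A* extension itself. Road names, speeds, and the way reservation intervals are computed here are assumptions: each road keeps the time intervals reserved by higher-priority AGVs, and a lower-priority AGV waits until its own traversal interval no longer overlaps them.

```python
# Minimal sketch (assumed, not the paper's code): prioritized scheduling with
# occupancy time windows on a road network. Each road stores reserved time
# intervals; a lower-priority AGV waits until its traversal window is free.

def overlaps(a, b):
    """Return True if time intervals a = (t0, t1) and b = (t0, t1) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def earliest_entry(windows, arrival, travel_time):
    """Shift the entry time forward until (entry, entry + travel_time)
    does not overlap any reserved occupancy window."""
    entry = arrival
    changed = True
    while changed:
        changed = False
        for window in sorted(windows):
            if overlaps((entry, entry + travel_time), window):
                entry = window[1]          # wait until the window ends
                changed = True
    return entry

def schedule_route(route, lengths, speed, reservations):
    """Time-stamp a fixed route (list of road ids) for one AGV, inserting
    waiting times where needed, and reserve the visited roads."""
    t = 0.0
    plan = []
    for road in route:
        travel = lengths[road] / speed
        entry = earliest_entry(reservations.setdefault(road, []), t, travel)
        plan.append((road, entry, entry - t))   # (road, entry time, wait)
        reservations[road].append((entry, entry + travel))
        t = entry + travel
    return plan

# Usage: the higher-priority AGV A is scheduled first, then AGV B.
lengths = {"r1": 10.0, "r2": 6.0, "r3": 8.0}
reservations = {}
plan_a = schedule_route(["r2", "r1"], lengths, speed=1.0, reservations=reservations)
plan_b = schedule_route(["r2", "r3"], lengths, speed=1.0, reservations=reservations)
print(plan_a)
print(plan_b)   # AGV B waits 6 s before entering "r2"
```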
Cyber-Physical Platform with Miniature Robotic Vehicles for Research and Development of Autonomous Mobile Systems
ABSTRACT. The paper presents a cyber-physical platform that enables the development and evaluation of algorithms for autonomous driving of mobile robots. The platform consists of three essential parts: a vision system for object tracking, a small-scale physical model of an environment with miniature mobile robots, and a system for the simulation of virtual sensors. The vision system comprises a single camera above the platform that enables global tracking of all the objects in real time. The small-scale models of the environment are given as drawings on a flat surface on which the miniature wheeled mobile robots can drive. Two physical models that have been developed are presented: a town with miniature wheeled mobile robots and an industrial hall with miniature automated guided vehicles. Since not all of the essential sensors are available in an appropriate small-scale form, an approach that enables virtual sensors is introduced. The various modular systems are integrated using the framework provided by the Robot Operating System. The proposed platform can be deployed quickly, is simple to use, and is also very affordable, adjustable, and expandable.
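The abstract mentions virtual sensors derived from the globally tracked poses without giving implementation details. As an illustration only, the sketch below shows one plausible way to emulate a 2D range sensor: the overhead vision system supplies the robot pose, and rays are cast against an occupancy grid of the drawn environment. The grid representation, sensor parameters, and function names are assumptions, not the platform's actual code.

```python
import numpy as np

# Illustrative sketch (assumed): a virtual 2D range sensor computed from the
# globally tracked robot pose by casting rays against an occupancy grid that
# represents the drawn environment.

def virtual_scan(grid, resolution, pose, n_beams=36, max_range=2.0):
    """grid: 2D bool array (True = obstacle), resolution: meters per cell,
    pose: (x, y, theta) from the overhead vision system.
    Returns one simulated range reading per beam."""
    x, y, theta = pose
    angles = theta + np.linspace(-np.pi, np.pi, n_beams, endpoint=False)
    step = resolution / 2.0
    ranges = np.full(n_beams, max_range)
    for i, a in enumerate(angles):
        r = 0.0
        while r < max_range:
            cx = int((x + r * np.cos(a)) / resolution)
            cy = int((y + r * np.sin(a)) / resolution)
            if not (0 <= cy < grid.shape[0] and 0 <= cx < grid.shape[1]):
                break                      # beam left the model
            if grid[cy, cx]:
                ranges[i] = r              # hit an obstacle cell
                break
            r += step
    return ranges

# Usage: a 4 m x 4 m model at 5 cm resolution with one wall at x = 3.0 m.
grid = np.zeros((80, 80), dtype=bool)
grid[:, 60] = True
scan = virtual_scan(grid, 0.05, pose=(1.5, 2.0, 0.0))
print(round(scan[18], 2))                  # forward beam hits the wall near 1.5 m
```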
Autonomous hierarchy creation for path planning of mobile robots in large environments
ABSTRACT. The main task of mobile robots in large environments such as factories, warehouses, and open spaces is to transport goods and people. Planning paths in large environments using classical graph-based search is computationally too intensive. A representation by a hierarchical graph (H-graph) facilitates graph creation and reduces the complexity of path planning. In this paper, we present an algorithm for autonomously generating a hierarchy of the environment from floor plans. The hierarchical abstraction represents the environment in levels, from the most detailed to the most abstract, where pre-computed partial paths at the most detailed level become graph edges at a higher level. We use the E* algorithm to find partial paths at the most detailed abstraction level, and we propose the automatic extraction of higher levels from lower ones. We verified the proposed H-graph creation on our university premises, resulting in five abstraction levels.
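To illustrate the general idea rather than the authors' algorithm, the sketch below plans hierarchically: a coarse route is found on an abstract graph whose edges correspond to pre-computed partial paths on the detailed level, and the final path is obtained by concatenating those partial paths. The graph contents, the names, and the use of Dijkstra in place of the E* planner are assumptions made for brevity.

```python
import heapq

# Illustrative sketch (assumed): an abstract graph whose edges carry
# pre-computed partial paths from the detailed level. A coarse search on the
# abstract level is expanded into a detailed path by concatenation.

abstract_graph = {           # hypothetical top level: rooms and corridors
    "lab":      {"corridor": 12.0},
    "corridor": {"lab": 12.0, "lobby": 20.0},
    "lobby":    {"corridor": 20.0},
}
partial_paths = {            # hypothetical detailed waypoints for each edge
    ("lab", "corridor"):   [(0, 0), (2, 1), (5, 1)],
    ("corridor", "lobby"): [(5, 1), (9, 3), (14, 3)],
}

def dijkstra(graph, start, goal):
    """Shortest node sequence on the abstract level."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

def expand(route):
    """Concatenate pre-computed detailed paths along the abstract route."""
    detailed = []
    for a, b in zip(route, route[1:]):
        segment = partial_paths.get((a, b)) or partial_paths[(b, a)][::-1]
        detailed.extend(segment if not detailed else segment[1:])
    return detailed

route = dijkstra(abstract_graph, "lab", "lobby")
print(route)            # ['lab', 'corridor', 'lobby']
print(expand(route))    # detailed waypoints stitched from the partial paths
```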
Improving the flow in multi-robot logistic systems through optimization of layout roadmaps
ABSTRACT. In intralogistic systems, the set-up is extremely important and largely determines their performance. In systems that use automated guided vehicles (AGVs), planning the roadmaps is a complex problem that is usually solved by human experts. While roadmaps in AGV systems are fixed using magnetic tape, autonomous mobile robots (AMRs) provide additional flexibility because they are able to move around freely. However, to maintain system performance, free movement needs to be constrained, e.g., to avoid deadlocks in narrow aisles. A method is presented that suggests movement constraints in the form of preferred directions of motion. The free space is modeled as a grid graph with directed, weighted edges. The weights are then optimized by an algorithm inspired by ant colony optimization (ACO) that aims to reduce the number of conflict situations, such as collisions, in a given intralogistic problem. The approach is illustrated in a case study, which shows that the proposed method can inform a feasible roadmap plan that leads to fewer conflict situations.
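The abstract describes an ant-colony-inspired optimization of directed edge weights, but not the exact update rules, so the following is only a generic pheromone-style sketch. The aisle names, evaporation rate, and the simplified way conflicts are counted are assumptions and not the paper's method.

```python
import random

# Generic pheromone-style sketch (assumed): each narrow aisle keeps one
# pheromone value per direction. Simulated transport tasks pick a direction
# with probability proportional to pheromone; when both directions are used
# in the same round (a potential head-on conflict), only the majority
# direction is reinforced. Evaporation keeps the values bounded.

AISLES = ["a1", "a2", "a3"]               # hypothetical narrow aisles
pheromone = {a: {"forward": 1.0, "backward": 1.0} for a in AISLES}
RHO, DEPOSIT, ROUNDS, TASKS = 0.1, 0.5, 200, 6

def pick_direction(aisle):
    p = pheromone[aisle]
    total = p["forward"] + p["backward"]
    return "forward" if random.random() < p["forward"] / total else "backward"

for _ in range(ROUNDS):
    usage = {a: {"forward": 0, "backward": 0} for a in AISLES}
    for _ in range(TASKS):                # each task traverses one aisle
        aisle = random.choice(AISLES)
        usage[aisle][pick_direction(aisle)] += 1
    for a in AISLES:
        for d in ("forward", "backward"):
            pheromone[a][d] *= (1.0 - RHO)             # evaporation
        f, b = usage[a]["forward"], usage[a]["backward"]
        if f != b:
            winner = "forward" if f > b else "backward"
            pheromone[a][winner] += DEPOSIT            # reinforce the majority
                                                       # so future traffic agrees

# The preferred direction of motion suggested for each aisle:
for a in AISLES:
    p = pheromone[a]
    print(a, max(p, key=p.get), round(p["forward"], 2), round(p["backward"], 2))
```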
Reaching Motion Planning with Vision-Based Deep Neural Networks for Dual Arm Robots
ABSTRACT. Dual-arm robots have been attracting attention from the viewpoint of factory automation. Such robots are typically required to reach both hands toward their respective target objects simultaneously. We therefore focus on motion planning with deep neural networks. Given an RGB-D camera mounted on the robot, object images are fed as inputs to a reaching motion planner based on a convolutional neural network (CNN). With multiple objects, the depth of each object in the image is useful information for determining the reaching target. If the objects are close to each other, however, their depths become similar. To address this challenge, we propose to generate the target object image through instance segmentation and an order classifier. In an experiment with multiple objects, we show that the robot is able to reach both hands toward the target objects by using the generated target object images.
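The abstract names the components of the pipeline (instance segmentation, an order classifier, and a CNN-based reaching planner) but not their architectures, so the snippet below is only a schematic stand-in: a small convolutional network that maps a masked RGB-D crop of the target object to a vector of joint angles for one arm. The layer sizes, input resolution, and output dimensionality are assumptions.

```python
import torch
import torch.nn as nn

# Schematic stand-in (assumed, not the authors' network): a CNN mapping a
# masked RGB-D crop of the target object to the joint angles of one arm.

class ReachingPlannerCNN(nn.Module):
    def __init__(self, in_channels=4, n_joints=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128),
            nn.ReLU(),
            nn.Linear(128, n_joints),      # predicted joint configuration
        )

    def forward(self, x):
        return self.head(self.features(x))

# Usage: a batch of two 64x64 RGB-D crops (the masked target-object images).
planner = ReachingPlannerCNN()
crops = torch.randn(2, 4, 64, 64)
print(planner(crops).shape)                # torch.Size([2, 7])
```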
Kinematic calibration of a collaborative robot by a marker-based optical measurement procedure
ABSTRACT. Static robot calibration determines the parameters of a mathematical model that approximates as closely as possible the relationship between the end-effector pose of a robot and its corresponding actuated joint variables. The manufacturer of a robot delivers such a set of parameters, but in principle more accurate in-house measurements can be used to determine optimized parameters. The purpose of this paper is (1) to show how much the position accuracy of UR5e robots from Universal Robots can be increased by applying a marker-based optical measurement procedure based on circle fits, and (2) to identify the limitations of this procedure.
Furthermore, it is described how to use the so-called modified Denavit-Hartenberg parameters instead of the typically used distal/classical definition to achieve comparability between different measurements. Overall, it is shown that the calibration procedure reduces the errors in practice, but only slightly for a brand-new robot when the manufacturer's calibration data set is used. The remaining errors show a specific structure, which suggests that the positioning accuracy is limited by the mechanical stability of the robot rather than by the accuracy of the measurement procedure.
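The abstract does not spell out the fitting procedure, so the following is only a generic sketch of the underlying idea: forward kinematics built from modified (Craig-style) Denavit-Hartenberg parameters and a least-squares fit of parameter corrections to measured end-effector positions. The hypothetical three-link arm, the synthetic measurements, and the use of scipy are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Generic sketch (assumed, not the paper's code): fit corrections to modified
# (Craig-style) DH parameters so that the forward kinematics matches measured
# end-effector positions in the least-squares sense.

def mdh_transform(alpha, a, d, theta):
    """Homogeneous transform of one link in modified DH convention."""
    ca, sa, ct, st = np.cos(alpha), np.sin(alpha), np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,   a],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0,      0.0,      0.0,  1.0],
    ])

def forward_position(params, joints):
    """End-effector position; params is a flat array of
    (alpha, a, d, theta_offset) per link."""
    T = np.eye(4)
    for i, q in enumerate(joints):
        alpha, a, d, off = params[4 * i: 4 * i + 4]
        T = T @ mdh_transform(alpha, a, d, q + off)
    return T[:3, 3]

def residuals(delta, nominal, joint_samples, measured_positions):
    p = nominal + delta
    pred = np.array([forward_position(p, q) for q in joint_samples])
    return (pred - measured_positions).ravel()

# Hypothetical 3-link arm: nominal parameters and synthetic "measurements"
# generated from slightly perturbed true parameters.
nominal = np.array([0.0, 0.0, 0.1, 0.0,  -np.pi / 2, 0.0, 0.0, 0.0,  0.0, 0.3, 0.0, 0.0])
true = nominal + 0.002 * np.random.default_rng(0).standard_normal(nominal.size)
joint_samples = [np.random.default_rng(i).uniform(-1.0, 1.0, 3) for i in range(20)]
measured = np.array([forward_position(true, q) for q in joint_samples])

fit = least_squares(residuals, x0=np.zeros(nominal.size),
                    args=(nominal, joint_samples, measured))
before = np.sqrt(np.mean(residuals(np.zeros(nominal.size), nominal, joint_samples, measured) ** 2))
after = np.sqrt(np.mean(fit.fun ** 2))
print(f"rms position error: {before:.4f} m -> {after:.6f} m")
```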
Randomized Robotic Visual Quality Inspection with In-hand Camera
ABSTRACT. Robotic visual inspection is based on manually pre-defined postures (relations) between the camera and the object: the robot either moves the object in front of the camera or moves an in-hand camera around the object. Either the complete object is checked or, in order to save time, only some critical parts. The path of the robot is typically fixed and determined in advance. However, in order to check only some out of all possible aspects of the product, we need to generate all possible transitions between the aspects and then choose the correct ones. In this paper, we experimentally evaluate various motion planning algorithms that autonomously generate transitions between different, random inspection postures for guiding an in-hand camera with a robot in a confined space. This allows for completely random sequences of motion through predefined postures for visual quality inspection.
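The abstract does not name the planners compared, so the sketch below only illustrates the framing of the problem: transitions are planned between every ordered pair of predefined inspection postures, after which an arbitrary random inspection order can be executed from the stored transitions. The posture values, the placeholder interpolation "planner", and the collision-check hook are hypothetical; a real system would call an actual motion planner.

```python
import itertools
import random

# Illustrative sketch (assumed): pre-plan transitions between every ordered
# pair of inspection postures so that any randomized inspection sequence can
# be executed by looking up the stored transition paths.

POSTURES = {                     # hypothetical joint-space inspection postures
    "top":   [0.0, -1.2, 1.0, 0.2, 1.5, 0.0],
    "left":  [0.8, -1.0, 0.9, 0.1, 1.5, 0.3],
    "right": [-0.8, -1.0, 0.9, 0.1, 1.5, -0.3],
    "front": [0.0, -0.6, 0.4, 0.5, 1.5, 0.0],
}

def in_collision(config):
    """Placeholder for a real collision check in the confined workspace."""
    return False

def plan_transition(start, goal, steps=20):
    """Toy 'planner': straight-line joint interpolation, rejected if any
    intermediate configuration collides."""
    path = []
    for k in range(steps + 1):
        t = k / steps
        config = [(1 - t) * s + t * g for s, g in zip(start, goal)]
        if in_collision(config):
            return None
        path.append(config)
    return path

# Pre-compute transitions between all ordered posture pairs.
transitions = {}
for a, b in itertools.permutations(POSTURES, 2):
    transitions[(a, b)] = plan_transition(POSTURES[a], POSTURES[b])

# Execute one randomized inspection sequence by chaining stored transitions.
sequence = random.sample(list(POSTURES), k=len(POSTURES))
print("inspection order:", sequence)
for a, b in zip(sequence, sequence[1:]):
    path = transitions[(a, b)]
    print(f"{a} -> {b}: {len(path)} waypoints")
```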
Articulated Objects: from Detection to Manipulation - Survey
ABSTRACT. Robotic manipulation of articulated objects, such as opening and closing doors, drawers, and cabinets, is an emerging research topic in robotics. This is evidenced by the large number of recent research papers on the topic, as well as by annotated datasets of articulated objects and their parts. This article reviews the state of the art and compares image-processing-based methods and benchmark datasets for detecting and segmenting articulated objects, determining their kinematic parameters, and, finally, manipulating them with a robot.