ISMCR2022: 25TH INTERNATIONAL SYMPOSIUM ON MEASUREMENT AND CONTROL IN ROBOTICS
PROGRAM FOR FRIDAY, SEPTEMBER 30TH

09:00-10:00 Session C1: Navigation and communications
09:00
Business network lifecycle model: construct validity using structural equation model

ABSTRACT. In the context of small and medium-sized companies, characterized by the scarcity of human and economic resources, new production arrangements are studied and established, guided by the perception that many individual difficulties can be overcome together and that opportunities for learning, innovation and improvement can be achieved as a team (VERSCHOORE; BALESTRIN, 2010). Among the forms of arrangements, cooperation networks are those defined by formal and long-term collaborative agreements, through which companies establish a joint purpose and a form of governance to achieve common goals and generate competitive advantages (MANDELL et al., 2016; VERSCHOORE et al., 2016). However, even with all the benefits that can be obtained through the formation of business networks, in many cases they are constituted to solve specific problems and thus lack the strength to sustain and structure themselves (ASSENS, 2003). Studies also point to the lack of knowledge needed to establish and manage networks as another factor in their failure (AGOSTINI et al., 2015). The failure and low level of development of networks of Brazilian companies have already been observed in studies in the academic literature (BORTOLASO et al., 2012; TOIGO; ALBA, 2010; ROTH et al., 2010). To better understand the evolution of a network in relation to organizational and relational aspects, analytical performance models have been developed in the literature. Among them, this work studies the life cycle model developed by Wegner et al. (2015). Life cycle analysis, when applied to the context of business networks, makes it possible to verify a network's development stage according to its particular characteristics, in addition to offering a temporal analysis of the changes and evolutions that occur within it. 
For network managers, such a model both indicates the network's current stage of development and provides information to accelerate its consolidation (WEGNER et al., 2015). Since the life cycle is a latent construct, that is, an aspect that cannot be directly observed or measured, it can be analyzed from the perspective of the construction of psychometric scales (PASQUALI, 2009). This perspective aims to ensure the validity and reliability of the measurement instrument developed, and is based on four basic conditions: elaboration and analysis of items, validity studies, precision and standardization (TIRLONI, 2013). Given the importance of life cycle analysis applied to the context of business networks, the present work aims to obtain evidence of construct validity, using structural equation modeling, for the life cycle model developed by Wegner et al. (2015). As a data source for the analysis, the authors' research instrument was applied to 369 members of business networks in the construction materials trade sector in Brazil.
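One of the four psychometric conditions mentioned, precision (reliability), is commonly checked with Cronbach's alpha before a structural equation model is fitted. A minimal sketch of that step, using synthetic Likert-style responses rather than the authors' actual survey data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 369 respondents, 4 items driven by one latent factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(369, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(369, 4))), 1, 5)
print(round(cronbach_alpha(items), 2))  # reliability well above the usual 0.7 cutoff
```

Items sharing a strong common factor, as simulated here, yield a high alpha; uncorrelated items drive it toward zero.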

09:20
Efficient Neural Network Pruning Using Model-Based Reinforcement Learning

ABSTRACT. Model compression plays an important role in the efficient deployment of neural networks on resource-constrained devices, such as mobile phones or embedded systems. Rule-based conventional neural network pruning is a popular approach to model compression, accomplished by systematically removing parameters from an existing, accurate network. This results in a smaller network while maintaining most of the initial accuracy. The process of choosing the parameters to be pruned is quite demanding, since the different layers in the DNN are not equally sensitive to parameter removal. Moreover, the deterioration of the accuracy is determined not only by the sensitivity of the current layer, but also by the number of parameters removed from all the previous layers. The number of possible variations of these dependencies is so large that they cannot be explored manually. Automated neural network pruning addresses this issue by leveraging a reinforcement learning agent to automatically find the best combination of parameters to remove from a given model without human interaction. This topic is currently an open issue in the literature; however, the main disadvantage of the existing solutions is that they determine the main environmental state variables – the deterioration of the accuracy and the sparsity – by pruning and testing the model on the validation dataset at run time, which slows down the training procedure considerably. In this paper, we propose a novel reinforcement learning-based system which is able to prune the YOLOv4 object detector optimally while also decreasing the training time. Compared to existing solutions, our system contains an additional neural network (the so-called State Predictor Network, SPN) that can predict the main environmental variables given the sparsification coefficient for the current layer and the number of previously removed parameters. 
This network replaces the long procedures that were previously performed to determine the environment state, making the training of the agent significantly faster. Moreover, the use of the SPN enables us to implement a model-based approach in our algorithm, as its main purpose is the simulation of the environment. It also links the actor and the reward's computational graph, which indirectly leads to less noisy gradients and therefore a more stable and faster learning process. We tested our method on the YOLOv4 detector, producing a model with 49% fewer parameters and 7.2% higher mAP. This result outperforms our rule-based handcrafted pruning methods designed for YOLOv4 by 2.3% in mAP and 17.1% in sparsity. In terms of the full development time needed to find the best reduced model, our method is 146.2 times faster than the state-of-the-art PuRL method on an NVIDIA Titan X GPU. The key advantage of our method is that the time saved by using a simulated environment can be devoted to various development-related tasks. In addition, none of the proposed system's parts require high computational capacity or GPU memory, so the field of RL-based automated pruning becomes accessible to researchers who do not own expensive, high-performance GPUs.
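Independent of the RL agent, the pruning action applied to a layer amounts to zeroing the fraction of weights with the smallest magnitude at a given sparsification coefficient. A minimal illustration of that environment step (this is generic magnitude pruning, not the authors' YOLOv4 pipeline):

```python
import numpy as np

def prune_layer(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(1)
w = rng.normal(size=(64, 64))          # stand-in for one layer's weight matrix
p = prune_layer(w, 0.5)
print((p == 0).mean())                 # achieved sparsity, ≈ 0.5
```

In an automated setup, the agent's action would choose `sparsity` per layer, and the SPN described above would predict the resulting accuracy drop instead of re-evaluating the pruned model.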

09:40
Successful Development of Problem-Solving and Computing Programming Competences in Children Using Arduino

ABSTRACT. Developing children's problem-solving and computer programming competencies is essential in the current information society. Problem-solving starts in preschool by observing and modelling adults' behaviour when facing situations and coming up with solutions (vicarious learning). Hence, children can understand how their actions affect problems and their outcomes. As in other developing countries, children in Chile grow up surrounded by technology, although students only receive programming classes in secondary school. Nowadays, with high-level block-based programming languages, developing programming competencies in children seems a reachable way of enhancing problem-solving competencies. Nonetheless, children often do not develop programming and electronics competencies because these competencies usually appear only at higher education levels. Moreover, primary school teachers tend to avoid developing them because they seem to fall outside the knowledge that students must acquire. Arduino can be used to improve the development of problem-solving and computer programming competencies: because of its functionality and its open hardware and software nature, it is a good tool for children to learn electronics and programming. In addition, children can become familiar with fundamental electronic components and design. Thus, Arduino helps children boost their thinking ability in a new dimension. We apply Arduino to teach essential electronic circuits and computer programming components so that children can successfully solve different computing and electronic problems, such as turning a set of lights on and off and reading sensors to react to the obtained values. 
Given the relevance of the problem-solving competence, the open nature of Arduino, and the applicability of Arduino to developing programming competence with block-based programming languages, this article aims at developing problem-solving and computer programming competencies in primary school children. A relevant associated result is that the participating children, from a primary school in Valparaíso, Chile, improved their average scores in school. The main limitations of this experiment were the children's lack of experience with electronics and programming concepts and the requirement of a computer with an internet connection.

10:10-10:50 Session C2: Robotics for human performance, rehabilitation and medical applications II
10:10
Proposal for a Multi-Objective Optimization Information System for Referral of Patients from the Emergency Unit

ABSTRACT. This paper presents a multi-objective modeling process that formulates a system under various constraints so that its performance can be optimized. In this approach, the different objectives are first formulated individually and then combined, using priority parameters, into a single objective function that serves as the main part of an optimization problem. Operational constraints are identified to further restrict the solution space to a finite set of feasible solutions that can be searched and evaluated in a reasonable timeframe. The workability of this multi-objective modeling process is demonstrated on the hospital system in Chile, where efficiency is modeled under the constraints of an operational capability that relies on the available supply and the replenishing behavior. The operational data are collected and used as inputs to the modeling process, and the optimized replenishing behavior is determined so that as many patients as possible can be served within the availability of the necessary supplies, automating the operational process.
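The weighted-sum scalarization over a finite feasible set described above can be sketched as an exhaustive search. The objective functions, weights, and decision variable here are placeholders for illustration, not the paper's hospital model:

```python
from itertools import product

# Hypothetical decision variable: daily replenishment quantity for two supply types
feasible = list(product(range(0, 101, 10), repeat=2))  # finite, searchable set

def patients_served(x):   # objective 1 (maximize): limited by the scarcer supply
    return 0.9 * min(x[0], x[1])

def holding_cost(x):      # objective 2 (minimize): cost of stocked supplies
    return 0.5 * (x[0] + x[1])

w1, w2 = 1.0, 0.4         # priority parameters combining the objectives

def scalarized(x):        # single objective function for the optimization problem
    return w1 * patients_served(x) - w2 * holding_cost(x)

best = max(feasible, key=scalarized)
print(best, round(scalarized(best), 1))  # → (100, 100) 50.0
```

Because the constrained set is finite and small, plain enumeration suffices; larger instances would call for heuristic or mathematical-programming search over the same scalarized objective.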

10:30
The Importance of Using Phantoms in Radiological Protection Dosimetry

ABSTRACT. The main objective of radiological protection dosimetry is to evaluate the dose absorbed in sensitive organs and tissues of the human body from internal or external sources of radiation (ICRP 103, 2007). Phantoms are physical or computational models used to simulate the transport of ionizing radiation and its interactions with the tissues of the human body, and to evaluate energy deposition in regions of interest. The Monte Carlo method is applied to reproduce a statistical process similar to the interaction of nuclear particles with human tissues. The objective of this work is, through a literature review, to compare computational phantoms with physical phantoms and to show the importance and evolution of the use of computational phantoms in radiological protection dosimetry. Physical phantoms, especially anthropomorphic ones, are very expensive and limited, besides being difficult to position (AP/PA/lateral). Computational simulators involve no radiation emission, which protects researchers and ensures that there is no variation of the simulated radiation beam. To reduce the uncertainty in dose calculations caused by anatomical variations, the scientific community has developed several computational phantoms with modified ICRP standard reference values for body mass, height, positioning, and the size and position of organs and structures.
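In its simplest form, the Monte Carlo transport mentioned above reduces to sampling exponential free-path lengths and tallying interactions inside a region of interest. A toy one-dimensional sketch; the attenuation coefficient and slab geometry are illustrative stand-ins, not tissue data:

```python
import random

random.seed(42)
MU = 0.2          # illustrative linear attenuation coefficient (1/cm)
DEPTH = 5.0       # slab thickness (cm), a stand-in for a tissue region
N = 100_000       # number of photon histories

interacted = 0
for _ in range(N):
    # Sample the free path from the exponential attenuation law p(s) = MU * exp(-MU*s)
    path = random.expovariate(MU)
    if path < DEPTH:       # photon interacts inside the slab
        interacted += 1    # tally the history

fraction = interacted / N
print(round(fraction, 3))  # ≈ 1 - exp(-MU * DEPTH) ≈ 0.632
```

A full dosimetry code tracks scattering angles, secondary particles, and per-organ energy deposition in the phantom's voxelized anatomy; the statistical estimator, however, is the same kind of tally shown here.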

10:50-11:50 Session C3: Algorithms on FPGA
10:50
Configurable Binary Designs on FPGA

ABSTRACT. Field-programmable gate array (FPGA) vendors like Intel/Altera and AMD/Xilinx let designers create complex circuits by instantiating and interconnecting intellectual property (IP) blocks in their tools, such as Vivado and Quartus [1,2]. These IPs are configurable and reusable in FPGA designs but are not open to simulators like Synopsys VCS, Cadence NC, and Mentor Graphics ModelSim/Questa. Therefore, this paper presents a binary design library including fundamental arithmetic circuits such as a full adder, full subtractor, binary multiplier, shifter, and more. The Chisel Hardware Construction Language (HCL) is employed to build the designs, making each design module configurable with precisions including half-word, word, double-word, and quad-word. Chisel HCL is an open-source embedded domain-specific language which inherits the object-oriented and functional programming aspects of Scala for constructing hardware. Compared with the traditional Verilog/VHDL Hardware Description Language (HDL), Chisel scales to the structural level of designs and is well suited to arithmetic implementations. Experimental results show the same accuracy as the Verilog HDL implementations. The hardware cost in terms of slice count, power consumption, and maximum clock frequency is also presented in this research study.
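A software golden model of one such configurable design, a ripple-carry adder parameterized by precision, can be used to verify hardware output bit by bit. A minimal Python sketch (an illustrative reference model, not the paper's Chisel library):

```python
def ripple_carry_add(a: int, b: int, width: int) -> tuple[int, int]:
    """Bit-serial addition at a configurable precision (e.g. 16/32/64/128 bits).
    Returns (sum mod 2**width, carry-out), mimicking the hardware ports."""
    s, carry = 0, 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        total = bit_a + bit_b + carry
        s |= (total & 1) << i   # sum bit of the full adder at position i
        carry = total >> 1      # carry ripples to the next full adder
    return s, carry

# Half-word (16-bit) configuration: 0xFFFF + 1 wraps to 0 with carry-out 1
print(ripple_carry_add(0xFFFF, 1, 16))  # → (0, 1)
```

The `width` parameter plays the same role as the precision parameter of a Chisel module: one description, instantiated at whichever word size the design requires.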

11:10
Evaluating FPGA Acceleration on Binarized Neural Networks and Quantized Neural Networks

ABSTRACT. The gap between complex deep learning models and the limited computing capability of an individual robot is a driving force behind the hardware acceleration of image/video processing networks [1,2]. Therefore, this paper presents two case studies on field-programmable gate array (FPGA) acceleration of deep neural networks. The hardware platform is composed of a Xilinx PYNQ-Z2 board as the computing device and a USB camera as the image/video sensor. The main applications, image/video detection and recognition of traffic signs and road objects, and a comparison of execution time between the FPGA and software, are performed in this work. Compared with the software computation, the FPGA implementation shows a significant speedup for road sign recognition (shown in Fig. 1) using Binarized Neural Networks and achieves more than a 50× speedup for object detection using Quantized Neural Networks (shown in Fig. 2). This paper focuses on preliminary research into hardware acceleration of deep neural networks. It demonstrates a great potential to improve the classification rate with FPGAs, particularly for time-constrained systems such as self-driving vehicles. The final goal of this project is to provide parameterizable designs for complex neural networks on FPGAs. The network will be configurable with different precisions and different numbers of layers/neurons for different design specifications.
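The speedup of Binarized Neural Networks comes from replacing multiply-accumulate with XNOR and popcount over ±1 weights and activations, which maps naturally onto FPGA logic. A minimal sketch of that core operation (illustrative, not the PYNQ-Z2 implementation):

```python
def binarized_dot(x_bits: int, w_bits: int, n: int) -> int:
    """Dot product of two ±1 vectors packed as n-bit integers (bit 1 ↦ +1, bit 0 ↦ -1).
    Matching bits contribute +1 and mismatching bits -1, so the result is
    2 * popcount(XNOR(x, w)) - n."""
    matches = n - bin(x_bits ^ w_bits).count("1")  # XNOR match count via XOR popcount
    return 2 * matches - n

# Example: x = [+1, -1, +1, +1] ↦ 0b1011, w = [+1, +1, -1, +1] ↦ 0b1101
print(binarized_dot(0b1011, 0b1101, 4))  # → 0 (two matches, two mismatches)
```

One XOR plus one popcount replaces n multiplications and n-1 additions, which is why a binarized layer fits in a small amount of FPGA fabric.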

11:30
A proposal for an FPGA-based graphical pipeline for virtual depth image generation
PRESENTER: Dániel Szabó

ABSTRACT. Autonomous robots and vehicles are taking an increasingly important role in our lives. There is significant interest in developing fast motion planning algorithms for robotic manipulators working in dynamic environments, which allows robotic arms to be used in various industrial applications. In risky interventions, it is important to plan a trajectory that is as safe as possible, i.e. collisions with obstacles in the environment have to be avoided. Another frequently studied use case for these dynamic motion planning algorithms is collaboration between a human operator and a robotic manipulator, where it is crucial to avoid dangerous collisions between humans and robots. To avoid dynamic obstacles, the moving objects must be recognized. This can be done using RGB-D cameras, which provide the distances between objects and the camera. To use this information in a reactive motion planning algorithm, the depth information related to the robotic arm must be separated from the distances to the obstacles. In former studies, a GPU-based method was proposed to solve this issue. A virtual depth image is generated using the CAD model of the manipulator, which provides virtual depth information related only to the robotic arm. Using this image, the measurements of the obstacle distances can be separated, and a reactive motion planning algorithm based on repulsive velocities is used. In our paper, we provide an FPGA-based solution for the virtual depth image generation process, implemented as a modified graphical pipeline on the FPGA. Using the current configuration of the manipulator and the camera parameters, the robot's 3D model is transformed and projected onto a 2D plane. After that, an FPGA-optimized rasterizer algorithm is presented to obtain the depth values for all the pixels of the virtual depth image. 
With this method, we provided a deterministic, real-time solution for the depth image generation problem, which is crucial in these kinds of safety-critical applications.
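The projection stage of such a pipeline can be sketched as a pinhole-camera transform that maps a 3D point of the robot's CAD model to a pixel coordinate and a virtual depth value. The camera intrinsics below are placeholder values, not the paper's calibration:

```python
import numpy as np

# Hypothetical pinhole intrinsics: focal lengths and principal point, in pixels
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

def project(point_cam: np.ndarray) -> tuple[int, int, float]:
    """Project a 3D point in camera coordinates to (u, v, depth)."""
    x, y, z = point_cam
    u = int(round(fx * x / z + cx))  # perspective divide, then principal-point offset
    v = int(round(fy * y / z + cy))
    return u, v, float(z)            # z becomes the virtual depth value at (u, v)

# A point on the (virtual) robot model, 1.5 m in front of the camera
print(project(np.array([0.1, -0.05, 1.5])))  # → (360, 220, 1.5)
```

The rasterizer stage then fills the triangles between projected vertices, keeping the smallest depth per pixel (a z-buffer), which yields the complete virtual depth image to subtract from the measured one.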

12:00-12:10 Session Closing
  1. Symposium summary by Dr. Bálint Kiss (event co-chair)
  2. Concluding remarks by Dr. Zafar Taqvi (IMEKO TC17 chair)
12:10-12:30 Session Virtual Tour

YDUQS and Estácio

Attendees will be treated to a 15-minute tour of Estácio showing how technology tools are supporting the educational transformation. It will be interesting to observe the implementation of some of the data analysis and virtual strategies.