DGB-ICRA-2019: Dataset Generation and Benchmarking of SLAM Algorithms for Robotics and VR/AR
Website: https://sites.google.com/view/icra-2019-workshop/home
Submission link: https://easychair.org/conferences/?conf=dgbicra2019
Submission deadline: March 15, 2019
Synthetic datasets have gained enormous popularity in the computer vision community, from training and evaluating Deep Learning-based methods to benchmarking Simultaneous Localization and Mapping (SLAM). Having the right tools to create customized datasets will enable faster development, with a focus on robotics applications. A large number of datasets exist, but with emerging applications and new research directions, there is a need for versatile dataset generation tools covering all aspects of our daily life. On the other hand, SLAM is becoming a key component of robotics and augmented reality (AR) systems. While a large number of SLAM algorithms have been presented, there has been little effort to unify the interface of such algorithms or to perform a holistic comparison of their capabilities. This is a problem, since different SLAM applications can have different functional and non-functional requirements. For example, a mobile phone-based AR application has a tight energy budget, while a UAV navigation system usually requires high accuracy. This workshop aims to bring together experts in these two fields, dataset generation tools and benchmarking, to address the challenges researchers are facing.
This event will introduce novel benchmarking and dataset generation methods. As organizers, we will introduce InteriorNet (BMVC 2018), SLAMBench2.0 (ICRA 2018), and MLPerf.
- InteriorNet, developed at Imperial College London, is a versatile dataset generation application, capable of simulating a wide range of sensors and environment variations, such as moving objects and daylight changes.
- SLAMBench2.0, developed at the University of Edinburgh, Imperial College London, and the University of Manchester, is an open-source benchmarking framework for evaluating existing and future SLAM systems, both open and closed source, over an extensible list of datasets, while using a comparable and clearly specified list of performance metrics. A wide variety of existing datasets, such as InteriorNet, TUM, and ICL-NUIM, are supported, as well as many SLAM algorithms, such as ElasticFusion, InfiniTAM, ORB-SLAM2, and OKVIS. Integrating new algorithms and datasets into SLAMBench2.0 is straightforward and clearly specified by the framework. Attendees will gain experience in generating datasets and evaluating SLAM systems with SLAMBench.
- The MLPerf effort aims to build a common set of benchmarks that enables the machine learning (ML) field to measure system performance for both training and inference, from mobile devices to cloud services. Researchers from several universities, including Harvard University, Stanford University, the University of Arkansas at Little Rock, the University of California, Berkeley, the University of Illinois at Urbana-Champaign, the University of Minnesota, the University of Texas at Austin, and the University of Toronto, have contributed to MLPerf.
For further information about these works, please refer to the following links:
InteriorNet: https://interiornet.org/
SLAMBench2.0: https://github.com/pamela-project/slambench2
MLPerf: https://mlperf.org/
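To illustrate the kind of accuracy metric a SLAM benchmarking framework reports, here is a minimal sketch (not SLAMBench code; the function name and toy data are ours) of absolute trajectory error (ATE) RMSE over time-aligned pose estimates:

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between two
    time-aligned Nx3 arrays of camera positions (metres)."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy trajectories: the estimate drifts 0.1 m along x at every pose.
gt = np.array([[0.0, 0.0, 0.0],
               [1.0, 0.0, 0.0],
               [2.0, 0.0, 0.0]])
est = gt + np.array([0.1, 0.0, 0.0])
print(ate_rmse(est, gt))  # 0.1
```

Real evaluation pipelines additionally align the two trajectories (e.g. with a rigid-body or similarity transform) before computing the error, since a SLAM estimate is only defined up to its initial frame.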
- The workshop accepts contributions describing early research on emerging topics.
- The workshop is intended for quick publication of work-in-progress, early results, etc. The workshop is not intended to prevent later publication of extended papers.
- Prizes will be given to the best paper and also to the best presentation.
- Submission Format: for extended abstracts or full papers, please use the standard IEEE format (2-8 pages).
- Submission Link: https://easychair.org/conferences/?conf=dgbicra2019
The following paper categories are welcome:
- SLAM Evaluation
- Reproducible Results
- Performance Analysis
- Application-oriented Mapping
- Metrics for Loop Closure Evaluation
- Active Vision Benchmarking and Datasets
- Metrics for Evaluations: from Perception to Motion Control
- Dataset and Benchmarking of SLAM in Dynamic Environments
- Task-based SLAM Evaluation: Navigation, Grasping, Planning, etc.
- Datasets and Benchmarking of AI for Robotics and Scene Understanding
- Customized Dataset Generation for SLAM and Robotics Learning: Tools and Datasets
- Deep Learning and AI: Datasets, Evaluation, and Benchmarking for Semantic and 3D Scene Understanding
Committees
Organizing committee
- Sajad Saeedi
- Bruno Bodin
- Wenbin Li
- Rui Tang
- Luigi Nardi
- Paul HJ Kelly
- Ankur Handa
Contact
All questions about submissions should be emailed to icra.workshop@gmail.com