Tags: Object-level SLAM, Plane feature, RGB-D, Structured constraints
Abstract:
vSLAM (Visual Simultaneous Localization and Mapping) is a fundamental capability in many robotic applications. As downstream applications mature, the demands for semantic scene understanding and stable operation across diverse scenarios continue to grow. In this paper, we propose an object-level RGB-D SLAM system that reconstructs objects as quadric surfaces and extracts planar features, which carry lower measurement noise than point features. The extracted planes and the original point features are tightly coupled as landmarks to enhance the robustness of the system across scenarios. Moreover, we use the edges of observed planes to infer unseen planes, yielding additional structured constraints. Experiments on publicly available datasets demonstrate that our framework achieves competitive performance compared with state-of-the-art object-based algorithms. Code is available at github.com/DemoShiNan/RGBDOBJ/tree/master.
Integrate Depth Information to Enhance the Robustness of Object-Level SLAM