Object-aware data association for the semantically constrained visual SLAM
Abstract

Traditional vSLAM methods extract feature points from images for tracking, and the data association of points relies on low-level geometric cues. When points are observed from different viewpoints, these cues are not robust for matching. In contrast, semantic information remains consistent across changes in viewpoint and observation scale, so semantic vSLAM methods have gained increasing attention in recent years. In particular, object-level semantic information can be used to model the environment as object landmarks, and it has been fused into many vSLAM methods, which are called object-level vSLAM methods. How to associate objects across consecutive images and how to exploit object information in pose estimation are two key problems for object-level vSLAM methods. In this work, we propose an object-level vSLAM method that addresses object-level data association and estimates camera poses using object semantic constraints. We present an object-level data association scheme that considers both object appearance and the geometry of point landmarks, performing object and point matching jointly so that each improves the other. We propose a semantic re-projection error function based on object-level semantic information and integrate it into pose optimization, establishing longer-term constraints. We conducted experiments on public datasets covering both indoor and outdoor scenes. The evaluation results demonstrate that our method achieves high accuracy in object-level data association and outperforms the baseline method in pose estimation. An open-source version of the code is also available.
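To make the idea of a semantic re-projection error concrete, the sketch below shows one simple way such a term could be formed: project the 3D centre of an object landmark into the image with the current camera pose, and penalize its distance from the centre of the associated 2D detection bounding box. This is a minimal illustration, not the paper's actual formulation; the function names, the choice of a centre-distance residual, and the pinhole-camera setup are all assumptions for the example.

```python
import numpy as np

def project_point(K, T_cw, p_w):
    """Project a 3D world point into pixel coordinates.

    K    : 3x3 camera intrinsic matrix (pinhole model, assumed here).
    T_cw : 4x4 world-to-camera rigid transform.
    p_w  : 3-vector, point position in world coordinates.
    """
    p_c = T_cw[:3, :3] @ p_w + T_cw[:3, 3]   # world -> camera frame
    uv = K @ p_c                             # camera frame -> homogeneous pixels
    return uv[:2] / uv[2]                    # perspective division

def semantic_reprojection_error(K, T_cw, obj_center_w, detected_bbox):
    """Hypothetical semantic re-projection residual.

    obj_center_w  : 3D centre of the object landmark (world frame).
    detected_bbox : associated 2D detection as (x_min, y_min, x_max, y_max).

    Returns the pixel distance between the projected landmark centre and
    the detection's bounding-box centre. In a real system this residual
    would be one term in the pose optimization, alongside point terms.
    """
    u, v = project_point(K, T_cw, obj_center_w)
    cx = 0.5 * (detected_bbox[0] + detected_bbox[2])
    cy = 0.5 * (detected_bbox[1] + detected_bbox[3])
    return np.hypot(u - cx, v - cy)
```

For instance, with identity pose, intrinsics focal length 500 and principal point (320, 240), an object centre at (0, 0, 5) projects to pixel (320, 240); a detection box centred there yields a zero residual, and any pose drift moves the projection and increases the error, which is what gives the optimizer a longer-term object-level constraint.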