PL-SLAM: a stereo SLAM system through the combination of points and line segments
Files
Description: Main article, author's postprint.
Publisher
IEEE
Abstract
Traditional approaches to stereo visual simultaneous localization and mapping (SLAM) rely on point features to estimate the camera trajectory and build a map of the environment. In low-textured environments, however, it is often difficult to find a sufficient number of reliable point features and, as a consequence, the performance of such algorithms degrades. This paper proposes PL-SLAM, a stereo visual SLAM system that combines points and line segments to work robustly in a wider variety of scenarios, particularly in those where point features are scarce or poorly distributed in the image. PL-SLAM leverages both points and line segments at all stages of the process: visual odometry, keyframe selection, bundle adjustment, etc. We also contribute a loop-closure procedure based on a novel bag-of-words approach that exploits the combined descriptive power of the two kinds of features. Additionally, the resulting map is richer and more diverse in three-dimensional elements, which can be exploited to infer valuable, high-level scene structures such as planes, empty spaces, and the ground plane (not addressed in this paper). Our proposal has been tested on several popular datasets (such as EuRoC and KITTI) and compared against state-of-the-art methods such as ORB-SLAM2, revealing a more robust performance in most of the experiments while still running in real time. An open-source version of the PL-SLAM C++ code has been released for the benefit of the community.
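
For illustration only, the following is a minimal C++ sketch of the point-plus-line front-end idea summarized in the abstract: extracting ORB point features together with LSD line segments and LBD line descriptors from a single image. It assumes an OpenCV build that includes the opencv_contrib line_descriptor module (LSD availability varies across OpenCV versions). It is not the authors' released PL-SLAM implementation, and all names and parameter values here are illustrative.

    // Minimal sketch (not the authors' released code): extract both point and
    // line-segment features from one image, as in a combined point/line front end.
    #include <opencv2/core.hpp>
    #include <opencv2/features2d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/line_descriptor.hpp>   // opencv_contrib module
    #include <iostream>
    #include <vector>

    int main(int argc, char** argv) {
        if (argc < 2) {
            std::cerr << "usage: " << argv[0] << " <image>\n";
            return 1;
        }
        cv::Mat img = cv::imread(argv[1], cv::IMREAD_GRAYSCALE);
        if (img.empty()) return 1;

        // Point features: ORB keypoints with binary descriptors.
        cv::Ptr<cv::ORB> orb = cv::ORB::create(1000);
        std::vector<cv::KeyPoint> keypoints;
        cv::Mat pointDesc;
        orb->detectAndCompute(img, cv::noArray(), keypoints, pointDesc);

        // Line-segment features: LSD detector plus LBD binary descriptors.
        using namespace cv::line_descriptor;
        cv::Ptr<LSDDetector> lsd = LSDDetector::createLSDDetector();
        std::vector<KeyLine> keylines;
        lsd->detect(img, keylines, /*scale=*/2, /*numOctaves=*/1);

        cv::Ptr<BinaryDescriptor> lbd = BinaryDescriptor::createBinaryDescriptor();
        cv::Mat lineDesc;
        lbd->compute(img, keylines, lineDesc);

        std::cout << keypoints.size() << " point features, "
                  << keylines.size() << " line segments detected\n";
        return 0;
    }

In a full pipeline such as the one described in the abstract, both descriptor sets would then be stereo-matched, tracked for visual odometry, refined in bundle adjustment, and quantized into a combined point/line bag-of-words vocabulary for loop closure.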
Bibliographic citation
R. Gomez-Ojeda, F. -A. Moreno, D. Zuñiga-Noël, D. Scaramuzza and J. Gonzalez-Jimenez, "PL-SLAM: A Stereo SLAM System Through the Combination of Points and Line Segments," in IEEE Transactions on Robotics, vol. 35, no. 3, pp. 734-746, June 2019, doi: 10.1109/TRO.2019.2899783.