Multimodal object recognition module for social robots
Description: Main article
Publisher
Springer
Abstract
Sensor fusion techniques increase robustness and accuracy over data provided by isolated sensors. Fusion can be performed at a low level, creating shared data representations from multiple sensory inputs, or at a high level, checking the consistency and similarity of objects provided by different sources. The latter techniques are more prone to discarding perceived objects due to overlap or partial occlusion, but they are usually simpler and more scalable. Hence, they are better suited when data gathering is the key requirement, safety is not compromised, computational resources may be limited, and new sensors must be easy to incorporate (e.g., monitoring in smart environments or object recognition for social robots). This paper proposes a novel perception integrator module that uses low-complexity algorithms to implement fusion, tracking and forgetting mechanisms. Its main characteristics are simplicity, adaptability and scalability. The system has been integrated into a social robot and used to achieve multimodal object and person recognition. Experimental results show the adequacy of the solution in terms of detection and recognition rates, integrability into the constrained resources of a robot, and adaptability to different sensors, detection priorities and scenarios.
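The high-level approach described in the abstract (merging objects reported by different sensors by consistency and similarity, then forgetting stale ones) can be illustrated with a minimal sketch. This is not the paper's implementation; the class, parameters, and merge rule below are illustrative assumptions only:

```python
import math

class ObjectMemory:
    """Toy high-level fusion memory (illustrative, not the paper's module).

    Detections from any sensor are merged with stored objects when the
    label matches and positions are close; confidence decays over time,
    so objects that are no longer observed are eventually forgotten.
    """

    def __init__(self, match_radius=0.5, decay=0.1, forget_below=0.2):
        self.match_radius = match_radius  # max distance (m) to merge detections
        self.decay = decay                # confidence lost per second
        self.forget_below = forget_below  # drop objects under this confidence
        self.objects = []                 # each: dict with label, pos, conf, t

    def observe(self, label, pos, conf, t):
        """Fuse one detection (from any sensor) into memory."""
        for obj in self.objects:
            if obj["label"] == label and math.dist(obj["pos"], pos) < self.match_radius:
                # Same object seen again: average position, keep best confidence.
                obj["pos"] = tuple((a + b) / 2 for a, b in zip(obj["pos"], pos))
                obj["conf"] = max(obj["conf"], conf)
                obj["t"] = t
                return
        self.objects.append({"label": label, "pos": pos, "conf": conf, "t": t})

    def forget(self, now):
        """Decay confidence with elapsed time and drop stale objects."""
        kept = []
        for obj in self.objects:
            conf = obj["conf"] - self.decay * (now - obj["t"])
            if conf >= self.forget_below:
                obj["conf"], obj["t"] = conf, now
                kept.append(obj)
        self.objects = kept
```

Because fusion happens at the object level, adding a new sensor only requires it to publish labeled detections in a shared frame; no low-level data representation has to change, which is the scalability advantage the abstract points to.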
Bibliographic citation
Cruces, A., Tudela, A., Romero-Garcés, A., Bandera, J.P. (2023). Multimodal Object Recognition Module for Social Robots. In: Tardioli, D., Matellán, V., Heredia, G., Silva, M.F., Marques, L. (eds) ROBOT2022: Fifth Iberian Robotics Conference. ROBOT 2022. Lecture Notes in Networks and Systems, vol 590. Springer, Cham. https://doi.org/10.1007/978-3-031-21062-4_40