Audio-Visual Perception System for a Humanoid Robotic Head.
| dc.centro | E.T.S.I. Telecomunicación | es_ES |
| dc.contributor.author | Viciana-Abad, Raquel | |
| dc.contributor.author | Marfil-Robles, Rebeca | |
| dc.contributor.author | Pérez-Lorenzo, José Manuel | |
| dc.contributor.author | Bandera-Rubio, Juan Pedro | |
| dc.contributor.author | Romero-Garcés, Adrián | |
| dc.contributor.author | Reche-López, Pedro | |
| dc.date.accessioned | 2024-10-03T08:54:24Z | |
| dc.date.available | 2024-10-03T08:54:24Z | |
| dc.date.issued | 2014-05-28 | |
| dc.departamento | Tecnología Electrónica | |
| dc.description.abstract | One of the main issues in the field of social robotics is endowing robots with the ability to direct their attention to the people with whom they are interacting. Different approaches follow bio-inspired mechanisms, merging audio and visual cues to localize a person using multiple sensors. However, most of these fusion mechanisms have been used in fixed systems, such as those found in video-conference rooms, and thus they may encounter difficulties when constrained to the sensors with which a robot can be equipped. Besides, within the scope of interactive autonomous robots, the benefits of audio-visual attention mechanisms over audio-only or visual-only approaches have rarely been evaluated in real scenarios: most tests have been conducted in controlled environments, at short distances and/or with off-line performance measurements. With the goal of demonstrating the benefit of fusing sensory information through Bayesian inference for interactive robotics, this paper presents a system for localizing a person by processing visual and audio data. Moreover, the performance of this system is evaluated and compared against unimodal systems, taking their technical limitations into account. The experiments show the promise of the proposed approach for the proactive detection and tracking of speakers in a human-robot interactive framework. | es_ES |
| dc.description.sponsorship | This work has been partially granted by the Science and Innovation Department of the Spanish Government under Project No. TIN2012-38079-C03-03 and by the University of Jaén and Caja Rural de Jaén (Spain) under Project No. UJA2013/08/44. | es_ES |
| dc.identifier.citation | Viciana-Abad, R., Marfil, R., Perez-Lorenzo, J.M., Bandera, J.P., Romero-Garces, A., Reche-Lopez, P. Audio-Visual Perception System for a Humanoid Robotic Head. Sensors (Switzerland), 2014, 14(6), pp. 9522–9545 | es_ES |
| dc.identifier.doi | 10.3390/s140609522 | |
| dc.identifier.issn | 1424-8220 | |
| dc.identifier.uri | https://hdl.handle.net/10630/34251 | |
| dc.language.iso | eng | es_ES |
| dc.publisher | MDPI | es_ES |
| dc.rights | Attribution 4.0 International | |
| dc.rights.accessRights | open access | es_ES |
| dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | |
| dc.subject | Human-computer interaction | es_ES |
| dc.subject | Computer vision (Robotics) | es_ES |
| dc.subject | Detectors | es_ES |
| dc.subject | Auditory perception | es_ES |
| dc.subject.other | Multimodal perception | es_ES |
| dc.subject.other | Bio-inspired attention mechanism | es_ES |
| dc.subject.other | Human-robot interaction | es_ES |
| dc.title | Audio-Visual Perception System for a Humanoid Robotic Head. | es_ES |
| dc.type | journal article | es_ES |
| dc.type.hasVersion | VoR | es_ES |
| dspace.entity.type | Publication | |
| relation.isAuthorOfPublication | ba99a400-b2b7-4ed3-a2f2-b0f2a75a5f86 | |
| relation.isAuthorOfPublication | d6451673-45f2-423a-8ea9-3eb718117284 | |
| relation.isAuthorOfPublication.latestForDiscovery | ba99a400-b2b7-4ed3-a2f2-b0f2a75a5f86 |
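The abstract describes fusing audio and visual cues with Bayesian inference to localize a speaker. As a minimal sketch of that idea (not the paper's actual implementation): a posterior over candidate directions can be obtained by multiplying a prior with independent audio and visual likelihoods and normalizing. The angle grid, Gaussian sensor models, and noise widths below are illustrative assumptions.

```python
import math

def gaussian(x, mu, sigma):
    """Unnormalized Gaussian likelihood of observation x given mean mu."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fuse(angles, prior, audio_obs, visual_obs,
         audio_sigma=15.0, visual_sigma=5.0):
    """Posterior over candidate azimuths (degrees) after fusing an audio
    bearing estimate and a visual detection via Bayes' rule, assuming the
    two observations are conditionally independent given the true angle."""
    posterior = []
    for a, p in zip(angles, prior):
        likelihood = (gaussian(audio_obs, a, audio_sigma) *
                      gaussian(visual_obs, a, visual_sigma))
        posterior.append(p * likelihood)
    z = sum(posterior)  # normalizing constant
    return [p / z for p in posterior]

angles = list(range(-90, 91, 5))           # candidate directions
prior = [1.0 / len(angles)] * len(angles)  # uniform prior
post = fuse(angles, prior, audio_obs=20.0, visual_obs=10.0)
best = angles[max(range(len(post)), key=post.__getitem__)]
```

Because the visual cue is modeled with a smaller noise width, the fused estimate lands closer to the visual observation than to the audio one, which mirrors the abstract's point that fusion outperforms either unimodal cue alone.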
Files
Original bundle
- Name: sensors-Audiovisual.pdf
- Size: 7.13 MB
- Format: Adobe Portable Document Format