Impact of non-individualised head related transfer functions on speech-in-noise performances within a synthesised virtual environment
| dc.centro | E.T.S.I. Telecomunicación | es_ES |
| dc.contributor.author | Cuevas-Rodríguez, María | |
| dc.contributor.author | González-Toledo, Daniel | |
| dc.contributor.author | Reyes-Lecuona, Arcadio | |
| dc.contributor.author | Picinali, Lorenzo | |
| dc.date.accessioned | 2026-01-22T10:33:48Z | |
| dc.date.issued | 2021-04-13 | |
| dc.departamento | Tecnología Electrónica | es_ES |
| dc.description.abstract | When performing binaural spatialisation, it is widely accepted that the choice of head related transfer functions (HRTFs), and in particular the use of individually measured ones, can have an impact on localisation accuracy, externalisation, and overall realism. Yet the impact of HRTF choice on speech-in-noise performance in cocktail party-like scenarios has not been investigated in depth. This paper introduces a study in which 22 participants were presented with a frontal speech target and two lateral maskers, spatialised using a set of non-individual HRTFs. Speech reception threshold (SRT) was measured for each HRTF. Furthermore, using the SRT predicted by an existing speech perception model, the measured values were compensated in an attempt to remove overall HRTF-specific benefits. Results show significant overall differences among the SRTs measured using different HRTFs, consistent with the results predicted by the model. Individual differences between participants in their SRT performance using different HRTFs could also be found, but their significance was reduced after the compensation. The implications of these findings are relevant to several research areas related to spatial hearing and speech perception, suggesting that when testing speech-in-noise performance within binaurally rendered virtual environments, the choice of HRTF for each individual should be carefully considered. | es_ES |
| dc.description.sponsorship | European Union’s Horizon 2020 research and innovation programme | es_ES |
| dc.description.sponsorship | Ministerio de Ciencia, Innovación y Universidades | es_ES |
| dc.identifier.citation | Maria Cuevas-Rodriguez, Daniel Gonzalez-Toledo, Arcadio Reyes-Lecuona, Lorenzo Picinali; Impact of non-individualised head related transfer functions on speech-in-noise performances within a synthesised virtual environment. J. Acoust. Soc. Am. 1 April 2021; 149 (4): 2573–2586 | |
| dc.identifier.doi | 10.1121/10.0004220 | |
| dc.identifier.uri | https://hdl.handle.net/10630/44717 | |
| dc.language.iso | eng | es_ES |
| dc.publisher | Acoustical Society of America | |
| dc.relation.projectID | info:eu-repo/grantAgreement/EC/H2020/644051/EU//3DTune-In | es_ES |
| dc.relation.projectID | info:eu-repo/grantAgreement/AEI/PGC///SAVLab | es_ES |
| dc.rights | Attribution 4.0 International | * |
| dc.rights.accessRights | open access | es_ES |
| dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
| dc.subject | Virtual reality | |
| dc.subject | Speech - Automatic recognition | |
| dc.subject.other | Head Related Transfer Function | |
| dc.subject.other | Speech in Noise | |
| dc.subject.other | Speech recognition | |
| dc.title | Impact of non-individualised head related transfer functions on speech-in-noise performances within a synthesised virtual environment | es_ES |
| dc.type | journal article | es_ES |
| dc.type.hasVersion | VoR | |
| dspace.entity.type | Publication | |
| relation.isAuthorOfPublication | 05db8acb-40ab-48ba-be98-3eb847047e46 | |
| relation.isAuthorOfPublication.latestForDiscovery | 05db8acb-40ab-48ba-be98-3eb847047e46 |
Files
Original bundle
- Name: 2573_1_online.pdf
- Size: 4.28 MB
- Format: Adobe Portable Document Format
- Description: Article