A Comparative Analysis of Large Language Models for Bilingual Term Extraction in Spanish-Arabic Interpreting and Translation
| dc.centro | Facultad de Filosofía y Letras | |
| dc.contributor.author | Gaber, Mahmoud | |
| dc.date.accessioned | 2026-02-11T13:23:00Z | |
| dc.date.issued | 2025-10-29 | |
| dc.departamento | Traducción e Interpretación | |
| dc.description.abstract | The burgeoning capabilities of Large Language Models (LLMs) are profoundly impacting Natural Language Processing (NLP), with their application in terminology extraction gaining increasing scholarly attention [4]. Terminology extraction is a cornerstone of professional interpreting and translation, indispensable for upholding semantic precision and discursive fluency in specialised communication [1]. Automatic Terminology Extraction (ATE) methodologies endeavour to mitigate the arduous demands of manual terminology management by generating ranked lists of candidate terms from domain-specific corpora [2]. Despite the remarkable capabilities of LLMs, often attributed to their sophisticated training paradigms [4], empirical evaluations of these models for ATE purposes remain notably scarce, particularly concerning linguistically divergent pairs such as Spanish-Arabic. Existing research predominantly focuses on European language combinations, thereby creating a critical lacuna in understanding AI's efficacy in non-Indo-European linguistic contexts. Given the inherent structural and semantic disparities between Spanish and Arabic, a comprehensive assessment of AI's performance in this domain is imperative for ensuring the reliability of LLM-driven tools in professional linguistic workflows [5]. This study undertakes a comparative evaluation of AI tools—specifically ChatGPT, DeepSeek, Gemini, and Manus—for their proficiency in extracting bilingual Spanish-Arabic terminology across two distinct specialised domains: ophthalmology (medical) and tourism. Utilising a methodological framework encompassing Precision, Recall, F-score, and Accuracy [3], this research provides a granular assessment of each tool's capacity for accurate and contextually relevant bilingual term identification. The findings contribute to theoretical and practical advancements in AI-assisted terminology extraction, offering insights into the evolving landscape of AI integration within interpreting and translation studies. | |
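The abstract names Precision, Recall, F-score, and Accuracy as the evaluation framework. A minimal sketch of how these metrics apply to term extraction follows; the term lists and candidate universe are hypothetical illustrations, not data from the study.

```python
# Sketch of the four metrics named in the abstract, applied to a set of
# extracted terms versus a gold-standard term list. All terms below are
# hypothetical examples, not taken from the study's corpora.

def evaluate_extraction(extracted, gold, candidate_universe):
    """Compute Precision, Recall, F1, and Accuracy for term extraction."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)                        # correct terms found
    fp = len(extracted - gold)                        # spurious extractions
    fn = len(gold - extracted)                        # missed gold terms
    tn = len(candidate_universe - extracted - gold)   # correctly ignored
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical Spanish ophthalmology terms, for illustration only.
gold = {"glaucoma", "retina", "córnea", "catarata"}
extracted = {"glaucoma", "retina", "paciente"}        # one spurious term
universe = gold | extracted | {"hospital", "visita"}
p, r, f1, acc = evaluate_extraction(extracted, gold, universe)
```

In this toy example, two of three extractions are correct (Precision 0.67) while two gold terms are missed (Recall 0.5); Accuracy additionally credits the candidate words the system correctly left out.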
| dc.description.sponsorship | Instituto Universitario de Investigación de Tecnologías Lingüísticas Multilingües | |
| dc.description.sponsorship | Universidad de Málaga | |
| dc.identifier.uri | https://hdl.handle.net/10630/45386 | |
| dc.language.iso | eng | |
| dc.relation.eventdate | 29th to 31st October 2025 | |
| dc.relation.eventplace | University of Cordoba (Spain) | |
| dc.relation.eventtitle | 4th International Conference “Translation and the Language of Tourism” (TRADITUR) | |
| dc.relation.projectID | Postdoctoral research contract (PPIT-UMA) | |
| dc.relation.projectID | DIFARMA (HUM106-G-FEDER) | |
| dc.relation.projectID | DÍGAME (JA.A1.3-06) | |
| dc.relation.projectID | PIE22-135 (2022/23-2023/24) | |
| dc.relation.projectID | info:eu-repo/grantAgreement/AEI/Plan Estatal de Investigación Científica y Técnica y de Innovación 2017-2020/PID2020-112818GB-I00/ES/ADAPTACION MULTILINGUE Y MULTI-DOMINIO PARA LA OPTIMIZACION DEL SISTEMA VIP/ | |
| dc.relation.projectID | PIE22-135 | |
| dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | en |
| dc.rights.accessRights | open access | |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | |
| dc.subject | Inteligencia artificial | |
| dc.subject | Traducción automática | |
| dc.subject.other | Large language models | |
| dc.subject.other | Artificial intelligence | |
| dc.subject.other | Automatic terminology extraction | |
| dc.subject.other | Terminology management | |
| dc.subject.other | LLMs assessment | |
| dc.subject.other | Translation and interpreting | |
| dc.title | A Comparative Analysis of Large Language Models for Bilingual Term Extraction in Spanish-Arabic Interpreting and Translation | |
| dc.type | conference output | |
| dspace.entity.type | Publication |
Files
Original bundle
- Name: LLMs_CordobaConferenceGaber.pdf
- Size: 2.49 MB
- Format: Adobe Portable Document Format

