Machine Learning for Bidirectional Translation between Different Sign and Oral Language.
| dc.centro | E.T.S.I. Telecomunicación | es_ES |
| dc.contributor.advisor | Luque-Nieto, Miguel Ángel | |
| dc.contributor.author | Saleem, Muhammad Imran | |
| dc.date.accessioned | 2024-02-23T08:00:44Z | |
| dc.date.available | 2024-02-23T08:00:44Z | |
| dc.date.created | 2023-09-02 | |
| dc.date.issued | 2024 | |
| dc.date.submitted | 2023-09-27 | |
| dc.departamento | Ingeniería de Comunicaciones | |
| dc.description.abstract | Deaf and mute (D-M) people are an integral part of society, and it is particularly important to provide them with a platform to communicate without the need for any training or learning. D-M individuals rely on sign language, but effective communication requires that others understand it, and learning sign language is a challenge for those with no impairment. In practice, D-M people face communication difficulties mainly because others, who generally do not know sign language, are unable to communicate with them. This thesis presents a solution to this problem through (i) a system that enables the non-deaf and mute (ND-M) to communicate with D-M individuals without the need to learn sign language, and (ii) support for the hand gestures of different sign languages. The hand gestures of D-M people are acquired and processed using deep learning (DL), and multi-language support is achieved using supervised machine learning (ML). D-M people are provided with a video interface where the hand gestures are acquired, and an audio interface that converts the gestures into speech. Speech from ND-M people is acquired and converted into text and hand-gesture images. The system is easy to use, low cost, reliable, and modular, and is based on a commercial off-the-shelf (COTS) Leap Motion Device (LMD). A supervised ML dataset is created to provide multi-language communication between D-M and ND-M people; it includes three sign-language datasets: American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system has been validated through a series of experiments in which the hand-gesture detection accuracy is above 90% in most scenarios, while in certain scenarios it falls between 80% and 90% due to variations in hand gestures between D-M people. | es_ES |
| dc.identifier.uri | https://hdl.handle.net/10630/30616 | |
| dc.language.iso | eng | es_ES |
| dc.publisher | UMA Editorial | es_ES |
| dc.rights | Attribution-NonCommercial-NoDerivatives 4.0 International | * |
| dc.rights.accessRights | open access | es_ES |
| dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
| dc.subject | Deaf people | es_ES |
| dc.subject | Sign language | es_ES |
| dc.subject | Signal processing | es_ES |
| dc.subject | Machine learning (Artificial intelligence) | es_ES |
| dc.subject | Pattern recognition (Computer science) | es_ES |
| dc.subject.other | Deaf and mute person | es_ES |
| dc.subject.other | Hand gesture recognition | es_ES |
| dc.subject.other | Multi-language processing | es_ES |
| dc.subject.other | Sign language | es_ES |
| dc.subject.other | Supervised machine learning | es_ES |
| dc.title | Machine Learning for Bidirectional Translation between Different Sign and Oral Language. | es_ES |
| dc.type | doctoral thesis | es_ES |
| dspace.entity.type | Publication | |
| relation.isAdvisorOfPublication | 6923f625-485e-4970-8f52-d31c8305bbb4 | |
| relation.isAdvisorOfPublication.latestForDiscovery | 6923f625-485e-4970-8f52-d31c8305bbb4 |
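The abstract describes a supervised gesture-classification stage: hand-gesture data captured by the Leap Motion Device are mapped to sign labels. The snippet below is a minimal sketch of that idea only, not the thesis implementation; the feature layout (palm plus fingertip coordinates), the random-forest classifier, and all dataset sizes are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the thesis pipeline): a supervised
# classifier over hand-keypoint feature vectors of the kind a Leap Motion
# device can provide.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Assumed feature layout: 3-D positions of the palm and five fingertips
# (6 points x 3 coords = 18 features); labels stand in for gesture classes,
# e.g. letters of a fingerspelling alphabet.
N_SAMPLES, N_FEATURES, N_CLASSES = 600, 18, 26
X = rng.normal(size=(N_SAMPLES, N_FEATURES))    # placeholder features
y = rng.integers(0, N_CLASSES, size=N_SAMPLES)  # placeholder labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# With real gesture recordings this score would correspond to the detection
# accuracy reported in the abstract; on random placeholder data it is near
# chance and serves only to show the evaluation step.
print(f"gesture classification accuracy: "
      f"{accuracy_score(y_test, clf.predict(X_test)):.2%}")
```

In the system described by the abstract, the same labeled-dataset approach would be repeated per language (ASL, PSL, SSL), with the predicted label routed either to speech synthesis (for ND-M listeners) or to text and gesture images (for D-M users).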
Files
Original bundle
- Name: TD_SALEEM, Muhammad Imran.pdf
- Size: 18.09 MB
- Format: Adobe Portable Document Format