RT Dissertation/Thesis
T1 Machine Learning for Bidirectional Translation between Different Sign and Oral Language.
A1 Saleem, Muhammad Imran
K1 Deaf
K1 Sign language
K1 Signal processing
K1 Machine learning (Artificial intelligence)
K1 Pattern recognition (Computer science)
AB Deaf and mute (D-M) people are an integral part of society, and it is particularly important to provide them with a platform that enables communication without any training or learning. D-M individuals rely on sign language, but effective communication requires that others understand sign language as well, and learning sign language is a challenge for those with no impairment. In practice, D-M people face communication difficulties mainly because others, who generally do not know sign language, are unable to communicate with them. This thesis presents a solution to this problem through (i) a system that enables non-deaf and mute (ND-M) people to communicate with D-M individuals without having to learn sign language, and (ii) support for hand gestures from different sign languages. The hand gestures of D-M people are acquired and processed using deep learning (DL), and multi-language support is achieved using supervised machine learning (ML). D-M people are provided with a video interface where hand gestures are acquired and an audio interface that converts the gestures into speech. Speech from ND-M people is acquired and converted into text and hand gesture images. The system is easy to use, low cost, reliable, and modular, and is based on a commercial off-the-shelf (COTS) Leap Motion Device (LMD). A supervised ML dataset is created to provide multi-language communication between D-M and ND-M people; it comprises three sign language datasets: American Sign Language (ASL), Pakistani Sign Language (PSL), and Spanish Sign Language (SSL). The proposed system has been validated through a series of experiments: the hand gesture detection accuracy is above 90% for most scenarios, while for certain scenarios it is between 80% and 90% due to variations in hand gestures between D-M people.
PB UMA Editorial
YR 2024
FD 2024
LK https://hdl.handle.net/10630/30616
UL https://hdl.handle.net/10630/30616
LA eng
DS RIUMA. Repositorio Institucional de la Universidad de Málaga
RD 19 Jan 2026