RT Journal Article
T1 Multimodal features fusion for gait, gender and shoes recognition
A1 Castro Payán, Francisco Manuel
A1 Marín-Jiménez, Manuel J.
A1 Guil-Mata, Nicolás
K1 Human identification
K1 Pattern recognition (Computer science)
AB This paper evaluates how fusing multimodal features (audio, RGB, and depth) can enhance the tasks of gait, gender, and shoe recognition. While most previous research has focused on visual descriptors such as binary silhouettes, little attention has been given to the audio or depth data associated with walking. The proposed multimodal system is tested on the TUM GAID dataset, which includes audio, depth, and image sequences. Results show that combining features from these modalities using early or late fusion techniques improves state-of-the-art performance in gait, gender, and shoe recognition. Additional experiments on CASIA-B (which only includes visual data) further support the advantages of feature fusion.
PB Springer
YR 2016
FD 2016
LK https://hdl.handle.net/10630/32730
UL https://hdl.handle.net/10630/32730
LA eng
NO Castro, F.M., Marín-Jiménez, M. & Guil, N. Multimodal features fusion for gait, gender and shoes recognition. Machine Vision and Applications 27, 1213–1228 (2016). https://doi.org/10.1007/s00138-016-0767-5
DS RIUMA. Repositorio Institucional de la Universidad de Málaga
RD 4 Mar 2026