Multimodal features fusion for gait, gender and shoes recognition
Publisher: Springer
Abstract
This paper evaluates how fusing multimodal features (audio, RGB, and depth) can enhance the task of gait recognition, as well as gender and shoe recognition. While most previous research has focused on visual descriptors like binary silhouettes, little attention has been given to audio or depth data associated with walking. The proposed multimodal system is tested on the TUM GAID dataset, which includes audio, depth, and image sequences. Results show that combining features from these modalities using early or late fusion techniques improves state-of-the-art performance in gait, gender, and shoe recognition. Additional experiments on CASIA-B (which only includes visual data) further support the advantages of feature fusion.
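The early and late fusion schemes the abstract refers to can be illustrated as follows. This is a minimal NumPy sketch, not the paper's implementation: the feature dimensions, class counts, and uniform weights are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality descriptors for one walking sequence
# (dimensions are illustrative, not those used in the paper).
audio_feat = rng.standard_normal(64)
rgb_feat   = rng.standard_normal(128)
depth_feat = rng.standard_normal(96)

def early_fusion(*features):
    """Early fusion: concatenate the modality descriptors into one
    vector that is then fed to a single classifier."""
    return np.concatenate(features)

def late_fusion(scores, weights=None):
    """Late fusion: combine per-modality classifier score vectors,
    here by a (uniform by default) weighted average."""
    scores = np.asarray(scores, dtype=float)
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))
    return np.average(scores, axis=0, weights=weights)

# Early fusion yields one long descriptor: 64 + 128 + 96 = 288 dims.
fused_vec = early_fusion(audio_feat, rgb_feat, depth_feat)
print(fused_vec.shape)  # (288,)

# Late fusion: hypothetical class scores from three per-modality
# classifiers on a 3-class toy problem.
scores = [[0.7, 0.2, 0.1],   # audio classifier
          [0.5, 0.3, 0.2],   # RGB classifier
          [0.6, 0.1, 0.3]]   # depth classifier
print(late_fusion(scores))   # -> [0.6, 0.2, 0.2]; argmax picks class 0
```

The design trade-off: early fusion lets one classifier exploit cross-modal correlations but requires compatible feature scales, while late fusion keeps each modality's pipeline independent and only merges decisions.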
Bibliographic citation
Castro, F.M., Marín-Jiménez, M. & Guil, N. Multimodal features fusion for gait, gender and shoes recognition. Machine Vision and Applications 27, 1213–1228 (2016). https://doi.org/10.1007/s00138-016-0767-5