MuPeG—The Multiple Person Gait Framework

dc.contributor.author: Delgado-Escaño, Rubén
dc.contributor.author: Castro Payán, Francisco Manuel
dc.contributor.author: Ramos-Cózar, Julián
dc.contributor.author: Marín-Jiménez, Manuel J.
dc.contributor.author: Guil-Mata, Nicolás
dc.date.accessioned: 2025-12-16T12:59:21Z
dc.date.available: 2025-12-16T12:59:21Z
dc.date.issued: 2020
dc.departamento: Arquitectura de Computadores
dc.description.abstract: Gait recognition is being employed as an effective approach to identify people without requiring subject collaboration. Techniques developed for this task currently achieve high performance on existing datasets. However, those datasets are simple, as they contain only one subject in the scene at a time. This fact limits the extrapolation of the results to real-world conditions, where multiple subjects are usually present in the scene simultaneously, generating different types of occlusions and requiring better tracking methods and models trained to deal with those situations. Thus, with the aim of evaluating the more realistic and challenging situations that appear in scenarios with multiple subjects, we release a new framework (MuPeG) that generates augmented datasets with multiple subjects, using existing datasets as input. In this way, it is not necessary to record and label new videos, since this is done automatically by our framework. In addition, based on the datasets generated by our framework, we propose an experimental methodology that describes how to use datasets with multiple subjects and the recommended experiments to perform. Moreover, we release the first experimental results on datasets with multiple subjects. In our case, we use augmented versions of the TUM-GAID and CASIA-B datasets obtained with our framework. On these augmented datasets, the same model obtains markedly lower accuracies than on the original (single-subject) TUM-GAID and CASIA-B datasets. This performance drop clearly shows that datasets with multiple subjects in the scene are much more difficult than the single-subject ones reported in the literature. Thus, our proposed framework is able to generate useful datasets with multiple subjects which are more similar to real-life situations.
dc.identifier.citation: Delgado-Escaño, R.; Castro, F.M.; R. Cózar, J.; Marín-Jiménez, M.J.; Guil, N. MuPeG—The Multiple Person Gait Framework. Sensors 2020, 20, 1358. https://doi.org/10.3390/s20051358
dc.identifier.doi: 10.3390/s20051358
dc.identifier.uri: https://hdl.handle.net/10630/41142
dc.language.iso: eng
dc.publisher: MDPI
dc.rights.accessRights: open access
dc.subject: Pattern recognition (Computer science)
dc.subject.other: Gait recognition
dc.subject.other: Gait framework
dc.subject.other: Gait dataset
dc.subject.other: Multiple subjects
dc.subject.other: Augmented dataset
dc.title: MuPeG—The Multiple Person Gait Framework
dc.type: journal article
dc.type.hasVersion: VoR
dspace.entity.type: Publication
relation.isAuthorOfPublication: 046027b0-4274-40e8-b067-d162ba047b37
relation.isAuthorOfPublication: bed8ca48-652e-4212-8c3c-05bfdc85a378
relation.isAuthorOfPublication.latestForDiscovery: 046027b0-4274-40e8-b067-d162ba047b37
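
To make the augmentation procedure summarized in the abstract concrete, here is a minimal sketch of the core idea: segment the walking subject in one single-subject video and composite it frame by frame onto another, producing a multi-subject sequence without recording new footage. The file names are hypothetical, and the background-subtraction step is only a crude stand-in for the learned person segmentation a framework like MuPeG would rely on; this is an illustrative sketch, not the authors' implementation.

import cv2

def overlay_videos(background_path: str, foreground_path: str, output_path: str) -> None:
    """Composite the subject segmented from `foreground_path` onto the
    frames of `background_path`, yielding a two-subject video."""
    bg = cv2.VideoCapture(background_path)
    fg = cv2.VideoCapture(foreground_path)
    fps = bg.get(cv2.CAP_PROP_FPS)
    width = int(bg.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(bg.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"),
                             fps, (width, height))
    # Crude stand-in for a person-segmentation model; background subtraction
    # is only plausible here because gait datasets use static cameras.
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    while True:
        ok_bg, frame_bg = bg.read()
        ok_fg, frame_fg = fg.read()
        if not (ok_bg and ok_fg):
            break  # stop at the end of the shorter sequence
        frame_fg = cv2.resize(frame_fg, (width, height))
        mask = subtractor.apply(frame_fg) > 0  # True where the subject moves
        composite = frame_bg.copy()
        composite[mask] = frame_fg[mask]       # paste the segmented pixels
        writer.write(composite)
    bg.release(); fg.release(); writer.release()

# Hypothetical usage: combine two single-subject recordings.
overlay_videos("subject_a.mp4", "subject_b.mp4", "two_subjects.mp4")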

Files

Original bundle

Name: sensors-20-01358-v2.pdf
Size: 9.01 MB
Format: Adobe Portable Document Format