
    Multimodal Human Pose Feature Fusion for Gait Recognition.

    • Author
      Cubero, Nicolás; Castro Payán, Francisco Manuel; Ramos-Cózar, Julián; Guil-Mata, Nicolás; Marín Jiménez, Manuel Jesús
    • Date
      2023
    • Keywords
      Biometrics; Optical pattern recognition (Computer science)
    • Abstract
      Gait recognition identifies people at a distance, non-invasively, from the way they walk (i.e. their gait). Most approaches published in recent decades rely on silhouettes or other appearance-based modalities to describe the gait cycle. To exclude appearance data, many works instead use the human pose as a modality to describe the walking movement. However, since the pose carries less information when used as a single modality, such models generally perform worse. To overcome this limitation, we propose a multimodal setup that combines multiple pose representation models. To this end, we evaluate multiple fusion strategies for aggregating the features derived from each pose modality at every model stage. Moreover, we introduce a weighted sum with trainable weights that adaptively learns the optimal balance among pose modalities. Our experimental results show that (a) our fusion strategies effectively combine different pose modalities, improving on their baseline performance; and (b) using only human pose, our approach outperforms most silhouette-based state-of-the-art approaches. Concretely, we obtain 92.8% mean Top-1 accuracy on CASIA-B.
    • URI
      https://hdl.handle.net/10630/27271
    Files
    PoseGaitFusion2023_ibPRIA.pdf (450.4Kb)
    Collections
    • Ponencias, Comunicaciones a congresos y Pósteres
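
The trainable weighted-sum fusion described in the abstract can be sketched as follows. This is a minimal NumPy illustration under assumed details, not the authors' implementation: the function names are hypothetical, and the softmax normalization of the learnable logits (keeping the modality weights positive and summing to one) is an assumption.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_sum_fusion(features, logits):
    """Fuse per-modality feature vectors with learnable scalar weights.

    features: list of (D,) arrays, one feature vector per pose modality.
    logits:   (M,) trainable parameters, one per modality; during training
              these would be updated by backpropagation so the model learns
              the balance among modalities adaptively.
    """
    w = softmax(logits)  # normalized modality weights
    return sum(wi * fi for wi, fi in zip(w, features))

# With equal logits, the fusion reduces to a plain average of the modalities.
feats = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
fused = weighted_sum_fusion(feats, np.zeros(2))
```

In a deep model the same operation would typically be applied at every fusion stage, with a separate set of logits per stage.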

    REPOSITORIO INSTITUCIONAL UNIVERSIDAD DE MÁLAGA