
    Addressing significant challenges for animal detection in camera trap images: a novel deep learning-based approach.

    • Author
      Mulero-Pázmány, Margarita Cristina; Hurtado-Requena, Sandro José; Barba-González, Cristóbal; Antequera-Gómez, María Luisa; Díaz-Ruiz, Francisco; Real-Giménez, Raimundo; Navas-Delgado, Ismael; Aldana-Montes, José Francisco
    • Date
      2025-05-09
    • Publisher
      Springer Nature
    • Keywords
      Animals - Identification; Technological innovations
    • Abstract
      Wildlife biologists increasingly use camera traps for monitoring animal populations. However, manually sifting through the collected images is expensive and time-consuming. Current deep learning studies for camera trap images do not adequately tackle real-world challenges such as imbalances between animal and empty images, distinguishing similar species, and the impact of backgrounds on species identification, limiting the models’ applicability in new locations. Here, we present a novel two-stage deep learning framework. First, we train a global deep learning model using all animal species in the dataset. Then, an agglomerative clustering algorithm groups animals based on their appearance. Subsequently, we train a specialized deep learning expert model for each animal group to detect similar features. This approach leverages transfer learning from the MegaDetectorV5 (YOLOv5 version) model, already pre-trained on various animal species and ecosystems. Our two-stage deep learning pipeline uses the global model to redirect images to the appropriate expert models for final classification. We validated this strategy using 1.3 million images from 91 camera traps encompassing 24 mammal species and used 120,000 images for testing, achieving an F1-score of 96.2% using expert models for final classification. This method surpasses existing deep learning models, demonstrating improved precision and effectiveness in automated wildlife detection. (A minimal sketch of this two-stage routing appears after the record below.)
    • URI
      https://hdl.handle.net/10630/38576
    • DOI
      https://dx.doi.org/10.1038/s41598-025-90249-z
    Files
    2025-Scientific_Reports.pdf (3.802 MB)
    Collections
    • Artículos
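
    The abstract describes a two-stage pipeline: a global detector built on MegaDetectorV5 (YOLOv5) filters empty frames and assigns a coarse species label, an agglomerative clustering of species appearance defines groups of visually similar animals, and a per-group expert model produces the final classification. The Python sketch below illustrates that routing under stated assumptions: the embeddings, model callables, thresholds, and function names are hypothetical stand-ins, not the authors' released code.

    ```python
    # Minimal sketch of the two-stage pipeline outlined in the abstract.
    # All model names, thresholds, and helper functions are illustrative
    # assumptions, not the authors' implementation.
    from sklearn.cluster import AgglomerativeClustering

    def group_species(species_names, species_embeddings, n_groups=3):
        """Group species by appearance, so each expert model handles
        visually similar animals. `species_embeddings` is assumed to be an
        (n_species, d) array of appearance features (e.g., mean CNN features)."""
        labels = AgglomerativeClustering(n_clusters=n_groups).fit_predict(species_embeddings)
        groups = {g: [] for g in range(n_groups)}
        for name, g in zip(species_names, labels):
            groups[g].append(name)
        return groups, labels

    def classify_image(image, global_model, expert_models, species_to_group,
                       empty_threshold=0.5):
        """Stage 1: the global (MegaDetectorV5-style) model filters empties and
        gives a coarse species label. Stage 2: the label routes the image to the
        matching expert model, which returns the final species."""
        label, confidence = global_model(image)   # hypothetical: (label, confidence)
        if label == "empty" or confidence < empty_threshold:
            return "empty"
        group = species_to_group[label]            # coarse label picks the expert
        return expert_models[group](image)         # expert makes the final call

    if __name__ == "__main__":
        # Toy demonstration of the grouping step with random stand-in features.
        import numpy as np
        rng = np.random.default_rng(0)
        names = ["red_deer", "roe_deer", "wild_boar", "red_fox", "badger", "rabbit"]
        embeddings = rng.normal(size=(len(names), 8))
        groups, _ = group_species(names, embeddings, n_groups=3)
        print(groups)
    ```

    Routing through the global model first means the expert models only ever see images already judged non-empty and pre-assigned to a group of similar-looking species, which is how the paper reports improving fine-grained discrimination.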
