
    Instrument Detection and Descriptive Gesture Segmentation on a Robotic Surgical Maneuvers Dataset

    • Author
      Rivas-Blanco, Irene; López-Casado, Carmen; Herrera-López, Juan María; Cabrera-Villa, José; Pérez del Pulgar, Carlos J.
    • Date
      2024
    • Publisher
      MDPI
    • Keywords
      Surgery - Apparatus and instruments; Robotics
    • Abstract
      Large datasets play a crucial role in the progression of surgical robotics, facilitating advancements in the fields of surgical task recognition and automation. Moreover, public datasets enable the comparative analysis of various algorithms and methodologies, thereby assessing their effectiveness and performance. The ROSMA (Robotic Surgical Maneuvers) dataset provides 206 trials of common surgical training tasks performed with the da Vinci Research Kit (dVRK). In this work, we extend the ROSMA dataset with two annotated subsets: ROSMAT24, which contains bounding box annotations for instrument detection, and ROSMAG40, which contains high- and low-level gesture annotations. We propose an annotation method that provides independent labels for the right- and left-handed tools. For instrument identification, we validate our proposal with a YOLOv4 model in two experimental scenarios and demonstrate the generalization capabilities of the network to detect instruments in unseen scenarios. For gesture segmentation, on the other hand, we propose two label categories: high-level annotations that describe gestures at the maneuver level, and low-level annotations that describe gestures at a fine-grained level. To validate this proposal, we have designed a recurrent neural network based on a bidirectional long short-term memory (BiLSTM) layer. We present results for four cross-validation experimental setups, reaching up to 77.35% mAP. (A code sketch of the segmentation model follows this record.)
    • URI
      https://hdl.handle.net/10630/32775
    • DOI
      https://doi.org/10.3390/app14093701
    Files
    applsci-14-03701-v2.pdf (4.563 MB)
    Collections
    • Articles
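
    The abstract describes a gesture segmentation network built around a bidirectional long short-term memory (BiLSTM) layer that labels robot kinematic sequences frame by frame. This record does not give the architecture's details, so the following PyTorch sketch is only illustrative: the feature width, hidden size, number of gesture classes, and the per-frame linear classifier head are placeholder assumptions, not the authors' configuration.

    import torch
    import torch.nn as nn

    class GestureSegmenter(nn.Module):
        """Illustrative BiLSTM model for frame-wise gesture segmentation.

        All sizes below are assumptions for demonstration; the paper only
        states that the network is based on a bidirectional LSTM layer.
        """

        def __init__(self, n_features: int = 76, hidden_size: int = 128,
                     n_classes: int = 10):
            super().__init__()
            self.bilstm = nn.LSTM(
                input_size=n_features,
                hidden_size=hidden_size,
                batch_first=True,
                bidirectional=True,  # reads the sequence forward and backward
            )
            # Per-time-step classifier over the two concatenated directions.
            self.head = nn.Linear(2 * hidden_size, n_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, n_features) kinematic sequence
            out, _ = self.bilstm(x)  # (batch, time, 2 * hidden_size)
            return self.head(out)    # per-frame gesture logits

    # Toy usage: a batch of 4 sequences, 200 time steps each.
    model = GestureSegmenter()
    logits = model(torch.randn(4, 200, 76))
    print(logits.shape)  # torch.Size([4, 200, 10])

    The bidirectional layer lets each frame's prediction draw on both past and future context, which suits offline segmentation of fully recorded trials such as those in ROSMAG40.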
