

Federated Deep Reinforcement Learning for ENDC Optimization

• Author
  Martin, Adrian; De la Bandera Cascales, Isabel; Mendo, Adriano; Outes, Jose; Ramiro, Juan; Barco-Moreno, Raquel
• Date
  2025-05-07
• Publisher
  IEEE
• Keywords
  Telecommunications; Machine learning (Artificial intelligence)
• Abstract
  5G New Radio (NR) network deployment in Non-Stand Alone (NSA) mode means that 5G networks rely on the control plane of existing Long Term Evolution (LTE) modules for control functions, while 5G modules are dedicated only to user-plane tasks, which LTE modules can also carry out simultaneously. The first 5G network deployments essentially use this technology. These deployments enable what is known as E-UTRAN NR Dual Connectivity (ENDC), where a user establishes a 5G connection simultaneously with a pre-existing LTE connection to boost their data rate. In this paper, a single Federated Deep Reinforcement Learning (FDRL) agent is proposed for optimizing the event that triggers dual connectivity between LTE and 5G. First, individual Deep Reinforcement Learning (DRL) agents are trained in isolated cells. These agents are then merged with Federated Learning (FL) into a single global agent capable of optimizing the whole network. This scheme of training single agents and merging them also makes dynamic simulators feasible for this type of learning algorithm and for mobility-related parameters, by drastically reducing the number of possible combinations and hence the number of simulations. The simulation results show that the final agent achieves a tradeoff between dropped calls and user throughput, reaching a global optimum without needing to interact with all the cells during training.
    • URI
      https://hdl.handle.net/10630/38565
    • DOI
      https://dx.doi.org/10.1109/TMC.2025.3534661
    Files
    10854899.pdf (2.367 MB)
    Collections
    • Artículos
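The merging step described in the abstract — independently trained per-cell DRL agents combined into one global agent via Federated Learning — typically amounts to a FedAvg-style weighted average of the agents' network parameters. The sketch below is an illustrative assumption, not the authors' implementation: the function name, the layer-wise representation, and the optional weighting by experience volume are all hypothetical.

```python
import numpy as np

def federated_average(agent_weights, sample_counts=None):
    """Merge per-cell agent parameters into one global parameter set.

    agent_weights: list of per-agent parameter lists; each inner list holds
                   np.ndarray layers with matching shapes across agents.
    sample_counts: optional per-agent weighting (e.g. amount of experience
                   each cell agent trained on); defaults to a uniform average.
    """
    n = len(agent_weights)
    if sample_counts is None:
        coeffs = np.full(n, 1.0 / n)
    else:
        coeffs = np.asarray(sample_counts, dtype=float)
        coeffs = coeffs / coeffs.sum()  # normalize to a convex combination
    merged = []
    # Average corresponding layers across all agents.
    for layers in zip(*agent_weights):
        merged.append(sum(c * w for c, w in zip(coeffs, layers)))
    return merged

# Example: three "cell agents", each with a single 2x2 weight matrix.
agents = [[np.full((2, 2), v)] for v in (1.0, 2.0, 3.0)]
global_agent = federated_average(agents)  # uniform average of 1, 2, 3
```

Weighting by `sample_counts` mirrors the standard FedAvg heuristic of giving agents with more local data a larger say in the global model; with uniform weights it reduces to a plain mean.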


    REPOSITORIO INSTITUCIONAL UNIVERSIDAD DE MÁLAGA