Federated deep reinforcement learning for ENDC optimization
| dc.centro | E.T.S.I. Telecomunicación | es_ES |
| dc.contributor.author | Martin, Adrian | |
| dc.contributor.author | De la Bandera Cascales, Isabel | |
| dc.contributor.author | Mendo, Adriano | |
| dc.contributor.author | Outes, Jose | |
| dc.contributor.author | Ramiro, Juan | |
| dc.contributor.author | Barco-Moreno, Raquel | |
| dc.date.accessioned | 2025-05-12T11:58:42Z | |
| dc.date.available | 2025-05-12T11:58:42Z | |
| dc.date.issued | 2025-05-07 | |
| dc.departamento | Ingeniería de Comunicaciones | es_ES |
| dc.description.abstract | 5G New Radio (NR) network deployment in Non-Standalone (NSA) mode means that 5G networks rely on the control plane of existing Long Term Evolution (LTE) modules for control functions, while 5G modules are dedicated only to user plane tasks, which can also be carried out simultaneously by LTE modules. The first deployments of 5G networks essentially use this technology. These deployments enable what is known as E-UTRAN NR Dual Connectivity (ENDC), where a user establishes a 5G connection alongside a pre-existing LTE connection to boost their data rate. In this paper, a single Federated Deep Reinforcement Learning (FDRL) agent is proposed for optimizing the event that triggers dual connectivity between LTE and 5G. First, individual Deep Reinforcement Learning (DRL) agents are trained in isolated cells. These agents are then merged into a single global agent capable of optimizing the whole network with Federated Learning (FL). This scheme of training individual agents and merging them also makes the use of dynamic simulators feasible for this type of learning algorithm and for mobility-related parameters, since it drastically reduces the number of possible combinations and therefore the number of required simulations. The simulation results show that the final agent achieves a tradeoff between dropped calls and user throughput, reaching a global optimum without needing to interact with all the cells during training. | es_ES |
| dc.description.sponsorship | This work was supported in part by Ericsson under Grant MA-2020-003774, through Project 702C2000043; in part by the R&D&I Support Program Line through the Junta de Andalucía (Andalusian Regional Government); in part by the Ministerio de Asuntos Económicos y Transformación Digital; in part by the European Union - NextGenerationEU; and in part by the Recuperación, Transformación y Resiliencia and the Mecanismo de Recuperación y Resiliencia through Project MAORI. | es_ES |
| dc.identifier.citation | A. Martin et al., "Federated Deep Reinforcement Learning for ENDC Optimization" in IEEE Transactions on Mobile Computing, vol. 24, no. 06, pp. 5525-5535, June 2025, doi: 10.1109/TMC.2025.3534661. | es_ES |
| dc.identifier.doi | 10.1109/TMC.2025.3534661 | |
| dc.identifier.uri | https://hdl.handle.net/10630/38565 | |
| dc.language.iso | eng | es_ES |
| dc.publisher | IEEE | es_ES |
| dc.rights | Attribution 4.0 International | * |
| dc.rights.accessRights | open access | es_ES |
| dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | * |
| dc.subject | Telecommunications | es_ES |
| dc.subject | Machine learning (Artificial intelligence) | es_ES |
| dc.subject.other | 5G mobile communication | es_ES |
| dc.subject.other | Optimization | es_ES |
| dc.subject.other | Long term evolution | es_ES |
| dc.subject.other | Training | es_ES |
| dc.subject.other | Heuristic algorithms | es_ES |
| dc.subject.other | Deep reinforcement learning | es_ES |
| dc.subject.other | Throughput | es_ES |
| dc.subject.other | Hysteresis | es_ES |
| dc.subject.other | Handover | es_ES |
| dc.subject.other | Federated learning | es_ES |
| dc.subject.other | RAN Optimization | es_ES |
| dc.subject.other | 5G NSA | es_ES |
| dc.subject.other | Event B1 | es_ES |
| dc.subject.other | Deep learning | es_ES |
| dc.subject.other | Learning agent | es_ES |
| dc.subject.other | Reinforcement learning agent | es_ES |
| dc.subject.other | Training cell | es_ES |
| dc.subject.other | Deep reinforcement learning agent | es_ES |
| dc.subject.other | Deep neural network | es_ES |
| dc.subject.other | Cell clusters | es_ES |
| dc.subject.other | Training phase | es_ES |
| dc.subject.other | Cellular networks | es_ES |
| dc.subject.other | Neighboring cells | es_ES |
| dc.subject.other | Individual agency | es_ES |
| dc.subject.other | Small step | es_ES |
| dc.subject.other | Optimal network | es_ES |
| dc.subject.other | User equipment | es_ES |
| dc.subject.other | Deep reinforcement learning algorithm | es_ES |
| dc.subject.other | Reinforcement learning algorithm | es_ES |
| dc.subject.other | Network capacity | es_ES |
| dc.subject.other | Mobile edge computing | es_ES |
| dc.subject.other | Key performance indicators | es_ES |
| dc.subject.other | Multi party computation | es_ES |
| dc.subject.other | Heterogeneous network | es_ES |
| dc.subject.other | Rest of the cells | es_ES |
| dc.subject.other | Rate of network | es_ES |
| dc.title | Federated deep reinforcement learning for ENDC optimization | es_ES |
| dc.type | journal article | es_ES |
| dc.type.hasVersion | VoR | es_ES |
| dspace.entity.type | Publication | |
| relation.isAuthorOfPublication | c933e578-ad80-410f-88c2-f0dbdaa6cf72 | |
| relation.isAuthorOfPublication.latestForDiscovery | c933e578-ad80-410f-88c2-f0dbdaa6cf72 |
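
The abstract describes the core scheme: Deep Reinforcement Learning agents are first trained independently in isolated cells and then merged into a single global agent with Federated Learning. As a rough, non-authoritative sketch of that merging step only, the snippet below averages the parameters of locally trained agents in the FedAvg style; PyTorch, the small network architecture, and names such as `Policy` and `federated_merge` are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): merging per-cell DRL agents
# into one global agent by averaging their network weights (FedAvg-style).
import copy
import torch
import torch.nn as nn


class Policy(nn.Module):
    """Hypothetical Q-network mapping cell-level KPIs to ENDC trigger adjustments."""

    def __init__(self, n_inputs: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def federated_merge(cell_agents: list[Policy]) -> Policy:
    """Average the parameters of locally trained agents into a global agent."""
    global_agent = copy.deepcopy(cell_agents[0])
    global_state = global_agent.state_dict()
    for key in global_state:
        # Stack the same parameter tensor from every local agent and take the mean.
        global_state[key] = torch.stack(
            [agent.state_dict()[key].float() for agent in cell_agents]
        ).mean(dim=0)
    global_agent.load_state_dict(global_state)
    return global_agent


# Usage: train one agent per isolated cell (training loop omitted), then merge once.
agents = [Policy(n_inputs=4, n_actions=3) for _ in range(5)]
global_agent = federated_merge(agents)
```

The design choice mirrored here is the one highlighted in the abstract: because each agent only ever interacts with its own cell's simulator, the combinatorial space of scenarios per training run stays small, and the single averaged agent is then applied network-wide.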