RT Journal Article
T1 FedDelta: Incremental federated learning for heterogeneous data using dynamic leader election
A1 García-Luque, Rafael
A1 Pimentel-Sánchez, Ernesto
A1 Durán-Muñoz, Francisco Javier
A1 Iroslavov, Ivan
A1 Carreira, Emilio R.
K1 Machine learning (Artificial intelligence)
K1 Regression analysis
K1 Peer-to-peer networks (Computer networks)
AB Federated learning enables collaborative model training across multiple devices or organizations without sharing raw data, thereby addressing privacy and data ownership concerns. However, most existing federated approaches rely on centralized coordination and often struggle to maintain robustness and convergence stability in scenarios with heterogeneous or unbalanced data. In this work, we propose FedDelta, a federated multivariable linear regression method that achieves strong performance across varying data quantities and distributions, including scenarios with unbalanced data across participants. FedDelta employs a closed-form ridge regression solution and a decentralized communication scheme, in which participating peers dynamically assume coordination roles to aggregate model updates efficiently, eliminating the need for a central server. We evaluate FedDelta on real-world datasets and show that it achieves competitive accuracy and robustness compared to centralized and traditional federated methods. Our results highlight FedDelta's potential for privacy-preserving, scalable learning in resource-constrained and distributed environments.
PB Universidad de Málaga
YR 2026
FD 2026-03-04
LK https://hdl.handle.net/10630/45907
UL https://hdl.handle.net/10630/45907
LA eng
NO García-Luque, R., Pimentel, E., Durán, F., Iroslavov, I., & Carreira, E. R. (2026). FedDelta: Incremental federated learning for heterogeneous data using dynamic leader election. Manuscript submitted for publication.
DS RIUMA. Repositorio Institucional de la Universidad de Málaga
RD 19 Mar 2026