RT Journal Article
T1 Dynamic learning rates for continual unsupervised learning
A1 Fernández-Rodríguez, Jose David
A1 Palomo-Ferrer, Esteban José
A1 Ortiz-de-Lazcano-Lobato, Juan Miguel
A1 Ramos-Jiménez, Gonzalo Pascual
A1 López-Rubio, Ezequiel
K1 Machine learning (Artificial intelligence)
AB The dilemma between stability and plasticity is crucial in machine learning, especially when non-stationary input distributions are considered. This issue can be addressed by continual learning in order to alleviate catastrophic forgetting. This strategy has been previously proposed for supervised and reinforcement learning models. However, little attention has been devoted to unsupervised learning. This work presents a dynamic learning rate framework for unsupervised neural networks that can handle non-stationary distributions. In order for the model to adapt to the input as its characteristics change, a varying learning rate is proposed that depends not merely on the training step but on the reconstruction error. In the experiments, different configurations of classical competitive neural networks, self-organizing maps and growing neural gas, with either per-neuron or per-network dynamic learning rates, have been tested. Experimental results on document clustering tasks demonstrate the suitability of the proposal for real-world problems.
PB IOS Press
YR 2023
FD 2023
LK https://hdl.handle.net/10630/30312
UL https://hdl.handle.net/10630/30312
LA eng
NO Integrated Computer-Aided Engineering 30 (2023) 257–273
DS RIUMA. Repositorio Institucional de la Universidad de Málaga
RD 20 Jan 2026