Dynamic learning rates for continual unsupervised learning.
Publisher
IOS Press
Abstract
The dilemma between stability and plasticity is crucial in machine learning, especially when non-stationary input
distributions are considered. Continual learning addresses this issue by alleviating catastrophic forgetting. This
strategy has previously been proposed for supervised and reinforcement learning models, but little attention has been
devoted to unsupervised learning. This work presents a dynamic learning rate framework for unsupervised neural networks
that can handle non-stationary distributions. So that the model can adapt as the characteristics of the input change,
a varying learning rate is proposed that depends not merely on the training step but on the reconstruction error. In the
experiments, different configurations of classical competitive neural networks, self-organizing maps, and growing neural
gas with either per-neuron or per-network dynamic learning rates were tested. Experimental results on document clustering
tasks demonstrate the suitability of the proposal for real-world problems.
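The abstract describes a learning rate driven by the reconstruction error rather than by the training step. The following is a minimal illustrative sketch, not the paper's actual formulation: it assumes a winner-take-all competitive network where each update uses a rate that grows with the winner's current reconstruction error (here squashed through an exponential), so units that poorly represent the current input adapt faster.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_competitive(X, n_units=4, eta_max=0.5, epochs=20):
    """Competitive network with an error-driven dynamic learning rate.

    Hypothetical sketch: the winner's rate is
    eta = eta_max * (1 - exp(-||x - w||)), i.e. it depends on the
    reconstruction error of the winning unit, not on the step count.
    """
    W = rng.normal(size=(n_units, X.shape[1]))  # prototype vectors
    for _ in range(epochs):
        for x in X:
            d = np.linalg.norm(W - x, axis=1)      # distance to each unit
            j = int(np.argmin(d))                  # winning unit
            eta = eta_max * (1.0 - np.exp(-d[j]))  # error-driven rate
            W[j] += eta * (x - W[j])               # move winner toward x
    return W
```

Under this assumed rule, a unit whose prototype already matches the input barely moves (stability), while a unit far from a newly appeared mode of the distribution moves quickly toward it (plasticity), which is the trade-off the abstract targets.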
Bibliographic citation
Integrated Computer-Aided Engineering 30 (2023) 257–273