RT Journal Article
T1 Piecewise Polynomial Activation Functions for Feedforward Neural Networks
A1 López-Rubio, Ezequiel
A1 Ortega-Zamorano, Francisco
A1 Domínguez-Merino, Enrique
A1 Muñoz-Pérez, José
K1 Neural networks (Computer science)
K1 Machine learning (Artificial intelligence)
K1 Regression analysis
AB Since the origins of artificial neural network research, many models of feedforward networks have been proposed. This paper presents an algorithm that adapts the shape of the activation function to the training data, so that it is learned along with the connection weights. The activation function is interpreted as a piecewise polynomial approximation to the distribution function of the argument of the activation function. An online learning procedure is given, and it is formally proved that it makes the training error decrease or stay the same except in extreme cases. Moreover, the model is computationally simpler than standard feedforward networks, so it is suitable for implementation on FPGAs and microcontrollers. However, the present proposal is limited to two-layer, one-output-neuron architectures due to the lack of differentiability of the learned activation functions with respect to the node locations. Experimental results are provided which show the performance of the proposed algorithm in classification and regression applications.
PB Springer
YR 2019
FD 2019-01-10
LK https://hdl.handle.net/10630/40266
UL https://hdl.handle.net/10630/40266
LA eng
NO López-Rubio, E., Ortega-Zamorano, F., Domínguez, E. et al. Piecewise Polynomial Activation Functions for Feedforward Neural Networks. Neural Process Lett 50, 121–147 (2019).
NO https://openpolicyfinder.jisc.ac.uk/id/publication/17302
DS RIUMA. Repositorio Institucional de la Universidad de Málaga
RD 21 Jan 2026