RT Conference Proceedings
T1 Comparing Deep Recurrent Networks Based on the MAE Random Sampling, a First Approach
A1 Camero Unzueta, Andres
A1 Toutouh-el-Alamin, Jamal
A1 Alba-Torres, Enrique
K1 Inteligencia artificial
AB Recurrent neural networks have proven to be good at tackling prediction problems; however, due to their high sensitivity to hyper-parameter configuration, finding an appropriate network is a tough task. Automatic hyper-parameter optimization methods have emerged to find the most suitable configuration for a given problem, but these methods are not generally adopted because of their high computational cost. Therefore, in this study we extend the MAE random sampling, a low-cost method to compare single-hidden-layer architectures, to multiple-hidden-layer ones. We empirically validate our proposal and show that it is possible to predict and compare the expected performance of a hyper-parameter configuration in a low-cost way.
YR 2018
FD 2018-11-26
LK https://hdl.handle.net/10630/16952
UL https://hdl.handle.net/10630/16952
LA eng
NO Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech. This research was partially funded by Ministerio de Economía, Industria y Competitividad, Gobierno de España, and European Regional Development Fund grant numbers TIN2016-81766-REDT (http://cirti.es) and TIN2017-88213-R (http://6city.lcc.uma.es).
DS RIUMA. Repositorio Institucional de la Universidad de Málaga
RD 19 Jan 2026