Comparing Deep Recurrent Networks Based on the MAE Random Sampling, a First Approach

Files

andrés.pdf (506.63 KB)

Description: Article (preprint)

Abstract

Recurrent neural networks have proven to be good at tackling prediction problems; however, due to their high sensitivity to hyper-parameter configuration, finding an appropriate network is a tough task. Automatic hyper-parameter optimization methods have emerged to find the most suitable configuration for a given problem, but they are not widely adopted because of their high computational cost. Therefore, in this study we extend MAE random sampling, a low-cost method for comparing single-hidden-layer architectures, to multiple-hidden-layer ones. We empirically validate our proposal and show that it is possible to predict and compare the expected performance of a hyper-parameter configuration at a low cost.
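The idea behind MAE random sampling, as described in the abstract, is to characterize an architecture by the distribution of errors it produces under random weight initializations, with no training involved. The sketch below is a minimal, hypothetical illustration of that procedure in Python with Keras, not the authors' released code; the layer sizes, the sine-wave data, the sample count, and the 0.5 error threshold are illustrative assumptions rather than the paper's experimental setup.

```python
# Minimal sketch of MAE random sampling (illustrative assumptions throughout).
import numpy as np
import tensorflow as tf

def make_model(hidden_layers, look_back):
    """Build a stacked LSTM with freshly initialized (random) weights."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(look_back, 1)))
    for i, units in enumerate(hidden_layers):
        model.add(tf.keras.layers.LSTM(
            units, return_sequences=(i < len(hidden_layers) - 1)))
    model.add(tf.keras.layers.Dense(1))
    return model

def mae_random_sampling(hidden_layers, x, y, samples=30):
    """Record the MAE of `samples` random weight initializations, no training."""
    maes = []
    for _ in range(samples):
        model = make_model(hidden_layers, x.shape[1])  # fresh random weights
        pred = model.predict(x, verbose=0)
        maes.append(np.mean(np.abs(pred.ravel() - y.ravel())))
    return np.array(maes)

# Toy data: predict the next value of a sine wave from the previous 10 steps.
t = np.sin(np.linspace(0, 20, 500))
look_back = 10
x = np.array([t[i:i + look_back] for i in range(len(t) - look_back)])[..., None]
y = t[look_back:]

for arch in [(8,), (8, 8)]:  # single- vs. two-hidden-layer configurations
    maes = mae_random_sampling(arch, x, y)
    print(arch, "mean MAE:", maes.mean(), "p(MAE < 0.5):", (maes < 0.5).mean())
```

Under this heuristic, architectures whose random-weight MAE samples concentrate on low values are predicted to be more promising candidates for actual training; the paper's contribution is extending this comparison from single-hidden-layer to multiple-hidden-layer networks.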
