Recurrent Neural Networks (RNNs) represent an important class of bio-inspired learning machines belonging to the field of Artificial Intelligence. Due to the cyclic interconnections between the artificial neurons and to the nonlinearity of the activation functions, RNNs are nonlinear dynamical systems. From the point of view of the field of Dynamical Systems, a specific feature of RNNs is that their state space may contain multiple equilibria, not necessarily all stable. Thus, the usual local concepts of stability are not sufficient for an adequate description. Accordingly, the analysis has to be carried out within both the framework of Stability Theory and the framework of the Qualitative Theory of systems with several equilibria.
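As a minimal illustration of this feature (a sketch added here for orientation, not part of the presentation itself), consider the scalar neuron with sigmoid self-feedback

\[
\dot{x} = -x + \tanh(\beta x), \qquad \beta > 1 .
\]

This system has three equilibria: the origin, where the linearization \(-1+\beta>0\) shows instability, and a symmetric pair \(x=\pm x^{*}\) with \(x^{*}=\tanh(\beta x^{*})\), where \(-1+\beta\,\operatorname{sech}^{2}(\beta x^{*})<0\) shows local asymptotic stability. Local analysis at any single equilibrium cannot tell which initial states converge where; that question belongs to the global, qualitative theory invoked above.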
The presentation first focuses on the main structure and features of the human brain which have been taken into account when deriving artificial simulators of its functions. The second part presents the basics of linear and nonlinear dynamical systems, including the main concepts of stability – both for local equilibria and for the global behavior of the system – as well as the powerful tool of Lyapunov-like methods used for system analysis. In the third part, different models of RNNs are considered (Hopfield, competitive Cohen-Grossberg, Bidirectional Associative Memory, Cellular Neural Networks, K-Winners-Take-All networks) and discussed within the framework of Dynamical Systems.
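To fix ideas on the Lyapunov-like approach applied to the first of these models, recall the classical continuous-time Hopfield network (the notation below is the conventional one and need not coincide with that of the presentation):

\[
C_i \dot{u}_i = -\frac{u_i}{R_i} + \sum_{j=1}^{n} T_{ij}\, g(u_j) + I_i, \qquad i = 1,\dots,n .
\]

For a symmetric interconnection matrix \(T_{ij}=T_{ji}\) and a bounded, strictly increasing sigmoid \(g\), the energy function

\[
E = -\frac{1}{2}\sum_{i,j} T_{ij} V_i V_j + \sum_i \frac{1}{R_i}\int_{0}^{V_i} g^{-1}(v)\,dv - \sum_i I_i V_i , \qquad V_i = g(u_i),
\]

satisfies \(\dot{E} = -\sum_i C_i\, g'(u_i)\,\dot{u}_i^{2} \le 0\) along trajectories, so every solution converges to the set of equilibria – the prototype of the qualitative behavior studied for all the models listed above.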