We introduce recurrent networks that are able to learn chaotic maps, and investigate whether the neural models also capture the dynamical invariants (correlation dimension, largest Lyapunov exponent) of chaotic time series. We show that the dynamical invariants can already be learned by feedforward neural networks, but that recurrent learning improves the dynamical modeling of the time series. We discover a novel type of overtraining that corresponds to the forgetting of the largest Lyapunov exponent during learning, and we call this phenomenon dynamical overtraining. Furthermore, we introduce a penalty term that involves a dynamical invariant of the network and avoids dynamical overtraining. As examples we use chaotic maps such as the Hénon map.
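To make the idea concrete, here is a minimal sketch (not the authors' implementation; all hyperparameters and the network architecture are illustrative assumptions): fit a small feedforward network to one-step prediction of the Hénon map, then estimate the largest Lyapunov exponent of the *learned* map by iterating it while pushing a tangent vector through its Jacobian and renormalizing. Comparing this estimate with the known value for the Hénon attractor (about 0.42) is one way to check whether the network has captured the dynamical invariant rather than just the one-step fit.

```python
# Sketch only: illustrative hyperparameters, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Hénon map: x_{t+1} = 1 - a*x_t^2 + y_t,  y_{t+1} = b*x_t
def henon(n, a=1.4, b=0.3):
    xy = np.empty((n, 2))
    x, y = 0.1, 0.0
    for t in range(n):
        x, y = 1.0 - a * x * x + y, b * x
        xy[t] = x, y
    return xy

data = henon(3000)[500:]            # discard the transient
X, Y = data[:-1], data[1:]          # one-step prediction pairs

# One-hidden-layer tanh network trained by full-batch gradient descent on MSE.
H = 16
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)

lr = 0.05
for epoch in range(5000):
    Hd = np.tanh(X @ W1 + b1)
    err = Hd @ W2 + b2 - Y
    gW2 = Hd.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - Hd**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Largest Lyapunov exponent of the learned map: iterate the network from a
# point on the attractor, evolve a tangent vector with the Jacobian, and
# renormalize each step. (If training is poor, the free-run orbit may escape
# the attractor; this sketch does not guard against that.)
x = data[-1].copy()
v = np.array([1.0, 0.0])
lyap_sum, n_iter = 0.0, 5000
for _ in range(n_iter):
    h = np.tanh(x @ W1 + b1)
    J = (W1 * (1 - h**2)) @ W2      # J[i, j] = d f_j / d x_i
    v = v @ J                        # tangent dynamics along the orbit
    norm = np.linalg.norm(v)
    lyap_sum += np.log(norm)
    v /= norm
    x = h @ W2 + b2                  # iterate the learned map

print("estimated largest Lyapunov exponent:", lyap_sum / n_iter)
```

The penalty term described in the abstract could plausibly take the form of an extra loss component that penalizes the mismatch between such a network-derived invariant and the one measured from the data, though the specific form used by the authors is given in the full paper, not here.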