Reservoir Computing — Echo State Networks

A fixed random recurrent network (reservoir) projects input into a high-dimensional space; only the readout weights are trained. Performance is often best when the reservoir operates near the edge of chaos, where memory and nonlinearity are balanced.
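In practice the reservoir is tuned by rescaling a random weight matrix to a chosen spectral radius. A minimal sketch with NumPy (reservoir size and target radius are illustrative choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # reservoir size (arbitrary for illustration)

# Random recurrent weights, then rescale so the spectral radius
# (largest eigenvalue magnitude) hits a target just below 1.
W = rng.uniform(-0.5, 0.5, (n, n))
rho = max(abs(np.linalg.eigvals(W)))
W *= 0.95 / rho

print(round(max(abs(np.linalg.eigvals(W))), 4))  # → 0.95
```

Rescaling by the eigenvalue ratio works because eigenvalues scale linearly with the matrix, so the rescaled W has spectral radius exactly 0.95.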

[Interactive demo: spectral-radius slider with live MSE and percentage of active nodes]
Echo state networks (Jaeger 2001): x(t+1) = (1−α)x(t) + α·tanh(W·x(t) + W_in·u(t)). Only W_out is trained, via ridge regression on the collected reservoir states. In practice the echo state property holds when the spectral radius ρ < 1, yet optimal performance often occurs near ρ ≈ 1, the edge of chaos. Larger ρ → more memory, less stability.