
Draw / Edit Pattern

Click to flip pixels. Draw a pattern, then store it.

Network State (Recall)
Store patterns, then load one to recall.
Stored Memories (click to load & corrupt)

Store up to 5 patterns. Click a stored memory thumbnail to load a corrupted version into the network state, then hit Recall to watch it converge.


Associative memory and energy landscapes

John Hopfield introduced this network in 1982 as a physical model of memory, inspired by spin glasses in statistical mechanics. Each neuron is a binary unit (±1). The synaptic weight between each pair of neurons is set by Hebb's rule: “neurons that fire together, wire together.” For p stored patterns ξ¹ … ξᵖ, the weight matrix is the sum of outer products, wᵢⱼ = (1/N) Σμ ξᵢ^μ ξⱼ^μ, with no self-connections (wᵢᵢ = 0).
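The outer-product rule can be sketched in a few lines of NumPy; `store_patterns` is an illustrative helper name, not part of this lab's code:

```python
import numpy as np

def store_patterns(patterns):
    """Hebbian (outer-product) weights for a list of ±1 pattern vectors."""
    n = patterns[0].size
    w = np.zeros((n, n))
    for xi in patterns:
        w += np.outer(xi, xi)   # each memory adds its own outer product
    w /= n                      # conventional 1/N scaling
    np.fill_diagonal(w, 0.0)    # no self-connections
    return w
```

Because each pattern contributes an independent outer product, storing a new memory simply adds to the existing weights; nothing is retrained.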

The energy function E = −½Σwᵢⱼsᵢsⱼ is a Lyapunov function for asynchronous updates: flipping any neuron to agree with its local field hᵢ = Σⱼwᵢⱼsⱼ never increases E. The stored patterns are local minima (attractors) of this landscape, so a corrupted input probe slides downhill to a nearby attractor — usually the closest stored pattern — effectively completing or denoising the memory.

The network capacity is roughly 0.138N random patterns for an N-neuron network before retrieval breaks down and spurious attractors dominate. This lab uses a 20×10 = 200-neuron grid, giving a theoretical capacity of ~27 patterns — the 5-pattern limit in the UI stays well inside it.
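The 0.138N figure can be probed empirically with a small experiment; `recall_fraction` below is an illustrative sketch, not the lab's code:

```python
import numpy as np

def recall_fraction(n, p, noise=0.1, trials=20, seed=0):
    """Store p random ±1 patterns in an n-neuron Hopfield net, probe
    with noise-corrupted copies, and return the mean overlap between
    the recalled state and the original pattern (1.0 = perfect)."""
    rng = np.random.default_rng(seed)
    pats = rng.choice([-1, 1], size=(p, n))
    w = (pats.T @ pats).astype(float) / n   # Hebbian weights
    np.fill_diagonal(w, 0.0)
    overlaps = []
    for _ in range(trials):
        mu = rng.integers(p)
        s = pats[mu].copy()
        s[rng.random(n) < noise] *= -1      # corrupt a fraction of bits
        for _ in range(50):                 # asynchronous sweeps
            changed = False
            for i in rng.permutation(n):
                new = 1 if w[i] @ s >= 0 else -1
                if new != s[i]:
                    s[i] = new
                    changed = True
            if not changed:
                break
        overlaps.append(abs(pats[mu] @ s) / n)
    return float(np.mean(overlaps))
```

Well below capacity (e.g. p = 5 at n = 200, so p/N = 0.025) the overlap stays near 1.0; pushing p/N past ~0.138 (e.g. p = 60) degrades recall as crosstalk between memories grows.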