Neural Network Backpropagation Visualizer
Watch gradients flow backward through a live neural network learning XOR
Epoch: 0
Loss: —
Accuracy: —
Backpropagation computes gradients via the chain rule: ∂L/∂w = ∂L/∂a · ∂a/∂z · ∂z/∂w.
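The chain-rule product above can be sketched for a single sigmoid neuron with squared-error loss (a minimal illustration; the names x, w, b, y_true and the loss choice are assumptions, not the visualizer's actual code):

```python
import numpy as np

# One neuron: z = w*x + b, a = sigmoid(z), L = (a - y_true)^2
# (illustrative values, not taken from the visualizer)
x, w, b, y_true = 1.0, 0.5, 0.0, 1.0

z = w * x + b
a = 1.0 / (1.0 + np.exp(-z))      # sigmoid activation

dL_da = 2.0 * (a - y_true)        # ∂L/∂a for squared error
da_dz = a * (1.0 - a)             # ∂a/∂z, the sigmoid derivative
dz_dw = x                         # ∂z/∂w

dL_dw = dL_da * da_dz * dz_dw     # chain rule: ∂L/∂w
```

The product dL_dw is exactly the gradient the backward pass (red/orange) carries from the loss back to this weight.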
Forward pass (blue): activations flow left→right. Backward pass (red/orange): error gradients flow right→left.
Edge thickness = |weight|. Edge color = weight sign (blue=positive, red=negative). Node brightness = activation level.
Network learns to solve XOR: output 1 iff inputs differ. XOR is not linearly separable, so solving it requires at least one hidden layer.
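The whole setup can be sketched as a tiny sigmoid network trained on XOR with plain gradient descent (a minimal sketch; the hidden width, learning rate, epoch count, and random seed are illustrative assumptions, not the visualizer's settings):

```python
import numpy as np

# XOR truth table: output 1 iff the two inputs differ
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)                 # illustrative seed
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros((1, 4))   # one hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros((1, 1))
lr = 2.0                                       # illustrative learning rate

for epoch in range(10000):
    # forward pass: activations flow left -> right
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: error gradients flow right -> left
    d_out = (out - y) * out * (1 - out)        # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)         # gradient at hidden pre-activation
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0, keepdims=True)

final_loss = ((out - y) ** 2).mean()
```

After training, thresholding `out` at 0.5 reproduces the XOR table; with no hidden layer the same loop cannot drive the loss down, since a single linear boundary cannot separate the classes.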