Neural Network Learns XOR
2-2-1 MLP with backpropagation — watch weights and decision boundary evolve
Step ×1
Step ×10
Run
Stop
Reset
LR:
0.10
Loss:
--
| Epoch:
0
x1
x2
Target
Output
Weights & Biases
Hidden Layer (input→hidden weights, hidden biases)
Output Layer (hidden→output weights, output bias)
Last Gradients
Architecture: 2 inputs → 2 hidden (sigmoid) → 1 output (sigmoid). Loss = MSE. Backprop computes ∂L/∂w for every weight and bias via the chain rule.
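The training loop behind this demo can be sketched in a few lines of NumPy. This is a minimal sketch under assumptions not stated on the page: full-batch gradient descent, random Gaussian weight init with zero biases, and a fixed learning rate (the demo's exact init and update schedule may differ).

```python
import numpy as np

np.random.seed(0)

# The four XOR cases: inputs x1, x2 and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-2-1 architecture: 2 inputs -> 2 hidden -> 1 output, plus biases.
W1 = np.random.randn(2, 2)
b1 = np.zeros((1, 2))
W2 = np.random.randn(2, 1)
b2 = np.zeros((1, 1))

lr = 0.5  # assumed learning rate; the demo defaults to 0.10
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    y = sigmoid(h @ W2 + b2)        # network output
    loss = np.mean((y - T) ** 2)    # MSE loss

    # Backward pass: dL/dw for every weight and bias via the chain rule.
    dy = 2 * (y - T) / len(X) * y * (1 - y)   # delta at the output unit
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0, keepdims=True)
    dh = dy @ W2.T * h * (1 - h)              # delta at the hidden units
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0, keepdims=True)

    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(loss)
```

Each "Step" button press corresponds to one iteration of this loop; the demo's weight/bias and gradient panels display `W1`, `b1`, `W2`, `b2` and the `dW*`/`db*` values from the most recent backward pass.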