Neural Network Learns XOR

2-2-1 MLP with backpropagation — watch weights and decision boundary evolve
[Interactive demo: a learning-rate slider (default 0.10), a live "Loss / Epoch" readout, a truth table with columns x1, x2, Target, Output, panels showing the hidden-layer weights and biases (input→h1, h1 bias), the output-layer weights and bias (h→out, out bias), and the most recent gradients.]
Architecture: 2 inputs → 2 hidden (sigmoid) → 1 output (sigmoid). Loss = MSE. Backprop computes ∂L/∂w for each weight, and gradient descent applies the update w ← w − η·∂L/∂w, where η is the learning rate set by the slider.
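The training loop behind the demo can be sketched as follows. This is a minimal NumPy reconstruction of a 2-2-1 sigmoid MLP trained on XOR with MSE loss and plain gradient descent; the weight initialization, epoch count, and learning rate here are assumptions, not values taken from the demo (which defaults its slider to 0.10).

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: inputs X and targets T.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-2-1 MLP: hidden weights W1 (2x2) + bias b1, output weights W2 (2x1) + bias b2.
# Initialization scale is an assumption.
W1 = rng.normal(0.0, 1.0, (2, 2))
b1 = np.zeros(2)
W2 = rng.normal(0.0, 1.0, (2, 1))
b2 = np.zeros(1)

lr = 0.5  # assumed learning rate; the demo's slider defaults to 0.10

for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations, shape (4, 2)
    y = sigmoid(h @ W2 + b2)      # network outputs, shape (4, 1)
    loss = np.mean((y - T) ** 2)  # MSE loss

    # Backward pass (chain rule), using sigmoid'(z) = s * (1 - s).
    dy  = 2.0 * (y - T) / len(X)  # dL/dy
    dz2 = dy * y * (1.0 - y)      # through the output sigmoid
    dW2 = h.T @ dz2
    db2 = dz2.sum(axis=0)
    dh  = dz2 @ W2.T              # propagate back to the hidden layer
    dz1 = dh * h * (1.0 - h)      # through the hidden sigmoids
    dW1 = X.T @ dz1
    db1 = dz1.sum(axis=0)

    # Gradient-descent update: w <- w - lr * dL/dw.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Outputs should approach [0, 1, 1, 0] when training escapes local minima.
print(np.round(y.ravel(), 2))
```

Two hidden units are the minimum for XOR, which is why the demo's decision boundary only becomes nonlinear once both hidden sigmoids have learned distinct half-plane cuts.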