Graph Neural Networks (GNNs) learn node representations by iteratively aggregating feature information from neighboring nodes. Each message-passing layer computes h_v^(l+1) = σ(W · AGG({h_u^(l) : u ∈ N(v) ∪ {v}})), where AGG is a permutation-invariant function such as sum, mean, or max, and the union with {v} adds a self-loop so a node's own features are retained. After L layers, each node embedding encodes its L-hop neighborhood. GNNs power molecular property prediction, social network analysis, and knowledge graph reasoning. The visualization shows activation waves propagating through the graph, mirroring how information flows through successive GNN layers.
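The layer update above can be sketched in a few lines of NumPy. This is a minimal illustration, not any particular library's implementation: `message_passing_layer` and its argument shapes are assumptions, the aggregator is fixed to sum, and the nonlinearity σ is taken to be ReLU.

```python
import numpy as np

def message_passing_layer(H, adj, W):
    """One GNN layer: h_v' = ReLU(W · sum_{u ∈ N(v) ∪ {v}} h_u).

    H:   (num_nodes, d_in) node feature matrix
    adj: (num_nodes, num_nodes) 0/1 adjacency matrix
    W:   (d_out, d_in) weight matrix
    """
    A_hat = adj + np.eye(adj.shape[0])   # add self-loops: N(v) ∪ {v}
    agg = A_hat @ H                      # sum-aggregate neighbor features
    return np.maximum(agg @ W.T, 0.0)    # linear transform + ReLU

# Toy graph: a path 0 — 1 — 2
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
H = np.eye(3)   # one-hot initial features
W = np.eye(3)   # identity weights, for illustration only
H1 = message_passing_layer(H, adj, W)
# After one layer, each row of H1 covers that node's 1-hop neighborhood;
# stacking a second layer would extend coverage to 2 hops.
```

Stacking L such calls reproduces the L-hop receptive field described above: information from a node L edges away first reaches v at layer L.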