The dynamics

DeGroot averaging is the simplest model of social learning. At each step, every agent updates their opinion to the average of their neighbors' opinions (including their own). On a connected network, this always converges to consensus, but the consensus value is a weighted average of the initial opinions, with weights set by each agent's influence in the network; it equals the plain population mean only when all agents are equally influential, as on a regular graph. The network structure determines how fast. A densely connected network reaches agreement quickly; a sparse chain takes ages.
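One synchronous update step can be sketched in a few lines of Python. The `degroot_step` helper and the adjacency-list layout are illustrative choices for this sketch, not from any particular library:

```python
def degroot_step(opinions, neighbors):
    """One synchronous DeGroot update: each agent moves to the
    unweighted mean of its own opinion and its neighbors' opinions."""
    return [
        (opinions[i] + sum(opinions[j] for j in neighbors[i]))
        / (1 + len(neighbors[i]))
        for i in range(len(opinions))
    ]

# A 4-node path: 0 - 1 - 2 - 3, with one extreme opinion at the end.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
opinions = [0.0, 0.0, 0.0, 1.0]
for _ in range(500):
    opinions = degroot_step(opinions, neighbors)
# Consensus lands at 0.2, weighted by (degree + 1), not at the plain
# mean 0.25: the low-degree endpoint holding opinion 1.0 counts less.
```

The path graph makes the weighting visible: interior nodes average over three opinions, endpoints over two, so the endpoints carry less weight in the final consensus.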

Bounded confidence (Hegselmann-Krause) adds a psychological wrinkle: agents only listen to neighbors whose opinions are within ε of their own. If your neighbor disagrees too much, you ignore them entirely. This creates opinion clusters — groups that converge internally but split from each other. A small ε produces many clusters; a large one allows global consensus. The number and size of clusters depends on both ε and the network topology.
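A network-restricted bounded-confidence step can be sketched with the same adjacency-list layout. The `hk_step` name and the five-agent example are this sketch's inventions; the classic Hegselmann-Krause model uses a complete graph, which is what the example below assumes:

```python
def hk_step(opinions, neighbors, eps):
    """One bounded-confidence update: each agent averages over itself
    and only those neighbors whose opinions lie within eps of its own."""
    new = []
    for i, x in enumerate(opinions):
        close = [opinions[j] for j in neighbors[i] if abs(opinions[j] - x) <= eps]
        new.append((x + sum(close)) / (1 + len(close)))
    return new

# Five agents on a complete graph: everyone can see everyone.
neighbors = {i: [j for j in range(5) if j != i] for i in range(5)}
opinions = [0.0, 0.1, 0.5, 0.9, 1.0]
for _ in range(20):
    opinions = hk_step(opinions, neighbors, eps=0.15)
# Three stable clusters survive: near 0.05, at 0.5, and near 0.95.
```

With eps=0.15 the two outer pairs merge internally but never hear the middle agent, who sits alone: exactly the "converge internally, split from each other" behavior described above.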

The voter model is stochastic. Each agent, at each step, randomly copies the opinion of one neighbor. There is no averaging — opinions don't blend, they spread like contagion. On finite connected networks, this always eventually reaches consensus, but the time to consensus can be extremely long, and which opinion wins is random.
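The copying dynamic can be sketched with Python's random module. The sweep structure (every agent updated once per round, in random order) and the `voter_sweep` name are choices made for this sketch, not a canonical implementation:

```python
import random

def voter_sweep(opinions, neighbors, rng):
    """Asynchronous voter-model sweep: in random order, each agent
    copies the current opinion of one uniformly chosen neighbor."""
    for i in rng.sample(range(len(opinions)), len(opinions)):
        opinions[i] = opinions[rng.choice(neighbors[i])]

rng = random.Random(42)
n = 8  # small ring of agents
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
opinions = [i % 2 for i in range(n)]  # alternating 0/1 start
sweeps = 0
while len(set(opinions)) > 1:
    voter_sweep(opinions, neighbors, rng)
    sweeps += 1
# Consensus is guaranteed eventually; which opinion wins varies by seed.
```

Note that no opinion value is ever created or blended: the final consensus is always one of the initial opinions, chosen at random by the dynamics.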

Why structure matters

The same dynamics on different network topologies can produce wildly different outcomes. A ring lattice confines influence to local neighborhoods — information travels slowly. A small-world network (Watts-Strogatz) adds a few random shortcuts that dramatically accelerate convergence — this is the "six degrees" effect. A scale-free network (Barabási-Albert) has hubs: highly connected nodes that dominate the dynamics. A complete graph is the fastest possible — everyone talks to everyone — but also the least realistic.
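The "shortcuts shrink the network" effect can be checked directly. Below is a stdlib-only sketch: `ring_lattice`, `rewire`, and `avg_path_length` are illustrative names, and the rewiring is a simplified Watts-Strogatz-style pass, not the exact published construction:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style pass: each edge is redirected to a random
    non-neighbor with probability p (a sketch, not the exact algorithm)."""
    n = len(adj)
    for i in range(n):
        for j in sorted(adj[i]):
            if j > i and rng.random() < p:
                candidates = [v for v in range(n) if v != i and v not in adj[i]]
                if candidates:
                    new = rng.choice(candidates)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over reachable node pairs (BFS per node)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(7)
ring = ring_lattice(100, 2)
small_world = rewire(ring_lattice(100, 2), 0.1, rng)
# Even ~10% rewiring sharply reduces the typical distance between nodes.
```

Comparing `avg_path_length(ring)` with `avg_path_length(small_world)` shows the small-world shortcut effect numerically: the handful of rewired edges cuts the average distance well below the pure ring's.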

Connection to collective intelligence

This is directly relevant to research on collective intelligence and the wisdom of crowds. Joshua Becker's work has shown that network structure is not merely a conduit for information — it actively shapes the quality of collective judgments. Too much connectivity can amplify errors through herding. Too little can prevent useful information from spreading. The sweet spot — moderate connectivity with some structural diversity — is where crowds tend to be wisest.

Try this: set the model to bounded confidence with a low ε (around 0.10) on a small-world network. Watch how clusters form and stabilize. Then switch to a complete graph with the same settings. Notice how the clusters differ. Structure is not neutral.