Bayesian Brain

Probabilistic inference in neural circuits — prior-likelihood integration and psychophysics

Controls: Prior Mean μ₀ (default 0.0) · Prior Width σ₀ (default 1.0) · Likelihood Width σ_L (default 0.8)
Readouts: Posterior Mean · Posterior σ · Prior Bias · Precision Ratio

About

The Bayesian Brain hypothesis (Knill & Pouget 2004; Weiss et al. 2002) proposes that neural circuits implement probabilistic inference, combining a prior belief P(s) with a sensory likelihood P(o|s) to compute a posterior P(s|o) ∝ P(o|s)P(s). For Gaussian distributions this yields the classic precision-weighted average: μ_post = (μ_prior/σ²_prior + μ_likelihood/σ²_likelihood) / (1/σ²_prior + 1/σ²_likelihood). This predicts systematic perceptual biases: noisy or near-threshold stimuli are pulled toward the prior, exactly as observed in motion perception, where moving stimuli appear slower than they are, consistent with a prior favoring slow speeds. Watch the posterior (purple) navigate between prior (blue) and likelihood (orange) as the parameters change.
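The Gaussian prior-likelihood combination above can be sketched in a few lines. This is a minimal illustration, not the demo's actual source; the function name and the derived readouts (prior bias as the pull from the likelihood mean toward the prior, precision ratio as prior precision over likelihood precision) are assumptions chosen to mirror the panel labels.

```python
import math

def gaussian_posterior(mu0, sigma0, mu_L, sigma_L):
    """Combine a Gaussian prior N(mu0, sigma0^2) with a Gaussian
    likelihood centred at mu_L with width sigma_L.

    Returns (mu_post, sigma_post), the posterior mean and width,
    via the precision-weighted average in the text.
    """
    tau0 = 1.0 / sigma0**2   # prior precision (1/sigma^2)
    tauL = 1.0 / sigma_L**2  # likelihood precision
    mu_post = (tau0 * mu0 + tauL * mu_L) / (tau0 + tauL)
    sigma_post = math.sqrt(1.0 / (tau0 + tauL))  # precisions add
    return mu_post, sigma_post

# Illustrative readouts (names assumed to match the panel):
mu_post, sigma_post = gaussian_posterior(0.0, 1.0, 1.0, 0.8)
prior_bias = 1.0 - mu_post           # how far the percept is pulled toward the prior
precision_ratio = (1.0 / 1.0**2) / (1.0 / 0.8**2)  # prior vs likelihood precision
```

With the default widths (σ₀ = 1.0, σ_L = 0.8) and a stimulus at 1.0, the likelihood is the more precise source, so the posterior mean lands closer to the likelihood (about 0.61) than to the prior at 0: the quantitative version of the "pulled toward the prior" bias described above.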