
Entropy Maximization & MaxEnt Distributions

[Interactive demo: choose a distribution (default: Gaussian) and adjust its parameters; readouts show H(X) in bits, the MaxEnt bound H_max in bits, and the KL divergence between the two.]

Maximum Entropy Principle (Jaynes 1957)

Given known constraints (such as a fixed mean, variance, or support), the least-biased probability distribution consistent with those constraints is the one that maximizes Shannon entropy. This is the Maximum Entropy (MaxEnt) principle.
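As a concrete discrete illustration, consider Jaynes' Brandeis dice problem: among all distributions over the faces {1, …, 6} with mean 4.5, the entropy maximizer takes the exponential-family form p_k ∝ exp(λk). The Python sketch below solves the moment constraint for λ numerically; the root-finding bracket is an assumption chosen wide enough to contain the solution.

```python
import numpy as np
from scipy.optimize import brentq

k = np.arange(1, 7)  # die faces 1..6

def mean_given(lam):
    """Mean of the tilted pmf p_k ∝ exp(lam * k)."""
    w = np.exp(lam * k)
    return float((k * w).sum() / w.sum())

# Solve the moment constraint E[X] = 4.5 for the Lagrange multiplier.
# The bracket (-5, 5) is a hypothetical choice that safely contains the root.
lam = brentq(lambda l: mean_given(l) - 4.5, -5.0, 5.0)
p = np.exp(lam * k)
p /= p.sum()

print("p    =", np.round(p, 4))                    # tilted toward high faces
print("mean =", round(float((k * p).sum()), 4))    # 4.5 by construction
print("H    =", round(float(-(p * np.log2(p)).sum()), 4), "bits")
```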

H(p) = −∫ p(x) log p(x) dx → maximize
Subject to: ∫ p dx = 1, ∫ x p dx = μ, ∫ x² p dx = σ² + μ² → Gaussian
Subject to: ∫ p dx = 1, ∫ x p dx = μ, support x ≥ 0 → Exponential
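The Gaussian claim can be checked numerically. The sketch below compares closed-form differential entropies from scipy.stats for several distributions, each parameterized to have mean 0 and variance 1; the particular comparison set (Laplace, logistic, uniform) is our choice for illustration, not part of the demo.

```python
import numpy as np
from scipy import stats

# All four distributions are parameterized to mean 0, variance 1.
candidates = {
    "gaussian": stats.norm(loc=0, scale=1),
    "laplace":  stats.laplace(loc=0, scale=1 / np.sqrt(2)),            # var = 2b² = 1
    "logistic": stats.logistic(loc=0, scale=np.sqrt(3) / np.pi),       # var = s²π²/3 = 1
    "uniform":  stats.uniform(loc=-np.sqrt(3), scale=2 * np.sqrt(3)),  # var = w²/12 = 1
}

# scipy returns differential entropy in nats; the Gaussian should rank
# first at 0.5·ln(2πe) ≈ 1.4189 nats.
entropies = {name: float(d.entropy()) for name, d in candidates.items()}
for name, h in sorted(entropies.items(), key=lambda kv: -kv[1]):
    print(f"{name:9s} H = {h:.4f} nats")
```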

The Gaussian maximizes entropy for fixed mean and variance. The exponential distribution maximizes entropy for fixed mean on [0, ∞). The uniform distribution maximizes entropy over bounded support. Any deviation from the MaxEnt distribution implies information beyond the stated constraints; absent such information, the deviation is unwarranted.
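The demo's three readouts are tied together by an identity: if q is the Gaussian sharing p's mean and variance, then E_p[log q] depends only on those matched moments, so KL(p‖q) = H_max − H(X). The sketch below verifies this numerically; the Laplace test distribution and the integration grid are our choices for illustration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

p = stats.laplace(loc=0, scale=1 / np.sqrt(2))  # mean 0, variance 1
q = stats.norm(loc=0, scale=1)                  # MaxEnt match: same mean, variance

x = np.linspace(-12, 12, 200_001)               # wide grid; both tails are negligible
px, qx = p.pdf(x), q.pdf(x)

H_p   = -trapezoid(px * np.log2(px), x)         # H(X) in bits
H_max = -trapezoid(qx * np.log2(qx), x)         # Gaussian entropy bound, in bits
kl    =  trapezoid(px * np.log2(px / qx), x)    # KL(p‖q) in bits

print(f"H(X)  = {H_p:.4f} bits")
print(f"H_max = {H_max:.4f} bits")
print(f"KL    = {kl:.4f} bits (= H_max − H(X) = {H_max - H_p:.4f})")
```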