μ = 0.0
σ = 1.5
H(Gaussian) = ½ ln(2πeσ²) ≈ 1.824 nats
H(Uniform on [a, b]) = ln(b − a)
λ₁ = −μ/σ² = 0
λ₂ = 1/(2σ²) ≈ 0.222
About: The maximum-entropy (MaxEnt) principle (Jaynes, 1957) states that, given only the constraints ⟨x⟩ = μ and ⟨x²⟩ = μ² + σ², the distribution maximizing the Shannon differential entropy H = -∫ p log p dx is the Gaussian, written in exponential-family form as p(x) = exp(-λ₀ - λ₁x - λ₂x²). Completing the square against the Gaussian density exp(-(x-μ)²/(2σ²)) fixes the Lagrange multipliers at λ₁ = -μ/σ² and λ₂ = 1/(2σ²), with λ₀ handling normalization. This is why the Gaussian is "least informative": it encodes the given mean and variance and makes no extra assumptions. MaxEnt underlies statistical mechanics (the Boltzmann distribution), machine learning (logistic regression), and Bayesian inference.
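The quantities above can be checked numerically. The sketch below, using only NumPy, computes the closed-form Gaussian entropy ½ ln(2πeσ²), verifies it by quadrature, and evaluates the multipliers λ₁ = -μ/σ² and λ₂ = 1/(2σ²) for the exponential form given in the text. As an illustration of the MaxEnt claim, it also compares against a uniform distribution of equal variance (width σ√12), which is my choice of comparison, not one fixed by the original, and which necessarily has lower entropy:

```python
import numpy as np

mu, sigma = 0.0, 1.5

# Closed-form differential entropy of N(mu, sigma^2): H = 0.5 * ln(2*pi*e*sigma^2)
H_gauss = 0.5 * np.log(2 * np.pi * np.e * sigma**2)

# Lagrange multipliers for p(x) = exp(-l0 - l1*x - l2*x^2)
l1 = -mu / sigma**2
l2 = 1.0 / (2 * sigma**2)

# Numerical check: H = -integral of p*log(p) over a wide grid (tails beyond
# +/- 10 sigma contribute negligibly)
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
p = np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
H_num = -np.trapz(p * np.log(p), x)

# Uniform with the same variance: width b - a = sigma*sqrt(12), H = ln(b - a).
# Its entropy is strictly lower, as the MaxEnt principle predicts.
H_unif = np.log(sigma * np.sqrt(12))

print(f"H(Gaussian) = {H_gauss:.4f} nats (numeric: {H_num:.4f})")
print(f"H(Uniform, equal variance) = {H_unif:.4f} nats")
print(f"lambda1 = {l1:.4f}, lambda2 = {l2:.4f}")
```

With μ = 0 and σ = 1.5 this reproduces H(Gaussian) ≈ 1.824 nats and λ₂ ≈ 0.222, and the Gaussian beats the equal-variance uniform (≈ 1.648 nats) as expected.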