Lagrangian relaxation handles a hard constraint g(x) ≤ 0 by moving it into the objective with a penalty multiplier λ ≥ 0: L(x,λ) = f(x) + λ·g(x). For each fixed λ, minimizing L is often much easier than solving the original constrained problem. The resulting dual function d(λ) = min_x L(x,λ) is always concave — it is a pointwise minimum of functions that are affine in λ — so maximizing it over λ ≥ 0 via subgradient ascent finds the best lower bound; conveniently, g(x(λ)) is a subgradient of d at λ, where x(λ) is the inner minimizer. The gap between the best dual value d(λ*) and the true optimum f(x*) is the duality gap; it is zero (strong duality) when the problem is convex and a constraint qualification such as Slater's condition holds. The visualization shows the Lagrangian landscape shifting as λ changes, with the dual function d(λ) tracking the best lower bound.
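The loop above can be sketched on a toy problem where everything is computable in closed form — a minimal illustration, not a general-purpose solver. The problem, step size, and iteration count below are all choices made for this example: minimize f(x) = x² subject to x ≥ 1, written as g(x) = 1 − x ≤ 0, whose primal optimum is f(1) = 1.

```python
# Toy problem (chosen for illustration): minimize f(x) = x^2 s.t. x >= 1,
# i.e. g(x) = 1 - x <= 0.
# L(x, lam) = x^2 + lam * (1 - x); setting dL/dx = 0 gives the inner
# minimizer x(lam) = lam / 2, so d(lam) = lam - lam^2 / 4, which is
# maximized at lam* = 2 with d(2) = 1 = f(1): zero duality gap,
# as expected for a convex problem satisfying Slater's condition.

def inner_min(lam):
    """Closed-form minimizer of L(x, lam) over x."""
    return lam / 2.0

def dual(lam):
    """Dual function d(lam) = min_x L(x, lam)."""
    x = inner_min(lam)
    return x**2 + lam * (1 - x)

lam = 0.0
for _ in range(200):
    x = inner_min(lam)
    subgrad = 1 - x                      # g(x(lam)) is a subgradient of d at lam
    lam = max(0.0, lam + 0.5 * subgrad)  # ascend, then project onto lam >= 0

print(lam, dual(lam))  # approaches lam* = 2, d(lam*) = 1
```

A fixed step size suffices here because d happens to be smooth and strongly concave; in general, subgradient ascent on a nonsmooth dual needs a diminishing step schedule (e.g. t_k = c/k) to guarantee convergence.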