Decision Tree — Information Gain & Splitting

Recursive binary splitting using entropy and information gain


Entropy: H = -Σ pᵢ log₂(pᵢ)

Info Gain:
IG = H(parent) − Σᵢ (|childᵢ|/|parent|) · H(childᵢ)

Best split maximizes information gain at each node.
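The two formulas above can be sketched directly in code. The following is a minimal Python illustration (function names like `entropy`, `information_gain`, and `best_split` are my own, not from the demo): it computes class entropy, the weighted information gain of a split, and scans candidate thresholds on a single numeric feature to find the split that maximizes gain.

```python
import math
from collections import Counter

def entropy(labels):
    # H = -Σ pᵢ log₂(pᵢ) over the class probabilities in `labels`
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    # IG = H(parent) − Σᵢ (|childᵢ|/|parent|) · H(childᵢ)
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

def best_split(xs, labels):
    # Try a threshold midway between each pair of adjacent distinct
    # feature values; return (threshold, gain) with maximal gain.
    pairs = sorted(zip(xs, labels))
    best_thr, best_gain = None, 0.0
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no threshold separates equal feature values
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [y for x, y in pairs if x <= thr]
        right = [y for x, y in pairs if x > thr]
        gain = information_gain(labels, [left, right])
        if gain > best_gain:
            best_thr, best_gain = thr, gain
    return best_thr, best_gain
```

For example, with feature values [1, 2, 3, 4] and labels ["a", "a", "b", "b"], the parent entropy is 1.0 bit and the split at threshold 2.5 yields two pure children, so the gain is 1.0. Recursive tree building would apply `best_split` again to each child until a stopping criterion (pure node, depth limit) is met.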