Monte Carlo Tree Search builds a search tree guided by the UCB1 formula: UCB(s,a) = Q(s,a) + C·√(ln N(s) / N(s,a)), which balances exploitation (high average value Q) with exploration (rarely visited actions). Each iteration cycles through four phases: Selection (follow UCB down the tree), Expansion (add a new node), Simulation (random rollout to a terminal state), and Backpropagation (update Q and N back up the tree). MCTS requires no domain knowledge beyond the game rules, yet powers superhuman play in Go (AlphaGo), chess, and shogi (AlphaZero). The tree visualization maps visit counts to node size and Q-values to color: larger nodes are more visited, and warmer colors mark higher, more promising Q-values.
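The four phases above can be sketched in a few dozen lines. This is a minimal, illustrative implementation, not the code behind the visualization: the toy game (a Nim-like subtraction game where players remove 1 or 2 stones and taking the last stone wins), the class and function names, and the exploration constant C ≈ 1.4 are all assumptions chosen for the example.

```python
import math
import random

class Node:
    """One node of the MCTS tree for the toy subtraction game (illustrative)."""
    def __init__(self, stones, player, parent=None, move=None):
        self.stones = stones    # stones remaining in this state
        self.player = player    # player to move here: +1 or -1
        self.parent = parent
        self.move = move        # move that led into this node
        self.children = []
        self.visits = 0         # N(s)
        self.value = 0.0        # sum of rewards; Q(s,a) = value / visits
        self.untried = [m for m in (1, 2) if m <= stones]

def ucb(child, parent_visits, c=1.4):
    # UCB1: exploitation term Q plus exploration bonus for rarely visited nodes
    q = child.value / child.visits
    return q + c * math.sqrt(math.log(parent_visits) / child.visits)

def rollout(stones, player):
    # Simulation phase: play uniformly random legal moves to a terminal state.
    while stones > 0:
        stones -= random.choice([m for m in (1, 2) if m <= stones])
        player = -player
    return -player  # the player who took the last stone wins

def mcts(root_stones, iterations=3000):
    root = Node(root_stones, player=+1)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via UCB while fully expanded and non-terminal.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: ucb(ch, node.visits))
        # 2. Expansion: add one new child if any untried move remains.
        if node.untried:
            move = node.untried.pop()
            child = Node(node.stones - move, -node.player, node, move)
            node.children.append(child)
            node = child
        # 3. Simulation: random rollout (terminal nodes score directly).
        winner = rollout(node.stones, node.player) if node.stones > 0 else -node.player
        # 4. Backpropagation: update N and the reward sum up to the root.
        while node:
            node.visits += 1
            # reward from the perspective of the player who moved INTO node
            node.value += 1.0 if winner == -node.player else 0.0
            node = node.parent
    # Final move choice: the most-visited child of the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

In this game, leaving the opponent a multiple of 3 stones is a guaranteed win, so from 4 stones the search converges on taking 1, and from 5 stones on taking 2. Choosing the most-visited (rather than highest-Q) root child is a common, robust selection rule.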