Kernel PCA extends classical PCA to nonlinear structure using the kernel trick: instead of working in the original input space, we implicitly map the data to a high-dimensional (possibly infinite-dimensional) feature space via a kernel function k(x,x') = ⟨φ(x), φ(x')⟩, then perform PCA in that feature space by eigendecomposing the centered kernel matrix K. The RBF kernel k(x,x') = exp(−γ‖x−x'‖²), with γ > 0, corresponds to an infinite-dimensional feature space and can separate data that is not linearly separable in the original space. The left panel shows the original 2D data colored by class; the right panel shows the projection onto the first two kernel principal components. Notice how classes that were entangled in the input space become linearly separable in kernel space.
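The procedure described above can be sketched directly in NumPy: build the RBF kernel matrix, center it in feature space, eigendecompose, and scale the leading eigenvectors to obtain the projections. This is a minimal illustration, not a production implementation (no eigenvalue thresholding, no out-of-sample extension); the function names `rbf_kernel` and `kernel_pca` are just illustrative choices.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """K[i, j] = exp(-gamma * ||x_i - x_j||^2) for rows of X."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def kernel_pca(X, gamma=1.0, n_components=2):
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Center in feature space: Kc = K - 1_n K - K 1_n + 1_n K 1_n,
    # where 1_n is the n x n matrix with all entries 1/n.
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # eigh returns eigenvalues in ascending order; flip to descending.
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]
    # Projection of point i onto component k is sqrt(lambda_k) * alpha_ik,
    # for unit-norm eigenvectors alpha (clip tiny negatives from round-off).
    lead = np.maximum(vals[:n_components], 0.0)
    return vecs[:, :n_components] * np.sqrt(lead)

# Example: two concentric circles, inseparable by any line in 2D.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, size=(100,))
inner = 0.3 * np.column_stack([np.cos(t), np.sin(t)])
outer = 1.0 * np.column_stack([np.cos(t), np.sin(t)])
X = np.vstack([inner, outer])
Z = kernel_pca(X, gamma=2.0, n_components=2)
```

Because the kernel matrix is centered, the resulting projections have zero mean, mirroring how classical PCA centers the data before computing components.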