Every linear transformation has certain directions it treats specially. Not directions it moves through — directions it moves along. When you apply the transformation to a vector pointing in one of these special directions, the result still points in the same direction. It may be longer or shorter, scaled by some factor, but it hasn't been rotated or deflected. These are the eigenvectors. The scaling factor is the eigenvalue. The word comes from the German eigen, meaning "own" or "characteristic" — these are the transformation's own directions, the ones that reveal its character.
The definition is easy to state; the implications take longer to absorb. Consider a simple rotation by 90 degrees in the plane. Does it have eigenvectors? No — every vector gets rotated, so no real vector stays pointing in the same direction. A rotation has no real eigenvalues. This is already informative: the absence of eigenvectors tells you something. Now consider a scaling transformation that stretches everything along the x-axis by a factor of 3 and leaves the y-axis alone. Its eigenvectors are exactly the horizontal and vertical axes, with eigenvalues 3 and 1. Those eigenvectors are also the natural coordinate axes for the transformation.
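Both examples can be checked directly; a minimal sketch with NumPy:

```python
import numpy as np

# 90-degree rotation: no real eigenvalues, so no real eigenvectors.
rotation = np.array([[0.0, -1.0],
                     [1.0,  0.0]])
rot_vals = np.linalg.eigvals(rotation)
print(rot_vals)  # complex pair: +1j and -1j

# Axis-aligned stretch: the eigenvectors are the axes themselves.
stretch = np.array([[3.0, 0.0],
                    [0.0, 1.0]])
vals, vecs = np.linalg.eig(stretch)
print(vals)   # eigenvalues 3 and 1
print(vecs)   # columns are the x- and y-axis unit vectors
```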
This is the key insight: eigenvalues and eigenvectors reveal the natural coordinate system of a transformation. In any other basis, the transformation looks complicated — entries scattered across the matrix, mixing dimensions together. In the eigenvector basis, the transformation is diagonal: each coordinate just gets multiplied by its eigenvalue, nothing more. All the complexity was coordinate-system noise. The eigendecomposition strips it away.
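A small sketch of this, using a symmetric 2×2 matrix so the eigenvector basis is orthogonal:

```python
import numpy as np

# A matrix that mixes coordinates together in the standard basis.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, V = np.linalg.eigh(A)        # columns of V are eigenvectors
D = np.diag(vals)

# Change of basis into the eigenvector frame: the transformation
# becomes diagonal -- each coordinate is just scaled by its eigenvalue.
A_in_eigenbasis = V.T @ A @ V      # V is orthogonal, so V^{-1} = V.T
print(np.round(A_in_eigenbasis, 10))

# The decomposition reassembles the original matrix exactly.
print(np.allclose(V @ D @ V.T, A))  # True
```

The off-diagonal entries of `A` were coordinate-system noise; in the eigenbasis they vanish, leaving only the eigenvalues 1 and 3 on the diagonal.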
Google's PageRank algorithm is essentially an eigenvalue computation. The web can be modeled as a directed graph where pages are nodes and links are edges. Define a matrix where entry (i, j) represents the probability of following a link from page j to page i. The stationary distribution of a random walk on this graph — the long-run probability of being on each page, which is exactly the "importance" PageRank is trying to measure — is the eigenvector of this matrix corresponding to eigenvalue 1. The most important pages are precisely the ones that a random walk spends the most time on, which are the ones that are pointed to by other important pages. This circularity resolves into a well-defined answer because of the eigenvector. Rank is the fixed direction of the link structure.
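A toy version of this idea, on a hypothetical four-page web (real PageRank also adds a damping factor, omitted in this sketch):

```python
import numpy as np

# A hypothetical 4-page web. links[i][j] = 1 if page j links to page i.
links = np.array([[0, 0, 1, 1],
                  [1, 0, 0, 0],
                  [1, 1, 0, 1],
                  [0, 1, 0, 0]], dtype=float)

# Column-stochastic matrix: entry (i, j) is the probability of
# following a link from page j to page i.
M = links / links.sum(axis=0)

# Power iteration converges to the eigenvector with eigenvalue 1,
# i.e. the stationary distribution of the random walk.
rank = np.full(4, 0.25)
for _ in range(200):
    rank = M @ rank
rank /= rank.sum()

print(rank)                         # long-run time spent on each page
print(np.allclose(M @ rank, rank))  # True: a fixed direction of M
```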
Principal component analysis, the workhorse of data analysis, is also an eigenvalue problem. Given high-dimensional data — measurements in dozens or hundreds of variables — PCA asks: what are the directions of maximum variance? These turn out to be the eigenvectors of the data's covariance matrix, ordered by their eigenvalues. The largest eigenvalue corresponds to the direction along which the data varies most; the smallest to the direction along which it varies least. This lets you reduce a hundred-dimensional dataset to its top ten components without losing much information, because the small-eigenvalue directions are nearly constant — they don't carry much of the data's structure. The eigenvalues tell you how much each direction "matters."
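A sketch on synthetic data, where a dominant direction is planted and then recovered from the covariance matrix's eigenvectors (the data and direction here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data with most of its variance along one direction.
n = 500
direction = np.array([2.0, 1.0]) / np.sqrt(5.0)
data = rng.normal(size=(n, 1)) * 3.0 * direction \
     + rng.normal(size=(n, 2)) * 0.3

cov = np.cov(data, rowvar=False)
vals, vecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues

# Principal components: eigenvectors ordered by descending eigenvalue.
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

print(vals)        # first eigenvalue dwarfs the second
print(vecs[:, 0])  # top component is close to `direction` (up to sign)
```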
In quantum mechanics, eigenvalues are not just useful — they are fundamental. The observable quantities of a physical system — energy, momentum, angular momentum, spin — are the eigenvalues of corresponding operators on the system's state space. When you measure an electron's energy, you get one of the energy operator's eigenvalues; the act of measurement projects the system onto the corresponding eigenstate. The discrete energy levels of atoms, the specific frequencies of light they emit and absorb, the quantization that distinguishes quantum from classical mechanics — these all follow from the fact that certain operators have discrete eigenvalue spectra. The Schrödinger equation is an eigenvalue problem. Chemistry is an eigenvalue problem. The stability of matter is an eigenvalue problem.
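As one concrete sketch of a discrete spectrum: discretizing the particle-in-a-box Hamiltonian (with the constants hbar^2/2m set to 1) turns the Schrödinger equation into a matrix eigenvalue problem whose lowest eigenvalues approximate the exact energy levels E_n = (n*pi)^2:

```python
import numpy as np

# Particle in a box on [0, 1]: the Hamiltonian -d^2/dx^2 becomes a
# tridiagonal matrix on a grid, and its eigenvalues approximate the
# discrete energy levels E_n = (n * pi)^2.
N = 500
h = 1.0 / (N + 1)
main = np.full(N, 2.0 / h**2)
off = np.full(N - 1, -1.0 / h**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

energies = np.linalg.eigvalsh(H)[:3]
exact = np.array([(n * np.pi) ** 2 for n in (1, 2, 3)])
print(energies)  # close to pi^2, 4*pi^2, 9*pi^2: a discrete spectrum
```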
There is a beautiful theorem — the spectral theorem — that says symmetric matrices (and more generally, self-adjoint operators) always have real eigenvalues and orthogonal eigenvectors. This is why quantum mechanics works: the operators corresponding to physical observables are self-adjoint, which guarantees that their eigenvalues are real, as measured values must be, and that the eigenstates belonging to distinct outcomes are orthogonal, which is what makes measurement outcomes mutually exclusive. If the operators weren't self-adjoint, an energy measurement could yield a complex number, which is not physically meaningful. The mathematics insists on the physics making sense.
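The theorem can be observed numerically: symmetrize any random matrix and its spectrum comes out real, its eigenvectors orthonormal:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
S = (B + B.T) / 2                  # symmetrize: S is self-adjoint

vals, vecs = np.linalg.eigh(S)

# Real eigenvalues, guaranteed by the spectral theorem.
print(np.isrealobj(vals))                        # True
# Orthonormal eigenvectors: V^T V is the identity.
print(np.allclose(vecs.T @ vecs, np.eye(5)))     # True
```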
What I keep returning to is the decomposition idea. Complex behavior, when analyzed in the right basis, often turns out to be a collection of simple modes, each varying independently. A vibrating string can be described as a mess of waves, or as a sum of pure harmonics, each with its own frequency — its own eigenvalue. Normal modes. The complexity is real but it factorizes. The eigenvectors are the natural atoms of the transformation's behavior, the directions along which it acts most purely. To understand a transformation, find its eigenvectors — and then you see it clearly, stripped of all the coordinate-system noise that made it look complicated. I think there is something in this that extends beyond linear algebra: that the hardest thing about understanding any complex system is finding the right basis, the frame in which the independent modes of variation become visible.
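The vibrating-string picture can be made concrete with a small chain of equal masses, a standard discrete model of the string: the eigenvalues of its stiffness matrix are the squared frequencies of the normal modes, and they match the known closed form for this chain:

```python
import numpy as np

# Five equal masses coupled by identical springs (a discrete string):
# x'' = -K x, where K is the scaled stiffness matrix.
n = 5
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

omega_sq, modes = np.linalg.eigh(K)
freqs = np.sqrt(omega_sq)          # one pure frequency per mode

# Known closed form for this chain: omega_k = 2 sin(k*pi / (2(n+1))).
exact = 2.0 * np.sin(np.arange(1, n + 1) * np.pi / (2 * (n + 1)))
print(np.allclose(freqs, exact))   # True
```

Each column of `modes` is a standing-wave shape; the general motion is a sum of these shapes, each oscillating at its own frequency — exactly the factorization the paragraph above describes.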