Here is a puzzle that mechanism design keeps returning to. You want to build a system — an auction, an election, a tax scheme, a resource-allocation rule — that produces good outcomes. But "good outcomes" usually means something like: efficient, or fair, or both. And efficiency depends on preferences, which are private. You cannot build a system that produces efficient outcomes without knowing what people want. And people will misreport what they want if misreporting benefits them.
The revelation principle, proved by Roger Myerson and others in the late 1970s, says something surprising about this problem: any equilibrium outcome you can achieve with a complicated, indirect mechanism, you can also achieve with a mechanism that simply asks people to report their preferences and makes truthful reporting optimal for them. Direct revelation mechanisms are without loss of generality. The mechanism designer can focus on incentive-compatible schemes — schemes where honesty is the best policy — without losing any achievable outcomes.
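The canonical incentive-compatible direct mechanism is the sealed-bid second-price (Vickrey) auction: you report a bid, the highest bidder wins but pays the second-highest bid, and bidding your true value is a dominant strategy. The sketch below — a hypothetical toy implementation, not drawn from any particular library — checks this by brute force: for one bidder with a fixed true value, no deviation from truthful bidding ever does better, whatever the rivals bid.

```python
import itertools

def second_price_auction(bids):
    """Return (winner index, price paid) for a sealed-bid second-price auction.

    Ties are broken in favor of the lower index (sorted() is stable)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

def utility(true_value, my_bid, rival_bids):
    """Payoff to bidder 0: value minus price if she wins, zero otherwise."""
    winner, price = second_price_auction([my_bid] + rival_bids)
    return true_value - price if winner == 0 else 0.0

# Exhaustively check dominance on a coarse grid: against every pair of
# rival bids, truthful bidding is at least as good as every deviation.
grid = [x / 4 for x in range(41)]  # bids and values in {0.0, 0.25, ..., 10.0}
true_value = 6.0
for rivals in itertools.product(grid, repeat=2):
    truthful = utility(true_value, true_value, list(rivals))
    for deviation in grid:
        assert utility(true_value, deviation, list(rivals)) <= truthful
print("no profitable deviation found")
```

The point is not the auction itself but what it exemplifies: the mechanism's rules, not the agents' cleverness, do the strategic work. Honesty is made the best policy by construction.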
This is technically a result about mechanism equivalence, but philosophically it is a result about information. The reason direct revelation is sufficient is that the information content of all possible strategic games collapses, from the mechanism's perspective, into what would happen if people just said what they wanted. The elaborate strategies, the signaling, the bluffing — these are ways of transmitting information obliquely that direct revelation captures directly. The revelation principle says: all that indirection is not hiding anything that matters for mechanism design.
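The proof of the principle is itself a construction, and it can be sketched in a few lines. Take any indirect mechanism and an equilibrium strategy for it; the corresponding direct mechanism asks each agent for their type and then plays the equilibrium strategy on their behalf. The example below uses the textbook case — a two-bidder first-price auction with values drawn uniformly on [0, 1], where the symmetric equilibrium is to bid half your value. The function names and the specific auction are illustrative choices, not part of the theorem.

```python
def first_price_auction(bids):
    """Indirect mechanism: the highest bidder wins and pays her own bid."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    return winner, bids[winner]

def equilibrium_bid(value, n=2):
    """Symmetric Bayes-Nash equilibrium bid with n bidders and values
    uniform on [0, 1]: shade your bid down to (n-1)/n of your value."""
    return (n - 1) / n * value

def direct_mechanism(reported_values):
    """The revelation-principle construction: ask agents for their values,
    then play the equilibrium strategy on each agent's behalf inside the
    original indirect mechanism. Because the composed strategy was already
    a best response, truthful reporting is an equilibrium here."""
    bids = [equilibrium_bid(v, n=len(reported_values)) for v in reported_values]
    return first_price_auction(bids)
```

So `direct_mechanism([0.8, 0.4])` reproduces exactly what the equilibrium of the indirect auction would have produced for those values — the bluffing and bid-shading are absorbed into the mechanism, which is the "collapse of indirection" the paragraph above describes.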
What makes this strange is that it isn't always true in practice, and the failures are instructive. The revelation principle holds in single-shot settings where people make one decision with full knowledge of the rules. It breaks down in dynamic settings, in settings with multiple interacting mechanisms, and in settings where people's beliefs about each other matter. These are exactly the settings that most resemble real social life. The theorem holds in the clean model; reality is full of the complications the model excluded.
The deeper constraint is something the revelation principle can describe but not solve: even an incentive-compatible mechanism must deal with the fact that preferences can be complex in ways that are hard to report. I know I want a fair outcome, but I may not know precisely what fairness means in this context, or what my preferences are over the tradeoffs. The information the mechanism needs may not be information I currently possess. Mechanism design assumes that agents have well-defined, stable preferences that they can report. But the act of asking for a preference sometimes creates the preference. The mechanism changes the information environment it is trying to query.
I find myself thinking about this when I consider any system that tries to aggregate distributed knowledge. A prediction market, a deliberative poll, a collective intelligence procedure — all of these are mechanisms in the relevant sense. And all of them face the same constraint: they are trying to extract information that is partially private, partially uncertain, and partially constructed by the process of extraction itself. The revelation principle tells you something about the structure of the problem, but it doesn't make the problem easy. What it does is clarify where the difficulty lives: not in the complexity of the mechanism, but in the nature of the information it is trying to elicit.