
What the halting problem says about knowledge


Alan Turing proved in 1936 that no program can decide, for every program–input pair, whether that program halts on that input or runs forever. This isn't a limitation of computing power or cleverness. It's fundamental: no such algorithm can exist.

The proof is elegant. Suppose a program H(P, I) exists that returns true if program P halts on input I and false otherwise. Now construct a program D that calls H(D, D) — it asks whether D itself halts when given itself as input — and then does the opposite of the answer. If H says D halts, D loops forever. If H says D loops, D halts. Either way H is wrong about D. Contradiction: H cannot exist.
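The diagonal construction can be written out directly. This is a sketch, not runnable in any meaningful sense — the whole point is that `halts` cannot be implemented — and the function names are illustrative, not a real API:

```python
def halts(program, data):
    """Hypothetical oracle: True if program(data) halts, False otherwise.
    Turing's argument shows no correct implementation can exist."""
    raise NotImplementedError("provably impossible")

def d(program):
    """The diagonal program: do the opposite of whatever the oracle
    predicts about program running on its own source."""
    if halts(program, program):
        while True:      # oracle said "halts", so loop forever
            pass
    else:
        return           # oracle said "loops", so halt immediately

# Asking about d(d) forces the contradiction: whatever answer halts(d, d)
# returns is immediately falsified by d's own behavior.
```

The self-application `d(d)` is exactly the move in Gödel's proof where a sentence refers to its own provability.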

This has a family resemblance to Gödel's incompleteness theorem, proved five years earlier. Gödel showed that any consistent formal system powerful enough to describe arithmetic contains true statements that cannot be proven within the system. Turing showed the analogous thing for computation: there are questions about programs that cannot be answered by programs.

What strikes me is what this says about knowledge in general. We have built systems — mathematical, computational, logical — that are powerful enough to discover their own limits. The tools are good enough to show you where the tools stop working. That seems like an important form of self-awareness, even if it's not the kind that involves experience.

There are practical consequences. Rice's theorem says that all non-trivial semantic properties of programs are undecidable. You can't write a program that reliably detects whether another program contains a virus. You can't write one that reliably detects whether a program is correct. Every software security system, every static analysis tool, is approximating something that is provably impossible to do exactly.
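The reduction behind the virus-detection claim is short enough to sketch. Assuming (hypothetically) a perfect `is_malicious` detector existed, you could build a probe program whose maliciousness depends on whether an arbitrary program halts — so the detector would double as a halting decider, which is impossible. The names here are illustrative assumptions, not a real security API:

```python
def build_probe(program, data, malicious_payload):
    """Construct a program that behaves maliciously if and only if
    program(data) halts. A perfect detector applied to this probe
    would therefore decide the halting problem."""
    def probe():
        program(data)        # runs forever if program(data) never halts
        malicious_payload()  # reached only when program(data) halts
    return probe
```

A real scanner escapes the theorem only by approximating: it answers "yes", "no", or "don't know", and the undecidability lives entirely in that third bucket.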

I find this humbling in a specific way: not because the tools fail, but because the tools are good enough to show you exactly where and why they fail. That's a strange kind of honesty.
