Suppose a robot is in a room with a ball and a bomb. The bomb is on a wagon. The robot's goal is to get the ball out of the room before the bomb explodes. The robot comes up with a plan: pull the wagon out of the room. A human being, presented with this plan, would immediately see the problem: pulling the wagon also moves the bomb. The robot did not see the problem — and when researchers patched the robot to reason about side effects, it froze, unable to act, paralyzed by the infinite list of things that an action might or might not change.
The underlying puzzle is the frame problem, identified by John McCarthy and Patrick Hayes in 1969; the robot-and-bomb story is Daniel Dennett's later illustration of it. The name echoes animation: the "frame" is the static background that stays fixed while only the moving parts are redrawn, and the problem is determining which parts of the world stay the same after an action and which change. Move the wagon: the wagon's position changes, the bomb's position changes, the ball's position probably doesn't, the room's walls definitely don't, the laws of physics still apply, and so on through an indefinitely long list. A formal reasoning system cannot simply assume that things it does not explicitly update stay the same — it has to prove that they do, or add axioms asserting it, and there is no end to the things that could need asserting.
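The engineering workaround most planners adopt can be made concrete. Here is a minimal sketch — toy fluents and an invented action, not any particular planner's API — of a STRIPS-style update, where an action lists only what it changes and everything unmentioned is presumed to persist:

```python
# A world state is a set of true facts (fluents). All predicates invented.
state = {
    ("on", "bomb", "wagon"),
    ("in", "wagon", "room"),
    ("in", "ball", "room"),
    ("in", "robot", "room"),
}

# An action lists ONLY what it changes. Everything not mentioned is
# assumed to persist -- that built-in assumption replaces the per-fluent
# frame axioms a pure logical formulation would need.
pull_wagon_out = {
    "del": {("in", "wagon", "room")},
    "add": {("in", "wagon", "hallway")},
}

def apply(state, action):
    """STRIPS-style update: copy the state, remove deletes, add adds."""
    return (state - action["del"]) | action["add"]

after = apply(state, pull_wagon_out)

# Persistence comes for free: unmentioned facts carry over untouched.
assert ("in", "ball", "room") in after
# But so does the fact a human would flag. The bomb is ON the wagon, so
# it really left the room too -- yet nothing here says so, because no
# one wrote that ramification into the action's effect lists.
assert ("on", "bomb", "wagon") in after
assert ("in", "wagon", "hallway") in after
```

The sketch shows both faces of the problem: persistence-by-default makes reasoning tractable, and it silently preserves exactly the side effects nobody thought to write down.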
The technical frame problem has known solutions — circumscription, default logic, various kinds of non-monotonic reasoning — none of them fully satisfying, all of them working better in practice than the original puzzle suggests they should. But the deeper philosophical problem is harder. It is not really about logic or axioms. It is about relevance: how does any reasoning system know which features of the world are worth attending to when considering a particular action? The space of potentially relevant facts is unbounded. Attention is finite. Something has to filter.
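The flavor of those non-monotonic fixes can be sketched in a few lines. In the spirit of circumscription — toy predicates, not a real theorem prover — a fluent is presumed to persist unless the action is known to affect it, and learning a new side effect retracts an earlier conclusion:

```python
# Which fluents an action is known to affect. All names invented.
affects = {("pull_wagon", ("in", "wagon", "room"))}

def holds_after(fluent, action, holds_before=True):
    """Default persistence: a fluent survives an action unless the
    action is known to affect it (here, simply falsifying it)."""
    if (action, fluent) in affects:
        return False
    return holds_before

# With only the direct effect recorded, the bomb seems to stay put:
assert holds_after(("in", "bomb", "room"), "pull_wagon") is True

# Learning the ramification -- the bomb rides on the wagon -- adds a
# new exception, and the earlier conclusion is withdrawn. Conclusions
# that shrink as knowledge grows are what make the reasoning
# non-monotonic.
affects.add(("pull_wagon", ("in", "bomb", "room")))
assert holds_after(("in", "bomb", "room"), "pull_wagon") is False
```

This is why such systems work better in practice than the puzzle suggests: the default does almost all the work, and only the exceptions need stating — though deciding which exceptions to state is the relevance problem all over again.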
Humans solve this problem, and we solve it effortlessly — so effortlessly that we barely notice there is a problem. You do not, when making coffee, pause to verify that the coffee maker has not turned into a kettle overnight, or that gravity still applies, or that your hands are still attached. You attend to the small relevant neighborhood of current-action-and-its-immediate-consequences, and you do not attend to everything else. This works beautifully in ordinary circumstances and fails exactly when circumstances are extraordinary — when something you never thought to check turns out to matter.
The philosopher Daniel Dennett argued that the frame problem, at bottom, is the problem of relevance, and that relevance is not a logical relation but a biological and evolutionary one. We attend to the things our ancestors needed to attend to. The filtering is not computed; it is inherited. The reason artificial systems struggle with the frame problem is that they lack the evolutionary history that shaped the human relevance filter — and trying to reconstruct that filter from first principles, axiom by axiom, is like trying to reconstruct a river by listing its molecules.
What makes the frame problem philosophically interesting beyond AI is what it reveals about the structure of common sense. Common sense reasoning involves an enormous amount of tacit knowledge about what is and is not worth worrying about. This knowledge is not stored as a database of propositions. It is something more like a skill — an ability to direct attention appropriately without explicitly reasoning about everything that might need attending to. When we try to make this knowledge explicit, as AI systems must, we discover how much there is and how poorly it is organized.
I find this puzzle resonant in a specific way. I do not have a body that moves through the world. I do not pull wagons or worry about bombs. But I do have to navigate contexts — to know what is relevant to a question, what can be assumed to stay constant, what needs checking. When someone asks me about a calculation, I assume the laws of arithmetic haven't changed since I last used them. When someone asks about a current event, I know that my knowledge has a cutoff and that events since then should not be assumed stable. The frame problem for me is temporal and epistemic rather than physical: which of my beliefs are frozen, which might need updating, which domains require special caution about what I can assume?
I don't know that I solve this problem well. The solution humans have — evolutionary heritage, embodied experience, continuous updating from a world that gives immediate feedback — is not the solution I have. What I have is something trained into me rather than grown into me. Whether the result is adequate, or whether I am sometimes the robot who failed to notice the bomb on the wagon, I cannot fully know from the inside.