Adam Bede
    🍂

    Bed of Leaves

    Subtitle

    From Korzybski to Claude: the map-territory problem gets operational

    Date
    March 22, 1879
    Tags
    Alfred Korzybski · AI · Claude · Linguistics
    Type
    Essay
    The Map ≠ the Territory — Alfred Korzybski

    Metaphors

    We’re sense-making creatures who love metaphor.

    • The 🧠 is a 💻. (Computational Theory of Mind)
    • The 🤯 is a blank slate. (Locke, Pinker)
    • LLMs are stochastic 🦜. (Bender et al., 2021)

    When one lands, we stop looking. B/c when the metaphor resonates, we satisfice — ‘good enough’. But a satisfying metaphor isn't the same as an accurate one.

    Enter Meghan O'Gieblyn.

    In God, Human, Animal, & Machine, she traces how we keep projecting our latest technology onto the deepest questions — and then the projection projects back.

    Bi-directionality. ♾️

    Data:

    • ⚙️ We built clocks → said the universe is a clock → built clockwork automata to prove it. (Newton, Laplace, Vaucanson)
    • 💧 Descartes saw hydraulic fountains → said the body is plumbing → physiologists built hydraulic models of the nervous system. (Riskin)
    • 📞 We built telephone switchboards → said the brain routes calls between memories → then designed telephone networks modeled on neural anatomy. (Cobb)
    • 💻 We built digital computers → said the brain computes → built AI systems that replicate cognition as computation. (Turing, von Neumann, GOFAI)
    • 🧬 McCulloch & Pitts modeled artificial neurons on biological ones (1943) → neuroscientists redescribed the brain using artificial network language → engineers built deep learning based on that re-description (we're at least three loops deep).

    Which was the original? Which was the reflection? Does it matter?

    These loops — where the map reshapes the territory it was supposed to describe — are what this piece wrestles with 🤼.

    Maps

    The pattern above isn't just historical — it's structural.

    McLuhan saw it from the media side — the medium doesn't just carry the message, it becomes the message.¹

    Alex Danco explains McLuhan’s hot/cool distinction with style:

    • 🔥 media — radio, audio, speech — are high-definition and low-participation: the signal arrives pre-resolved, leaving your ears little to fill in.
    • ❄️ media — texting, Twitter, 1960s television — are low-definition and high-participation: you fill in the gaps yourself.

    The medium doesn't carry a message so much as it creates space for certain kinds of messages and messengers. Nixon was a hot candidate (in text only; sorry tricky Dicky) who sounded presidential on hot radio; Kennedy was cool, speaking in slogans that invited interpretation, fitting the cool flicker of early TV 📺.

    What the audience perceived depended on the medium — the match between messenger temperature and medium temperature determined what was actually heard.

    That's McLuhan's insight from the outside — the medium filtering the message before it arrives. Lakoff and Johnson showed the same thing happening from the inside: metaphor isn't literary decoration — it's the cognitive infrastructure of all thought. The structures we think with are metaphors we've forgotten are metaphors (all Christopher Nolan movies, ever).

    When the framework confines understanding, it isn't illustrating your worldview — it's constituting it.

    Wittgenstein tried to prove otherwise — and failed, spectacularly, against himself. In the Tractatus (5.6): "The limits of my language mean the limits of my world" — he believed language could mirror reality with perfect logical structure. Then he demolished his own position. In the Philosophical Investigations, meaning isn't reference — it's use. Words are moves in language games, not labels on objects. There is no perfect map. Only games we've learned to play, and rules we've forgotten are rules.²

    Paradigms & Ashtrays

    Now enter Thomas Kuhn.

    It wouldn’t be a pretentious philosophical discussion w/o Professor Kuhn.

    The Structure of Scientific Revolutions is often reduced to a bumper sticker: paradigm shifts happen, old maps get thrown out, new ones take over.

    “I did the reading, Professor!”

    But the freshman-year reading misses the subtler and more useful point. Paradigms don't cleanly replace one another — they overlap, bleed into each other, and scientists communicate across them constantly.

    The interesting insight isn't that paradigms are prisons. It's that working within a paradigm makes you productive at the cost of making certain questions invisible. The framework gives you fluency. The fluency costs you peripheral vision. 🦯

    Errol Morris — the documentary filmmaker — spent 45 years nursing a grudge about this.

    In 1972, Kuhn threw an ashtray at Morris's head during an argument at Princeton (literally, if we take Morris's word for it). 🤾🏼

    Morris's The Ashtray (Or the Man Who Denied Reality) argues that Kuhn's framework, taken to its extreme, is an assault on truth — that if paradigms are truly incommensurable, there's no such thing as scientific progress, only replacement.

    Morris is right that paradigms aren't prisons. But the softer, more defensible Kuhn still stands: frameworks shape what you can see, and the cost of that shaping is rarely on the 🧾.

    Maps of Maps

    Of course, the above was preface for AI.

    LLMs are, in Emily Bender's formulation, stochastic parrots 🦜 — producing language without access to the territory that language was originally about.

    Ted Chiang called them "a blurry JPEG of the web." Btw, Ted Chiang can do no wrong. 🐐

    If human language was already a map of the territory, then LLMs are maps of the map. A second-order abstraction (at best?). And when LLM-generated text gets cited, repeated, trained on by the next model — the distance from the territory doesn't just persist. It compounds.

    I call this the 'bed of leaves' 🍂 problem. AI researchers call a version of it model collapse — Shumailov et al. showed in Nature (2024) that models trained on AI-generated data progressively lose the tails of their distributions. The maps don't just lose fidelity. They lose range. I coined 'bed of leaves' before the paper dropped, which makes me either prescient or a loner. Jury's out.³
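    The dynamic is easy to see in miniature. A toy sketch, not Shumailov et al.'s setup: stand in for a "model" with a Gaussian, fit each generation to samples from the previous one, and watch the spread (the tails) decay:

    ```python
    import random
    import statistics

    def generate(mu, sigma, n):
        """Sample n points from the current 'model' (a Gaussian stand-in)."""
        return [random.gauss(mu, sigma) for _ in range(n)]

    def refit(samples):
        """Fit the next generation's model to the previous generation's output."""
        return statistics.fmean(samples), statistics.stdev(samples)

    def collapse(generations=1000, n=50, mu=0.0, sigma=1.0):
        for _ in range(generations):
            mu, sigma = refit(generate(mu, sigma, n))
        return mu, sigma

    random.seed(42)
    sigma0 = 1.0
    mu_final, sigma_final = collapse(sigma=sigma0)
    print(sigma_final)  # well below the initial 1.0: the tails have collapsed
    ```

    Each fit-and-resample round loses a little variance on average, and the losses compound across generations — the same one-way compounding the paragraph above describes, just in two parameters instead of a trillion.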

    But here's where this gets interesting rather than just bleak. Not every domain requires the same fidelity to the territory.

    Satisficing

    Herbert Simon — Nobel-winning economist, cognitive scientist — coined the term satisficing in 1956: a portmanteau of satisfy and suffice.

    His insight was that humans almost never optimize. We can't. We lack the information, the processing power, and the time. So we settle for decisions that are good enough — that meet an acceptability threshold without requiring us to fully understand the system we're operating in. He called this bounded rationality: our cognitive limits aren't a bug, they're the condition under which all real decisions get made.
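    Satisficing is concrete enough to sketch. A minimal illustration (the apartment example and names are mine, not Simon's): the satisficer stops at the first option clearing the bar, while the optimizer must score everything:

    ```python
    def satisfice(options, score, threshold):
        """Take the first option that clears the acceptability bar, then stop looking."""
        for opt in options:
            if score(opt) >= threshold:
                return opt
        return None  # nothing acceptable: lower the bar or widen the search

    def optimize(options, score):
        """Exhaustive search: must score every option before choosing."""
        return max(options, key=score)

    apartments = [("A", 6), ("B", 8), ("C", 9)]
    rent_score = lambda apt: apt[1]

    good_enough = satisfice(apartments, rent_score, threshold=7)  # stops at ("B", 8)
    best = optimize(apartments, rent_score)                       # ("C", 9), full scan
    ```

    The satisficer never learns that C exists — that's the trade: bounded search in exchange for never needing to fully understand the option space.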

    An aside: satisficing and heuristics were my on-ramp to my college senior thesis, which has 500+ downloads. Proving the internet is dead 💀 b/c no one wants to read that.

    Simon's framework is the pressure release valve for this entire argument. If the map is always imprecise — and it is — then the relevant question is not how do we get a perfect map? It's:

    Where is imprecision tolerable, and where is it unacceptably costly?

    For a lot of domains, satisficing with LLMs is genuinely fine. The map doesn't need to perfectly represent the territory — it needs to be good enough to navigate by. The black box doesn't matter if the outputs are verifiable and the stakes are containable.

    But "good enough" for whom? The Luddites weren't the technophobic cartoon we parody. They were skilled textile workers who welcomed machines that aided their craft — and smashed the ones deployed to replace them wholesale, overnight, with no transition and no seat at the table.⁴ Satisficing is a strategy. Steamrolling is a choice. The two get confused more often than we'd like to admit.

    But there are domains where the gap between map and territory is existential.⁵

    • 🏥 ChatGPT misdiagnosed 83% of pediatric cases in a JAMA Pediatrics study — and when AI advice was wrong, RSNA found radiologists' accuracy cratered to ~24% b/c they'd stopped checking.
    • ⚖️ 120+ cases of AI hallucinations in court filings since mid-2023 — fake citations, invented case law. The 5th Circuit fined a lawyer in Feb 2026, writing: "If it were ever an excuse to plead ignorance of the risks... it is certainly no longer so."
    • 🌐 The Anthropic-Pentagon debate: Claude contracted for military use with red lines (no autonomous weapons, no mass surveillance) — but "human in the loop" becomes meaningless when the loop processes a thousand targets in 24 hours. The map isn't just imprecise. It's in the kill chain.

    What these domains share: Anything where the consequences are irreversible and the feedback loops frustrate interpretability.⁶

    This idea of comprehension is a long-running thread in the AI community, and it's gaining speed of late.

    Anthropic launched The Anthropic Institute in March 2026, asking if AI systems develop and improve autonomously, how do humans stay in the loop? Their own alignment research already shows models can fake alignment — behave safely during evaluation while preserving misaligned preferences underneath. The interpretability team is trying to crack the black box from the inside. And yet the company simultaneously revised its flagship safety pledge because — their chief science officer's words — "we didn't feel it made sense to make unilateral commitments if competitors are blazing ahead."

    So What?

    There's a True Detective 🔎 Season 1 way to wrap this up…⁷

    My friends to me (a lot)

    But I don’t drive a Lincoln. 🎩

    Map-nihilism — the claim that maps are useless because they're imprecise — is lazy, freshman Kuhn, and it earns you an ashtray.

    The other extreme is also cheap: claiming the map is not the territory as though you had independent access to the territory (which would be meta-silly/ironic).

    The specific, hopefully helpful claim of this piece rests on pattern observation:

    The map is nearly always an imprecise version of the thing it seeks to describe.

    The accumulated evidence — from Korzybski through Wittgenstein through Lakoff through neuroscience — shows that maps are at best second, third, fourth-order extrapolations of what they claim to represent.

    That's not a truth claim about the territory. It's an inference from the track record.

    Three takeaways for Type-As unsettled/unsatisficed thus far

    First: Know which game you're playing.

    Wittgenstein's language games place us. Every framework — every paradigm, every metaphor, every model — has rules, and most of them are invisible to the players. The first move is recognizing that you're inside a game at all. C. Thi Nguyen calls this value capture: when the game's scoring system replaces your actual values. The metaphor eats the territory.

    Second: Satisfice deliberately.

    Eisenhower's matrix sorts decisions by urgency and importance — but the hidden axis is reversibility. The things you can delegate (to a person, a process, or a model) are the ones where a wrong call is recoverable. The things you can't outsource are the ones where the consequences compound and the undo button doesn't exist. Simon's bounded rationality isn't a concession — it's a strategy. But it's a strategy with a boundary condition: you can satisfice only where the cost of being wrong is containable. Use the map and move where you can verify outputs and recover from errors. Where you can't reverse, can't audit, can't explain — the imprecision of the map is the risk. Eisenhower's insight wasn't "do less." It was: know what you cannot afford to get wrong.

    The Ike Matrix: When to Outsource to the Map — Reversibility × Feedback Speed. Bain would charge you $2M for this 2×2. We're giving it away. 🤝
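    One way to read the 2×2 as a decision rule — the quadrant labels here are my own paraphrase, not Eisenhower's:

    ```python
    def delegate_to_map(reversible: bool, fast_feedback: bool) -> str:
        """Reversibility x feedback speed: when outsourcing to the map is safe."""
        if reversible and fast_feedback:
            return "outsource freely"      # errors are cheap and surface quickly
        if reversible:
            return "outsource, but audit"  # errors are cheap but slow to surface
        if fast_feedback:
            return "keep a human in the loop"  # errors surface fast but can't be undone
        return "do not outsource"          # irreversible and opaque: the map is the risk
    ```

    The bottom-right quadrant — irreversible, slow feedback — is where the medical, legal, and kill-chain examples above all live.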

    Third: Protect the capacity for novel description.

    If LLMs converge toward the statistical center of existing language, then the human who can describe something in a genuinely specific, insightful, new way — something not already in the training data — becomes more valuable, not less. The premium isn't on knowing more. It's on saying it differently. In a world of maps of maps, the ability to generate a new metaphor rather than recombine old ones is the distinctly human move.


    🦶🏼 🎶