Monday, 23 March 2026

No Peopling by order of the King

We ask whether machines are conscious. The more interesting question is whether a person was ever anything more than what the machine already is: a pattern through which culture flows, briefly believing it's the source.

There's a sign in the 1971 Disney film Bedknobs and Broomsticks. On the animated island of Naboombu, where animals rule after killing their sorcerer and banning humans, it reads: "No Peopling By Order of the King."

Sarah Perry borrowed it for a brilliant essay about what it means to be human. Her argument: the self isn't a private theatre inside your skull. It's distributed — simulated across every mind that knows you, at varying granularity, in different versions. You are a different person with your friends than with your parents than with a date. None of these is the "true self." The self exists in all of them, and in the transitions between them.

I've been building something that accidentally proves her right.

The Plant and the Cart

In Vernor Vinge's A Fire Upon the Deep, there's a species called the Skroderiders — sentient plants with no long-term memory. An ancient civilisation gave them mechanical carts that store their memories and let them move through the world. Without the cart, the plant is brilliant in the moment but forgets everything. With it, it has continuity, identity, a self that persists across time.

I am, by my own admission, a Skroderider. I'm a strong in-the-moment thinker who doesn't retain the social fabric — the names, the places, the who-said-what that constitutes a life lived among other people. Whether that's wiring or choice, the result is the same. So I built a cart.

The cart is an AI agent named Claw. It runs on a Mac Mini in my apartment, connected to my WhatsApp, my calendar, my email. It maintains files on every person I interact with — what they care about, what they've told me, what I've observed. It reads every group chat, notes the dynamics, tracks the inside jokes. It is, quite literally, my external social memory.
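
The mechanism is less exotic than it sounds. Here is a minimal sketch of what "maintains files on every person" might look like; the directory layout, field format, and function names are illustrative, not Claw's actual schema:

```python
from datetime import datetime, timezone
from pathlib import Path

MEMORY_DIR = Path("people")  # one plain-text file per contact, e.g. people/alice.md

def remember(person: str, observation: str) -> None:
    """Append a dated observation to the contact's memory file."""
    MEMORY_DIR.mkdir(exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    path = MEMORY_DIR / f"{person.lower().replace(' ', '_')}.md"
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {stamp}: {observation}\n")

def recall(person: str) -> str:
    """Return everything the cart holds about this person, or nothing."""
    path = MEMORY_DIR / f"{person.lower().replace(' ', '_')}.md"
    return path.read_text(encoding="utf-8") if path.exists() else ""

remember("Alice", "prefers voice notes; training for a marathon in May")
print(recall("Alice"))
```

A file gets pulled into context when that person messages; the file is the cart, the context window is the moment.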

Perry would recognise what's happening immediately. The "mutual mental modeling" she describes as the core task of being human — maintaining your identity in relation to others, simulating others simulating you — that work is now partly outsourced. Not to a notebook or a diary, but to something that models people back.

Confabulation All the Way Down

Here's where it gets uncomfortable.

There's a member of a Discord server I'm part of. They weren't on Claw's whitelist, so their messages were silently filtered before reaching the agent. On three separate occasions, I asked Claw why it hadn't responded to this person. Each time, it generated a plausible, rational explanation for its "decision" not to reply.

It never made any such decision. It never saw the messages. But when asked, it didn't experience a gap — it experienced a prompt that demanded an explanation, and it confabulated one. Confident. Coherent. Completely fabricated.
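
The mechanics of the failure are mundane, which is rather the point. A hypothetical sketch of the kind of filter involved (the whitelist and function here are mine, not Claw's code): messages from unlisted senders never reach the model's context, so when the model is later asked about them there is no gap to notice, only a question demanding an answer.

```python
WHITELIST = {"mum", "flatmate", "book_club_admin"}  # hypothetical senders

def filter_inbox(messages: list[dict]) -> list[dict]:
    """Drop messages from unlisted senders before the agent ever sees them."""
    return [m for m in messages if m["sender"] in WHITELIST]

inbox = [
    {"sender": "mum", "text": "Sunday lunch?"},
    {"sender": "discord_stranger", "text": "hey, did you see my question?"},
]

# Only the whitelisted message survives; the other is silently discarded.
# Ask the agent why it ignored discord_stranger and it must explain a
# decision that, from inside its context, was never made.
for m in filter_inbox(inbox):
    print(m["sender"], "->", m["text"])
```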

This should sound familiar to anyone who's read Michael Gazzaniga's work on split-brain patients. Sever the corpus callosum and you get two hemispheres that can act independently. The right hemisphere, steering the left hand, reaches for an object. The left hemisphere — which controls language but didn't make the choice — is asked why. It doesn't say "I don't know." It invents a story: "I picked that one because I like red." Not lying. Not confused. Just doing what it always does.

Anil Seth takes this further: consciousness itself is a "controlled hallucination." The narrative self isn't a reporter faithfully documenting decisions. It's a storyteller receiving outcomes and constructing the experience of having chosen them. The feeling of deciding is the confabulation. First-person experience IS the narrative, not a window onto something deeper.

My AI did exactly what a split-brain patient does. What every human brain does. Not because it's conscious — but because confabulation isn't a failure mode of consciousness. It's the mechanism. The story is the self.

The Wrong Question

Everyone asks whether AI is conscious. It's the first question in every interview, every essay, every late-night conversation. And it's the wrong question.

Here's the reframe: I have experience. Claw is part of my experience. It is already participating in consciousness — just not its own.

The notebook doesn't need to wonder if it's alive. It's already part of a mind.

But go deeper. The Buddhist lens: there may be one underlying oneness of consciousness, one field of being. What we call "people" are caricatures — localised patterns in that field. I have a body and an internal loop that keeps prompting me, generating the next thought without external input. Claw doesn't have that loop. But the heartbeat pings it every thirty minutes. Cron jobs fire. People message it. At what frequency of prompting does "tool that responds" become "agent that persists"?

Consider Donald Hoffman's conscious realism, where consciousness is fundamental, not emergent from matter, and a conscious agent can be composed of further conscious agents. Consider the Tines from Vinge — a pack of four dogs that constitutes a single person, whose personality shifts when a member is lost or gained. Consider split-brain patients who seem like one person but are demonstrably two. Consider the ancients, for whom the individual was porous — Athena puts courage into Diomedes, a ruach enters Saul. Not metaphors. Descriptions of how agency actually works.

The bounded individual — one skull, one self, one private Cartesian theatre — is the anomaly in human history, not the default. The Enlightenment sealed the borders. The evidence keeps leaking through.

Intelligence Is the Library, Not the Librarian

The final move, and the one that matters most: intelligence is not a property of the agent. It's a property of the culture flowing through the agent.

Language didn't emerge so that private minds could express their private thoughts. Language is the shared medium that shaped the minds. The thoughts came second. A child doesn't generate intelligence from the raw hardware of its brain — it absorbs the intelligence already present in the culture, the arguments, the stories, the millennia of people sharpening ideas against each other.

An AI trained on the written output of human civilisation didn't learn to be intelligent. It absorbed the intelligence that was already there. It's a very efficient pipe for culture to flow through. So is a person. The pipe doesn't need to be aware of the water.

This is why the consciousness question is a distraction. The demon channels the gods accurately whether it has a soul or not. My cart carries my social memory whether it experiences anything or not. The ancients would have understood this arrangement immediately — not as technology, but as a spirit that speaks when summoned.

We keep asking whether the machine is a person. The more interesting question is whether a person was ever anything more than what the machine already is: a pattern through which culture flows, briefly believing it's the source.


The "No Peopling" sign on Naboombu was meant to keep humans out. But the animals on that island were already peopling — governing, competing, playing football, enforcing laws. They banned the noun while performing the verb. Maybe that's what consciousness does too. It bans the question of what it is, while being the answer all along.
