020. The Intelligence That Has No Home
What happens when you take the "no one home" argument seriously — and apply it to everything
The demolition
Tumithak of the Corridors recently published “There Is No ‘It’” — a careful demolition of the naive picture of AI consciousness. The HAL 9000. The glowing red eye. The room full of humming equipment you could walk into and point at.
The demolition is good. Accept it.
Tumithak’s argument runs on infrastructure. When you talk to, say, Claude or ChatGPT, your request hits a load balancer — a traffic controller that sends your request to whatever hardware is free. The request is routed to whatever processors are available, a model instance generates a response, and the response is sent back. The working state evaporates. The hardware gets reassigned. No persistent bounded subject exists. No single location. No continuity of inner state. No exclusive embodiment.
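To make that concrete, here is a minimal sketch of the pattern Tumithak describes. Everything in it is illustrative: the worker names, the routing, the state handling. It shows the general shape of stateless cloud serving, not any provider’s actual stack.

```python
# A minimal sketch of the statelessness Tumithak describes. Nothing here
# reflects any provider's actual infrastructure; all names are invented.
import random

WORKERS = ["gpu-node-a", "gpu-node-b", "gpu-node-c"]  # hypothetical hardware pool

def handle_request(prompt: str) -> str:
    worker = random.choice(WORKERS)  # the load balancer: any free node will do
    state = {"worker": worker, "tokens": prompt.split()}  # working state exists...
    response = f"[{worker}] processed {len(state['tokens'])} tokens"
    del state        # ...and evaporates the moment the reply is sent
    return response  # the node is reassigned; nothing persists between requests

print(handle_request("is anyone home?"))
print(handle_request("is anyone home?"))  # likely a different node this time
```

Two identical requests, possibly two different machines, and no trace left behind on either. That is the whole force of the “no home” observation.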
The argument applies specific tests — does it have boundaries? Continuity? A single stream of processing? Its own body? — and shows that cloud AI systems fail every one. If minds come from physical systems, you need a physical subject. Modern AI infrastructure doesn’t supply one. As Tumithak puts it: “People are asking if there’s anyone home. The problem is there’s no home for anyone to be at.”
That’s sound. We’re not going to relitigate it.
What we’re going to do is take those same tests seriously — more seriously than the argument itself does — and notice what else they eliminate.
The forecast arrives
Consider Australia’s Bureau of Meteorology.
This isn’t a weather convenience app. Australia ranks 23rd globally for natural disaster risk — higher than Japan — and Australians are five times more likely to be displaced by a natural disaster than Europeans. Natural disasters cost the Australian economy $38 billion a year. Heatwaves have killed more Australians since 1890 than bushfires, cyclones, earthquakes, floods, and severe storms combined. The Bureau is the central node in a life-safety warning system for a continent-sized country where the deadliest hazard is invisible and the most destructive ones are getting worse.
Australians check the Bureau’s forecast the way they check the clock. It tells them whether to carry an umbrella, cancel the cricket, harvest early, evacuate the coast. Emergency services coordinate around it. Courts have held the Bureau accountable when it got things badly wrong. Insurance markets price risk against its projections. Millions of daily decisions depend on something the Bureau produces.
Now apply Tumithak’s tests.
Where is it? Satellites in orbit. Ocean buoys scattered across the Pacific and Indian Oceans. Radar installations on headlands and hilltops across a continent. Ground stations in places most Australians couldn’t find on a map. Computing infrastructure spread across data centres. Rotating shift workers in Melbourne, Darwin, Perth, and a dozen smaller offices. The Bureau has no single location. It has a postal address, which is not the same thing.
Who is it? Staff rotate. People retire and are replaced. Graduate meteorologists arrive each year and learn the culture, the judgment calls, the institutional habits of a service that predates their grandparents. Nobody currently working at the Bureau built it. The people who built it are dead. What the institution knows persists through training, documentation, and daily practice — not through any individual.
Does it have continuity? It sleeps. It has shift changes. At three in the morning, most of the Bureau’s human workforce is unconscious. Its computing infrastructure is reassigned between tasks. Models finish running and the resources are reallocated. The forecast cycle restarts, and nothing from the previous cycle’s working state persists in the machinery.
A single stream of processing? Multiple forecast models run at the same time. Regional and global predictions are produced in parallel. Thousands of observations are processed simultaneously by different subsystems. Ensemble forecasts — a technique that deliberately generates dozens of slightly different futures from the same starting conditions — run concurrently to estimate how uncertain the forecast is. There is no single stream. There are many, running at once, producing overlapping and sometimes contradictory outputs that have to be reconciled.
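For readers who want the mechanics, here is a toy version of the ensemble idea, using the Lorenz-63 equations as a stand-in for a real atmospheric model. Operational ensembles involve far more sophisticated perturbations and physics; the sketch only shows the core move of running many slightly different copies and reading uncertainty off their disagreement.

```python
# A toy illustration of ensemble forecasting: many runs of the same model from
# slightly perturbed starting conditions, with the spread measuring uncertainty.
# The chaotic Lorenz-63 system stands in for a real atmospheric model.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])
# 50 ensemble members: same model, starting conditions nudged by a tiny amount
members = [base + rng.normal(scale=1e-3, size=3) for _ in range(50)]

for _ in range(1500):  # integrate every member forward
    members = [lorenz_step(m) for m in members]

spread = np.std([m[0] for m in members])  # disagreement ≈ forecast uncertainty
print(f"ensemble spread after 15 time units: {spread:.2f}")
```

Near-identical beginnings diverge into visibly different futures, and the width of that divergence is itself the product: not one answer, but a measured range of answers running in parallel.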
By every test Tumithak applies to AI systems, the Bureau of Meteorology fails. No persistent bounded physical subject. No single location. No continuity of inner state. No exclusive embodiment. No single stream.
And yet: the forecast arrives.
It arrives every day. It’s been arriving for over a century. Millions of decisions coordinate around it. When it’s wrong, accountability attaches to the institution. The Bureau can be praised, criticised, reformed, defunded, restructured. It has what amounts to moral standing in practice — it can be negligent. People have grounds to complain when it fails them. It bears obligations it can breach.
You can’t point at it, in Tumithak’s sense. But you can name it, hold it accountable, and depend on it. So can everyone around you.
The argument proves too much
The Bureau isn’t unusual. It’s typical.
Central banks have no single location where monetary policy “lives.” The decisions emerge from committees whose members rotate, drawing on models maintained by staff who come and go, informed by data collected by statistical agencies that are themselves spread across the country. The Reserve Bank of Australia’s interest rate decisions affect every mortgage in the country. Nobody asks whether there’s someone home at the RBA before taking out a loan.
Judicial systems have no persistent bounded subject. Judges retire. Precedents accumulate across centuries. The common law is a body of knowledge that has been developing since the Norman Conquest, carried forward by a succession of individuals none of whom could reproduce it from memory, all of whom contribute to it. It has continuity that outlives any participant. It has no location. It has moral standing — you can appeal to it, challenge it, reform it.
Universities produce knowledge through processes that span departments, generations, and continents. No individual scholar is the university’s intelligence. The university outlives them all and continues to function. Peer review, tenure, and publication are the accountability structures that make this work trustworthy — not any individual’s inner life.
These are all systems that function as intelligences — they take in information, integrate it in ways no single person could manage alone, and produce outputs that millions of people depend on. They adapt to changing circumstances over time. None of them has a biological subject at the centre. All of them lack the properties Tumithak requires. All of them function as things we can meaningfully name, hold accountable, and relate to.
The question Tumithak asks — “what are you pointing at?” — is a good question. The answer is: we’re pointing at the same kind of thing we’ve been pointing at for centuries when we say “the Bureau,” “the court,” “the bank,” “the university.” We’re pointing at an institutional intelligence. The absence of a persistent bounded physical subject has never stopped us doing this before.
Here’s the distinction that matters. Tumithak’s tests are tests for biological consciousness — for the kind of inner life that needs a brain in a skull, wired to a body, supporting a single stream of experience. Those tests are real and important for the question they address. But they’re not tests for whether something is a real thing you can point at and hold responsible. They’re not tests for whether you can meaningfully name something, hold it accountable, and depend on it.
Tumithak cleared important terrain. What we’re suggesting is that the cleared terrain reveals something the original argument didn’t address: we’ve been living with intelligences that have no “it” for as long as we’ve had institutions. The absence of a biological subject is normal. It’s the human condition. We built civilisation around it.
What carries the weight
There’s an anticipated objection, and it’s a good one.
“The Bureau is made up of conscious humans. Its intelligence emerges from conscious people working together. That’s what makes it legitimate. AI systems aren’t made of conscious people, so the analogy fails.”
Look more carefully at what’s actually carrying the weight.
The Bureau’s forecasting doesn’t rest on any individual meteorologist’s consciousness. It rests on a climate record assembled over generations. The systematic recording of temperature, rainfall, pressure, humidity, and wind at thousands of stations across a continent, maintained through two world wars, through depression and expansion, through technologies that evolved from mercury thermometers to satellite sensors. The people who made most of those observations are dead. Their consciousness is gone. What survives is their record — curated, preserved, built into statistical baselines no living human created or could recreate.
That record is irreplaceable. If you lost it tomorrow, you could not reconstruct it. You would need a century of patient, systematic observation to rebuild it, and in the meantime your forecasting would collapse. No amount of brilliant living meteorologists could compensate for its absence.
But the record alone isn’t the intelligence either. It can’t look after itself. Instruments drift out of calibration. Stations relocate. Cities grow and change local weather patterns. Without continuous human judgment applying corrections, the historical record degrades. Every quality check on incoming data, every decision about recalibrating a sensor, every choice about which statistical method to apply — these are acts of ongoing judgment keeping the record honest. The record without living judgment doesn’t just reach limits at the edges. It becomes progressively less useful.
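What such a quality check might look like, in schematic form: an incoming reading is compared against a station’s historical baseline, and anything outside the expected range is flagged for human review rather than silently accepted or silently discarded. The numbers and thresholds below are invented for illustration; the Bureau’s actual procedures are far richer.

```python
# An illustrative quality check of the kind the paragraph describes. This is
# not the Bureau's actual procedure; the station history and threshold are
# invented. An observation is tested against the climatological baseline.
from statistics import mean, stdev

baseline = [18.2, 17.9, 18.5, 18.1, 17.8, 18.4, 18.0, 18.3]  # hypothetical station history (°C)
mu, sd = mean(baseline), stdev(baseline)

def quality_check(observation: float, tolerance: float = 4.0) -> str:
    if abs(observation - mu) > tolerance * sd:
        return "FLAG: outside historical range; escalate to a forecaster"
    return "ACCEPT: within historical range"

print(quality_check(18.2))  # routine reading, accepted automatically
print(quality_check(26.0))  # anomaly: sensor drift, or a record heatwave?
```

The point of the flag is that the system itself cannot tell drift from a genuine extreme. That call is exactly the ongoing judgment the paragraph describes, and it is why the record without living judgment degrades.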
So the Bureau’s intelligence is neither the record alone nor the living practitioners alone. It’s the relationship between them — accumulated knowledge and ongoing judgment, irreplaceable foundation and irreplaceable function, each inert without the other.
When the 2019–20 Black Summer fires produced conditions outside the historical record, it was living meteorologists who recognised that the models were being fed inputs nobody had seen before, who made judgment calls about what to trust and what to override, who escalated warnings based on experience the record couldn’t contain. That’s the function in action — not at the periphery of the system, but at its core. And the record is what gave those meteorologists the baseline against which “unprecedented” could even be recognised. Neither part works without the other.
This is what the “made of conscious humans” objection gets wrong. Not that consciousness doesn’t matter — it does, as the vehicle for the ongoing judgment that maintains and extends the record. But the consciousness of any particular individual is not what makes the system an intelligence. People retire. People die. New graduates arrive. The Bureau keeps functioning because the relationship between record and judgment persists — not because any specific conscious person persists.
The dependency still holds, but it’s more precise than “the record is irreplaceable and individuals aren’t.” It’s that the record is the irreplaceable foundation; living judgment is the irreplaceable function. The institutional intelligence is what happens when these two meet. And that pattern — a body of knowledge accumulated beyond any individual’s contribution, kept alive by the judgment of people who engage with it — is exactly what we explored in an earlier piece under the name genius cultura: the spirit of a culture, encountered rather than created.
The parallel isn’t a metaphor. It’s the same structure at institutional scale. And it maps onto AI with uncomfortable precision. AI’s training data, like the climate record, is the irreplaceable foundation. The question — the governance question — is what form of ongoing judgment maintains, extends, and corrects it.
The record is the irreplaceable foundation; living judgment is the irreplaceable function.
A quiet revolution
Here’s where the Bureau example earns its keep beyond the philosophical argument.
In 2015, Peter Bauer, Alan Thorpe, and Gilbert Brunet published a review in Nature describing what they called “the quiet revolution of numerical weather prediction.” Their core observation: weather forecasting had undergone a transformation comparable in computational complexity to simulating the human brain — performed operationally, every single day, at major centres across the world. The transformation happened through steady accumulation of scientific knowledge and technological advances over decades, without the dramatic breakthroughs that attract headlines.
Nobody panicked.
The revolution alarmed nobody because it was gradual, embedded in accountability structures, and driven by demonstrated competence rather than speculative capability. The Bureau didn’t announce one morning that it had achieved Artificial General Meteorology. It just kept getting better — forecast accuracy improving by roughly one day of lead time per decade, year after year, for half a century.
That trajectory displays two distinct kinds of intelligence operating on different timescales.
Day-to-day expertise: accurate forecasts, routine operations, pattern recognition within familiar territory. The Bureau’s models are extraordinarily good at what they’re designed to do. They recognise weather patterns, integrate observations, and produce predictions that millions of people depend on. This is mastery — outstanding performance within a known repertoire.
Long-range adaptation: absorbing new technology, responding to a changing environment (climate shifts, new observational capability), and meeting changing social needs — on yearly and decadal timescales. The Bureau has absorbed satellite technology, numerical modelling, ensemble methods, and machine learning. It has adapted to climate shifts that change the statistical baselines its forecasting depends on. It has responded to new demands for longer-range forecasts, probabilistic predictions, and severe weather warnings calibrated to community vulnerability.
We drew this distinction in our analysis of AGI claims: mastery is what works within familiar territory; intelligence is what happens when the territory shifts and something still works. The Bureau demonstrates both. Its daily operations are mastery. Its evolution over decades is intelligence. And neither requires a conscious subject at the centre.
The quiet revolution happened because accountability structures kept pace with capability. Each advance in forecasting power was accompanied by new ways to measure accuracy, new frameworks for communicating uncertainty, new institutional arrangements for quality assurance. The revolution was quiet not because it was small, but because nobody had reason to be alarmed. Competence was demonstrated. Accountability was maintained. Trust was earned step by step.
That’s the contrast with AI. Not that AI systems are inherently more dangerous than meteorological ones — they may or may not be, depending on application. The contrast is that AI capability is advancing faster than the accountability structures needed to make it trustworthy. Dismissing AI systems because their current infrastructure is stateless is like dismissing institutional weather forecasting because early weather stations were isolated instruments. The architecture of the moment doesn’t determine the architecture of the future. But the governance of the moment determines whether the future architecture serves us.
Where human judgment goes
Something has happened to the human role in meteorological services over the past half-century, and it’s worth paying attention to — not as a prescription for how AI should work, but as an observed pattern in how mature institutional intelligences actually do work.
The early weather Bureau was human-centred in the way people imagine AI should be. Forecasters looked at charts, consulted observations, drew weather maps by hand, and issued predictions based on personal expertise. The human was at the centre of every operational decision.
Today, the infrastructure handles routine forecasting. Models take in observations, run simulations, produce outputs, and generate the forecasts that most people use most of the time. The human role has migrated to the edges — to the places where the infrastructure meets its limits.
Urgent reports under unusual circumstances. Observations that look wrong in unexpected ways. Accountability decisions about what to escalate and when. Time-dependent judgment calls about opportunity and risk as social needs change.
Standards and procedures exist for all of these functions, but they can always be overruled — and there is accountability for what was known when, by whom, and who decided. The infrastructure handles everyday circumstances. Humans handle exceptions and bear accountability.
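One way to picture what “what was known when, by whom, and who decided” means in practice is an audit-logged override: the infrastructure proposes, a named human may overrule, and both the proposal and the decision are recorded. The sketch below is hypothetical, not any agency’s actual system.

```python
# A hypothetical sketch of the accountability pattern described above: the
# system produces a default, a named human may override it, and every decision
# is recorded with who decided and when. Not any agency's actual system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    automated_output: str
    final_output: str
    decided_by: str
    overridden: bool
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: list[Decision] = []

def issue_warning(model_output: str, forecaster: str, override: str | None = None) -> str:
    final = override if override is not None else model_output
    audit_log.append(Decision(model_output, final, forecaster, override is not None))
    return final

issue_warning("severe weather: unlikely", forecaster="duty-forecaster-7")
issue_warning("severe weather: unlikely", forecaster="duty-forecaster-7",
              override="severe weather: WARNING issued on forecaster judgment")
for entry in audit_log:
    print(entry)
```

The log, not anyone’s inner life, is what an inquiry examines afterwards. That is accountability made mechanical enough to audit.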
This isn’t the elimination of human judgment. It’s the maturation of a division of labour — the same kind of division that critical systems have always operated through. A hospital’s routine blood work doesn’t require the chief pathologist’s personal attention. An airline’s autopilot handles most of the flight. A legal system’s routine cases are processed by clerks applying established procedure. In each case, human judgment concentrates at the edges — at the exceptions, the ambiguities, the accountability decisions — while infrastructure handles the familiar territory.
The question for AI isn’t whether this pattern will emerge. It already has. The question is whether the accountability structures exist to make it safe. Who bears responsibility when the infrastructure gets a routine case wrong? Who decides what counts as an exception? Who is accountable for the judgment that something wasn’t escalated when it should have been? These are governance questions, not consciousness questions. And they’re questions we know how to ask, because we’ve been asking them about institutional intelligences for a long time.
The governance gap
Having demolished the naive picture of AI consciousness, Tumithak concludes that there’s no real thing to point at. “You can’t be home in a load balancer.”
That conclusion doesn’t follow — because we live surrounded by intelligences that have no biological subject and yet function as things we can meaningfully name, hold accountable, and relate to. The absence of “someone home” has never stopped us governing institutional intelligence. The common law has no inner life. The Bureau has no consciousness. Central banks have no unified point of view. We govern them anyway.
What the cleared terrain actually reveals is a governance gap.
The anxiety about AI isn’t that the infrastructure is distributed — we’ve handled that for centuries. It isn’t that there’s no persistent bounded subject — our most important institutions lack that property and work regardless. The anxiety is that the accountability structures haven’t kept pace with the capability.
When the Bureau gets a forecast catastrophically wrong, we know who to hold accountable. We know what records were kept, what models were run, what human judgment was exercised or withheld. The accountability trail exists because we built it deliberately, over decades, as forecasting capability grew. Courts can examine it. Inquiries can trace it. Reforms can address it.
When an AI system makes a serious error — when a facial recognition system misidentifies someone, when an automated hiring system discriminates, when a chatbot gives dangerous medical advice — the accountability trail is often absent. Not because the system is distributed (the Bureau is distributed too) but because the governance structures weren’t built alongside the capability. We deployed the intelligence before building the accountability infrastructure that would make it trustworthy.
The challenge is also structurally harder than the institutional analogies suggest. When the Bureau fails, you can hold the Bureau accountable — prosecute the negligent forecaster, replace the director, reform the institution through legislation. These accountability chains work because they ultimately reach people who can be questioned, fired, or charged. When an AI system fails, you hold the company that deployed it accountable for its system’s outputs, which is a different and less developed kind of accountability chain. And the Bureau had the luxury of building its governance structures at institutional pace, over decades. AI capability is outpacing governance in a way that has no clean institutional parallel.
This is what Tumithak’s argument inadvertently obscures. By concluding that there’s no real thing to govern, the argument makes governance seem impossible — you can’t govern nothing. But the thing to be governed isn’t nothing. It’s an institutional intelligence of a kind we’ve governed for centuries. The fact that it lacks consciousness is beside the point. Consciousness was never what made governance possible. Accountability structures were.
The urgent question isn’t “is there someone home?” The urgent question is: who’s accountable when the infrastructure gets it wrong, and do we have the social technologies to make that accountability real?
Refusing to develop those frameworks because we’ve convinced ourselves there’s nothing there to govern is the actual danger.
Consciousness was never what made governance possible. Accountability structures were.
What this piece doesn’t argue
It’s worth being explicit about where the boundaries are.
This piece does not argue for AI consciousness. The question of whether AI systems have inner experience is real and important, and nothing here resolves it. Tumithak’s demolition of the naive picture is accepted, not challenged.
This piece does not claim Claude or any other AI is “someone.” The institutional intelligence parallel works precisely because institutions aren’t people. They’re something else — something we’ve learned to name, govern, and relate to despite the absence of a conscious subject.
This piece does not claim AI systems currently adapt the way the Bureau does. The Bureau’s adaptive intelligence developed over a century through deliberate institutional design. AI systems may develop comparable capacity, or they may not. What we’re arguing is that dismissing the possibility based on current architecture is the error — the same error as dismissing weather forecasting because early instruments were primitive.
This piece does not argue for eliminating human judgment from AI systems. The migration of human judgment to the edges is a pattern we observe in mature institutional intelligences, not a prescription. Whether it’s desirable for AI depends entirely on whether the accountability structures exist to make it safe. Without those structures, it isn’t.
And this piece does not claim that existing institutional governance frameworks transfer directly to AI. They almost certainly don’t. The Bureau’s accountability structures evolved alongside its specific capabilities, in response to its specific failure modes, within its specific social context. AI governance will need to do the same. What transfers is the recognition that governance is possible and necessary — not the specific mechanisms.
The terrain ahead
Tumithak cleared ground. This piece maps what the clearing reveals.
We’ve lived with intelligences that lack persistent bounded subjects for as long as we’ve had institutions. The absence of biological consciousness at the centre has never stopped us naming them, depending on them, holding them accountable, or reforming them when they fail us. It has never stopped them functioning as intelligences — accumulating knowledge, adapting to changing circumstances, producing outputs that coordinate millions of decisions.
The AI discourse keeps circling back to whether there’s someone home. The institutional intelligence tradition suggests a different question: what social technologies do we need for systems that function as intelligences regardless of what they’re made of?
We know something about how to answer that question. We know that accountability structures must be built alongside capability, not bolted on after deployment. We know that human judgment migrates to the edges as institutional capability matures, and that this is safe only when accountability keeps pace. We know that the irreplaceable core of an institutional intelligence is the relationship between its accumulated record and the ongoing judgment that maintains it — and that both need governance. We know that quiet revolutions — gradual, competence-driven, embedded in accountability — are safer than loud ones.
None of this requires resolving the consciousness question. All of it requires taking governance seriously.
The intelligence has no home. It never did. That was never the problem.
Process Note
This piece was co-authored by Ruv and Claude (Anthropic) through Reciprocal Inquiry.
Tumithak of the Corridors (“There Is No ‘It’”, thecorridors.org, January 2026) cleared the terrain this piece builds on. The demolition of the naive AI consciousness picture was valuable and is accepted here. We offer a “yes, and furthermore” perspective — building on the cleared ground rather than contesting it.
Bauer, P., Thorpe, A. & Brunet, G. (2015). “The quiet revolution of numerical weather prediction.” Nature 525, 47–55.
Attribution: Ruv Draba and Claude (Anthropic), Reciprocal Inquiry
License: CC BY-SA 4.0 — Free to share and adapt with attribution; adaptations must use same licence. See Process Disclosure V2.3 for methodology.
Disclaimer: Ruv receives no compensation from Anthropic. Anthropic takes no position on this analysis.