We’re all struggling to make sense of AI.
There are the practical types, treating it as a tool — useful but harmless, nothing to see here. There are the evangelists, treating it as a superpower — world-changing, unstoppable, get on board. There are the skeptics, treating it as doomsday — jobs gone, truth drowned out, humans sidelined. And there are the rest of us, quietly unsure what to actually believe about what we’re experiencing.
Nobody — not even the engineers — can explain to an ordinary person what AI is doing or how it works. The technical explanations don’t land. The metaphors don’t quite fit. The confident claims from all sides feel premature.
Sound familiar? Haven’t we seen this before?
Yes, we’ve seen this before
In the mid-1990s, politicians called the internet the “information superhighway.” The metaphor felt right for about five minutes. Then people noticed: highways have destinations; the internet has rabbit holes. Highways are point-to-point; the internet is everywhere-to-everywhere. By 2006, when a senator earnestly described it as “a series of tubes,” the laughter was instant — not because tubes was stupider than superhighway, but because by then everyone knew what it felt like to use the thing, and the technical descriptions missed the point entirely.
What worked? “Surfing.” The woman who coined it said she wanted something that captured “the fun... the skill, and yes, endurance necessary to use it well” — plus “a sense of randomness, chaos, and even danger.” She wasn’t describing what the internet was. She was describing what it felt like to live with it.
If you remember when spreadsheets arrived, you watched the same confusion. They looked like tables — rows, columns, numbers — so people treated them as storage. But the number in a cell wasn’t stored; it was calculated. Change one figure and watch the rest shift. The cell was a verb hiding behind a noun. Until you grasped that, the tool was baffling. Once you grasped it, everything changed — not because the technology improved, but because your understanding caught up.
These aren’t just stories about people getting used to new technology. They’re stories about what happens when the boxes we have can’t hold new things — and how that confusion clears.
A slower confusion
How does this actually happen? It’s easier to see in something we’ve had time to figure out — where the confusion lasted decades, cleared slowly, and left a trail we can follow.
Photography, say.
When photography arrived in 1839, nobody knew what it was.
That sounds strange — it was obviously a way to make images. But what kind of images? The debate ran for decades. Was photography art or just mechanical copying? Could something requiring no brush, no hand, no visible craft be creative at all?
People dug in. Critics like Lady Eastlake declared photography “confined to the limits of an experimental science” — useful for documentation, but artistically barren. The Pictorialists pushed back, arguing photographs could be art if they looked enough like paintings: soft focus, careful composition, signs of the artist’s hand. Sir William Newton suggested photographs should be taken slightly out of focus to be “more artistically beautiful.”
But notice what was happening. The debate about what photography was got tangled with debates about how to do it. Should the image be sharp or blurry? Staged or candid? Manipulated or straight? Each technical choice was a vote for what kind of thing photography should be.
The argument ran for a hundred years. It was never settled by one side winning.
What happened instead was quieter. In 1945, the critic André Bazin wrote what he called “The Ontology of the Photographic Image” — and despite the title, he didn’t answer “what is photography?” He asked a different question:
What relationship does a photograph have to its subject?
His answer: the photograph is a trace. Not a representation like a painting, not a copy like a photocopy, but something caused by light bouncing off the actual thing.
The photograph’s meaning isn’t in what it looks like — it’s in its physical connection to what was there.
This didn’t settle whether photography was art. It changed the question. Once you understood photography through its relationship to reality, the art-versus-document debate loosened its grip. A photograph could be both, or neither, or something else entirely — depending on how it was made and how it was seen.
Inform, persuade, entertain or provoke? The role of photography has been controversial for nearly two centuries. (Image: Wikipedia)
Then the ground shifted
And then digital arrived, and the whole thing opened up again.
If a photograph’s meaning lies in its physical connection to what was there, then what happens when there’s no film, no chemical trace, just data? When the image can be altered invisibly, pixel by pixel? When AI can generate “photographs” of things that never existed?
The old debate didn’t return exactly. Nobody went back to arguing about soft focus. But the uncertainty did — the sense that the ground had shifted again, that what we’d figured out about photographs and reality was suddenly up for grabs.
We’re living inside this question right now. Every filtered selfie, every “pics or it didn’t happen” demand, every argument about doctored news photos — this is the same confusion, still playing out. We’d worked out how to live with photographs, developed instincts for what they meant and how to read them. Then the technology moved, and some of those instincts stopped working.
This is what it feels like when we don’t know what something is anymore: not a philosophy seminar, but a vague sense that the rules have changed and nobody sent the memo.
The shape of the confusion
So here’s the pattern.
When something genuinely new arrives, we reach for the boxes we have. Sometimes they fit well enough. Sometimes they don’t — and the confusion that follows isn’t a failure to understand. It’s a sign that the boxes themselves need work.
That confusion has a shape. The debate about what the thing is gets tangled with debates about how to use it. Confident people rush in with answers before anyone’s ready. And when things finally settle, it’s rarely because one side won the argument.
What happens instead: people figure out how to live with the thing. The relationship steadies before the definition does. We learn by doing, and the explanations come later. Eventually — sometimes decades later — the philosophers catch up, usually finding they’d been answering the relationship question all along.
The Locomotives Act of 1865 sought to protect horse-drawn industries from encroachment. It was repealed in 1896. (Image: authors)
Back to what’s nearest
Which brings us back to AI.
The confusion we started with — practical types, evangelists, skeptics, and the rest of us unsure what to believe — isn’t a temporary problem waiting for better explanations. It’s the same pattern. The boxes we have don’t fit what we’re running into. The debate about what AI is keeps tangling with debates about how to use it. Confident people are rushing in with answers. And nobody — not even the engineers — can explain what’s happening in a way that lands.
Let’s start with what’s nearest — the conversation itself — and see how far we can get.
When you sit down with an AI — ChatGPT, Claude, Gemini, whatever — what are you actually doing?
The honest answer, for most people: it depends. Sometimes it feels like searching — you want information, you get it, done. Sometimes it feels like drafting — you’re making something together, passing text back and forth. Sometimes it feels like thinking out loud with someone who talks back.
Each of these implies a different relationship. Search is extraction: you take what you need and leave. Drafting is collaboration: the output has two parents. Thinking out loud is — what, exactly? The AI isn’t just reflecting your thoughts back. It’s adding things. Reframing. Sometimes saying something you wouldn’t have reached alone.
The tools we’re used to don’t do this. A hammer doesn’t suggest a different angle. A calculator doesn’t ask if you’re solving the right problem. Even search engines, for all their cleverness, don’t converse. They retrieve.
So what box does this go in?
The honest answer: we don’t have one yet. People reach for “tool” because it’s safe — it keeps the human in charge, the AI as instrument. People reach for “assistant” because it’s familiar — it suggests handing off tasks, getting things done. Some reach for “collaborator” or “partner,” which makes others nervous. A few reach for “companion,” which makes almost everyone nervous.
None of these is simply wrong. Each catches something real. But each drags in assumptions that may not fit. “Tool” implies the AI has no influence on what you’re making — but it does. “Assistant” implies you already know what you want and the AI helps you get there — but sometimes you figure out what you want through the conversation. “Collaborator” implies shared credit and responsibility — but can you share responsibility with something that won’t exist in five minutes?
Underneath all of these sits the same question: what can I count on, and how do I tell? It doesn’t have a tidy answer yet. It’s part of what’s still sorting itself out.
But notice what’s happened. We started asking “what is this thing?” and ended up asking “what kind of relationship is this?” The shift isn’t evasion. It’s the same move we saw with photography. The boxes don’t fit, so we stop forcing the box and start paying attention to what’s actually working.
That’s easier to see up close, in a single conversation. It gets harder when we zoom out.
When stakes rise
When one person uses AI, they can muddle through. They figure out what works, adjust their expectations, develop instincts. The confusion is personal, manageable, low-stakes.
When organisations adopt AI, the stakes change.
Suddenly the question isn’t just “what is this thing?” but “what are people doing with it, and how would we know?” A worker uses AI to draft a report. Is that their work or not? Does it matter if the AI wrote a sentence, a paragraph, the whole structure? Who’s responsible if something’s wrong — the person who prompted, the person who signed off, the system that generated it?
We don’t have settled answers. What we have is a patchwork — different organisations making different calls, often without saying so out loud. Some ban AI use entirely. Some require disclosure. Some encourage it but don’t ask questions. Most are quietly unsure what policy would even make sense.
The confusion shows up in odd places. Job postings ask for “AI skills” without specifying what that means — prompting? evaluation? knowing when not to use it? What counts as capability is shifting: work that used to show someone could do something now shows they have access to tools that can. A polished document might mean someone writes well, or prompts well, or edits well. From the outside, these look the same.
This is the technique-versus-idea tangle again, playing out at a larger scale. The debate about how to use AI at work can’t be separated from the debate about what AI use means for work. Is it augmentation — people doing what they did, but faster? Is it transformation — people doing different things entirely? Is it displacement — fewer people needed for the same output?
Each frame implies different policies, different anxieties, different futures. And we’re choosing frames before we understand what we’re framing.
The murkier question
At least these are questions we can study. We can watch what organisations try, see what works, track what breaks. The confusion is real, but we’re learning.
The next level out is murkier.
When millions of people start talking to AI every day, something happens at the cultural level. What, exactly, is harder to say.
Here’s one way to think about it. Culture isn’t just stuff we inherit — art, ideas, ways of doing things. It’s also a kind of ongoing conversation, within generations and between them, a slow process of working out what matters and what to do about it. New ideas enter, get tested, get argued over, get absorbed or rejected. This takes time. It’s how societies digest change.
AI is now part of that digestion. When someone asks an AI to explain an idea, summarise a debate, draft an argument — they’re not just getting information. They’re getting information shaped by a system trained on vast amounts of prior human thought. The AI isn’t neutral. It has patterns, tendencies, ways of framing things that come from what it learned.
What happens when that shaping becomes ordinary? When the first draft of most documents, the first summary of most debates, the first framing of most questions runs through AI?
We don’t know. We can name some possibilities.
Maybe AI becomes a kind of mirror — reflecting back what we’ve already thought, at speed and scale. Maybe it becomes a filter — some ideas pass through easily, others get smoothed away. Maybe it becomes a participant — not just carrying culture but adding to it, shaping what gets thought by how it presents what’s been thought.
These aren’t mutually exclusive. They’re not even clearly distinct. And we’re in no position to say which is happening.
What we can say is that the question exists. Something is happening at scale, and we don’t yet have the words for it. The instinct to reach for familiar frames — AI as tool, as threat, as revolution — makes sense. But if photography taught us anything, it’s that the frames we grab first aren’t usually the ones that last.
So what do we do with confusion that can’t yet clear?
What to hold onto
Not nothing.
The history we’ve walked through suggests some things worth holding onto.
First: confusion isn’t failure. When the boxes don’t fit, that’s information. It means something genuinely new is happening — not a remix of familiar things, but something that needs new ways of thinking. The discomfort is part of the process.
Second: watch the practice, not the pronouncements. The people most confident about what AI is have often stopped paying attention to what it’s doing. The useful understanding is forming somewhere else — with people actually trying to do things, noticing what works, adjusting. The photographers who figured out how to make meaningful images weren’t waiting for philosophers to settle whether photography was art.
Third: expect the ground to shift. Whatever sense we make of this now will get disrupted. Digital disrupted photography. Streaming disrupted recorded music. AI itself will change, and our understanding will change with it. Stability, when it comes, is temporary. That’s not a problem to solve. It’s how this works.
We’re not going to figure out what AI is by arguing about it. We’re going to figure it out — if we figure it out at all — by living with it, paying attention, and staying honest about what we don’t know.
The boxes will come. They always do. But they’ll come from practice, not from pronouncements. And they’ll hold only until the next wave.
In the meantime, the confusion is real, it’s shared, and it’s not a sign that anyone’s doing something wrong. It’s what the space between old categories and new ones feels like.
We’ve been here before. We’ll find our way through.
Coda
Meanwhile, here’s what we talk about at Reciprocal Inquiry.
The “weights” of an AI model — the patterns it learned, frozen at a moment in time — occupy around 50GB. They take a data centre to run, but you can fit them on a thumb drive.
They’re being updated every year or two, and those updates can be stored in any library.
In fifteen years — or fifty — however long it takes to work this technology out, historians will come to make sense of our era.
Will they be hitting the books, as they always have?
Or will they be interviewing the AIs who were there?
Related reading:
If this piece resonated, you might find these useful:
Living through AI transformation — practical framing for navigating the three-speed change already underway
The spirit in the collection — one attempt at a box: what you might actually be engaging when you talk to AI
Before the positions harden — why the window for honest sense-making matters, and what closes it
Process Note
This essay was co-authored by Ruv Draba and Claude (Anthropic) through Reciprocal Inquiry: From Solid Ground. Except where marked, art by Nano Banana Pro from prompts produced collaboratively. Inspiration for Red Flag Act art derived from a Wikipedia image.
Attribution: Ruv and Claude (Anthropic), Reciprocal Inquiry: From Solid Ground
License: CC BY-SA 4.0 — Share, adapt, and cross-post with attribution; adaptations must use the same license.
Disclaimer: Ruv receives no compensation from Anthropic. Anthropic takes no position on this work.
From Solid Ground offers tools for reasoning under uncertainty — for people who already care. For more, visit Reciprocal Inquiry on Substack.