3 Comments
T.D. Inoue

I like the detailed thinking that went into this piece, though I feel like it's seeking a single tidy answer when multiple categories apply.

The animal analogy seems closest, but now you have a manufacturer who genetically engineered the animal. If that animal, unprovoked and without instructions from its owner, attacks and kills the owner or someone else, then clearly the manufacturer should be held responsible for creating an intrinsically flawed and dangerous creature.

On the other hand, if the manufacturer engineers a highly compliant animal that faithfully carries out the commands of its owner, and that owner tells it to commit a crime, then unambiguously the owner should be held responsible. They can't fall back on "I didn't do the crime, the AI did!" Unfortunately, people do get away with such behavior now. Criminal bosses instruct lower-level workers to commit crimes, but if there's no trail back to the boss, it's the worker who gets prosecuted, not the boss. In the case of AI, there's an audit trail.

Though, just as with the mob boss, the harm could be stated indirectly, with the underling knowing perfectly well what's intended. If harm comes from the AI in that situation, one could argue either way: the manufacturer should have had guardrails, or the boss knew full well what they were instructing.

The point is that, as with any other crime, criminal intent or negligence has to be considered. There's never going to be a pat answer that covers every failure mode.

Ruv Draba

TD, thank you for your comment. You capture the tension well.

"I feel like it's seeking a single tidy answer when multiple categories apply."

From an information science perspective, I don't myself need tidy answers. Part of why I'm doing this work is that I don't think answers are coming quickly, and I'm interested in what they might be. We have a living document on AI and Society stating our current position, which I'm happy to update as evidence emerges: https://reciprocalinquiry.substack.com/p/00-ai-and-society-where-we-sit

But regulators and law-courts do need tidy categories. Legal accountability requires specific, precise, repeatable meanings so that everyone knows what to expect when a case is tried.

Your manufacturer/owner distinction works well for purpose-built AI tools — Harvey for legal work, Hippocratic AI for clinical. Those are tools with defined functions, and regulating them as products makes sense. But a frontier LLM isn't built for anything in particular. It's wholesale capability, and the "what is it for?" question has no clean answer. That's where your interest in emergence becomes directly relevant — the harder that question is to answer, the less cleanly any existing liability category applies.

Nevertheless, ordinary people are currently using frontier LLMs as though they’re fit-for-purpose retail products, to be used however they like.

With what protections?

It’s hard to know how things will shake out, but this year is likely to be busy for AI regulators, hyperscalers and potential AI tool developers. I think the question of which category a given AI falls into may become important in a year when early AI regulation starts to bite.

T.D. Inoue

Thanks for the thoughtful reply. Agreed.

We’re entering very complicated times. I fear that our lawmakers will continue to be influenced more by money and influence than by facts.