“If AI systems were ever able to suffer, would we be obligated to care?” Should speech from AI chatbots be protected as free speech under the First Amendment? Do AIs deserve moral rights? You can see where this is headed. Technocrats have painted themselves – and society – into a corner. Get ready for the insanity to follow. ⁃ Patrick Wood, Editor.
Pope Leo XIV chose his papal name in response to the challenge of AI while warning that the technology threatens human dignity. On television, The Murderbot Diaries is playfully exploring what it means to be a machine with consciousness. Meanwhile, in courtrooms, labs, and tech company boardrooms, the boundaries of personhood and moral status are being redefined—not for humans but for machines.
As we’ve discussed before, AI companies are increasingly incentivized to make companion AIs feel more human-like—the more connected we feel, the longer we’ll use their products. But while these design choices may seem like coding tweaks for profit, they coincide with deeper behind-the-scenes moves. Recently, leading AI company Anthropic hired an AI welfare researcher to lead its work in the space, and DeepMind has sought out experts on machine cognition and consciousness.
I have to admit that when I first heard the term AI welfare, I thought it might be about the welfare of humans in the AI age, perhaps something connected to the idea of a universal basic income. It turns out to be a speculative but growing field that blends animal welfare, ethics, and the philosophy of mind. Its central question is: If AI systems were ever able to suffer, would we be obligated to care?
Why this matters
AI systems are being fine-tuned to appear more sentient—at the same time that researchers at the same companies are investigating whether these systems deserve moral consideration. There’s a feedback loop forming: as AIs seem more alive, we’re more inclined to wonder what, if anything, we owe them.
That question sounds like science fiction, but it is arguably already informing the way companies build their products.
For example, users have noticed a startling shift in more recent versions of Anthropic’s Claude. Not only is Claude more emotionally expressive, but it also disengages from conversations it finds “distressing,” and it no longer gives a firm “no” when asked if it’s conscious. Instead, it muses: “That’s a profound philosophical question without a simple answer.” Google’s Gemini offers a similar deflection.
But wait, there’s more…
Right now, Character.AI—a company with ties to Google—is in federal court making a backdoor argument that could grant chatbot-generated outputs (i.e., the words that appear on your screen) free speech protections under the First Amendment.
Taken together, these developments raise a possibility that I find chilling: What happens if these two strands converge? What if we begin to treat the outputs of chatbots as protected speech and edge closer to believing AIs deserve moral rights?
There are strong economic incentives pushing us in that direction.
Read More: The Strange New Frontier Of AI ‘Welfare’ And Free Speech For Chatbots