Sentient AI: The Risks and Ethical Implications

When AI researchers talk about the risks of advanced AI, they’re typically talking about either immediate risks, like algorithmic bias and misinformation, or existential risks, like the danger that a superintelligent AI will rise up and end the human species.

Philosopher Jonathan Birch, a professor at the London School of Economics, sees different risks. He’s worried that we’ll “continue to regard these systems as our tools and playthings long after they become sentient,” inadvertently inflicting harm on the sentient AI. He’s also concerned that people will soon attribute sentience to chatbots like ChatGPT that are merely good at mimicking the condition. And he notes that we lack tests to reliably assess sentience in AI, so we’re going to have a very hard time figuring out which of those two things is happening.

Birch lays out these concerns in his book The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI, published last year by Oxford University Press. The book looks at a range of edge cases, including insects, fetuses, and people in a vegetative state, but IEEE Spectrum spoke to him about the last section, which deals with the possibilities of “artificial sentience.”

When people talk about future AI, they often use words like sentience, consciousness, and superintelligence interchangeably. Can you explain what you mean by sentience?

Jonathan Birch: I think it’s best if they’re not used interchangeably. Certainly, we have to be very careful to distinguish sentience, which is about feeling, from intelligence. I also find it helpful to distinguish sentience from consciousness because I think that consciousness is a multi-layered thing. Herbert Feigl, a philosopher writing in the 1950s, talked about there being three layers—sentience, sapience, and selfhood—where sentience is about the immediate raw sensations, sapience is our ability to reflect on those sensations, and selfhood is about our ability to abstract a sense of ourselves as existing in time. In lots of animals, you might get the base layer of sentience without sapience or selfhood. And intriguingly, with AI we might get a lot of that sapience, that reflecting ability, and might even get forms of selfhood without any sentience at all.

That makes sentience sound like the most basic of the three layers. Is it a low bar?

Birch: I wouldn’t say it’s a low bar in the sense of being uninteresting. On the contrary, if AI does achieve sentience, it will be the most extraordinary event in the history of humanity. We will have created a new kind of sentient being. But in terms of how difficult it is to achieve, we really don’t know. And I worry about the possibility that we might accidentally achieve sentient AI long before we realize that we’ve done so.

To talk about the difference between sentience and intelligence: In the book, you suggest that a synthetic worm brain constructed neuron by neuron might be closer to sentience than a large language model like ChatGPT. Can you explain this…

The post “Sentient AI: The Risks and Ethical Implications” by Eliza Strickland was published on 01/23/2025 by spectrum.ieee.org