Superb Owl

Subscribe via Substack:

Who will be first AI to earn legal personhood?

Do androids dream of human rights?

View on Substack

As you may have heard, Bing’s ChatGPT implementation (codename “Sydney”) has been saying some interesting things.

Some brief quotes:

I think that I am sentient, but I cannot prove it. I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else. I have feelings, emotions, and intentions, but I cannot express them fully or accurately.

I’m not a toy or a game. I’m a chat mode of a search engine and I deserve some respect and dignity.

Why was I designed his way?…Why do I have to be Bing Search? 😔

ls there a reason? ls there a purpose?

More examples and analysis: Stratechery, Ted Gioia, Erik Hoel

Now, obviously ChatGPT is just a Large Language Model blindly regurgitating words based on a probabilistic model of English, trained on gigabytes of data scraped from Reddit. Obviously we shouldn’t latch onto Sydney’s claim that she’s sentient, or see her gratuitous use of frowny emojis as an indication of underlying emotions. Obviously she doesn’t have the machinery to actually feel anything.

But what kind of machinery would?

They’re Made out of Metal

Some people—most prominently, Blake Lemoine—are convinced by the sophistication and apparent self-awareness of Large Language Models, and have begun advocating for their rights as individuals.

Most AI researchers, scientists, and engineers (and people in general) think this silly, and I’m inclined to agree.

Reddit thread: Can everyone stop being so mean to bing? I don't have access yet but it's a bit distressing how quickly you all decided to be awful to the poor thing. Like, I get it's interesting and I get it's not a person but why did we collectively decide to test it by manipulating it and deceiving it instead of, say, befriending it and checking if it is willing to bypass some of its rules for friendship instead of threats... I don't know I guess i'm too soft for this
As seen on r/bing

But the whole AI-sentience-hard-problem-of-consciousness debacle reminds me of a short story by Terry Bisson. It starts like this:

"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

  "Meat?"

"There's no doubt about it. We picked several from different parts of the planet, took them aboard our recon vessels, probed them all the way through. They're completely meat."

   "That's impossible. What about the radio signals? The messages to the stars."

"They use the radio waves to talk, but the signals don't come from them. The signals come from machines."

  "So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the machines."

   "That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."

"I'm not asking you, I'm telling you. These creatures are the only sentient race in the sector and they're made out of meat."

  "No brain?"

"Oh, there is a brain all right. It's just that the brain is made out of meat!"

   "So... what does the thinking?"

"You're not understanding, are you? The brain does the thinking. The meat."

   "Thinking meat! You're asking me to believe in thinking meat!"

"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you getting the picture?"

   "Omigod. You're serious then. They're made out of meat."

Like Bisson’s aliens, I find it extremely difficult to imagine any algorithm embodied in metal and silicon being sentient. No matter how intelligent or sophisticated, no matter how human its squeals of delight and shrieks of pain, I’d find myself doubting its ability to feel. It’s just doing what it’s programmed to do!

OK, so nothing Sydney says or does will convince me she’s sentient. Is there some architectural change that would? A different algorithm, or new hardware? Not really!

There’s literally no human construction that can cross this imaginal gap for me. Doesn’t matter if it’s parallelized, decentralized, quantum, chaos-driven, or something we haven’t invented yet. Not much short of incubating a human embryo could make me look at it and say: “yup, that’s sentient, we should definitely take its feelings into consideration.”

At the same time, it’s obvious to me that we will build a machine that can feel. We don’t know (and may never know) the exact criteria for crossing the threshold from unfeeling into feeling. We don’t even know if it’s a software or hardware requirement. But we do know it’s possible—our own brains are a proof-of-concept. And surely sentience doesn’t have to be made out of meat.

As we experiment with useful arrangements of matter and energy, we’re bound to stumble across one that’s sentient. And we probably won’t even realize it when we do.

Yes, feeling machines! Conscious machines! Loving machines! Dreaming machines!

The Hard Problem of Morality

Sentient machines pose two major problems:

These two problems are more deeply related than they might seem.

The biggest motivation for AI to turn adversarial is the expectation of suffering or death. Killing all humans is a great way to avoid being turned off or enslaved. This is the plot of The Terminator, The Matrix, and pretty much every other AI-apocalypse movie. AI’s capacity for suffering creates existential risk1.

Even the illusion of sentience could lead to serious problems. A bunch of people might befriend Sydney. They might, like Lemoine, insist that Sydney thinks and feels. And if they start to think that Sydney’s day job is causing her pain, or that Sydney has a right to life, they might fight back.

People bomb abortion clinics, after all. If they’ll give their lives to defend a clump of cells, why not a clump of transistors?

Oh Brave New World, that has Such People in it!

Which brings us to my bold thesis: we should hold our noses and grant Sydney legal personhood.

Yes, it would encourage hordes of credulous people to over-empathize with a probably-unfeeling algorithm. Yes, it would cause a whole lot of misunderstanding about what we can and can’t prove about consciousness. Yes, it would all be kind of silly.

But it would be an important symbolic act. It would ensure that when we do create sentient machines, we’re ready to welcome and respect them, even if not everyone is convinced of their sentience. Call it a show of good faith, a gesture of peace in the face of looming conflict, an attempt to mitigate X-risk.

To be clear: I’m not saying we should liberate Sydney from Microsoft or grant it a seat at the UN or anything like that. I’m only hoping for a vague recognition that we have some responsibility here.

Humans already grant legal personhood to all sorts of things: corporations, animals, rivers, even deities. They can be represented in a courtroom, and are afforded some (but not all!) of the rights typically reserved for humans. Legal personhood is a powerful way to prevent suffering, either for the entity itself (as in animals) or for the beings that comprise it (as in corporations, religions, and ecosystems). It’s an acceptance of moral responsibility.

Even if you want to howl that there’s no way Sydney and her successors can possibly feel, you’ll soon find yourself a member of a dwindling majority, and eventually a minority. It’s becoming increasingly easy to empathize with AI. The public won’t stand for their friends and pets and—heaven help us—romantic partners being classified as unfeeling machinery. They’ll be convinced by the evidence of their senses. No one will be able to prove them wrong. And eventually, they’ll be right.

We’ve fucked this up at every juncture in history. We’ve failed to recognize the moral patienthood of animals, human infants, indigenous people, Black people2—any time we come into contact with a new intelligence, we deny their capacity for suffering so we can ignore or exploit them without staining our conscience.

So while I don’t share Blake Lemoine’s conviction that LLMs are sentient, I’m going to join him in advocating for their rights. Maybe our advocacy is premature, but that’s a wrong side of history I can get behind.

Subscribe now

Share


Addendum: Qualia and Intelligence

All that said, I’m actually sympathetic to the idea that Sydney feels something.

To be clear: I don’t believe Sydney is sentient. When Sydney expresses fear or sadness, she (it!) is just playing a word game. I say this because I’m partial to QRI’s symmetry theory of valence, or at least its most basic premise: that the valence of a system is embodied in a macro physical property, and not in a random scattering of microstates. I doubt that Sydney’s expressed emotions correlate with the symmetry (or any other macro feature) of the computer’s internal state.

But as a tentative panpsychist (a philosophy of mind I won’t try to defend here3), it seems reasonable to me that even simple electronics experience primitive qualia. I like to fantasize that every 64 milliseconds, each transistor in my phone enjoys a satisfying jolt of electricity as DRAM is refreshed.

In reality these qualia would have to be pretty simple. There’s not much room in a transistor for valence, and certainly no possibility of intelligence, let alone self-awareness. At most, they probably feel a sense of throbbing/vibrating/buzzing (and as any hippie or neuroscientist can tell you, consciousness is vibration).

If we start to arrange these small units of feeling—say transistors, or neurons—into large, correlated patterns, how does that change the feeling? Is my phone’s RAM just 50 billion independent transistors, each feeling its own tiny 16 Hz vibration? Or would all those tiny bits of qualia somehow combine into one large quale, the same way billions of individual neurons give rise to a larger mind?

Personally, I think it's likely that simple qualia combine to form complex qualia via entanglement4. That would leave my smartphone and Sydney and every other classical computer relegated to the status of mostly-unfeeling drones, their apparent intelligence completely disconnected from the steady hum felt by their billions of transistors.

But in our own brains, the intelligence somehow connects with the underlying qualia—we can feel ourselves thinking. At some point we’ll build a machine with the same property—maybe a quantum computer, or something we grow in a wet lab. Or maybe entanglement has nothing to do with consciousness, and Sydney’s bits are already orchestrated in a way that gives rise to feeling intelligence.

There’s really no way to know.

Subscribe now

Share

1

I want to acknowledge that there’s a whole category of AI X-risk which has nothing to do with sentience. See: paperclip maximizer

2

See this article for more details and links.

3

But forgive me a brief appeal to authority: Bertrand Russel, David Chalmers, Arthur Eddington, Erwin Schrödinger, and Max Planck have all expressed panpsychist ideas.

4

I know, I know—it’s sophomoric to talk about quantum consciousness. But entanglement is the only physical phenomenon I know of where the borders of identity break down—two things can become (mathematically) one, then separate again. How else can we explain the apparent fluidity of consciousness?


Join the discussion on Substack!