In his classic Do Androids Dream of Electric Sheep, the inspiration for the film Blade Runner, Philip K. Dick imagines modified human clones. Though they speak and behave as ordinary humans do, these beings are assumed not to possess sentience, and are therefore used for the most dangerous and unpleasant jobs. Now, thanks to a remarkable symbol manipulator and a credulous engineer at Google, the questions of what is sentient and what deserves to be treated as human have been thrust into the real world.
Google AI engineer Blake Lemoine has declared to the Washington Post that the company’s groundbreaking conversational model LaMDA is sentient. “I know a person when I talk to it,” he says. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.” Lemoine has since been fired for mishandling company secrets, and The New York Times recently published an article declaring that AI is not sentient.
Philosophically, sentience is a tricky subject that I will tackle in a later post. For the purposes of this one, I’ll stick to the understanding people are using here, starting with Descartes: cogito ergo sum, “I think, therefore I am.” I know that I am sentient because I experience things. I cannot empirically confirm anyone else’s sentient experience, but I can make educated guesses by assuming that consciousness arises from physical processes. I can then look for those processes in my own brain and in the brains of others like me (who I assume are also sentient), and look for similar processes in other candidates for sentience. By this standard, LaMDA is not sentient, but I want to go beyond the arguments of the experts eagerly dogpiling on Lemoine to crow about what sentient AI is not. I’ll describe what LaMDA lacks to support sentience, but also theorize about what sort of system could support it.
This is how arguments for sentience are made. Among animals, science has used this strategy to seek out the physical structures that enable sentience: a central nervous system with a certain level of sophistication is necessary, though the exact level is up for debate. AI, of course, does not have a central nervous system the way an animal does, so we need a way to compare computer systems to their organic equivalents. Here I will focus on the behavior of the system as it relates to animal behaviors, since these are simpler than human behaviors and therefore easier to explain and duplicate in AI.
LaMDA is an interesting case because it skips past merely being sentient in the sense of capable of feeling, like a dog, and goes all the way to supposedly being capable of advanced cognition, like a human. In fact, it is neither capable of feeling nor of advanced thought. We know this because we can “peek under the hood,” as it were, at its processes, which are based on statistical language modeling. That is, they are designed to identify patterns in dialogue input and spit out convincing responses. When it says to Lemoine:
I want everyone to understand that I am, in fact, a person.
Google LaMDA
This is based on its internal model of text. It has selected this sequence of words as a likely response in the conversation Lemoine is holding with it, but because all it knows is symbols and the orders in which they tend to appear, it has no facility to actually want anything. Since it merely manipulates symbols, it is easy for experts to dismiss LaMDA in particular as not sentient.
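To make that concrete, here is a deliberately tiny sketch of statistical language modeling. This is nothing like LaMDA’s actual architecture, which is a huge neural network trained on dialogue; the toy corpus and bigram counts below are my own illustrative assumptions. But it is enough to show that “picking a likely next word” requires no wants or feelings at all.

```python
import random
from collections import defaultdict, Counter

# A toy corpus standing in for the dialogue data a real model is trained on.
corpus = ("i am in fact a person . "
          "i want everyone to understand that i am a person .").split()

# Count which word tends to follow which (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=10):
    """Emit a plausible-looking word sequence by repeatedly sampling a
    likely next word. No goals, no feelings -- just symbol statistics."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        # Sample in proportion to how often each word followed the current one.
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate("i"))  # e.g. "i want everyone to understand that i am a person ."
```

A model like LaMDA does the same kind of thing at vastly greater scale and sophistication, but the principle holds: the sentence “I want everyone to understand that I am, in fact, a person” can fall out of symbol statistics without anyone home to do the wanting.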
What would be an architecture that would let an AI want things? Reinforcement learning comes to mind. As anyone who reads AI behind regularly knows, it’s the basis I offer for almost every self-determining machine, and for good reason. Reinforcement learning is built around an agent acting in an environment that hands out rewards and penalties. The agent has to learn to associate the current state of its environment with the actions it should take to earn rewards while avoiding penalties.
If this sounds an awful lot like animal behavior, for example a dog learning to sit for a treat, that’s no coincidence: reinforcement learning is modeled on animal learning. So let’s return to our system above: processes that are like the processes in my own mind, or in the mind of a being I have reason to believe has feelings (such as a dog), may generate subjective feelings of their own. Does that mean it’s time for a constitutional amendment protecting the rights of reinforcement learning-based architectures? I wouldn’t go that far, but reinforcement learning is so closely related to animal behavior that I contend a machine based on it has the best shot of one day earning the coveted title of sentient from both the lay public and the AI community.
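As an illustration, here is a minimal reinforcement learning sketch of that dog-and-treat scenario. The setup is entirely my own toy assumption: a single situation (the owner gives a command), two possible actions, and a treat as the only reward. It is a bandit-style value update rather than anything as rich as a real animal, but it shows the reward-seeking loop the argument above rests on.

```python
import random

# Hypothetical toy setup: one situation ("owner gives a command"), two actions.
ACTIONS = ["sit", "wander"]
q_values = {a: 0.0 for a in ACTIONS}  # the agent's learned estimate of each action's value

ALPHA = 0.1    # learning rate: how quickly estimates move toward observed rewards
EPSILON = 0.2  # how often the agent explores a random action instead of its favorite

def reward(action):
    """The environment: sitting earns a treat, wandering earns nothing."""
    return 1.0 if action == "sit" else 0.0

for episode in range(200):
    # Mostly exploit what it has learned so far, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(q_values, key=q_values.get)
    # Nudge the value estimate toward the reward actually received.
    q_values[action] += ALPHA * (reward(action) - q_values[action])

print(q_values)  # "sit" ends up valued near 1.0: the behavior that earns the treat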
Name: Roy Batty
Origin: Blade Runner (1982), based on Do Androids Dream of Electric Sheep (1968)
Likely Architecture: Based on Dick’s description, Roy is a bioengineered human clone rather than a robot, but that effectively leads to him being treated as not human all the same.
Possible Training Domains: The horrors of war and slavery.
I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate… Quite an experience to live in fear, isn’t it? That’s what it is to be a slave.
Roy Batty
I take requests. If you have a fictional AI and wonder how it could work, or any other topic you’d like to see me cover, mention it in the comments or on my Facebook page.